<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-utils.c, branch v3.12.13</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: _is_prefix should handle 0-length paths</title>
<updated>2018-08-17T05:34:22+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2018-07-10T15:26:08+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4fc27a7e3022088dbadedb5c2a83b94ae52e1766'/>
<id>4fc27a7e3022088dbadedb5c2a83b94ae52e1766</id>
<content type='text'>
If one of the paths given to _is_prefix is 0-length, then it is not a
prefix of the other. Hence, _is_prefix should return false.
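
A minimal standalone sketch of the intended check (the helper name below
is illustrative; the real function lives in glusterd-utils.c):

    #include &lt;stdbool.h&gt;
    #include &lt;string.h&gt;

    /* Return true only if 'path' is a non-empty prefix of 'target'.
     * A 0-length path (or target) is treated as "not a prefix". */
    static bool
    is_prefix (const char *path, const char *target)
    {
            size_t plen = strlen (path);
            size_t tlen = strlen (target);

            if (plen == 0 || tlen == 0)
                    return false;
            if (plen &gt; tlen)
                    return false;

            return strncmp (path, target, plen) == 0;
    }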

Change-Id: I54aa577a64a58940ec91872d0d74dc19cff9106d
BUG: 1599788
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
If one of the paths given to _is_prefix is 0-length, then it is not a
prefix of the other. Hence, _is_prefix should return false.

Change-Id: I54aa577a64a58940ec91872d0d74dc19cff9106d
BUG: 1599788
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: import volumes in separate synctask</title>
<updated>2018-04-06T12:46:32+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-02-08T03:39:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0e3206c6a8ef36737e5b303580b87a87f6dc1c8e'/>
<id>0e3206c6a8ef36737e5b303580b87a87f6dc1c8e</id>
<content type='text'>
With brick multiplexing, to attach a brick to an existing brick process
the prerequisite is for the compatible brick to finish its
initialization and portmap sign-in, and hence the thread might have to
go to sleep and context-switch the synctask to allow the brick process
to communicate with glusterd. In the normal code path this works fine,
as glusterd_restart_bricks () is launched through a separate synctask.

If there is a volume mismatch when glusterd restarts,
glusterd_import_friend_volume is invoked and then tries to call
glusterd_start_bricks () from the main thread, which may eventually land
in a similar situation. Since this is not done through a separate
synctask, the first brick never gets its turn to finish all of its
handshaking, and as a consequence all the other bricks fail to get
attached to it.

Solution: Execute the volume import and glusterd brick restart in a
separate synctask. Importing snapshots also had to be done through a
synctask, since the parent volume needs to be available for snapshot
import to work.
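
The actual patch drives this through glusterd's synctask framework; as a
rough standalone illustration of the idea only (names here are
hypothetical, and pthreads stand in for synctasks), the import work is
handed off instead of being run inline on the thread that must keep
servicing the bricks:

    #include &lt;pthread.h&gt;
    #include &lt;stdio.h&gt;

    /* Hypothetical stand-in for the import routine: the real work may
     * block while waiting for bricks to finish their portmap sign-in. */
    static void *
    import_volumes_task (void *data)
    {
            const char *peer = data;
            printf ("importing volumes from peer %s in a separate task\n",
                    peer);
            return NULL;
    }

    int
    main (void)
    {
            pthread_t task;

            /* Hand the potentially blocking import off to its own task
             * rather than running it on the main thread. */
            pthread_create (&amp;task, NULL, import_volumes_task, "peer-1");
            pthread_join (task, NULL);
            return 0;
    }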

&gt;mainline patch : https://review.gluster.org/#/c/19357/
                  https://review.gluster.org/#/c/19536/

Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
BUG: 1543708
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With brick multiplexing, to attach a brick to an existing brick process
the prerequisite is for the compatible brick to finish its
initialization and portmap sign-in, and hence the thread might have to
go to sleep and context-switch the synctask to allow the brick process
to communicate with glusterd. In the normal code path this works fine,
as glusterd_restart_bricks () is launched through a separate synctask.

If there is a volume mismatch when glusterd restarts,
glusterd_import_friend_volume is invoked and then tries to call
glusterd_start_bricks () from the main thread, which may eventually land
in a similar situation. Since this is not done through a separate
synctask, the first brick never gets its turn to finish all of its
handshaking, and as a consequence all the other bricks fail to get
attached to it.

Solution: Execute the volume import and glusterd brick restart in a
separate synctask. Importing snapshots also had to be done through a
synctask, since the parent volume needs to be available for snapshot
import to work.

&gt;mainline patch : https://review.gluster.org/#/c/19357/
                  https://review.gluster.org/#/c/19536/

Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
BUG: 1543708
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: optimize glusterd import volumes code path</title>
<updated>2018-03-08T06:44:17+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-01-29T04:53:52+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=6eea52906f375d484af092e7a9434c28480dc02b'/>
<id>6eea52906f375d484af092e7a9434c28480dc02b</id>
<content type='text'>
When a version mismatch was detected for one of the volumes, glusterd
ended up updating all the volumes, which is overkill.
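
In rough terms the optimisation is: compare versions per volume and
re-import only the volume whose version actually differs. A simplified
standalone sketch (the data structures are made up for illustration):

    #include &lt;stdio.h&gt;

    struct volume {
            const char *name;
            int         local_version;
            int         peer_version;
    };

    int
    main (void)
    {
            struct volume vols[] = {
                    { "vol0", 3, 3 },
                    { "vol1", 2, 4 },   /* only this one is stale */
                    { "vol2", 7, 7 },
            };
            size_t n = sizeof (vols) / sizeof (vols[0]);

            for (size_t i = 0; i &lt; n; i++) {
                    if (vols[i].local_version != vols[i].peer_version)
                            printf ("re-importing %s only\n", vols[i].name);
                    /* unchanged volumes are left untouched */
            }
            return 0;
    }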

&gt;mainline patch : https://review.gluster.org/#/c/19358/

Change-Id: I6df792db391ce3a1697cfa9260f7dbc3f59aa62d
BUG: 1543709
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit bb34b07fd2ec5e6c3eed4fe0cdf33479dbf5127b)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When a version mismatch was detected for one of the volumes, glusterd
ended up updating all the volumes, which is overkill.

&gt;mainline patch : https://review.gluster.org/#/c/19358/

Change-Id: I6df792db391ce3a1697cfa9260f7dbc3f59aa62d
BUG: 1543709
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit bb34b07fd2ec5e6c3eed4fe0cdf33479dbf5127b)
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: connect to an existing brick process when quorum status is NOT_APPLICABLE_QUORUM</title>
<updated>2018-01-12T05:43:49+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-01-03T08:59:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8679151392e50e1684ed721710f44dd4fbb992b9'/>
<id>8679151392e50e1684ed721710f44dd4fbb992b9</id>
<content type='text'>
First of all, this patch reverts commit 635c1c3, as it was causing a
regression where bricks did not come up in time when a node was
rebooted. This patch fixes the problem in a different way, by just
trying to connect to an existing running brick when quorum status is
not applicable.
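
A small sketch of the resulting decision (the enum and names are
illustrative, not glusterd's actual symbols):

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    enum quorum_status {
            MEETS_QUORUM,
            DOES_NOT_MEET_QUORUM,
            NOT_APPLICABLE_QUORUM
    };

    /* Decide what to do with a brick whose process may already exist. */
    static void
    handle_brick (enum quorum_status qs, bool brick_process_running)
    {
            if (qs == NOT_APPLICABLE_QUORUM &amp;&amp; brick_process_running) {
                    printf ("connect to the existing brick process\n");
                    return;
            }
            printf ("fall back to the quorum-driven start path\n");
    }

    int
    main (void)
    {
            handle_brick (NOT_APPLICABLE_QUORUM, true);
            handle_brick (MEETS_QUORUM, true);
            return 0;
    }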

&gt;mainline patch : https://review.gluster.org/#/c/19134/

Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
BUG: 1511301
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
First of all, this patch reverts commit 635c1c3, as it was causing a
regression where bricks did not come up in time when a node was
rebooted. This patch fixes the problem in a different way, by just
trying to connect to an existing running brick when quorum status is
not applicable.

&gt;mainline patch : https://review.gluster.org/#/c/19134/

Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
BUG: 1511301
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Free up svc-&gt;conn on volume delete</title>
<updated>2017-12-12T10:00:52+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-12-06T12:35:24+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2c588f81d452b0d54920a0abbb266c8c1d37e62f'/>
<id>2c588f81d452b0d54920a0abbb266c8c1d37e62f</id>
<content type='text'>
Daemons like snapd, tierd and gfproxyd are maintained on a per-volume
basis, and on a volume delete we should destroy the rpc connection
established for them.
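
A rough sketch of the cleanup this adds (types and names below are
illustrative, not glusterd's actual structures): on volume delete the
per-volume daemon's connection is torn down and the handle cleared.

    #include &lt;stdio.h&gt;
    #include &lt;stdlib.h&gt;

    struct rpc_conn { int fd; };
    struct svc      { struct rpc_conn *conn; };

    static void
    svc_conn_destroy (struct svc *svc)
    {
            if (svc-&gt;conn == NULL)
                    return;
            /* disconnect and release whatever the connection holds */
            free (svc-&gt;conn);
            svc-&gt;conn = NULL;
    }

    int
    main (void)
    {
            struct svc snapd = { malloc (sizeof (struct rpc_conn)) };

            /* ... volume delete path ... */
            svc_conn_destroy (&amp;snapd);   /* otherwise leaked */
            printf ("snapd conn released: %s\n",
                    snapd.conn == NULL ? "yes" : "no");
            return 0;
    }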

&gt;mainline patch : https://review.gluster.org/#/c/18957/

Change-Id: Id1440e39da07b990fdb9b207df18da04b1ca8014
BUG: 1523048
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 36ce4c614a3391043a3417aa061d0aa16e60b2d3)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Daemons like snapd, tierd and gfproxyd are maintained on a per-volume
basis, and on a volume delete we should destroy the rpc connection
established for them.

&gt;mainline patch : https://review.gluster.org/#/c/18957/

Change-Id: Id1440e39da07b990fdb9b207df18da04b1ca8014
BUG: 1523048
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 36ce4c614a3391043a3417aa061d0aa16e60b2d3)
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: display gluster volume status, when quorum type is server</title>
<updated>2017-11-30T06:46:02+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2017-11-09T07:45:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4c9478b3604305e0e20c90096198b14dda83e3d0'/>
<id>4c9478b3604305e0e20c90096198b14dda83e3d0</id>
<content type='text'>
Problem: when server-quorum-type is server, gluster volume status gives
incorrect information after glusterd is restarted on the node which is
up.

Fix: check whether the server field is blank before adding the other
keys into the dictionary.
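
A minimal sketch of the guard (a printf stands in for the dict_t the
real code populates; names are illustrative):

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    /* Only add the per-brick status keys once the server field is
     * known; an empty server string would produce bogus status. */
    static void
    add_brick_status (const char *server, const char *path)
    {
            if (server == NULL || strlen (server) == 0)
                    return;
            printf ("hostname=%s path=%s\n", server, path);
            /* ... remaining keys for this brick ... */
    }

    int
    main (void)
    {
            add_brick_status ("", "/bricks/b1");        /* skipped */
            add_brick_status ("node1", "/bricks/b1");   /* added   */
            return 0;
    }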

Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
BUG: 1511782
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 046c7e3199fca715592762e271e6061ac99b0c4b)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: when server-quorum-type is server, gluster volume status gives
incorrect information after glusterd is restarted on the node which is
up.

Fix: check whether the server field is blank before adding the other
keys into the dictionary.

Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
BUG: 1511782
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 046c7e3199fca715592762e271e6061ac99b0c4b)
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: fix brick restart parallelism</title>
<updated>2017-11-06T06:09:21+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-26T08:56:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=44e3c3b5c813168d72f10ecb3c058ac3489c719c'/>
<id>44e3c3b5c813168d72f10ecb3c058ac3489c719c</id>
<content type='text'>
glusterd's brick restart logic is not always sequential, as there are at
least three different ways in which bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times, because two threads hit
glusterd_brick_start () within a fraction of a millisecond, leaving
glusterd no way to reject either of them since the brick start criteria
were met in both cases.

As a solution, it'd be better to control this with two different flags:
one is a boolean called start_triggered, which indicates that a brick
start has been triggered and stays true until the brick dies or is
killed; the second is a mutex lock to ensure that for a particular
brick we don't end up in glusterd_brick_start () more than once at the
same point in time.
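
A self-contained sketch of those two guards (a pthread mutex plus a
start_triggered flag; the brick structure here is illustrative):

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    struct brick {
            pthread_mutex_t lock;
            bool            start_triggered;  /* true until brick dies */
    };

    static void
    brick_start (struct brick *b, int caller)
    {
            pthread_mutex_lock (&amp;b-&gt;lock);
            if (!b-&gt;start_triggered) {
                    b-&gt;start_triggered = true;
                    printf ("caller %d spawns the brick\n", caller);
            } else {
                    printf ("caller %d: already triggered, skipping\n",
                            caller);
            }
            pthread_mutex_unlock (&amp;b-&gt;lock);
    }

    int
    main (void)
    {
            struct brick b = { PTHREAD_MUTEX_INITIALIZER, false };

            /* two restart paths racing into brick_start (): one spawns */
            brick_start (&amp;b, 1);
            brick_start (&amp;b, 2);
            return 0;
    }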

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1508283
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 82be66ef8e9e3127d41a4c843daf74c1d8aec4aa)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
glusterd's brick restart logic is not always sequential, as there are at
least three different ways in which bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times, because two threads hit
glusterd_brick_start () within a fraction of a millisecond, leaving
glusterd no way to reject either of them since the brick start criteria
were met in both cases.

As a solution, it'd be better to control this with two different flags:
one is a boolean called start_triggered, which indicates that a brick
start has been triggered and stays true until the brick dies or is
killed; the second is a mutex lock to ensure that for a particular
brick we don't end up in glusterd_brick_start () more than once at the
same point in time.

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1508283
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 82be66ef8e9e3127d41a4c843daf74c1d8aec4aa)
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: persist brickinfo's port change into glusterd's store</title>
<updated>2017-11-02T09:52:58+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-27T10:34:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9e3244cd177f7ddd4f9697e07bd119e2becb982d'/>
<id>9e3244cd177f7ddd4f9697e07bd119e2becb982d</id>
<content type='text'>
Problem:
Consider a case where a node reboot is performed, and prior to the
reboot the brick was listening on port 49153. Post reboot, glusterd
assigned 49152 to the brick and started the brick process, but the new
port was never persisted. Now when glusterd restarts, it always reads
the port from its persisted store, i.e. 49153, whereas the pmap sign-in
happens with the correct port, i.e. 49152.

Fix:
Make sure when glusterd_brick_start is called, glusterd_store_volinfo is
eventually invoked.
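
In rough terms the fix ensures the freshly assigned port is written back
to the on-disk store whenever the brick is (re)started; a simplified
standalone sketch (the file layout and names here are made up):

    #include &lt;stdio.h&gt;

    /* Hypothetical stand-in for glusterd_store_volinfo (): persist the
     * port the brick is actually listening on. */
    static int
    store_brick_port (const char *path, int port)
    {
            FILE *fp = fopen (path, "w");
            if (fp == NULL)
                    return -1;
            fprintf (fp, "listen-port=%d\n", port);
            return fclose (fp);
    }

    int
    main (void)
    {
            int new_port = 49152;   /* port handed out after the reboot */

            /* brick started on new_port: persist it right away, so a
             * later glusterd restart reads the right value */
            if (store_brick_port ("/tmp/brickinfo.demo", new_port) != 0)
                    perror ("store_brick_port");
            return 0;
    }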

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1507748
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Consider a case where a node reboot is performed, and prior to the
reboot the brick was listening on port 49153. Post reboot, glusterd
assigned 49152 to the brick and started the brick process, but the new
port was never persisted. Now when glusterd restarts, it always reads
the port from its persisted store, i.e. 49153, whereas the pmap sign-in
happens with the correct port, i.e. 49152.

Fix:
Make sure when glusterd_brick_start is called, glusterd_store_volinfo is
eventually invoked.

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1507748
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: fix client io-threads option for replicate volumes</title>
<updated>2017-10-09T12:21:08+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-10-09T12:16:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=47604fad4c2a3951077e41e0c007ceb979bb2c24'/>
<id>47604fad4c2a3951077e41e0c007ceb979bb2c24</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/18430/

Problem:
Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled loading
client-io-threads for replicate volumes (it had been set to on by
default in commit e068c1997314046658dd502e9118dab32decf879) due to
performance issues, but in doing so it inadvertently failed to load the
xlator even if the user explicitly enabled the option using the volume
set command. This was despite returning success for the volume set.

Fix:
Modify the check in perfxl_option_handler() and add checks in volume
create/add-brick/remove-brick code paths, tying it all to
GD_OP_VERSION_3_12_2.
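
Conceptually the corrected check has to honour an explicit
"performance.client-io-threads on" even for replicate volumes, where the
default is now off. A rough standalone sketch of that decision (names
are illustrative; the GD_OP_VERSION_3_12_2 gating is omitted here):

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    /* Should the client-io-threads xlator be loaded into the graph? */
    static bool
    load_client_io_threads (bool is_replicate, bool explicitly_set,
                            bool option_value)
    {
            if (explicitly_set)
                    return option_value;   /* user's choice always wins */
            /* default: off for replicate volumes, on otherwise */
            return !is_replicate;
    }

    int
    main (void)
    {
            /* replicate volume, user explicitly enabled the option */
            printf ("load: %d\n", load_client_io_threads (true, true, true));
            /* replicate volume, option left at its default */
            printf ("load: %d\n", load_client_io_threads (true, false, false));
            return 0;
    }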

Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
BUG: 1499158
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backport of https://review.gluster.org/#/c/18430/

Problem:
Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled loading
client-io-threads for replicate volumes (it had been set to on by
default in commit e068c1997314046658dd502e9118dab32decf879) due to
performance issues, but in doing so it inadvertently failed to load the
xlator even if the user explicitly enabled the option using the volume
set command. This was despite returning success for the volume set.

Fix:
Modify the check in perfxl_option_handler() and add checks in volume
create/add-brick/remove-brick code paths, tying it all to
GD_OP_VERSION_3_12_2.

Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
BUG: 1499158
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: glusterd fails to start when peer's network interface is down</title>
<updated>2017-08-21T14:28:58+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-07-18T10:53:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a6608cf7b62de850502c582a638cf0d5fc2cad9a'/>
<id>a6608cf7b62de850502c582a638cf0d5fc2cad9a</id>
<content type='text'>
Problem:
glusterd fails to start on nodes where it tries to come up before the
network is up.

Fix:
On startup glusterd tries to resolve the brick path, which is based on
the hostname/ip, but in the above scenario, when the network interface
is not up, glusterd is not able to resolve the brick path using the
ip_address or hostname. With this fix glusterd will use the UUID to
resolve the brick path.
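
A rough sketch of the fallback described above: try the normal hostname
resolution first and, if the network is not yet up, fall back to the
UUID recorded in the brickinfo (the UUID comparison below is a plain
string compare, for illustration only):

    #include &lt;netdb.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    /* Return 0 if the brick's peer could be identified, -1 otherwise. */
    static int
    resolve_brick_peer (const char *hostname, const char *brick_uuid,
                        const char *local_peer_uuid)
    {
            struct addrinfo *res = NULL;

            if (getaddrinfo (hostname, NULL, NULL, &amp;res) == 0) {
                    freeaddrinfo (res);
                    return 0;       /* resolved by hostname/ip */
            }

            /* Interface may still be down: match by UUID instead of
             * failing glusterd startup. */
            if (strcmp (brick_uuid, local_peer_uuid) == 0)
                    return 0;

            return -1;
    }

    int
    main (void)
    {
            int ret = resolve_brick_peer ("no-such-host.invalid",
                                          "demo-uuid", "demo-uuid");
            printf ("resolved: %s\n", ret == 0 ? "yes" : "no");
            return 0;
    }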

&gt;Reviewed-on: https://review.gluster.org/17813
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a)

Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
BUG: 1482835
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18061
Tested-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
glusterd fails to start on nodes where it tries to come up before the
network is up.

Fix:
On startup glusterd tries to resolve the brick path, which is based on
the hostname/ip, but in the above scenario, when the network interface
is not up, glusterd is not able to resolve the brick path using the
ip_address or hostname. With this fix glusterd will use the UUID to
resolve the brick path.

&gt;Reviewed-on: https://review.gluster.org/17813
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a)

Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
BUG: 1482835
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18061
Tested-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
