<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs/glusterd, branch v6dev</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>Land part 2 of clang-format changes</title>
<updated>2018-09-12T12:22:45+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T12:22:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e16868dede6455cab644805af6fe1ac312775e13'/>
<id>e16868dede6455cab644805af6fe1ac312775e13</id>
<content type='text'>
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu &lt;nigelb@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: avoid using glusterd's working directory as a brick</title>
<updated>2018-09-08T13:37:06+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-09-03T10:56:56+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=44e4db05a953a6f231c62225b462470cacb16bd4'/>
<id>44e4db05a953a6f231c62225b462470cacb16bd4</id>
<content type='text'>
Add checks to prevent glusterd's working directory from being
used as a brick during volume creation.
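
A hedged illustration of what the new check rejects (the path is
glusterd's default working directory; $H0 follows the test-harness
conventions and is illustrative):

    # Expected to fail volume creation after this change:
    gluster volume create testvol $H0:/var/lib/glusterd force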

fixes: bz#853601
Change-Id: I4b16a05f752e92216aa628f542a4fdbf59b3c669
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: fix brick check orders</title>
<updated>2018-08-13T13:43:51+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-08-08T05:00:31+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=885c56b6f3c43cea0b27345f47f5522b42ebf278'/>
<id>885c56b6f3c43cea0b27345f47f5522b42ebf278</id>
<content type='text'>
Fix brick checks for validating-server-quorum.t &amp; quorum-validation.t,
and make the brick_up_status_1 function more generic.

Also fix a timing issue in
bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
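
A hedged usage sketch of the more generic helper in a .t test
(argument order assumed from tests/volume.rc conventions, not
quoted from this commit):

    # Wait until the given brick reports as online in volume status
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status_1 $V0 $H0 $B0/${V0}1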

Change-Id: I797ef4cec5b160aafa979bae7151b1e99fcb48ac
Updates: bz#1603063
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Fix cleanup routine for some mux tests</title>
<updated>2018-08-13T03:00:48+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-08-11T17:21:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=48b93c292c0069da9ac2fe77e66d08a1cdeacfdc'/>
<id>48b93c292c0069da9ac2fe77e66d08a1cdeacfdc</id>
<content type='text'>
Some of the mux tests set a trap to catch test exit and call
cleanup. This causes cleanup to be invoked twice when the test
times out, and even otherwise, because include.rc also sets a
trap to clean up on exit (TERM and others).

As a result, the tarballs generated on failures for these tests
are empty, which does not aid debugging.

This patch changes these tests to follow the more standard
pattern of calling cleanup at the end.
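
A hedged sketch of the pattern change in a .t test (illustrative
only, not a quote from the patch):

    # Before: the test installed its own trap on top of include.rc's:
    #     trap cleanup EXIT
    # After: rely on the trap from include.rc and end with the
    # standard trailing call:
    cleanup;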

Fixes: bz#1615037
Change-Id: Ib83aeb09fac2aa591b390b9fb9e1f605bfef9a8b
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Compare volume_id before start/attach a brick</title>
<updated>2018-08-10T17:51:37+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2018-08-04T06:35:03+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bd8fc26a278697c30537d879ea5402db7ebab577'/>
<id>bd8fc26a278697c30537d879ea5402db7ebab577</id>
<content type='text'>
Problem: After a node reboot, a brick does not come up because
         the fsid comparison fails before the brick is started.

Solution: Compare volume_id instead of fsid. The fsid changes
          after a node reboot, while the volume_id persists as an
          xattr on the brick root path from the time the volume
          was created.
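
A hedged way to inspect the persisted identifier on a brick root
(trusted.glusterfs.volume-id is the xattr glusterfs stores there;
the brick path is illustrative):

    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/brick1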

Change-Id: Ic289aab1b4ebfd83bbcae8438fee26ae61a0fff4
fixes: bz#1612418
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Bricks of normal volumes should not attach to gluster_shared_storage bricks</title>
<updated>2018-08-03T09:03:49+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-08-01T09:39:08+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=93d7f3f2da52b6fb513ad2ff91c5bd0589d1482b'/>
<id>93d7f3f2da52b6fb513ad2ff91c5bd0589d1482b</id>
<content type='text'>
Problem: In a brick multiplexing environment, bricks of a normal
volume created by a user get attached to the bricks of the
"gluster_shared_storage" volume, which is created by enabling the
enable-shared-storage option. Mounting gluster_shared_storage
enforces strict authentication checks, so when bricks of a normal
volume are attached to bricks of gluster_shared_storage, mounting
the user-created volume fails due to those checks.

Solution: Do not attach bricks of a normal volume to the brick
process of the gluster_shared_storage volume, and vice versa.
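
A hedged sketch of the setup this check concerns (standard gluster
CLI; the user volume name is illustrative):

    # Creates the internal gluster_shared_storage volume:
    gluster volume set all cluster.enable-shared-storage enable
    # With brick multiplexing on, bricks of a user volume must no
    # longer attach to the gluster_shared_storage brick process:
    gluster volume create uservol $H0:$B0/uservol1
    gluster volume start uservol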

fixes: bz#1610726
Change-Id: If1b5a2a02675789a2915ba480fb48c145449163d
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Introduce daemon-log-level cluster wide option</title>
<updated>2018-07-03T14:29:33+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-07-02T15:18:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=524c869976c837c2ef13b5ef50020e7769188e4d'/>
<id>524c869976c837c2ef13b5ef50020e7769188e4d</id>
<content type='text'>
This option, applicable to the node-level daemons, is helpful in
controlling the log level of these services. Note that any daemon
started before this option is set to a non-default value (anything
other than INFO) must be restarted for the change to take effect.
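
A hedged usage sketch (the option name is from this commit; the
'volume set all' form is the standard way to set cluster-wide
options):

    gluster volume set all cluster.daemon-log-level DEBUG
    # Daemons already running keep their old level until restarted.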

Change-Id: I7f6d2620bab2b094c737f5cc816bc093e9c9c4c9
fixes: bz#1597473
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: handling brick termination in brick-mux</title>
<updated>2018-05-07T15:31:59+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-04-05T20:23:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4da244caccd38a77de5428b6954f565219ef0719'/>
<id>4da244caccd38a77de5428b6954f565219ef0719</id>
<content type='text'>
Problem: There is a race between the glusterfs_handle_terminate()
response sent to glusterd from the last brick of the process and
the socket disconnect event that occurs after the brick process
is killed.

Solution: When it is the last brick of the brick process, instead
of sending GLUSTERD_BRICK_TERMINATE to the brick process, glusterd
kills the process directly (as is done when brick multiplexing is
disabled).

A test case is added for https://bugzilla.redhat.com/show_bug.cgi?id=1549996
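
A hedged repro sketch (volume/brick names follow the test-harness
conventions and are illustrative):

    gluster volume set all cluster.brick-multiplex on
    gluster volume create $V0 $H0:$B0/${V0}1
    gluster volume start $V0
    # Stopping the volume terminates the last brick of the mux
    # process; glusterd now kills the process directly instead of
    # sending GLUSTERD_BRICK_TERMINATE:
    gluster volume stop $V0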

Change-Id: If94958cd7649ea48d09d6af7803a0f9437a85503
fixes: bz#1545048
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>gluster: Brick process sometimes crashes while stopping a brick</title>
<updated>2018-04-19T04:31:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-03-12T14:13:15+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0043c63f70776444f69667a4ef9596217ecb42b7'/>
<id>0043c63f70776444f69667a4ef9596217ecb42b7</id>
<content type='text'>
Problem: The brick process sometimes crashes while stopping a
         brick when brick multiplexing is enabled.

Solution: The brick process was crashing because RPC connections
          were not cleaned up properly while brick multiplexing is
          enabled. With this patch, after sending the
          GF_EVENT_CLEANUP notification to the (server) xlator, we
          wait for all RPC client connections of that specific
          xlator to be destroyed. Once the RPC connections of all
          clients associated with that brick are destroyed in
          server_rpc_notify, xlator_mem_cleanup is called for the
          brick xlator as well as all its child xlators. To avoid
          races during cleanup, two new flags, cleanup_starting and
          call_cleanup, are introduced in each xlator.

BUG: 1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Note: All test cases were run in a separate build
      (https://review.gluster.org/#/c/19700/) with the same patch
      after forcefully enabling brick mux; all test cases passed.
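
A hedged repro sketch (illustrative; volume names are not from
this commit):

    gluster volume set all cluster.brick-multiplex on
    # Two volumes whose bricks share one brick process:
    gluster volume create $V0 $H0:$B0/${V0}1; gluster volume start $V0
    gluster volume create $V1 $H0:$B0/${V1}1; gluster volume start $V1
    # Previously, stopping one volume could crash the shared brick
    # process; the bricks of the other volume should stay online:
    gluster volume stop $V1
    gluster volume status $V0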

Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
</content>
</entry>
<entry>
<title>glusterd: mark port_registered to true for all running bricks with brick mux</title>
<updated>2018-04-05T07:18:03+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-03-27T11:23:33+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d70529701f09f89c7e4f578446d55de31497361d'/>
<id>d70529701f09f89c7e4f578446d55de31497361d</id>
<content type='text'>
glusterd maintains a boolean flag 'port_registered' that is used to
determine whether a brick has completed its portmap sign-in process.
This flag is (re)set on pmap_signin and pmap_signout events. With
brick multiplexing, this flag identifies whether the very first
brick with which the process was spawned has completed its sign-in.
However, on a glusterd restart, when a brick is already identified
as running, glusterd does a pmap_registry_bind to ensure its portmap
table is updated, but this flag is not set. That is fine without
brick multiplexing, but it causes an issue if the very first brick
that came up as part of the process is replaced: the subsequent
brick attach will fail. One way to validate this is to create and
start a volume, remove the first brick, and then add-brick a new
one. The add-brick operation takes a very long time, and afterwards
the volume status shows the new brick as down while all other
bricks are up.

Solution is to set brickinfo-&gt;port_registered to true for all the
running bricks when brick multiplexing is enabled.
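
A hedged sketch of the validation steps described above (names are
illustrative, following the test-harness conventions):

    gluster volume set all cluster.brick-multiplex on
    gluster volume create $V0 $H0:$B0/${V0}{1..3} force
    gluster volume start $V0
    gluster volume remove-brick $V0 $H0:$B0/${V0}1 force
    # Without the fix, this attach hangs and the new brick stays down:
    gluster volume add-brick $V0 $H0:$B0/${V0}4 force
    gluster volume status $V0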

Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
Fixes: bz#1560957
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
</feed>
