<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-op-sm.c, branch release-7</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd/svc: update pid of mux volumes from the shd process</title>
<updated>2019-07-24T10:29:17+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-06-24T06:30:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=47fcbc4c055a7880d2926e918ae1e1f57c7db20d'/>
<id>47fcbc4c055a7880d2926e918ae1e1f57c7db20d</id>
<content type='text'>
For a normal volume, we update the pid from the process
itself during daemonization, or at the end of init when
running in no-daemon mode. Along with updating the pid we
also lock the pidfile, to make sure that the process is
actually running.

With brick mux, we were updating the pidfile from glusterd
after an attach/detach request.

There are two problems with this approach.
1) We do not hold a pid lock on any pidfile other than the
   parent process's.
2) Attach/detach requests can race with other operations.
   For example, an shd start and a volume stop could race:
   while we are trying to link the pidfile to the running
   process, the file may already have been deleted by the
   thread doing the volume stop.
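
For illustration, a minimal sketch of this pidfile pattern using
plain POSIX calls (a hypothetical helper, not the actual glusterd
code): the process takes a write lock on its own pidfile and
records its pid, so a pidfile without a live lock is known to be
stale.

    #include &lt;fcntl.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;
    #include &lt;unistd.h&gt;

    /* Hypothetical helper: the process itself locks and updates its
     * pidfile; the fd (and with it the lock) stays open for the
     * life of the process. */
    static int
    lock_and_write_pidfile(const char *path)
    {
        char buf[32];
        struct flock lk = {.l_type = F_WRLCK, .l_whence = SEEK_SET};
        int fd = open(path, O_CREAT | O_RDWR, 0644);

        if (fd &lt; 0)
            return -1;
        if (fcntl(fd, F_SETLK, &amp;lk) &lt; 0) {
            close(fd); /* lock held elsewhere: already running */
            return -1;
        }
        snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
        if (ftruncate(fd, 0) &lt; 0 || write(fd, buf, strlen(buf)) &lt; 0) {
            close(fd);
            return -1;
        }
        return fd; /* caller keeps fd open to hold the lock */
    }

    int
    main(int argc, char **argv)
    {
        if (lock_and_write_pidfile(argc &gt; 1 ? argv[1] : "/tmp/demo.pid") &lt; 0)
            return 1;
        pause(); /* hold the lock until the process is killed */
        return 0;
    }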

Backport of : https://review.gluster.org/#/c/glusterfs/+/22935/
&gt;Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
&gt;Updates: bz#1722541
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;

Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
Updates: bz#1732668
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: conditionally clear txn_opinfo in stage op</title>
<updated>2019-07-20T07:40:12+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-06-25T05:41:10+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4ad6dd3ca8cd558b853c7df0d6cd34dd22dff7d3'/>
<id>4ad6dd3ca8cd558b853c7df0d6cd34dd22dff7d3</id>
<content type='text'>
...otherwise this leads to a crash when volume status is run in
heterogeneous mode.

&gt; Fixes: bz#1723658
&gt; Change-Id: I0d39f412b2e5e9d3ef0a3462b90b38bb5364b09d
&gt; Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; (cherry picked from commit a72452fcf90679b28baec12d2769cbaa982bb4e4)

Fixes: bz#1728127
Change-Id: I0d39f412b2e5e9d3ef0a3462b90b38bb5364b09d
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: add an op-version check</title>
<updated>2019-05-31T03:08:44+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-05-15T02:05:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4f1b762fb482f2ebddaacfd31a7d43a967fea9e3'/>
<id>4f1b762fb482f2ebddaacfd31a7d43a967fea9e3</id>
<content type='text'>
Problem: "gluster v status" is hung in heterogenous cluster
when issued from a non-upgraded node.

Cause: commit 34e010d64 fixes the txn-opinfo mem leak
in op-sm framework by not setting the txn-opinfo if some
conditions are true. When vol status is issued from a
non-upgraded node, command is hanging in its upgraded peer
as the upgraded node setting the txn-opinfo based on new
conditions where as non-upgraded nodes are following diff
conditions.

Fix: Add an op-version check, so that all the nodes follow
same set of conditions to set txn-opinfo.
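
A minimal self-contained sketch of this kind of gate (the struct,
function name, and op-version constant are illustrative, not the
actual glusterd symbols):

    #include &lt;stdio.h&gt;

    /* Illustrative value; the real GD_OP_VERSION_* constants differ. */
    #define OP_VERSION_NEW_TXN_RULES 70000

    struct cluster_conf {
        int op_version; /* cluster-wide, agreed by all peers */
    };

    /* Every peer takes the same branch because op_version is a
     * cluster-wide value, so upgraded and non-upgraded nodes agree
     * on whether txn-opinfo gets set. */
    static int
    use_new_txn_opinfo_rules(const struct cluster_conf *conf)
    {
        return conf-&gt;op_version &gt;= OP_VERSION_NEW_TXN_RULES;
    }

    int
    main(void)
    {
        struct cluster_conf conf = {.op_version = 60000};

        printf("use new rules: %d\n", use_new_txn_opinfo_rules(&amp;conf));
        return 0;
    }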

fixes: bz#1710159
Change-Id: Ie1f353212c5931ddd1b728d2e6949dfe6225c4ab
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: coverity fix</title>
<updated>2019-05-30T09:39:35+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-05-28T06:22:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d2576452028db3b817fc5b1f4f924a98bc22c7dc'/>
<id>d2576452028db3b817fc5b1f4f924a98bc22c7dc</id>
<content type='text'>
CID 1401590: dead code

updates: bz#789278

Change-Id: I3aa1d3aa9769e6990f74b6a53e288e788173c5e0
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/tier: remove tier related code from glusterd</title>
<updated>2019-05-27T07:50:24+00:00</updated>
<author>
<name>Hari Gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2019-05-02T13:03:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e1cc4275583dfd8ae8d0433587f39854c1851794'/>
<id>e1cc4275583dfd8ae8d0433587f39854c1851794</id>
<content type='text'>
The handler functions now point to dummy functions. The switch-case
handling for tier has also been moved so that tier falls through to
the default case, to avoid issues if it is ever reintroduced.

The tier changes in DHT remain as they are.
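
A small self-contained illustration of the pattern, with
hypothetical names (not the actual glusterd handler table):

    #include &lt;stdio.h&gt;

    /* Stub for retired operations: fail cleanly instead of reaching
     * removed code paths. */
    static int
    op_not_supported(void)
    {
        fprintf(stderr, "operation is no longer supported\n");
        return -1;
    }

    static int
    op_status(void) /* stand-in for a live handler */
    {
        return 0;
    }

    enum op_type { OP_STATUS, OP_TIER /* retired */ };

    static int
    dispatch(enum op_type op)
    {
        switch (op) {
        case OP_STATUS:
            return op_status();
        default: /* OP_TIER, or anything reintroduced, lands here */
            return op_not_supported();
        }
    }

    int
    main(void)
    {
        printf("tier dispatch returned %d\n", dispatch(OP_TIER));
        return 0;
    }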

updates: bz#1693692

Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: prevent use-after-free in glusterd_op_ac_send_brick_op()</title>
<updated>2019-05-06T09:50:18+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2019-05-03T07:18:31+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b2d7644435b92bd2468bbaaef7451a64e08c08bd'/>
<id>b2d7644435b92bd2468bbaaef7451a64e08c08bd</id>
<content type='text'>
Coverity reported that GF_FREE(req_ctx) could be called twice on the
same req_ctx.
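
For illustration, a common C guard against this class of bug (a
general sketch of the idea, not the exact change made in
glusterd_op_ac_send_brick_op()):

    #include &lt;stdlib.h&gt;

    /* Free through a helper that NULLs the pointer, so an accidental
     * second call on the same variable becomes a harmless no-op. */
    static void
    free_and_null(void **pp)
    {
        if (pp &amp;&amp; *pp) {
            free(*pp);
            *pp = NULL;
        }
    }

    int
    main(void)
    {
        void *req_ctx = malloc(64);

        free_and_null(&amp;req_ctx);
        free_and_null(&amp;req_ctx); /* no double free on the second call */
        return 0;
    }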

Change-Id: I9120686e5920de8c27688e10de0db6aa26292064
CID: 1401115
Updates: bz#789278
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Fix coverity defects &amp; put coverity annotations</title>
<updated>2019-05-02T08:06:06+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-25T06:30:52+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=91c19c50a3c7b336177dda186500568be43626b8'/>
<id>91c19c50a3c7b336177dda186500568be43626b8</id>
<content type='text'>
Along with fixing a few defects, add the required annotations for the
defects that are marked ignore/false positive/intentional as per the
coverity defect sheet. This should stop the per-component graph on the
coverity glusterfs web page from showing many defects as open.
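
A Coverity annotation is a structured comment placed on the line just
above the flagged statement; the event tag must match what the checker
reports (the tag and surrounding code below are illustrative):

    #include &lt;stdlib.h&gt;

    int
    main(void)
    {
        /* coverity[leaked_storage] - intentional one-time allocation,
         * reclaimed by the OS at process exit */
        char *buf = malloc(4096);

        (void)buf;
        return 0;
    }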

Updates: bz#789278
Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Optimize glusterd handshaking code path</title>
<updated>2019-04-15T15:20:50+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-03-29T06:18:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=26a19d9da3ab5604db02d4ca02ce868fb57193a4'/>
<id>26a19d9da3ab5604db02d4ca02ce868fb57193a4</id>
<content type='text'>
Problem: At handshake time glusterd populates volume data in a
         dictionary. When more than 1500 volumes are configured,
         glusterd takes more than 10 minutes to generate the data.
         Because of this delay, rpc requests time out and rpc
         starts bailing out call frames.

Solution: To optimize the code, the changes below were made (a
          sketch of the threaded fan-out in change 1 follows the
          list):
          1) Spawn multiple threads to populate volume data in bulk
             in separate dictionaries, and introduce an option
             glusterd.brick-dict-thread-count to configure the
             number of threads used to populate volume data.
          2) Populate tier data only when the volume type is tier.
          3) Compare snap data only when snap_count is non-zero.
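
A minimal self-contained sketch of the fan-out in change 1, with a
plain counter standing in for the per-thread dictionaries (all names
are illustrative):

    #include &lt;pthread.h&gt;
    #include &lt;stdio.h&gt;

    #define NUM_THREADS 4
    #define NUM_VOLUMES 1500

    /* Each worker fills its own slice, so no locking is needed while
     * populating; the per-thread results are merged afterwards. */
    struct slice {
        int start;
        int end;
        int populated; /* stand-in for a per-thread dictionary */
    };

    static void *
    fill_slice(void *arg)
    {
        struct slice *s = arg;

        for (int i = s-&gt;start; i &lt; s-&gt;end; i++)
            s-&gt;populated++; /* stand-in for building dict entries */
        return NULL;
    }

    int
    main(void)
    {
        pthread_t tid[NUM_THREADS];
        struct slice slices[NUM_THREADS];
        int per = NUM_VOLUMES / NUM_THREADS;
        int total = 0;

        for (int t = 0; t &lt; NUM_THREADS; t++) {
            slices[t] = (struct slice){t * per, (t + 1) * per, 0};
            pthread_create(&amp;tid[t], NULL, fill_slice, &amp;slices[t]);
        }
        for (int t = 0; t &lt; NUM_THREADS; t++) {
            pthread_join(tid[t], NULL);
            total += slices[t].populated; /* merge step */
        }
        printf("populated %d volume entries\n", total);
        return 0;
    }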

Fixes: bz#1699339
Change-Id: I38dc71970c049217f9d1a06fc0aaf4c26eab18f5
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd-volgen.c: skip fetching some vol settings in a bricks loop.</title>
<updated>2019-04-13T20:26:37+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-04-09T17:51:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e87423a826aa4d17c8638b829da7813e41e1a2fd'/>
<id>e87423a826aa4d17c8638b829da7813e41e1a2fd</id>
<content type='text'>
The values are per-volume and are not going to change while its
bricks are processed, as far as I can understand the code. Fetch
them once and store them outside the loop.
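
A small illustration of the hoist, with hypothetical names (not the
actual glusterd-volgen.c lookups):

    #include &lt;stdio.h&gt;

    struct volume {
        int transport; /* per-volume setting */
        int brick_count;
    };

    /* stand-in for a comparatively expensive option lookup */
    static int
    get_transport(const struct volume *v)
    {
        return v-&gt;transport;
    }

    static void
    process_bricks(const struct volume *v)
    {
        /* hoisted: the value cannot change while iterating bricks */
        int transport = get_transport(v);

        for (int b = 0; b &lt; v-&gt;brick_count; b++)
            printf("brick %d: transport %d\n", b, transport);
    }

    int
    main(void)
    {
        struct volume vol = {.transport = 0, .brick_count = 3};

        process_bricks(&amp;vol);
        return 0;
    }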

updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;

Change-Id: I2bc263f92f9141ea26a9dfb8265225f38307cbac
</content>
</entry>
<entry>
<title>glusterd: remove glusterd_check_volume_exists() call</title>
<updated>2019-04-11T15:19:30+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-08T13:24:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=7108913337f24d3d7a39d919a29872fe50072cf4'/>
<id>7108913337f24d3d7a39d919a29872fe50072cf4</id>
<content type='text'>
The same functionality is already covered by glusterd_volinfo_find().

Updates: bz#1193929
Change-Id: I2308c5fa9b2ca9edaa95f172d0bd914103808c36
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
</feed>
