<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd.h, branch release-3.12</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: import volumes in separate synctask</title>
<updated>2018-04-06T12:46:32+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-02-08T03:39:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0e3206c6a8ef36737e5b303580b87a87f6dc1c8e'/>
<id>0e3206c6a8ef36737e5b303580b87a87f6dc1c8e</id>
<content type='text'>
With brick multiplexing, attaching a brick to an existing brick process
requires the compatible brick to finish its initialization and portmap
sign-in first, so the thread may have to sleep and context-switch the
synctask to let the brick process communicate with glusterd. In the
normal code path this works fine, as glusterd_restart_bricks () is
launched through a separate synctask.

If there is a volume mismatch when glusterd restarts,
glusterd_import_friend_volume is invoked and in turn calls
glusterd_start_bricks () from the main thread, which may land in the
same situation. Since this is not done through a separate synctask, the
1st brick never gets its turn to finish all of its handshaking and, as a
consequence, all the bricks fail to get attached to it.

Solution: Execute volume import and glusterd restart bricks in separate
synctasks. Importing snaps also has to be done through a synctask, since
the parent volume needs to be available for snap import to work.

&gt;mainline patch : https://review.gluster.org/#/c/19357/
                  https://review.gluster.org/#/c/19536/

Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
BUG: 1543708
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With brick multiplexing, attaching a brick to an existing brick process
requires the compatible brick to finish its initialization and portmap
sign-in first, so the thread may have to sleep and context-switch the
synctask to let the brick process communicate with glusterd. In the
normal code path this works fine, as glusterd_restart_bricks () is
launched through a separate synctask.

If there is a volume mismatch when glusterd restarts,
glusterd_import_friend_volume is invoked and in turn calls
glusterd_start_bricks () from the main thread, which may land in the
same situation. Since this is not done through a separate synctask, the
1st brick never gets its turn to finish all of its handshaking and, as a
consequence, all the bricks fail to get attached to it.

Solution: Execute volume import and glusterd restart bricks in separate
synctasks. Importing snaps also has to be done through a synctask, since
the parent volume needs to be available for snap import to work.

&gt;mainline patch : https://review.gluster.org/#/c/19357/
                  https://review.gluster.org/#/c/19536/

Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
BUG: 1543708
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd : introduce timer in mgmt_v3_lock</title>
<updated>2017-11-06T12:16:15+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-05T18:14:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1a1bdfbffc81981a80af40ebf000194d9bcd1bf0'/>
<id>1a1bdfbffc81981a80af40ebf000194d9bcd1bf0</id>
<content type='text'>
Problem:
In a multinode environment, if two op-sm transactions are initiated
on one of the receiver nodes at the same time, glusterd may end up
holding a stale lock.

Solution:
While taking the mgmt_v3 lock, register a callback with
gf_timer_call_after which releases the lock after a certain period
of time.
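
A minimal standalone sketch of the idea (plain pthreads, not the
gf_timer_call_after API; names are illustrative): when the lock is
taken, arm a timer that force-releases it after a timeout, so a failed
transaction cannot leave it stale.

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;
    #include &lt;stdint.h&gt;
    #include &lt;unistd.h&gt;

    static pthread_mutex_t guard = PTHREAD_MUTEX_INITIALIZER;
    static bool locked = false;

    static void *unlock_after (void *arg)
    {
        sleep ((unsigned int) (uintptr_t) arg);  /* lock timeout, seconds */
        pthread_mutex_lock (&amp;guard);
        locked = false;                          /* drop the stale lock */
        pthread_mutex_unlock (&amp;guard);
        return NULL;
    }

    int mgmt_lock_acquire (unsigned int timeout)
    {
        pthread_t tid;
        int ret = -1;

        pthread_mutex_lock (&amp;guard);
        if (!locked) {
            locked = true;
            ret = 0;
            pthread_create (&amp;tid, NULL, unlock_after,
                            (void *) (uintptr_t) timeout);
            pthread_detach (tid);
        }
        pthread_mutex_unlock (&amp;guard);
        return ret;
    }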

&gt;mainline patch : https://review.gluster.org/#/c/18437/

Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
BUG: 1503239
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In a multinode environment, if two op-sm transactions are initiated
on one of the receiver nodes at the same time, glusterd may end up
holding a stale lock.

Solution:
While taking the mgmt_v3 lock, register a callback with
gf_timer_call_after which releases the lock after a certain period
of time.
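
A minimal standalone sketch of the idea (plain pthreads, not the
gf_timer_call_after API; names are illustrative): when the lock is
taken, arm a timer that force-releases it after a timeout, so a failed
transaction cannot leave it stale.

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;
    #include &lt;stdint.h&gt;
    #include &lt;unistd.h&gt;

    static pthread_mutex_t guard = PTHREAD_MUTEX_INITIALIZER;
    static bool locked = false;

    static void *unlock_after (void *arg)
    {
        sleep ((unsigned int) (uintptr_t) arg);  /* lock timeout, seconds */
        pthread_mutex_lock (&amp;guard);
        locked = false;                          /* drop the stale lock */
        pthread_mutex_unlock (&amp;guard);
        return NULL;
    }

    int mgmt_lock_acquire (unsigned int timeout)
    {
        pthread_t tid;
        int ret = -1;

        pthread_mutex_lock (&amp;guard);
        if (!locked) {
            locked = true;
            ret = 0;
            pthread_create (&amp;tid, NULL, unlock_after,
                            (void *) (uintptr_t) timeout);
            pthread_detach (tid);
        }
        pthread_mutex_unlock (&amp;guard);
        return ret;
    }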

&gt;mainline patch : https://review.gluster.org/#/c/18437/

Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
BUG: 1503239
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: fix brick restart parallelism</title>
<updated>2017-11-06T06:09:21+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-26T08:56:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=44e3c3b5c813168d72f10ecb3c058ac3489c719c'/>
<id>44e3c3b5c813168d72f10ecb3c058ac3489c719c</id>
<content type='text'>
glusterd's brick restart logic is not always sequential, as there are
at least three different ways in which bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times, because two threads hit
glusterd_brick_start () within a fraction of a millisecond and glusterd
had no way to reject either of them, as the brick start criteria were
met in both cases.

As a solution, this is controlled by two different fields, as sketched
below: a boolean called start_triggered, which indicates that a brick
start has been triggered and stays true until the brick dies or is
killed, and a mutex lock to ensure that for a particular brick we do
not enter glusterd_brick_start () more than once at the same point in
time.
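
A minimal standalone sketch of how the two fields cooperate (plain
pthreads; field and function names are illustrative, not the actual
glusterd code):

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;

    typedef struct brick {
        pthread_mutex_t restart_lock;    /* serialises the start path */
        bool            start_triggered; /* true until the brick dies or
                                            is killed */
    } brick_t;

    /* returns 1 if this caller actually starts the brick, 0 if another
       thread already triggered the start */
    int brick_start_once (brick_t *brick)
    {
        int started = 0;

        pthread_mutex_lock (&amp;brick-&gt;restart_lock);
        if (!brick-&gt;start_triggered) {
            brick-&gt;start_triggered = true;  /* spawn/attach happens here */
            started = 1;
        }
        pthread_mutex_unlock (&amp;brick-&gt;restart_lock);
        return started;
    }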

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1508283
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 82be66ef8e9e3127d41a4c843daf74c1d8aec4aa)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
glusterd's brick restart logic is not always sequential, as there are
at least three different ways in which bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times, because two threads hit
glusterd_brick_start () within a fraction of a millisecond and glusterd
had no way to reject either of them, as the brick start criteria were
met in both cases.

As a solution, this is controlled by two different fields, as sketched
below: a boolean called start_triggered, which indicates that a brick
start has been triggered and stays true until the brick dies or is
killed, and a mutex lock to ensure that for a particular brick we do
not enter glusterd_brick_start () more than once at the same point in
time.
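
A minimal standalone sketch of how the two fields cooperate (plain
pthreads; field and function names are illustrative, not the actual
glusterd code):

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;

    typedef struct brick {
        pthread_mutex_t restart_lock;    /* serialises the start path */
        bool            start_triggered; /* true until the brick dies or
                                            is killed */
    } brick_t;

    /* returns 1 if this caller actually starts the brick, 0 if another
       thread already triggered the start */
    int brick_start_once (brick_t *brick)
    {
        int started = 0;

        pthread_mutex_lock (&amp;brick-&gt;restart_lock);
        if (!brick-&gt;start_triggered) {
            brick-&gt;start_triggered = true;  /* spawn/attach happens here */
            started = 1;
        }
        pthread_mutex_unlock (&amp;brick-&gt;restart_lock);
        return started;
    }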

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1508283
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 82be66ef8e9e3127d41a4c843daf74c1d8aec4aa)
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Gluster should keep PID file in correct location</title>
<updated>2017-08-12T13:38:30+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2016-03-02T12:12:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=22ac7241b2f8c1bb3db2678b8b6b9a364f14876c'/>
<id>22ac7241b2f8c1bb3db2678b8b6b9a364f14876c</id>
<content type='text'>
Currently Gluster keeps the pid information of all daemon and brick
processes in the Gluster configuration file directory
(i.e., /var/lib/glusterd/*).

These pid files should be separate from the configuration files:
deleting the configuration file directory might result in serious
problems, and /var/run/gluster is the default placeholder directory
for pid files anyway.

So, with this fix Gluster will keep the pid information of all
processes in the /var/run/gluster/* directory.

&gt; BUG: 1258561
&gt; Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
&gt; Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/13580
&gt; Tested-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; cherry pick from commit 220d406ad13d840e950eef001a2b36f87570058d

BUG: 1480459
Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18023
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently Gluster keeps the pid information of all daemon and brick
processes in the Gluster configuration file directory
(i.e., /var/lib/glusterd/*).

These pid files should be separate from the configuration files:
deleting the configuration file directory might result in serious
problems, and /var/run/gluster is the default placeholder directory
for pid files anyway.

So, with this fix Gluster will keep the pid information of all
processes in the /var/run/gluster/* directory.

&gt; BUG: 1258561
&gt; Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
&gt; Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/13580
&gt; Tested-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; cherry pick from commit 220d406ad13d840e950eef001a2b36f87570058d

BUG: 1480459
Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18023
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Block brick attach request till the brick's ctx is set</title>
<updated>2017-08-12T13:35:26+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2017-08-08T09:06:17+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d66af9ac76f84faa33ecb2eb390656f5637e6fee'/>
<id>d66af9ac76f84faa33ecb2eb390656f5637e6fee</id>
<content type='text'>
Problem: In a multiplexing setup in a container environment we hit a
race where, before the first brick finished its handshake with glusterd,
the subsequent attach requests went through and actually failed, and
glusterd had no mechanism to realize it. This left all such bricks
inactive, so clients were not able to connect.

Solution: Introduce a new flag port_registered in glusterd_brickinfo
          to make sure the pmap sign-in has finished before subsequent
          attach-brick requests are processed.

Test:     To reproduce the issue, follow the steps below:
          1) Create 100 volumes on 3 nodes (1x3) in a CNS environment
          2) Enable brick multiplexing
          3) Reboot one container
          4) Run the command below
             for v in `gluster v list`
             do
               glfsheal $v | grep -i "transport"
             done
          After applying the patch the command should not fail.

Note:   A big thanks to Atin for suggesting the fix.

&gt;Reviewed-on: https://review.gluster.org/17984
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
&gt;(cherry picked from commit c13d69babc228a2932994962d6ea8afe2cdd620a)

BUG: 1479662
Change-Id: I8e1bd6132122b3a5b0dd49606cea564122f2609b
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18004
Tested-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: In a multiplexing setup in a container environment we hit a
race where, before the first brick finished its handshake with glusterd,
the subsequent attach requests went through and actually failed, and
glusterd had no mechanism to realize it. This left all such bricks
inactive, so clients were not able to connect.

Solution: Introduce a new flag port_registered in glusterd_brickinfo
          to make sure the pmap sign-in has finished before subsequent
          attach-brick requests are processed.

Test:     To reproduce the issue, follow the steps below:
          1) Create 100 volumes on 3 nodes (1x3) in a CNS environment
          2) Enable brick multiplexing
          3) Reboot one container
          4) Run the command below
             for v in `gluster v list`
             do
               glfsheal $v | grep -i "transport"
             done
          After applying the patch the command should not fail.

Note:   A big thanks to Atin for suggesting the fix.

&gt;Reviewed-on: https://review.gluster.org/17984
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
&gt;(cherry picked from commit c13d69babc228a2932994962d6ea8afe2cdd620a)

BUG: 1479662
Change-Id: I8e1bd6132122b3a5b0dd49606cea564122f2609b
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18004
Tested-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tier: separation of attach-tier from add-brick</title>
<updated>2017-08-04T20:34:03+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2017-01-23T11:08:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=098801ba723d8dae49ea104144d007f23a8e0a4f'/>
<id>098801ba723d8dae49ea104144d007f23a8e0a4f</id>
<content type='text'>
PROBLEM: Both attach-tier and add-brick share the same RPC and code
path. This becomes a hurdle while trying to implement add-brick on a
tiered volume.

FIX: This patch separates add-brick and attach-tier by giving them
separate RPCs.

&gt;Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
&gt;BUG: 1376326
&gt;Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/15503
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
&gt;Reviewed-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;

Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
BUG: 1478276
Reviewed-on: https://review.gluster.org/17974
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
PROBLEM: Both attach-tier and add-brick share the same RPC and code
path. This becomes a hurdle while trying to implement add-brick on a
tiered volume.

FIX: This patch separates add-brick and attach-tier by giving them
separate RPCs.

&gt;Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
&gt;BUG: 1376326
&gt;Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/15503
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
&gt;Reviewed-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;

Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
BUG: 1478276
Reviewed-on: https://review.gluster.org/17974
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>logging: localtime logging, cmdline, volume set option</title>
<updated>2017-08-03T12:04:50+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-07-24T12:18:47+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=498164fa45309bd314ff2fcc282b140edf72bc22'/>
<id>498164fa45309bd314ff2fcc282b140edf72bc22</id>
<content type='text'>
Despite the fact that appliances generally use UTC, some
users really want log entries in localtime.

fixes gluster/glusterfs#272

feature page: https://review.gluster.org/17807

Backport from master https://review.gluster.org/#/c/16911/

Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17928
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Despite the fact that appliances generally use UTC, some
users really want log entries in localtime.

fixes gluster/glusterfs#272

feature page: https://review.gluster.org/17807

Backport from master https://review.gluster.org/#/c/16911/

Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17928
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>posix: option to handle the shared bricks for statvfs()</title>
<updated>2017-07-31T15:34:58+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2017-06-23T07:40:56+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=61ea2a44b509cebc566fc18b2c356d88a3f1fdc8'/>
<id>61ea2a44b509cebc566fc18b2c356d88a3f1fdc8</id>
<content type='text'>
Currently the 'storage/posix' xlator has an option,
`export-statfs-size no`, which exports zero as the value for a few
fields in `struct statvfs`. In the case of a backend brick shared
between multiple brick processes, the values of these fields should
instead be `field_value / number-of-bricks-at-node`. This way, issues
such as 'min-free-disk' at different layers are also handled properly
when the statfs() sys call is made, as in the sketch below.
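
A minimal standalone sketch of the division described above (not the
posix xlator code; the brick count here is a hypothetical stand-in for
number-of-bricks-at-node):

    #include &lt;stdio.h&gt;
    #include &lt;sys/statvfs.h&gt;

    int main (int argc, char **argv)
    {
        struct statvfs buf;
        unsigned long  bricks = 4;  /* hypothetical shared-brick count */

        if (argc &lt; 2 || statvfs (argv[1], &amp;buf) != 0)
            return 1;
        /* report sizes as field_value / number-of-bricks-at-node */
        printf ("f_blocks per brick: %lu\n",
                (unsigned long) (buf.f_blocks / bricks));
        printf ("f_bavail per brick: %lu\n",
                (unsigned long) (buf.f_bavail / bricks));
        return 0;
    }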

Fixes #241

&gt; Reviewed-on: https://review.gluster.org/17618
&gt; Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; (cherry picked from commit febf5ed4848ad705a34413353559482417c61467)

Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17903
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently the 'storage/posix' xlator has an option,
`export-statfs-size no`, which exports zero as the value for a few
fields in `struct statvfs`. In the case of a backend brick shared
between multiple brick processes, the values of these fields should
instead be `field_value / number-of-bricks-at-node`. This way, issues
such as 'min-free-disk' at different layers are also handled properly
when the statfs() sys call is made, as in the sketch below.
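
A minimal standalone sketch of the division described above (not the
posix xlator code; the brick count here is a hypothetical stand-in for
number-of-bricks-at-node):

    #include &lt;stdio.h&gt;
    #include &lt;sys/statvfs.h&gt;

    int main (int argc, char **argv)
    {
        struct statvfs buf;
        unsigned long  bricks = 4;  /* hypothetical shared-brick count */

        if (argc &lt; 2 || statvfs (argv[1], &amp;buf) != 0)
            return 1;
        /* report sizes as field_value / number-of-bricks-at-node */
        printf ("f_blocks per brick: %lu\n",
                (unsigned long) (buf.f_blocks / bricks));
        printf ("f_bavail per brick: %lu\n",
                (unsigned long) (buf.f_bavail / bricks));
        return 0;
    }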

Fixes #241

&gt; Reviewed-on: https://review.gluster.org/17618
&gt; Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; (cherry picked from commit febf5ed4848ad705a34413353559482417c61467)

Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17903
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Introduce option to limit no. of muxed bricks per process</title>
<updated>2017-07-10T04:33:19+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2017-06-02T04:42:12+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9e8ee31e643b7fbf7d46092c395ea27aaeb82f6b'/>
<id>9e8ee31e643b7fbf7d46092c395ea27aaeb82f6b</id>
<content type='text'>
This commit introduces a new global option that can be set to limit
the number of multiplexed bricks in one process.

Usage:
`# gluster volume set all cluster.max-bricks-per-process &lt;value&gt;`

If this option is not set then multiplexing happens with no limit set;
i.e. a brick process will have as many bricks multiplexed to it as
possible. In other words, the current multiplexing behaviour won't
change if this option isn't set to any value.

This commit also introduces a brick process instance that contains
information about each brick process, such as the number of bricks
handled by the process (which is 1 in the non-multiplexing case), the
list of bricks, and the port number, which also serves as a unique
identifier for each brick process instance; see the sketch below. The
brick process list is maintained in 'glusterd_conf_t'.
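
A rough sketch of what such a brick process record carries (hypothetical
names, not the actual glusterd types):

    struct brick_entry {
        char               *path;
        struct brick_entry *next;
    };

    struct brick_process {
        int                   port;        /* unique id of the process   */
        unsigned int          brick_count; /* 1 when multiplexing is off */
        struct brick_entry   *bricks;      /* bricks attached to it      */
        struct brick_process *next;        /* per-daemon list entry      */
    };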

Updates: #151
Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: https://review.gluster.org/17469
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This commit introduces a new global option that can be set to limit
the number of multiplexed bricks in one process.

Usage:
`# gluster volume set all cluster.max-bricks-per-process &lt;value&gt;`

If this option is not set then multiplexing happens with no limit set;
i.e. a brick process will have as many bricks multiplexed to it as
possible. In other words, the current multiplexing behaviour won't
change if this option isn't set to any value.

This commit also introduces a brick process instance that contains
information about each brick process, such as the number of bricks
handled by the process (which is 1 in the non-multiplexing case), the
list of bricks, and the port number, which also serves as a unique
identifier for each brick process instance; see the sketch below. The
brick process list is maintained in 'glusterd_conf_t'.
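
A rough sketch of what such a brick process record carries (hypothetical
names, not the actual glusterd types):

    struct brick_entry {
        char               *path;
        struct brick_entry *next;
    };

    struct brick_process {
        int                   port;        /* unique id of the process   */
        unsigned int          brick_count; /* 1 when multiplexing is off */
        struct brick_entry   *bricks;      /* bricks attached to it      */
        struct brick_process *next;        /* per-daemon list entry      */
    };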

Updates: #151
Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: https://review.gluster.org/17469
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: make listen backlog value as configurable</title>
<updated>2017-06-08T15:32:30+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2017-05-29T10:30:24+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0a20e56d07de3e467e09da885a6b71cdc165de17'/>
<id>0a20e56d07de3e467e09da885a6b71cdc165de17</id>
<content type='text'>
Problem:

When we call listen from protocol/server, we use a hard-coded value
of 10 if one is not given manually. With multiplexing, especially
when glusterd restarts, all clients may try to connect to the server
at once, which overflows the listen queue and makes the kernel
complain about the errors.

Solution:

This patch introduces a volume set command to make the backlog value
configurable, and also changes the default backlog from 10 to 128.
This change only applies to sockets listening from protocol (a
standalone sketch follows the notes below).

Example:

gluster volume set &lt;volname&gt; transport.listen-backlog 1024

Note: 1. The brick has to be restarted for this value to take effect.
      2. This change is not reflected in glusterd or other xlators
         which call listen. If you need it there, you have to add
         this option to the volfile.
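
A minimal standalone sketch (plain sockets, not the glusterfs transport
code; the function name is illustrative) showing where the backlog value
plugs in:

    #include &lt;string.h&gt;
    #include &lt;arpa/inet.h&gt;
    #include &lt;netinet/in.h&gt;
    #include &lt;sys/socket.h&gt;

    int make_listener (int port, int backlog)
    {
        struct sockaddr_in addr;
        int fd = socket (AF_INET, SOCK_STREAM, 0);

        if (fd &lt; 0)
            return -1;
        memset (&amp;addr, 0, sizeof (addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl (INADDR_ANY);
        addr.sin_port        = htons (port);
        if (bind (fd, (struct sockaddr *) &amp;addr, sizeof (addr)) != 0)
            return -1;
        /* 10 was the old hard-coded backlog; 128 is the new default */
        return listen (fd, backlog);
    }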

Change-Id: I0c5a2bbf28b5db612f9979e7560e05dd82b41477
BUG: 1456405
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17411
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:

When we call listen from protocol/server, we use a hard-coded value
of 10 if one is not given manually. With multiplexing, especially
when glusterd restarts, all clients may try to connect to the server
at once, which overflows the listen queue and makes the kernel
complain about the errors.

Solution:

This patch introduces a volume set command to make the backlog value
configurable, and also changes the default backlog from 10 to 128.
This change only applies to sockets listening from protocol (a
standalone sketch follows the notes below).

Example:

gluster volume set &lt;volname&gt; transport.listen-backlog 1024

Note: 1. The brick has to be restarted for this value to take effect.
      2. This change is not reflected in glusterd or other xlators
         which call listen. If you need it there, you have to add
         this option to the volfile.
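
A minimal standalone sketch (plain sockets, not the glusterfs transport
code; the function name is illustrative) showing where the backlog value
plugs in:

    #include &lt;string.h&gt;
    #include &lt;arpa/inet.h&gt;
    #include &lt;netinet/in.h&gt;
    #include &lt;sys/socket.h&gt;

    int make_listener (int port, int backlog)
    {
        struct sockaddr_in addr;
        int fd = socket (AF_INET, SOCK_STREAM, 0);

        if (fd &lt; 0)
            return -1;
        memset (&amp;addr, 0, sizeof (addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl (INADDR_ANY);
        addr.sin_port        = htons (port);
        if (bind (fd, (struct sockaddr *) &amp;addr, sizeof (addr)) != 0)
            return -1;
        /* 10 was the old hard-coded backlog; 128 is the new default */
        return listen (fd, backlog);
    }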

Change-Id: I0c5a2bbf28b5db612f9979e7560e05dd82b41477
BUG: 1456405
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17411
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</pre>
</div>
</content>
</entry>
</feed>
