<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/glusterfsd/src/glusterfsd.c, branch v3.12.5</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>mount/fuse: Make event-history feature configurable</title>
<updated>2017-10-05T12:02:42+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2017-09-07T13:18:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=761942e9fe8f6d7bbd5c56720e52dc4f6663cd9f'/>
<id>761942e9fe8f6d7bbd5c56720e52dc4f6663cd9f</id>
<content type='text'>
... and disable it by default.

        Backport of:
        &gt; Change-Id: Ia533788d309c78688a315dc8cd04d30fad9e9485
        &gt; Reviewed-on: https://review.gluster.org/18242
        &gt; BUG: 1467614
        &gt; cherry-picked from commit 956d43d6e89d40ee683547003b876f1f456f03b6

This is because having it disabled seems to improve performance, likely
due to lock contention between the different epoll threads on the
circular buffer lock in the fop callbacks just before they write their
response to /dev/fuse.
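
To illustrate the suspected contention (a minimal sketch with made-up
names, not the actual fuse xlator code):

#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;

#define EH_SIZE 1024    /* hypothetical history depth */

struct event_history {
        pthread_mutex_t lock;
        int             head;
        const char     *events[EH_SIZE];
};

static struct event_history eh = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* every epoll thread runs this in its fop cbk before writing the
 * response to /dev/fuse, so all of them serialize on eh.lock */
static void
eh_record (const char *ev)
{
        pthread_mutex_lock (&amp;eh.lock);
        eh.events[eh.head++ % EH_SIZE] = ev;
        pthread_mutex_unlock (&amp;eh.lock);
}

int
main (void)
{
        eh_record ("READ cbk");
        printf ("recorded %d event(s)\n", eh.head);
        return 0;
}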

Just to provide some data - wrt ovirt-gluster hyperconverged
environment, I saw an increase in IOPs by 12K with event-history
disabled for randrom read workload.

Usage:
mount -t glusterfs -o event-history=on $HOSTNAME:$VOLNAME $MOUNTPOINT
OR
glusterfs --event-history=on --volfile-server=$HOSTNAME --volfile-id=$VOLNAME $MOUNTPOINT

Change-Id: Ia533788d309c78688a315dc8cd04d30fad9e9485
BUG: 1495397
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: allow subdir mount</title>
<updated>2017-08-04T13:34:21+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2017-07-19T17:38:05+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ae084046cce12a1ce707b5d141f092b4c011e1b3'/>
<id>ae084046cce12a1ce707b5d141f092b4c011e1b3</id>
<content type='text'>
Changes:

1. Take subdir mount option in client (mount.gluster / glusterfsd)
2. Pass the subdir mount to server-handshake (from client-handshake)
3. Handle the subdir-mount dir's lookup in server-first-lookup and handle
   resolution of all fops accordingly, with the proper gfid of the subdir
4. Change the auth/addr module to handle multiple subdir entries in the
   option, with valid parsing (a parsing sketch follows the auth.allow
   example below)

How to use the feature:

`# mount -t glusterfs $hostname:/$volname/$subdir /$mount_point`
Or
`# mount -t glusterfs $hostname:/$volname -osubdir_mount=$subdir /$mount_point`

The option can be set like:

`# gluster volume set &lt;volname&gt; auth.allow "/subdir1(192.168.1.*),/(192.168.10.*),/subdir2(192.168.8.*)"`
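
A minimal parsing sketch for that value format (illustrative only; the
real auth/addr code differs in detail):

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int
main (void)
{
        /* the same shape as the auth.allow value above */
        char value[] = "/subdir1(192.168.1.*),/(192.168.10.*),/subdir2(192.168.8.*)";
        char *saveptr = NULL;

        /* split on ',' into entries, then each entry into path(addr) */
        for (char *entry = strtok_r (value, ",", &amp;saveptr); entry;
             entry = strtok_r (NULL, ",", &amp;saveptr)) {
                char *paren = strchr (entry, '(');
                if (paren) {
                        *paren = '\0';
                        char *addr = paren + 1;
                        addr[strlen (addr) - 1] = '\0';  /* drop trailing ')' */
                        printf ("path=%s addr=%s\n", entry, addr);
                } else {
                        printf ("path=%s addr=*\n", entry);
                }
        }
        return 0;
}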

Updates #175

&gt; Reviewed-At: https://review.gluster.org/17141/

Change-Id: I7ea57f76ddbe6c3862cfe02e13f89e8a39719e11
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17968
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>logging: localtime logging, cmdline, volume set option</title>
<updated>2017-08-03T12:04:50+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-07-24T12:18:47+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=498164fa45309bd314ff2fcc282b140edf72bc22'/>
<id>498164fa45309bd314ff2fcc282b140edf72bc22</id>
<content type='text'>
Although appliances generally use UTC, some users really want log
entries in localtime.
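
Usage (a sketch; the option spellings below follow this feature's
naming but are assumptions, not quoted from the patch):

glusterfs --localtime-logging --volfile-server=$HOSTNAME --volfile-id=$VOLNAME $MOUNTPOINT
gluster volume set $VOLNAME cluster.localtime-logging enable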

fixes gluster/glusterfs#272

feature page: https://review.gluster.org/17807

Backport from master https://review.gluster.org/#/c/16911/

Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17928
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()</title>
<updated>2017-07-19T20:18:24+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-07-14T16:35:10+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1e8e6264033669332d4cfa117faf678d7631a7b1'/>
<id>1e8e6264033669332d4cfa117faf678d7631a7b1</id>
<content type='text'>
It is not possible to call pthread_key_delete for the pool_key that is
initialized in the constructor for the memory pools. This makes it
difficult to do a full cleanup of all the resources in mem_pools_fini().
For this, the initialization of pool_key should be moved to
mem_pools_init().

However, the glusterfsd binary has a rather complex initialization
procedure. The memory pools need to get initialized partially to get
mem_get() functionality working. However, the pool_sweeper thread can get
killed if it is started before glusterfsd daemonizes.

In order to solve this, mem_pools_init() is split into two pieces:
1. mem_pools_init_early() for initializing the basic structures
2. mem_pools_init_late() to start the pool_sweeper thread

With the split of mem_pools_init(), and placing the pthread_key_create()
in mem_pools_init_early(), it is now possible to correctly clean up the
pool_key with pthread_key_delete() in mem_pools_fini().
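
A minimal sketch of the resulting lifecycle (the function names match
the description above; the bodies here are illustrative only):

#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;

static pthread_key_t pool_key;

void
mem_pools_init_early (void)
{
        /* basic structures only; mem_get() works after this */
        pthread_key_create (&amp;pool_key, NULL);
}

void
mem_pools_init_late (void)
{
        /* start the pool_sweeper thread here, after the daemonization,
         * so that it cannot get killed by it */
}

void
mem_pools_fini (void)
{
        /* a full cleanup is now possible */
        pthread_key_delete (pool_key);
}

int
main (void)
{
        mem_pools_init_early ();
        /* ... glusterfsd daemonizes here ... */
        mem_pools_init_late ();
        mem_pools_fini ();
        printf ("clean shutdown\n");
        return 0;
}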

It seems that there was no memory pool initialization in the CLI. This
has been added as well now. Without it, the CLI is not able to call
mem_get() successfully, which results in a hang of the process.

Change-Id: I1de0153dfe600fd79eac7468cc070e4bd35e71dd
BUG: 1470170
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17779
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>libglusterfs: Name threads on creation</title>
<updated>2017-07-19T14:16:19+00:00</updated>
<author>
<name>Raghavendra Talur</name>
<email>rtalur@redhat.com</email>
</author>
<published>2017-07-18T06:06:19+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=33db9aff1deaa028f30516e49fdb1e8d6e31bb73'/>
<id>33db9aff1deaa028f30516e49fdb1e8d6e31bb73</id>
<content type='text'>
Name threads on creation for easier debugging.

Output of top -H -p &lt;PID-OF-GLUSTERFSD&gt;
Before:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd

After:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustertimer
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustermemsweep
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc0
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc1
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll0
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteridxwrker
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteriotwr0
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrssign
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrswrker
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterclogecon
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd0
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd1
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd2
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixjan
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixfsy
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll1
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll2
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixhc
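
The mechanism, roughly (a sketch using the raw pthread API; glusterfs
routes this through its own thread-creation wrapper):

#define _GNU_SOURCE
#include &lt;pthread.h&gt;
#include &lt;unistd.h&gt;

static void *
sweeper (void *arg)
{
        (void)arg;
        sleep (1);
        return NULL;
}

int
main (void)
{
        pthread_t t;

        pthread_create (&amp;t, NULL, sweeper, NULL);
        /* Linux limits thread names to 15 chars + NUL, which is why the
         * names above are abbreviated (e.g. glustermemsweep, 15 chars) */
        pthread_setname_np (t, "glustermemsweep");
        pthread_join (t, NULL);
        return 0;
}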

Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
BUG: 1254002
Updates: #271
Signed-off-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-on: https://review.gluster.org/11926
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd: socketfile &amp; pidfile related fixes for brick multiplexing feature</title>
<updated>2017-05-09T01:30:01+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2017-05-08T13:59:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=21c7f7baccfaf644805e63682e5a7d2a9864a1e6'/>
<id>21c7f7baccfaf644805e63682e5a7d2a9864a1e6</id>
<content type='text'>
Problem: While brick multiplexing is on, after restarting glusterd the CLI
         does not show the pid of all brick processes in all volumes.

Solution: While brick-mux is on, all local brick processes communicate through
          one UNIX socket, but as per the current code (glusterd_brick_start)
          glusterd tries to communicate with a separate UNIX socket for each
          volume, populated based on brick-name and vol-name. Because of the
          multiplexing design only one UNIX socket is opened, so it throws a
          poller error and cannot fetch the correct status of the brick process
          through the cli process. To resolve the problem, write a new function
          glusterd_set_socket_filepath_for_mux that will be called by
          glusterd_brick_start to validate the existence of the socketpath.
          To avoid the continuous EPOLLERR errors in the logs, update the
          socket_connect code.

Test:     To reproduce the issue, follow the steps below (the same steps as
          commands appear after this list):
          1) Create two distributed volumes (dist1 and dist2)
          2) Set cluster.brick-multiplex to on
          3) Kill glusterd
          4) Run gluster v status
          After applying the patch it shows the correct pid for all volumes
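
The steps as commands (a sketch; the brick paths and the restart of
glusterd are assumptions for illustration):

gluster volume create dist1 $HOSTNAME:/bricks/d1a $HOSTNAME:/bricks/d1b
gluster volume create dist2 $HOSTNAME:/bricks/d2a $HOSTNAME:/bricks/d2b
gluster volume set all cluster.brick-multiplex on
gluster volume start dist1
gluster volume start dist2
pkill glusterd; glusterd     # restart so the cli can query again
gluster v status             # should now show a pid for every brick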

BUG: 1444596
Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17101
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: cleanup pidfile on pmap signout</title>
<updated>2017-05-08T13:13:41+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-05-03T06:47:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3d35e21ffb15713237116d85711e9cd1dda1688a'/>
<id>3d35e21ffb15713237116d85711e9cd1dda1688a</id>
<content type='text'>
This patch ensures that
1. the brick pidfile is cleaned up on pmap signout
2. a pmap signout event is sent for all the bricks when a brick process
shuts down.

Change-Id: I7606a60775b484651d4b9743b6037b40323931a2
BUG: 1444596
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17168
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>core: make the per glusterfs_ctx_t timer-wheel refcounted</title>
<updated>2017-05-01T09:30:14+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-04-17T10:20:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=73fcf3a874b2049da31d01b8363d1ac85c9488c2'/>
<id>73fcf3a874b2049da31d01b8363d1ac85c9488c2</id>
<content type='text'>
xlators can use a 'global' timer-wheel for scheduling events. This
timer-wheel is managed per glusterfs_ctx_t, but does not need to be
allocated for every graph. When an xlator wants to use the timer-wheel,
it is instantiated on demand and provided to xlators that request it
later on.

By adding a reference counter to the glusterfs_ctx_t for the
timer-wheel, the threads and structures can be cleaned up when the last
xlator no longer needs it. In general, xlators request the timer-wheel
in init(), and they should return it in fini().
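
The get/put pattern, sketched with illustrative names and types (the
real helpers live in ctx.c, per the note below):

#include &lt;pthread.h&gt;
#include &lt;stdlib.h&gt;

struct tw_ctx {
        pthread_mutex_t  lock;
        int              tw_refcount;
        void            *timer_wheel;   /* stand-in for the real type */
};

/* called from an xlator's init(): instantiate on first use */
void *
ctx_tw_get (struct tw_ctx *ctx)
{
        pthread_mutex_lock (&amp;ctx-&gt;lock);
        if (ctx-&gt;tw_refcount++ == 0)
                ctx-&gt;timer_wheel = malloc (64);   /* placeholder alloc */
        pthread_mutex_unlock (&amp;ctx-&gt;lock);
        return ctx-&gt;timer_wheel;
}

/* called from the xlator's fini(): the last user cleans up */
void
ctx_tw_put (struct tw_ctx *ctx)
{
        pthread_mutex_lock (&amp;ctx-&gt;lock);
        if (--ctx-&gt;tw_refcount == 0) {
                free (ctx-&gt;timer_wheel);
                ctx-&gt;timer_wheel = NULL;
        }
        pthread_mutex_unlock (&amp;ctx-&gt;lock);
}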

Because the timer-wheel is managed per glusterfs_ctx_t, the functions
can be added to ctx.c and do not need to live in their very minimal
tw.[ch] files.

Change-Id: I19d225b39aaa272d9005ba7adc3104c3764f1572
BUG: 1442788
Reported-by: Poornima G &lt;pgurusid@redhat.com&gt;
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17068
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Propagate EADDRINUSE correctly to parent process</title>
<updated>2017-04-13T03:49:03+00:00</updated>
<author>
<name>Prashanth Pai</name>
<email>ppai@redhat.com</email>
</author>
<published>2016-12-19T10:58:06+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=94afe2ca98a8ed9effb05901fc89d3b7bb6d0d41'/>
<id>94afe2ca98a8ed9effb05901fc89d3b7bb6d0d41</id>
<content type='text'>
exit()/_exit():
Only the least significant 8 bits, i.e. (err &amp; 255), are available
to the waiting parent process when _exit() or exit() is called with an
integer exit status. If this number is negative, the parent process
does not readily get the value it actually needs to handle.

For example, EADDRINUSE is 98; if the exit status code is set to -98,
the waiting parent process gets 158 (= -98 &amp; 255) as the exit status.
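
A self-contained demonstration of that truncation:

#include &lt;errno.h&gt;
#include &lt;stdio.h&gt;
#include &lt;sys/wait.h&gt;
#include &lt;unistd.h&gt;

int
main (void)
{
        int   status = 0;
        pid_t pid = fork ();

        if (pid == 0)
                _exit (-EADDRINUSE);   /* the buggy pattern: -98 */

        waitpid (pid, &amp;status, 0);
        /* prints 158, not 98: the errno can no longer be recovered */
        printf ("parent sees %d\n", WEXITSTATUS (status));
        /* the fix: _exit (EADDRINUSE), so the parent sees 98 */
        return 0;
}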

BUG: 1193929

Change-Id: Idc6b0f40c2332e087e584b4b40cbf0d29168c9cd
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16200
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: Clean up pmap registry up correctly on volume/brick stop</title>
<updated>2017-02-27T22:59:03+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2017-02-20T13:05:01+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1e3538baab7abc29ac329c78182b62558da56d98'/>
<id>1e3538baab7abc29ac329c78182b62558da56d98</id>
<content type='text'>
This commit changes the following:
1. In glusterfs_handle_terminate, send out individual pmap signout
requests to glusterd for every brick.
2. Add another parameter to glusterfs_mgmt_pmap_signout function to
pass the brickname that needs to be removed from the pmap registry.
3. Make sure pmap_registry_search doesn't break out from the loop
iterating over the list of bricks per port if the first brick entry
corresponding to a port is whitespaced out.
4. Make sure the pmap registry entries are removed for other
daemons like snapd.

Change-Id: I69949874435b02699e5708dab811777ccb297174
BUG: 1421590
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: https://review.gluster.org/16689
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
</feed>
