<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd, branch v3.13.0beta</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: fix brick restart parallelism</title>
<updated>2017-11-01T03:41:36+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-26T08:56:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=82be66ef8e9e3127d41a4c843daf74c1d8aec4aa'/>
<id>82be66ef8e9e3127d41a4c843daf74c1d8aec4aa</id>
<content type='text'>
glusterd's brick restart logic is not always sequential, as there are
at least three different ways in which bricks get restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process more than once, because two threads could hit
glusterd_brick_start () within a fraction of a millisecond; since the
brick start criteria were met in both cases, glusterd had no basis for
rejecting either of them.

As a solution, this is now controlled by two different guards: a
boolean called start_triggered, which indicates that a brick start has
been triggered and stays true until the brick dies or is killed, and a
mutex lock which ensures that, for a particular brick, we do not enter
glusterd_brick_start () more than once at the same point in time.
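
As a rough illustration only (the struct, field and helper names below
are hypothetical, not the actual glusterd symbols), the guard boils
down to a check-and-set performed under a per-brick mutex:

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;

    struct brick {
            bool            start_triggered;  /* stays true until the brick dies */
            pthread_mutex_t start_lock;       /* serialises concurrent starters  */
    };

    int do_brick_start (struct brick *b);     /* hypothetical real start routine */

    /* Only the first caller actually starts the brick; later callers find
     * start_triggered already set and return without doing anything. */
    int
    brick_start_once (struct brick *b)
    {
            int ret = 0;

            pthread_mutex_lock (&amp;b-&gt;start_lock);
            if (!b-&gt;start_triggered) {
                    ret = do_brick_start (b);
                    if (ret == 0)
                            b-&gt;start_triggered = true;
            }
            pthread_mutex_unlock (&amp;b-&gt;start_lock);

            return ret;
    }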

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1506513
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
glusterd's brick restart logic is not always sequential, as there are
at least three different ways in which bricks get restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process more than once, because two threads could hit
glusterd_brick_start () within a fraction of a millisecond; since the
brick start criteria were met in both cases, glusterd had no basis for
rejecting either of them.

As a solution, this is now controlled by two different guards: a
boolean called start_triggered, which indicates that a brick start has
been triggered and stays true until the brick dies or is killed, and a
mutex lock which ensures that, for a particular brick, we do not enter
glusterd_brick_start () more than once at the same point in time.

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1506513
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: fix coverity issue 'DEADCODE'</title>
<updated>2017-10-31T12:43:22+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-10-31T05:54:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fd397e59d8856e4d5f1ccbec1baaedb71623951e'/>
<id>fd397e59d8856e4d5f1ccbec1baaedb71623951e</id>
<content type='text'>
Problem : Unreachable code at glusterd-snapshot.c:6718
          Unreachable code at glusterd-snapshot.c:7352

Fix : Remove the unreachable code.

At glusterd-snapshot.c:6718 the if condition requires the value of "snap"
to be NULL for glusterd_snap_remove() to be called, which is not possible here.

Change-Id: Id865bde7c1474a9b9ed11c0ed614676b4e2443c6
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem : Unreachable code at glusterd-snapshot.c:6718
          Unreachable code at glusterd-snapshot.c:7352

Fix : Remove the unreachable code.

At glusterd-snapshot.c:6718 the if condition requires the value of "snap"
to be NULL for glusterd_snap_remove() to be called, which is not possible here.

Change-Id: Id865bde7c1474a9b9ed11c0ed614676b4e2443c6
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: delete source brick only once in reset-brick commit force</title>
<updated>2017-10-31T11:14:18+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-30T10:25:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0fb8acaa6ff80c43e46deac0ce66b29ae0df0ca4'/>
<id>0fb8acaa6ff80c43e46deac0ce66b29ae0df0ca4</id>
<content type='text'>
While stopping the brick which is to be reset and replaced, the
delete_brick flag was passed as true, which caused glusterd to free up
the source brick before the actual operation. As a result, commit force
failed because it could not find the source brickinfo.

Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
BUG: 1507466
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
While stopping the brick which is to be reset and replaced, the
delete_brick flag was passed as true, which caused glusterd to free up
the source brick before the actual operation. As a result, commit force
failed because it could not find the source brickinfo.

Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
BUG: 1507466
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: clean up portmap on brick disconnect</title>
<updated>2017-10-31T04:36:44+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-17T16:02:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=30e0b86aae00430823f2523c6efa3c4ebbf0a478'/>
<id>30e0b86aae00430823f2523c6efa3c4ebbf0a478</id>
<content type='text'>
GlusterD's portmap entry for a brick is cleaned up when a PMAP_SIGNOUT event is
initiated by the brick process at shutdown. But if the brick process crashes
or gets killed through SIGKILL, this event is never initiated and glusterd
ends up with a stale port. Since GlusterD's portmap traversal happens both ways,
forward for allocation and backward for registry search, glusterd might keep
running with a stale port for a brick, which eventually causes clients to
fail to connect to the bricks.

The solution is to clean up the port entry as part of the brick disconnect
event when the process is down. Although this makes handling of the
PMAP_SIGNOUT event redundant in most cases, it acts as a safeguard
against glusterd running into stale port issues.
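
A minimal sketch of the idea, with hypothetical helper names (the real
handler lives in glusterd's RPC notification path, not shown here):

    struct brick { int port; const char *path; };

    int  brick_process_alive (struct brick *b);           /* hypothetical */
    void pmap_remove_entry (int port, const char *path);  /* hypothetical */

    void
    on_brick_disconnect (struct brick *b)
    {
            /* A crashed or SIGKILLed brick never sends PMAP_SIGNOUT, so the
             * disconnect path drops the port entry itself once the process
             * is confirmed gone, keeping the portmap free of stale ports. */
            if (!brick_process_alive (b))
                    pmap_remove_entry (b-&gt;port, b-&gt;path);
    }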

Change-Id: I04c5be6d11e772ee4de16caf56dbb37d5c944303
BUG: 1503246
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
GlusterD's portmap entry for a brick is cleaned up when a PMAP_SIGNOUT event is
initiated by the brick process at shutdown. But if the brick process crashes
or gets killed through SIGKILL, this event is never initiated and glusterd
ends up with a stale port. Since GlusterD's portmap traversal happens both ways,
forward for allocation and backward for registry search, glusterd might keep
running with a stale port for a brick, which eventually causes clients to
fail to connect to the bricks.

The solution is to clean up the port entry as part of the brick disconnect
event when the process is down. Although this makes handling of the
PMAP_SIGNOUT event redundant in most cases, it acts as a safeguard
against glusterd running into stale port issues.

Change-Id: I04c5be6d11e772ee4de16caf56dbb37d5c944303
BUG: 1503246
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: persist brickinfo's port change into glusterd's store</title>
<updated>2017-10-31T04:34:10+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-27T10:34:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=15fe99995b8650a677b097028bc14d61a5dd5e1b'/>
<id>15fe99995b8650a677b097028bc14d61a5dd5e1b</id>
<content type='text'>
Problem:
Consider a case where a node reboot is performed and, prior to the reboot,
the brick was listening on port 49153. Post reboot, glusterd assigned 49152
to the brick and started the brick process, but the new port was never
persisted. Now whenever glusterd restarts, it always reads the port
from its persisted store, i.e. 49153, whereas the pmap signin happens with
the correct port, i.e. 49152.

Fix:
Make sure that whenever glusterd_brick_start is called, glusterd_store_volinfo
is eventually invoked.
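
A hedged sketch of the intended ordering, using hypothetical wrappers
rather than the real glusterd signatures:

    struct volinfo;
    struct brickinfo;

    int start_brick (struct volinfo *vol, struct brickinfo *brick);  /* hypothetical */
    int store_volinfo (struct volinfo *vol);                         /* hypothetical */

    /* Whenever a brick is (re)started and may have been handed a new port,
     * persist the updated volume info right away so that a later glusterd
     * restart reads the port that was actually signed in. */
    int
    start_brick_and_persist (struct volinfo *vol, struct brickinfo *brick)
    {
            int ret = start_brick (vol, brick);   /* may assign a new port */

            if (ret != 0)
                    return ret;

            return store_volinfo (vol);           /* write the new port to disk */
    }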

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1506589
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Consider a case where a node reboot is performed and, prior to the reboot,
the brick was listening on port 49153. Post reboot, glusterd assigned 49152
to the brick and started the brick process, but the new port was never
persisted. Now whenever glusterd restarts, it always reads the port
from its persisted store, i.e. 49153, whereas the pmap signin happens with
the correct port, i.e. 49152.

Fix:
Make sure that whenever glusterd_brick_start is called, glusterd_store_volinfo
is eventually invoked.

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1506589
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Make sure attach_brick return a proper error code</title>
<updated>2017-10-26T14:30:40+00:00</updated>
<author>
<name>Michael Scherer</name>
<email>misc@redhat.com</email>
</author>
<published>2017-10-25T12:36:35+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0bb928264a9fb100dc927687eed6ad4d22675950'/>
<id>0bb928264a9fb100dc927687eed6ad4d22675950</id>
<content type='text'>
Coverity warns about "Using uninitialized value "ret".", which is
indeed the case. If we can't attach after 15 tries, attach_brick
returns an uninitialized return code, which is likely hard to trigger,
but bad.

I use -1 to follow the UNIX convention, but no specific error code is
tested by the calling code.
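
In outline, the fix amounts to the following pattern (names here are
illustrative, not the actual attach_brick code):

    int try_attach (void);   /* hypothetical single attach attempt */

    int
    attach_with_retries (int max_tries)
    {
            int ret = -1;   /* previously left uninitialised on total failure */
            int tries;

            for (tries = 0; tries &lt; max_tries; tries++) {
                    if (try_attach () == 0) {
                            ret = 0;
                            break;
                    }
                    /* wait and retry; ret is deliberately left at -1 here */
            }

            return ret;     /* still -1 if every attempt failed */
    }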

Change-Id: I43ad60d25ab169f9cea0db600805ced7f77c37ba
BUG: 789278
Signed-off-by: Michael Scherer &lt;misc@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Coverity warns about "Using uninitialized value "ret".", which is
indeed the case. If we can't attach after 15 tries, attach_brick
returns an uninitialized return code, which is likely hard to trigger,
but bad.

I use -1 to follow the UNIX convention, but no specific error code is
tested by the calling code.

Change-Id: I43ad60d25ab169f9cea0db600805ced7f77c37ba
BUG: 789278
Signed-off-by: Michael Scherer &lt;misc@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: Allow parallel writes in EC if possible</title>
<updated>2017-10-24T09:30:25+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2017-06-25T11:04:01+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a8e16c81e3d14b577ac738574e19a16543df419e'/>
<id>a8e16c81e3d14b577ac738574e19a16543df419e</id>
<content type='text'>
Problem:
EC at the moment sends one modification fop after another, so if some of
the disks become slow for a while, the wait time for the writes that
are waiting in the queue becomes really bad.

Fix:
Allow parallel writes when possible. For this we need to make 3 changes:
1) Each fop now has range parameters for the region it will be updating.
2) Xattrop is changed to handle parallel xattrop requests, where some
   would be modifying just the dirty xattr.
3) Fops that refer to size now take locks and update the locks.

Fixes #251
Change-Id: Ibc3c15372f91bbd6fb617f0d99399b3149fa64b2
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
EC at the moment sends one modification fop after another, so if some of
the disks become slow for a while, the wait time for the writes that
are waiting in the queue becomes really bad.

Fix:
Allow parallel writes when possible. For this we need to make 3 changes:
1) Each fop now has range parameters for the region it will be updating.
2) Xattrop is changed to handle parallel xattrop requests, where some
   would be modifying just the dirty xattr.
3) Fops that refer to size now take locks and update the locks.

Fixes #251
Change-Id: Ibc3c15372f91bbd6fb617f0d99399b3149fa64b2
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>gluster: IPv6 single stack support</title>
<updated>2017-10-24T09:28:08+00:00</updated>
<author>
<name>Kevin Vigor</name>
<email>kvigor@fb.com</email>
</author>
<published>2017-04-28T23:44:29+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1260ee53b1674234e6f083563bdcd258e46a6faa'/>
<id>1260ee53b1674234e6f083563bdcd258e46a6faa</id>
<content type='text'>
Summary:
- This diff changes all locations in the code to prefer the inet6 family
  instead of inet.  This allows GlusterFS to operate
  via IPv6 instead of IPv4 for all internal operations while still
  being able to serve (FUSE or NFS) clients via IPv4.
- The changes apply to NFS as well.
- This diff ports D1892990, D1897341 &amp; D1896522 to the 3.8 branch.
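
As an illustration only (this is a generic resolver sketch, not the
GlusterFS transport code), "prefer the inet6 family" essentially means
asking for AF_INET6 results first and falling back to AF_INET:

    #include &lt;string.h&gt;
    #include &lt;sys/types.h&gt;
    #include &lt;sys/socket.h&gt;
    #include &lt;netdb.h&gt;

    int
    resolve_prefer_inet6 (const char *host, const char *port,
                          struct addrinfo **out)
    {
            struct addrinfo hints;

            memset (&amp;hints, 0, sizeof (hints));
            hints.ai_socktype = SOCK_STREAM;

            hints.ai_family = AF_INET6;               /* try IPv6 first */
            if (getaddrinfo (host, port, &amp;hints, out) == 0)
                    return 0;

            hints.ai_family = AF_INET;                /* fall back to IPv4 */
            return getaddrinfo (host, port, &amp;hints, out);
    }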

Test Plan: Prove tests!

Reviewers: dph, rwareing

Signed-off-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;

Change-Id: I34fdaaeb33c194782255625e00616faf75d60c33
BUG: 1406898
Reviewed-on-3.8-fb: http://review.gluster.org/16059
Reviewed-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;
Tested-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Summary:
- This diff changes all locations in the code to prefer the inet6 family
  instead of inet.  This allows GlusterFS to operate
  via IPv6 instead of IPv4 for all internal operations while still
  being able to serve (FUSE or NFS) clients via IPv4.
- The changes apply to NFS as well.
- This diff ports D1892990, D1897341 &amp; D1896522 to the 3.8 branch.

Test Plan: Prove tests!

Reviewers: dph, rwareing

Signed-off-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;

Change-Id: I34fdaaeb33c194782255625e00616faf75d60c33
BUG: 1406898
Reviewed-on-3.8-fb: http://review.gluster.org/16059
Reviewed-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;
Tested-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Issue with other processes accessing the mounted brick</title>
<updated>2017-10-23T10:05:02+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-08-16T08:34:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2b3b3edee2d849b4aee314048987dc995d9679a1'/>
<id>2b3b3edee2d849b4aee314048987dc995d9679a1</id>
<content type='text'>
Added code to unmount an activated snapshot brick during the snapshot
deactivation process, which makes sense as a mount point for deactivated
bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init for mounting only those snapshots
whose status is either STARTED or RESTORED.

During snapshot restore, the mount point for a stopped snap should exist,
as it is required to set an extended attribute.

During handshake, after getting updates from a friend, the mount point
should exist for an activated snapshot and should not exist for a
deactivated snapshot.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status' command
will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Added code to unmount an activated snapshot brick during the snapshot
deactivation process, which makes sense as a mount point for deactivated
bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init for mounting only those snapshots
whose status is either STARTED or RESTORED.

During snapshot restore, the mount point for a stopped snap should exist,
as it is required to set an extended attribute.

During handshake, after getting updates from a friend, the mount point
should exist for an activated snapshot and should not exist for a
deactivated snapshot.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status' command
will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: documenting server.allow-insecure</title>
<updated>2017-10-18T14:23:46+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2017-10-18T06:10:59+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c0b08f10ed07bfe06309e31a7fff85cadb733ce2'/>
<id>c0b08f10ed07bfe06309e31a7fff85cadb733ce2</id>
<content type='text'>
problem: "server.allow-insecure" is invisible in gluster volume set
help.

Fix: "server.allow-insecure" is defined as NO_DOC type, chainging
it to DOC type solve the problem.
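
A minimal mock of the change (struct and field names are simplified;
the real entry lives in glusterd's volume option map):

    enum opt_doc_type { NO_DOC, DOC };

    struct volopt_map_entry_mock {
            const char        *key;
            enum opt_doc_type  type;  /* DOC entries appear in "volume set help" */
    };

    struct volopt_map_entry_mock allow_insecure = {
            .key  = "server.allow-insecure",
            .type = DOC,              /* was NO_DOC before this change */
    };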

Change-Id: I327f1e4c1684ff846deb8b7df07d4d8a09073274
BUG: 1503424
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
problem: "server.allow-insecure" is invisible in gluster volume set
help.

Fix: "server.allow-insecure" is defined as NO_DOC type, chainging
it to DOC type solve the problem.

Change-Id: I327f1e4c1684ff846deb8b7df07d4d8a09073274
BUG: 1503424
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
