<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c, branch v4.0.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: import volumes in separate synctask</title>
<updated>2018-02-21T15:35:15+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-02-08T03:39:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9b2995426ea206df9a4d8f14bbdb8e8baf73d91b'/>
<id>9b2995426ea206df9a4d8f14bbdb8e8baf73d91b</id>
<content type='text'>
With brick multiplexing, attaching a brick to an existing brick process
requires the compatible brick to finish its initialization and portmap
sign-in first, so the thread may have to sleep and context-switch the
synctask to allow the brick process to communicate with glusterd. In
the normal code path this works fine, as glusterd_restart_bricks () is
launched through a separate synctask.

When there is a volume mismatch while glusterd restarts,
glusterd_import_friend_volume is invoked and in turn calls
glusterd_start_bricks () from the main thread, which may land in the
same situation. Since this is not done through a separate synctask, the
1st brick never gets its turn to finish all of its handshaking and, as
a consequence, all the bricks fail to get attached to it.

Solution: Execute import volume and glusterd restart bricks in a
separate synctask. Importing snaps also had to be done through a
synctask, as the parent volume needs to be available for the snap
import to work.

&gt;mainline patch : https://review.gluster.org/#/c/19357/
                  https://review.gluster.org/#/c/19536/
                  https://review.gluster.org/#/c/19539/

Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
BUG: 1543706
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit cb0339f9229fc5c05d7ef4cfcc4ca9c4569f3755)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With brick multiplexing, attaching a brick to an existing brick process
requires the compatible brick to finish its initialization and portmap
sign-in first, so the thread may have to sleep and context-switch the
synctask to allow the brick process to communicate with glusterd. In
the normal code path this works fine, as glusterd_restart_bricks () is
launched through a separate synctask.

When there is a volume mismatch while glusterd restarts,
glusterd_import_friend_volume is invoked and in turn calls
glusterd_start_bricks () from the main thread, which may land in the
same situation. Since this is not done through a separate synctask, the
1st brick never gets its turn to finish all of its handshaking and, as
a consequence, all the bricks fail to get attached to it.

Solution: Execute import volume and glusterd restart bricks in a
separate synctask. Importing snaps also had to be done through a
synctask, as the parent volume needs to be available for the snap
import to work.

&gt;mainline patch : https://review.gluster.org/#/c/19357/
                  https://review.gluster.org/#/c/19536/
                  https://review.gluster.org/#/c/19539/

Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
BUG: 1543706
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit cb0339f9229fc5c05d7ef4cfcc4ca9c4569f3755)
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: fix several coverity issues in glusterd-snapshot.c</title>
<updated>2017-12-21T04:30:28+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-12-20T08:00:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4a06f851dcad6bdd730f3d2e12bd8f26709f27fe'/>
<id>4a06f851dcad6bdd730f3d2e12bd8f26709f27fe</id>
<content type='text'>
This patch fixes issues 157, 426, 428, 431, 432, 437, 439, 482 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-12-13-e255385a/html/

Change-Id: Iff9df12bd9802db29434155badb1beda045aba5b
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch fixes issues 157, 426, 428, 431, 432, 437, 439, 482 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-12-13-e255385a/html/

Change-Id: Iff9df12bd9802db29434155badb1beda045aba5b
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Fix several coverity issues in glusterd-snapshot-utils.c</title>
<updated>2017-12-18T15:35:23+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-12-13T12:56:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=42b8df4704c93b35c8b536074df87065ca8eb5c4'/>
<id>42b8df4704c93b35c8b536074df87065ca8eb5c4</id>
<content type='text'>
This patch fixes issues 622, 627, 630, 484, 32, 33 and 34 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

Change-Id: I4c7ac2b2725474d73643367b38f8bf33eaddd8da
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch fixes issues 622, 627, 630, 484, 32, 33 and 34 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

Change-Id: I4c7ac2b2725474d73643367b38f8bf33eaddd8da
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: fix coverity issue 'UNREACHABLE'</title>
<updated>2017-11-09T11:31:40+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-11-03T11:24:55+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9eea263043e86ebd79ae72cc79ce4c22f336f958'/>
<id>9eea263043e86ebd79ae72cc79ce4c22f336f958</id>
<content type='text'>
Problem: In glusterd-snapshot-utils.c:2778 the code path is unreachable, as
         it was intended for n-way replication snapshot support.

Fix: Remove the code for now; this change will be reverted once n-way
     replication is supported in snapshot.

This patch fixes coverity issue 687 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

Change-Id: I163f318e62a7e84d211d9930dedee6587d37b2a0
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: In glusterd-snapshot-utils.c:2778 the code path is unreachable, as
         it was intended for n-way replication snapshot support.

Fix: Remove the code for now; this change will be reverted once n-way
     replication is supported in snapshot.

This patch fixes coverity issue 687 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

Change-Id: I163f318e62a7e84d211d9930dedee6587d37b2a0
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: make gf_boolean_t a C99 bool instead of an enum</title>
<updated>2017-11-03T05:04:11+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@fb.com</email>
</author>
<published>2017-10-31T16:56:11+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8e973d3ab96d290a32ae3fdbdd1cf867b7060483'/>
<id>8e973d3ab96d290a32ae3fdbdd1cf867b7060483</id>
<content type='text'>
This reduces the space used from four bytes to one, and allows
new code to use familiar C99 types/values interoperably with our
old cruft. It does *not* change current declarations or code;
that will be left for a separate - much larger - patch.

Updates: #80

Change-Id: I5baedd17d3fb05b38f0d8b8bb9dd62824475842e
Signed-off-by: Jeff Darcy &lt;jdarcy@fb.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This reduces the space used from four bytes to one, and allows
new code to use familiar C99 types/values interoperably with our
old cruft. It does *not* change current declarations or code;
that will be left for a separate - much larger - patch.

Updates: #80

Change-Id: I5baedd17d3fb05b38f0d8b8bb9dd62824475842e
Signed-off-by: Jeff Darcy &lt;jdarcy@fb.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: persist brickinfo's port change into glusterd's store</title>
<updated>2017-10-31T04:34:10+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-27T10:34:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=15fe99995b8650a677b097028bc14d61a5dd5e1b'/>
<id>15fe99995b8650a677b097028bc14d61a5dd5e1b</id>
<content type='text'>
Problem:
Consider a case where a node reboot is performed and, prior to the
reboot, the brick was listening on port 49153. Post reboot, glusterd
assigned 49152 to the brick and started the brick process, but the new
port was never persisted. Now whenever glusterd restarts, it always
reads the port from its persisted store, i.e. 49153, although pmap
sign-in happens with the correct port, i.e. 49152.

Fix:
Make sure when glusterd_brick_start is called, glusterd_store_volinfo is
eventually invoked.

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1506589
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Consider a case where a node reboot is performed and, prior to the
reboot, the brick was listening on port 49153. Post reboot, glusterd
assigned 49152 to the brick and started the brick process, but the new
port was never persisted. Now whenever glusterd restarts, it always
reads the port from its persisted store, i.e. 49153, although pmap
sign-in happens with the correct port, i.e. 49152.

Fix:
Make sure when glusterd_brick_start is called, glusterd_store_volinfo is
eventually invoked.

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1506589
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Issue with other processes accessing the mounted brick</title>
<updated>2017-10-23T10:05:02+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-08-16T08:34:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2b3b3edee2d849b4aee314048987dc995d9679a1'/>
<id>2b3b3edee2d849b4aee314048987dc995d9679a1</id>
<content type='text'>
Added code to unmount an activated snapshot brick during the snapshot
deactivation process, which makes sense as the mount point for
deactivated bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init to mount only those snapshots
whose status is either STARTED or RESTORED.

During snapshot restore, the mount point for a stopped snap should
exist, as it is required to set the extended attribute.

During handshake, after getting updates from a friend, the mount point
for an activated snapshot should exist and should not for a deactivated
snapshot.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status'
command will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Added code to unmount an activated snapshot brick during the snapshot
deactivation process, which makes sense as the mount point for
deactivated bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init to mount only those snapshots
whose status is either STARTED or RESTORED.

During snapshot restore, the mount point for a stopped snap should
exist, as it is required to set the extended attribute.

During handshake, after getting updates from a friend, the mount point
for an activated snapshot should exist and should not for a deactivated
snapshot.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status'
command will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot: Buffer Size Warning</title>
<updated>2017-10-03T13:28:01+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2017-09-22T12:47:03+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fe5fc91ab9b74b525d4979e09ce13766b3180381'/>
<id>fe5fc91ab9b74b525d4979e09ce13766b3180381</id>
<content type='text'>
new_brickinfo-&gt;mnt_opts is allocated using calloc, so it is already
zeroed out at allocation and we need not add a null character at the
end. We pass one less than the maximum size to strncpy to make sure
the destination string stays terminated.

Change-Id: I463dddd2171fb39a509bb75ffcc074d5b1cf7d62
BUG: 789278
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
new_brickinfo-&gt;mnt_opts is allocated using calloc, so it is already
zeroed out at allocation and we need not add a null character at the
end. We pass one less than the maximum size to strncpy to make sure
the destination string stays terminated.

Change-Id: I463dddd2171fb39a509bb75ffcc074d5b1cf7d62
BUG: 789278
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapview-server : Refresh the snapshot list during each reconnect</title>
<updated>2017-05-08T05:49:49+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2017-05-04T15:26:43+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=21115ae8b80c1ae0afe8427423ca5ecde40f0027'/>
<id>21115ae8b80c1ae0afe8427423ca5ecde40f0027</id>
<content type='text'>
Currently we refresh the snapshot list either when there is a request
from glusterd or at the very first initialization. But if anything
changes while glusterd is down, there is no mechanism to refresh the
snapshot dentries.

This patch refreshes the snapshot list during each reconnect.

Change-Id: I3ed655572d777f60d57dd479d190f75553591267
BUG: 1448150
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17178
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently we refresh the snapshot list either when there is a request
from glusterd or at the very first initialization. But if anything
changes while glusterd is down, there is no mechanism to refresh the
snapshot dentries.

This patch refreshes the snapshot list during each reconnect.

Change-Id: I3ed655572d777f60d57dd479d190f75553591267
BUG: 1448150
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17178
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Fix snapshot failure in non-root geo-rep setup</title>
<updated>2017-04-18T09:39:46+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2017-04-17T12:39:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cc839523364e47dea715cd7241772cd68f05f76c'/>
<id>cc839523364e47dea715cd7241772cd68f05f76c</id>
<content type='text'>
Geo-replication session directory name has the form
'&lt;mastervol&gt;_&lt;slavehost&gt;_&lt;slavevol&gt;'. But in non-root
geo-replication setup, while preparing geo-replication
session directory name, glusterd is including 'user@'
resulting in "&lt;mastervol&gt;_&lt;user@slavehost&gt;_&lt;slavevol&gt;".
Hence snapshot is failing to copy geo-rep specific
session files. Fixing the same.

Change-Id: Id214d3186e40997d2827a0bb60d3676ca2552df7
BUG: 1442760
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17067
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Geo-replication session directory name has the form
'&lt;mastervol&gt;_&lt;slavehost&gt;_&lt;slavevol&gt;'. But in non-root
geo-replication setup, while preparing geo-replication
session directory name, glusterd is including 'user@'
resulting in "&lt;mastervol&gt;_&lt;user@slavehost&gt;_&lt;slavevol&gt;".
Hence snapshot is failing to copy geo-rep specific
session files. Fixing the same.

Change-Id: Id214d3186e40997d2827a0bb60d3676ca2552df7
BUG: 1442760
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17067
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
