<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-messages.h, branch v3.8.5</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cli/glusterd: add/remove brick fixes for arbiter volumes</title>
<updated>2016-05-24T08:35:57+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2016-04-29T12:11:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ade1d726e035eea4540894b03c82b84304bba2ae'/>
<id>ade1d726e035eea4540894b03c82b84304bba2ae</id>
<content type='text'>
Backport of: http://review.gluster.org/14126

1. Provide a command to convert replica 2 volumes to arbiter volumes.
Existing self-heal logic will automatically heal the file hierarchy into
the arbiter brick, the progress of which can be monitored using the
heal info command.

Syntax: gluster volume add-brick &lt;VOLNAME&gt; replica 3 arbiter 1
&lt;HOST:arbiter-brick-path&gt;
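
For example (hypothetical volume name, host, and brick path):

gluster volume add-brick testvol replica 3 arbiter 1 server3:/bricks/testvol/arbiter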

2. Add checks when removing bricks from arbiter volumes (examples below):
- When converting from an arbiter to a replica 2 volume, allow only the
  arbiter brick to be removed.
- When converting from an arbiter to a plain distribute volume, allow it
  only if the arbiter is one of the bricks being removed.
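
For example (hypothetical names; the exact brick lists depend on the
volume layout):

# arbiter -&gt; replica 2: remove only the arbiter brick
gluster volume remove-brick testvol replica 2 server3:/bricks/arbiter force
# arbiter -&gt; plain distribute: the arbiter must be among the removed bricks
gluster volume remove-brick testvol replica 1 server2:/bricks/b2 server3:/bricks/arbiter force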

3. Some clean-up:
- Use GD_MSG_DICT_GET_SUCCESS instead of GD_MSG_DICT_GET_FAILED to
log messages that are not failures.
- Remove unused variable `brick_list`
- Move 'brickinfo-&gt;group' related functions to glusterd-utils.

Change-Id: Ifa75d137c67ffddde7dcb8e0df0873163e713119
BUG: 1337387
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14502
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: slave volume uuid to identify a geo-rep session</title>
<updated>2016-05-23T08:13:58+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2015-12-29T13:52:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9ace7ecc2a278ac06dd5a0744be9a85679d8ceca'/>
<id>9ace7ecc2a278ac06dd5a0744be9a85679d8ceca</id>
<content type='text'>
Problem:
Currently, it is possible to create multiple geo-rep sessions from
the Master host to Slave host(s), where the Slave host(s) belong
to the same volume.

For example:
Consider Master host M1 having volume tv1, and Slave volume tv2,
which spans two Slave hosts S1 and S2.
Currently, it is possible to create a geo-rep session from
M1(tv1) to S1(tv2) as well as from M1(tv1) to S2(tv2).
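
For instance, both of the following create commands (illustrative
invocations) would earlier have produced two distinct sessions for the
same slave volume:

gluster volume geo-replication tv1 S1::tv2 create push-pem
gluster volume geo-replication tv1 S2::tv2 create push-pem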

When only the Slave host is modified, it is identified as a new geo-rep
session (since the slave host and slave volume together identify the
Slave side).

Also, it is possible to create both root and non-root geo-rep sessions
between the same Master volume and Slave volume. This should also be avoided.

Solution:
Such duplicate geo-rep session creation must be avoided; to achieve
this, use the Slave volume uuid to identify a Slave. This way, we can
detect whether a session has already been created for the same Slave
volume and avoid creating it again (via a different host).

When session creation is forced in the above scenario, rename the
existing geo-rep session directory to use the newly mentioned Slave host.

Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
BUG: 1336704
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13111
Reviewed-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Tested-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
(cherry picked from commit a9128cda34b1f696b717ba09fa0ac5a929be8969)
Reviewed-on: http://review.gluster.org/14372
</content>
</entry>
<entry>
<title>glusterd: add defence mechanism to avoid brick port clashes</title>
<updated>2016-05-10T07:11:33+00:00</updated>
<author>
<name>Prasanna Kumar Kalever</name>
<email>prasanna.kalever@redhat.com</email>
</author>
<published>2016-04-27T13:42:19+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=610a3f5bcc9f3443da55d857b162c83d50fa3a6b'/>
<id>610a3f5bcc9f3443da55d857b162c83d50fa3a6b</id>
<content type='text'>
Intro:
Currently glusterd maintains a portmap registry of ports that are free
to use in the range 49152 - 65535. This registry is initialized once
and updated whenever glusterd sees that ports are in use.

Glusterd first looks up the portmap registry to get a port marked FREE,
then checks whether that port is currently free using connect(), and
passes it to the brick process, which has to bind on it.

Problem:
There is a time gap between glusterd checking the port with connect()
and the brick process actually binding on it. In this gap, some other
process may occupy the port, because of which the brick will fail to
bind and exit.

Case 1:
To avoid a gluster client process occupying the port supplied by glusterd,
we have separated the client port map range from the brick port map range;
see http://review.gluster.org/#/c/13998/

Case 2 (handled by this patch):
To avoid any other foreign process occupying the port supplied by glusterd,
this patch implements a mechanism to return the EADDRINUSE error code to
glusterd, upon which a new port is allocated and the brick process is
restarted with the newly allocated port.
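
A minimal sketch of the idea (illustrative only; the helper name and the
exit status convention are assumptions, not the exact implementation):

#include &lt;errno.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;sys/socket.h&gt;

static void
brick_bind_or_die (int sock, struct sockaddr *addr, socklen_t len)
{
        if (bind (sock, addr, len) != 0) {
                if (errno == EADDRINUSE)
                        /* hypothetical exit status: lets glusterd know it
                         * should allocate a fresh port and restart the
                         * brick process */
                        exit (EADDRINUSE);
                exit (EXIT_FAILURE);
        }
}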

Note: In the case of glusterd restarts, i.e. runner_run_nowait(), there is
no way to handle Case 2, because runner_run_nowait() does not wait for the
return/exit code of the executed command (the brick process). Hence, as of
now, in such a case we cannot know with what error the brick has failed.

This patch also fixes runner_end() to perform some cleanup w.r.t.
return values.

Backport of:
&gt; Change-Id: Iec52e7f5d87ce938d173f8ef16aa77fd573f2c5e
&gt; BUG: 1322805
&gt; Signed-off-by: Prasanna Kumar Kalever &lt;prasanna.kalever@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/14043
&gt; Tested-by: Prasanna Kumar Kalever &lt;pkalever@redhat.com&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;

Change-Id: Id7d8351a0082b44310177e714edc0571ad0f7195
BUG: 1333711
Signed-off-by: Prasanna Kumar Kalever &lt;prasanna.kalever@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14235
Tested-by: Prasanna Kumar Kalever &lt;pkalever@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot/quota: Copy quota.cksum during snapshot operations</title>
<updated>2016-04-22T05:51:08+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-03-11T09:57:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8f3ad1e3ede77fa5f8c8d606e18a7e83865a822c'/>
<id>8f3ad1e3ede77fa5f8c8d606e18a7e83865a822c</id>
<content type='text'>
A volume having a quota.conf file should always have a quota.cksum
file too. Based on this assumption, glusterd_copy_quota_files() is
modified to always copy quota.cksum if quota.conf is present.
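
A sketch of the intended logic (the helper copy_file() and the exact
paths are illustrative assumptions, not the actual glusterd code):

#include &lt;limits.h&gt;
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

int
copy_quota_files_sketch (const char *src_dir, const char *dst_dir)
{
        char src_conf[PATH_MAX], src_cksum[PATH_MAX];

        snprintf (src_conf, sizeof (src_conf), "%s/quota.conf", src_dir);
        snprintf (src_cksum, sizeof (src_cksum), "%s/quota.cksum", src_dir);

        if (access (src_conf, F_OK) != 0)
                return 0;        /* no quota.conf, nothing to copy */

        if (copy_file (src_conf, dst_dir) != 0)
                return -1;
        /* a quota.conf is never valid without its checksum, so always
         * carry quota.cksum along with it */
        return copy_file (src_cksum, dst_dir);
}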

This change takes effect when a snapshot is created, restored, or cloned.

Change-Id: Ia49dc26eacef32eeb8f7d7d9553c80e304b08779
BUG: 1316848
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13760
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijaikumar Mallikarjuna &lt;vmallika@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
<entry>
<title>Revert "glusterd: Allocate fresh port on brick (re)start"</title>
<updated>2016-04-14T11:20:53+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2016-04-13T09:08:11+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=401d591de168fdb648663f01c4c4f8ed60777558'/>
<id>401d591de168fdb648663f01c4c4f8ed60777558</id>
<content type='text'>
This reverts commit 34899d7

Commit 34899d7 introduced a change whereby restarting a volume or
rebooting a node results in a fresh allocation of the brick port. In
production environments, administrators generally configure the firewall
for a fixed range of ports per volume. With commit 34899d7, rebooting a
node or restarting a volume might make the volume fail to start, because
the firewall might block the freshly allocated brick port; the fresh
allocation also makes testing difficult.

Change-Id: I7a90f69e8c267a013dc906b5228ca76e819d84ad
BUG: 1322805
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13989
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Allocate fresh port on brick (re)start</title>
<updated>2016-04-01T20:38:54+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-03-31T05:31:53+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=34899d71f21fd2b4c523b68ffb2d7c655c776641'/>
<id>34899d71f21fd2b4c523b68ffb2d7c655c776641</id>
<content type='text'>
There is no point in using the same port through the entire volume life
cycle for a particular brick process, since there is no guarantee that
the same port will be free and that no other application will consume it
across a glusterd/volume restart.

We hit a race where, on glusterd restart, the daemon services start
followed by the brick processes, and by the time a brick process tries to
bind to the port that glusterd allocated before the restart, the port has
already been consumed by some other client like NFS/SHD/...

Note: This is a short-term solution; it reduces the race window but does
not eliminate it completely. As a long-term solution, the port allocation
has to be done by glusterfsd and communicated back to glusterd for
bookkeeping.

Change-Id: Ibbd1e7ca87e51a7cd9cf216b1fe58ef7783aef24
BUG: 1322805
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13865
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/tier: add pause tier for snapshots</title>
<updated>2015-10-21T22:44:35+00:00</updated>
<author>
<name>Dan Lambright</name>
<email>dlambrig@redhat.com</email>
</author>
<published>2015-10-05T19:52:02+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=36974c36fa4231df3f0e9428a9da6d1aa33348ab'/>
<id>36974c36fa4231df3f0e9428a9da6d1aa33348ab</id>
<content type='text'>
Snaps of tiered volumes cannot handle files undergoing migration. We
implement a helper mechanism to "pause" migration: any in-flight file
migrations are aborted, and clean-up is done to remove sticky bits and
data at the destination. Migration is restarted after the snap completes.

For testing, an internal switch is added. It is not exposed externally:

gluster volume set vol1 tier-pause [true|false]
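
For example, a test could pause migration before taking a snap and resume
it afterwards (an illustrative use of the internal switch):

gluster volume set vol1 tier-pause true
# ... take the snapshot ...
gluster volume set vol1 tier-pause false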

Change-Id: Ia85bbf89ac142e9b7e73fcbef98bb9da86097799
BUG: 1267950
Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12304
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: disabling enable-shared-storage option should not delete volume</title>
<updated>2015-10-14T03:45:17+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2015-09-24T12:34:23+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f88d0ade1c09aa1a3cee3713dc851da9552d7ff5'/>
<id>f88d0ade1c09aa1a3cee3713dc851da9552d7ff5</id>
<content type='text'>
Previously, if you created a volume named "glusterd_shared_storage" and a
user then disabled the enable-shared-storage option, gluster would delete
the "glusterd_shared_storage" volume.

With this fix, gluster performs the appropriate validation of the
enable-shared-storage option and does not delete a volume named
"glusterd_shared_storage" if it is a user-created volume.

Change-Id: I2bd92f938fb3de6ef496a934933bdcea9f251491
BUG: 1266818
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12232
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: stop all the daemons services on peer detach</title>
<updated>2015-08-25T05:18:55+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2015-07-02T12:53:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8e0bf30dc40fed45078c702dec750b5e8bbf5734'/>
<id>8e0bf30dc40fed45078c702dec750b5e8bbf5734</id>
<content type='text'>
Currently glusterd does not stop all the daemon services on peer detach.

With this fix, it performs the peer-detach cleanup properly and stops all
the daemons that were running on the node before the peer detach.
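
For context, a peer detach is triggered as (illustrative invocation):

gluster peer detach &lt;HOSTNAME&gt; [force]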

Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot: Log deletion of snapshot, during auto-delete</title>
<updated>2015-08-24T06:48:35+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2015-08-20T09:10:29+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=69942154e197d753580b74cc7f8e6bce470ee673'/>
<id>69942154e197d753580b74cc7f8e6bce470ee673</id>
<content type='text'>
When auto-delete is enabled and the soft-limit is reached, the creation
of a new snapshot causes the oldest snapshot of that volume to be
deleted.

Display a warning log before deleting the oldest snapshot.
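
A minimal sketch of such a log call (the message text and variables are
illustrative, not the exact ones added by this patch):

gf_log (this-&gt;name, GF_LOG_WARNING,
        "Soft-limit reached: deleting the oldest snapshot %s of "
        "volume %s", snap-&gt;snapname, volname);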

Change-Id: I75f0366935966a223b63a4ec5ac13f9fe36c0e82
BUG: 1255310
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11963
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: mohammed rafi  kc &lt;rkavunga@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
</feed>
