<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs/glusterd, branch v3.7.4</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Don't allow remove-brick start/commit if glusterd is down on the host of the brick</title>
<updated>2015-08-28T09:02:29+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2015-07-21T04:27:43+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f51ffaeda4c87b682b7865c26befd75fe1c8cb25'/>
<id>f51ffaeda4c87b682b7865c26befd75fe1c8cb25</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/11726/

The remove-brick stage blindly starts the remove-brick operation even if the
glusterd instance on the node hosting the brick is down. Operationally this is
incorrect and could result in an inconsistent rebalance status across the
nodes: the originator of this command will always report the rebalance status
as 'DEFRAG_NOT_STARTED', whereas when the glusterd instance on the other nodes
comes back up it will trigger the rebalance and mark the status as completed
once the rebalance finishes.

This patch fixes two things:
1. Add a validation in remove-brick to check whether all the peers hosting the
bricks to be removed are up.

2. Don't copy volinfo-&gt;rebal.dict from the stale volinfo during restore, as
this might end up in an inconsistent node_state.info file, resulting in a
volume status command failure.
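
The scenario in point 1 is easiest to see as a cluster.rc regression test. A
minimal sketch, assuming the usual test-framework helpers; the file name,
volume layout and exact assertions are illustrative and not the test added by
this patch:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../cluster.rc

    function peer_count {
            $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
    }

    cleanup;

    TEST launch_cluster 3;
    TEST $CLI_1 peer probe $H2;
    TEST $CLI_1 peer probe $H3;
    EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count

    TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0 $H3:$B3/$V0
    TEST $CLI_1 volume start $V0

    # bring down glusterd on the node hosting the brick to be removed
    TEST kill_glusterd 2

    # with the fix, both start and commit must be rejected while that
    # peer is down
    TEST ! $CLI_1 volume remove-brick $V0 $H2:$B2/$V0 start
    TEST ! $CLI_1 volume remove-brick $V0 $H2:$B2/$V0 commit

    cleanup;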

Change-Id: Ia4a76865c05037d49eec5e3bbfaf68c1567f1f81
BUG: 1256265
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11726
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11996
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backport of http://review.gluster.org/#/c/11726/

The remove-brick stage blindly starts the remove-brick operation even if the
glusterd instance on the node hosting the brick is down. Operationally this is
incorrect and could result in an inconsistent rebalance status across the
nodes: the originator of this command will always report the rebalance status
as 'DEFRAG_NOT_STARTED', whereas when the glusterd instance on the other nodes
comes back up it will trigger the rebalance and mark the status as completed
once the rebalance finishes.

This patch fixes two things:
1. Add a validation in remove-brick to check whether all the peers hosting the
bricks to be removed are up.

2. Don't copy volinfo-&gt;rebal.dict from the stale volinfo during restore, as
this might end up in an inconsistent node_state.info file, resulting in a
volume status command failure.

Change-Id: Ia4a76865c05037d49eec5e3bbfaf68c1567f1f81
BUG: 1256265
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11726
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11996
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: stop all the daemon services on peer detach</title>
<updated>2015-08-26T11:58:03+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2015-08-21T13:33:03+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9d71c362b11544494b3fe68477cc47abe3bb2cde'/>
<id>9d71c362b11544494b3fe68477cc47abe3bb2cde</id>
<content type='text'>
Backport of:  http://review.gluster.org/#/c/11509/

Currently glusterd does not stop all the daemon services on peer detach.

With this fix, peer detach performs its cleanup properly and stops all the
daemons that were running on the node before the peer detach.

Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1238706
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11971
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backport of:  http://review.gluster.org/#/c/11509/

Currently glusterd does not stop all the daemon services on peer detach.

With this fix, peer detach performs its cleanup properly and stops all the
daemons that were running on the node before the peer detach.

Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1238706
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11971
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Stop/restart/notify daemons (svcs) during reset/set on a volume</title>
<updated>2015-08-18T10:58:18+00:00</updated>
<author>
<name>anand</name>
<email>anekkunt@redhat.com</email>
</author>
<published>2015-05-20T14:22:11+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c49b8064bd83a512dd962d4c4168728886ff0a5c'/>
<id>c49b8064bd83a512dd962d4c4168728886ff0a5c</id>
<content type='text'>
Problem: Reset/set commands were not working properly. The reset command
returns success but does not send a notification to svcs if the corresponding
graph is modified.

Fix: Whenever a reset/set command is issued, generate a temporary graph,
compare it with the original graph, and take the following actions (a
test-style sketch of the expected behaviour follows the list):
1.) If both graphs are identical, there is nothing to do for the svcs.
2.) If there is any change in graph topology, restart/stop the service by
calling the svc manager.
3.) If there are changes only in options, send a notify signal by calling
glusterd_fetchspec_notify.
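
A minimal single-node sketch of that behaviour, using the nfs service as the
observable svc and assuming the stock helpers (is_nfs_export_available,
NFS_EXPORT_TIMEOUT) from the test framework; options and assertions are
illustrative, not the regression test added with this change:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}1 $H0:$B0/${V0}2
    TEST $CLI volume start $V0
    EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available

    # graph-topology change: the nfs svc has to be stopped for this volume
    TEST $CLI volume set $V0 nfs.disable on

    # reset must do more than return success -- the svc manager has to be
    # told, so the nfs export comes back
    TEST $CLI volume reset $V0
    EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available

    cleanup;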

Back port of:
&gt;Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
&gt;BUG: 1209329
&gt;Signed-off-by: anand &lt;anekkunt@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/10850
&gt;Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit 7255febab2c38cc89b71f2519a20d10f53586000)

Change-Id: I42aa757ecc6b5b307b5927d11f12d08f57ac0ae2
BUG: 1253165
Reviewed-on: http://review.gluster.org/11905
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: Reset/set commands were not working properly. The reset command
returns success but does not send a notification to svcs if the corresponding
graph is modified.

Fix: Whenever a reset/set command is issued, generate a temporary graph,
compare it with the original graph, and take the following actions:
1.) If both graphs are identical, there is nothing to do for the svcs.
2.) If there is any change in graph topology, restart/stop the service by
calling the svc manager.
3.) If there are changes only in options, send a notify signal by calling
glusterd_fetchspec_notify.

Back port of:
&gt;Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
&gt;BUG: 1209329
&gt;Signed-off-by: anand &lt;anekkunt@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/10850
&gt;Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit 7255febab2c38cc89b71f2519a20d10f53586000)

Change-Id: I42aa757ecc6b5b307b5927d11f12d08f57ac0ae2
BUG: 1253165
Reviewed-on: http://review.gluster.org/11905
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: getting txn_id from frame-&gt;cookie in op_sm callback</title>
<updated>2015-08-07T06:24:22+00:00</updated>
<author>
<name>anand</name>
<email>anekkunt@redhat.com</email>
</author>
<published>2015-07-21T10:12:24+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=90beb1602cb2926c14a9bd02653cd75b40661cd4'/>
<id>90beb1602cb2926c14a9bd02653cd75b40661cd4</id>
<content type='text'>
RCA: If a rebalance start is triggered from one node while one of the other
nodes in the cluster goes down simultaneously, we might end up in a case where
the callback uses the txn_id from priv-&gt;global_txn_id, which is always all
zeros; injecting an event with an incorrect txn_id results in the op-sm
getting stuck.

Fix: set txn_id in frame-&gt;cookie during submit_and_request, so that the
txn_id is available in the callback functions.
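
The user-visible symptom can be approximated as a cluster.rc test. A hedged
sketch (it approximates the race by taking the peer down just before the
start; node count, layout and the final assertions are illustrative, not the
actual regression test):

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../cluster.rc

    function peer_count {
            $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
    }

    cleanup;

    TEST launch_cluster 2;
    TEST $CLI_1 peer probe $H2;
    EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

    TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
    TEST $CLI_1 volume start $V0

    # peer goes down around the time rebalance start is issued from node 1
    TEST kill_glusterd 2
    # the outcome of the start itself is not the point; the op-sm on node 1
    # must not wedge on a zeroed txn_id
    $CLI_1 volume rebalance $V0 start

    # further glusterd operations on node 1 must still succeed
    TEST $CLI_1 volume status $V0
    TEST $CLI_1 volume info $V0

    cleanup;
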
Back port of :
&gt;Change-Id: I519176c259ea9d37897791a77a7c92eb96d10052
&gt;BUG: 1245142
&gt;Signed-off-by: anand &lt;anekkunt@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/11728
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;(cherry picked from commit 65e6ab1bfbbec7755f7ae2294cb83334ac65a296)

Change-Id: I376d6c791b0200a8371f590d29c3e950658a02c7
BUG: 1249925
Reviewed-on: http://review.gluster.org/11823
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
RCA: If a rebalance start is triggered from one node while one of the other
nodes in the cluster goes down simultaneously, we might end up in a case where
the callback uses the txn_id from priv-&gt;global_txn_id, which is always all
zeros; injecting an event with an incorrect txn_id results in the op-sm
getting stuck.

Fix: set txn_id in frame-&gt;cookie during submit_and_request, so that the
txn_id is available in the callback functions.
Back port of :
&gt;Change-Id: I519176c259ea9d37897791a77a7c92eb96d10052
&gt;BUG: 1245142
&gt;Signed-off-by: anand &lt;anekkunt@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/11728
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;(cherry picked from commit 65e6ab1bfbbec7755f7ae2294cb83334ac65a296)

Change-Id: I376d6c791b0200a8371f590d29c3e950658a02c7
BUG: 1249925
Reviewed-on: http://review.gluster.org/11823
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: rebalance support for cluster.rc framework</title>
<updated>2015-08-06T12:13:30+00:00</updated>
<author>
<name>anand</name>
<email>anekkunt@redhat.com</email>
</author>
<published>2015-06-13T11:16:10+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b8120c91cb11de7b21f2d6a36f7dc89ad0b7c387'/>
<id>b8120c91cb11de7b21f2d6a36f7dc89ad0b7c387</id>
<content type='text'>
Issue: Rebalance fails in the cluster framework (i.e. any simulated cluster
environment on a single node).
RCA:
  1. We always pass "localhost" as the volfile server for the rebalance xlator.
  2. Rebalance daemons overwrite each other's unix sockets and log files
     (all rebalance processes create a socket with the same name).

Fix: set vol_file_server, the unix socket and the log files properly.
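
With the fix, a rebalance driven through the simulated cluster should run to
completion. A rough sketch, assuming the stock timeouts from include.rc; the
status check is a locally defined, illustrative helper and the actual test may
assert differently:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../cluster.rc

    function peer_count {
            $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
    }

    function rebalance_completed {
            $CLI_1 volume rebalance $V0 status | grep -c 'completed'
    }

    cleanup;

    TEST launch_cluster 2;
    TEST $CLI_1 peer probe $H2;
    EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

    TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
    TEST $CLI_1 volume start $V0

    TEST $CLI_1 volume rebalance $V0 start
    # both nodes' rebalance daemons should report completion
    EXPECT_WITHIN $REBALANCE_TIMEOUT "2" rebalance_completed

    cleanup;
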
Back port of:
&gt;Change-Id: I6654461e00c2a164b2f1f1db24a316c4180dd8d5
&gt;BUG: 1231437
&gt;Signed-off-by: anand &lt;anekkunt@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/11210
&gt;Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit ee824ccb10e28489907fbf978a2d36b0b2c5dc8c)

Change-Id: I31a97c6186efa36d568c469a2320a4f3e870f781
BUG: 1249983
Reviewed-on: http://review.gluster.org/11824
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Issue: Rebalance fails in the cluster framework (i.e. any simulated cluster
environment on a single node).
RCA:
  1. We always pass "localhost" as the volfile server for the rebalance xlator.
  2. Rebalance daemons overwrite each other's unix sockets and log files
     (all rebalance processes create a socket with the same name).

Fix: set vol_file_server, the unix socket and the log files properly.
Back port of:
&gt;Change-Id: I6654461e00c2a164b2f1f1db24a316c4180dd8d5
&gt;BUG: 1231437
&gt;Signed-off-by: anand &lt;anekkunt@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/11210
&gt;Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit ee824ccb10e28489907fbf978a2d36b0b2c5dc8c)

Change-Id: I31a97c6186efa36d568c469a2320a4f3e870f781
BUG: 1249983
Reviewed-on: http://review.gluster.org/11824
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: initialize the daemon services on demand</title>
<updated>2015-07-27T14:31:30+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2015-07-01T09:17:48+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c5a19652c80162e670d29a7bd8c910d0acdfacb9'/>
<id>c5a19652c80162e670d29a7bd8c910d0acdfacb9</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/11488/

As of now, all the daemon services are initialized in the glusterd init path.
Since the socket file path of a per-node daemon requires the UUID of the node,
the MY_UUID macro is invoked as part of that initialization.

This flow breaks use cases where a gluster image is built from a template
(e.g. a Dockerfile, a Vagrantfile, or any kind of virtualization environment):
instances brought up from such an image would all have the same node UUID,
resulting in peer probe failures.

The solution is to lazily initialize the services on demand.

Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
BUG: 1247012
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11488
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11766
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backport of http://review.gluster.org/#/c/11488/

As of now, all the daemon services are initialized in the glusterd init path.
Since the socket file path of a per-node daemon requires the UUID of the node,
the MY_UUID macro is invoked as part of that initialization.

This flow breaks use cases where a gluster image is built from a template
(e.g. a Dockerfile, a Vagrantfile, or any kind of virtualization environment):
instances brought up from such an image would all have the same node UUID,
resulting in peer probe failures.

The solution is to lazily initialize the services on demand.

Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
BUG: 1247012
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11488
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11766
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Pass NULL in glusterd_svc_manager in glusterd_restart_bricks</title>
<updated>2015-07-24T03:51:38+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2015-07-14T08:31:14+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=64727ecddb48f6cb8e497c28276ea78a0fb97991'/>
<id>64727ecddb48f6cb8e497c28276ea78a0fb97991</id>
<content type='text'>
On restarting glusterd, the quota daemon is not started when more than one
volume is configured and quota is enabled only on the 2nd volume.
This is because restarting glusterd restarts all the bricks, and during the
brick restart the respective daemon is started by passing the volinfo of the
first volume. Passing a volinfo to glusterd_svc_manager means the daemon
managers act based on that single volume's configuration, which is incorrect
for per-node daemons.

The fix is to pass NULL as volinfo while restarting bricks.
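
The reported scenario translates to a simple single-node test. A hedged
sketch (the quotad_count helper and the restart sequence are illustrative,
not the actual regression test):

    #!/bin/bash
    . $(dirname $0)/../../include.rc

    function quotad_count {
            ps auxww | grep glusterfs | grep quotad | grep -v grep | wc -l
    }

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}
    TEST $CLI volume create $V1 $H0:$B0/${V1}
    TEST $CLI volume start $V0
    TEST $CLI volume start $V1

    # quota enabled only on the second volume
    TEST $CLI volume quota $V1 enable
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" quotad_count

    # simulate a node restart: stop glusterd and the running quotad, then
    # start glusterd again; with the fix quotad must come back up even
    # though the first volume processed during brick restart has quota off
    TEST pkill glusterd
    TEST pkill -f quotad
    TEST glusterd
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" quotad_count

    cleanup;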

BUG: 1242882
Change-Id: Ie53fc452dc79811068a9397abca13c65de4a8359
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11660
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
On restarting glusterd, the quota daemon is not started when more than one
volume is configured and quota is enabled only on the 2nd volume.
This is because restarting glusterd restarts all the bricks, and during the
brick restart the respective daemon is started by passing the volinfo of the
first volume. Passing a volinfo to glusterd_svc_manager means the daemon
managers act based on that single volume's configuration, which is incorrect
for per-node daemons.

The fix is to pass NULL as volinfo while restarting bricks.

BUG: 1242882
Change-Id: Ie53fc452dc79811068a9397abca13c65de4a8359
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11660
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Fix failure in replace-brick when src-brick is offline</title>
<updated>2015-07-14T18:39:39+00:00</updated>
<author>
<name>Anuradha Talur</name>
<email>atalur@redhat.com</email>
</author>
<published>2015-07-13T18:04:17+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1059bb42a8ef513484c12a33cef432e2156ae2dd'/>
<id>1059bb42a8ef513484c12a33cef432e2156ae2dd</id>
<content type='text'>
Change-Id: I0fdb58e15da15c40c3fc9767f2fe4df0ea9d2350
BUG: 1242728
Reviewed-on: http://review.gluster.org/11651
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Signed-off-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11656
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I0fdb58e15da15c40c3fc9767f2fe4df0ea9d2350
BUG: 1242728
Reviewed-on: http://review.gluster.org/11651
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Signed-off-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11656
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: test cluster lock in heterogeneous cluster</title>
<updated>2015-06-24T10:57:50+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2015-06-09T17:41:13+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e4fcbc7b812a5f0d0fcf6f87d448fa94fd619afe'/>
<id>e4fcbc7b812a5f0d0fcf6f87d448fa94fd619afe</id>
<content type='text'>
Backport of http://review.gluster.org/11143

Change-Id: I421f50aeb89213d036b4b40f20a8e0d6bd78d60b
BUG: 1231608
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11143
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11216
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backport of http://review.gluster.org/11143

Change-Id: I421f50aeb89213d036b4b40f20a8e0d6bd78d60b
BUG: 1231608
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11143
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11216
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: fix spurious failure in bug-857330/xml.t</title>
<updated>2015-06-17T22:31:28+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2015-06-03T05:10:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=7255e567959ca6f0c211afff00e97a911b471b05'/>
<id>7255e567959ca6f0c211afff00e97a911b471b05</id>
<content type='text'>
Backport of http://review.gluster.org/11054

get-task-status () used to always return 0 *unless* the CLI command itself
failed, which is unlikely. However, if the CLI command does fail for some
reason, EXPECT_WITHIN will abort.
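
For context, this is roughly the polling pattern the test relies on; a hedged
sketch, not the actual helper from bug-857330/xml.t, and the grep pattern is
illustrative:

    # EXPECT_WITHIN keeps invoking the function until it echoes the expected
    # value or the timeout expires; per the description above, it aborts if
    # the command itself fails (for example because the CLI invocation
    # errored out) instead of retrying.
    function get-task-status {
            $CLI volume status $V0 --xml | grep -c '&lt;statusStr&gt;completed&lt;/statusStr&gt;'
    }

    EXPECT_WITHIN $REBALANCE_TIMEOUT "1" get-task-status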

Change-Id: Ibe54dcdccc26b3ee003677fc3516cfed98b5c06f
BUG: 1232602
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11054
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11266
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backport of http://review.gluster.org/11054

get-task-status () used to always return 0 *unless* the CLI command itself
failed, which is unlikely. However, if the CLI command does fail for some
reason, EXPECT_WITHIN will abort.

Change-Id: Ibe54dcdccc26b3ee003677fc3516cfed98b5c06f
BUG: 1232602
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11054
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11266
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
