<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-syncop.c, branch v3.5.3beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Reset opinfo.op ONLY if lock succeeded</title>
<updated>2014-02-08T03:00:19+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2014-02-01T17:19:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ef37b4b6a22fe04b5a1789c45b28c8d7e6fe764a'/>
<id>ef37b4b6a22fe04b5a1789c45b28c8d7e6fe764a</id>
<content type='text'>
... and also initialise @this before doing anything else.
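
For illustration, the intended ordering looks roughly like the sketch
below; the helper names and simplified signatures are assumptions made
for this sketch, not the actual glusterd code:

    /* stand-ins for glusterd's opinfo and cluster-lock primitives */
    struct op_info { int op; };
    static struct op_info opinfo;
    enum { GD_OP_NONE = 0 };

    extern void *get_this (void);      /* stand-in for the THIS macro */
    extern int   cluster_lock (void);  /* returns 0 on success        */

    int
    sync_task_begin (void)
    {
            void *this = get_this ();  /* initialise @this first */
            int   ret  = cluster_lock ();

            if (ret)
                    return ret;        /* lock failed: opinfo.op kept */

            /* reset opinfo.op ONLY now that the lock has succeeded */
            opinfo.op = GD_OP_NONE;
            (void) this;               /* @this would be used for logging */
            return 0;
    }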

Change-Id: I0244a7f61a826b32f4c2dfe51e246f2593a38211
BUG: 1060434
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6885
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-on: http://review.gluster.org/6922
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Aggregate tasks status in 'volume status [tasks]'</title>
<updated>2013-12-23T14:56:34+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-23T08:37:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9d592246d6121aa38cd6fb6a875be4473d4979c8'/>
<id>9d592246d6121aa38cd6fb6a875be4473d4979c8</id>
<content type='text'>
Backport of http://review.gluster.org/6230

Previously, glusterd used to just send back the local status of a task
in a 'volume status [tasks]' command. As the rebalance operation is
distributed and asynchronous, this meant that different peers could give
different status values for a rebalance or remove-brick task.

With this patch, all the peers will send back the tasks status as a part
of the 'volume status' commit op, and the origin peer will aggregate
these to arrive at a final status for the task.

The aggregation is only done for rebalance or remove-brick tasks. The
replace-brick task will have the same status on all the peers (see
comment in glusterd_volume_status_aggregate_tasks_status() for more
information) and need not be aggregated.

The rebalance process has 5 states,
 NOT_STARTED - rebalance process has not been started on this node
 STARTED - rebalance process has been started and is still running
 STOPPED - rebalance process was stopped by a 'rebalance/remove-brick
           stop' command
 COMPLETED - rebalance process completed successfully
 FAILED - rebalance process failed to complete successfully
The aggregation is done using the following precedence,
 STARTED &gt; FAILED &gt; STOPPED &gt; COMPLETED &gt; NOT_STARTED
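
One way to realise this precedence is to rank the states and keep the
highest-ranked value reported by any peer. A minimal sketch, with an
assumed enum ordering and helper name (the actual glusterd code may
differ):

    /* ranked so that a numerically larger status wins aggregation:
       STARTED &gt; FAILED &gt; STOPPED &gt; COMPLETED &gt; NOT_STARTED */
    enum task_status {
            TS_NOT_STARTED,
            TS_COMPLETED,
            TS_STOPPED,
            TS_FAILED,
            TS_STARTED
    };

    static enum task_status
    aggregate_status (enum task_status agg, enum task_status peer)
    {
            return (peer &gt; agg) ? peer : agg;
    }

Folding every peer's reply through such a helper yields COMPLETED only
after all peers have completed, which is what the tests below verify.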

The new changes make the 'volume status tasks' command a distributed
command as we need to get the task status from all peers.

The following tests were performed,
- Start a remove-brick task and do a status command on a peer which
  doesn't have the brick being removed. The remove-brick status was
  given correctly as 'in progress' and 'completed', instead of 'not
  started'
- Start a rebalance task, run the status command. The status moved to
  'completed' only after rebalance completed on all nodes.

Also, change the CLI xml output code for rebalance status to use the
same algorithm for status aggregation.

Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
BUG: 1027094
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6230
Reviewed-on: http://review.gluster.org/6562
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli/glusterd: Changes to quota command Quota feature</title>
<updated>2013-11-26T18:25:27+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2013-11-14T11:35:26+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0d5cd92f51c02b8d664000b5a2d22a2ddbbc23b6'/>
<id>0d5cd92f51c02b8d664000b5a2d22a2ddbbc23b6</id>
<content type='text'>
... re-work.

Following are the cli commands that are new/re-worked:
======================================================

volume quota &lt;VOLNAME&gt; {enable|disable|list [&lt;path&gt; ...]|remove &lt;path&gt;|default-soft-limit &lt;percent&gt;} |
volume quota &lt;VOLNAME&gt; {limit-usage &lt;path&gt; &lt;size&gt; [&lt;percent&gt;]} |
volume quota &lt;VOLNAME&gt; {alert-time|soft-timeout|hard-timeout} {&lt;time&gt;}
volume status [all | &lt;VOLNAME&gt; [nfs|shd|&lt;BRICK&gt;|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump &lt;VOLNAME&gt; [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]

glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
  the aux mount created by the cli (see the sketch below).
* The gfids of the directories on which quota limits are set
  for a given volume are stored in binary format in
  /var/lib/glusterd/vols/&lt;volname&gt;/quota.conf, whose cksum and
  version are stored in
  /var/lib/glusterd/vols/&lt;volname&gt;/quota.cksum.
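
For illustration, setting one limit boils down to an lsetxattr() call
on the directory through the aux mount. The xattr key and the value
layout in this sketch are assumptions, not a description of the actual
on-disk format:

    #include &lt;stdint.h&gt;
    #include &lt;endian.h&gt;
    #include &lt;sys/xattr.h&gt;

    /* assumed layout: hard limit in bytes, soft limit in percent */
    struct quota_limit {
            int64_t hard_lim;
            int64_t soft_lim;
    } __attribute__ ((__packed__));

    static int
    set_quota_limit (const char *path, int64_t bytes, int64_t pct)
    {
            struct quota_limit ql = {
                    .hard_lim = htobe64 (bytes),   /* network byte order */
                    .soft_lim = htobe64 (pct),
            };

            /* assumed key name, applied via the cli's aux mount */
            return lsetxattr (path, "trusted.glusterfs.quota.limit-set",
                              &amp;ql, sizeof (ql), 0);
    }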

Original-author: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Original-author: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;

BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli,glusterd: Implement 'volume status tasks'</title>
<updated>2013-10-09T06:13:16+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2013-09-24T11:31:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e51ca3c1c991416895e1e8693f7c3e6332d57464'/>
<id>e51ca3c1c991416895e1e8693f7c3e6332d57464</id>
<content type='text'>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters
for all volumes.

The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, types and other relevant parameters.
Only the originator glusterd participates in this command, as all the
information needed is available on all the nodes. This keeps the
command inexpensive enough to be executed every 10 seconds.

Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Adding transaction checks for cluster unlock.</title>
<updated>2013-09-20T18:48:48+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2013-09-15T12:25:31+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=78b0b59285b03af65c10a1fd976836bc5f53c167'/>
<id>78b0b59285b03af65c10a1fd976836bc5f53c167</id>
<content type='text'>
While a gluster command holding the lock is in execution, any other
gluster command which tries to run will fail to acquire the lock. As a
result, command#2 will follow the cleanup code flow, which also
includes unlocking the held locks. As both commands are run from the
same node, command#2 will end up releasing the locks held by command#1
even before command#1 reaches completion.

Now we call the unlock routine in the cleanup code path only if the
cluster has been locked during the same transaction.
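
A minimal sketch of that guard, with assumed names (the real check
lives in glusterd's op state machine):

    #include &lt;stdbool.h&gt;

    extern int cluster_unlock (void);  /* stand-in for glusterd's unlock */

    /* set only when *this* transaction acquired the cluster lock */
    static bool locked_in_this_txn = false;

    static int
    cleanup_on_failure (void)
    {
            int ret = 0;

            /* a command that never got the lock must not release the
               locks held by the command that did */
            if (locked_in_this_txn) {
                    ret = cluster_unlock ();
                    locked_in_this_txn = false;
            }

            return ret;
    }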

Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834
BUG: 1008172
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5937
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/cli changes for distributed geo-rep</title>
<updated>2013-07-26T20:19:18+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2013-07-10T12:02:41+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5757ed2727990fd2c3aaff420003638f1eec6b92'/>
<id>5757ed2727990fd2c3aaff420003638f1eec6b92</id>
<content type='text'>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; create [push-pem] [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; start [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; stop [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; delete
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; config
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; status

Geo-replication is now distributed: the session will be created, and
gsyncd spawned, on all relevant nodes instead of on only one node.

geo-rep: Collecting status detail related data

Added a persistent store for saving information about
TotalFilesSynced, TotalSyncTime and TotalBytesSynced.

Changes in the status information sent over the socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;

New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
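
For illustration, the semicolon-separated key=value records above can
be split as in the sketch below (a hypothetical parser in C, not the
actual gsyncd code):

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    /* split "FilesSynced=2;BytesSynced=2507;..." into key/value pairs */
    static void
    parse_status_line (char *line)
    {
            char *save = NULL;

            for (char *kv = strtok_r (line, ";", &amp;save); kv;
                 kv = strtok_r (NULL, ";", &amp;save)) {
                    char *eq = strchr (kv, '=');
                    if (!eq)
                            continue;
                    *eq = '\0';
                    printf ("%s = %s\n", kv, eq + 1);
            }
    }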

Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta &lt;asengupt@redhat.com&gt;
Original Author: Venky Shankar &lt;vshankar@redhat.com&gt;
Original Author: Aravinda VK &lt;avishwan@redhat.com&gt;
Original Author: Amar Tumballi &lt;amarts@redhat.com&gt;
Original Author: Csaba Henk &lt;csaba@redhat.com&gt;
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd-syncop: Fix unlocking and collating errors</title>
<updated>2013-06-04T20:27:35+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2013-05-23T06:49:33+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0c1916f3e1eb81aa81dc2d62e97f46880390838c'/>
<id>0c1916f3e1eb81aa81dc2d62e97f46880390838c</id>
<content type='text'>
* Only those peers which were locked need to be unlocked.
* Fix the location of error collation in the callbacks. The callback
  functions could miss collating errors if there was an rpc error
  (see the sketch below).
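
A minimal sketch of the second point, with assumed names (the real
callbacks collate into glusterd-syncop's args structure):

    /* stand-in for the per-operation error accumulator */
    struct op_args {
            int op_ret;
    };

    extern void collate_errors (struct op_args *args, int op_ret,
                                const char *errstr);

    static int
    peer_op_cbk (struct op_args *args, int rpc_status)
    {
            int op_ret = 0;

            if (rpc_status != 0)
                    op_ret = -1;   /* rpc failure: do NOT return early */

            /* collate at the single exit point, so an rpc error is
               recorded just like any op failure */
            collate_errors (args, op_ret, op_ret ? "rpc error" : NULL);
            return 0;
    }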

Change-Id: Ie27c2f1ec197da4f5077a4d6e032127954ce87cd
BUG: 948686
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5087
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Set op_errstr to error string received from peer</title>
<updated>2013-05-17T02:27:26+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2013-05-16T09:35:09+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5780e274c78aab671e5fb72d66fec23f90576e51'/>
<id>5780e274c78aab671e5fb72d66fec23f90576e51</id>
<content type='text'>
... in case of volume op failure on remote host
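
For illustration, propagating the remote message could look like this
sketch; the names are assumptions (the real code copies rsp.op_errstr
out of the peer's RPC reply):

    #include &lt;string.h&gt;

    /* on failure, surface the remote peer's message instead of a
       generic local one; rsp_errstr stands in for rsp.op_errstr */
    static int
    set_op_errstr (char **op_errstr, int rsp_op_ret, const char *rsp_errstr)
    {
            if (rsp_op_ret &amp;&amp; rsp_errstr &amp;&amp; rsp_errstr[0])
                    *op_errstr = strdup (rsp_errstr);

            return rsp_op_ret;
    }

The NULL/empty check also mirrors the guard added in the rsp.op_errstr
patch below.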

Change-Id: I7177dc02369dffa82f217496559532d18b7c7c7a
BUG: 963628
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5018
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Perform NULL check on rsp.op_errstr before using it</title>
<updated>2013-05-13T17:58:46+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2013-05-13T09:29:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b9fdbc079025ffd743305cee868e02f653326419'/>
<id>b9fdbc079025ffd743305cee868e02f653326419</id>
<content type='text'>
Change-Id: Id18b215a91cf016964ea98d2f414293b82167d24
BUG: 962362
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4992
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Fix uuid to hostname conversion for 'volume status'</title>
<updated>2013-05-13T06:38:49+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2013-04-23T06:41:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fd36cabb0db4139cba97fc75c6169b57ebea3e9d'/>
<id>fd36cabb0db4139cba97fc75c6169b57ebea3e9d</id>
<content type='text'>
Change-Id: I46c41c29c2d11652f6d8ccd5637be0ac9774fc1d
BUG: 927648
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4873
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
</feed>
