<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-utils.h, branch v3.5.3beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: On gaining quorum spawn_daemons in new thread</title>
<updated>2014-06-08T14:50:18+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2014-05-07T12:47:11+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4977a4d74f14d84f9e622a650f1ea9b47d795962'/>
<id>4977a4d74f14d84f9e622a650f1ea9b47d795962</id>
<content type='text'>
  Backport of http://review.gluster.org/7703 from master

During startup, if a glusterd has peers, it waits until quorum is
obtained before spawning bricks and other services. If peers are not
present, the daemons are started during glusterd's startup itself.

The spawning of daemons as a quorum action was done without using a
separate thread, unlike the spawn on startup. Since quotad was launched
using the blocking runner_run API, this led to the calling thread being
blocked. The calling thread is almost always the epoll thread, and this
leads to a deadlock: the runner_run call blocks the epoll thread waiting
for quotad to start, so glusterd cannot serve any requests, while the
startup of quotad is itself blocked because it cannot fetch the volfile
from glusterd.

The fix is to launch the spawn-daemons task in a separate thread. This
frees up the epoll thread and prevents the above deadlock.
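
A minimal sketch of the idea (illustrative helper names, not the code
from this patch): the blocking work is handed off to its own detached
thread so the calling (epoll) thread returns immediately.

#include &lt;pthread.h&gt;

/* Blocking daemon-spawning work (e.g. starting quotad via runner_run())
 * runs here, off the epoll thread, so glusterd keeps serving requests. */
static void *
spawn_daemons_worker (void *data)
{
        /* ... start bricks, nfs, shd, quotad ... */
        return NULL;
}

/* Called from the quorum action; only creates the thread and returns. */
static int
launch_spawn_daemons_async (void *data)
{
        pthread_t th;
        int       ret;

        ret = pthread_create (&amp;th, NULL, spawn_daemons_worker, data);
        if (ret == 0)
                pthread_detach (th);
        return ret;
}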

BUG: 1105188
Change-Id: Idad1e96fbe1411dfd4b1a542fb5fa115673636c0
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7995
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
  Backport of http://review.gluster.org/7703 from master

During startup, if a glusterd has peers, it waits until quorum is
obtained before spawning bricks and other services. If peers are not
present, the daemons are started during glusterd's startup itself.

The spawning of daemons as a quorum action was done without using a
separate thread, unlike the spawn on startup. Since quotad was launched
using the blocking runner_run API, this led to the calling thread being
blocked. The calling thread is almost always the epoll thread, and this
leads to a deadlock: the runner_run call blocks the epoll thread waiting
for quotad to start, so glusterd cannot serve any requests, while the
startup of quotad is itself blocked because it cannot fetch the volfile
from glusterd.

The fix is to launch the spawn-daemons task in a separate thread. This
frees up the epoll thread and prevents the above deadlock.

BUG: 1105188
Change-Id: Idad1e96fbe1411dfd4b1a542fb5fa115673636c0
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7995
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/geo-rep: more glusterd and cli fixes for geo-rep.</title>
<updated>2014-01-27T17:44:33+00:00</updated>
<author>
<name>Ajeet Jha</name>
<email>ajha@redhat.com</email>
</author>
<published>2013-12-02T07:25:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f33d09f23b089bd07437eb714f8dffa43460d6b5'/>
<id>f33d09f23b089bd07437eb714f8dffa43460d6b5</id>
<content type='text'>
    -&gt; Handle option validation cases in the reset case.
    -&gt; Create a valid conf path when glusterd restarts.
    -&gt; Read the gsyncd worker thread status and display it.
    -&gt; Display status-detail per worker.
    -&gt; Fetch checkpoint info in geo-rep status.
    -&gt; Add use-tarssh value validation.

misc: geo-rep fixes related to cluster xlators, logrotate, etc.
    -&gt; cluster/dht: fix 'stime' getxattr getting overwritten.
    -&gt; cluster/afr: return max of 'stime' values in subvol.
    -&gt; geo-rep-logrotate: send SIGHUP to geo-rep auxiliary.
    -&gt; cluster/dht: fix convoluted logic while aggregating.
    -&gt; cluster/*: fix 'stime' min/max fetch logic.

Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha &lt;ajha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@gmail.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6810
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
    -&gt; Handle option validation cases in the reset case.
    -&gt; Create a valid conf path when glusterd restarts.
    -&gt; Read the gsyncd worker thread status and display it.
    -&gt; Display status-detail per worker.
    -&gt; Fetch checkpoint info in geo-rep status.
    -&gt; Add use-tarssh value validation.

misc: geo-rep fixes related to cluster xlators, logrotate, etc.
    -&gt; cluster/dht: fix 'stime' getxattr getting overwritten.
    -&gt; cluster/afr: return max of 'stime' values in subvol.
    -&gt; geo-rep-logrotate: send SIGHUP to geo-rep auxiliary.
    -&gt; cluster/dht: fix convoluted logic while aggregating.
    -&gt; cluster/*: fix 'stime' min/max fetch logic.

Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha &lt;ajha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@gmail.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6810
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: make volinfo a refcnt'ed object.</title>
<updated>2013-12-23T15:00:24+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-16T04:59:19+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=94ed403ec213ee955acc55cc4a04f7c39470855b'/>
<id>94ed403ec213ee955acc55cc4a04f7c39470855b</id>
<content type='text'>
        Backport of http://review.gluster.org/6521

Add glusterd_volinfo_remove(..), which removes @volinfo from the list
of volumes in the cluster and performs an unref on @volinfo.
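
A rough sketch of the pattern (illustrative types; not the actual
glusterd structures or list helpers):

#include &lt;stdlib.h&gt;

struct volinfo {
        struct volinfo *next;    /* cluster volume list, for illustration */
        int             refcnt;
};

static void
volinfo_unref (struct volinfo *v)
{
        if (--v-&gt;refcnt == 0)
                free (v);
}

/* Unlink @v from the list headed at *head, then drop the reference the
 * list held on it. */
static void
volinfo_remove (struct volinfo *v, struct volinfo **head)
{
        struct volinfo **pp = head;

        while (*pp &amp;&amp; *pp != v)
                pp = &amp;(*pp)-&gt;next;
        if (*pp) {
                *pp = v-&gt;next;
                v-&gt;next = NULL;
                volinfo_unref (v);
        }
}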

Change-Id: I5f546ca58f61bc334ab1bab4c51c4a21e1f66161
BUG: 1038051
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6569
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
        Backport of http://review.gluster.org/6521

Add glusterd_volinfo_remove(..), which removes @volinfo from the list
of volumes in the cluster and performs an unref on @volinfo.

Change-Id: I5f546ca58f61bc334ab1bab4c51c4a21e1f66161
BUG: 1038051
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6569
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rpc,glusterd: Use rpc_clnt notifyfn to cleanup mydata</title>
<updated>2013-12-23T14:58:18+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-23T08:37:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8c4e79c446fdfea00c1589a625ba1f1a63fdecc5'/>
<id>8c4e79c446fdfea00c1589a625ba1f1a63fdecc5</id>
<content type='text'>
        Backport of http://review.gluster.org/5512

rpc:
- On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
  notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should properly
  clean up the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This will
  prevent new disconnect events which can lead to crashes.

glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
  place of rpc_clnt_unref(). This function correctly gives up the
  big-lock before performing the unref.
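
A hypothetical notify callback showing the intended cleanup on the
destroy event (stand-in types and names; not the libgfrpc signatures):

#include &lt;stdlib.h&gt;

enum clnt_event { CLNT_CONNECT, CLNT_DISCONNECT, CLNT_DESTROY };

struct brick_ctx {              /* the "mydata" registered with the client */
        char *brick_path;
};

static int
brick_rpc_notify (void *rpc, void *mydata, enum clnt_event event, void *data)
{
        struct brick_ctx *ctx = mydata;

        switch (event) {
        case CLNT_CONNECT:
        case CLNT_DISCONNECT:
                /* existing handling: mydata must stay valid for reconnects */
                break;
        case CLNT_DESTROY:
                /* the transport is gone for good, so the saved mydata is
                 * released here rather than on DISCONNECT */
                free (ctx-&gt;brick_path);
                free (ctx);
                break;
        }
        return 0;
}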

Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6566
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
        Backport of http://review.gluster.org/5512

rpc:
- On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
  notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should properly
  clean up the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This will
  prevent new disconnect events which can lead to crashes.

glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
  place of rpc_clnt_unref(). This function correctly gives up the
  big-lock before performing the unref.

Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6566
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Improve rebalance handling during volume sync</title>
<updated>2013-12-23T14:57:52+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-23T08:37:50+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b07107511c51ae518a1a952ff9c223673cd218a8'/>
<id>b07107511c51ae518a1a952ff9c223673cd218a8</id>
<content type='text'>
        Backport of http://review.gluster.org/6334

Glusterd will now correctly copy existing rebalance information when a
volinfo is updated during volume sync. If the existing rebalance
information was stale, then any existing rebalance process will be
terminated. A new rebalance process will be started only if there is no
existing rebalance process. The rebalance process will not be started if
the existing rebalance session had completed, failed or been stopped.
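
The decision boils down to something like this (illustrative sketch, not
the glusterd code):

enum reb_status { REB_NOT_STARTED, REB_STARTED, REB_STOPPED,
                  REB_COMPLETED, REB_FAILED };

/* Restart rebalance during volume sync only if no process is running
 * and the imported session is still marked as started. */
static int
should_restart_rebalance (enum reb_status imported, int proc_running)
{
        if (proc_running)
                return 0;
        if (imported == REB_COMPLETED || imported == REB_FAILED ||
            imported == REB_STOPPED)
                return 0;
        return (imported == REB_STARTED);
}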

Change-Id: I68c5984267c188734da76770ba557662d4ea3ee0
BUG: 1036464
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6564
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
        Backport of http://review.gluster.org/6334

Glusterd will now correctly copy existing rebalance information when a
volinfo is updated during volume sync. If the existing rebalance
information was stale, then any existing rebalance process will be
terminated. A new rebalance process will be started only if there is no
existing rebalance process. The rebalance process will not be started if
the existing rebalance session had completed, failed or been stopped.

Change-Id: I68c5984267c188734da76770ba557662d4ea3ee0
BUG: 1036464
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6564
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Aggregate tasks status in 'volume status [tasks]'</title>
<updated>2013-12-23T14:56:34+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-23T08:37:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9d592246d6121aa38cd6fb6a875be4473d4979c8'/>
<id>9d592246d6121aa38cd6fb6a875be4473d4979c8</id>
<content type='text'>
        Backport of http://review.gluster.org/6230

Previously, glusterd used to send back just the local status of a task
in a 'volume status [tasks]' command. As the rebalance operation is
distributed and asynchronous, this meant that different peers could give
different status values for a rebalance or remove-brick task.

With this patch, all the peers will send back the status of their tasks
as part of the 'volume status' commit op, and the origin peer will
aggregate these to arrive at a final status for each task.

The aggregation is only done for rebalance or remove-brick tasks. The
replace-brick task will have the same status on all the peers (see
comment in glusterd_volume_status_aggregate_tasks_status() for more
information) and need not be aggregated.

The rebalance process has 5 states:
 NOT_STARTED - rebalance process has not been started on this node
 STARTED - rebalance process has been started and is still running
 STOPPED - rebalance process was stopped by a 'rebalance/remove-brick
           stop' command
 COMPLETED - rebalance process completed successfully
 FAILED - rebalance process failed to complete successfully
The aggregation is done using the following precedence:
 STARTED &gt; FAILED &gt; STOPPED &gt; COMPLETED &gt; NOT_STARTED
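
In effect the origin peer keeps the highest-ranked status it sees; a
minimal illustrative sketch (the enum order encodes the precedence):

enum task_status { NOT_STARTED, COMPLETED, STOPPED, FAILED, STARTED };

static enum task_status
aggregate_task_status (const enum task_status *peer_status, int npeers)
{
        enum task_status final = NOT_STARTED;
        int              i;

        for (i = 0; i &lt; npeers; i++)
                if (peer_status[i] &gt; final)
                        final = peer_status[i];
        return final;
}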

The new changes make the 'volume status tasks' command a distributed
command as we need to get the task status from all peers.

The following tests were performed:
- Start a remove-brick task and do a status command on a peer which
  doesn't have the brick being removed. The remove-brick status was
  given correctly as 'in progress' and 'completed', instead of 'not
  started'.
- Start a rebalance task and run the status command. The status moved to
  'completed' only after rebalance completed on all nodes.

Also, change the CLI xml output code for rebalance status to use the
same algorithm for status aggregation.

Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
BUG: 1027094
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6562
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
        Backport of http://review.gluster.org/6230

Previously, glusterd used to send back just the local status of a task
in a 'volume status [tasks]' command. As the rebalance operation is
distributed and asynchronous, this meant that different peers could give
different status values for a rebalance or remove-brick task.

With this patch, all the peers will send back the status of their tasks
as part of the 'volume status' commit op, and the origin peer will
aggregate these to arrive at a final status for each task.

The aggregation is only done for rebalance or remove-brick tasks. The
replace-brick task will have the same status on all the peers (see
comment in glusterd_volume_status_aggregate_tasks_status() for more
information) and need not be aggregated.

The rebalance process has 5 states:
 NOT_STARTED - rebalance process has not been started on this node
 STARTED - rebalance process has been started and is still running
 STOPPED - rebalance process was stopped by a 'rebalance/remove-brick
           stop' command
 COMPLETED - rebalance process completed successfully
 FAILED - rebalance process failed to complete successfully
The aggregation is done using the following precedence:
 STARTED &gt; FAILED &gt; STOPPED &gt; COMPLETED &gt; NOT_STARTED

The new changes make the 'volume status tasks' command a distributed
command as we need to get the task status from all peers.

The following tests were performed:
- Start a remove-brick task and do a status command on a peer which
  doesn't have the brick being removed. The remove-brick status was
  given correctly as 'in progress' and 'completed', instead of 'not
  started'.
- Start a rebalance task and run the status command. The status moved to
  'completed' only after rebalance completed on all nodes.

Also, change the CLI xml output code for rebalance status to use the
same algorithm for status aggregation.

Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
BUG: 1027094
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6562
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cli/glusterd: Changes to quota command Quota feature</title>
<updated>2013-11-26T18:25:27+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2013-11-14T11:35:26+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0d5cd92f51c02b8d664000b5a2d22a2ddbbc23b6'/>
<id>0d5cd92f51c02b8d664000b5a2d22a2ddbbc23b6</id>
<content type='text'>
Quota feature re-work.

Following are the CLI commands that are new or re-worked:
======================================================

volume quota &lt;VOLNAME&gt; {enable|disable|list [&lt;path&gt; ...]|remove &lt;path&gt;| default-soft-limit &lt;percent&gt;} |
volume quota &lt;VOLNAME&gt; {limit-usage &lt;path&gt; &lt;size&gt; [&lt;percent&gt;]} |
volume quota &lt;VOLNAME&gt; {alert-time|soft-timeout|hard-timeout} {&lt;time&gt;}
volume status [all | &lt;VOLNAME&gt; [nfs|shd|&lt;BRICK&gt;|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump &lt;VOLNAME&gt; [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
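
For example, with an illustrative volume name and path:

volume quota test-vol enable
volume quota test-vol limit-usage /docs 10GB 75
volume quota test-vol list /docs
volume status test-vol quotad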

glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
  the aux mount created by the CLI.
* The gfids of the directories on which quota limits are set
  for a given volume are stored in binary format in
  /var/lib/glusterd/vols/&lt;volname&gt;/quota.conf, and their cksum and
  version are stored in /var/lib/glusterd/vols/&lt;volname&gt;/quota.cksum.

Original-author: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Original-author: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;

BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Quota feature re-work.

Following are the CLI commands that are new or re-worked:
======================================================

volume quota &lt;VOLNAME&gt; {enable|disable|list [&lt;path&gt; ...]|remove &lt;path&gt;| default-soft-limit &lt;percent&gt;} |
volume quota &lt;VOLNAME&gt; {limit-usage &lt;path&gt; &lt;size&gt; [&lt;percent&gt;]} |
volume quota &lt;VOLNAME&gt; {alert-time|soft-timeout|hard-timeout} {&lt;time&gt;}
volume status [all | &lt;VOLNAME&gt; [nfs|shd|&lt;BRICK&gt;|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump &lt;VOLNAME&gt; [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]

glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
  the aux mount created by the CLI.
* The gfids of the directories on which quota limits are set
  for a given volume are stored in binary format in
  /var/lib/glusterd/vols/&lt;volname&gt;/quota.conf, and their cksum and
  version are stored in /var/lib/glusterd/vols/&lt;volname&gt;/quota.cksum.

Original-author: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Original-author: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;

BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Start rebalance only where required</title>
<updated>2013-11-20T19:32:09+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2013-11-14T06:45:53+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3c38ba1e7b4959602f945112a26b8aee904fefaa'/>
<id>3c38ba1e7b4959602f945112a26b8aee904fefaa</id>
<content type='text'>
Gluster was starting rebalance processes on peers where they weren't
required, in two cases:
- For a normal rebalance command on a volume, rebalance processes were
  started on all peers instead of just the peers which contain bricks of
  the volume.
- For rebalance processes being restarted by a volume sync, caused by a
  new peer being probed or a peer restarting, rebalance processes were
  started on all peers, both for a normal rebalance and for a
  remove-brick needing rebalance.

This patch adds a new check before starting a rebalance process in the
above two cases:
- For a rebalance process required by a rebalance command, each peer
  will check if it contains at least one brick of the volume.
- For a rebalance process required by a remove-brick command, each peer
  will check if it contains at least one of the bricks being removed.
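
A rough sketch of such a check (illustrative types; the real check
iterates the volume's brick list):

#include &lt;string.h&gt;

struct brick {
        const char *peer_uuid;  /* uuid of the peer hosting this brick */
};

/* Start a local rebalance process only if this peer hosts at least one
 * of the bricks in question. */
static int
peer_has_brick (const char *my_uuid, const struct brick *bricks, int count)
{
        int i;

        for (i = 0; i &lt; count; i++)
                if (strcmp (bricks[i].peer_uuid, my_uuid) == 0)
                        return 1;
        return 0;
}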

Change-Id: I512da16994f0d5482889c3a009c46dc20a8a15bb
BUG: 1031887
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6301
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Gluster was starting rebalance processes on peers where they weren't
required, in two cases:
- For a normal rebalance command on a volume, rebalance processes were
  started on all peers instead of just the peers which contain bricks of
  the volume.
- For rebalance processes being restarted by a volume sync, caused by a
  new peer being probed or a peer restarting, rebalance processes were
  started on all peers, both for a normal rebalance and for a
  remove-brick needing rebalance.

This patch adds a new check before starting a rebalance process in the
above two cases:
- For a rebalance process required by a rebalance command, each peer
  will check if it contains at least one brick of the volume.
- For a rebalance process required by a remove-brick command, each peer
  will check if it contains at least one of the bricks being removed.

Change-Id: I512da16994f0d5482889c3a009c46dc20a8a15bb
BUG: 1031887
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6301
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>gNFS: RFE for NFS connection behavior</title>
<updated>2013-11-15T00:07:02+00:00</updated>
<author>
<name>Santosh Kumar Pradhan</name>
<email>spradhan@redhat.com</email>
</author>
<published>2013-10-28T07:16:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e479660d9dd8bf7017c7dc78ccfa6edd9c51ec7a'/>
<id>e479660d9dd8bf7017c7dc78ccfa6edd9c51ec7a</id>
<content type='text'>
Implement reconfigure() for the NFS xlator so that volume set/reset won't
restart the NFS server process. However, a few options cannot be
reconfigured dynamically, e.g. nfs.mem-factor and nfs.port, which need
NFS to be restarted.
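
The shape of such a hook, as an illustrative sketch (not the NFS
xlator's actual option table or types):

struct nfs_conf {
        int enable_ino32;       /* example of a live-tunable option */
        int port;               /* example of a restart-only option */
};

static int
nfs_reconfigure (struct nfs_conf *cur, const struct nfs_conf *incoming)
{
        cur-&gt;enable_ino32 = incoming-&gt;enable_ino32;   /* applied in place */

        if (cur-&gt;port != incoming-&gt;port)
                return 1;       /* caller must restart the NFS server */
        return 0;
}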

Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
BUG: 1027409
Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6236
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Implement reconfigure() for the NFS xlator so that volume set/reset won't
restart the NFS server process. However, a few options cannot be
reconfigured dynamically, e.g. nfs.mem-factor and nfs.port, which need
NFS to be restarted.

Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
BUG: 1027409
Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6236
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgmt/glusterd: Relax extended attribute checks for volume create and add brick force.</title>
<updated>2013-10-18T00:16:16+00:00</updated>
<author>
<name>Vijay Bellur</name>
<email>vbellur@redhat.com</email>
</author>
<published>2013-08-31T17:04:02+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b2a9cbe5ce61ce170a55fb3dfd7f2d6de9c52f97'/>
<id>b2a9cbe5ce61ce170a55fb3dfd7f2d6de9c52f97</id>
<content type='text'>
The expectation with 'force' is that the user is aware of the
consequences of sanity checks not being triggered.

Change-Id: I79dfeed16a23829a7217cef33ab83f9f0ffae336
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
BUG: 1007509
Reviewed-on: http://review.gluster.org/5746
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The expectation with 'force' is that the user is aware of the
consequences of sanity checks not being triggered.

Change-Id: I79dfeed16a23829a7217cef33ab83f9f0ffae336
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
BUG: 1007509
Reviewed-on: http://review.gluster.org/5746
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
