<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.h, branch v3.10.0</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>snapshot: Fix the failure to recreate clones with same name</title>
<updated>2016-11-02T06:28:26+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-10-20T07:28:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9a2b3fb8b9ff28edafa012dacc5f5f0e4ee1afab'/>
<id>9a2b3fb8b9ff28edafa012dacc5f5f0e4ee1afab</id>
<content type='text'>
The brick path of snapshot clones contained the clone name,
so creating a new clone with the same name failed even after
the original clone had been deleted.

This fix builds the brick path from the clone's vol id
instead of the clone's name, so future clones with the same
name no longer clash in the brick namespace.

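A minimal C sketch of the idea, using libuuid; the path layout and
helper below are illustrative assumptions, not the actual glusterd
code:

  #include &lt;stdio.h&gt;
  #include &lt;uuid/uuid.h&gt;

  /* Key the clone brick path on the clone volume's uuid rather
   * than its (reusable) name. Hypothetical path layout. */
  static void clone_brick_path(char *buf, size_t len, const uuid_t vol_id,
                               const char *brick_dir)
  {
      char id_str[37]; /* uuid_unparse writes 36 chars + NUL */

      uuid_unparse(vol_id, id_str);
      snprintf(buf, len, "/run/gluster/snaps/%s/%s", id_str, brick_dir);
  }

Deleting a clone and recreating one with the same name now yields a
distinct path, because a fresh volume id is generated for each clone.
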
Change-Id: I262712adc576122f051b5d1ce171d020efaefd1a
BUG: 1387160
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15683
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/cli: cli to get local state representation from glusterd</title>
<updated>2016-08-26T15:23:37+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2016-07-07T15:03:02+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6'/>
<id>4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6</id>
<content type='text'>
Currently there is no CLI that can be used to get the local state
representation of the cluster, as maintained in glusterd, in a
readable as well as parseable format.

The CLI added has the following usage:

 # gluster get-state [daemon] [odir &lt;path/to/output/dir&gt;] [file &lt;filename&gt;]

This dumps data points that reflect the local state representation
of the cluster as maintained in glusterd (no other daemons are
supported as of now) to a file inside the specified output
directory. The default output directory and filename are
/var/run/gluster and glusterd_state_&lt;timestamp&gt; respectively (a
sketch of this default path handling follows the list below). The
option for specifying the daemon name leaves room to add support
for other daemons in the future. The following data points are
currently captured to represent the state from the local glusterd
point of view:

 * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

 * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse,
      redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

 * Bricks:
    - Path, hostname (shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
      signed-in status for bricks running locally

 * Services:
    - name, online status for initialised services

 * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID

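A minimal C sketch of the default output-file naming described
above; the timestamp format is an assumption, not necessarily the
one glusterd uses:

  #include &lt;stdio.h&gt;
  #include &lt;time.h&gt;

  /* Build the default state-dump path:
   * /var/run/gluster/glusterd_state_&lt;timestamp&gt; */
  static void default_state_path(char *buf, size_t len)
  {
      char ts[32];
      time_t now = time(NULL);

      strftime(ts, sizeof(ts), "%Y%m%d_%H%M%S", localtime(&amp;now));
      snprintf(buf, len, "/var/run/gluster/glusterd_state_%s", ts);
  }
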
Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot : copying nfs-ganesha export file</title>
<updated>2015-10-30T20:39:42+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2015-08-27T17:56:40+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5583bac79851d24f0a552478b361049fe63c32b7'/>
<id>5583bac79851d24f0a552478b361049fe63c32b7</id>
<content type='text'>
While taking a snapshot, the export file used by the volume should
be copied to the snap directory, so that when the snapshot is
restored the volume can retain all of its configuration for
exporting via nfs-ganesha. The export file is stored in
"/etc/ganesha/export" and is named "export.&lt;volname&gt;.conf".

The fix handles the given cases in the following manner:

case a: nfs-ganesha (global) is ON during snapshot and restore.
        i.) The volume was exported when the snapshot was taken.
            When we restore the snapshot, the volume should be
            exported again with the old configuration file.
        ii.) The volume was unexported when the snapshot was taken.
             When we restore the snapshot, the volume should be
             unexported again.

case b: nfs-ganesha is ON during snapshot and OFF during restore.
        The volume was exported when the snapshot was taken. When we
        restore the snapshot, the conf file is copied to the
        corresponding location and, if nfs-ganesha is enabled again,
        the volume will be exported.

For clones, the export conf file is created in /etc/ganesha/export
and the clone is then exported via ganesha.

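A minimal C sketch of the copy step; the destination layout and
helper name are illustrative assumptions, not the actual glusterd
code:

  #include &lt;stdio.h&gt;

  /* Copy /etc/ganesha/export/export.&lt;volname&gt;.conf into the
   * snapshot's config directory (hypothetical destination). */
  static int copy_export_conf(const char *volname, const char *snapdir)
  {
      char src[256], dst[512], buf[4096];
      size_t n;
      FILE *in, *out;

      snprintf(src, sizeof(src),
               "/etc/ganesha/export/export.%s.conf", volname);
      snprintf(dst, sizeof(dst), "%s/export.%s.conf", snapdir, volname);

      in = fopen(src, "r");
      if (!in)
          return -1;
      out = fopen(dst, "w");
      if (!out) {
          fclose(in);
          return -1;
      }
      while ((n = fread(buf, 1, sizeof(buf), in)) &gt; 0)
          fwrite(buf, 1, n, out);
      fclose(in);
      fclose(out);
      return 0;
  }
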
Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
BUG: 1257709
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12034
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot:cleanup snaps during unprobe</title>
<updated>2015-08-26T10:47:48+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2015-03-17T14:27:47+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e883e98998404a9e1ef18516d88520cfe2451b3f'/>
<id>e883e98998404a9e1ef18516d88520cfe2451b3f</id>
<content type='text'>
When doing an unprobe, any volume that does not contain a brick of
the detached node is deleted on that node. So the snaps associated
with such a volume should also be deleted.

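A minimal C sketch of the cleanup rule; the structures and field
names are illustrative assumptions, not glusterd's real data model:

  #include &lt;stdbool.h&gt;
  #include &lt;string.h&gt;

  struct brick  { char node_uuid[37]; };
  struct volume {
      struct brick bricks[8];
      int nbricks;
      int nsnaps; /* stand-in for the volume's snapshot list */
  };

  static bool has_brick_on_node(const struct volume *v, const char *uuid)
  {
      for (int i = 0; i &lt; v-&gt;nbricks; i++)
          if (strcmp(v-&gt;bricks[i].node_uuid, uuid) == 0)
              return true;
      return false;
  }

  /* On detach: a volume with no brick on this node is deleted, so
   * its snaps must be dropped too (the fix in this patch). */
  static void cleanup_on_detach(struct volume *vols, int n, const char *uuid)
  {
      for (int i = 0; i &lt; n; i++)
          if (!has_brick_on_node(&amp;vols[i], uuid))
              vols[i].nsnaps = 0; /* delete snaps, then the volume */
  }
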
Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
BUG: 1203185
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9930
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: Return correct errno in events of failure - PATCH 2</title>
<updated>2015-06-02T09:59:34+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2015-05-05T12:38:25+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2df57ab7dc7b9d7deb0eebad96036149760d607b'/>
<id>2df57ab7dc7b9d7deb0eebad96036149760d607b</id>
<content type='text'>
ENUM           RETCODE        ERROR
-------------------------------------------------------------
EG_INTRNL      30800          Internal Error
EG_OPNOTSUP    30801          Gluster Op Not Supported
EG_ANOTRANS    30802          Another Transaction in Progress
EG_BRCKDWN     30803          One or more brick is down
EG_NODEDWN     30804          One or more node is down
EG_HRDLMT      30805          Hard Limit is reached
EG_NOVOL       30806          Volume does not exist
EG_NOSNAP      30807          Snap does not exist
EG_RBALRUN     30808          Rebalance is running
EG_VOLRUN      30809          Volume is running
EG_VOLSTP      30810          Volume is not running
EG_VOLEXST     30811          Volume exists
EG_SNAPEXST    30812          Snapshot exists
EG_ISSNAP      30813          Volume is a snap volume
EG_GEOREPRUN   30814          Geo-Replication is running
EG_NOTTHINP    30815          Bricks are not thinly provisioned

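The table maps directly to a C enum; a sketch of how these codes
could be declared (the actual header layout in glusterd may differ):

  /* Gluster-specific errnos, above the system errno range. */
  enum gf_gld_errno {
      EG_INTRNL    = 30800, /* Internal Error                    */
      EG_OPNOTSUP  = 30801, /* Gluster Op Not Supported          */
      EG_ANOTRANS  = 30802, /* Another Transaction in Progress   */
      EG_BRCKDWN   = 30803, /* One or more brick is down         */
      EG_NODEDWN   = 30804, /* One or more node is down          */
      EG_HRDLMT    = 30805, /* Hard Limit is reached             */
      EG_NOVOL     = 30806, /* Volume does not exist             */
      EG_NOSNAP    = 30807, /* Snap does not exist               */
      EG_RBALRUN   = 30808, /* Rebalance is running              */
      EG_VOLRUN    = 30809, /* Volume is running                 */
      EG_VOLSTP    = 30810, /* Volume is not running             */
      EG_VOLEXST   = 30811, /* Volume exists                     */
      EG_SNAPEXST  = 30812, /* Snapshot exists                   */
      EG_ISSNAP    = 30813, /* Volume is a snap volume           */
      EG_GEOREPRUN = 30814, /* Geo-Replication is running        */
      EG_NOTTHINP  = 30815, /* Bricks are not thinly provisioned */
  };
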
Change-Id: I49a170cdfd77df11fe677e09f4e063d99b159275
BUG: 1212413
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10588
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot : While snapshot restore, compute quota checksum.</title>
<updated>2015-04-10T08:19:09+00:00</updated>
<author>
<name>Sachin Pandit</name>
<email>spandit@redhat.com</email>
</author>
<published>2015-03-16T07:42:12+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=da48df4e91b69b8f586d658de9573287cad2ce64'/>
<id>da48df4e91b69b8f586d658de9573287cad2ce64</id>
<content type='text'>
Problem: During snapshot restore we copy the quota conf file
anyway; after that we need to compute its checksum. If we do not,
there might be a checksum mismatch during the glusterd handshake.

Solution: Compute a checksum file for the quota conf file if it is
present.

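A minimal C sketch of the step: derive a checksum from the copied
conf file and store it alongside; the algorithm and ".cksum" naming
are assumptions, not glusterd's actual checksum routine:

  #include &lt;stdio.h&gt;
  #include &lt;stdint.h&gt;

  /* Write "&lt;conf&gt;.cksum" containing a simple rolling checksum. */
  static int write_quota_cksum(const char *conf_path)
  {
      char cksum_path[512];
      uint32_t sum = 0;
      int c;
      FILE *in, *out;

      in = fopen(conf_path, "r");
      if (!in)
          return 0; /* conf file absent: nothing to do */
      while ((c = fgetc(in)) != EOF)
          sum = sum * 31 + (uint32_t)c;
      fclose(in);

      snprintf(cksum_path, sizeof(cksum_path), "%s.cksum", conf_path);
      out = fopen(cksum_path, "w");
      if (!out)
          return -1;
      fprintf(out, "%u\n", sum);
      fclose(out);
      return 0;
  }
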
Change-Id: Ic4a6567c6ede9923443abf4ca59380679be88094
BUG: 1202436
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9901
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: compute quorum on peers in cluster</title>
<updated>2015-04-02T17:58:13+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2015-01-27T11:14:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f66a85a484d41e68611f68d70e458c67f484d2e1'/>
<id>f66a85a484d41e68611f68d70e458c67f484d2e1</id>
<content type='text'>
... and not on peers participating in an ongoing
transaction.

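A minimal C sketch of the distinction; the counting rule and names
are illustrative assumptions, not the actual glusterd quorum code:

  /* Quorum must be computed over every peer in the cluster,
   * not just the peers taking part in the current transaction. */
  static int have_server_quorum(int cluster_peers, int active_peers)
  {
      return active_peers &gt; cluster_peers / 2; /* more than half */
  }
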
Change-Id: I6bdb80fd3bf3e7593fdf37e45a441d4a490469b8
BUG: 1205592
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9493
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Protect the peer list and peerinfos with RCU.</title>
<updated>2015-03-16T09:19:14+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2015-01-08T13:54:59+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c7785f78420c94220954eef538ed4698713ebcdb'/>
<id>c7785f78420c94220954eef538ed4698713ebcdb</id>
<content type='text'>
The peer list and the peerinfo objects are now protected using RCU.
Design patterns described in Paul McKenney's RCU dissertation [1]
(sections 5 and 6) have been used to convert existing non-RCU
protected code to RCU protected code.

Currently, we are only targeting guaranteeing the existence of the
peerinfo objects, i.e., we are only looking to protect deletes, not
all updates. We chose this, as protecting all updates is a much
more complex task.

The steps used to accomplish this are as follows (a liburcu sketch
of the resulting patterns appears after the list):

1. Remove all long-lived direct references to peerinfo objects
(apart from the peerinfo list). This includes references in
glusterd_peerctx_t (RPC), glusterd_friend_sm_event_t (friend state
machine) and others. This way no one holds a reference to a deleted
peerinfo object.

2. Replace the direct references with indirect references, i.e.,
use the peer uuid and peer hostname as indirect references to the
peerinfo object. Any reader or updater now uses the indirect
references to get to the actual peerinfo object, using
glusterd_peerinfo_find. Cases where a peerinfo cannot be found are
handled gracefully.

3. The readers get and use the peerinfo object only within an RCU
read critical section. This prevents the object from being
deleted/freed while in actual use.

4. The deletion of a peerinfo object is done in an ordered manner
(glusterd_peerinfo_destroy). The object is first removed from the
peerinfo list using an atomic list remove, but the list head is not
reset, to allow existing list readers to complete correctly. We
wait for readers to complete before resetting the list head. This
removes the object from the list completely. After this no new
readers can get a reference to the object, and it can be freed.

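A minimal liburcu sketch of the read-side (step 3) and
ordered-delete (step 4) patterns; the structures and names are
illustrative, not the actual glusterd code (assumes liburcu; build
with -lurcu):

  #include &lt;stdlib.h&gt;
  #include &lt;string.h&gt;
  #define RCU_MEMBARRIER
  #include &lt;urcu.h&gt;          /* rcu_read_lock, call_rcu, ...   */
  #include &lt;urcu/rculist.h&gt;  /* cds_list_* RCU list primitives */

  struct peerinfo {
      char hostname[64];
      struct cds_list_head list;
      struct rcu_head rcu;   /* for deferred freeing */
  };

  static CDS_LIST_HEAD(peers);

  static void peerinfo_free_rcu(struct rcu_head *head)
  {
      free(caa_container_of(head, struct peerinfo, rcu));
  }

  /* Step 3: reader threads register once with
   * rcu_register_thread() and must hold rcu_read_lock() around
   * the lookup; the returned object is valid only inside that
   * read critical section. */
  static struct peerinfo *peer_find(const char *name)
  {
      struct peerinfo *p;

      cds_list_for_each_entry_rcu(p, &amp;peers, list)
          if (strcmp(p-&gt;hostname, name) == 0)
              return p;
      return NULL;
  }

  /* Step 4: unlink first (readers already in the list finish
   * safely), then defer the free until all of them are done. */
  static void peer_delete(struct peerinfo *p)
  {
      cds_list_del_rcu(&amp;p-&gt;list);
      call_rcu(&amp;p-&gt;rcu, peerinfo_free_rcu);
  }
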
This change was developed on the git branch at [2]. This commit is a
combination of the following commits on the development branch.
  d7999b9 Protect the glusterd_conf_t-&gt;peers_list with RCU.
  0da85c4 Synchronize before INITing peerinfo list head after removing
          from list.
  32ec28a Add missing rcu_read_unlock
  8fed0b8 Correctly exit read critical section once peer is found.
  63db857 Free peerctx only on rpc destruction
  56eff26 Cleanup style issues
  e5f38b0 Indirection for events and friend_sm
  3c84ac4 In __glusterd_probe_cbk goto unlock only if peer already
          exists
  141d855 Address review comments on 9695/1
  aaeefed Protection during peer updates
  6eda33d Revert "Synchronize before INITing peerinfo list head after
          removing from list."
  f69db96 Remove unneeded line
  b43d2ec Address review comments on 9695/4
  7781921 Address review comments on 9695/5
  eb6467b Add some missing semi-colons
  328a47f Remove synchronize_rcu from
          glusterd_friend_sm_transition_state
  186e429 Run part of glusterd_friend_remove in critical section
  55c0a2e Fix gluster (peer status/ pool list) with no peers
  93f8dcf Use call_rcu to free peerinfo
  c36178c Introduce composite struct, gd_rcu_head

[1]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
[2]: https://github.com/kshlm/glusterfs/tree/urcu

Change-Id: Ic1480e59c86d41d25a6a3d159aa3e11fbb3cbc7b
BUG: 1191030
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9695
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Replace libglusterfs lists with liburcu lists</title>
<updated>2015-03-04T07:50:22+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2015-01-06T12:53:41+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=673ba2659cebe22ee30c43f9fb080f330150f55e'/>
<id>673ba2659cebe22ee30c43f9fb080f330150f55e</id>
<content type='text'>
This patch replaces usage of the libglusterfs list data structures
and API in glusterd with the list data structures and API from
liburcu. The liburcu data structures and APIs are a drop-in
replacement for libglusterfs lists.

All usages have been changed to keep the code consistent and free
from confusion.

NOTE: glusterd_conf_t-&gt;xprt_list still uses the libglusterfs data
structures and API, as it holds rpc_transport_t objects, which is not a
part of glusterd and is not being changed in this patch.

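A minimal sketch of the drop-in correspondence; the struct and the
"was:" comments are illustrative (assumes liburcu's urcu/list.h):

  #include &lt;urcu/list.h&gt;

  struct peer {
      struct cds_list_head list; /* was: struct list_head list */
      int id;
  };

  static CDS_LIST_HEAD(peer_list); /* was: a libglusterfs list head */

  static void add_peer(struct peer *p)
  {
      /* was: list_add_tail(&amp;p-&gt;list, &amp;peer_list); */
      cds_list_add_tail(&amp;p-&gt;list, &amp;peer_list);
  }
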
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
  6dac576 Replace libglusterfs lists with liburcu lists
  a51b5ab Fix compilation issues
  d98a06f Fix merge issues
  a5d918e Remove merge remnant
  1cca113 More style cleanup
  1917be3 Address review comments on 9624/1
  8d10f13 Use cds_lists for glusterd_svc_t
  524ad5d Add rculist header in glusterd-conn-helper.c
  646f294 glusterd: add list_add_order API honouring rcu

[1]: https://github.com/kshlm/glusterfs/tree/urcu

Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
BUG: 1191030
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9624
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: nfs,shd,quotad,snapd daemons refactoring</title>
<updated>2015-02-20T12:04:08+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2015-02-11T11:43:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9d842f965655bf70c643b4541844e83bc4e74190'/>
<id>9d842f965655bf70c643b4541844e83bc4e74190</id>
<content type='text'>
This patch ports the nfs, shd, quotad &amp; snapd daemons to the
approach suggested in
http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html

Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
BUG: 1191486
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Signed-off-by:  Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9428
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
</content>
</entry>
</feed>
