<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-hooks.c, branch v3.7.5</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Porting left out log messages to new framework</title>
<updated>2015-06-27T06:56:09+00:00</updated>
<author>
<name>Nandaja Varma</name>
<email>nandaja.varma@gmail.com</email>
</author>
<published>2015-06-24T19:27:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8708953fa3d9187997dc6d484dae663b4469c7ca'/>
<id>8708953fa3d9187997dc6d484dae663b4469c7ca</id>
<content type='text'>
This is a backport of http://review.gluster.org/11388
cherry-picked from commit 23c1e6dc0fa86c014e1a8b6aa5729675f6d69017
&gt;Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
&gt;BUG: 1235538
&gt;Signed-off-by: Nandaja Varma &lt;nandaja.varma@gmail.com&gt;

Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
BUG: 1217722
Signed-off-by: Nandaja Varma &lt;nandaja.varma@gmail.com&gt;
Reviewed-on: http://review.gluster.org/11422
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/shared_storage: Provide a volume set option to create and mount the shared storage</title>
<updated>2015-06-05T09:20:26+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2015-05-14T09:30:59+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ae87a7fedfcf7f6b287ef5c3860f45412363e4f6'/>
<id>ae87a7fedfcf7f6b287ef5c3860f45412363e4f6</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/10793/

Introducing a global volume set option (cluster.enable-shared-storage)
which helps create and set up the shared storage meta volume.

gluster volume set all cluster.enable-shared-storage enable

On enabling this option, the system analyzes the number of peers
in the cluster that are currently connected and chooses three
such peers (including the node the command is issued from). From these
peers a volume (gluster_shared_storage) is created. Depending on the
number of peers available, the volume is either a replica 3
volume (if there are 3 connected peers) or a replica 2 volume (if there
are 2 connected peers). "/var/run/gluster/ss_brick" serves as the
brick path on each node for the shared storage volume. We also mount
the shared storage at "/var/run/gluster/shared_storage" on all the nodes
in the cluster as part of enabling this option. If there is only one node
in the cluster, or only one node is up, then the command will fail.
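
A minimal verification sketch, hedged: the enable command is the one shown
above, and "gluster volume info" / "mount" are standard commands used here
only to confirm the volume and mount point this message describes:

    gluster volume set all cluster.enable-shared-storage enable
    gluster volume info gluster_shared_storage        # expect replica 2 or replica 3
    mount | grep /var/run/gluster/shared_storage      # should appear on every node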

Once the volume is created and mounted, maintenance of the
volume, such as adding bricks and removing bricks, is expected to
be the onus of the user.

On disabling the option, we provide the user a warning, and on
affirmation from the user we stop the shared storage volume and unmount
it from all the nodes in the cluster.

gluster volume set all cluster.enable-shared-storage disable

Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
BUG: 1228181
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11086
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Replace libglusterfs lists with liburcu lists</title>
<updated>2015-03-04T07:50:22+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2015-01-06T12:53:41+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=673ba2659cebe22ee30c43f9fb080f330150f55e'/>
<id>673ba2659cebe22ee30c43f9fb080f330150f55e</id>
<content type='text'>
This patch replaces usage of the libglusterfs lists data structures and
API in glusterd with the lists data structures and API from liburcu. The
liburcu data structures and APIs are a drop-in replacement for
libglusterfs lists.

All usages have been changed to keep the code consistent and free from
confusion.

NOTE: glusterd_conf_t-&gt;xprt_list still uses the libglusterfs data
structures and API, as it holds rpc_transport_t objects; rpc_transport_t
is not part of glusterd and is not being changed in this patch.

This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
  6dac576 Replace libglusterfs lists with liburcu lists
  a51b5ab Fix compilation issues
  d98a06f Fix merge issues
  a5d918e Remove merge remnant
  1cca113 More style cleanup
  1917be3 Address review comments on 9624/1
  8d10f13 Use cds_lists for glusterd_svc_t
  524ad5d Add rculist header in glusterd-conn-helper.c
  646f294 glusterd: add list_add_order API honouring rcu

[1]: https://github.com/kshlm/glusterfs/tree/urcu

Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
BUG: 1191030
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9624
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/changelog: Cleanup .processing and .current directory</title>
<updated>2015-01-18T12:30:42+00:00</updated>
<author>
<name>Aravinda VK</name>
<email>avishwan@redhat.com</email>
</author>
<published>2014-11-10T07:11:49+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=dec4700c663975896f3aad1b4e59257263b4f4ac'/>
<id>dec4700c663975896f3aad1b4e59257263b4f4ac</id>
<content type='text'>
On changelog_register, clean up .processing, .history/.processing,
.current and .history/.current from the working directory.
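
A hedged shell sketch of the equivalent manual cleanup; the brick path is a
placeholder and the .glusterfs/changelogs location is an assumption about
the default changelog working directory, not taken from this patch:

    cd /bricks/brick1/.glusterfs/changelogs
    rm -rf .processing .current .history/.processing .history/.current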

Moved glusterd_recursive_rmdir and glusterd_for_each_entry to a common
place (libglusterfs) and renamed them recursive_rmdir and
GF_FOR_EACH_ENTRY_IN_DIR respectively.

BUG: 1162057
Change-Id: I1f98468a344cead039026762a805437b2f9e507b
Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9082
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Venky Shankar &lt;vshankar@redhat.com&gt;
Tested-by: Venky Shankar &lt;vshankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>build: make GLUSTERD_WORKDIR rely on localstatedir</title>
<updated>2014-08-07T08:17:29+00:00</updated>
<author>
<name>Harshavardhana</name>
<email>harsha@harshavardhana.net</email>
</author>
<published>2014-06-30T01:56:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2ec6ea43f2ddc6c00a030be6d04c00f0924277b7'/>
<id>2ec6ea43f2ddc6c00a030be6d04c00f0924277b7</id>
<content type='text'>
- Break away from the previously hard-coded '/var/lib/glusterd';
  instead rely on the 'configure' value of 'localstatedir'
  (see the sketch after this list)
- Provide 's/lib/db' as the default working directory for the gluster
  management daemon on BSD and Darwin based installations
- loff_t is really off_t on Darwin
- Fix the warnings generated by clang on FreeBSD/Darwin
- 'tests/*' now use GLUSTERD_WORKDIR, a common variable for all
  platforms.
- Define proper environment for running tests, define correct PATH
  and LD_LIBRARY_PATH when running tests, so that the desired version
  of glusterfs is used, regardless where it is installed.
  (Thanks to manu@netbsd.org for this additional work)
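
A hedged illustration using the standard autoconf flag; the resulting paths
follow this commit's description, with the BSD/Darwin path reflecting the
s/lib/db default noted above:

    ./configure --localstatedir=/var
    # GLUSTERD_WORKDIR: /var/lib/glusterd on Linux,
    #                   /var/db/glusterd on BSD/Darwin installations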

Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7
BUG: 1111774
Signed-off-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/8246
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>rpm: Rename old hookscripts in an RPM-standard way.</title>
<updated>2014-05-29T17:09:31+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2014-05-22T11:16:15+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a726e3e538f65d028d5da4cdbdd7892b9577dba1'/>
<id>a726e3e538f65d028d5da4cdbdd7892b9577dba1</id>
<content type='text'>
Change-Id: I5e47ac8af8033821787281574276f38933de73cf
BUG: 1100251
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7362
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli/hooks : Add volume set options to enable/disable nfs-ganesha support.</title>
<updated>2014-05-03T14:24:07+00:00</updated>
<author>
<name>Meghana M</name>
<email>mmadhusu@redhat.com</email>
</author>
<published>2014-03-24T08:28:38+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0088e318c1218191535ea0baa04b4fe858412f54'/>
<id>0088e318c1218191535ea0baa04b4fe858412f54</id>
<content type='text'>
1. gluster volume set nfs-ganesha.enable ON/OFF
If the option is set to ON, the volume field in the nfs-ganesha configuration file is
edited. Gluster-nfs is disabled on that volume and the volume is exported using
nfs-ganesha.

2. gluster volume set nfs-ganesha.host IP
This is used to provide the IP of the nfs-ganesha host.

Note: nfs-ganesha.host MUST be set before using nfs-ganesha.enable ON
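
A hedged usage sketch reflecting the ordering rule above; "demo-vol" and the
IP address are placeholders, and the volume-name argument follows the standard
'gluster volume set &lt;VOLNAME&gt; &lt;KEY&gt; &lt;VALUE&gt;' form:

    gluster volume set demo-vol nfs-ganesha.host 192.168.1.10
    gluster volume set demo-vol nfs-ganesha.enable ON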

The switch from gluster-nfs to nfs-ganesha is mostly done by the hook-scripts
in the post phase of the 'set' option. As a result, gluster volume reset does
not function as expected. By default, nfs-ganesha will be set to off, but the
process will not be killed.

Hence, a few changes have to be made for the 'reset' option as well. Those
changes have also been added.

Change-Id: I7fdc14ee49d1724af96eda33c6a3ec08b1020788
BUG: 1092283
Signed-off-by: Meghana &lt;mmadhusu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7321
Reviewed-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>build: MacOSX Porting fixes</title>
<updated>2014-04-24T21:41:48+00:00</updated>
<author>
<name>Harshavardhana</name>
<email>harsha@harshavardhana.net</email>
</author>
<published>2014-04-17T22:54:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a3cb38e3edf005bef73da4c9cfd958474a14d50f'/>
<id>a3cb38e3edf005bef73da4c9cfd958474a14d50f</id>
<content type='text'>
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

Working functionality on MacOSX

 - GlusterD (management daemon)
 - GlusterCLI (management cli)
 - GlusterFS FUSE (using OSXFUSE)
 - GlusterNFS (without NLM - issues with rpc.statd)

Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Signed-off-by: Dennis Schafroth &lt;dennis@schafroth.com&gt;
Tested-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Tested-by: Dennis Schafroth &lt;dennis@schafroth.com&gt;
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/quota: Add the quota config xattr to newly added brick</title>
<updated>2013-11-26T19:57:11+00:00</updated>
<author>
<name>Varun Shastry</name>
<email>vshastry@redhat.com</email>
</author>
<published>2013-10-15T11:55:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=50a6e9a74014897b003cadbd297fd0343eb56367'/>
<id>50a6e9a74014897b003cadbd297fd0343eb56367</id>
<content type='text'>
Issue:
Quota directory limit configuration is stored in xattrs. When a new brick
is added, these 'limit-set' xattrs have to be created on the directory in the
new brick. This is done by DHT directory healing when the directory is
created in the new brick. Since the 'root' directory is already created, DHT
doesn't heal the limit-set xattr on root.

Solution:
When the add-brick command is issued, run the below hook script to heal the
'limit-set' xattr. The hook script does the following only if a limit is
configured on root:
    1. Create an auxiliary mount.
    2. getxattr 'limit-set' on the root
    3. setxattr the same value on the root
But this script needs the volume to be started to make the auxiliary mount.
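
A hedged sketch of what those three steps amount to on the command line;
"demo-vol", the mount point, and the exact xattr name (commonly
trusted.glusterfs.quota.limit-set) are assumptions, not taken from the patch:

    mount -t glusterfs localhost:/demo-vol /mnt/aux                                  # 1. auxiliary mount
    getfattr --absolute-names -e hex -n trusted.glusterfs.quota.limit-set /mnt/aux   # 2. read limit on root
    setfattr -n trusted.glusterfs.quota.limit-set -v 0x&lt;value-from-step-2&gt; /mnt/aux  # 3. write the same value back
    umount /mnt/aux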

To handle the case where add-brick is issued while the volume is stopped, a
symlink is created by the 'master' script at the corresponding location, and
these two scripts are disabled by default.

So, a 'master' script is added in add-brick/pre. When the add-brick command is
issued, it enables one of the scripts mentioned above based on the condition:
    if volume is started - enable add-brick/post script
    else                 - enable start/post script
After the actual script completes its job, it disables itself.

Note:
The enabling and disabling of the script is based on glusterd's logic that
it only runs the scripts whose names start with 'S'. So,
    Enable     - symlink the file to 'S'*
    Disable    - unlink the symlink.

Change-Id: I2d3947a4d686c54417ec95f530af3bdd3444f4e2
BUG: 969461
Signed-off-by: Varun Shastry &lt;vshastry@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6104
Reviewed-by: Brian Foster &lt;bfoster@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/cli changes for distributed geo-rep</title>
<updated>2013-07-26T20:19:18+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2013-07-10T12:02:41+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5757ed2727990fd2c3aaff420003638f1eec6b92'/>
<id>5757ed2727990fd2c3aaff420003638f1eec6b92</id>
<content type='text'>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; create [push-pem] [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; start [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; stop [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; delete
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; config
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; status
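
A hedged end-to-end sketch of the commands above; "mastervol" and
"slavehost::slavevol" are placeholder names in the &lt;master&gt; and
&lt;slave-url&gt; positions:

    gluster system:: execute gsec_create
    gluster volume geo-rep mastervol slavehost::slavevol create push-pem
    gluster volume geo-rep mastervol slavehost::slavevol start
    gluster volume geo-rep mastervol slavehost::slavevol status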

The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.

geo-rep: Collecting status detail related data

Added persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced

Changes in the status information in socket:
Existing (Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;

New (Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta &lt;asengupt@redhat.com&gt;
Original Author: Venky Shankar &lt;vshankar@redhat.com&gt;
Original Author: Aravinda VK &lt;avishwan@redhat.com&gt;
Original Author: Amar Tumballi &lt;amarts@redhat.com&gt;
Original Author: Csaba Henk &lt;csaba@redhat.com&gt;
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
</feed>
