<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd.h, branch v3.7.20</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd-ganesha : copy ganesha export configuration files during reboot</title>
<updated>2016-05-06T12:59:44+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2016-04-18T16:04:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=058a5a7bf5fa099fe2d6de86ac35fa5594b1ebd8'/>
<id>058a5a7bf5fa099fe2d6de86ac35fa5594b1ebd8</id>
<content type='text'>
glusterd creates the export conf file for ganesha using a hook script
during volume start, and via ganesha_manage_export() for the volume set
command, but neither routine is invoked in the glusterd restart
scenario.
Consider the following case: in a three-node cluster, a volume gets
exported via ganesha while one of the nodes is offline (glusterd is not
running). When that node comes back online, the volume is not exported
on it due to the above-mentioned issue.
Also, unused variables have been removed from
glusterd_handle_ganesha_op().
For this patch to work, the pcs cluster should be running on that node.
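
For reference, a minimal sketch of the restore-time re-export; the
wrapper below is hypothetical, and the ganesha_manage_export() signature
is simplified for illustration:

    /* Hypothetical: run for each volume while glusterd restores its
     * state, so a ganesha-exported volume is re-exported on reboot. */
    static int
    restore_ganesha_export (glusterd_volinfo_t *volinfo)
    {
            char *value = NULL;

            if (glusterd_volinfo_get (volinfo, "ganesha.enable", &amp;value))
                    return 0;
            if (!value || strcmp (value, "on"))
                    return 0;          /* volume was not exported */

            /* routine named by this change; signature simplified here */
            return ganesha_manage_export (volinfo, "on", NULL);
    }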

Upstream reference
&gt;Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
&gt;BUG: 1330097
&gt;Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/14063
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;

Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
BUG: 1333661
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14233
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: use string comparison for realpath checks in glusterd_is_brickpath_available</title>
<updated>2016-03-03T04:15:59+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-01-19T05:15:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cdc96d5d4c6247d9b0ca942eeb37338dacfe93ee'/>
<id>cdc96d5d4c6247d9b0ca942eeb37338dacfe93ee</id>
<content type='text'>
Backport of http://review.gluster.org/13258

glusterd_is_brickpath_available () used to call realpath() to check
whether the new brick path matches any of the existing ones. The problem
with this is that if the underlying file system is bad for any one of
the existing bricks, realpath() would fail and we would refuse to create
the new brick even when it should be allowed.

The fix is to use string comparison instead, with a new field real_path
in brickinfo to store the absolute path.
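
A minimal sketch of the approach (the real_path field comes from the
patch; the surrounding code is assumed):

    /* At brick-create time: resolve once and cache the result, falling
     * back to the user-supplied path if resolution fails. */
    if (!realpath (brickinfo-&gt;path, brickinfo-&gt;real_path))
            strncpy (brickinfo-&gt;real_path, brickinfo-&gt;path,
                     sizeof (brickinfo-&gt;real_path) - 1);

    /* In glusterd_is_brickpath_available(): a plain string comparison,
     * which cannot fail even when an old brick's file system is bad. */
    if (!strcmp (new_real_path, brickinfo-&gt;real_path))
            goto out;        /* path already in use */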

Change-Id: I1250ea5345f00fca0f6128056ebd08750d604f0a
BUG: 1312878
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13258
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13550
</content>
</entry>
<entry>
<title>glusterd/rebalance: initialize defrag variable after glusterd restart</title>
<updated>2016-02-25T03:49:27+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2016-01-29T10:54:02+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d9cc672719b96168c46bc82334f44efc010adad5'/>
<id>d9cc672719b96168c46bc82334f44efc010adad5</id>
<content type='text'>
When a rebalance process restarts after glusterd itself has restarted,
glusterd does not connect to the rebalance process, because the defrag
variable in volinfo is null.

Initializing the variable lets the rpc connection be made.
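
A minimal sketch of the fix's shape (allocation details are assumed, not
the literal patch):

    /* On glusterd restart, volinfo-&gt;rebal.defrag is NULL even though
     * a rebalance task is recorded, so the rpc reconnect path bails
     * out.  Allocating the defrag info first lets it proceed. */
    if (!volinfo-&gt;rebal.defrag)
            volinfo-&gt;rebal.defrag =
                    GF_CALLOC (1, sizeof (glusterd_defrag_info_t),
                               gf_gld_mt_defrag_info);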

Backport of:
&gt;Change-Id: Id820cad6a3634a9fc976427fbe1c45844d3d4b9b
&gt;BUG: 1303028
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/13319
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;

(cherry picked from commit a67331f3f79e827ffa4f7a547f6898e12407bbf9)

Change-Id: Ieec82a798da937002e09fb9325c93678a5eefca8
BUG: 1311041
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13494
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: set decommission_is_in_progress flag for inprogress remove-brick op on glusterd restart</title>
<updated>2016-02-23T15:04:28+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-01-30T03:17:35+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=28c2798d5e1f36e3f57192b693758fa8a9f26743'/>
<id>28c2798d5e1f36e3f57192b693758fa8a9f26743</id>
<content type='text'>
Backport of http://review.gluster.org/13323

While a remove-brick operation is in progress, if glusterd is restarted,
the decommission flag is lost because it is not persisted in the store.
As a result, glusterd no longer blocks a remove-brick commit while the
rebalance is still in progress.
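
A minimal sketch of the idea: rather than persisting the flag, it can be
re-derived while volinfo is restored (field and enum names are
assumptions based on the description):

    /* While restoring volinfo from the store: if the recorded task is
     * a still-running remove-brick, mark decommissioning in progress
     * again so a premature commit keeps getting blocked. */
    if (volinfo-&gt;rebal.op == GD_OP_REMOVE_BRICK &amp;&amp;
        volinfo-&gt;rebal.defrag_status == GF_DEFRAG_STATUS_STARTED)
            volinfo-&gt;decommission_in_progress = 1;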

Change-Id: Ibbf12f3792d65ab1293fad1e368568be141a1cd6
BUG: 1310972
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13323
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-by: mohammed rafi  kc &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13489
</content>
</entry>
<entry>
<title>Tier: "tier start force" command implementation</title>
<updated>2015-12-23T01:45:00+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2015-12-16T10:48:29+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=85d34ea0cf8b687c10093ae06417e498e252e563'/>
<id>85d34ea0cf8b687c10093ae06417e498e252e563</id>
<content type='text'>
Backport of: http://review.gluster.org/#/c/12983/

The start command does not restart the tier daemon if the daemon is
already running on any one node. Hence, to bring up tierd on the nodes
where the daemon is down, the force variant of the command is
implemented; it skips the check for a running tierd.
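
With this patch the force variant takes the form below (invocation shown
for illustration; inferred from the command name rather than quoted from
the patch):

gluster volume tier &lt;VOLNAME&gt; start force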

&gt;Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436
&gt;BUG: 1292112
&gt;Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;

Change-Id: Idaca442c1a41ded8bf555a6e34eed0ebb9ea4034
BUG: 1293698
Signed-off-by: hari &lt;hgowtham@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13069
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Tested-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/quota: quota-version conflict in export/import volinfo</title>
<updated>2015-12-09T07:25:25+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2015-12-03T09:25:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f18432dde90fbea02ff656eecc916e8b04e0d516'/>
<id>f18432dde90fbea02ff656eecc916e8b04e0d516</id>
<content type='text'>
This is a backport of http://review.gluster.org/#/c/12865/

When exporting/importing volinfo during the handshake,
the quota conf version and the quota xattr version were using the same
key, 'quota-version', so the wrong values were updated when importing
the quota version values.
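
A minimal sketch of the separation (the dict key prefix and field names
are assumptions):

    /* Export side: one distinct key per version counter, so the
     * import side can no longer read one value into the other field. */
    ret = dict_set_uint32 (dict, "volume1.quota-version",
                           volinfo-&gt;quota_conf_version);
    if (!ret)
            ret = dict_set_uint32 (dict, "volume1.quota-xattr-version",
                                   volinfo-&gt;quota_xattr_version);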

&gt; Change-Id: If939d6f5bc4851d4114963877be72dda21834f0f
&gt; BUG: 1287996
&gt; Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;

Change-Id: Ic234d9e496f1372789112a0b82ba5cf34014de64
BUG: 1288052
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12872
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: cli command implementation for bitrot scrub status</title>
<updated>2015-11-23T03:53:58+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2015-11-20T08:30:38+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=164a8dda2cbf10862483e0333ebf7e727fc87f07'/>
<id>164a8dda2cbf10862483e0333ebf7e727fc87f07</id>
<content type='text'>
This patch is a backport of: http://review.gluster.org/10231

The CLI command for bitrot scrub status is:

gluster volume bitrot &lt;VOLNAME&gt; scrub status

The above command shows the statistics of the bitrot scrubber.

Upon execution it prints some common scrubber tunables of volume
&lt;VOLNAME&gt;, followed by the scrubber statistics of the individual
nodes.

Sample output for a single node:

Volume name : &lt;VOLNAME&gt;

State of scrub: Active

Scrub frequency: biweekly

Bitrot error log location: /var/log/glusterfs/bitd.log

Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================

Node name:

Number of Scrubbed files:

Number of Unsigned files:

Last completed scrub time:

Duration of last scrub:

Error count:

=========================================================

This is just the infrastructure; the list of bad files, the last scrub
time, and the error count will be handled by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.

    &gt;&gt; Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
    &gt;&gt; BUG: 1207627
    &gt;&gt; Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
    &gt;&gt; Reviewed-on: http://review.gluster.org/10231
    &gt;&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
    &gt;&gt; Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
    &gt;&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;

Change-Id: I45ed94e5e0e78a1e007c30eb0b252f74cf3c9187
BUG: 1283881
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12704
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot : copying nfs-ganesha export file</title>
<updated>2015-11-10T11:39:13+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2015-08-27T17:56:40+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=29151137d3da3ab8c0c65f8502d5874febfeeb14'/>
<id>29151137d3da3ab8c0c65f8502d5874febfeeb14</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/12483/

While taking a snapshot, the export file used by the volume should be
copied to the snap directory, so that when the snapshot is restored the
volume can retain all of its configuration for exporting via
nfs-ganesha. The export file is stored under "/etc/ganesha/export" in
the format "export.&lt;volname&gt;.conf".

The fix handles the given cases in the following manner:

case a: nfs-ganesha (global) is ON during both snapshot and restore.
        i.)  The volume was exported at snapshot time. When we restore
             the snapshot, the volume should be exported back with the
             old configuration file.
        ii.) The volume was unexported at snapshot time. When we restore
             the snapshot, the volume should be unexported again.

case b: nfs-ganesha is ON during snapshot and OFF during restore.
        The volume was exported at snapshot time. When we restore the
        snapshot, the conf file is copied to the corresponding location,
        and if nfs-ganesha is enabled again the volume will be exported.

For clones, the export conf file is created in /etc/ganesha/export and
the clone is then exported via ganesha.
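
A minimal sketch of the path handling (the snapshot-side destination
layout is an assumption, not quoted from the patch):

    /* Source export conf and its per-snapshot copy; on restore the
     * copy travels back to /etc/ganesha/export. */
    char src[PATH_MAX];
    char dst[PATH_MAX];

    snprintf (src, sizeof (src),
              "/etc/ganesha/export/export.%s.conf", volname);
    snprintf (dst, sizeof (dst),
              "%s/snaps/%s/export.%s.conf", workdir, snapname, volname);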

Upstream Reference:

(cherry picked from commit 5583bac79851d24f0a552478b361049fe63c32b7)
&gt;Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
&gt;BUG: 1257709
&gt;Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/12034
&gt;Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
&gt;Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;

Change-Id: I19725ec3d093fb32067bba4aba7f5bc3fd61b0e3
BUG: 1257710
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12483
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: move new feature (tiering) enum op to the last of the array</title>
<updated>2015-11-04T04:14:20+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2015-10-30T11:09:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8c1b22e79dcbb61cd39a5700076005e520261d6a'/>
<id>8c1b22e79dcbb61cd39a5700076005e520261d6a</id>
<content type='text'>
Currently the new tiering feature has its GD_OP_DETACH_TIER and
GD_OP_TIER_MIGRATE enums in the middle of the glusterd_op_ enum array.
In a multi-node cluster where one node has been upgraded from a lower
version to a higher one, executing a command can end up with a mismatch
in the enum ops at the receiving end, causing command execution to fail.

The fix is to place the enum code for every new-feature glusterd
operation at the end of the enum array.
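
A minimal illustration of the breakage (the op list is abbreviated, not
the real glusterd_op_ array):

    typedef enum {
            GD_OP_NONE,           /* 0 on every version               */
            GD_OP_CREATE_VOLUME,  /* 1 on every version               */
            GD_OP_DETACH_TIER,    /* inserted mid-array: every op     */
            GD_OP_TIER_MIGRATE,   /* below this point is renumbered,  */
            GD_OP_QUOTA,          /* so an older peer decodes the     */
                                  /* wire value as a different op     */
    } glusterd_op_t;

Appending new ops at the end instead keeps every pre-existing value
stable across versions.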

Change-Id: I640f811065e8c84add624237aa80fed43fde5967
BUG: 1276029
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12486
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>quota: add version to quota xattrs</title>
<updated>2015-11-03T05:03:46+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2015-10-15T07:11:13+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3d3176958b7da48dbacb1a5a0fedf26322a38297'/>
<id>3d3176958b7da48dbacb1a5a0fedf26322a38297</id>
<content type='text'>
This is a backport of http://review.gluster.org/#/c/12386/

When quota is disabled, the clean-up process can terminate without
completely cleaning up the quota xattrs. When quota is enabled again,
the stale xattrs can mess up the accounting.

A version number is now suffixed to all quota xattrs. This version
number is specific to the marker xlator, i.e. when quota xattrs are
requested by quotad or a client, marker removes the version suffix from
the key before sending the response.
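
An illustrative key pair (built on the known quota xattr prefix; the
exact suffixed names are an assumption about this scheme):

    trusted.glusterfs.quota.size.1   /* on-disk, version-suffixed    */
    trusted.glusterfs.quota.size     /* what quotad/clients see once
                                        marker strips the suffix     */

After a disable/enable cycle the suffix is bumped, so xattrs left behind
by an aborted clean-up are simply ignored instead of polluting the new
accounting.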

&gt; Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
&gt; BUG: 1272411
&gt; Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/12386
&gt; Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Manikandan Selvaganesh &lt;mselvaga@redhat.com&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;

Change-Id: I67b1b930b28411d76b2d476a4e5250c52aa495a0
BUG: 1277080
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12487
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
</feed>
