<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c, branch v3.11.0beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Fix snapshot failure in non-root geo-rep setup</title>
<updated>2017-04-18T09:39:46+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2017-04-17T12:39:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cc839523364e47dea715cd7241772cd68f05f76c'/>
<id>cc839523364e47dea715cd7241772cd68f05f76c</id>
<content type='text'>
The geo-replication session directory name has the form
'&lt;mastervol&gt;_&lt;slavehost&gt;_&lt;slavevol&gt;'. But in a non-root
geo-replication setup, glusterd includes the 'user@' prefix while
preparing the session directory name, resulting in
"&lt;mastervol&gt;_&lt;user@slavehost&gt;_&lt;slavevol&gt;". Hence the
snapshot fails to copy the geo-rep specific session files. This
patch fixes that.
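
A minimal sketch of the intended name construction, with
hypothetical function and variable names, that drops an optional
'user@' prefix from the slave host before building the directory
name:

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    /* sketch only: normalise the slave host so non-root setups
     * produce the same session directory name as root ones */
    static void
    build_session_dir_name (char *buf, size_t len, const char *mastervol,
                            const char *slavehost, const char *slavevol)
    {
            const char *host = strchr (slavehost, '@');

            host = host ? host + 1 : slavehost; /* drop "user@" */
            snprintf (buf, len, "%s_%s_%s", mastervol, host, slavevol);
    }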

Change-Id: Id214d3186e40997d2827a0bb60d3676ca2552df7
BUG: 1442760
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17067
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The geo-replication session directory name has the form
'&lt;mastervol&gt;_&lt;slavehost&gt;_&lt;slavevol&gt;'. But in a non-root
geo-replication setup, glusterd includes the 'user@' prefix while
preparing the session directory name, resulting in
"&lt;mastervol&gt;_&lt;user@slavehost&gt;_&lt;slavevol&gt;". Hence the
snapshot fails to copy the geo-rep specific session files. This
patch fixes that.

Change-Id: Id214d3186e40997d2827a0bb60d3676ca2552df7
BUG: 1442760
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17067
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: (storhaug) remove ganesha</title>
<updated>2017-03-21T17:13:44+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2017-02-01T11:39:03+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=843e1b04b554ab887ec656ae7b468bb93ee4e2f7'/>
<id>843e1b04b554ab887ec656ae7b468bb93ee4e2f7</id>
<content type='text'>
remove all vestiges of ganesha

The storhaug CLI is used to manage ganesha and Samba. Also, any setup
and teardown of the ganesha HA is initiated using storhaug, to
preserve the proper layering.

Change-Id: I0eec0016a1b7802a36e7b2d92896b86fdf8607d5
BUG: 1420713
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
remove all vestiges of ganesha

The storhaug CLI is used to manage ganesha and Samba. Also, any setup
and teardown of the ganesha HA is initiated using storhaug, to
preserve the proper layering.

Change-Id: I0eec0016a1b7802a36e7b2d92896b86fdf8607d5
BUG: 1420713
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Fix restore rollback to reassign snap volume ids to bricks</title>
<updated>2016-12-17T07:07:31+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-12-13T06:25:19+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d0528cf2408533b45383a796d419c49fa96e810b'/>
<id>d0528cf2408533b45383a796d419c49fa96e810b</id>
<content type='text'>
Added further checks to ensure we do not go beyond prevalidate
when trying to restore a snapshot which has an nfs-ganesha conf
file, in a cluster where nfs-ganesha is not enabled.

The error message for the particular scenario is:
"Snapshot(&lt;snapname&gt;) has a nfs-ganesha export conf
file. cluster.enable-shared-storage and nfs-ganesha
should be enabled before restoring this snapshot."
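
A hedged sketch of such a prevalidate check, with hypothetical
helper names (the context variables are assumed to come from the
surrounding restore-prevalidate function):

    /* sketch only: refuse during prevalidate rather than failing
     * midway through the restore */
    if (snap_has_ganesha_export_conf (snap) &amp;&amp;
        !is_ganesha_enabled ()) {
            snprintf (err_str, sizeof (err_str),
                      "Snapshot(%s) has a nfs-ganesha export conf "
                      "file. cluster.enable-shared-storage and "
                      "nfs-ganesha should be enabled before "
                      "restoring this snapshot.", snap-&gt;snapname);
            ret = -1;
            goto out;
    }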

Change-Id: I1b87e9907e0a5e162f26ef1ca89fe76e8da8610f
BUG: 1404118
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16116
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Added further checks to ensure we do not go beyond prevalidate
when trying to restore a snapshot which has an nfs-ganesha conf
file, in a cluster where nfs-ganesha is not enabled.

The error message for the particular scenario is:
"Snapshot(&lt;snapname&gt;) has a nfs-ganesha export conf
file. cluster.enable-shared-storage and nfs-ganesha
should be enabled before restoring this snapshot."

Change-Id: I1b87e9907e0a5e162f26ef1ca89fe76e8da8610f
BUG: 1404118
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16116
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot/ganesha: Copy export.conf, only if ganesha.enable is on.</title>
<updated>2016-12-12T11:59:43+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-12-09T09:31:40+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=74322ce58b1c949e11cb0aa72bfded0e34422157'/>
<id>74322ce58b1c949e11cb0aa72bfded0e34422157</id>
<content type='text'>
Whether the volume is being exported via NFS-Ganesha should be
determined by checking if ganesha.enable is set, rather than from
the errno of the stat call.
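
A minimal sketch of the intended check, assuming glusterd's
glusterd_volinfo_get() helper (other names illustrative):

    /* sketch only: consult the volume option instead of stat(2)
     * errno on the export conf file */
    char *val = NULL;

    ret = glusterd_volinfo_get (volinfo, "ganesha.enable", &amp;val);
    if ((ret == 0) &amp;&amp; val &amp;&amp; (strcmp (val, "on") == 0)) {
            /* the volume is exported via NFS-Ganesha, so copy
             * export.conf along with the snapshot */
    }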

Change-Id: Iaff786d9f77a2de1322ce8ccb4b80954f84d3373
BUG: 1402828
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16094
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Whether the volume is being exported via NFS-Ganesha should be
determined by checking if ganesha.enable is set, rather than from
the errno of the stat call.

Change-Id: Iaff786d9f77a2de1322ce8ccb4b80954f84d3373
BUG: 1402828
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16094
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot/eventsapi: Integrate snapshot events with eventsapi</title>
<updated>2016-09-01T07:33:42+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-08-26T08:35:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c1278de9a5fb6a64455f42b8b17a8d05b74c2420'/>
<id>c1278de9a5fb6a64455f42b8b17a8d05b74c2420</id>
<content type='text'>
1. EVENT_SNAPSHOT_CREATED : snapshot_name=snap1 volume_name=test_vol
                            snapshot_uuid=26dd6c52-6021-40b1-a507-001a80401d70
2. EVENT_SNAPSHOT_CREATE_FAILED : snapshot_name=snap1 volume_name=test_vol
                                  error=Snapshot snap1 already exists
3. EVENT_SNAPSHOT_ACTIVATED : snapshot_name=snap1
                              snapshot_uuid=26dd6c52-6021-40b1-a507-001a80401d70
4. EVENT_SNAPSHOT_ACTIVATE_FAILED : snapshot_name=snap1
                                   error=Snapshot snap1 is already activated.
5. EVENT_SNAPSHOT_DEACTIVATED : snapshot_name=snap1
                              snapshot_uuid=26dd6c52-6021-40b1-a507-001a80401d70
6. EVENT_SNAPSHOT_DEACTIVATE_FAILED : snapshot_name=snap3
                                      error=Snapshot (snap3) does not exist.
7. EVENT_SNAPSHOT_SOFT_LIMIT_REACHED : volume_name=test_vol
                                  volume_id=2ace2616-5591-4b9b-be2a-38592dda5758
8. EVENT_SNAPSHOT_HARD_LIMIT_REACHED : volume_name=test_vol
                                  volume_id=2ace2616-5591-4b9b-be2a-38592dda5758
9. EVENT_SNAPSHOT_RESTORED : snapshot_name=snap1 volume_name=test_vol
                             snapshot_uuid=3a840ec5-08da-4f2b-850d-1d5539a5d14d
10. EVENT_SNAPSHOT_RESTORE_FAILED : snapshot_name=snap10
                                    error=Snapshot (snap10) does not exist
11. EVENT_SNAPSHOT_DELETED : snapshot_name=snap1
                             snapshot_uuid=d9ff3d4f-f579-4345-a4da-4f9353f0950c
12. EVENT_SNAPSHOT_DELETE_FAILED : snapshot_name=snap2
                                   error=Snapshot (snap2) does not exist
13. EVENT_SNAPSHOT_CLONED : clone_uuid=93ba9f06-cb9c-4ace-aa52-2616e7f31022
                            snapshot_name=snap1 clone_name=clone2
14. EVENT_SNAPSHOT_CLONE_FAILED : snapshot_name=snap1 clone_name=clone2
                                  error=Volume with name:clone2 already exists
15. EVENT_SNAPSHOT_CONFIG_UPDATED : auto-delete=enable config_type=system_config
                                    config_type=volume_config hard_limit=100
16. EVENT_SNAPSHOT_CONFIG_UPDATE_FAILED :
                   error=Invalid snap-max-soft-limit 110. Expected range 1 - 100
17. EVENT_SNAPSHOT_SCHEDULER_INITIALISED : status=Success
18. EVENT_SNAPSHOT_SCHEDULER_INIT_FAILED
19. EVENT_SNAPSHOT_SCHEDULER_ENABLED : status=Successfully Enabled
20. EVENT_SNAPSHOT_SCHEDULER_ENABLE_FAILED :
                                   error=Snapshot scheduler is already enabled.
21. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_ADDED : status=Successfully added job job1
22. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_ADD_FAILED :
                    status=Failed to add job job1 error=The job already exists.
23. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_EDITED :
                                            status=Successfully edited job job1
24. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_EDIT_FAILED :
                                                 status=Failed to edit job job2
                                                 error=The job cannot be found.
25. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_DELETED :
                                           status=Successfully deleted job job1
26. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_DELETE_FAILED :
                                               status=Failed to delete job job1
                                               error=The job cannot be found.
27. EVENT_SNAPSHOT_SCHEDULER_DISABLED : status=Successfully Disabled
28. EVENT_SNAPSHOT_SCHEDULER_DISABLE_FAILED :
                                   error=Snapshot scheduler is already disabled.
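
As an illustration, such events would be raised through the
gf_event() API from libglusterfs; a hedged sketch of one call site
(variable names hypothetical):

    /* sketch only: key=value fields separated by semicolons */
    gf_event (EVENT_SNAPSHOT_CREATED,
              "snapshot_name=%s;volume_name=%s;snapshot_uuid=%s",
              snap-&gt;snapname, volinfo-&gt;volname,
              uuid_utoa (snap-&gt;snap_id));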

Change-Id: I3479cc3fb7af3c76ded67cf289f99547d0a55d21
BUG: 1370567
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15329
Tested-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
1. EVENT_SNAPSHOT_CREATED : snapshot_name=snap1 volume_name=test_vol
                            snapshot_uuid=26dd6c52-6021-40b1-a507-001a80401d70
2. EVENT_SNAPSHOT_CREATE_FAILED : snapshot_name=snap1 volume_name=test_vol
                                  error=Snapshot snap1 already exists
3. EVENT_SNAPSHOT_ACTIVATED : snapshot_name=snap1
                              snapshot_uuid=26dd6c52-6021-40b1-a507-001a80401d70
4. EVENT_SNAPSHOT_ACTIVATE_FAILED : snapshot_name=snap1
                                   error=Snapshot snap1 is already activated.
5. EVENT_SNAPSHOT_DEACTIVATED : snapshot_name=snap1
                              snapshot_uuid=26dd6c52-6021-40b1-a507-001a80401d70
6. EVENT_SNAPSHOT_DEACTIVATE_FAILED : snapshot_name=snap3
                                      error=Snapshot (snap3) does not exist.
7. EVENT_SNAPSHOT_SOFT_LIMIT_REACHED : volume_name=test_vol
                                  volume_id=2ace2616-5591-4b9b-be2a-38592dda5758
8. EVENT_SNAPSHOT_HARD_LIMIT_REACHED : volume_name=test_vol
                                  volume_id=2ace2616-5591-4b9b-be2a-38592dda5758
9. EVENT_SNAPSHOT_RESTORED : snapshot_name=snap1 volume_name=test_vol
                             snapshot_uuid=3a840ec5-08da-4f2b-850d-1d5539a5d14d
10. EVENT_SNAPSHOT_RESTORE_FAILED : snapshot_name=snap10
                                    error=Snapshot (snap10) does not exist
11. EVENT_SNAPSHOT_DELETED : snapshot_name=snap1
                             snapshot_uuid=d9ff3d4f-f579-4345-a4da-4f9353f0950c
12. EVENT_SNAPSHOT_DELETE_FAILED : snapshot_name=snap2
                                   error=Snapshot (snap2) does not exist
13. EVENT_SNAPSHOT_CLONED : clone_uuid=93ba9f06-cb9c-4ace-aa52-2616e7f31022
                            snapshot_name=snap1 clone_name=clone2
14. EVENT_SNAPSHOT_CLONE_FAILED : snapshot_name=snap1 clone_name=clone2
                                  error=Volume with name:clone2 already exists
15. EVENT_SNAPSHOT_CONFIG_UPDATED : auto-delete=enable config_type=system_config
                                    config_type=volume_config hard_limit=100
16. EVENT_SNAPSHOT_CONFIG_UPDATE_FAILED :
                   error=Invalid snap-max-soft-limit 110. Expected range 1 - 100
17. EVENT_SNAPSHOT_SCHEDULER_INITIALISED : status=Success
18. EVENT_SNAPSHOT_SCHEDULER_INIT_FAILED
19. EVENT_SNAPSHOT_SCHEDULER_ENABLED : status=Successfully Enabled
20. EVENT_SNAPSHOT_SCHEDULER_ENABLE_FAILED :
                                   error=Snapshot scheduler is already enabled.
21. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_ADDED : status=Successfully added job job1
22. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_ADD_FAILED :
                    status=Failed to add job job1 error=The job already exists.
23. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_EDITED :
                                            status=Successfully edited job job1
24. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_EDIT_FAILED :
                                                 status=Failed to edit job job2
                                                 error=The job cannot be found.
25. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_DELETED :
                                           status=Successfully deleted job job1
26. EVENT_SNAPSHOT_SCHEDULER_SCHEDULE_DELETE_FAILED :
                                               status=Failed to delete job job1
                                               error=The job cannot be found.
27. EVENT_SNAPSHOT_SCHEDULER_DISABLED : status=Successfully Disabled
28. EVENT_SNAPSHOT_SCHEDULER_DISABLE_FAILED :
                                   error=Snapshot scheduler is already disabled.

Change-Id: I3479cc3fb7af3c76ded67cf289f99547d0a55d21
BUG: 1370567
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15329
Tested-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd (snapshot-utils): fix unused variable warnings/errors</title>
<updated>2016-08-29T16:35:27+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-08-22T17:22:03+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2c13e7e04711ef6da55e7616f064f801a41cc46d'/>
<id>2c13e7e04711ef6da55e7616f064f801a41cc46d</id>
<content type='text'>
http://review.gluster.org/14085 fixes the "leak", via the
generated rpc/xdr headers, of the pragmas that mask these
warnings.

However, 14085 won't pass the smoke test until all the warnings
are fixed.
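
For context, the masking pragmas in question follow the usual GCC
pattern (an illustration, not the exact contents of the generated
headers):

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-variable"
    /* ... generated declarations that would otherwise warn ... */
    #pragma GCC diagnostic pop

A header that leaks such pragmas silently masks the warnings in
every file that includes it, which is why removing the leak
exposes the warnings fixed here.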

Change-Id: I5d3372de16a1e03a6391885434ffea84f6a00412
BUG: 1369124
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15277
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
http://review.gluster.org/14085 fixes the "leak", via the
generated rpc/xdr headers, of the pragmas that mask these
warnings.

However, 14085 won't pass the smoke test until all the warnings
are fixed.

Change-Id: I5d3372de16a1e03a6391885434ffea84f6a00412
BUG: 1369124
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15277
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/cli: cli to get local state representation from glusterd</title>
<updated>2016-08-26T15:23:37+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2016-07-07T15:03:02+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6'/>
<id>4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6</id>
<content type='text'>
There is currently no CLI that can be used to get the local state
representation of the cluster, as maintained in glusterd, in a
format that is both readable and parseable.

The CLI added has the following usage:

 # gluster get-state [daemon] [odir &lt;path/to/output/dir&gt;] [file &lt;filename&gt;]

This dumps data points that reflect the local state
representation of the cluster as maintained in glusterd (no other
daemons are supported as of now) to a file inside the specified
output directory. The default output directory and filename are
/var/run/gluster and glusterd_state_&lt;timestamp&gt; respectively.
The option for specifying the daemon name leaves room to add
support for other daemons in the future. The following data points
are currently captured to represent the state from the local
glusterd point of view:

 * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

 * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse, redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

 * Bricks:
    - Path, hostname (this info is shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
      signed-in status for bricks running locally.

 * Services:
    - name, online status for initialised services

 * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID
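
As an illustration of the usage above, a hypothetical invocation
that overrides both defaults might be:

 # gluster get-state glusterd odir /tmp file gdstate

which would write the state dump to /tmp/gdstate.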

Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
There is currently no CLI that can be used to get the local state
representation of the cluster, as maintained in glusterd, in a
format that is both readable and parseable.

The CLI added has the following usage:

 # gluster get-state [daemon] [odir &lt;path/to/output/dir&gt;] [file &lt;filename&gt;]

This dumps data points that reflect the local state
representation of the cluster as maintained in glusterd (no other
daemons are supported as of now) to a file inside the specified
output directory. The default output directory and filename are
/var/run/gluster and glusterd_state_&lt;timestamp&gt; respectively.
The option for specifying the daemon name leaves room to add
support for other daemons in the future. The following data points
are currently captured to represent the state from the local
glusterd point of view:

 * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

 * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse, redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

 * Bricks:
    - Path, hostname (this info is shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
      signed-in status for bricks running locally.

 * Services:
    - name, online status for initialised services

 * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID

Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot/snapd: Don't display pid when snapd is offline</title>
<updated>2016-07-27T22:07:23+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-07-22T06:10:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ec6925a379c7bee071df1638bc2751b266cee346'/>
<id>ec6925a379c7bee071df1638bc2751b266cee346</id>
<content type='text'>
We were previously reading the pidfile and displaying the pid
even if the snapd daemon was not running. To fix this, we now
re-assign the pid value to -1 if snapd is offline.
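
A minimal sketch of the fix, assuming the gf_is_service_running()
helper from libglusterfs (other names illustrative; dict and key
come from the surrounding status code):

    /* sketch only: never report a stale pid read from the
     * pidfile of an offline snapd */
    int pid = -1;
    gf_boolean_t online = gf_is_service_running (pidfile, &amp;pid);

    if (!online)
            pid = -1;
    ret = dict_set_int32 (dict, key, pid);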

Change-Id: I4baff8d489fe9380061c52aea006db90fa421cd7
BUG: 1358244
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14981
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We were previously reading the pidfile and displaying the pid
even if the snapd daemon was not running. To fix this, we now
re-assign the pid value to -1 if snapd is offline.

Change-Id: I4baff8d489fe9380061c52aea006db90fa421cd7
BUG: 1358244
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14981
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: use readdir(3) with glibc, and associated cleanup</title>
<updated>2016-07-18T11:59:42+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-07-07T12:51:08+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=561746080b0b7154bfb3bdee20d426cf2ef7db17'/>
<id>561746080b0b7154bfb3bdee20d426cf2ef7db17</id>
<content type='text'>
Starting with glibc-2.23 (i.e. what's in Fedora 25), readdir_r(3)
is marked as deprecated. Specifically, the function decl in &lt;dirent.h&gt;
has the deprecated attribute, and now warnings are thrown during the
compile on Fedora 25 builds.

The readdir(_r)(3) man page (on Fedora 25 at least) and World+Dog say
that glibc's readdir(3) is, and always has been, MT-SAFE as long as
only one thread is accessing the directory object returned by opendir().
World+Dog also says there is a potential buffer overflow in readdir_r().
World+Dog suggests that it is preferable to simply use readdir(). There's
an implication that eventually readdir_r(3) will be removed from glibc.
POSIX has apparently deprecated it in the standard, or even removed it
entirely.

Over and above that, our source near the various uses of readdir(_r)(3)
has a few unsafe uses of strcpy()+strcat().

(AFAIK nobody has looked at the readdir(3) implementation in *BSD to see
if the same is true on those platforms, and we can't be sure of MacOS
even though we know it's based on *BSD.)
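
A minimal sketch of the MT-SAFE readdir(3) pattern this change
moves to (illustrative only; note that readdir() returns NULL both
at end of stream and on error, so errno disambiguates):

    #include &lt;dirent.h&gt;
    #include &lt;errno.h&gt;

    /* sketch only: safe as long as a single thread uses this
     * DIR handle, per the glibc guarantee described above */
    static int
    process_dir (const char *path)
    {
            DIR *dir = opendir (path);
            struct dirent *entry = NULL;

            if (!dir)
                    return -1;

            errno = 0;
            while ((entry = readdir (dir)) != NULL) {
                    /* ... process entry-&gt;d_name ... */
                    errno = 0;
            }
            if (errno != 0) {
                    closedir (dir); /* readdir failed mid-stream */
                    return -1;
            }

            closedir (dir);
            return 0;
    }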

Change-Id: I5481f18ba1eebe7ee177895eecc9a80a71b60568
BUG: 1356998
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14838
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Starting with glibc-2.23 (i.e. what's in Fedora 25), readdir_r(3)
is marked as deprecated. Specifically, the function decl in &lt;dirent.h&gt;
has the deprecated attribute, and now warnings are thrown during the
compile on Fedora 25 builds.

The readdir(_r)(3) man page (on Fedora 25 at least) and World+Dog say
that glibc's readdir(3) is, and always has been, MT-SAFE as long as
only one thread is accessing the directory object returned by opendir().
World+Dog also says there is a potential buffer overflow in readdir_r().
World+Dog suggests that it is preferable to simply use readdir(). There's
an implication that eventually readdir_r(3) will be removed from glibc.
POSIX has apparently deprecated it in the standard, or even removed it
entirely.

Over and above that, our source near the various uses of readdir(_r)(3)
has a few unsafe uses of strcpy()+strcat().

(AFAIK nobody has looked at the readdir(3) implementation in *BSD to see
if the same is true on those platforms, and we can't be sure of MacOS
even though we know it's based on *BSD.)

Change-Id: I5481f18ba1eebe7ee177895eecc9a80a71b60568
BUG: 1356998
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14838
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot: Fix snapshot creation with geo-rep</title>
<updated>2016-06-02T05:49:56+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-06-01T07:12:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3ae22b61f9aa01f0a97f8f1b3ef75add74c02f7d'/>
<id>3ae22b61f9aa01f0a97f8f1b3ef75add74c02f7d</id>
<content type='text'>
The construction of the path to the geo-rep session directory was
broken by commit "http://review.gluster.org/13111", which saves
the slave volume uuid in the 'gsync_slaves' dictionary. This patch
fixes that.

Change-Id: Ic7fc3c37d368549feb44b3a08d60157ce61227c3
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
BUG: 1341474
Reviewed-on: http://review.gluster.org/14595
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The construction of the path to the geo-rep session directory was
broken by commit "http://review.gluster.org/13111", which saves
the slave volume uuid in the 'gsync_slaves' dictionary. This patch
fixes that.

Change-Id: Ic7fc3c37d368549feb44b3a08d60157ce61227c3
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
BUG: 1341474
Reviewed-on: http://review.gluster.org/14595
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
