path: root/xlators/mgmt
Commit message (Author, Date, Files changed, Lines removed/added)
* glusterd: documenting server.allow-insecure (Sanju Rakonde, 2017-10-18, 1 file, -1/+1)

  Problem: "server.allow-insecure" is invisible in gluster volume set help.
  Fix: "server.allow-insecure" is defined as NO_DOC type; changing it to DOC type solves the problem.

  Change-Id: I327f1e4c1684ff846deb8b7df07d4d8a09073274
  BUG: 1503424
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

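  A quick way to confirm the fix is to look the option up in the set help output and then enable it; a minimal sketch, assuming a hypothetical volume named test-vol:

      # the option should now show up with a description
      gluster volume set help | grep -A 3 server.allow-insecure
      # allow clients connecting from unprivileged ports
      gluster volume set test-vol server.allow-insecure on
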
* gfproxyd: Let glusterd manage gfproxy daemon (Poornima G, 2017-10-18, 15 files, -75/+849)

  Updates: #242
  BUG: 1428063
  Change-Id: Iaaf2edf99b2ecc75f6d30762c752a6d445c1c826
  Signed-off-by: Poornima G <pgurusid@redhat.com>

* glusterd: introduce timer in mgmt_v3_lock (Gaurav Yadav, 2017-10-17, 4 files, -19/+244)

  Problem: In a multinode environment, if two op-sm transactions are initiated on one of the receiver nodes at the same time, glusterd may end up holding a stale lock.

  Solution: During mgmt_v3_lock, a registration is made with gf_timer_call_after, which releases the lock after a certain period of time.

  Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
  BUG: 1499004
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>

* glusterd: Marking all the brick status as stopped when a process goes down in brick multiplexing (Sanju Rakonde, 2017-10-12, 1 file, -1/+58)

  In a brick multiplexing environment, if a brick process goes down, i.e. if we kill it with SIGKILL, only the status of the brick for which the process originally came up changes to stopped; all other brick statuses remain started. This happens because the process was killed abruptly with the SIGKILL signal, so the signal handler was not invoked and no further cleanup was triggered.

  When we then try to start a volume using force, it fails with "Request timed out": since all the brickinfo->status entries are still in the started state, we wait for one of the brick processes to come up, which is never going to happen because the brick process was killed.

  To resolve this, in the disconnect event we check all the brick processes to find the one the disconnected brick belongs to. Once we have the process we call glusterd_mark_bricks_stopped_by_proc(), passing the brick_proc_t object as an argument. From glusterd_brick_proc_t we can get all the bricks attached to that process, but these are duplicated copies; to get the original brickinfo we read the volinfo from the brick, since volinfo holds the original brickinfo copies. We then change brickinfo->status to stopped for all those bricks.

  Change-Id: Ifb9054b3ee081ef56b39b2903ae686984fe827e7
  BUG: 1499509
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

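  A rough reproduction sketch for the scenario above, assuming brick multiplexing is controlled by cluster.brick-multiplex and a hypothetical volume named test-vol:

      gluster volume set all cluster.brick-multiplex on
      gluster volume start test-vol
      # kill the multiplexed brick process abruptly so no signal handler runs
      kill -9 $(pgrep -f glusterfsd | head -n1)
      # before the fix only one brick showed as stopped; with the fix all
      # bricks served by that process should be reported offline
      gluster volume status test-vol
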
* gfproxy: Introduce new server-side daemon called GFProxy (Shreyas Siravara, 2017-10-10, 7 files, -21/+360)

  Summary: Adds a new server-side daemon called gfproxyd and a new FUSE client called gfproxy-client.

  Updates: #242
  BUG: 1428063
  Change-Id: I83210098d3a381922bc64fed1110ae1b76e6519f
  Tested-by: Shreyas Siravara <sshreyas@fb.com>
  Reviewed-by: Kevin Vigor <kvigor@fb.com>
  Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
  Signed-off-by: Poornima G <pgurusid@redhat.com>

* glusterd: fix client io-threads option for replicate volumes (Ravishankar N, 2017-10-09, 6 files, -34/+92)

  Problem: Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled loading client-io-threads for replicate volumes (it had been set to on by default in commit e068c1997314046658dd502e9118dab32decf879) due to performance issues, but in doing so it inadvertently failed to load the xlator even if the user explicitly enabled the option using the volume set command. This was despite the volume set returning success.

  Fix: Modify the check in perfxl_option_handler() and add checks in the volume create/add-brick/remove-brick code paths, tying it all to GD_OP_VERSION_3_12_2.

  Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
  BUG: 1498570
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

* cli/afr: gluster volume heal info "healed" command output is not appropriate (Mohit Agrawal, 2017-10-04, 1 file, -0/+13)

  Problem: The output of "gluster volume heal info [healed] [heal-failed]" on the terminal is not appropriate when bricks of a volume are down.

  Solution: To make the message more appropriate, change the condition in the function gd_syncop_mgmt_brick_op.

  Test: To verify the fix, the following procedure was used:
  1) Create a 2x3 distribute-replicate volume
  2) Set the self-heal daemon off
  3) Kill two bricks (3, 6)
  4) Create some files on the mount point
  5) Bring bricks 3 and 6 up
  6) Kill two other bricks (2 and 4)
  7) Turn the self-heal daemon on
  8) Run "gluster v heal <vol-name>"

  Note: After applying the patch, the (healed | heal-failed) options will be deprecated from the command line.

  BUG: 1388509
  Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

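  For reference, the forms of the command that remain after this change look like the following; the volume name is hypothetical:

      gluster volume heal test-vol info
      gluster volume heal test-vol info split-brain
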
* glusterd/snapshot: Buffer Size Warning (Sanju Rakonde, 2017-10-03, 1 file, -1/+1)

  new_brickinfo->mnt_opts is allocated with calloc, so it is already zeroed out at allocation and we need not add a null character at the end. We pass one less than the maximum size to strncpy to make sure the destination string is terminated.

  Change-Id: I463dddd2171fb39a509bb75ffcc074d5b1cf7d62
  BUG: 789278
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* glusterd: FORWARD_NULL coverity fix (Akarsha Rai, 2017-09-26, 1 file, -11/+22)

  Problem: Pointer used to print xlator name could be NULL.
  Solution: Updated the code to use xlator name as appropriate.

  BUG: 789278
  Change-Id: I26927ef1f33f362e17c104684d7f722a643c7f97
  Signed-off-by: Akarsha Rai <akrai@redhat.com>

* Coverity Issue Fix: IDENTICAL_BRANCHES (Girjesh Rajoria, 2017-09-26, 1 file, -2/+0)

  Issue: Event identical_branches: The same code is executed when the condition "ret" is true or false, because the code in the if-then branch and after the if statement is identical.
  Function: glusterd_print_gsync_status_by_vol
  Fix: removed the if and goto statement.

  Change-Id: I966d793c9f3b65487acfb07083a4039caf593105
  BUG: 789278
  Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>

* glusterd: retrieve uuid under mutex lock (Atin Mukherjee, 2017-09-25, 1 file, -7/+15)

  In a multi-node cluster, if one of the glusterd instances goes down and comes back, there can be a race where glusterd needs to retrieve its uuid (glusterd_retrieve_uuid) and, at the same time, as part of receiving a friend handshake from another peer, glusterd iterates over the volume information received from the remote node and checks whether a brick is local or not by calling MY_UUID, which in turn calls glusterd_retrieve_uuid. The same applies to the glusterd_store_global_info() function too. This can end up in a situation where glusterd generates two UUID files in /var/lib/glusterd for the same node. The following log snippet confirms the above:

  [2017-09-01 03:09:24.458030] I [glusterd.c:146:glusterd_uuid_init] 0-management: retrieved UUID: fd46a495-7e33-468f-88f6-63c815fac640   // thread 1 retrieves the uuid from glusterd.info
  [2017-09-01 03:09:24.458034] E [glusterd-store.c:2109:glusterd_retrieve_uuid] 0-: No previous uuid is present   // thread 2 cannot retrieve the uuid, because in thread 1 the file pointer has already reached eof
  [2017-09-01 03:09:24.458041] E [glusterd-store.c:2117:glusterd_retrieve_uuid] 0-: Returning -1
  [2017-09-01 03:09:24.458076] I [glusterd.c:176:glusterd_uuid_generate_save] 0-management: generated UUID: 190bb292-a296-4125-96da-42b247511cc4
  [2017-09-01 03:09:24.458129] E [store.c:367:gf_store_save_value] 0-: Able to store key: UUID,value: 190bb292-a296-4125-96da-42b247511cc4

  Fix is to retrieve the uuid under a mutex lock.

  Credits: cynthia.zhou@nokia-sbell.com
  Change-Id: Ib9a5e159c3febf2aef13aa5e38f0a51fe409dadb
  BUG: 1493967
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: disallow replace brick for dist only volumes (Atin Mukherjee, 2017-09-19, 1 file, -1/+11)

  Allowing replace-brick on distribute-only volumes will lead to data loss. This patch makes replace-brick commit force fail if a volume is distribute-only.

  Also removing tests/basic/pump.t as it is of no use, as per the discussion in http://lists.gluster.org/pipermail/gluster-devel/2017-September/053652.html

  Change-Id: Iabb0c16f865f3fc361b64a19bfcf0c0fbb5c2682
  BUG: 1489432
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/18226
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

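  The operation that is now rejected on a plain distribute volume would look roughly like this; the volume name, host and brick paths are made up:

      # fails on a distribute-only volume after this patch, since the data on
      # the removed brick would be lost
      gluster volume replace-brick dist-vol host1:/bricks/b1 host1:/bricks/b1-new commit force
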
* heal: New feature heal info summary to list the status of brick and count of entries to be healed (Mohamed Ashiq Liyazudeen, 2017-09-15, 1 file, -0/+1)

  Command output:
  Brick 192.168.2.8:/brick/1
  Status: Connected
  Total Number of entries: 363
  Number of entries in heal pending: 362
  Number of entries in split-brain: 0
  Number of entries possibly healing: 1

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <healInfo>
      <bricks>
        <brick hostUuid="9105dd4b-eca8-4fdb-85b2-b81cdf77eda3">
          <name>192.168.2.8:/brick/1</name>
          <status>Connected</status>
          <totalNumberOfEntries>363</totalNumberOfEntries>
          <numberOfEntriesInHealPending>362</numberOfEntriesInHealPending>
          <numberOfEntriesInSplitBrain>0</numberOfEntriesInSplitBrain>
          <numberOfEntriesPossiblyHealing>1</numberOfEntriesPossiblyHealing>
        </brick>
      </bricks>
    </healInfo>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr/>
  </cliOutput>

  Change-Id: I40cb6f77a14131c9e41b292f4901b41a228863d7
  BUG: 1261463
  Signed-off-by: Mohamed Ashiq Liyazudeen <mliyazud@redhat.com>
  Reviewed-on: https://review.gluster.org/12154
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Karthik U S <ksubrahm@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

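  A minimal usage sketch producing the output shown above, assuming a hypothetical volume test-vol and that the standard --xml switch applies to this subcommand:

      gluster volume heal test-vol info summary
      # the same data as XML, e.g. for consumption by management tooling
      gluster volume heal test-vol info summary --xml
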
* glusterd: fix invalid memory reference returned (Xavier Hernandez, 2017-09-13, 1 file, -2/+9)

  Change-Id: I0823c7b33060b48040c1d86ad346a5f6e15bc190
  BUG: 1490897
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: https://review.gluster.org/18263
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>

* Command to identify client process (hari gowtham, 2017-09-06, 4 files, -4/+229)

  command: gluster volume status <volname/all> client-list

  output:
  Client connections for volume v1
  Name          count
  -----         ------
  fuse          2
  tierd         1
  total clients for volume v1 : 3
  -----------------------------------------------------------------
  Client connections for volume v2
  Name          count
  -----         ------
  tierd         1
  fuse.gsync    1
  total clients for volume v2 : 2
  -----------------------------------------------------------------

  Updates: #178
  Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: https://review.gluster.org/18095
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* snapshot: clang compile warning (Sunny Kumar, 2017-09-04, 1 file, -1/+1)

  Warning type: NO_EFFECT
  xlators/mgmt/glusterd/src/glusterd-snapshot.c:4004: warning: comparison of constant -1 with expression of type 'gf_boolean_t' (aka 'enum _gf_boolean') is always false.

  Solution: "timestamp" was formerly declared as gf_boolean_t, now changed to int.

  Change-Id: Ie461537f06fe2971e2b09a428a22331808c41a13
  BUG: 1335251
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
  Reviewed-on: https://review.gluster.org/18062
  Tested-by: Sunny Kumar
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: spelling errors reported by Debian maintainer (Kaleb S. KEITHLEY, 2017-09-04, 2 files, -4/+4)

  Reported-by: "Patrick Matthäi" <pmatthaei@debian.org>
  Change-Id: I0dd6b7d88ddf3c98e8083b75f8dd848babcfd30a
  BUG: 1487840
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/18185
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* debug/delay-gen: Implement delay-generation feature (Pranith Kumar K, 2017-08-31, 3 files, -26/+42)

  Background: I was working on a customer issue where the disks sometimes responded only after seconds. It was becoming very difficult to recreate the issue in our labs, so we had to come up with this feature.

  Requirements: We need an xlator which can delay x% of ops for y microseconds. We should be able to enable delays for specific fops. This feature is modeled after error-gen; most of the logic is borrowed from that xlator. This is a minimum implementation of the feature which satisfied the requirements I had. Maybe in the future, with more requirements and further understanding of the problem, we can improve upon this implementation.

  Here are the commands and what they do:

  Enable delay-gen (this is similar to how err-gen is enabled on the brick side):
  - gluster volume set <volname> delay-gen posix

  Set the percentage of fops that need to be delayed (default is 10%):
  - gluster volume set <volname> delay-gen.delay-percentage 50

  Set the delay in microseconds (default is 100000):
  - gluster volume set <volname> delay-gen.delay-duration 500000

  Set comma-separated fops to be delayed (default is all fops):
  - gluster v set r2 delay-gen.enable read,write

  Fixes #257
  Change-Id: Ib547bd39cc024c9cdb63754d21e3aa62fc9d6473
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17591
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* feature/posix: Enabled gfid2path by default (Kotresh HR, 2017-08-29, 1 file, -0/+1)

  Enable the gfid2path feature by default. Basic performance tests were carried out and do not show significant degradation; the results are updated in the issue.

  Updates: #139
  Change-Id: I5f1949a608d0827018ef9d548d5d69f3bb7744fd
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: https://review.gluster.org/17950
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

* libglusterfs: Add new fields to volume_options struct (Kaushal M, 2017-08-29, 4 files, -182/+182)

  The new fields are required to enable equivalent volume set and volgen features, and some more additional features, in GD2. GD2 does not use a hard-coded volume options map like GD1, but builds one by reading the options tables directly from the xlators.

  The new fields being introduced into the volume options struct include the following:
  - op-version - version(s) the option was introduced in
  - deprecated - version(s) the option was deprecated in
  - flags - flags for the option (settable, client, global, force, doc etc.)
  - tags - descriptive tags that apply to this option, can be used to group options
  - validate_fn - custom option validation function

  Enums for the currently available flags have also been defined. To avoid naming clashes, the flag enums in GD1 have been renamed.

  Updates #302
  Change-Id: Ic7e08aef9e051beb47e8dc17d7f7be211aed308a
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: https://review.gluster.org/18059
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

* glusterd: disable rpc_clnt_t after rebalance process disconnection (Milind Changire, 2017-08-24, 1 file, -1/+1)

  Problem: glusterd continues to connect to the rebalance process even after the socket connection has disconnected.

  Solution: rpc_clnt_disable() disables the rpc_clnt_t object, disarms all relevant timers and drops refs to the rpc_clnt_t object and the transport as well.

  Change-Id: I981d6f1cc0087037f1927062c2770a4d5026a619
  BUG: 1484225
  Signed-off-by: Milind Changire <mchangir@redhat.com>
  Reviewed-on: https://review.gluster.org/18114
  Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* glusterd: do not create .glusterfs/indices (Prashanth Pai, 2017-08-23, 3 files, -26/+15)

  glusterd shouldn't concern itself with creating directories specific to certain xlators. The index xlator will now proceed with creating the '.glusterfs/indices' dir only if the parent '.glusterfs' directory exists, which still fixes the originally reported problem, i.e. the 'volume start force' command shouldn't create the brick path if it doesn't exist (BUG 1457202).

  This reverts most of the changes done by commit b58a15948fb3fc37b6c0b70171482f50ed957f42.

  Change-Id: I7fc52ad64dce220e336c218fb4d85933ca2e61c0
  Signed-off-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-on: https://review.gluster.org/18003
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* glusterd: replace-brick executing successfully when quorum is not met (Gaurav Yadav, 2017-08-22, 1 file, -0/+9)

  Problem: The replace-brick command executes successfully on a setup where server quorum is not met.

  Fix: With the fix, glusterd validates whether the server is in quorum during replace-brick staging.

  Change-Id: I8017154bb62bdcc6c6490e720ecfe9cde090c161
  BUG: 1483058
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-on: https://review.gluster.org/18068
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: disallow volume specific options to be set with all as volume name (Atin Mukherjee, 2017-08-18, 1 file, -0/+8)

  All the .validate_fn callbacks defined in the volume map entry table refer to a volinfo object, so if we end up trying to set a volume-level option cluster-wide, glusterd crashes.

  Change-Id: I7c877aee0ff5c8c1d8c95662fdc8c8923355ae7b
  BUG: 1482344
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/18052
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

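  An example of the kind of request that should now be rejected cleanly instead of crashing glusterd; the option here is only an illustrative volume-level tunable:

      # rejected: a volume-level option cannot be applied with "all" as the
      # volume name
      gluster volume set all cluster.quorum-type auto
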
* glusterd: introduce max-port range (Atin Mukherjee, 2017-08-17, 7 files, -14/+63)

  The glusterd.vol file has always had a (commented out) option to indicate the base-port from which the portmapper allocation starts. This patch brings in a max-port configuration so that one can limit the range of ports to which gluster is allowed to bind.

  Fixes: #305
  Change-Id: Id7a864f818227b9530a07e13d605138edacd9aa9
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/18016
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>

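  A sketch of the corresponding glusterd.vol settings; the port values below are only examples:

      # /etc/glusterfs/glusterd.vol (sketch)
      volume management
          type mgmt/glusterd
          option working-directory /var/lib/glusterd
          # restrict the port range glusterd may hand out to bricks
          option base-port 49152
          option max-port  60999
      end-volume
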
* glusterd: improve op-version error message (Atin Mukherjee, 2017-08-16, 1 file, -2/+6)

  In a heterogeneous cluster, if an admin sets a volume option which requires an op-version higher than the current one, the error message only displays the version number of the new tunable. A better user experience is to include the supported op-version in the error message as well.

  Change-Id: Ib42dadc574a401f79364178cb41ad21fb0b4432b
  BUG: 1328994
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/18020
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Joe Julian <me@joejulian.name>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>

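  The situation being improved looks roughly like this; the volume name, option and op-version value are illustrative:

      # on a cluster still running an older op-version, setting a newer
      # tunable fails; the error now also reports the op-version the cluster
      # currently supports
      gluster volume set test-vol cluster.max-bricks-per-process 10
      # once every node is upgraded, the cluster op-version can be raised
      gluster volume set all cluster.op-version 31202
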
* glusterd: Fix the debug and error-gen turned on issue (Ryan Ding, 2017-08-16, 1 file, -0/+2)

  Problem: When enabling either the debug or the error-gen xlator, both of them get turned on together. This happens because, when adding debug xlators into the graph, their on/off status is never checked.

  Fix: When adding a debug xlator, check its status and skip those with a value of "off".

  Change-Id: Ifa79314ff337b60ee941cfb0308c1ccbe42d027f
  BUG: 1395492
  Signed-off-by: Zhang Huan <zhanghuan@open-fs.com>
  Reviewed-on: https://review.gluster.org/15849
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* Infra to identify process (hari gowtham, 2017-08-16, 6 files, -1/+17)

  Problem: Currently we can't identify which process is running and how many instances of it are available.

  Fix: Name the process when it is spawned, send the name to the server and save it in the client_t. The processes that abide by this change in this patch are:
  1) fuse mount
  2) rebalance
  3) selfheal
  4) tier
  5) quota
  6) snapshot
  7) brick
  8) gfapi (by default; gfapi.<processname> if a processname is found)

  Note: fuse gets the process name native-fuse-client by default. If the user gives a name for the fuse mount and spawns it, it will be of the form --process-name native-fuse-client.<name_specified>. This can be used by processes such as the aux mount done by quota, geo-rep, etc. by adding another option to the aux mount, e.g. " -o process-name=gsync_mount".

  Updates: #178
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: Ie4d02257216839338043737691753bab9a974d5e
  Reviewed-on: https://review.gluster.org/17957
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>

* glusterd: Gluster should keep PID file in correct location (Gaurav Kumar Garg, 2017-08-11, 6 files, -24/+152)

  Currently Gluster keeps the process pid information of all the daemons and brick processes in the Gluster configuration file directory (i.e., /var/lib/glusterd/*). These pid files should be separate from configuration files: deletion of the configuration file directory might result in serious problems. Also, /var/run/gluster is the default placeholder directory for pid files. So, with this fix, Gluster will keep the pid information of all processes in the /var/run/gluster/* directory.

  Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
  BUG: 1258561
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-on: https://review.gluster.org/13580
  Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

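  A quick way to inspect the new layout; the exact file names and subdirectories are only indicative and may vary by daemon:

      # daemon and brick pid files are now expected under this directory
      # instead of under /var/lib/glusterd
      ls -R /var/run/gluster/
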
* mgmt/glusterd: Provide more information in command message (Ashish Pandey, 2017-08-10, 1 file, -3/+5)

  Problem: When more than one brick is present on the same node while creating a volume, we get a warning message that the setup is not optimal. We need to add more information to this error/warning.

  Solution: Add the following line to the current message: "Bricks should be on different nodes to have best fault tolerant configuration."

  Change-Id: Ica72bd6e68dff7e41c37617f3b775a981fa40c69
  BUG: 1480099
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: https://review.gluster.org/18014
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

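  The warning is triggered by a create command such as the following (host and brick paths are invented); appending force accepts the sub-optimal layout anyway:

      gluster volume create repvol replica 3 node1:/bricks/b1 node1:/bricks/b2 node1:/bricks/b3
      # proceed despite the warning
      gluster volume create repvol replica 3 node1:/bricks/b1 node1:/bricks/b2 node1:/bricks/b3 force
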
* glusterd: Block brick attach request till the brick's ctx is set (Mohit Agrawal, 2017-08-08, 3 files, -24/+57)

  Problem: In a multiplexing setup in a container environment we hit a race where, before the first brick finishes its handshake with glusterd, the subsequent attach requests went through; they actually failed and glusterd had no mechanism to realize it. This resulted in all such bricks not being active, so clients were not able to connect.

  Solution: Introduce a new flag port_registered in glusterd_brickinfo to make sure the pmap_signin has finished before the subsequent attach brick requests are processed.

  Test: To reproduce the issue, the following steps were followed:
  1) Create 100 volumes on 3 nodes (1x3) in a CNS environment
  2) Enable brick multiplexing
  3) Reboot one container
  4) Run the command below:
     for v in `gluster v list`
     do
       glfsheal $v | grep -i "transport"
     done
  After applying the patch the command should not fail.

  Note: A big thanks to Atin for suggesting the fix.

  BUG: 1478710
  Change-Id: I8e1bd6132122b3a5b0dd49606cea564122f2609b
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/17984
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* glusterd: Add geo-replication session details to get-state output (Samikshan Bairagya, 2017-08-04, 3 files, -1/+132)

  This commit adds support to the get-state CLI to capture details on geo-replication sessions, as obtained in `gluster volume geo-replication status detail`, in its output.

  Fixes: #291
  Change-Id: I2fbcba70bfdaf439522637234805545194777ed4
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17941
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shubhendu Tripathi <shtripat@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* logging: localtime logging, cmdline, volume set option (Kaleb S. KEITHLEY, 2017-08-03, 8 files, -4/+155)

  Despite the fact that appliances generally use UTC, some users really want log entries in localtime.

  fixes gluster/glusterfs#272
  feature page: https://review.gluster.org/17807

  Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/16911
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

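  A hedged usage sketch; the option name cluster.localtime-logging and the command-line switch are assumptions based on this commit's title and feature page and may differ in detail:

      # cluster-wide volume set option (assumed name)
      gluster volume set all cluster.localtime-logging enable
      # assumed equivalent command-line switch when starting a daemon by hand
      glusterd --localtime-logging
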
* tier: separation of attach-tier from add-brick (hari gowtham, 2017-08-01, 7 files, -5/+352)

  PROBLEM: Both attach-tier and add-brick have the same RPC and set of code. This becomes a hurdle while trying to implement add-brick on a tiered volume.

  FIX: This patch separates add-brick and attach-tier, giving them separate RPCs.

  Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
  BUG: 1376326
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: https://review.gluster.org/15503
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* glusterd: streamline logic flow in glusterd_validate_quorum() (Michael Adam, 2017-08-01, 1 file, -5/+7)

  Make an earlier exit when the volume does not use server quorum, thereby sparing the actual quorum calculation in this case. This also increases the overall readability of the function. The patch is best viewed with the --patience diff option to understand what it does (e.g. "git show --patience").

  Change-Id: Ifce50bc3f73d79d3d6226473661a83696d65149a
  BUG: 1476719
  Signed-off-by: Michael Adam <obnox@samba.org>
  Reviewed-on: https://review.gluster.org/17924
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: improve intuitivity of check in glusterd_get_quorum_cluster_counts (Michael Adam, 2017-07-31, 1 file, -1/+1)

  More intuitive to check for ret == 0 than !ret here...

  Change-Id: I8177a0bc8f266331187f5f2eeefea8a25cfcb30a
  Signed-off-by: Michael Adam <obnox@samba.org>
  BUG: 1476410
  Reviewed-on: https://review.gluster.org/17912
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* glusterd: fix misleadingly named GF_FOR_EACH_ENTRY_IN_DIR (Jeff Darcy, 2017-07-31, 3 files, -13/+13)

  What it really does is skip irrelevant entries like . and .. until we're at an entry we might actually care about. Renamed to GF_SKIP_IRRELEVANT_ENTRIES accordingly.

  Change-Id: If0464451a8243c29c0a93b4c6f0f0eda2fade44c
  Signed-off-by: Jeff Darcy <jdarcy@fb.com>
  Reviewed-on: https://review.gluster.org/17901
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Jeff Darcy <jeff@pl.atyp.us>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* storage/posix: Add virtual xattr to fetch path from gfid (Kotresh HR, 2017-07-28, 1 file, -0/+5)

  The gfid2path infra stores "pargfid/bname" as an xattr value for each non-directory entry; hardlinks get a separate xattr. This xattr key is internal and is not exposed to applications. A virtual xattr is exposed for applications to fetch the path from the gfid.

  Internal xattr: trusted.gfid2path.<xxhash>
  Virtual xattr: glusterfs.gfidtopath

  getfattr -h -n glusterfs.gfidtopath /<aux-mnt>/.gfid/<gfid>

  If there are hardlinks, it returns all the paths separated by ':'. A volume set option is introduced to change the delimiter to a required string of max length 7:

  gluster vol set gfid2path-separator ":::"

  Updates: #139
  Change-Id: Ie3b0c3fd8bd5333c4a27410011e608333918c02a
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: https://review.gluster.org/17785
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>

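  Putting the pieces above together, a resolution round-trip might look like this; the mount points and file name are hypothetical, and glusterfs.gfid.string is assumed to be available for fetching a file's gfid:

      # fetch the gfid of a file from a regular mount (assumed virtual xattr)
      getfattr -n glusterfs.gfid.string /mnt/test-vol/dir/file.txt
      # resolve the path(s) back from the gfid on an aux-gfid mount
      getfattr -h -n glusterfs.gfidtopath /mnt/aux/.gfid/<gfid>
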
* glusterd: glusterd fails to start when peer's network interface is down (Gaurav Yadav, 2017-07-28, 3 files, -2/+17)

  Problem: glusterd fails to start on nodes where glusterd tries to come up even before the network is up.

  Fix: On startup glusterd tries to resolve the brick path, which is based on hostname/IP; in the above scenario, when the network interface is not up, glusterd is not able to resolve the brick path using the IP address or hostname. With this fix glusterd will use the UUID to resolve the brick path.

  Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
  BUG: 1472267
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-on: https://review.gluster.org/17813
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: make peerfile parsing more robust (Jeff Darcy, 2017-07-27, 1 file, -14/+38)

  Differential Revision: https://phabricator.intern.facebook.com/D5498639
  Change-Id: I3184ed8f3dadbdcffd46f4ade855fa93131efa82
  BUG: 1462969
  Signed-off-by: Jeff Darcy <jdarcy@fb.com>
  Reviewed-on: https://review.gluster.org/17885
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: highlight arbiter brick in get-state (Atin Mukherjee, 2017-07-26, 1 file, -1/+25)

  Fixes: #278
  Change-Id: I1af5255127457a70e6362a2c20c53ee533e27c29
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17864
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-by: Shubhendu Tripathi <shtripat@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* glusterd: fix compile warning in glusterd_volinfo_copy_brickinfo() (Niels de Vos, 2017-07-25, 1 file, -1/+1)

  gcc 7 (default in Fedora 26) complains about the following:

  glusterd-utils.c: In function 'glusterd_volinfo_copy_brickinfo':
  glusterd-utils.c:4279:54: warning: comparison between pointer and zero character constant [-Wpointer-compare]
           if (old_brickinfo->real_path == '\0') {
                                        ^~
  glusterd-utils.c:4279:29: note: did you mean to dereference the pointer?
           if (old_brickinfo->real_path == '\0') {
                           ^

  Comparing a char* with a char is not correct in any case. Instead, compare it to NULL and the char[0] with '\0'.

  Change-Id: Ie5b925cd200416a1e2fa035046005f421994e641
  Updates: #259
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17847
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* posix: Needs to reserve disk space to prevent the brick from getting full (Mohit Agrawal, 2017-07-25, 1 file, -0/+4)

  Problem: Currently there is no option available in the posix xlator to save the disk from getting full.

  Solution: Introduce a new option storage.reserve in the posix xlator to configure a disk threshold. The posix xlator spawns a thread to update the disk space status in the posix private structure, and the same flag is checked by every posix fop before starting the operation. If the flag value is 1, it sets op_errno to ENOSPC and goes out from the fop.

  BUG: 1471366
  Change-Id: I98287cd409860f4c754fc69a332e0521bfb1b67e
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/17780
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

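  A minimal usage sketch, assuming the option takes a percentage of the brick filesystem to keep free and a hypothetical volume named test-vol:

      # keep roughly 5% of each brick free; writes start failing with ENOSPC
      # once the threshold is crossed
      gluster volume set test-vol storage.reserve 5
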
* glusterd: Add option to get all volume options through get-state CLI (Samikshan Bairagya, 2017-07-25, 1 file, -7/+38)

  This commit makes the get-state CLI capable of returning the values of all volume options for all volumes. This is similar to what you get when you issue a `gluster volume get <volname> all` command. This is the new usage for the get-state CLI:

  # gluster get-state [<daemon>] [[odir </path/to/output/dir/>] [file <filename>]] [detail|volumeoptions]

  Fixes: #277
  Change-Id: Ice52d936a5a389c6fa0ba5ab32416a65cdfde46d
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17858
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>

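  Example invocations based on the usage line above; the output directory and file names are arbitrary:

      # dump the default get-state data
      gluster get-state glusterd odir /var/tmp/ file gluster-state.txt
      # include every volume option value, like 'gluster volume get <vol> all'
      gluster get-state glusterd odir /var/tmp/ file gluster-state-opts.txt volumeoptions
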
* glusterd: add rebal estimates time in get-state (Atin Mukherjee, 2017-07-24, 1 file, -0/+2)

  Fixes: #279
  Change-Id: If62fa59042604c9450749d3012c7a962ed0eb374
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17862
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* posix: option to handle the shared bricks for statvfs() (Amar Tumballi, 2017-07-24, 8 files, -12/+97)

  Currently the 'storage/posix' xlator has an option called `export-statfs-size no`, which exports zero as the value for a few fields in `struct statvfs`. In the case of a backend brick shared between multiple brick processes, the values of these fields should be `field_value / number-of-bricks-at-node`. This way, even the issue of 'min-free-disk' etc. at different layers is handled properly when the statfs() syscall is made.

  Fixes #241
  Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-on: https://review.gluster.org/17618
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: fix brick start race (Atin Mukherjee, 2017-07-20, 1 file, -0/+19)

  Problem: Another race: when glusterd is restarted, glusterd_brick_start() is called multiple times due to friend handshaking. In one instance, when one of the bricks was attempted to be attached to the existing brick process, send_attach_req failed as the first brick itself was still not up. We then did a synclock_unlock() followed by a sleep of 1 sec; before the same thread woke up, another thread tried to start the same brick process and then assumed that it had to start a fresh brick process.

  Solution:
  1. If a brick is in the starting phase (brickinfo->status == GF_BRICK_STARTING), there is no need to reattempt to start the brick.
  2. While initiating attach_req, set brickinfo->status to GF_BRICK_STARTING.

  Change-Id: Ib007b6199ec36fdab4214a1d37f99d7f65ef64da
  BUG: 1465559
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17840
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* glusterd: Set default value for cluster.max-bricks-per-process to 0 (Samikshan Bairagya, 2017-07-19, 3 files, -15/+31)

  When brick multiplexing is enabled and "cluster.max-bricks-per-process" isn't explicitly set, multiplexing happens without any limit. But the default value set for that tunable is 1, which is confusing. This commit sets the default value to 0 and prevents the user from setting this value to 1 when brick multiplexing is enabled. The default value of 0 denotes that brick multiplexing can happen without any limit on the number of bricks per process.

  Change-Id: I4647f7bf5837d520075dc5c19a6e75bc1bba258b
  BUG: 1472417
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17819
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

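  How the tunable behaves after this change, sketched with illustrative values:

      # 0 (the new default) means no limit on bricks per glusterfsd process
      gluster volume set all cluster.max-bricks-per-process 0
      # rejected when brick multiplexing is enabled, since a limit of 1
      # defeats the purpose of multiplexing
      gluster volume set all cluster.max-bricks-per-process 1
      # any higher limit is accepted
      gluster volume set all cluster.max-bricks-per-process 50
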
* libglusterfs: Name threads on creation (Raghavendra Talur, 2017-07-19, 1 file, -2/+2)

  Set names to threads on creation for easier debugging.

  Output of top -H -p <PID-OF-GLUSTERFSD>

  Before:
  19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
  19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
  19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
  19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  5398  root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  7881  root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd

  After:
  19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustertimer
  19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustermemsweep
  19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc0
  19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc1
  19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll0
  19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteridxwrker
  19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteriotwr0
  19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrssign
  19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrswrker
  19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterclogecon
  19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd0
  19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd1
  19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd2
  19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixjan
  19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixfsy
  25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll1
  5398  root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll2
  7881  root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixhc

  Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
  BUG: 1254002
  Updates: #271
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: https://review.gluster.org/11926
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* glusterd: Add description field to global options for brick-mux (Samikshan Bairagya, 2017-07-17, 1 file, -2/+12)

  Currently the "cluster.brick-multiplex" and "cluster.max-bricks-per-process" options do not show anything in the description field when gluster volume set help is called. This commit adds the description fields for these 2 options.

  Change-Id: I3d162c61fa2774dd994f046e305d457f0fd43192
  BUG: 1471790
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17790
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>