path: root/xlators/mgmt
...
* glusterd: copy snapshot object during duplication of volfile
  Author: Mohammed Rafi KC | Date: 2015-11-26 | Files: 1 | Lines: -0/+2

  Backport of http://review.gluster.org/#/c/12734/

  When creating the duplicate volfile for the hot/cold tier, the snapshot
  object needs to be copied into the volfile, as it is required to
  generate the snapshot brick volfile.

  > Change-Id: I39ccfa20cd1c16ef2801901e3cd3a31c76f8995d
  > BUG: 1284789
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>

  Change-Id: Ia0892dfc3af24ee428e0aa0a3e23063a91049a57
  BUG: 1285629
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12756
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* cli: Indicate which brick of the replica is the arbiter
  Author: Ravishankar N | Date: 2015-11-25 | Files: 1 | Lines: -0/+6

  Backport of http://review.gluster.org/#/c/12747/

  Enhances the CLI output for arbiter volumes as requested in the BZ.

  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Change-Id: I28cc34d7d19def043d54291cede25a58dbcc5051
  BUG: 1283570
  Reviewed-on: http://review.gluster.org/12748
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Change volume start into v3 framework
  Author: Mohammed Rafi KC | Date: 2015-11-25 | Files: 4 | Lines: -12/+100

  As part of volume start, if the volume is of tier type then the tiering
  daemon also needs to be started, but only after all the bricks have been
  started. By changing volume start into the v3 framework, tier start can
  be done in the post-validate phase.

  Backport of:
  > Change-Id: If921067f4739e6b9a3239fc5717696eaf382c22a
  > BUG: 1284372
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12718
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  (cherry picked from commit 03f731a8b32db7bef7c5e9ffc11c16f670ffe960)

  Change-Id: Id6fd5555d16d605eb344efd9b9e261644469ecef
  BUG: 1285335
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12749
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd/bitrot: Integration of bad files from bitd with scrub status command
  Author: Gaurav Kumar Garg | Date: 2015-11-23 | Files: 1 | Lines: -1/+40

  This patch is a backport of: http://review.gluster.org/#/c/12720/

  Currently the scrub status command does not display the list of all the
  bad files. All the bad files are available in the bitd daemon. With this
  patch the scrub status command will display the list of all the bad
  files.

  >> Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748
  >> BUG: 1207627
  >> Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>

  Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748
  BUG: 1283881
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12725
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd/geo-rep: Adding ssh-port option for geo-rep create
  Author: Kotresh HR | Date: 2015-11-23 | Files: 1 | Lines: -4/+33

  Geo-replication uses the default ssh port 22 for setup, i.e., to
  distribute ssh keys to the slaves. In container environments a custom
  port number might be used. Hence, to support a custom ssh port, an
  option is provided in the geo-rep create command to take the port
  number.

  Change-Id: I0fb61959b1c085342b8e4c21ac4e076fba5462f1
  BUG: 1283060
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/12504
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  (cherry picked from commit 5bb3c521431cc27b2826acd889bffb2f90ae7f73)
  Reviewed-on: http://review.gluster.org/12652
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
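  A rough usage sketch for the option described above, assuming sshd on
  the slaves listens on port 2222 (the port number and the volume/host
  names are placeholders, not taken from the patch):

    # distribute pem keys over a non-default ssh port during geo-rep setup
    gluster volume geo-replication mastervol slavehost::slavevol \
        create ssh-port 2222 push-pem

  The rest of the geo-rep session setup (config, start) is unchanged.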
* geo-rep: Allow setting config remote_gsyncd
  Author: Aravinda VK | Date: 2015-11-23 | Files: 1 | Lines: -1/+0

  Restrictive ssh is not used in containerized environments where the
  networking configuration is "net=host". SSH pem keys are pushed to the
  slave without the gsyncd path in them (Patch #12459), so the actual
  remote_gsyncd path needs to be set to the real path of gsyncd. With this
  patch, remote_gsyncd is removed from the reserved option list.

  Change-Id: Ia2063e4654e378b62b2414bdad21143c86ad1b9a
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  BUG: 1283060
  Reviewed-on: http://review.gluster.org/12472
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  (cherry picked from commit 7de355b42dc1f8313db3ffc775a0e1708ba85243)
  Reviewed-on: http://review.gluster.org/12644
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: cli command implementation for bitrot scrub status
  Author: Gaurav Kumar Garg | Date: 2015-11-22 | Files: 7 | Lines: -10/+553

  This patch is a backport of: http://review.gluster.org/10231

  The CLI command for bitrot scrub status is:

    gluster volume bitrot <volname> scrub status

  The command shows the statistics of the bitrot scrubber. On execution it
  prints some common scrubber tunables of volume <VOLNAME>, followed by
  the scrubber statistics of the individual nodes.

  Sample output for a single node:

    Volume name : <VOLNAME>
    State of scrub: Active
    Scrub frequency: biweekly
    Bitrot error log location: /var/log/glusterfs/bitd.log
    Scrubber error log location: /var/log/glusterfs/scrub.log
    =========================================================
    Node name:
    Number of Scrubbed files:
    Number of Unsigned files:
    Last completed scrub time:
    Duration of last scrub:
    Error count:
    =========================================================

  This is just the infrastructure. The list of bad files, last scrub time,
  and error count values will be taken care of by
  http://review.gluster.org/#/c/12503/ and
  http://review.gluster.org/#/c/12654/.

  >> Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
  >> BUG: 1207627
  >> Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  >> Reviewed-on: http://review.gluster.org/10231
  >> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  >> Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  >> Tested-by: Gluster Build System <jenkins@build.gluster.com>

  Change-Id: I45ed94e5e0e78a1e007c30eb0b252f74cf3c9187
  BUG: 1283881
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12704
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* Tiering: Adding space in the error message for detach-tier
  Author: hari gowtham | Date: 2015-11-20 | Files: 1 | Lines: -1/+1

  Backport of: http://review.gluster.org/#/c/12657/

  > Change-Id: I730cf7fa6fbfb3842d337cd3d7b8394b9c3876d8
  > BUG: 1283488
  > Signed-off-by: hari gowtham <hgowtham@redhat.com>
  > Reviewed-on: http://review.gluster.org/12657
  > Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>

  Change-Id: I0858a6d898fb22c6af87c79af7adc18f924a4f75
  BUG: 1283856
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12703
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* cluster/tier: make cache mode default for tiered volumes
  Author: Dan Lambright | Date: 2015-11-19 | Files: 2 | Lines: -1/+4

  The default mode for tiered volumes must be cache. The current test mode
  was for engineering and should ordinarily not be used by customers.

  This is a backport of 12581.

  > Change-Id: I20583f54a9269ce75daade645be18ab8575b0b9b
  > BUG: 1282076
  > Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  > Reviewed-on: http://review.gluster.org/12581
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>

  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Change-Id: Ib2629d6d3e9b9374fddb5bc21cf068a1bcd96b9d
  BUG: 1283288
  Reviewed-on: http://review.gluster.org/12647
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
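  For reference, the mode can still be set explicitly per volume; the
  cluster.tier-mode option and its values come from the watermark patch
  further down this log ("vol1" is just a placeholder):

    # keep the shipping default
    gluster volume set vol1 cluster.tier-mode cache
    # engineering/regression-test behaviour only
    gluster volume set vol1 cluster.tier-mode test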
* snapshot: Inherit snap-max-hard-limit from original volume
  Author: Avra Sengupta | Date: 2015-11-17 | Files: 2 | Lines: -1/+1

  Backport of http://review.gluster.org/#/c/12437/

  A snapshot should inherit snap-max-hard-limit from the original volume
  while being created, and when being restored to, it should restore the
  same. Similarly, a clone taken from a snapshot should inherit
  snap-max-hard-limit from the snapshot.

  Change-Id: If8e90e2ffc10e22086b803ac8e2638a16bcec968
  BUG: 1277390
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/12437
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  (cherry picked from commit 1f74a3efbd0337759878ffff5cd4ee6782ddfe3f)
  Reviewed-on: http://review.gluster.org/12492
* mgmt/glusterd: Store arbiter-count and restore it
  Author: Pranith Kumar K | Date: 2015-11-17 | Files: 4 | Lines: -1/+37

  Backport of http://review.gluster.com/12475

  Problem:
  1) Glusterd doesn't remember the arbiter information of a replica volume
     in the store. When glusterd goes down and comes back up, arbiter
     volumes become plain replica volumes.
  2) Glusterd doesn't import/export arbiter information to/from the other
     peers.
  3) Volume info doesn't show any arbiter count in the output.

  Fix:
  1) Persist arbiter information in the glusterd store.
  2) Import/export arbiter information of the volume.
  3) Change the volume info output to show the arbiter count.

  > Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
  > BUG: 1276675
  > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

  BUG: 1276907
  Change-Id: I95c9857d645e02831892092bdd07539cc1a58270
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12479
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
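  A quick way to exercise the fix above, sketched with made-up host and
  brick paths (the "replica 3 arbiter 1" create syntax is the usual way to
  make an arbiter volume; everything else is illustrative):

    # create an arbiter volume: two data bricks plus one arbiter brick
    gluster volume create arbvol replica 3 arbiter 1 \
        h1:/bricks/b1 h2:/bricks/b2 h3:/bricks/arb
    # after this patch the arbiter count survives a glusterd restart
    gluster volume info arbvol   # should still report the arbiter count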
* tier/ctr: Providing option to record or ignore metadata heat
  Author: Joseph Fernandes | Date: 2015-11-16 | Files: 1 | Lines: -0/+16

  Currently we heat up a file for both data and metadata writes. Here we
  provide a CTR xlator option called "ctr-record-metadata-heat" with which
  the admin can decide whether to record metadata heat, i.e., whether to
  heat up a file on metadata writes or not.

  Metadata operations are:
  a. setattr: explicit changing of atime/mtime using utimes, changing of
     posix permissions of the file
  b. rename: renaming a file
  c. unlink, link: adding or deleting hardlinks
  d. xattrs: setting or removal of xattrs

  NOTE: atime, mtime and ctime changes through writev, readv, truncate,
  mknod and create are not considered here, as these fops are data and
  primary metadata fops.

  By default "ctr-record-metadata-heat" is off. The admin can switch it on
  using the gluster volume set command.

  Backport of http://review.gluster.org/12540

  > Change-Id: I91157509255dd5cb429cda2b6d4f64582e155e7b
  > BUG: 1279166
  > Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  > Reviewed-on: http://review.gluster.org/12540
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>

  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Change-Id: I986c319f0cc337b0692a1dd02f71254e786afac4
  BUG: 1282315
  Reviewed-on: http://review.gluster.org/12582
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
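  A sketch of how an admin might toggle the option above, assuming it is
  exposed under the "features." namespace like other CTR options (the
  prefix and the volume name are assumptions; the option name comes from
  the commit):

    # count metadata writes (rename, link/unlink, setattr, xattrs)
    # towards a file's heat
    gluster volume set tiervol features.ctr-record-metadata-heat on
    # revert to the default: only data writes heat a file
    gluster volume set tiervol features.ctr-record-metadata-heat off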
* glusterd: brick failed to start
  Author: Mohammed Rafi KC | Date: 2015-11-16 | Files: 1 | Lines: -1/+3

  Brick volfiles are generated in the post-validate phase if the cluster
  is running a version higher than GLUSTER_3_7_5; otherwise they are
  generated in the syncop path. If the code falls back to syncop and the
  volume is stopped, we were returning from the operation without
  generating the volfiles.

  Backport of:
  > Change-Id: I3b16ee29de19c5d34e45d77d6b7e4b665c2a4653
  > BUG: 1282322
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12552
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  (cherry picked from commit 571cbcf56ef865d64ebdb1621c791fe467501e52)

  Change-Id: I3b16ee29de19c5d34e45d77d6b7e4b665c2a4653
  BUG: 1279351
  Reviewed-on: http://review.gluster.org/12585
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* cluster/tier: Disallow detach commit when detach in progress
  Author: Dan Lambright | Date: 2015-11-16 | Files: 1 | Lines: -0/+24

  1. Check if detach is running, disallow detach commit if so.
  2. Cleanup shutdown of tier daemon on detach: do not rerun fix-layout,
     do not send incorrect status back to glusterd.

  Change-Id: I97202f748773c1176396a4ffd32a4c7fa9b9c1bc
  BUG: 1264441
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/12272
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Joseph Fernandes
* snapshot: copying nfs-ganesha export file
  Author: Jiffin Tony Thottan | Date: 2015-11-10 | Files: 4 | Lines: -0/+213

  Backport of http://review.gluster.org/#/c/12483/

  While taking a snapshot, the export file used by the volume should be
  copied to the snap directory, so that when the snapshot is restored the
  volume can retain all of its configuration for exporting via
  nfs-ganesha. The export file is stored at "/etc/ganesha/export" in the
  format "export.<volname>.conf".

  The fix handles the given cases in the following manner:

  case a: nfs-ganesha (global) is ON during snapshot and restore.
    i.)  The volume was exported during snapshot. When we restore the
         snapshot, the volume should be exported back with the old
         configuration file.
    ii.) The volume was unexported during snapshot. When we restore the
         snapshot, the volume should be unexported again.

  case b: nfs-ganesha is ON during snapshot and OFF during restore.
    The volume was exported during snapshot. When we restore the snapshot,
    the conf will be copied to the corresponding location and, if
    nfs-ganesha is enabled again, the volume will be exported.

  For clones, the export conf file will be created in /etc/ganesha/export
  and then exported via ganesha.

  Upstream reference:
  (cherry picked from commit 5583bac79851d24f0a552478b361049fe63c32b7)
  > Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
  > BUG: 1257709
  > Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  > Reviewed-on: http://review.gluster.org/12034
  > Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

  Change-Id: I19725ec3d093fb32067bba4aba7f5bc3fd61b0e3
  BUG: 1257710
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/12483
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* snapshot: Don't display snapshot's hard-limit and soft-limit in vol info
  Author: Avra Sengupta | Date: 2015-11-05 | Files: 1 | Lines: -0/+9

  Backport of http://review.gluster.org/#/c/12443/

  The snap-max-hard-limit displayed in the volume info is currently
  propagated from the system's snap-max-hard-limit, as that is a global
  option common to all volumes, and hence ends up showing the system's
  snap-max-hard-limit. We should not display snap-max-hard-limit and
  snap-max-soft-limit in the volume info at all, as these are snap config
  options and should be set and displayed via the snap config command.

  Modified bug-1113476.t to test the same behaviour.

  Change-Id: I90891f0cf7fb39fd686787297c7f7cd8c1e7daa1
  BUG: 1277394
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/12443
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  (cherry picked from commit 2e56bde3ea952beabd27cdf8a3a10da563a00bcc)
  Reviewed-on: http://review.gluster.org/12493
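  For context, the snap config command referred to above looks roughly
  like this (the limit values and the volume name are placeholders):

    # view the effective snapshot limits for one volume, or system-wide
    gluster snapshot config vol1
    gluster snapshot config
    # set the per-volume hard limit and the system-wide soft limit
    gluster snapshot config vol1 snap-max-hard-limit 100
    gluster snapshot config snap-max-soft-limit 90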
* v info for disperse count fails while upgrading
  Author: Hari Gowtham | Date: 2015-11-04 | Files: 1 | Lines: -4/+12

  Backport of: http://review.gluster.org/#/c/12495/

  The upgrade from 3.7.5-3 to 3.7.5-5 causes the type and number of bricks
  for the cold tier to be printed incorrectly.

  > Change-Id: Ia45b97c35fef88f9c66e15e5bdb93fd30cb342af
  > BUG: 1277481
  > Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  > Reviewed-on: http://review.gluster.org/12495
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>

  Change-Id: Ic61dd1378c8efe37d797328719ba16e64ff76f55
  BUG: 1277984
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12506
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: move new feature (tiering) enum op to the last of the array
  Author: Gaurav Kumar Garg | Date: 2015-11-03 | Files: 1 | Lines: -2/+5

  Currently the new tiering feature has the GD_OP_DETACH_TIER and
  GD_OP_TIER_MIGRATE enums in the middle of the glusterd_op_ enum array.
  In a multi-node cluster, when one of the nodes is upgraded from a lower
  version to a higher version, executing a command can end up with a
  mismatch in enum ops at the receiver end, causing command execution to
  fail.

  The fix is to put every new-feature glusterd operation enum code at the
  end of the enum array.

  Change-Id: I640f811065e8c84add624237aa80fed43fde5967
  BUG: 1276029
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12486
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* quota: add version to quota xattrs
  Author: vmallika | Date: 2015-11-02 | Files: 6 | Lines: -8/+45

  This is a backport of http://review.gluster.org/#/c/12386/

  When quota is disabled and the clean-up process terminates without
  completely cleaning up the quota xattrs, the stale xattrs can mess up
  the accounting when quota is enabled again.

  A version number is suffixed to all quota xattrs. This version number is
  specific to the marker xlator, i.e., when quota xattrs are requested by
  quotad/client, marker will remove the version suffix from the key before
  sending the response.

  > Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
  > BUG: 1272411
  > Signed-off-by: vmallika <vmallika@redhat.com>
  > Reviewed-on: http://review.gluster.org/12386
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

  Change-Id: I67b1b930b28411d76b2d476a4e5250c52aa495a0
  BUG: 1277080
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/12487
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
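  A hypothetical illustration of the versioning scheme described above
  (the xattr key shown and the ".1" suffix are assumptions for the sketch,
  not taken from the patch):

    # on the brick, marker stores the versioned key,
    # e.g. trusted.glusterfs.quota.size.1
    getfattr -d -m . -e hex /bricks/b1/dir
    # quotad/clients keep requesting the unversioned key; marker strips the
    # suffix in its response, so re-enabling quota can use a new version
    # and any stale xattrs left by an interrupted cleanup are ignored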
* glusterd: fix info file checksum mismatch during upgrade
  Author: anand | Date: 2015-11-01 | Files: 1 | Lines: -10/+15

  Issue: probing a new node (>= 3.6) from a 3.5 cluster moves the peer to
  the rejected state.

  Fix: Disperse volume support was added in the 3.6 release, so write the
  disperse fields (disperse_count=0 and redundancy_count=0) into the vol
  info file only if the cluster version supports them.

  > Change-Id: I11d5e2e337b9bbaddc8e52ca7295ba481beb1132
  > BUG: 1276423
  > Signed-off-by: anand <anekkunt@redhat.com>
  > Reviewed-on: http://review.gluster.org/12464
  > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Kaushal M <kaushal@redhat.com>
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>

  BUG: 1276905
  Change-Id: Ia601c068706a96621203132429d4417fa1c96f76
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/12477
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cluster/tier: enable CTR on attach tier
  Author: Dan Lambright | Date: 2015-10-30 | Files: 1 | Lines: -0/+3

  CTR is currently disabled by default and must be manually enabled for
  tiering to start. This is an overhead on the administrator and easy to
  overlook. Enable it automatically when a tier is attached.

  This is a backport of 12420.

  > Change-Id: I0c29de8762faec1bfe6d1376a57eeef3357ad15a
  > BUG: 1274847
  > Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  > Reviewed-on: http://review.gluster.org/12420
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>

  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Change-Id: I43a32ac0c88b44d10aa0f40b18b0564ae1e17321
  BUG: 1276671
  Reviewed-on: http://review.gluster.org/12474
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
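  A sketch of the workflow this simplifies, with placeholder bricks and
  the assumption that the CTR switch is features.ctr-enabled:

    # attach a hot tier; after this patch CTR is turned on as part of attach
    gluster volume attach-tier tiervol replica 2 hot1:/ssd/b1 hot2:/ssd/b2
    # previously the admin also had to remember to run:
    # gluster volume set tiervol features.ctr-enabled on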
* cluster/ec: Implement gfid-hash read-policy
  Author: Pranith Kumar K | Date: 2015-10-29 | Files: 1 | Lines: -0/+8

  Add a policy in ec that performs reads from the same bricks as long as
  they are good. Based on the gfid of the file/directory, it determines
  the bricks to be considered for reading.

  > Change-Id: Ic97b5c54c086a28b5e07a330a4fd448551b49376
  > BUG: 1261260
  > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  > Reviewed-on: http://review.gluster.org/12133
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>

  BUG: 1270705
  Change-Id: Ibf0d21d7210125fa7aaa12b3f98bcdf7cd89ef02
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12456
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: call glusterd_store_volinfo in bump up op-version
  Author: Atin Mukherjee | Date: 2015-10-28 | Files: 1 | Lines: -0/+7

  After an upgrade, the op-version is expected to be updated through
  gluster volume set. If the new op-version introduces any feature that
  changes the volinfo structure, not storing the default values of these
  new options would result in cksum issues.

  Backport of:
  > Change-Id: I57b4667f3403839811735bf66bef29e5200a9241
  > BUG: 1262805
  > Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  > Reviewed-on: http://review.gluster.org/12171
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  > Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  > (cherry picked from commit c3ed484af54e32c1ef2300cda652604d75dc9d20)

  Change-Id: I2640a2cfce5e43a9f21ec1d7e1751327b8b8f05d
  BUG: 1262793
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/12435
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tier: Typo while setting the wrong value of low/hi watermark
  Author: hari gowtham | Date: 2015-10-28 | Files: 1 | Lines: -2/+3

  Backport of: http://review.gluster.org/#/c/12432/

  While setting a wrong value for watermark-hi/low, the output shows
  "compatiblevalue" whereas it should be "compatible value".

  > Change-Id: I29c8f9a954928d22e436465f4ebc30bd08640138
  > BUG: 1275502
  > Signed-off-by: hari gowtham <hgowtham@redhat.com>
  > Reviewed-on: http://review.gluster.org/12432
  > Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>

  Change-Id: I29c8f9a954928d22e436465f4ebc30bd08640138
  BUG: 1275910
  Reviewed-on: http://review.gluster.org/12434
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* snapshot: Fix snapshot clone postvalidate
  Author: Avra Sengupta | Date: 2015-10-27 | Files: 4 | Lines: -52/+62

  Backport of http://review.gluster.org/#/c/12364/

  In glusterd_snapshot_clone_postvalidate(), we were deleting the snap
  object and snap volume by looking up the snapname. Hence it was deleting
  the original snapshot from which the clone was being created. Instead it
  should fetch the clonename, the respective clone volume, and its
  corresponding snap object, and delete them.

  Also, glusterd_snap_remove() needs to differentiate a clone snap object
  from a snapshot snap object, as for a clone snap object we don't have
  any persisted data in /var/run/gluster/snaps/ and hence it shouldn't try
  to delete anything there.

  Change-Id: I02bb22a3898d5720e318a02d6cc32d25f75d317d
  BUG: 1271627
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/12364
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  (cherry picked from commit 59401a32de51cdb6c1a5a1208723e89e1a4abd30)
  Reviewed-on: http://review.gluster.org/12406
* cluster/tier: add pause tier for snapshots
  Author: Dan Lambright | Date: 2015-10-21 | Files: 3 | Lines: -3/+208

  This is a backport of 12304.

  Snapshots of tiered volumes cannot handle files undergoing migration.
  We implement a helper mechanism to "pause" migration: any files
  undergoing migration are aborted, clean-up is done to remove sticky bits
  and data at the destination, and migration is restarted after the
  snapshot completes.

  For testing, an internal switch is added. It is not exposed externally.

    gluster volume set vol1 tier-pause [true|false]

  > Change-Id: Ia85bbf89ac142e9b7e73fcbef98bb9da86097799
  > BUG: 1267950
  > Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  > Reviewed-on: http://review.gluster.org/12304
  > Reviewed-by: N Balachandran <nbalacha@redhat.com>
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>

  Signed-off-by: Dan Lambright <dlambrig@redhat.com>

  Conflicts:
    xlators/mgmt/glusterd/src/glusterd-messages.h

  Change-Id: I5f039d8d38a4c915bd873969f336b96755a0b8f1
  BUG: 1274101
  Reviewed-on: http://review.gluster.org/12411
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* cluster/tier: Changed tier xattr-name value
  Author: N Balachandran | Date: 2015-10-20 | Files: 2 | Lines: -13/+15

  Each tier layer (for future stacking implementations) must have a unique
  xattr name. We are currently using the name of the tier subvolume
  excluding the volume name.

  Change-Id: Id4adea61dc1c8473fb1d4d7364d1940278c6e129
  BUG: 1273246

  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: http://review.gluster.org/12350
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>
  (cherry picked from commit 0243085e40d842c59f4d7d59c61701ba416878ec)

  Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
  Reviewed-on: http://review.gluster.org/12398
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* tier/shd: make shd commands compatible with tiering
  Author: Mohammed Rafi KC | Date: 2015-10-13 | Files: 6 | Lines: -126/+331

  Tiering volfiles may contain afr and disperse together, or multiple
  times, based on the configuration, and the information for those
  configurations is stored in tier_info. So most of the volgen code
  generation needs to be changed to be compatible with it.

  Backport of:
  > Change-Id: I563d1ca6f281f59090ebd470b7fda1cc4b1b7e1d
  > BUG: 1261276
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12135
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  (cherry picked from commit 0ef62933649392051e73fe01c028e41baddec489)

  BUG: 1261744
  Change-Id: Iff1b27ae8ce61f1f38fbbd6c92894b3d3516e4d4
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12344
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Joseph Fernandes
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* tier/shd: inline warning when compiled with gcc v.5
  Author: Mohammed Rafi KC | Date: 2015-10-13 | Files: 1 | Lines: -1/+1

  Backport of:
  > Change-Id: I487a26263d6e940eed364a831e99f9b8390bc96a
  > BUG: 1226881
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12342
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Anoop C S <anoopcs@redhat.com>
  > Tested-by: Anoop C S <anoopcs@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>
  (cherry picked from commit fc8df80f157c148cf60500be14c1f6a9aeed8d7b)

  Change-Id: I03821a626ab08d20730ce3ea3f374178c899d369
  BUG: 1271249
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12352
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Joseph Fernandes
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* server/protocol: option for dynamic authorization of client permissions
  Author: Prasanna Kumar Kalever | Date: 2015-10-13 | Files: 1 | Lines: -0/+4

  Problem:
  Assuming a gluster volume is already mounted (for gfapi: say the client
  transport connection has already been established), if somebody now
  changes the volume permissions, say *.allow | *.reject, for a client,
  gluster should allow/terminate the client connection based on the fresh
  set of volume options immediately. In the existing scenario we neither
  have any option to set this behaviour, nor is any action taken until the
  volume is remounted manually.

  Solution:
  Introduce a 'dynamic-auth' option (default: on). If 'dynamic-auth' is
  'on', gluster will perform dynamic authentication to allow/terminate the
  client transport connection immediately in response to *.allow |
  *.reject volume set options. Thus, if volume permissions have changed
  for a particular client (say the client is added to the auth.reject
  list), its transport connection to the gluster volume will be terminated
  immediately.

  Backport of:
  > Change-Id: I6243a6db41bf1e0babbf050a8e4f8620732e00d8
  > BUG: 1245380
  > Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  > Reviewed-on: http://review.gluster.org/12229
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  > (cherry picked from commit 84e90b756566bc211535a8627ed16d4231110ade)

  Change-Id: If7e5c9be912412ea388391ef406ee2c8bedb26b8
  BUG: 1271065
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/12343
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
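  A rough sketch of the behaviour described above, assuming the option is
  exposed as server.dynamic-auth (the namespace prefix, volume name and
  client address are assumptions, not taken from the commit):

    # with dynamic auth on (the default), this drops any existing connection
    # from 192.168.1.50 as soon as the set completes
    gluster volume set vol1 auth.reject 192.168.1.50
    # opt out of the new behaviour: rejected clients stay connected until
    # they remount
    gluster volume set vol1 server.dynamic-auth off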
* tier/shd: create shd volfile for tiering
  Author: Mohammed Rafi KC | Date: 2015-10-12 | Files: 3 | Lines: -20/+262

  Currently the shd graph is started only if the volume is a replicate or
  disperse volume. But in case of tiering, the volume type will be tier,
  so we need to start shd if either the cold or the hot tier is compatible
  with an shd volume.

  Backport of:
  > Change-Id: Ic689746ac7d2fc6a9eccdabd8518dc9139829de2
  > BUG: 1261276
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/11962
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  (cherry picked from commit fdff192b918ca9cd237f3f784c627102377e3661)

  Change-Id: I236a31e7dcefb3dad64881e0b007144bd826b840
  BUG: 1261744
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12333
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* cluster/tier: add watermarks and policy driver
  Author: Dan Lambright | Date: 2015-10-10 | Files: 1 | Lines: -13/+122

  Backport of fix 12039.

  This fix introduces infrastructure to support different policies for
  promotion and demotion.

  Currently the tier feature automatically promotes and demotes files
  periodically based on access. This is good for testing but too stringent
  for most real workloads. It makes it difficult to fully utilize a hot
  tier: data will be demoted before it is touched, and it is unlikely a
  100GB hot SSD will have all its data touched in a window of time.

  A new parameter "mode" allows the user to pick promotion/demotion
  policies. The "test mode" will be used for *.t and other general
  testing; this is the current mechanism. The "cache mode" introduces
  watermarks. The watermarks represent levels of data residing on the hot
  tier.

  "cache mode" policy:

  The % the hot tier is full is called P. Do not promote or demote more
  than D MB or F files. A random number [0-100] is called R.

  Rules for migration:
    if (P < watermark_low) don't demote, always promote.
    if (P >= watermark_low) && (P < watermark_hi) demote if R < P; promote if R > P.
    if (P > watermark_hi) always demote, don't promote.

    gluster volume set {vol} cluster.watermark-hi %
    gluster volume set {vol} cluster.watermark-low %
    gluster volume set {vol} cluster.tier-max-mb {D}
    gluster volume set {vol} cluster.tier-max-files {F}
    gluster volume set {vol} cluster.tier-mode {test|cache}

  > Change-Id: I157f19667ec95aa1d53406041c1e3b073be127c2
  > BUG: 1257911
  > Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  > Reviewed-on: http://review.gluster.org/12039
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

  Signed-off-by: Dan Lambright <dlambrig@redhat.com>

  Conflicts:
    xlators/cluster/dht/src/dht-rebalance.c
    xlators/cluster/dht/src/tier.c

  Change-Id: Ibfe6b89563ceab98708325cf5d5ab0997c64816c
  BUG: 1270527
  Reviewed-on: http://review.gluster.org/12330
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* tiering/glusterd: keep afr/ec xlators name constant
  Author: Mohammed Rafi KC | Date: 2015-10-09 | Files: 3 | Lines: -30/+112

  afr uses the translator name for locking purposes, so it is mandatory to
  keep the afr/ec xlator names constant across graph changes. Currently,
  when a tier is attached, the afr names are appended with either hot or
  cold, which breaks the above-mentioned constraint.

  Backport of:
  > Change-Id: I3699dcdaa8190bab3ba81cbc01e8fa126d37ba0d
  > BUG: 1261276
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12134
  > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>
  (cherry picked from commit 4ad9bc5faca60528345f1e9c95c22bd8402162c0)

  Change-Id: I7bf5f22f112f1df1c05a0a8503d56029509d6292
  BUG: 1261744
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12323
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* tier/glusterd: volume status failed after detach start
  Author: Mohammed Rafi KC | Date: 2015-10-08 | Files: 1 | Lines: -3/+4

  After triggering detach start on a tiered volume, volume status fails.
  This is because the brick count was being set wrongly in the rebal
  dictionary.

  Backport of:
  > Change-Id: I6a472bf2653a07522416699420161f2fb1746aef
  > BUG: 1261757
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12146
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>
  (cherry picked from commit 51632e1eec3ff88d19867dc8d266068dd7db432a)

  Change-Id: I6b75c243873700dcb498303f1f308dea177feb4f
  BUG: 1261758
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12244
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* glusterd/add-brick: change add-brick implementation to v3 framework
  Author: Mohammed Rafi KC | Date: 2015-10-08 | Files: 2 | Lines: -17/+134

  The add-brick commit happens first on the local node and then on the
  peers. As part of the commit on the local host, glusterd sends the
  updated volfiles to the clients connected to the local host even before
  the commit on the peers happens. If any of the newly added bricks is
  hosted by a peer, that brick won't be started when a client (connected
  to the local host) tries to send fops.

  By changing to the v3 framework we can send post-validate ops after the
  commit operation, which helps to send the volfile fetch request only
  after completing the commits on all nodes.

  Backport of:
  > Change-Id: Ib7312e01143326128c010c11fc2ed206f37409ad
  > BUG: 1263549
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12237
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  (cherry picked from commit ee944e86866a6556fd4dd98bcd6f1f58c323721f)

  Change-Id: Idfa993f2c94a52c2a30be525eeac66af1c320059
  BUG: 1259081
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12308
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* Tier/glusterd: Do not allow attach-tier if remove-brick is not committed
  Author: Mohammed Rafi KC | Date: 2015-10-08 | Files: 1 | Lines: -0/+24

  When attaching a tier, if there is a pending remove-brick task, then
  attach-tier should not be allowed. Since we do not support
  add/remove-brick on a tiered volume, we won't be able to commit the
  pending remove-brick after attaching the tier.

  Backport of:
  > Change-Id: Ib434e2e6bc75f0908762f087ad1ca711e6b62818
  > BUG: 1261819
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12148
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>
  (cherry picked from commit bc11be7864eb7f22ad6b529e95bac5a2833f5a01)

  Change-Id: I96a6085d215861663eb83a55173282a015976662
  BUG: 1258833
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12245
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* Tier/cli: Change detach-tier commit force to detach-tier force
  Author: Mohammed Rafi KC | Date: 2015-10-07 | Files: 1 | Lines: -1/+1

  The current detach-tier CLI command supports "commit force". This
  deprecates it in favour of "force", so the new syntax is:

    volume detach-tier <VOLNAME> <start|stop|status|commit|force>

  Backport of:
  > Change-Id: Ie86dfd72341078c0a1be94767f523730911312ef
  > BUG: 1261862
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12151
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>
  (cherry picked from commit 68e8d617eb62a7ec40a1db5f3f60730767a168b6)

  Change-Id: I5b72dd0046fcf2ead74f7d1275f35036cce3195b
  BUG: 1258242
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12246
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd, dht: volume set for use-readdirp in dht
  Author: Pranith Kumar K | Date: 2015-10-04 | Files: 1 | Lines: -0/+6

  > Change-Id: Icab246b1d02808864d878d949fa56f9f889b538a
  > BUG: 1265677
  > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  > Reviewed-on: http://review.gluster.org/12221
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  > Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  > Reviewed-by: Kaushal M <kaushal@redhat.com>
  > (cherry picked from commit 059db0254f5670a34f1a928155c0c7d1cd03b53a)

  Change-Id: Ifc46ed08fc10b32f5e814aa09c155e11e8c93138
  BUG: 1267822
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12269
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* glusterfsd: newly added brick receives fops only after it is started
  Author: Sakshi | Date: 2015-09-24 | Files: 1 | Lines: -1/+4

  When new bricks are added in the middle of an ongoing fop like 'rm', the
  volfile changes without waiting for the newly added bricks to get a
  port. Fops are sent to all bricks and may fail on some with ENOTCONN, as
  these bricks may not have a port yet. This patch ensures that the
  volfile change happens only after all the bricks have a port.

  Backport of http://review.gluster.org/#/c/11342/
  > Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
  > BUG: 1233151
  > Signed-off-by: Sakshi <sabansal@redhat.com>
  > Reviewed-on: http://review.gluster.org/11342
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  > Tested-by: Atin Mukherjee <amukherj@redhat.com>

  Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
  BUG: 1265890
  Signed-off-by: Sakshi <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/12223
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* Tiering: change in status for remove brick and rebalance
  Author: hari gowtham | Date: 2015-09-22 | Files: 4 | Lines: -12/+28

  Backport of: http://review.gluster.org/#/c/12149/

  When we trigger a detach-tier start on a tiered volume, the task in
  volume status shows up as "remove brick" instead of "Detach tier".

    Status of volume: vol1
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Hot Bricks:
    Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
    Cold Bricks:
    Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
    Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
    NFS Server on localhost                     N/A       N/A        N       N/A

    Task Status of Volume vol1
    ------------------------------------------------------------------------------
    Task   : Tier migrate
    ID     : e11d5a3d-b1ae-4c3f-8f95-b28993c60939
    Status : in progress

    Status of volume: vol1
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Hot Bricks:
    Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
    Cold Bricks:
    Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
    Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
    NFS Server on localhost                     N/A       N/A        N       N/A

    Task Status of Volume vol1
    ------------------------------------------------------------------------------
    Task   : Detach tier
    ID     : 76d700b1-5bbd-43ed-95fd-1640b2b4af31
    Status : completed

  > Change-Id: I4bd3b340d4e700e8afed00e1478b8a8b54dfe2e2
  > BUG: 1261837
  > Signed-off-by: hari gowtham <hgowtham@redhat.com>
  > Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  > Reviewed-on: http://review.gluster.org/12149
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>

  Change-Id: Ie0e994677a9277486a546e99da334bd4660b678b
  BUG: 1258340
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12203
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* Tiering: Changing error message to detach-tier instead of "remove-brick"
  Author: hari gowtham | Date: 2015-09-18 | Files: 1 | Lines: -4/+13

  Backport of: http://review.gluster.org/#/c/12177/

  > Change-Id: Id93424a08f601a8d7540d96a47ed2b0497d4a631
  > BUG: 1263177
  > Signed-off-by: hari gowtham <hgowtham@redhat.com>
  > Reviewed-on: http://review.gluster.org/12177
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: Dan Lambright <dlambrig@redhat.com>

  Change-Id: Iec787eec7ece0a88675d25eea9309a5bfc14cd49
  BUG: 1258244
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12190
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: volume status backward compatibility
  Author: Hari Gowtham | Date: 2015-09-15 | Files: 1 | Lines: -0/+14

  Backport of: http://review.gluster.org/#/c/11986/

  The volume status output of 3.7 does not display all the bricks in a
  mixed cluster (3.6 and 3.7): it displays the bricks on 3.7 nodes and
  misses the bricks on 3.6 nodes, due to the key difference for ports.

    Status of volume: vol1
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick 10.70.42.171:/data/gluster/tier/cbr2  49153     0          Y       13494
    Brick 10.70.42.203:/data/gluster/tier/cbr2  49154     0          Y       27686
    NFS Server on localhost                     N/A       N/A        N       N/A
    NFS Server on dhcp42-203.lab.eng.blr.redhat
    .com                                        N/A       N/A        N       N/A

    Task Status of Volume vol1
    ------------------------------------------------------------------------------
    There are no active volume tasks

  > Change-Id: Icf0dc01a3d21d0889c43e2868c646a0c7e07ff25
  > BUG: 1255694
  > Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  > Reviewed-on: http://review.gluster.org/11986
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  > Signed-off-by: Hari Gowtham <hgowtham@redhat.com>

  Change-Id: I8732991a36d67b97a47025618a6aabe9cc8315e2
  BUG: 1260858
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12120
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tier/glusterd: Disable subvol match check during detach tier
  Author: Mohammed Rafi KC | Date: 2015-09-14 | Files: 1 | Lines: -5/+13

  For tiering, the user does not get to choose which bricks to detach, so
  we don't need to check whether the subvols match for the given bricks or
  not.

  Backport of:
  > Change-Id: I7e777ccc1aa261f652f9b158718fcd55185c7794
  > BUG: 1261741
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  > Reviewed-on: http://review.gluster.org/12145
  > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  (cherry picked from commit 2e041639d8e49e2b768dd43c6f702106250e4da9)

  Change-Id: I0819caceb3aad88e242cf62faab471fafd62c63f
  BUG: 1261742
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12173
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: Do not allow "detach-tier commit" unnecessarily
  Author: Gaurav Kumar Garg | Date: 2015-09-09 | Files: 1 | Lines: -9/+21

  Backport of: http://review.gluster.org/#/c/12107/

  Currently, when a user executes the gluster v detach-tier commit command
  without starting detach-tier, or without giving the force option,
  gluster reports the operation as successful. detach-tier commit should
  not be allowed in that case without the "force" option.

  >> Reviewed-on: http://review.gluster.org/12107
  >> Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  >> Tested-by: Gluster Build System <jenkins@build.gluster.com>
  >> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  >> Reviewed-by: Dan Lambright <dlambrig@redhat.com>

  Change-Id: Id161c288f6f3e0f6b298878a5c35a49fcbd9c6e3
  BUG: 1259694
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12108
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
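  The intended sequence, sketched with a placeholder volume name (the
  subcommands come from the detach-tier syntax quoted earlier in this
  log):

    gluster volume detach-tier tiervol start    # begin migrating data off the hot tier
    gluster volume detach-tier tiervol status   # wait for "completed"
    gluster volume detach-tier tiervol commit   # valid only after a completed start
    # skipping the start/status phase requires an explicit force:
    gluster volume detach-tier tiervol commit force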
* glusterd: Return better error messages for probe and detach failures
  Author: Brad Hubbard | Date: 2015-09-08 | Files: 1 | Lines: -4/+4

  We handle some specific errors and return good error messages for those,
  but for the default case, where the error code is not recognised, we
  just report "unknown errno". This patch attempts to at least return the
  output of strerror to provide more informative errors.

  Cherry picked from commit 4b5aec8da9be69da077e1fcc7e852d224517ecc0:
  > BUG: 1257149
  > Change-Id: I0027e74e41adac4ab0c0a929c6fff56878bf39c8
  > Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
  > Reviewed-on: http://review.gluster.org/12021
  > Reviewed-by: Niels de Vos <ndevos@redhat.com>
  > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

  BUG: 1257394
  Change-Id: I11e6c006c6b93a5c1b915f8c509f24123497301d
  Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
  Reviewed-on: http://review.gluster.org/12111
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/tier: account for reordered layouts
  Author: Dan Lambright | Date: 2015-09-02 | Files: 1 | Lines: -1/+5

  This is a backport of 11092.

  > For a tiered volume the cold subvolume is always at a fixed position
  > in the graph. DHT's layout array, on the other hand, may have the cold
  > subvolume in either the first or second index, therefore code cannot
  > make any assumptions. The fix searches the layout for the correct
  > position dynamically rather than statically.
  >
  > The bug manifested itself in NFS, in which a newly attached subvolume
  > had not received an existing directory. This case is a "stale entry"
  > and marked as such in the layout for that directory. The code did not
  > see this, because it looked at the wrong index in the layout array.
  >
  > The fix also adds the check for decommissioned bricks, and fixes a
  > problem in detach tier related to starting the rebalance process: we
  > never received the right defrag command and it did not get directed to
  > the tier translator.
  >
  > Change-Id: I77cdf9fbb0a777640c98003188565a79be9d0b56
  > BUG: 1214289
  > Signed-off-by: Dan Lambright <dlambrig@redhat.com>

  Change-Id: Idb2eec9ba25812f41de7f960a0314c92341d6b5d
  BUG: 1259081
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/12086
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
* gluster/cli: snapshot delete all does not work with xml
  Author: Rajesh Joseph | Date: 2015-08-31 | Files: 1 | Lines: -5/+6

  Backport of http://review.gluster.org/#/c/12027/

  Problem: snapshot delete all command fails with --xml option
  Fix: Provided xml support for delete all command

  Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
  BUG: 1258113
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/12027
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  (cherry picked from commit fd47635a4ffab621a2357c99cd1edd0482940bd5)
  Reviewed-on: http://review.gluster.org/12042
* snapshot: cleanup snaps during unprobe
  Author: Mohammed Rafi KC | Date: 2015-08-31 | Files: 4 | Lines: -22/+104

  Backport of http://review.gluster.org/#/c/9930/

  When doing an unprobe, a volume that does not contain any brick of the
  particular node will be deleted. So the snaps associated with that
  volume should also be deleted.

  Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
  BUG: 1255384
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/11970
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd: Don't allow remove-brick start/commit if glusterd is down on the host of the brick
  Author: Atin Mukherjee | Date: 2015-08-28 | Files: 2 | Lines: -31/+89

  Backport of http://review.gluster.org/#/c/11726/

  The remove-brick stage blindly starts the remove-brick operation even if
  the glusterd instance of the node hosting the brick is down.
  Operationally this is incorrect and could result in an inconsistent
  rebalance status across all the nodes, as the originator of this command
  will always have the rebalance status 'DEFRAG_NOT_STARTED'; however,
  when the glusterd instance on the other nodes comes up, it will trigger
  rebalance and set the status to completed once the rebalance is
  finished.

  This patch fixes two things:
  1. Adds a validation in remove-brick to check whether all the peers
     hosting the bricks to be removed are up.
  2. Doesn't copy volinfo->rebal.dict from the stale volinfo during
     restore, as this might end up in an inconsistent node_state.info
     file, resulting in volume status command failure.

  Change-Id: Ia4a76865c05037d49eec5e3bbfaf68c1567f1f81
  BUG: 1256265
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11726
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/11996
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: stop all the daemon services on peer detach
  Author: Gaurav Kumar Garg | Date: 2015-08-26 | Files: 2 | Lines: -15/+41

  Backport of: http://review.gluster.org/#/c/11509/

  Currently glusterd does not stop all the daemon services on peer detach.
  With this fix it will do the peer detach cleanup properly and stop all
  the daemons that were running on the node before the peer detach.

  Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
  BUG: 1238706
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/11971
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>