path: root/tests/volume.rc
* tests/quota: fix failing test on auxiliary mount point (Manikandan Selvaganesh, 2016-01-27; 1 file changed, -1/+1)
  In the test file tests/bugs/quota/bug-1049323.t, the test EXPECT "0" get_aux fails on Fedora. In the get_aux function we grep for "/var/run/gluster/<volname>" to check whether the auxiliary mount point has been created; we return 0 on success and 1 otherwise. On Fedora the auxiliary mount point is created at "/run/gluster/<volname>", so the check fails there. The patch fixes this by grepping for just "/run/gluster/<volname>". Backport of http://review.gluster.org/#/c/13228/
  > Change-Id: Icb59395df4a98109eaa8199cbdbdedcd1cbef27a > BUG: 1297740 > Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com> > Reviewed-on: http://review.gluster.org/13228 > Reviewed-by: Niels de Vos <ndevos@redhat.com> > Reviewed-by: Raghavendra Talur <rtalur@redhat.com> > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Change-Id: Icb59395df4a98109eaa8199cbdbdedcd1cbef27a BUG: 1300600 Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com> Reviewed-on: http://review.gluster.org/13273 Smoke: Gluster Build System <jenkins@build.gluster.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
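  A minimal sketch of the idea (not the exact upstream helper; $V0 is the test framework's conventional volume-name variable): matching on the distro-independent suffix covers both /var/run/gluster and /run/gluster.
      # Sketch only: print "0" if an auxiliary quota mount for the volume exists.
      function get_aux ()
      {
          if mount | grep -q "/run/gluster/${V0}"; then
              echo "0"
          else
              echo "1"
          fi
      }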
* features/bitrot: add check for corrupted object in f{stat} (Venky Shankar, 2016-01-26; 1 file changed, -0/+8)
  Backport of http://review.gluster.org/13120
  The check for corrupted objects is done by the bitrot stub component for data operations, and such fops are denied processing by returning EIO. These checks were not done for operations such as get/set extended attribute, stat and the like; in other words, the stub only blocked pure data operations. However, it is necessary to have these checks for certain other fops, most importantly stat (and fstat). This is because clients could otherwise get stale stat information (such as size and {a,c,m}time), resulting in incorrect operation of applications that rely on these fields. Note that replication takes care of fetching good (and correct) data, but the staleness of stat information could lead to data inconsistencies (e.g., rebalance, tier).
  Change-Id: I5a22780373b182a13f8d2c4ca6b7d9aa0ffbfca3 BUG: 1297213 Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/13276 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* glusterd/bitrot: Integration of bad files from bitd with scrub status command (Gaurav Kumar Garg, 2015-11-23; 1 file changed, -0/+7)
  This patch is a backport of: http://review.gluster.org/#/c/12720/
  Currently the scrub status command does not display the list of all the bad files. All the bad files are available in the bitd daemon. With this patch the scrub status command displays the list of all the bad files.
  >> Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748 >> BUG: 1207627 >> Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748 BUG: 1283881 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/12725 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cluster/tier: add watermarks and policy driver (Dan Lambright, 2015-10-10; 1 file changed, -0/+12)
  Backport of fix 12039.
  This fix introduces infrastructure to support different policies for promotion and demotion. Currently the tier feature automatically promotes and demotes files periodically based on access. This is good for testing but too stringent for most real workloads: it makes it difficult to fully utilize a hot tier, since data will be demoted before it is touched; it is unlikely a 100 GB hot SSD will have all its data touched in a given window of time.
  A new parameter "mode" allows the user to pick promotion/demotion policies. The "test" mode will be used for *.t and other general testing; this is the current mechanism. The "cache" mode introduces watermarks, which represent levels of data residing on the hot tier.
  "cache" mode policy: let P be the percentage the hot tier is full, and R a random number in [0-100]. Do not promote or demote more than D MB or F files. Rules for migration:
  - if (P < watermark_low): don't demote, always promote.
  - if (P >= watermark_low) && (P < watermark_hi): demote if R < P; promote if R > P.
  - if (P > watermark_hi): always demote, don't promote.
  Tunables:
  gluster volume set {vol} cluster.watermark-hi %
  gluster volume set {vol} cluster.watermark-low %
  gluster volume set {vol} cluster.tier-max-mb {D}
  gluster volume set {vol} cluster.tier-max-files {F}
  gluster volume set {vol} cluster.tier-mode {test|cache}
  > Change-Id: I157f19667ec95aa1d53406041c1e3b073be127c2 > BUG: 1257911 > Signed-off-by: Dan Lambright <dlambrig@redhat.com> > Reviewed-on: http://review.gluster.org/12039 > Tested-by: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Conflicts: xlators/cluster/dht/src/dht-rebalance.c xlators/cluster/dht/src/tier.c
  Change-Id: Ibfe6b89563ceab98708325cf5d5ab0997c64816c BUG: 1270527 Reviewed-on: http://review.gluster.org/12330 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Tested-by: Dan Lambright <dlambrig@redhat.com>
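  A rough shell illustration of the cache-mode rule above (a sketch only; the real decision is made inside the tier translator in C, and the D MB / F files caps are not modeled here; WATERMARK_LOW and WATERMARK_HI are made-up variables for the example):
      # P: percentage of the hot tier in use; R: random value in [0,100].
      decide_migration () {
          local P=$1
          local R=$((RANDOM % 101))
          if [ "$P" -lt "$WATERMARK_LOW" ]; then
              echo promote
          elif [ "$P" -gt "$WATERMARK_HI" ]; then
              echo demote
          elif [ "$R" -lt "$P" ]; then
              echo demote
          else
              echo promote
          fi
      }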
* ec: trusted.ec.version xattr of all root directories of all bricks should be the same (Ashish Pandey, 2015-08-29; 1 file changed, -3/+17)
  Problem: After replacing a brick using the "replace-brick" command and running "heal full", the version of the root directory of the newly added brick does not get healed. Heal starts running on the dentries of the root but does not run on the root directory itself.
  Solution: Run heal on the root directory as well.
  > Change-Id: Ifd42a3fb341b049c895817e892e5b484a5aa6f80 > BUG: 1243382 > Signed-off-by: Ashish Pandey <aspandey@redhat.com> > Reviewed-on: http://review.gluster.org/11676 > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Change-Id: Ifd42a3fb341b049c895817e892e5b484a5aa6f80 BUG: 1243384 Signed-off-by: Ashish Pandey <aspandey@redhat.com> Reviewed-on: http://review.gluster.org/11755 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: stop all the daemon services on peer detach (Gaurav Kumar Garg, 2015-08-26; 1 file changed, -8/+8)
  Backport of: http://review.gluster.org/#/c/11509/
  Currently glusterd does not stop all the daemon services on peer detach. With this fix it does the peer-detach cleanup properly and stops all the daemons that were running on the node before the peer detach.
  Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775 BUG: 1238706 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/11971 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Stop/restart/notify daemons (svcs) during reset/set on a volume (anand, 2015-08-18; 1 file changed, -0/+9)
  Problem: Reset/set commands were not working properly. The reset command returned success but did not send a notification to svcs when the corresponding graph was modified.
  Fix: Whenever a reset/set command is issued, generate the temp graph, compare it with the original graph and take the following actions:
  1) If both graphs are identical, do nothing with svcs.
  2) If the graph topology has changed, restart/stop the service by calling the svc manager.
  3) If only options have changed, send a notify signal by calling glusterd_fetchspec_notify.
  Back port of:
  >Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7 >BUG: 1209329 >Signed-off-by: anand <anekkunt@redhat.com> >Reviewed-on: http://review.gluster.org/10850 >Tested-by: NetBSD Build System <jenkins@build.gluster.org> >Tested-by: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Atin Mukherjee <amukherj@redhat.com> >(cherry picked from commit 7255febab2c38cc89b71f2519a20d10f53586000)
  Change-Id: I42aa757ecc6b5b307b5927d11f12d08f57ac0ae2 BUG: 1253165 Reviewed-on: http://review.gluster.org/11905 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Pass NULL in glusterd_svc_manager in glusterd_restart_bricks (Gaurav Kumar Garg, 2015-07-23; 1 file changed, -0/+4)
  On restarting glusterd, the quota daemon is not started when more than one volume is configured and quota is enabled only on the 2nd volume. This is because while restarting, glusterd restarts all the bricks, and during brick restart it starts the respective daemons by passing the volinfo of the first volume. Passing a volinfo to glusterd_svc_manager implies that the daemon managers take action based on that volume's configuration, which is incorrect for per-node daemons. The fix is to pass a NULL volinfo while restarting bricks.
  BUG: 1242882 Change-Id: Ie53fc452dc79811068a9397abca13c65de4a8359 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/11660 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cluster/ec: Make background healing optional behavior (Pranith Kumar K, 2015-07-21; 1 file changed, -1/+1)
  Provide options to control the number of active background heals and the queue length.
  >Change-Id: Idc2419219d881f47e7d2e9bbc1dcdd999b372033 >BUG: 1237381 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/11473 >Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> >Tested-by: Gluster Build System <jenkins@build.gluster.com>
  BUG: 1238476 Change-Id: I22ba902d9911195656db9e458c01b54cf0afcd7a Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/11680 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
* quota: fix quota test case (vmallika, 2015-07-16; 1 file changed, -4/+4)
  This is a backport of http://review.gluster.org/#/c/11673/
  The command below is the wrong way of executing multiple commands joined with | (pipe):
  local cmd="$CLI volume quota $V0 list $QUOTA_PATH | grep $QUOTA_PATH | awk '{print \$$FIELD}'"
  $cmd
  This patch fixes the issue. It also fixes the test case inode-quota.t, which was checking quota values in the wrong fields.
  > Change-Id: If28732e6a76ea4bf75560f6496c8f56670915cf9 > BUG: 1229297 > Signed-off-by: vmallika <vmallika@redhat.com>
  Change-Id: I301a62566a97bbb4cc9657e1139772de1f09049a BUG: 1242329 Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/11674 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
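  The underlying bash pitfall, sketched with a hypothetical command string (the quoting is what matters, not the exact CLI arguments; eval is one common fix, though the patch may restructure the helper differently): a pipeline stored in a plain variable is not re-parsed when expanded, so the '|' is passed as a literal argument.
      cmd="$CLI volume quota $V0 list /dir | awk '{print \$2}'"
      $cmd          # wrong: '|', the quotes and the awk script become literal arguments
      eval "$cmd"   # re-parses the string, so the pipeline actually runs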
* features/quota: Fix spurious failure (vmallika, 2015-07-13; 1 file changed, -5/+29)
  This is a backport of http://review.gluster.org/#/c/11125/
  Problem: In this test case a file is created which exceeds the quota limit. Once the limit is reached that file is deleted. At the same moment we are testing inode-quota. It can happen that, before the marker updates the information related to the deletion of the file, a new file-creation operation comes in and sees that the quota limit is still exceeded.
  Solution: Introduce a check to see whether the marker update completed successfully. Updated all the test cases which use a similar mechanism, and also moved the "usage" function to a common place, "volume.rc".
  > Change-Id: I36ddbc5ebbf1b74c9d326a0d1d5f3b32f20a906a > BUG: 1229297 > Signed-off-by: Sachin Pandit <spandit@redhat.com> > Signed-off-by: vmallika <vmallika@redhat.com> > Reviewed-on: http://review.gluster.org/11125 > Tested-by: NetBSD Build System <jenkins@build.gluster.org> > Tested-by: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Change-Id: Iccc36de2b3a1e1a068d1a8d5e98d413c3afa1bc7 BUG: 1242329 Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/11642 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
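  A sketch of the shape of such a check (helper, timeout and value names are illustrative, not necessarily the exact ones added to volume.rc): after deleting the over-quota file, wait until the reported usage drops before creating the next file.
      TEST rm -f $M0/bigfile
      # Wait for the marker xlator to account the deletion before continuing.
      EXPECT_WITHIN $MARKER_UPDATE_TIMEOUT "0Bytes" usage "/"
      TEST dd if=/dev/zero of=$M0/newfile bs=1M count=5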
* quota/marker: fix spurious failure of afr-quota-xattr-mdata-heal.t (vmallika, 2015-07-12; 1 file changed, -0/+18)
  This is a backport of http://review.gluster.org/#/c/11583
  During the quota-update process, if inode info is present in the size xattr but missing in the contri xattrs, then in the function '_mq_get_metadata' we set contri-size to zero (on error -2, which means usage info is present but inode info is missing). With this we calculate a wrong delta and update it. With this patch we ignore errors if the inode info in the xattrs is missing.
  > Change-Id: I7940a0e299b8bb425b5b43746b1f13f775c7fb92 > BUG: 1241153 > Signed-off-by: vmallika <vmallika@redhat.com>
  Change-Id: Ie85fa84b5362ae179cc43402bd6a3a6d96a04b81 BUG: 1241831 Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/11614 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cluster/afr: set pending xattrs for replaced brick (Anuradha, 2015-06-27; 1 file changed, -1/+1)
  Backport of: http://review.gluster.org/10448/ & http://review.gluster.org/11416
  This patch is part two of the change to prevent data loss in a replicate volume when doing a replace-brick commit force operation.
  Problem: After doing replace-brick commit force, there is a chance that self-heal happens from the replaced (sink) brick rather than the source brick, leading to data loss.
  Solution: Mark pending changelogs on the afr children for the replaced afr child so that heal is performed in the correct direction.
  Credits to Ravishankar N for patch 11416.
  Change-Id: Icb9807e49b4c1c4f1dcab115318d9a58ccf95675 BUG: 1232173 Reviewed-on: http://review.gluster.org/10448 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> Signed-off-by: Anuradha Talur <atalur@redhat.com> Reviewed-on: http://review.gluster.org/11254 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* tests/bitrot: Scrub state change tests (Venky Shankar, 2015-06-26; 1 file changed, -2/+2)
  Backport of http://review.gluster.org/11290
  Change-Id: Ibb4b503e7d723c86ac381ad3747b1198334bd6ad BUG: 1226666 Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/11398 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* posix: Do not update unmodified xattr in (f)xattrop (Xavier Hernandez, 2015-05-28; 1 file changed, -1/+5)
  Backport of http://review.gluster.org/10886
  If a (f)xattrop is issued with a value that only contains 0's, then we don't modify or create the extended attribute. This is useful to avoid ctime modifications when the only purpose of the xattrop was to get the current value.
  Change-Id: Ia62494e9009962e683c8276783f671da17a8b03a BUG: 1225320 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/10928 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* tests: Await graph switch before testing open fop count (Vijay Bellur, 2015-05-28; 1 file changed, -0/+6)
  In performance/open-behind.t, a test for the open fop reaching the brick is done by switching off open-behind and performing a read operation. If the read operation is performed before a graph switch, the read happens on the old graph and hence the open does not get accounted in the brick. To overcome this, an EXPECT_WITHIN of 10 seconds has now been added to ensure that a graph switch has happened; the read operation happens after the graph switch. Also cleaned up a "No volumes present" message from stderr while doing this.
  Change-Id: I05f7cbfd1ce51782f5ab778468a980d3732509a2 BUG: 1225077 Signed-off-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/10941 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* tests: arbiter.t fix (Ravishankar N, 2015-05-27; 1 file changed, -2/+2)
  Backport of http://review.gluster.org/10833
  Wait for AFR's children to be up in the glustershd process before attempting heal. Also, grep (version 2.21) detects statedump files as binary, causing tests to succeed incorrectly; hence adding the -a switch to force it to treat them as text files. Thanks to Vijay Bellur for identifying the issue (http://lists.gnu.org/archive/html/bug-grep/2015-05/msg00000.html) and the workaround.
  Change-Id: Ie3d9591ffaf44baa0cd8c2baa327aed24378e3df BUG: 1225077 Reviewed-on: http://review.gluster.org/10833 Tested-by: Niels de Vos <ndevos@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> (cherry picked from commit b51ee5f8d1f80d66effffc06c1e49099c04014a4) Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/10923 Tested-by: NetBSD Build System Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* tests: fix bitrot spurious failures (Atin Mukherjee, 2015-05-09; 1 file changed, -0/+9)
  Change-Id: I55bd62480b7ee38cf7b29aeba67b19b0c5bbe2fb BUG: 1220016 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/10702 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/10705 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
* tests: Check aux umount is unmounted for quota tests (Pranith Kumar K, 2015-05-05; 1 file changed, -0/+13)
  Change-Id: If57d08f3446755ea41f66ca258efcc8ea5a89063 BUG: 1218593 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/10480 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/10575
* features/changelog: Consider only changelog on/off as changelog breakage (Kotresh HR, 2015-05-05; 1 file changed, -0/+5)
  Earlier, both changelog on/off and brick restart were considered changelog breakage and treated as the changelog not being continuous; as a result, a new HTIME.TSTAMP file was created in both cases. Now only changelog enable/disable is considered a discontinuity: a new HTIME.TSTAMP file is not created on brick restart, and the changelog files are appended to the last HTIME.TSTAMP file.
  Treating the changelog as continuous in the above scenario is important, as the changelog history API would fail otherwise. It can successfully get changes between start and end timestamps only when the changelog is continuous (changelogs in a single HTIME.TSTAMP file are treated as continuous). Without this change the history API would fail and it would become necessary to fall back to other mechanisms, like xsync FSCrawl in the case of geo-rep, to detect changes in this time window; xsync FSCrawl would not be applicable to other consumers like glusterfind.
  Rationale:
  1. In a plain distributed volume, if a brick goes down, no I/O can happen on that brick, so the changelog is intact with the data on disk.
  2. In a distributed-replicate volume, if a brick goes down, self-heal traffic is captured in the changelog, so the I/O that happened while the brick was down is eventually captured in the changelog.
  BUG: 1217944 Change-Id: Ifa6d932818fe1a3a914e87ac84f1d2ded01c1288 Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/10222 Reviewed-on: http://review.gluster.org/10507 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Aravinda VK <avishwan@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Geo-rep: Adding regression tests for geo-rep (Vijaykumar Koppad, 2015-04-13; 1 file changed, -0/+32)
  This patch introduces an upstream regression suite for geo-replication. It also modifies cleanup (tests/include.rc) to remove everything but hook scripts.
  Prerequisites:
  * Passwordless SSH from root to root of the current host.
  * Export /build/install/sbin and /build/install/bin to the PATH variable for the root user.
  Change-Id: I433dd8bbb17edba9baaf516fe0dce3133ba39184 BUG: 1101111 Signed-off-by: Vijaykumar Koppad <vkoppad@redhat.com> Signed-off-by: Ajeet Jha <ajha@redhat.com> Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/7392 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Aravinda VK <avishwan@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* tests: fix volume_exists to be used from EXPECT_WITHIN (Jeff Darcy, 2015-04-01; 1 file changed, -7/+6)
  Fixes the spurious volume-snapshot-clone.t regression failures. In brief, the problem is that the script wasn't waiting for config commands to complete, and would *sometimes* query the status of a volume while that volume was still being deleted. It turns out that "!" doesn't work properly from EXPECT_WITHIN, so there was a choice between changing that or changing volume_exists; the latter seemed less risky. Because of code duplication, two instances of the function had to be changed, and the other caller (volume-snapshot.t) did too.
  Change-Id: I766d4dc7c5b11038ede8e45d9d1f29cd02a622a0 BUG: 1163543 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/10053 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
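  A sketch of the resulting pattern (simplified; variable names follow the test framework's conventions but are not guaranteed to match the patch exactly): the helper prints a value that EXPECT_WITHIN can match directly, instead of relying on "!" negation of an exit status.
      function volume_exists ()
      {
          if $CLI volume info "$1" > /dev/null 2>&1; then
              echo "Y"
          else
              echo "N"
          fi
      }
      # e.g. wait for the cloned volume to be fully deleted:
      # EXPECT_WITHIN $PROCESS_UP_TIMEOUT "N" volume_exists $V1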
* tests: fix online_brick_count (Jeff Darcy, 2015-03-30; 1 file changed, -1/+1)
  It turns out that "pidof" is unreliable on some platforms (e.g. Fedora 21) because it will show spurious entries for processes using the same inode under a different name. Use "pgrep" instead, because it's name-based and doesn't get confused by glusterd/glusterfs being links to glusterfsd. Also changed bug-913555.t, which had the same mistake in its own version of the same function; now it uses the common version.
  Change-Id: I5d70edd5655faa5470e0f378b8c16a6adacbd4b4 BUG: 1163543 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/9948 Reviewed-by: Niels de Vos <ndevos@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
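  A sketch of the pgrep-based count (simplified relative to the real helper): pgrep matches on process names, so glusterd/glusterfs being symlinks to the glusterfsd binary does not inflate the count the way "pidof glusterfsd" can.
      function online_brick_count ()
      {
          # Count only processes whose name is exactly glusterfsd.
          pgrep -x glusterfsd | wc -l
      }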
* libxlator: Change marker xattr handling interface (Pranith Kumar K, 2015-03-25; 1 file changed, -1/+5)
  - Changed the implementation of marker xattr handling to take just a function which populates the important data that differs from the default 'gauge' values and the subvolumes where the call needs to be wound.
  - Removed duplicate code found while reading the code and moved it to cluster_marker_unwind. Removed unused structure members.
  - Changed the dht/afr/stripe implementations to follow the new implementation.
  - Implemented marker xattr handling for ec.
  Change-Id: Ib0c3626fe31eb7c8aae841eabb694945bf23abd4 BUG: 1200372 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/9892 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com> Reviewed-by: Ravishankar N <ravishankar@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: enable inspection & resolution of files in split-brain (Anuradha, 2015-03-19; 1 file changed, -0/+6)
  Part 2/2 of the patches that enable users to analyze and resolve split-brain. This patch enables:
  1) Users to inspect files in data and metadata split-brain.
  2) Resolution of the split-brain.
  Both use a series of setfattr commands. Consider a volume "test" with 2 bricks.
  1) To inspect a file f1: setfattr -n replica.split-brain-choice -v test-client-0 f1. After the execution of this command, if no read_subvol is found, reads will be served from test-client-0 (corresponding to brick-0).
  2) To resolve split-brain: setfattr -n replica.split-brain-heal-finalize -v test-client-0 f1. Execution of this command resolves the data and metadata split-brain, with the subvol mentioned in the command (test-client-0 here) as the source and the rest as sinks.
  Change-Id: Ia20f3ee5abd3119e3d54fcc599f1e55ac65fd179 BUG: 1191396 Signed-off-by: Anuradha <atalur@redhat.com> Reviewed-on: http://review.gluster.org/9743 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Implementation of quorum-reads (Pranith Kumar K, 2015-03-05; 1 file changed, -0/+8)
  Provide a way of disabling reads when quorum is not met.
  Change-Id: Ic4f57c2b87a0b8514600759de3a7a47e217fe3b5 BUG: 1187885 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/9543 Reviewed-by: Ravishankar N <ravishankar@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
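  Assuming the volume option is exposed under the feature's name, cluster.quorum-reads (a guess consistent with the summary, not confirmed by this log), usage would look like:
      gluster volume set <VOLNAME> cluster.quorum-reads on    # deny reads when client quorum is not met
      gluster volume set <VOLNAME> cluster.quorum-reads off   # previous behaviour: reads still allowed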
* cli: volume status for tcp,rdma type volume displays only tcp port (Mohammed Rafi KC, 2015-02-18; 1 file changed, -3/+3)
  For tcp,rdma type volumes there are two ports, one for tcp and one for rdma, but the volume status command displays only the tcp port. With this change an extra column is added for the rdma port, and the existing port column becomes the tcp port. Eg:
  > gluster volume status patchy
  For a tcp,rdma type volume:
  Status of volume: patchy
  Gluster process      TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Brick brickname      49152     49153      Y       14158
  For an rdma type volume:
  Status of volume: patchy
  Gluster process      TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Brick brickname      0         49153      Y       14158
  For a tcp type volume:
  Status of volume: patchy
  Gluster process      TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Brick brickname      49152     0          Y       14158
  > gluster volume status patchy detail
  Status of volume: xcube2
  ------------------------------------------------------------------------------
  Brick            : Brick brickname
  TCP Port         : 49152
  RDMA Port        : 49153
  Online           : Y
  Pid              : 14158
  File System      : ext4
  Device           : /dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 31.1GB
  Total Disk Space : 47.9GB
  Inode Count      : 3203072
  Free Inodes      : 2926789
  > gluster volume status xcube --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr>(null)</opErrstr>
    <volStatus>
      <volumes>
        <volume>
          <volName>xcube</volName>
          <nodeCount>2</nodeCount>
          <node>
            <hostname>hostname</hostname>
            <path>/home/brick1</path>
            <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
            <status>1</status>
            <port>49152</port>
            <ports>
              <tcp>49152</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>5657</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>5665</pid>
          </node>
          <tasks/>
        </volume>
      </volumes>
    </volStatus>
  </cliOutput>
  Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51 BUG: 1164079 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9191 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* cluster/afr: When parent and entry read subvols are different, set entry->inode to NULL (Krutika Dhananjay, 2015-02-02; 1 file changed, -0/+7)
  That way a lookup would be forced on the entry, and its attributes will always be selected from its read subvol.
  Change-Id: Iaba25e2cd5f83e983fc8b1a1f48da3850808e6b8 BUG: 1179169 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/9477 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* afr: Change in heal info split-brain command (Anuradha, 2015-01-30; 1 file changed, -0/+5)
  Implementation of the heal info split-brain command with glfs-heal.
  Change-Id: I233eb790de6eb5468a4cbb12a1cef0f97db2a1d2 BUG: 1183019 Signed-off-by: Anuradha <atalur@redhat.com> Reviewed-on: http://review.gluster.org/9459 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* mgmt/glusterd: Implement volume heal enable/disable (Pranith Kumar K, 2015-01-20; 1 file changed, -5/+25)
  For volumes with replicate or disperse xlators, the self-heal daemon should do the healing. This patch provides enable/disable functionality for those xlators to be part of the self-heal daemon. Replicate already had this functionality with 'gluster volume set cluster.self-heal-daemon on/off', but this patch makes it uniform for both types of volumes. Internally it still does a 'volume set' based on the volume type.
  Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29 BUG: 1177601 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/9358 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
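  Usage, per the summary (the same command works for replicate and disperse volumes; internally it maps to the appropriate volume-set option for the volume type):
      gluster volume heal <VOLNAME> enable     # include this volume's xlators in the self-heal daemon
      gluster volume heal <VOLNAME> disable    # exclude them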
* tests: let force_umount work on multiple items (Jeff Darcy, 2015-01-18; 1 file changed, -1/+1)
  Some scripts (e.g. features/weighted-rebalance.t) try to unmount multiple mountpoints at once using UMOUNT_LOOP. This dutifully passes the $* list through to force_umount, which (prior to this fix) would only unmount $1 instead of the whole set. This would leave those devices mounted, which would not only be a resource leak in itself but would cause other cleanup actions to fail.
  Change-Id: I2e3379c85792765025540f10be7cb37b8a4c1bcf Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/9386 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
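  The shape of the fix, sketched (retry and platform handling in the real helper are omitted): iterate over every argument instead of acting only on $1.
      function force_umount ()
      {
          local m
          for m in "$@"; do
              umount -f "$m"
          done
      }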
* tests: Data self-heal test cases (Pranith Kumar K, 2014-12-02; 1 file changed, -0/+10)
  Change-Id: I74d08797b791ea6649d9aba585996e9ec680e3f8 BUG: 1128721 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/8538 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Ravishankar N <ravishankar@redhat.com>
* ec: Add state dump support (Xavier Hernandez, 2014-10-03; 1 file changed, -0/+24)
  Change-Id: I4504f3050674dde217e79af28cb4d2b5370fe2d5 BUG: 1148010 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/8891 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org> Tested-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* porting: Provide setfattr/getfattr implementation (Harshavardhana, 2014-09-05; 1 file changed, -2/+2)
  - Use 'getfattr' properly; avoid redundant options during xattr queries.
  - Untabify certain parts of tests (remove tabs).
  - Avoid backtick evaluation for certain values to make the code more portable.
  - Use awk on FreeBSD/Darwin, since the 'wc' implementation is broken and adds spurious spaces in its output.
  Change-Id: I7dcc0b70874e43b4cda8c306ed18a31b7a3f990a BUG: 1131713 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8520 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org> Tested-by: Emmanuel Dreyfus <manu@netbsd.org>
* Regression test portability: stat (Emmanuel Dreyfus, 2014-08-18; 1 file changed, -1/+1)
  Linux uses stat -c, stat --printf= or stat --printf; NetBSD uses stat -f with different format strings. This change set changes all stat usage to stat -c and introduces a shell stat() function to perform the format string translation.
  BUG: 764655 Change-Id: I024fca7c1b736b053f5888cbf21da0a72489ef63 Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8424 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Harshavardhana <harsha@harshavardhana.net>
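  A sketch of the translation idea, handling only the file-size format as an illustration (the real wrapper translates more format strings; treating %z as NetBSD's size format is an assumption here): tests keep calling "stat -c %s", and on NetBSD the wrapper rewrites it to the BSD-style invocation.
      function stat ()
      {
          if [ "$(uname -s)" = "NetBSD" ] && [ "$1" = "-c" ] && [ "$2" = "%s" ]; then
              command stat -f "%z" "$3"
          else
              command stat "$@"
          fi
      }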
* build: make GLUSTERD_WORKDIR rely on localstatedir (Harshavardhana, 2014-08-07; 1 file changed, -2/+2)
  - Break away from the previously hard-coded '/var/lib/glusterd'; instead rely on the 'configure' value from 'localstatedir'.
  - Provide 's/lib/db' as the default working directory for the gluster management daemon for BSD and Darwin based installations.
  - loff_t is really off_t on Darwin; fix the warnings generated by clang on FreeBSD/Darwin.
  - Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all platforms.
  - Define a proper environment for running tests: set the correct PATH and LD_LIBRARY_PATH when running tests, so that the desired version of glusterfs is used regardless of where it is installed. (Thanks to manu@netbsd.org for this additional work.)
  Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7 BUG: 1111774 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8246 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Regression test portability: ps (Emmanuel Dreyfus, 2014-07-28; 1 file changed, -3/+3)
  ps aux output is truncated to the terminal width on NetBSD; use ps auxww to avoid that.
  BUG: 764655 Change-Id: I28a2fc23e2823dd6524a72da30111b86fc4bfa7b Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8281 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Harshavardhana <harsha@harshavardhana.net>
* cluster/stripe: Fix EINVAL errors on quota enabled volumes (Krutika Dhananjay, 2014-06-26; 1 file changed, -0/+4)
  Write operations on directories with quota enabled used to fail with EINVAL on stripe volumes. This was due to an assert failure in stripe_lookup() meant to ensure loc->path is not NULL. However, in a nameless lookup (in this particular case triggered by quotad, which has the stripe xlator in its graph), loc->path can legitimately be NULL. The fix removes this check from stripe_lookup().
  Change-Id: Ibbd4f68763fdd8a85f29da78b3937cef1ee4fd1e BUG: 1100050 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/8145 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* tests: Provide force_umount with 5 retries (Pranith Kumar K, 2014-06-18; 1 file changed, -0/+5)
  Change-Id: I2b5784c48eedcccb17690de438addd29075926bd BUG: 1092850 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/8104 Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* mgmt/glusterd: save the snapd port in volinfo after starting snapd (Raghavendra Bhat, 2014-06-17; 1 file changed, -0/+8)
  Change-Id: I9266bbf4f67a2135f9a81b32fe88620be11af6ea BUG: 1109889 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8084 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Kaushal M <kaushal@redhat.com>
* tests: Validate self-heal daemon completions (Pranith Kumar K, 2014-06-16; 1 file changed, -0/+5)
  Replaced sleep with EXPECT_WITHIN. Heal full doesn't generate indices until the files/dirs are recreated, so wait until they are re-created and then wait for heal completion.
  Change-Id: I82399f6a17f94ecc101db45b83d8ef7bfa9c64dd BUG: 1092850 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/8069 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* storage/posix: Janitor should guard against dir renames (Pranith Kumar K, 2014-06-12; 1 file changed, -0/+5)
  Problem: A directory rename while a brick is down can cause the gfid handle of that directory to be deleted until the next lookup happens on that directory. Self-heal does not have the intelligence to detect renames at the moment, so it has to delete the directory 'd' using special flags, because it has to perform 'rm -rf' of that directory as it is not empty. The posix xlator implements this by renaming the deleted directory into the 'landfill' directory in '.glusterfs', where the janitor thread performs the actual rm -rf by traversing the directory. The janitor thread wakes up every 10 minutes to check if there are any directories to be deleted and deletes them; as part of deleting it also deletes the gfid handles.
  Steps to hit the problem:
  1) On a replicate volume create a directory 'd' and a file 'f' in 'd', so that the directory 'd' is not empty.
  2) Bring one of the bricks down (call it brick-a; the other one is brick-b).
  3) Rename d to d1.
  4) When brick-a comes online again, self-heal deletes directory 'd' and creates directory 'd1' on brick-a for performing self-heal. So on brick-a the gfid handle of 'd' is deleted and recreated to point to 'd1'.
  5) The directory 'd' with all its directory hierarchy (for now just the file 'f') ends up under the 'landfill' directory.
  6) The janitor thread wakes up and deletes directory 'd' and the gfid handle of 'd', without realizing that it now points to 'd1'. Thus 'd1' loses its gfid handle.
  Fix: Delete the gfid handle for a directory only when the gfid handle is stale.
  Change-Id: I21265b3bd3852f0967d916aaa21108ae5c9e7373 BUG: 1101143 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/7879 Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Remove stale index in self-heal codepath (Pranith Kumar K, 2014-05-08; 1 file changed, -0/+5)
  Change-Id: I635fc0fa955b33590f1c5b4dfec22d591ea8575c BUG: 1032894 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/6592 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11; 1 file changed, -0/+51)
  This is the initial patch for the Snapshot feature. The current patch includes the following features:
  * Snapshot create
  * Snapshot delete
  * Snapshot restore
  * Snapshot list
  * Snapshot info
  * Snapshot status
  * Snapshot config
  Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9 BUG: 1061685 Signed-off-by: shishir gowda <sgowda@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: Vijaikumar M <vmallika@redhat.com> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Signed-off-by: Joseph Fernandes <josferna@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7128 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: Sparse file self-heal changes (Pranith Kumar K, 2014-03-26; 1 file changed, -2/+8)
  - Fix boundary condition for offset.
  - Honour the data-self-heal-algorithm option.
  - Added tests for sparse file self-healing.
  Change-Id: I14bb1c9d04118a3df4072f962fc8f2f197391d95 BUG: 1080707 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/7339 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: refactor (Anand Avati, 2014-03-22; 1 file changed, -1/+1)
  - Remove client-side self-healing completely (opendir, openfd, lookup).
  - Re-work readdir failover to work reliably in the case of NFS.
  - Remove unused/dead lock-recovery code.
  - Consistently use xdata in both calls and callbacks in all FOPs.
  - Per-inode event generation, used to force inode ctx refresh.
  - Implement dirty flag support (in place of pending counts).
  - Eliminate the inode ctx structure; use read subvol bits + event_generation.
  - Implement inode ctx refreshing based on event generation.
  - Provide backward compatibility in transactions.
  - Remove unused variables and functions.
  - Make code more consistent in style and pattern.
  - Regularize and clean up inode-write transaction code.
  - Regularize and clean up dir-write transaction code.
  - Regularize and clean up common FOPs.
  - Reorganize transaction framework code.
  - Skip setting xattrs in the pending dict if nothing is pending.
  - Re-write self-healing code using syncops.
  - Re-write a simpler self-heal daemon.
  Change-Id: I1e4080c9796c8a2815c2dab4be3073f389d614a8 BUG: 1021686 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/6010 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Add options to the CLI that let the user control the reset of stats (Dawit Alemu, 2014-01-24; 1 file changed, -0/+23)
  "volume profile info" automatically clears incremental stats. There isn't a command to fetch stats without clearing incremental stats, or to clear both cumulative and incremental stats. This change introduces two arguments, peek and clear: 'clear' wipes both incremental and cumulative stats, while 'peek' fetches stats without wiping incremental stats.
  'volume profile info peek' - fetches incremental and cumulative stats without wiping incremental stats
  'volume profile info incremental peek' - fetches incremental stats without wiping incremental stats
  'volume profile info clear' - clears both incremental and cumulative stats
  Change-Id: I91834515ad672eca5f882809941147d7d997c4c9 BUG: 1047416 Signed-off-by: Dawit Alemu <dalemu@redhat.com> Reviewed-on: http://review.gluster.org/6620 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* features/marker: Filter quota xattrs on files as well (Pranith Kumar K, 2013-12-01; 1 file changed, -0/+7)
  Problem: Quota contributions of a file/directory are tracked by the quota xlator using xattrs on the file. Quota allows these xattrs to be healed as part of metadata self-heal, which leads to wrong quota calculations on that brick after self-heal, because the quota xattrs no longer represent the actual contributions on the brick.
  Fix: Don't let these xattrs be healed as part of metadata self-heal, by filtering quota xattrs on files in listxattr.
  Change-Id: Iea68a116595ba271e58c6fdcc3dd21c7bb55ebb3 BUG: 1035576 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/6374 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: improve rebalance fix-layout status reporting (Ravishankar N, 2013-09-19; 1 file changed, -1/+6)
  Problem: Currently the CLI rebalance status command output does not indicate the 'type' of rebalance, i.e. whether a full rebalance or only a fix-layout was carried out.
  Fix: After the rebalance status of all peers is received by the originator glusterd, alter it to reflect the type of rebalance before passing it on to the CLI process.
  Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3 BUG: 1004744 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/5826 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: remove-brick: Allow simultaneous removal of multiple subvolumes (Ravishankar N, 2013-08-13; 1 file changed, -1/+3)
  Currently, remove-brick supports removal of only one distributed stripe/replica pair at a time. Fix it to support removal of multiple pairs. This is consistent with add-brick behaviour, which supports adding multiple stripe/replica pairs simultaneously. Removal is successful irrespective of the order of the bricks given at the CLI, as long as the bricks are from the same subvolume(s).
  Change-Id: I7c11c1235ce07b124155978b9d48d0ea65396103 BUG: 974007 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/5210 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>