path: root/tests/volume.rc
Commit message | Author | Age | Files | Lines
* glusterd: Gluster should keep PID file in correct location | Gaurav Kumar Garg | 2017-08-12 | 1 | -4/+4

Currently Gluster keeps the process PID information of all the daemons and brick processes in the Gluster configuration file directory (i.e., /var/lib/glusterd/*). These PID files should be separate from the configuration files, since deletion of the configuration file directory might result in serious problems. Also, /var/run/gluster is the default placeholder directory for PID files. So, with this fix, Gluster keeps the PID information of all processes in the /var/run/gluster/* directory.

> BUG: 1258561
> Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
> Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
> Reviewed-on: https://review.gluster.org/13580
> Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Cherry picked from commit 220d406ad13d840e950eef001a2b36f87570058d

BUG: 1480459
Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/18023
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

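For the entry above, a quick way to see the relocation is to list the run directory; a minimal sketch, assuming default paths and that the daemons are running:

    # Old location: PID files sat next to the configuration under
    # /var/lib/glusterd/. New location after this change:
    find /var/run/gluster -name '*.pid'          # PID files of bricks and daemons
    cat $(find /var/run/gluster -name '*.pid')   # each file holds one process PID
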
* storage/posix: Add virtual xattr to fetch path from gfid | Kotresh HR | 2017-07-31 | 1 | -0/+5

The gfid2path infra stores "pargfid/bname" as an xattr value for each non-directory entry. Hardlinks have a separate xattr each. This xattr key is internal and is not exposed to applications. A virtual xattr is exposed for applications to fetch the path from a gfid.

Internal xattr: trusted.gfid2path.<xxhash>
Virtual xattr:  glusterfs.gfidtopath

getfattr -h -n glusterfs.gfidtopath /<aux-mnt>/.gfid/<gfid>

If there are hardlinks, it returns all the paths separated by ':'. A volume set option is introduced to change the delimiter to a required string of max length 7.

gluster vol set gfid2path-separator ":::"

> Updates: #139
> Change-Id: Ie3b0c3fd8bd5333c4a27410011e608333918c02a
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
> Reviewed-on: https://review.gluster.org/17785
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>

Updates: #139
Change-Id: Ie3b0c3fd8bd5333c4a27410011e608333918c02a
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17921
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

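A worked example of the commands quoted in the entry above; a minimal sketch, assuming a FUSE mount at /mnt/glusterfs made with the aux-gfid-mount option, a hypothetical volume name testvol, and a placeholder GFID:

    # Change the hardlink path separator (the commit quotes the option name;
    # the volume name is assumed here)
    gluster volume set testvol gfid2path-separator ":::"

    # Resolve a GFID to its path(s) through the virtual xattr
    getfattr -h -n glusterfs.gfidtopath \
        /mnt/glusterfs/.gfid/11111111-2222-3333-4444-555555555555
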
* posix/gfid2path: Block access to gfid2path xattr via mount | Kotresh HR | 2017-07-31 | 1 | -0/+6

The gfid2path xattr is an internal xattr and should not be modifiable by other applications from the gluster mount. This patch blocks such access.

> Updates: #139
> Change-Id: Id2cb29797ee1bd77e0e0d2203a47469fd7203355
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
> Reviewed-on: https://review.gluster.org/17744
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Prashanth Pai <ppai@redhat.com>
> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
> Reviewed-by: Aravinda VK <avishwan@redhat.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

(cherry picked from commit 96eece8abbb9c06f0b91f37e718ac9e337a3f714)

Updates: #139
Change-Id: Id2cb29797ee1bd77e0e0d2203a47469fd7203355
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17869
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

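To illustrate the restriction described above, a minimal sketch; the file path and the xxhash suffix are placeholders, and the exact error returned is not stated in the commit:

    # Attempts to set or remove the internal gfid2path xattr from a client
    # mount should be rejected once this patch is in place.
    setfattr -n trusted.gfid2path.0123456789abcdef -v bogus /mnt/glusterfs/file1
    setfattr -x trusted.gfid2path.0123456789abcdef /mnt/glusterfs/file1
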
* storage/posix: New gfid2path infra | Kotresh HR | 2017-07-10 | 1 | -1/+1

With this infra, a new xattr is stored on each entry creation as below:

trusted.gfid2path.<xxhash> = <pargfid>/<basename>

If there are hardlinks, multiple xattrs would be present.

Fops which are impacted: create, mknod, link, symlink, rename, unlink

Option to enable: gluster vol set <VOLNAME> storage.gfid2path on

Updates: #139
Change-Id: I369974cd16703c45ee87f82e6c2ff5a987a6cc6a
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17488
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>

* glusterd: fix brick start race | Atin Mukherjee | 2017-06-06 | 1 | -5/+0

This commit handles a race where we might end up trying to spawn the brick process twice with two different sets of ports, resulting in the glusterd portmapper having the same brick entry on two different ports, which causes clients to fail to connect to bricks because incorrect ports are communicated back by glusterd.

In glusterd_brick_start (), checking the brickinfo->status flag to identify whether a brick has been started by glusterd is not sufficient. While glusterd restarts, glusterd_restart_bricks () is called through glusterd_spawn_daemons () in a synctask, and immediately afterwards glusterd_do_volume_quorum_action () (with server-side-quorum set to on) will again try to start the brick. If the RPC_CLNT_CONNECT event for the same brick hasn't been processed by glusterd by that time, brickinfo->status will still be marked as GF_BRICK_STOPPED, resulting in a reattempt to start the brick with a different port. That leaves the portmap inconsistent and causes clients to fetch an incorrect port.

The fix is to introduce another enum value, GF_BRICK_STARTING, in brickinfo->status, which is set when a brick start is attempted by glusterd and moved to started via the RPC_CLNT_CONNECT event. For brick multiplexing this value has no effect, since on an attach-brick request brickinfo->status is marked as started directly. This patch also removes the started_here flag, as it is redundant with brickinfo->status.

Change-Id: I9dda1a9a531b67734a6e8c7619677867b520dcb2
BUG: 1457981
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17447
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* core: fix spelling errors | Kaleb S. KEITHLEY | 2017-06-02 | 1 | -1/+1

Fixes for various minor spelling errors and typos.

Reported-by: Patrick Matthäi <pmatthaei@debian.org>
Change-Id: Ic1be36f82e3d822bbdc9559878bd79520fc0fcd5
BUG: 1457808
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/17442
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>

* glusterd: socketfile & pidfile related fixes for brick multiplexing feature | Mohit Agrawal | 2017-05-09 | 1 | -1/+29

Problem: With brick multiplexing on, after restarting glusterd the CLI does not show the PID of all brick processes in all volumes.

Solution: While brick multiplexing is on, all local brick processes communicate through one UNIX socket, but as per the current code (glusterd_brick_start) glusterd tries to communicate with a separate UNIX socket for each volume, populated from the brick name and volume name. Because of the multiplexing design only one UNIX socket is opened, so it throws poller errors and cannot fetch the correct status of brick processes through the CLI. To resolve the problem, a new function glusterd_set_socket_filepath_for_mux is added, called by glusterd_brick_start to validate the existence of the socket path. To avoid continuous EPOLLERR errors in the logs, the socket_connect code is updated.

Test: To reproduce the issue, follow the steps below:
1) Create two distributed volumes (dist1 and dist2)
2) Set cluster.brick-multiplex to on
3) Kill glusterd
4) Run 'gluster v status'
After applying the patch, the correct PID is shown for all volumes.

BUG: 1444596
Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17101
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

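The reproduction steps above as a shell sequence; a minimal sketch, assuming the two distributed volumes already exist and that the multiplexing option is set at the global 'all' scope:

    gluster volume set all cluster.brick-multiplex on   # step 2
    pkill glusterd                                      # step 3
    glusterd                                            # restart glusterd
    gluster volume status                               # step 4: PIDs should be correct
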
* Fixes quota aux mount failure | Sanoj Unnikrishnan | 2017-05-08 | 1 | -1/+18

The aux mount is created on the first limit/remove-limit/list command and remains until the volume is stopped / deleted / quota is disabled, at which point we do a lazy unmount. If the process is uncleanly terminated, the mount entry remains and we get a "Transport disconnected" error on subsequent attempts to run quota list/limit-usage/remove commands.

Second issue: there is also a risk of an inadvertent rm -rf on /var/run/gluster causing data loss for the user. Ideally, /var/run is a temp path for application use and should not cause any data loss to persistent storage.

Solution:
1) Unmount the aux mount after each use.
2) Clean up any stale mount before mounting.

One caveat with doing a mount/unmount on each command is that we cannot use the same mount point for both the list and limit commands: the list command needs the mount to be accessible in the CLI after the response from glusterd, so it could be unmounted by a limit command executed in parallel (had we used the same mount point). Hence we use separate mount points for the list and limit commands.

Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
BUG: 1433906
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Reviewed-on: https://review.gluster.org/16938
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* cluster/ec: Don't trigger data/metadata heal on Lookups | Pranith Kumar K | 2017-02-26 | 1 | -0/+5

Problem-1: If a Lookup, which doesn't take any locks, observes a version mismatch, it can't be trusted. If we launch a heal based on this information it leads to self-heals which affect I/O performance in the cases where the Lookup is wrong. Considering that the self-heal daemon, and operations on the inode from clients which take locks, can still trigger heal, we can choose not to attempt a heal on Lookup.

Problem-2: Fixed spurious failure of tests/bitrot/bug-1373520.t. For the issue above, what was happening was that ec_heal_inspect() was preventing 'name' heal from happening.

Problem-3: tests/basic/ec/ec-background-heals.t. To be honest I don't know what the problem was; while fixing the two problems above, I made some changes to ec_heal_inspect() and ec_need_heal() after which, when I tried to recreate the spurious failure, it just didn't happen even after a long time.

BUG: 1414287
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Change-Id: Ife2535e1d0b267712973673f6d474e288f3c6834
Reviewed-on: https://review.gluster.org/16468
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>

* glusterd: keep snapshot bricks separate from regular ones | Jeff Darcy | 2017-02-10 | 1 | -0/+1

The problem here is that a volume's transport options can change, but any snapshots' bricks don't follow along even though they're now incompatible (with respect to multiplexing). This was causing the USS+SSL test to fail. By keeping the snapshot bricks separate (though still potentially multiplexed with other snapshot bricks, including those for other volumes) we can ensure that they remain unaffected by changes to their parent volumes.

Also fixed various issues with how the test waits (or more precisely didn't) for various events to complete before it continues.

Change-Id: Iab4a8a44fac5760373fac36956a3bcc27cf969da
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/16544
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>

* tests: fix online_brick_count for multiplexing | Jeff Darcy | 2017-02-07 | 1 | -1/+1

The number of brick processes no longer matches the number of bricks, therefore counting processes doesn't work. Counting *pidfiles* does. Ironically, the fix broke multiplex.t which used this function, so it now uses a different function with the old process-counting behavior. Also had to fix online_brick_count and kill_node in cluster.rc to be consistent with the new reality.

Change-Id: I4e81a6633b93227e10604f53e18a0b802c75cbcc
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/16527
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

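A minimal sketch of the pidfile-counting idea described above (not the exact helper from volume.rc); it assumes brick pidfiles are kept under /var/run/gluster, which matches the later PID-file relocation at the top of this log:

    online_brick_count_sketch () {
        # With multiplexing, one glusterfsd process can host many bricks,
        # so count per-brick pidfiles rather than processes.
        find /var/run/gluster -name '*.pid' 2>/dev/null | wc -l
    }
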
* core: run many bricks within one glusterfsd process | Jeff Darcy | 2017-01-30 | 1 | -3/+27

This patch adds support for multiple brick translator stacks running in a single brick server process. This reduces our per-brick memory usage by approximately 3x, and our appetite for TCP ports even more. It also creates potential to avoid process/thread thrashing, and to improve QoS by scheduling more carefully across the bricks, but realizing that potential will require further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option. By default it's off, and bricks are started in separate processes as before. If multiplexing is enabled, then *compatible* bricks (mostly those with the same transport options) will be started in the same process.

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

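A quick way to observe the effect of the option described above; a minimal sketch, assuming several volumes with compatible bricks are started after the option is set:

    gluster volume set all cluster.brick-multiplex on
    # (re)start the volumes so the bricks are spawned under the new setting
    pgrep -c glusterfsd   # with multiplexing on, compatible bricks share one process
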
* features/bit-rot-stub: use the correct spelling of quarantine for bad objects container | Raghavendra Bhat | 2017-01-30 | 1 | -1/+1

The directory containing the list of bad objects was named "quanrantine" instead of "quarantine".

Change-Id: I8c20381ac637201d9d1a224f5223e8dfbed53f1e
BUG: 1401571
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: https://review.gluster.org/16027
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kotresh HR <khiremat@redhat.com>

* tier : Tier as a service | hari gowtham | 2017-01-16 | 1 | -2/+2

tierd is implemented by separating it from the rebalance process. The commands affected:
1) Attach tier triggers this process instead of the old one.
2) tier start and tier start force also trigger this process.
3) volume status [tier] shows the tier daemon as a process instead of a task, and normal tier status and tier detach status work.
4) tier stop implemented.
5) detach tier implemented separately, along with a new detach tier status.
6) volume tier <volname> status works using these changes.
7) volume set works.

This patch has separated the tier translator from the legacy DHT rebalance code. It now sends the RPCs from the CLI to glusterd separately from the DHT rebalance code. The daemon is now a service, similar to the snapshot daemon, and can be viewed using the volume status command. The code for the validation and commit phases is the same as the earlier tier validation code in DHT rebalance. The "brickop" phase has been changed so that the status command can use this framework.

The service management framework is now used; DHT rebalance does not use this framework. This service framework takes care of:
*) spawning the daemon, killing it and other such processes.
*) volume set options, which are written into the volfile.
*) restart and reconfigure functions. Restart restarts the daemon at two points: 1) after gluster goes down and comes up, and 2) to stop detach tier.
*) reconfigure, which makes immediate volfile changes without restarting the daemon. It also has the code to rewrite the volfile for topological changes (which come into play during add and remove brick).

With this patch the log, pid, and volfile are separated and put into their respective directories.

Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
BUG: 1313838
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/13365
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

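The user-visible commands from the list above, collected into one sketch; the volume name is hypothetical and the exact CLI syntax may differ slightly between releases:

    gluster volume tier tiervol status          # tier daemon status (item 6)
    gluster volume status tiervol               # tierd listed as a process (item 3)
    gluster volume tier tiervol detach status   # detach tier status (item 5)
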
* cluster/ec: Implement heal info with lock | Ashish Pandey | 2016-10-11 | 1 | -0/+5

Problem: Currently the heal info command prints all the files/directories if the index for the file/directory is present in the .glusterfs/indices folder. After patch http://review.gluster.org/#/c/13733/, indices of a file which is going through an update fop will also be present in .glusterfs/indices even if the fop succeeds on all the bricks. If the heal info command is used at this time, it will also display this file, which is actually healthy and does not require any heal.

Solution: Take a lock on the file corresponding to the indices and inspect the xattrs to decide if the file needs heal or not.

Change-Id: I6361e2813ece369be12d02e74816df4eddb81cfa
BUG: 1366815
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: http://review.gluster.org/15543
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>

* tests: Fix races in open-behind.t | Pranith Kumar K | 2016-09-27 | 1 | -0/+6

Problems:
1) flush-behind is on by default, so just because a write completes doesn't mean it will be on disk; it could still be in write-behind's cache. This leads to a failure where, if you write from one mount and expect the data to be there on the other mount, sometimes it won't be.
2) Sometimes the graph switch does not complete by the time we issue the read, which leads to opens not being sent to the brick, causing failures.

Fixes:
1) Disable flush-behind.
2) Add new functions to check that the new graph is there and connected to the bricks before 'cat' is executed.

BUG: 1379511
Change-Id: I0faed684e0dc70cfd2258ce6fdaed655ee915ae6
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/15575
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* feature/bitrot: Fix recovery of corrupted hardlink | Kotresh HR | 2016-09-08 | 1 | -0/+4

Problem: When a file with hardlinks is corrupted in an ec volume, the documented recovery steps were not working. Only the name and metadata were healing, but not the data.

Cause: The bad-file marker in the inode context is not removed. Hence when self-heal tries to open the file for data healing, it fails with EIO.

Background: Bitrot deletes the inode context during forget. Briefly, the recovery involves the following steps:
1. Delete the entry marked with the bad-file xattr from the backend. Delete all the hardlinks, including the .glusterfs hardlink as well.
2. Access each hardlink of the file, including the original, from the mount.

Step 2 sends a lookup to the brick where the files were deleted from the backend, which returns ENOENT. On ENOENT, the server xlator forgets the inode if there are no dentries associated with it. But in the hardlink case, forget won't be called, as dentries (other hardlink files) are still associated with the inode. Hence bitrot-stub doesn't delete its context, failing the data self-heal.

Fix: Bitrot-stub should delete the inode context on getting ENOENT during lookup.

Change-Id: Ice6adc18625799e7afd842ab33b3517c2be264c1
BUG: 1373520
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/15408
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>

* feature/bitrot: Ondemand scrub option for bitrot | Kotresh HR | 2016-08-25 | 1 | -0/+6

The bitrot scrubber takes 'hourly/daily/biweekly/monthly' as the values for 'scrub-frequency'. There is no way to schedule scrubbing at the moment the admin wants it. On-demand scrubbing brings in the new option 'ondemand', with which the admin can start scrubbing on demand; it starts the scrubbing immediately. On-demand scrubbing succeeds only if the scrubber is 'Active (Idle)' (waiting for its next frequency cycle to start scrubbing). It is not entertained when the scrubber is 'Paused' or already running.

Here is the command line syntax:

gluster volume bitrot <vol name> scrub ondemand

Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
BUG: 1366195
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/15111
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Venky Shankar <vshankar@redhat.com>

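Putting the syntax above into context; a minimal sketch, assuming bitrot is already enabled on a hypothetical volume named testvol:

    gluster volume bitrot testvol scrub status     # scrubber should be 'Active (Idle)'
    gluster volume bitrot testvol scrub ondemand   # start an immediate scrub run
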
* tests: Remove hard coding in get_aux | Pranith Kumar K | 2016-07-22 | 1 | -4/+12

Change-Id: Ie007d8006a2f2be0187f0c73d46ec6dda2a68a6b
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14988
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jdarcy@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* cli: different status output for rebalance fix-layout | Sakshi | 2016-07-03 | 1 | -2/+6

Change-Id: I6ded40a1b1cff5c72e5b61fd353db3d8c688efd8
BUG: 1225718
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/10956
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* tests: Add more tests for granular entry self-heal feature | Krutika Dhananjay | 2016-05-30 | 1 | -0/+5

Change-Id: I6f14e413c538e392c8ee5bf4bf9f283e8ac792b7
BUG: 1332566
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/14542
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* nsr/jbr: Renaming nsr to jbr | Avra Sengupta | 2016-04-25 | 1 | -3/+3

As per community consensus, we have decided to rename nsr to jbr (Journal-Based-Replication). This is the patch to rename the "nsr" code to "jbr".

Change-Id: Id2a9837f2ec4da89afc32438b91a1c302bb4104f
BUG: 1328043
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/13899
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* nsr: Introducing a happy path test case | Avra Sengupta | 2016-03-31 | 1 | -0/+18

Write infra for nsr_server to not send a CHILD_UP before it gets a CHILD_UP from a quorum of its children. The CHILD_UP received in the nsr client translator from the server is used to decide the right time for starting the I/Os.

Change-Id: I9551638b306bdcbc6bae6aeda00316576ea832fe
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/13623
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* features/shard: Implement discard fop | Krutika Dhananjay | 2016-03-11 | 1 | -0/+4

Change-Id: Ia5bd8d36b21a586df6556fbec3474892d5871229
BUG: 1261841
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/13657
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>

* tests, shard: fallocate tests refactor | Krutika Dhananjay | 2016-03-10 | 1 | -0/+6

Change-Id: I3f275185f4dcb1939e8074851c8f140c5e40b28d
BUG: 1261841
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/13405
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>

* glusterd: volume get should pick options from priv->opts too | Atin Mukherjee | 2016-03-08 | 1 | -0/+7

Until now, volume get did not look at the global options maintained in the options dictionary in glusterd_conf_t. This patch includes them as well.

Change-Id: Ib05259a2dcacc4a712cae4217fe4a6553b61da56
BUG: 1300596
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13272
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* tests/quota : improving tests for quota | Manikandan Selvaganesh | 2016-03-03 | 1 | -1/+1

tests/basic/quota.t includes all the basic tests that need to be run for quota. Most of the other bug-specific tests (tests/bugs/quota/*) repeat steps such as creating and starting a volume, enabling quota, setting a limit, writing data, and doing a list, which are essential when writing an individual quota test file; but if the specific bug only needs to test a few particular cases, those tests have been moved under tests/basic itself to speed up the regressions. The basics of inode-quota and its enforcement, and renaming with quota, are basic tests and are hence moved under the tests/basic folder. In other files, tests that are not needed have been removed, such as 'pidof glusterd', checking 'gluster volume info', or any test that is already covered under tests/basic and was being written again.

Change-Id: Iefd6d9529246d59829cc5bf02687a1861d8462a8
BUG: 1294826
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/13216
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* tests/quota : fix failing test on auxiliary mount point | Manikandan Selvaganesh | 2016-01-12 | 1 | -1/+1

In the test file tests/bugs/quota/bug-1049323.t, the test 'EXPECT "0" get_aux' fails on Fedora. In the get_aux function we grep for "/var/run/gluster/<volname>" to check whether the auxiliary mount point has been created; we return 0 on success and 1 otherwise. On Fedora, the auxiliary mount point is created at "/run/gluster/<volname>", so the check fails there. The patch fixes it by just grepping for "/run/gluster/<volname>".

Change-Id: Icb59395df4a98109eaa8199cbdbdedcd1cbef27a
BUG: 1297740
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/13228
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>

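A minimal sketch of the kind of check described above (not the exact get_aux from volume.rc); it assumes the aux mount appears in the mount table under a path ending in run/gluster/<volname>:

    get_aux_sketch () {
        # Echo 0 if the quota auxiliary mount exists, 1 otherwise.
        # Matching "run/gluster/$V0" covers both /var/run and /run layouts.
        if mount | grep -q "run/gluster/${V0}"; then echo 0; else echo 1; fi
    }
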
* tests: handle bad objects during lookup/inode_refresh | Ravishankar N | 2015-12-28 | 1 | -0/+4

Change-Id: I1848f0e9243c9376e0deba6738757350fe8b704a
BUG: 1290965
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13044
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* tests: fix brick_up_status | Kaushal M | 2015-12-09 | 1 | -2/+1

The brick_up_status function wasn't correct after the introduction of the RDMA port into the `volume status` output. It has been fixed to use the XML brick status of a specific brick instead of normal CLI output.

Change-Id: I5327e1a32b1c6f326bc3def735d0daa9ea320074
BUG: 1289584
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/12913
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

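A sketch of reading a brick's state from the XML output instead of scraping the plain listing, as the fix above describes; the helper name and parsing are illustrative, not the exact code from volume.rc:

    brick_up_status_sketch () {
        local vol=$1 host=$2 brickpath=$3
        # In the per-brick XML, <status>1</status> means the brick is online.
        gluster volume status "$vol" "$host:$brickpath" --xml 2>/dev/null \
            | grep -o '<status>[01]</status>' | head -n 1
    }
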
* glusterd/bitrot : Integration of bad files from bitd with scrub status command | Gaurav Kumar Garg | 2015-11-23 | 1 | -0/+7

Currently the scrub status command does not display the list of all the bad files. All the bad files are available in the bitd daemon. With this patch, the scrub status command will display the list of all the bad files.

Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12720
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd : check if all bricks are started before performing remove-brick | Sakshi | 2015-09-22 | 1 | -0/+8

Change-Id: Ie9e24e037b7a39b239a7badb983504963d664324
BUG: 1225716
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/10954
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* ec : trusted.ec.version xattr of all root directories of all bricks should be same. | Ashish Pandey | 2015-08-29 | 1 | -3/+17

Problem: After replacing a brick using the "replace-brick" command and running "heal full", the version of the root directory of the newly added brick was not getting healed. Heal starts running on the dentries of the root but does not run on the root directory itself.

Solution: Run heal on the root directory.

Change-Id: Ifd42a3fb341b049c895817e892e5b484a5aa6f80
BUG: 1243382
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: http://review.gluster.org/11676
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>

* glusterd: stop all the daemons services on peer detach | Gaurav Kumar Garg | 2015-08-24 | 1 | -8/+8

Currently glusterd does not stop all the daemon services on peer detach. With this fix it does the peer-detach cleanup properly and stops all the daemons which were running on the node before the peer detach.

Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: Stop/restart/notify to daemons(svcs) during reset/set on a volume | anand | 2015-08-06 | 1 | -0/+9

Problem: Reset/set commands were not working properly. The reset command returns success but does not send a notification to svcs if the corresponding graph was modified.

Fix: Whenever a reset/set command is issued, generate the temp graph, compare it with the original graph and do the following:
1) If both graphs are identical, nothing needs to be done with svcs.
2) If there are any changes in graph topology, restart/stop the service by calling the svc manager.
3) If there are changes in options, send a notify signal by calling glusterd_fetchspec_notify.

Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
BUG: 1209329
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/10850
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* quota: fix quota test case | vmallika | 2015-07-15 | 1 | -4/+4

The command below is the wrong way of executing multiple commands joined with | (pipe):

local cmd="$CLI volume quota $V0 list $QUOTA_PATH | grep $QUOTA_PATH | awk '{print \$$FIELD}'"
$cmd

This patch fixes the issue. It also fixes the testcase inode-quota.t, which was checking quota values in the wrong fields.

Change-Id: If28732e6a76ea4bf75560f6496c8f56670915cf9
BUG: 1229297
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11673
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

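One common way to make a command string that contains pipes actually run through the shell is to eval it; a minimal sketch of the idea (the real fix in the test may be structured differently):

    local cmd="$CLI volume quota $V0 list $QUOTA_PATH | grep $QUOTA_PATH | awk '{print \$$FIELD}'"
    # Expanding "$cmd" directly passes the pipe as a literal argument;
    # eval lets the shell parse the full pipeline.
    eval "$cmd"
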
* glusterd: Pass NULL in glusterd_svc_manager in glusterd_restart_bricks | Gaurav Kumar Garg | 2015-07-14 | 1 | -0/+3

On restarting glusterd, the quota daemon is not started when more than one volume is configured and quota is enabled only on the second volume. This is because while restarting, glusterd restarts all the bricks, and during brick restart it starts the respective daemons by passing the volinfo of the first volume. Passing a volinfo to glusterd_svc_manager implies that the daemon managers take action based on that same volume's configuration, which is incorrect for per-node daemons.

The fix is to pass a NULL volinfo while restarting bricks.

Change-Id: I2602002a8ba7762fc1eb08123e79fbcf568ecab4
BUG: 1242875
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11658
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>

* features/quota : Fix spurious failure | vmallika | 2015-07-12 | 1 | -5/+28

Problem: In this test case a file is created which exceeds the quota limit. Once the limit is reached, that file is deleted. At the same moment we are testing inode-quota. It can happen that, before the marker updates the information related to the deletion of the file, a new file-creation operation comes in and sees that the quota limit is still exceeded.

Solution: Introduce a check to see whether the marker update completed successfully. Updated all the test cases which have a similar mechanism, and also moved the "usage" function to the common place "volume.rc".

Change-Id: I36ddbc5ebbf1b74c9d326a0d1d5f3b32f20a906a
BUG: 1229297
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11125
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>

* quota/marker: fix spurious failure afr-quota-xattr-mdata-heal.t | vmallika | 2015-07-10 | 1 | -0/+20

During the quota-update process, if inode info is present in the size xattr but missing in the contri xattrs, then in the function '_mq_get_metadata' we set the contri size to zero (on error -2, which means usage info is present but inode info is missing). With this we calculate a wrong delta and update it. With this patch we ignore errors if the inode info in the xattrs is missing.

Change-Id: I7940a0e299b8bb425b5b43746b1f13f775c7fb92
BUG: 1241153
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11583
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>

* cluster/ec: Make background healing optional behavior | Pranith Kumar K | 2015-07-06 | 1 | -1/+1

Provide options to control the number of active background heals and the queue length.

Change-Id: Idc2419219d881f47e7d2e9bbc1dcdd999b372033
BUG: 1237381
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/11473
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

* tests: fix basic/afr/replace-brick-self-heal.t failure | Ravishankar N | 2015-06-26 | 1 | -1/+1

Test fails with:

not ok 28 Got "Binary file (standard input) matches" instead of "qwerty"
FAILED COMMAND: qwerty get_text_xattr user.test /d/backends/patchy1_new/file5.txt
not ok 29 Got "Binary file (standard input) matches" instead of "qwerty"
FAILED COMMAND: qwerty get_text_xattr user.test /d/backends/patchy0/file5.txt
Failed 2/29 subtests

Fix: Pass -a flag to grep

Change-Id: I69626fbf95a9ff756046363c5627cf98ea3f1df8
BUG: 1207829
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/11416
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* tests/bitrot: Scrub state change tests | Venky Shankar | 2015-06-21 | 1 | -2/+2

Change-Id: Ibb4b503e7d723c86ac381ad3747b1198334bd6ad
BUG: 1231619
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/11290
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* Tests portability: umount(8) | Emmanuel Dreyfus | 2015-06-09 | 1 | -1/+13

1) Avoid hangs when unmounting NFS on NetBSD. NetBSD umount(8) on an NFS mount whose server is gone will wait forever, because umount(8) calls realpath(3) and tries to access the mount before it calls unmount(2). The non-portable, NetBSD-specific umount -R flag prevents that behavior. We therefore introduce UMOUNT_F, defined as "umount -f" on Linux and "umount -f -R" on NetBSD, to take care of forced unmounts, especially in the NFS case.

2) Enforce usage of the force_umount wrapper with a timeout. Whenever umount is used it should be wrapped in force_umount with timeout handling. That saves us timing issues, and it handles the NetBSD NFS case.

3) Clean up the kernel cache flush. We used (cd $M0 && umount $M0) as a portable kernel cache flush trick, but it does not flush everything we need on Linux. Introduce a drop_cache() shell function that reverts to the previously used echo 3 > /proc/sys/vm/drop_caches on Linux, and keeps (cd $M0 && umount $M0) on other systems.

BUG: 1129939
Change-Id: Iab1f5a023405f1f7270c42b595573702ca1eb6f3
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/11114
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

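The two helpers described above, written out as a sketch; the exact definitions in the test framework may differ:

    case $(uname -s) in
    NetBSD) UMOUNT_F="umount -f -R" ;;   # -R avoids realpath(3) on a dead NFS server
    *)      UMOUNT_F="umount -f" ;;
    esac

    drop_cache () {
        case $(uname -s) in
        Linux) echo 3 > /proc/sys/vm/drop_caches ;;   # full page/dentry/inode drop
        *)     ( cd "$1" && umount "$1" ) ;;          # trick used on other systems
        esac
    }
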
* posix: Do not update unmodified xattr in (f)xattrop | Xavier Hernandez | 2015-05-27 | 1 | -1/+5

If a (f)xattrop is issued with a value that only contains 0's, then we don't modify or create the extended attribute. This is useful to avoid ctime modifications when the only purpose of the xattrop was to get the current value.

Change-Id: Ia62494e9009962e683c8276783f671da17a8b03a
BUG: 1211123
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/10886
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* tests: arbiter.t fix | Ravishankar N | 2015-05-24 | 1 | -2/+2

Wait for AFR's children to be up in the glustershd process before attempting heal. Also, grep (version 2.21) detects statedump files as binary, causing tests to succeed incorrectly; hence the -a switch is added to force it to treat them as text files. Thanks to Vijay Bellur for identifying the issue (http://lists.gnu.org/archive/html/bug-grep/2015-05/msg00000.html) and the workaround.

Change-Id: Ie3d9591ffaf44baa0cd8c2baa327aed24378e3df
BUG: 1163543
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/10833
Tested-by: NetBSD Build System
Tested-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* test: Fix sparse file self heal test | zhoushicheng | 2015-05-19 | 1 | -1/+0

This patch solves problems caused by XFS with the speculative preallocation feature on: the test EXPECT "1" has_holes $B0/${V0}0/big2bigger would fail when XFS has not yet freed the preallocated blocks. It is caused by the XFS speculative preallocation feature; the test would pass if this feature were disabled. Reclaim of the speculative preallocation is faster under Linux 3.8 (and later); otherwise, the test can be made to pass by dropping caches manually to speed up the reclaim. As in http://review.gluster.org/#/c/10411/, "( cd $M0 ; umount $M0 )" is used to drop caches, which is better than "echo 3 > /proc/sys/vm/drop_caches". The drop-caches operation was added to the test tests/basic/afr/sparse-file-self-heal.t.

BUG: 1206461
Change-Id: Ie2c9d1b92fa8307c44498752fdd100eb86f9689c
Signed-off-by: zhoushicheng <madaozhou@gmail.com>
Reviewed-on: http://review.gluster.org/10253
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* tests: fix bitrot spurious failures | Atin Mukherjee | 2015-05-09 | 1 | -0/+9

Change-Id: I55bd62480b7ee38cf7b29aeba67b19b0c5bbe2fb
BUG: 1220016
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10702
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>

* tests: Check aux umount is unmounted for quota tests | Pranith Kumar K | 2015-05-04 | 1 | -0/+13

Change-Id: If57d08f3446755ea41f66ca258efcc8ea5a89063
BUG: 1217701
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/10480
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* features/changelog: Consider only changelog on/off as changelog breakage | Kotresh HR | 2015-05-02 | 1 | -0/+5

Earlier, both changelog on/off and brick restart were considered changelog breakage and treated as the changelog not being continuous. As a result, a new HTIME.TSTAMP file was created in both of the above cases. Now only changelog enable/disable is considered a discontinuity: a new HTIME.TSTAMP file is not created on brick restart, and the changelog files are appended to the last HTIME.TSTAMP file.

Treating the changelog as continuous in the above scenario is important, as the changelog history API would otherwise fail. It can successfully get changes between start and end timestamps only when the changelog is continuous (changelogs in a single HTIME.TSTAMP file are treated as continuous). Without this change the changelog history API would fail, and it would become necessary to fall back to other mechanisms, like xsync FSCrawl in the case of geo-rep, to detect changes in this time window. But xsync FSCrawl would not be applicable to other consumers like glusterfind.

Rationale:
1. In a plain distributed volume, if a brick goes down, no I/O can happen on that brick, hence the changelog is intact with the data on disk.
2. In a distributed-replicated volume, if a brick goes down, since self-heal traffic is captured in the changelog, the I/O that happened while the brick was down is eventually captured in the changelog.

Change-Id: I2eb66efe6ee9a9228fb1fcb38d6e7696b9559d5b
BUG: 1211327
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/10222
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System

* tests: Await for graph switch before testing open fop count | Vijay Bellur | 2015-04-29 | 1 | -0/+7

In performance/open-behind.t, a test for the open fop reaching the brick is done by switching off open-behind and performing a read operation. If the read operation is performed before a graph switch, the read happens on the old graph and hence the open does not get accounted on the brick. To overcome this, an EXPECT_WITHIN of 10 seconds has been added to ensure that a graph switch has happened; the read operation happens subsequently, after the graph switch. Also cleaned up a "No volumes present" message from stderr while doing this.

Change-Id: I1e1c0d7e4bd2057520b4dd46157d18f30837b8c9
BUG: 1213066
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/10293
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Raghavendra Bhat <raghavendra@redhat.com>

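A sketch of the waiting pattern described above, using the test framework's EXPECT_WITHIN; the helper that detects the new graph and the file name are hypothetical:

    # Wait up to 10 seconds for the client to switch to the new graph
    # before issuing the read that must reach the brick.
    EXPECT_WITHIN 10 "Y" new_graph_active $M0
    TEST cat $M0/testfile
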