* core: use syscall wrappers instead of direct syscalls - miscellaneous  (Kaleb S. KEITHLEY, 2015-10-28, 24 files changed, -245/+263)

    Various xlators and other components invoke system calls directly instead of using the libglusterfs/syscall.[ch] wrappers. Where a wrapper is deliberately not used, there should be a comment in the source explaining why.

    Change-Id: I1f47820534c890a00b452fa61f7438eb2b3f667c
    BUG: 1267967
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/12276
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

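For illustration, the wrapper pattern referred to above looks roughly like this. This is a minimal sketch only; the real wrappers live in libglusterfs/syscall.[ch] and their exact names and signatures may differ.

    #include <fcntl.h>

    /* A thin syscall wrapper: one central place for instrumentation, error
     * handling and platform quirks, instead of raw open(2) calls spread
     * across xlators. */
    int
    sys_open (const char *pathname, int flags, int mode)
    {
            return open (pathname, flags, mode);
    }

A call site would then change from fd = open (path, O_RDONLY) to fd = sys_open (path, O_RDONLY, 0), or keep the raw call with a comment explaining why the wrapper is not used.
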
* core: use syscall wrappers instead of direct syscalls -- glusterd  (Kaleb S. KEITHLEY, 2015-10-28, 21 files changed, -183/+199)

    Various xlators and other components invoke system calls directly instead of using the libglusterfs/syscall.[ch] wrappers. Where a wrapper is deliberately not used, there should be a comment in the source explaining why.

    Change-Id: I28bf2a5f7730b35914e7ab57fed91e1966b30073
    BUG: 1267967
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/12379
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* afr: write zeros to sink for non-sparse files  (Ravishankar N, 2015-10-28, 4 files changed, -24/+99)

    Problem: If a file is created with zeroes ('dd', 'fallocate' etc.) while a brick is down, self-heal does not write the zeroes to the sink after the brick comes back up. Consequently, there is a mismatch in disk usage amongst the bricks of the replica.

    Fix: If we definitely know that the file is not sparse, then write the zeroes to the sink even if the checksums match.

    Change-Id: Ic739b3da5dbf47d99801c0e1743bb13aeb3af864
    BUG: 1272460
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/12371
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

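A minimal sketch of the decision described in the fix above, with hypothetical parameter names rather than the actual afr self-heal code:

    #include <unistd.h>
    #include <sys/types.h>

    /* If the source block is all zeroes and checksums match, a sparse file
     * can skip the write (keeping the hole), but a non-sparse file must
     * still have the zeroes written so that disk usage matches across
     * replicas. */
    static ssize_t
    heal_write_block (int sink_fd, const char *buf, size_t len, off_t offset,
                      int checksums_match, int file_is_sparse)
    {
            if (checksums_match && file_is_sparse)
                    return 0;       /* preserve the hole in the sink */

            return pwrite (sink_fd, buf, len, offset);
    }
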
* features/shard: Force cache-refresh when lookup/readdirp/stat detect that xattr value has changed  (Krutika Dhananjay, 2015-10-28, 3 files changed, -19/+149)

    Change-Id: Ia3225a523287f6689b966ba4f893fc1b1fa54817
    BUG: 1272986
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/12400
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* cluster/tier: do not log error message on lookup heal for files on hot tier  (Dan Lambright, 2015-10-28, 2 files changed, -17/+25)

    During fix-layout heal, files are scanned. Files found exist on either the hot or the cold subvolume; those not found on the cold tier must exist on the hot tier, so they should not be flagged as errors.

    Replace INFO with TRACE for common tier migration logs. Frequent migration was growing the log files too quickly.

    On migration failures, do not accrue files towards the cycle limit's budget.

    Change-Id: Ie832ee07c43bce5477ae81c939d1fe8416a11615
    BUG: 1275383
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12430
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Joseph Fernandes

* cluster/ec: update version and size on good bricks  (Ashish Pandey, 2015-10-28, 1 file changed, -10/+2)

    Problem: readdir/readdirp fops call [f]xattrop with fop->good, which contains only one brick for these operations. That causes the xattrop to fail, as it requires at least the "minimum" number of bricks.

    Solution: Use lock->good_mask to call xattrop. lock->good_mask contains all the good locked bricks on which the previous write operation was successful.

    Change-Id: If1b500391aa6fca6bd863702e030957b694ab499
    BUG: 1274629
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-on: http://review.gluster.org/12419
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    Tested-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* glusterd: call glusterd_store_volinfo in bump up op-version  (Atin Mukherjee, 2015-10-27, 1 file changed, -0/+7)

    After an upgrade, the op-version is expected to be updated through gluster volume set. If the new version introduces a feature that changes the volinfo structure, not storing the default values of the new options would result in cksum issues.

    Change-Id: I57b4667f3403839811735bf66bef29e5200a9241
    BUG: 1262805
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/12171
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>

* features/changelog: record mknod if tier-dht linkto is set  (Saravanakumar Arumugam, 2015-10-27, 1 file changed, -2/+18)

    This is part of a series of patches which aims to fix geo-replication in a tiering volume.

    Problem:
    Consider a file placed in the volume initially, after which a hot tier is attached. During any operation on the file, a lookup creates a linkto file in the hot tier. Now, any namespace operation carried out on the file is recorded in both the cold and the hot tier. There is room for races when both changelogs are replayed.

    Solution:
    We are going to replay (namespace-related) operations only in the hot tier. Why?
    a. If the file is placed directly in the hot tier, all fops will be recorded in the HOT tier.
    b. If the file is already present in the cold tier and any fop is carried out, a linkto file is created in the hot tier. Operations like UNLINK and RENAME are then captured in the hot tier (by means of the linkto file).

    This way, we can get both tiers' operations in the HOT tier itself. But we may miss the initial data sync immediately after creating the file, as only the MKNOD is recorded. So, if a MKNOD is encountered with the sticky bit set, queue a DATA operation for the corresponding gfid. (The geo-rep related changes are addressed in this patch: http://review.gluster.org/12326/ )

    So, if the tier-dht linkto is set, we need to record the corresponding MKNOD. Earlier this was avoided as it was set as an INTERNAL fop. (This is addressed here in this patch.)

    Change-Id: I25514fe3e25f68592a8d6361507f8c8a4fcb70b1
    BUG: 1266875
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/12417
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>

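The sticky-bit rule above can be pictured as follows; record_entry_op() and queue_data_op() are hypothetical stand-ins, not functions from the changelog or geo-rep code.

    #include <sys/stat.h>

    /* A MKNOD carrying the sticky bit looks like a dht/tier linkto file, so
     * besides recording the MKNOD we also queue a DATA operation for the
     * gfid to avoid missing the first data sync. */
    static void
    handle_mknod (mode_t mode, const char *gfid,
                  void (*record_entry_op) (const char *),
                  void (*queue_data_op) (const char *))
    {
            record_entry_op (gfid);

            if (S_ISREG (mode) && (mode & S_ISVTX))
                    queue_data_op (gfid);
    }
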
* features/changelog: ignore recording tiering rebalance fops  (Saravanakumar Arumugam, 2015-10-27, 1 file changed, -6/+7)

    Recording the tiering rebalance process's fops, such as creation and deletion of files, must be avoided. Ignore these fops using the corresponding pid.

    Change-Id: Ifdc7765598d04d033f93e6339e9b188f7566cb65
    BUG: 1266875
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/12239
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>

* tests: Separate logs for each test  (Raghavendra Talur, 2015-10-27, 1 file changed, -0/+14)

    Change-Id: Ib286e3d4d7c432dab8073fce582ccbf723eb31d2
    BUG: 1251592
    Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-on: http://review.gluster.org/12110
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* tier: Typo while setting the wrong value of low/hi watermark  (hari gowtham, 2015-10-27, 1 file changed, -2/+3)

    While setting a wrong value for watermark-hi/low, the output shows "compatiblevalue" whereas it should be "compatible value".

    Change-Id: I29c8f9a954928d22e436465f4ebc30bd08640138
    BUG: 1275502
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12432
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>

* tests: fix timeout in mount-nfs-auth.t  (Jeff Darcy, 2015-10-27, 1 file changed, -11/+16)

    The mount timeout was too short. The normal configuration-change path (construct graph, call reconfigure) and the auth-refresh path might in effect run serially. Therefore we have to wait for the *sum* of those two intervals. As with all too-short-timeout problems, the result was that the test would run fine most of the time. However, it has caused spurious failures on my own patches a half dozen times, and I have a half dozen other emails about it nuking other people's as well (most often but not always on NetBSD).

    The fix, obviously, is to calculate and use the right timeout value for NFS mount actions. Other actions and timeouts have been left alone.

    Change-Id: Ic8f013c8c830e33c48bcc6d1b603d6d22a8ba3c5
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/12396
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

* features/shard: Support geo-rep for sharded volume  (Kotresh HR, 2015-10-26, 2 files changed, -34/+74)

    Approach: The shard xlator on the slave side is bypassed for all fops coming through the geo-rep mount. So each shard on the master is considered a separate file by geo-rep, and the shards are synced separately onto the slave. The extended attribute in which shard maintains the size is also synced from the master; shard on the slave doesn't calculate it by itself.

    Pre-requisites:
    1. If the master is a sharded volume, the slave should be sharded as well.
    2. The slave's shard configuration should be the same as the master's.
    3. The geo-rep config for xattr sync should not be disabled.

    All other dependent patches:
    1. http://review.gluster.org/#/c/12205/
    2. http://review.gluster.org/#/c/12206/
    3. http://review.gluster.org/#/c/12225/
    4. http://review.gluster.org/#/c/12226/

    Change-Id: I474220d69fa030b1e06a4fa0868c34fabe02efcf
    BUG: 1265148
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/12228
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* geo-rep: Avoid cold tier bricks during ENTRY operation  (Saravanakumar Arumugam, 2015-10-26, 4 files changed, -5/+30)

    This is part of a series of patches which aims to fix geo-replication in a tiering volume.

    Problem:
    Consider a file placed in the volume initially, after which a hot tier is attached. During any operation on the file, a lookup creates a linkto file in the hot tier. Now, any namespace operation carried out on the file is recorded in both the cold and the hot tier. There is room for races when both changelogs are replayed.

    Solution:
    We are going to replay (namespace-related) operations only in the hot tier. Why?
    a. If the file is placed directly in the hot tier, all fops will be recorded in the HOT tier.
    b. If the file is already present in the cold tier and any fop is carried out, a linkto file is created in the hot tier. Operations like UNLINK and RENAME are then captured in the hot tier (by means of the linkto file).

    This way, we can get both tiers' operations in the HOT tier itself. Now, once the file is demoted to the COLD tier, any namespace operation carried out on the cold tier can be avoided, as we directly RECORD the same in the HOT tier.

    How?
    1. Check whether the brick is a cold tier brick and skip the ENTRY operation.
    2. Also, if it is a cold tier brick, use Xsync (which is used during the initial run). This helps in getting all cold tier brick changes using a file system crawl and avoids races with the hot tier brick (which can happen if the history changelog were used on the cold tier brick).

    Dependent patches:
    1. http://review.gluster.org/12239
    2. http://review.gluster.org/12326

    Change-Id: I7692b1dbb8813a7e253451bca02f8f09a5782dde
    BUG: 1266875
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/12355
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>

* mount/fuse: use a queue instead of a pipe to communicate with the thread doing inode/entry invalidations  (Raghavendra G, 2015-10-26, 3 files changed, -63/+84)

    Writing to a pipe can block if the pipe is full, which can lead to deadlocks in some situations. Consider the following situation:

    1. The kernel sends a write on an inode. The client is waiting for a response to the write from the brick.
    2. A lookup happens on behalf of a different application/thread on the same inode. In response, mdc tries to invalidate the inode.
    3. fuse_invalidate_inode is called. It writes an invalidation request to the pipe. Another thread, which reads from this pipe, writes the request to /dev/fuse. The invalidate code in the fuse kernel module tries to acquire a lock on all pages for the inode and is blocked, as a write is in progress on the same inode (step 1).
    4. Now the poller thread is blocked in the invalidate notification and cannot receive any messages from the same socket (on which the lookup response came). But the client is expecting a response for the write from the same socket (again step 1), and we have a deadlock.

    The deadlock can be solved in two ways:
    1. Use a queue (and a condition variable for notifications) to pass invalidation requests from the poller to the invalidate thread. This is a variant of using a non-blocking pipe, but doesn't have any limit on the amount of data (worst case we run out of memory and error out).
    2. Allow events from sockets immediately after we read one rpc-msg. Currently we disallow events until that rpc-msg is read from the socket, processed and handled by higher layers. That way we won't run into these kinds of issues. It would also increase parallelism in the way of reading from sockets.

    This patch implements solution 1 above.

    Change-Id: I8e8199fd7f4da9eab46a719d9292f35c039967e1
    BUG: 1273387
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/12402

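Solution 1 above is essentially a producer/consumer queue protected by a mutex and condition variable. A minimal sketch, with illustrative names rather than the actual fuse-bridge code:

    #include <pthread.h>
    #include <stdlib.h>

    struct inval_req {
            struct inval_req *next;
            /* payload: inode/entry invalidation details would go here */
    };

    static struct inval_req *head, *tail;
    static pthread_mutex_t   lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t    cond = PTHREAD_COND_INITIALIZER;

    /* Producer (poller thread): never blocks on a full pipe, just queues. */
    void
    inval_enqueue (struct inval_req *req)
    {
            req->next = NULL;
            pthread_mutex_lock (&lock);
            if (tail)
                    tail->next = req;
            else
                    head = req;
            tail = req;
            pthread_cond_signal (&cond);
            pthread_mutex_unlock (&lock);
    }

    /* Consumer (invalidate thread): waits for work, then writes to /dev/fuse. */
    struct inval_req *
    inval_dequeue (void)
    {
            struct inval_req *req;

            pthread_mutex_lock (&lock);
            while (!head)
                    pthread_cond_wait (&cond, &lock);
            req = head;
            head = req->next;
            if (!head)
                    tail = NULL;
            pthread_mutex_unlock (&lock);
            return req;
    }
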
* geo-rep: Add data operation if mknod with tier attribute  (Saravanakumar Arumugam, 2015-10-26, 1 file changed, -0/+9)

    This is part of a series of patches which aims to fix geo-replication in a tiering volume.

    Problem:
    Consider a file placed in the volume initially, after which a hot tier is attached. During any operation on the file, a lookup creates a linkto file in the hot tier. Now, any namespace operation carried out on the file is recorded in both the cold and the hot tier. There is room for races when both changelogs are replayed.

    Solution:
    We are going to replay (namespace-related) operations only in the hot tier. Why?
    a. If the file is placed directly in the hot tier, all fops will be recorded in the HOT tier.
    b. If the file is already present in the cold tier and any fop is carried out, a linkto file is created in the hot tier. Operations like UNLINK and RENAME are then captured in the hot tier (by means of the linkto file).

    This way, we can get both tiers' operations in the HOT tier itself. But we may miss the initial data sync immediately after creating the file, as only the MKNOD is recorded. So, if a MKNOD is encountered with the sticky bit set, queue a DATA operation for the corresponding gfid. (This is addressed here in this patch.)

    So, if the tier-gfid linkto is set, we need to record the corresponding MKNOD. Earlier this was avoided as it was set as an INTERNAL fop. (The changelog-related changes are addressed in this patch: http://review.gluster.org/12417)

    Change-Id: I2fa84cfa2b0f86506c3d15d484138ab9651e4f83
    BUG: 1266875
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/12326
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>

* afr: wind writes only on subvols where preop succeeded  (Ravishankar N, 2015-10-26, 1 file changed, -2/+4)

    1. Call local->transaction.wind() only on subvols where the pre-op succeeded.
    2. Update op_errno in the afr_changelog_cbk call path.

    This fixes a bug in commit 7945121dda340ec8f25711b2ad3ca70b544de967 where we return EUCLEAN to the application if the pre-op fails on all bricks.

    Change-Id: Iab8776e49a992e7a255314bba542742f7607f3ec
    BUG: 1272362
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/12415
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

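A sketch of change 1 above, using hypothetical arrays in place of the real afr transaction fields:

    /* Wind the write only on subvolumes where the pre-op succeeded, i.e.
     * pre_op[i] is set and the subvolume is not marked failed. */
    static void
    wind_writes (int child_count, const unsigned char *pre_op,
                 const unsigned char *failed_subvols,
                 void (*wind) (int child))
    {
            int i;

            for (i = 0; i < child_count; i++) {
                    if (pre_op[i] && !failed_subvols[i])
                            wind (i);
            }
    }
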
* tier/ctr: Correcting the internal fop calculation  (Joseph Fernandes, 2015-10-25, 2 files changed, -17/+28)

    Correct the internal fop calculation method, as its logic was wrong.

    Change-Id: I1d0b40a1e27548147203ddd503794059652ac049
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Reviewed-on: http://review.gluster.org/12418
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>

* libglusterfs: replace default functions with generated versions  (Jeff Darcy, 2015-10-22, 12 files changed, -3428/+1853)

    Replacing repetitive code like this with code generated from a more compact "canonical" definition carries several advantages.

    * Ease the process of adding new fops (e.g. GF_FOP_IPC).
    * Ease the process of making global changes to existing fops (e.g. adding "xdata").
    * Ensure strict consistency between all of the pieces that must be compatible with each other, through both kinds of changes.

    What we have right now is just a start. The above benefits will only truly be realized when we use the same definitions to generate stubs, syncops, and perhaps even parts of gfapi or glupy. This same infrastructure can also be used to reduce code duplication and potential for error in many of our translators. NSR already uses a similar technique, using a few hundred lines of templates to generate a few *thousand* lines of code. The ability to make a global "aspect" change (e.g. to quorum checking) in one place instead of seventy has already been demonstrated there. Other candidates for code generation include the AFR/EC transaction infrastructure, or stub creation/resumption in io-threads.

    Change-Id: If7d59de7a088848b557f5aea00741b4fe19017c1
    BUG: 1271325
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/9411
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* tests/tier: Move common functions to tier.rc  (N Balachandran, 2015-10-21, 6 files changed, -170/+95)

    Move common functions in tier .t files to tier.rc.

    Change-Id: Ibc312d987be9d93e7cc7fc47d0bf598bb1c944c2
    BUG: 1272319
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/12404
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>

* cluster/tier: add pause tier for snapshots  (Dan Lambright, 2015-10-21, 11 files changed, -12/+518)

    Snapshots of tiered volumes cannot handle files undergoing migration. We implement a helper mechanism to "pause" migration: any files undergoing migration are aborted, and clean-up is done to remove sticky bits and data at the destination. Migration is restarted after the snapshot completes.

    For testing, an internal switch is added. It is not exposed externally:

    gluster volume set vol1 tier-pause [true|false]

    Change-Id: Ia85bbf89ac142e9b7e73fcbef98bb9da86097799
    BUG: 1267950
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12304
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* Tier/cli: removing warning message for tiering  (hari gowtham, 2015-10-21, 1 file changed, -10/+0)

    The warning message about tiering being under experimental status is removed.

    Change-Id: I7d1d535d380b672c70f03ecc0d24a113600ea43f
    BUG: 1273726
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12407
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>

* cluster/dht: op_ret not set correctly in dht_fsync_cbk  (N Balachandran, 2015-10-21, 1 file changed, -1/+1)

    local->op_ret was not set correctly in dht_fsync_cbk in the case of files being migrated.

    Change-Id: If73ae04368ea0c7f6868c8704dfc2deb2faee753
    BUG: 1273372
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/12401
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* cluster/tier: do not abort migration if a single brick is down  (Dan Lambright, 2015-10-20, 1 file changed, -12/+12)

    When a brick is down, promotion/demotion should still be possible. For example, if an EC brick is down, the other bricks are able to recover the data and migrate it.

    Change-Id: I8e650c640bce22a3ad23d75c363fbb9fd027d705
    BUG: 1273215
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12397
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Joseph Fernandes

* tests: return success if the last test ends up with core and a bad test  (Atin Mukherjee, 2015-10-20, 1 file changed, -1/+2)

    Change-Id: Ie2695ebff8678851edb6b0b6e1de37e1f5ec9077
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/12328
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* snapshot: Fix snapshot clone postvalidate  (Avra Sengupta, 2015-10-20, 4 files changed, -52/+62)

    In glusterd_snapshot_clone_postvalidate(), we were deleting the snap object and snap vol by looking up snapname. Hence, it was deleting the original snapshot from which the clone was being created. Instead, it should fetch the clonename, the respective clone vol, and its corresponding snap object, and delete them.

    Also, glusterd_snap_remove() needs to differentiate a clone snap object from a snapshot snap object, as in the case of a clone snap object we don't have any persisted data in /var/run/gluster/snaps/, and hence it shouldn't try to delete anything there.

    Change-Id: I02bb22a3898d5720e318a02d6cc32d25f75d317d
    BUG: 1272339
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/12364
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>

* afr: do not wind write if pre-op fails on all children  (Ravishankar N, 2015-10-20, 3 files changed, -13/+7)

    1. When winding the pre-op, transaction.pre_op[i] is set. If the pre-op fails, transaction.failed_subvols[i] is set. If it fails on all children, we can proceed directly to unlock (via afr_changelog_post_op_now) without trying to wind the write, failing and then going to unlock.
    2. 'fop_subvols' seems to be an unused variable, hence removing it.

    Change-Id: I9525628daf48082e979b0093fa0478934495e61f
    BUG: 1272362
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/12368
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Anuradha Talur <atalur@redhat.com>

* features/snap: cleanup the root loc in statfs  (Ashish Pandey, 2015-10-20, 2 files changed, -1/+26)

    Problem: In the svc_statfs function, wipe_loc is getting called on the loc passed by nfs. This loc is used by svc_stat, which throws an error if loc->inode is NULL.

    Solution: wipe_loc should be called on the local root_loc.

    Change-Id: I9cc5ee3b1bd9f352f2362a6d997b7b09051c0f68
    BUG: 1260848
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-on: http://review.gluster.org/12123
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* cluster/tier: remove spurious log messages on valid failed migration  (Dan Lambright, 2015-10-19, 1 file changed, -1/+8)

    On a write to a replica volume, we record an entry in all bricks' databases. When the tier daemon runs, it will only move the file if it is the true owner of the file as defined by the XATTR_NODE_UUID_KEY.

    Change-Id: Ib82717f87a3f94f3d0d9f969773de9e88d6aaf22
    BUG: 1273043
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12391
    Reviewed-by: Joseph Fernandes
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* cluster/tier: update man pages for tier feature  (Dan Lambright, 2015-10-19, 1 file changed, -0/+19)

    Add instructions for the tier commands to the gluster man pages.

    Change-Id: I0918460eeaba22bb6a11238d4f5501fa8e61da88
    BUG: 1272557
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12380
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: N Balachandran <nbalacha@redhat.com>

* cluster/tier: Changed tier xattr-name value  (N Balachandran, 2015-10-19, 2 files changed, -13/+15)

    Each tier layer (for future stacking implementations) must have a unique xattr name. We are currently using the name of the tier subvolume, excluding the volume name.

    Change-Id: Id4adea61dc1c8473fb1d4d7364d1940278c6e129
    BUG: 1259298
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/12350
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>

* cluster/ec: Remove index entries if file/dir does not exist  (Ashish Pandey, 2015-10-15, 1 file changed, -33/+45)

    Problem: During write and rebalance, if a brick is down, index entries will be created. If the same file gets migrated to another subvol by the rebalance process, these index entries will remain in the index directory. During heal, these indices should be removed when we get ENOENT or ESTALE for an index.

    Solution: Capture the correct errno and take appropriate action to purge these indices.

    Change-Id: I1aad8b99e4df2e139648e3bf971e4cb1c4b38699
    Bug: 1271358
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-on: http://review.gluster.org/12353
    Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* protocol/server: define the max number of inodes in lru list as a number  (Raghavendra Bhat, 2015-10-15, 1 file changed, -3/+3)

    The max number of inodes in the lru list of the inode table was being defined in terms of memory units (GF_UNIT_MB) instead of as a count, and the description of the option also referred to it in memory units instead of a count.

    Change-Id: I48f07e7d2826406697eb2a13714ab22feae81d89
    BUG: 1266883
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/12242
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* cluster/dht: Do not migrate files with POSIX locks held  (N Balachandran, 2015-10-15, 3 files changed, -10/+282)

    dht_migrate_file does not migrate file locks to the dst file; any locks held on the source file are lost once the migration is complete. This issue is magnified in the case of a tier volume, as file migrations occur more frequently and repeatedly compared to a DHT rebalance.

    The fix makes 2 changes:
    1. Before starting the actual migration process, check if there are any locks held on the file. If yes, do not migrate the file.
    2. The rebalance process tries to lock the entire file just before moving into Phase 2 of the file migration. If the lock acquisition fails, the file migration does not proceed. If the lock is granted, the file migration proceeds.

    This still leaves a small window where conflicting locks can be granted to different clients. If client1 requests a lock on the src file just after it is converted to a linkto file, and client2 requests a lock on the dst data file, they will both be granted, but all FOPs will be redirected to the dst data file. This issue will be taken up in a subsequent patch.

    Change-Id: I8c895fc3cced50dd2894259d40a827c7b43d58ac
    BUG: 1271148
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/12347
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>

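The first check above can be pictured with plain POSIX fcntl(2) locks. This is a sketch only; rebalance queries locks through GlusterFS's own lk/lock machinery rather than fcntl directly.

    #include <fcntl.h>
    #include <unistd.h>

    /* Return 1 if another process holds a POSIX lock anywhere on the file,
     * 0 if not, -1 on error. A file with active locks would be skipped by
     * the migration pass described above. */
    static int
    file_has_posix_locks (int fd)
    {
            struct flock lk = {
                    .l_type   = F_WRLCK,   /* probe for any conflicting lock */
                    .l_whence = SEEK_SET,
                    .l_start  = 0,
                    .l_len    = 0,         /* 0 means "to end of file" */
            };

            if (fcntl (fd, F_GETLK, &lk) == -1)
                    return -1;

            return (lk.l_type != F_UNLCK);
    }
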
* libglusterfs: pass buffer size to gf_store_read_and_tokenize function  (Gaurav Kumar Garg, 2015-10-14, 3 files changed, -4/+6)

    Previously, if a user set an option whose key=value length exceeded PATH_MAX (4096) characters, tokenizing the option while reading the configuration file would fail, because fgets was restricted to reading at most PATH_MAX (4096) characters. As a consequence, after setting a key=value longer than PATH_MAX (4096) characters, glusterd would fail to restart.

    With this fix, the consumer of the gf_store_read_and_tokenize function decides the size of the buffer instead of it being fixed at PATH_MAX.

    Change-Id: I655a8ce982effdfff8f3e785ea31f543dbe39301
    BUG: 1271150
    Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Reviewed-on: http://review.gluster.org/12346
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>

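The shape of the change can be sketched like this; a hypothetical signature, since the real gf_store_read_and_tokenize in libglusterfs differs in detail. The point is that the caller now passes the buffer and its size, so the line length is no longer capped at PATH_MAX.

    #include <stdio.h>
    #include <string.h>

    /* Read one "key=value" line into a caller-supplied buffer and split it.
     * Because the caller chooses buflen, lines longer than PATH_MAX no
     * longer break parsing. Returns 0 on success, -1 on EOF/error. */
    static int
    read_and_tokenize (FILE *fp, char *buf, size_t buflen,
                       char **key, char **value)
    {
            char *save = NULL;

            if (!fgets (buf, (int) buflen, fp))
                    return -1;

            buf[strcspn (buf, "\n")] = '\0';

            *key   = strtok_r (buf, "=", &save);
            *value = strtok_r (NULL, "", &save);

            return (*key && *value) ? 0 : -1;
    }
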
* cli/quota: rm -rf on /<mountpoint>/<dir> is not showing quota header  (Manikandan Selvaganesh, 2015-10-14, 1 file changed, -4/+23)

    Currently, when the 'gluster v quota <VOLNAME> list' command is issued after an rm -rf on /run/gluster/vol/<directory>, the quota output header is not shown. This is because the list_count was calculated properly with 'gluster v quota <VOLNAME> remove /path' but not with an rm -rf. The patch fixes this issue.

    Change-Id: I5266a8b0b9322b7db1b9e1d6b0327065931f4bcb
    BUG: 1269375
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/12345
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* cli: freeing the allocated memory  (Manikandan Selvaganesh, 2015-10-13, 1 file changed, -0/+1)

    Change-Id: Ibcbad94c091a9c24fe5aff2d7e8bcd9ac88da7bf
    BUG: 1248521
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/12337
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* glusterd: disabling enable-shared-storage option should not delete volume  (Gaurav Kumar Garg, 2015-10-13, 3 files changed, -6/+68)

    Previously, if a user created a volume named "glusterd_shared_storage" and then disabled the enable-shared-storage option, gluster would delete that volume. With this fix, gluster performs appropriate validation of the enable-shared-storage option and will not delete a volume named "glusterd_shared_storage" if it is a user-created volume.

    Change-Id: I2bd92f938fb3de6ef496a934933bdcea9f251491
    BUG: 1266818
    Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Reviewed-on: http://review.gluster.org/12232
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* tier/shd: inline warning when compiled with gcc v.5  (Mohammed Rafi KC, 2015-10-13, 1 file changed, -1/+1)

    Change-Id: I487a26263d6e940eed364a831e99f9b8390bc96a
    BUG: 1226881
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/12342
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Anoop C S <anoopcs@redhat.com>
    Tested-by: Anoop C S <anoopcs@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>

* features/shard: Return ENOTSUP as opposed to ENOTCONN in unimplemented fops  (Krutika Dhananjay, 2015-10-13, 2 files changed, -4/+17)

    Change-Id: Idba1070b11c5c1de26ef57e6843c93c105b8b8a5
    BUG: 1270694
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/12340
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* features/shard: Dump private members and addresses in statedump  (Krutika Dhananjay, 2015-10-13, 1 file changed, -1/+15)

    Change-Id: I3c5e5bd93288c4c9a2665a26c0d6a76e67ecf914
    BUG: 1270694
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/12334
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>

* protocol/client: give preference to loc->gfid over inode->gfid  (Ravishankar N, 2015-10-13, 1 file changed, -6/+6)

    There are xlators which perform fops even before the inode gets linked. Because of this, loc.gfid is preferred at the time of inodelk/entrylk, but by the time the unlock happens, the inode could be linked with a different gfid than the one in loc.gfid (because of the way dht was giving preference). Due to this, the unlock goes to a different inode than the one we sent the inodelk on, which leads to a hang.

    Credits to Pranith for the fix.

    Change-Id: I7d162d44852ba876f35aa1bb83e4afdb184d85b9
    BUG: 1266834
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/12233
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

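A sketch of the preference rule, with plain 16-byte arrays standing in for GlusterFS's uuid type; the helper name is illustrative, not the actual client code:

    #include <string.h>

    /* If the caller filled loc->gfid, use it for both the lock and the
     * unlock, so they always land on the same inode even if the inode is
     * later linked with a different gfid. */
    static const unsigned char *
    gfid_for_lock (const unsigned char loc_gfid[16],
                   const unsigned char inode_gfid[16])
    {
            static const unsigned char null_gfid[16];

            if (memcmp (loc_gfid, null_gfid, 16) != 0)
                    return loc_gfid;

            return inode_gfid;
    }
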
* tier/shd: make shd commands compatible with tiering  (Mohammed Rafi KC, 2015-10-12, 7 files changed, -126/+428)

    Tiering volfiles may contain afr and disperse together, or multiple times, depending on the configuration, and the information for those configurations is stored in tier_info. So most of the volgen code generation needs to be changed to be compatible with it.

    Change-Id: I563d1ca6f281f59090ebd470b7fda1cc4b1b7e1d
    BUG: 1261276
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/12135
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* quota/marker: dir_count accounting is not atomic  (vmallika, 2015-10-12, 2 files changed, -72/+148)

    Consider the below scenario: quota is enabled on pre-existing data, and the quota-crawl process starts healing xattrs. If a write is performed where healing is not yet complete, there is a possibility that the 'update txn' is started before the 'create xattr txn'; in this case the dir count can be missed on a directory where the quota size xattr is not yet created.

    One solution is to get the size xattr and, if the xattr is missing, add 1 for dir_count; this requires one additional fop if done in marker during each update iteration.

    A better solution is to use the xattrop GF_XATTROP_ADD_ARRAY64_WITH_DEFAULT.

    Change-Id: Idc8978860a3914e70c98f96effeff52e9a24e6ba
    BUG: 1243798
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/11694
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* tier/shd: create shd volfile for tiering  (Mohammed Rafi KC, 2015-10-11, 3 files changed, -20/+262)

    Currently the shd graph is started only for replicate or disperse volumes, but in the case of tiering the volume type is tier. So we need to start shd if either the cold or the hot tier is compatible with an shd volume.

    Change-Id: Ic689746ac7d2fc6a9eccdabd8518dc9139829de2
    BUG: 1261276
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/11962
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* tier/ctr: CTR DB named lookup heal of cold tier during attach tier  (Joseph Fernandes, 2015-10-10, 5 files changed, -6/+257)

    Heal hardlinks in the db for already existing data in the cold tier during attach tier, i.e. during fix-layout, do lookups on files in the cold tier. The CTR xlator on the brick/server side does a db update/insert of the hardlink on a named lookup.

    Currently the named lookup is done synchronously with the fix-layout that is triggered by attach tier. This is not performant, adding more time to fix-layout. The performant approach is to record the hardlinks in a compressed datastore and then do the named lookup asynchronously later, giving the ctr db eventual consistency.

    Change-Id: I4ffc337fffe7d447804786851a9183a51b5044a9
    BUG: 1252586
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Reviewed-on: http://review.gluster.org/11828
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>

* cluster/tier: add watermarks and policy driver  (Dan Lambright, 2015-10-10, 7 files changed, -113/+589)

    This fix introduces infrastructure to support different policies for promotion and demotion.

    Currently the tier feature automatically promotes and demotes files periodically based on access. This is good for testing but too stringent for most real workloads. It makes it difficult to fully utilize a hot tier: data will be demoted before it is touched, and it is unlikely a 100GB hot SSD will have all its data touched in a window of time.

    A new parameter "mode" allows the user to pick promotion/demotion policies.

    The "test mode" will be used for *.t and other general testing. This is the current mechanism.

    The "cache mode" introduces watermarks. The watermarks represent levels of data residing on the hot tier.

    "cache mode" policy: the percentage of the hot tier that is full is called P. Do not promote or demote more than D MB or F files. A random number in [0-100] is called R.

    Rules for migration:
    if (P < watermark_low) don't demote, always promote.
    if (P >= watermark_low) && (P < watermark_hi) demote if R < P; promote if R > P.
    if (P > watermark_hi) always demote, don't promote.

    gluster volume set {vol} cluster.watermark-hi %
    gluster volume set {vol} cluster.watermark-low %
    gluster volume set {vol} cluster.tier-max-mb {D}
    gluster volume set {vol} cluster.tier-max-files {F}
    gluster volume set {vol} cluster.tier-mode {test|cache}

    Change-Id: I157f19667ec95aa1d53406041c1e3b073be127c2
    BUG: 1257911
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12039
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

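The cache-mode rules above can be written out directly. This is a sketch of the stated policy only, not the tier daemon's actual code; the tier-max-mb/tier-max-files caps are applied separately.

    enum tier_action { MIGRATE, SKIP };

    /* p = percentage of the hot tier in use (0-100)
     * r = random number in [0, 100]
     * wm_low / wm_hi = cluster.watermark-low / cluster.watermark-hi */
    static enum tier_action
    should_promote (int p, int r, int wm_low, int wm_hi)
    {
            if (p < wm_low)
                    return MIGRATE;              /* always promote */
            if (p < wm_hi)
                    return (r > p) ? MIGRATE : SKIP;
            return SKIP;                         /* hot tier full: never promote */
    }

    static enum tier_action
    should_demote (int p, int r, int wm_low, int wm_hi)
    {
            if (p < wm_low)
                    return SKIP;                 /* plenty of room: don't demote */
            if (p < wm_hi)
                    return (r < p) ? MIGRATE : SKIP;
            return MIGRATE;                      /* above high watermark: always demote */
    }
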
* Porting developer guide to source code repo from glusterdocs project  (Humble Devassy Chirammal, 2015-10-10, 30 files changed, -12/+3013)

    Change-Id: Ib8d9c668ebb05863918e6ec2b89908f206626f38
    BUG: 1206539
    Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com>
    Reviewed-on: http://review.gluster.org/12227
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Tested-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Tested-by: Raghavendra Talur <rtalur@redhat.com>

* cluster/tier: fix transport endpoint not connected in tier.t (rare)  (Dan Lambright, 2015-10-09, 1 file changed, -0/+4)

    The script did not cleanly unmount/mount gluster and change the current working directory when stopping and starting the volume. Most of the time this problem would self-resolve before subsequent tests, but very occasionally races would lead to the errors/failures.

    Change-Id: I128b913a71e2745512ee81c3d71852311e3b4a1b
    BUG: 1270328
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12327
    Reviewed-by: Joseph Fernandes
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterfsd: Initialize ctx, cmd_args  (Pranith Kumar K, 2015-10-09, 1 file changed, -3/+3)

    Change-Id: I9c71ae264665b7bba609c7f86cf42a52a6b47260
    BUG: 1269696
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/12311
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>