path: root/xlators
Commit message (Author, Age, Files, Lines)
* nfs: improve error handling for WebNFS mount permissions (Niels de Vos, 2017-07-21, 1 file, -1/+2)

  In case nfs3_funge_webnfs_zerolen_fh() returns an error, the
  nfs3_call_state_t structure will not get initialized. This means that
  calling `nfs3_call_state_wipe (cs)` will result in a segmentation
  fault, after commit daed52b8eb made nfs3_call_state_t refcounted.

  Change-Id: I4c300aedf132a7fea95756dd278ff87d67722478
  BUG: 1468291
  Fixes: e3f48fa2 ("nfs: add permission checking for mounting over WebNFS")
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17822
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Reviewed-by: soumya k <skoduri@redhat.com>

* features/libgfchangelog: Fix encoding to encode only space and newline (Aravinda VK, 2017-07-21, 4 files, -11/+11)

  libgfchangelog was encoding paths as per RFC 3986, but encoding is
  only required for the SPACE and NEWLINE characters, since NEWLINE is
  used as the record separator and SPACE as the field separator in the
  parsed changelog output. Changed the encoding function to encode only
  SPACE and NEWLINE.

  BUG: 1451724
  Change-Id: I4305459aab9e710517dd3eb065f0024503064b77
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: https://review.gluster.org/17674
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>

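  The encoding change is easy to picture with a standalone sketch:
  escape only the two separator characters instead of running a full
  RFC 3986 encoder. This is illustrative, not the libgfchangelog code;
  encode_path and its "output buffer is at least 3x input + 1" contract
  are assumptions.

      #include <stdio.h>

      /* Percent-encode only SPACE and NEWLINE, the field/record
       * separators of the parsed changelog output. */
      static void
      encode_path (const char *in, char *out)
      {
              for (; *in; in++) {
                      if (*in == ' ' || *in == '\n')
                              out += sprintf (out, "%%%02X",
                                              (unsigned char)*in);
                      else
                              *out++ = *in;
              }
              *out = '\0';
      }

      int
      main (void)
      {
              char out[64];

              encode_path ("a b\nc", out);
              printf ("%s\n", out);   /* prints: a%20b%0Ac */
              return 0;
      }
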
* glusterd: fix brick start race (Atin Mukherjee, 2017-07-20, 1 file, -0/+19)

  Problem: In another race, when glusterd was restarted,
  glusterd_brick_start () was called multiple times due to friend
  handshaking. In one instance, when one of the bricks was attempted to
  be attached to the existing brick process, send_attach_req failed
  because the first brick itself was still not up. We then did a
  synclock_unlock () followed by a sleep of 1 sec; before the same
  thread woke up, another thread tried to start the same brick process
  and assumed that it had to start a fresh brick process.

  Solution:
  1. If the brick is in the starting phase
     (brickinfo->status == GF_BRICK_STARTING), there is no need to
     reattempt to start the brick.
  2. While initiating attach_req, set brickinfo->status to
     GF_BRICK_STARTING.

  Change-Id: Ib007b6199ec36fdab4214a1d37f99d7f65ef64da
  BUG: 1465559
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17840
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

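  A compact model of the race guard; illustrative only. The real code
  operates on glusterd_brickinfo_t inside glusterd, and the attach/retry
  path is elided here. The GF_BRICK_* states follow the commit message.

      typedef enum {
              GF_BRICK_STOPPED,
              GF_BRICK_STARTING,
              GF_BRICK_STARTED,
      } brick_status_t;

      typedef struct {
              brick_status_t status;
      } brickinfo_t;

      static int
      brick_start (brickinfo_t *brickinfo)
      {
              /* Fix 1: a brick already in the starting phase must not
               * be started again by a concurrent caller. */
              if (brickinfo->status == GF_BRICK_STARTING ||
                  brickinfo->status == GF_BRICK_STARTED)
                      return 0;

              /* Fix 2: mark the in-progress state *before* the attach
               * request, which may unlock and sleep for a second. */
              brickinfo->status = GF_BRICK_STARTING;
              /* ... send_attach_req / spawn brick process here ... */
              brickinfo->status = GF_BRICK_STARTED;
              return 0;
      }

      int
      main (void)
      {
              brickinfo_t b = { GF_BRICK_STOPPED };

              brick_start (&b);          /* starts the brick       */
              return brick_start (&b);   /* no-op: already started */
      }
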
* cluster/dht: Fixed crash in dht_rmdir_is_subvol_empty (N Balachandran, 2017-07-20, 1 file, -13/+34)

  local->call_cnt was being accessed and updated inside the loop in
  which the entries were processed and the calls were wound. This could
  end up in a scenario where local->call_cnt became 0 before the
  processing was complete, causing a crash when the next entry was
  processed.

  Change-Id: I930f61f1a1d1948f90d4e58e80b7d6680cf27f2f
  BUG: 1472949
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17825
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* glusterd: Set default value for cluster.max-bricks-per-process to 0 (Samikshan Bairagya, 2017-07-19, 3 files, -15/+31)

  When brick-multiplexing is enabled and "cluster.max-bricks-per-process"
  isn't explicitly set, multiplexing happens without any limit set. But
  the default value set for that tunable is 1, which is confusing. This
  commit sets the default value to 0, and prevents the user from being
  able to set this value to 1 when brick-multiplexing is enabled. The
  default value of 0 denotes that brick-multiplexing can happen without
  any limit on the number of bricks per process.

  Change-Id: I4647f7bf5837d520075dc5c19a6e75bc1bba258b
  BUG: 1472417
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17819
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* libglusterfs: Name threads on creation (Raghavendra Talur, 2017-07-19, 30 files, -77/+114)

  Set names on threads at creation for easier debugging.

  Output of top -H -p <PID-OF-GLUSTERFSD>

  Before:
      19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
      19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
      19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
      19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
       5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
       7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd

  After:
      19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustertimer
      19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
      19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustermemsweep
      19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc0
      19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc1
      19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll0
      19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteridxwrker
      19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteriotwr0
      19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrssign
      19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrswrker
      19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterclogecon
      19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd0
      19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd1
      19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd2
      19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixjan
      19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixfsy
      25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll1
       5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll2
       7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixhc

  Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
  BUG: 1254002
  Updates: #271
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: https://review.gluster.org/11926
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

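  On Linux, naming a thread boils down to pthread_setname_np(3), which
  limits names to 16 bytes including the terminating NUL. The patch
  threads a name argument through gluster's thread-creation paths; the
  minimal standalone form below is for illustration only.

      #define _GNU_SOURCE
      #include <pthread.h>
      #include <unistd.h>

      static void *
      worker (void *arg)
      {
              (void)arg;
              sleep (30);   /* long enough to observe in top -H */
              return NULL;
      }

      int
      main (void)
      {
              pthread_t t;

              pthread_create (&t, NULL, worker, NULL);
              /* name must fit in 15 chars + NUL */
              pthread_setname_np (t, "glusterepoll0");
              pthread_join (t, NULL);
              return 0;
      }
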
* common-utils: Remove fop_enum_to_string, get_fop_int (Pranith Kumar K, 2017-07-18, 2 files, -91/+2)

  The implementation of these two functions becomes easier by using the
  gf_fop_list[] array, so that was implemented and the usage of these
  functions removed.

  BUG: 1472250
  Change-Id: I8a592913f9eeb02d965708bcf28a637588ed4988
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17812
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>

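  The replacement pattern, sketched under the assumption that
  gf_fop_list[] is a string table indexed by the fop enum (which is how
  the commit message describes it). The enum values and strings below
  are abbreviated stand-ins, not the full glusterfs table.

      #include <stdio.h>

      typedef enum { GF_FOP_NULL, GF_FOP_STAT, GF_FOP_WRITE } glusterfs_fop_t;

      static const char *gf_fop_list[] = {
              [GF_FOP_NULL]  = "NULL",
              [GF_FOP_STAT]  = "STAT",
              [GF_FOP_WRITE] = "WRITE",
      };

      int
      main (void)
      {
              glusterfs_fop_t fop = GF_FOP_WRITE;

              /* was: printf ("%s\n", fop_enum_to_string (fop)); */
              printf ("%s\n", gf_fop_list[fop]);
              return 0;
      }
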
* cluster/afr: GFID split-brain resolution with existing CLI (karthik-us, 2017-07-18, 6 files, -135/+315)

  Problem: Currently there is no way for the admin to resolve gfid
  split-brain from the CLI based on some policy like choice of brick,
  mtime or size.

  Fix: With the existing CLI options based on size, mtime, and choice
  of brick, we do a lookup on the parent for the specified file. As
  part of the lookup, if we find a gfid mismatch, we resolve it based
  on the policy and return. If the file is not in gfid split-brain,
  then we check for data and metadata split-brain in the getxattr code
  path, and resolve if any. This works provided the absolute path to
  the file is given to the CLI, not the gfid of the file. Hence the
  source-brick policy without any file path will also not resolve gfid
  split-brain, since it uses the gfid of the files. But it can resolve
  any other type of split-brain and skip the gfid mismatch resolution
  with the usual error message.

  Reverting the change https://review.gluster.org/17290. This patch
  resolves the issue.

  Fixes gluster/glusterfs#135
  Change-Id: Iaeba6fc32f184a34255d03be87cda02773130a09
  BUG: 1459530
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
  Reviewed-on: https://review.gluster.org/17485
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>

* program/GF-DUMP: Shield ping processing from traffic to Glusterfs Program (Raghavendra G, 2017-07-18, 2 files, -1/+1)

  Since the poller thread bears the brunt of execution until a request
  is handed over to io-threads, the poller thread experiences lock
  contention(s) in the control flow up to io-threads, which slows it
  down. This delay invariably affects reading ping requests from the
  network and responding to them, resulting in increased ping
  latencies, which sometimes result in a ping-timer-expiry on the
  client, leading to a disconnect of the transport. So, this patch aims
  to free up the poller thread from executing code of the Glusterfs
  Program.

  We do this by making:

  * the Glusterfs Program register itself asking rpcsvc to execute its
    actors in its own threads.
  * the GF-DUMP Program register itself asking rpcsvc to _NOT_ execute
    its actors in its own threads. Otherwise the program's own threads
    would become a bottleneck in processing ping traffic. This means
    that the poller thread reads a ping packet, invokes its actor and
    hands the response msg over to the transport queue.

  Change-Id: I526268c10bdd5ef93f322a4f95385137550a6a49
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  BUG: 1421938
  Reviewed-on: https://review.gluster.org/17105
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* storage/posix: Don't allow gfid/volume-id xattr to be removed (Pranith Kumar K, 2017-07-18, 3 files, -140/+134)

  Problem: Bulk xattr removal doesn't check whether the xattrs coming
  in xdata include the gfid/volume-id xattrs, so there is potential for
  bulk removexattr to remove gfid/volume-id. I also observed that bulk
  removexattr is not available for fremovexattr.

  Fix: Do proper checks in bulk removexattr to avoid removing
  gfid/volume-id. Refactor [f]removexattr to reduce the differences.

  BUG: 1470489
  Change-Id: Ia845b31846a149500111c0996646e648f72cdce6
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17765
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>

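  A self-contained sketch of the kind of filter the fix describes. The
  real patch inspects the xdata dict inside posix, so the plain string
  array and helper name below are illustrative; the two protected xattr
  names, however, are the real on-disk keys.

      #include <stdio.h>
      #include <string.h>

      /* Keys that must never be removed from a brick's backend. */
      static const char *protected_keys[] = {
              "trusted.gfid",
              "trusted.glusterfs.volume-id",
      };

      static int
      is_protected_xattr (const char *key)
      {
              size_t i, n = sizeof (protected_keys) / sizeof (*protected_keys);

              for (i = 0; i < n; i++)
                      if (strcmp (key, protected_keys[i]) == 0)
                              return 1;
              return 0;
      }

      int
      main (void)
      {
              const char *requested[] = { "user.foo", "trusted.gfid",
                                          "user.bar" };
              size_t i;

              for (i = 0; i < sizeof (requested) / sizeof (*requested); i++) {
                      if (is_protected_xattr (requested[i])) {
                              fprintf (stderr, "refusing to remove %s\n",
                                       requested[i]);
                              continue;   /* bulk removal skips these */
                      }
                      printf ("removing %s\n", requested[i]);
              }
              return 0;
      }
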
* glusterd: Add description field to global options for brick-mux (Samikshan Bairagya, 2017-07-17, 1 file, -2/+12)

  Currently the "cluster.brick-multiplex" and
  "cluster.max-bricks-per-process" options do not show anything in the
  description field when gluster volume set help is called. This commit
  adds the description fields for these 2 options.

  Change-Id: I3d162c61fa2774dd994f046e305d457f0fd43192
  BUG: 1471790
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17790
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* quota: fix a crash by using bad regfile inode as parent (Kinglong Mee, 2017-07-17, 2 files, -2/+26)

      #0  0x00007f1482f1f1d7 in raise () from /lib64/libc.so.6
      #1  0x00007f1482f208c8 in abort () from /lib64/libc.so.6
      #2  0x00007f1482f18146 in __assert_fail_base () from /lib64/libc.so.6
      #3  0x00007f1482f181f2 in __assert_fail () from /lib64/libc.so.6
      #4  0x00007f148484986a in __inode_link (inode=inode@entry=0x7f14742404d4,
          parent=parent@entry=0x7f14742404d4,
          name=name@entry=0x7f1460001c48 "testfile5308",
          iatt=iatt@entry=0x7f1460001bc8) at inode.c:954
      #5  0x00007f1484849969 in inode_link (inode=0x7f14742404d4,
          parent=parent@entry=0x7f14742404d4,
          name=name@entry=0x7f1460001c48 "testfile5308",
          iatt=iatt@entry=0x7f1460001bc8) at inode.c:1060
      #6  0x00007f147591b895 in quota_build_ancestry_cbk (
          frame=frame@entry=0x7f1482315e80, cookie=<optimized out>,
          this=0x7f147000e910, op_ret=op_ret@entry=6904,
          op_errno=op_errno@entry=0, entries=entries@entry=0x7f1474731c00,
          xdata=xdata@entry=0x0) at quota.c:779
      #7  0x00007f1475b2f505 in marker_build_ancestry_cbk (frame=0x7f1482315988,
          cookie=<optimized out>, this=<optimized out>, op_ret=<optimized out>,
          op_errno=<optimized out>, entries=0x7f1474731c00, xdata=0x0)
          at marker.c:3055
      #8  0x00007f14848b9cd9 in default_readdirp_cbk (
          frame=frame@entry=0x7f1482315b30, cookie=<optimized out>,
          this=<optimized out>, op_ret=op_ret@entry=6904,
          op_errno=op_errno@entry=0, entries=entries@entry=0x7f1474731c00,
          xdata=xdata@entry=0x0) at defaults.c:1403
      #9  0x00007f1475f68132 in pl_readdirp_cbk (frame=0x7f1482315dac,
          cookie=<optimized out>, this=<optimized out>, op_ret=6904,
          op_errno=0, entries=0x7f1474731c00, xdata=0x0) at posix.c:2700
      #10 0x00007f1476e26819 in posix_readdirp (frame=0x7f1482315f54,
          this=<optimized out>, fd=<optimized out>, size=<optimized out>,
          off=<optimized out>, dict=<optimized out>) at posix.c:6282
      #11 0x00007f1475f6599a in pl_readdirp (frame=0x7f1482315dac,
          this=0x7f147000a200, fd=0x7f1484b5106c, size=0, offset=0,
          xdata=0x7f1481ab4f34) at posix.c:2711
      #12 0x00007f14848ce954 in default_readdirp_resume (frame=0x7f1482315b30,
          this=0x7f147000b690, fd=0x7f1484b5106c, size=0, off=0,
          xdata=0x7f1481ab4f34) at defaults.c:2019
      #13 0x00007f148485c92d in call_resume (stub=0x7f1481b65710)
          at call-stub.c:2508
      #14 0x00007f1475d54743 in iot_worker (data=0x7f147004e7d0)
          at io-threads.c:210
      #15 0x00007f148369cdc5 in start_thread () from /lib64/libpthread.so.0
      #16 0x00007f1482fe173d in clone () from /lib64/libc.so.6

  Change-Id: I740dc691e7be1bc2a9ae3a0cb14bbf566ea77bc5
  Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
  Reviewed-on: https://review.gluster.org/17730
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>

* fuse: memory leak fixes (Danny Couture, 2017-07-14, 1 file, -38/+30)

  Fix a fuse ctx memory leak in case an error occurs and the cleanup
  path is different than usual. Also fix a memory leak in logging if
  eh_save_history() fails.

  Change-Id: I7ec967c807b0ed91184e5b958be70702215c46c9
  BUG: 1470220
  Signed-off-by: Danny Couture <couture.danny@gmail.com>
  Reviewed-on: https://review.gluster.org/17759
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Tested-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* core: miscellaneous cleanup (Kaleb S. KEITHLEY, 2017-07-14, 4 files, -12/+11)

  Clean up things that I tripped over doing other changes:

  1) Fix the mishmash of random spacing in struct decls in
     glusterfs.h. Not technically a problem, just ugly to look at.

  2) Replace open-coded string constants with existing #define
     constants. A disaster waiting to happen.

  3) Use sys_access() instead of sys_stat() or sys_lstat() to test
     simple existence of a file. Why copy dozens of bytes from kernel
     to user space that aren't going to be used by anything? There are
     probably more instances like these.

  Change-Id: I28089bef4cc93d5e4e4213045fb1a2649d110f82
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/17769
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* cluster/ec: Non-disruptive upgrade on EC volume fails (Sunil Kumar Acharya, 2017-07-14, 1 file, -1/+4)

  Problem: Enabling optimistic changelog on an EC volume was not
  handling node-down scenarios appropriately, resulting in volume data
  inaccessibility.

  Solution: Update the dirty xattr appropriately on good bricks
  whenever nodes are down. This fixes the metadata information as part
  of heal and thus ensures data accessibility.

  BUG: 1468261
  Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  Reviewed-on: https://review.gluster.org/17703
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* afr: mark non sources as sinks in metadata heal (Ravishankar N, 2017-07-13, 2 files, -3/+5)

  Problem: In a 3-way replica, when the source brick does not have
  pending xattrs for the sinks, but the 2 sinks blame each other,
  metadata heal was not happening because we were not setting all
  non-sources as sinks.

  Fix: Mark all non-sources as sinks, like it is done in data and entry
  heal.

  Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
  BUG: 1468279
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: https://review.gluster.org/17717
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

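  The fix is essentially the loop below, sketched in isolation. The
  sources/sinks arrays follow AFR's naming convention; the real code
  lives inside metadata heal and is more involved.

      #include <stdio.h>

      int
      main (void)
      {
              int child_count = 3;
              int sources[3]  = { 1, 0, 0 };  /* brick 0 is the only source */
              int sinks[3]    = { 0, 0, 0 };
              int i;

              /* mark all non-sources as sinks, as data/entry heal do */
              for (i = 0; i < child_count; i++)
                      if (!sources[i])
                              sinks[i] = 1;

              for (i = 0; i < child_count; i++)
                      printf ("brick %d: %s\n", i,
                              sources[i] ? "source"
                                         : (sinks[i] ? "sink" : "-"));
              return 0;
      }
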
* posix: brick process crash after stop the volume while brick mux is on (Mohit Agrawal, 2017-07-13, 1 file, -1/+3)

  Problem: Sometimes the brick process crashes after stopping the
  volume while brick mux is enabled and the number of volumes is high.

  Solution: In posix notify, at the time of closing the mount_lock dir,
  the dir handle needs to be set to NULL to avoid reuse of the same dir
  handle.

  BUG: 1470533
  Change-Id: Ifd41c20b3c597317851f91049a7c801949840b16
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/17767
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* cluster/ec: Get size of file in EC [f]xattrop (Pranith Kumar K, 2017-07-13, 3 files, -4/+22)

  Problem: For allowing parallel writes, we shouldn't depend on ia_size
  being the same for all the bricks in each write_cbk(). But we need to
  make sure the backend size is correct on all the bricks and that no
  crashes/manual modifications happened.

  Fix: At the time of get_size_version() we do one check to make sure
  the size of the file is the same across the bricks. From then on the
  FOPs will give the status of each fop, so we rely on this information
  to track which bricks are good/bad.

  Updates #251
  Change-Id: I1df645347e2e9f2e09cfa4411b6cc305d7f4e4e5
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17741
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>

* storage/posix: Handle [f]xattrop xdata (Pranith Kumar K, 2017-07-13, 2 files, -26/+43)

  Updates #251
  Change-Id: I13d89c3b5dc39aa0a232a70be8ec6b64394cfa6e
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17740
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>

* cluster/rebalance: Fix hardlink migration failures (Susant Palai, 2017-07-13, 2 files, -15/+82)

  A brief overview of how hardlink migration works:

  - Different hardlinks (to the same file) may hash to different
    bricks, but their cached subvol will be the same. Rebalance picks
    up the first hardlink, calculates its hash (call it TARGET) and
    sets the hashed subvolume as an xattr on the data file.
  - All the hardlinks that come after this will fetch that xattr and
    will create linkto files on TARGET (all linkto files for the
    hardlinks will be hardlinks to each other on TARGET).
  - When the number of hardlinks on the source is equal to the number
    of hardlinks on TARGET, the data migration will happen.

  RACE 1: Since rebalance is multi-threaded, the first lookup (which
  decides what the TARGET subvol should be) can be called by two
  hardlink migrations in parallel, and they may end up creating linkto
  files on two different TARGET subvols. Hence, the hardlinks won't be
  migrated.

  Fix: Rely on the xattr response of the lookup inside
  gf_defrag_handle_hardlink, since it is executed under a synclock.

  RACE 2: The linkto files on TARGET can also be created by other
  clients if they are doing lookups on the hardlinks. Consider a
  scenario where you have 100 hardlinks. When rebalance is migrating
  the 99th hardlink, as a result of continuous lookups from another
  client, the linkcount on TARGET becomes equal to the source
  linkcount. Rebalance will migrate the data on the 99th hardlink
  itself. On the 100th hardlink migration, the hardlink will have
  TARGET as the cached subvolume. If its hash is also the same, then a
  migration will be triggered from TARGET to TARGET, leading to data
  loss.

  Fix: Make sure that before the final data migration, the source is
  not the same as the destination.

  RACE 3: Since a hardlink can be migrating to a non-hashed subvolume,
  a lookup from another client, or even rebalance itself, might delete
  the linkto file on TARGET, leading to hardlinks never getting
  migrated. This will be addressed in a different patch in the future.

  Change-Id: If0f6852f0e662384ee3875a2ac9d19ac4a6cea98
  BUG: 1469964
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: https://review.gluster.org/17755
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* features/read-only: Allow internal clients to r/w (Kotresh HR, 2017-07-11, 3 files, -28/+31)

  If the "read-only" volume option is set, it makes the volume
  read-only. But it also makes it read-only to gluster internal clients
  such as gsyncd, self-heal, bitd, rebalance etc., in which case all
  the internal operations would fail. This patch allows internal
  clients to read and write when the "read-only" option is set.

  Change-Id: I8110e8d9eac8def403bb29f235000ddc79eaa433
  BUG: 1430608
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: https://review.gluster.org/16855
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Karthik U S <ksubrahm@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

* cluster/dht: Clear clean_dst flag on target change (N Balachandran, 2017-07-11, 1 file, -0/+5)

  If the target of a file migration was changed because of
  min-free-disk limits, the dst_fd was closed but the clean_dst flag
  was not set to false. If the file could not be created on the new
  target for some reason, the ftruncate call to clean up the dst was
  sent on the now-invalid fd, causing the process to deadlock.

  Change-Id: I5bfa80f519b04567413d84229cf62d143c6e2f04
  BUG: 1469029
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17735
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

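  In sketch form: clean_dst and dst_fd come from the message above;
  the struct and helper are stand-ins for dht's migration state, not
  the actual code.

      #include <fcntl.h>
      #include <stdbool.h>
      #include <unistd.h>

      struct mig_state {
              int  dst_fd;
              bool clean_dst;   /* "truncate dst on failure" flag */
      };

      /* Called when min-free-disk forces a new target: the fd is
       * closed, so the cleanup flag must be dropped with it, or a
       * later error path will ftruncate() an invalid fd. */
      static void
      switch_target (struct mig_state *st)
      {
              close (st->dst_fd);
              st->dst_fd    = -1;
              st->clean_dst = false;   /* the reset this patch adds */
      }

      int
      main (void)
      {
              struct mig_state st = { open ("/dev/null", O_WRONLY), true };

              switch_target (&st);
              return st.clean_dst;     /* 0 == flag correctly cleared */
      }
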
* cluster/dht: Fix fd check race (N Balachandran, 2017-07-11, 2 files, -1/+34)

  There is another race between the cached subvol being updated in the
  inode_ctx and the fd being opened on the target.

  1. fop1 -> fd1 -> subvol0
  2. file migrated from subvol0 to subvol1 and cached_subvol changed to
     subvol1 in inode_ctx
  3. fop2 -> fd1 -> subvol1 [takes new cached subvol]
  4. fop2 -> checks fd ctx (fd not open on subvol1) -> opens fd1 on
     subvol1
  5. fop1 -> checks fd ctx (fd not open on subvol0) -> tries to open
     fd1 on subvol0 -> fails with "No such file or directory".

  Fix: If dht_fd_open_on_dst fails with ENOENT or ESTALE, wind to the
  old subvol and let the phase1/phase2 checks handle it.

  Change-Id: I34f8011574a8b72e3bcfe03b0cc4f024b352f225
  BUG: 1465075
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17731
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

* storage/posix: New gfid2path infra (Kotresh HR, 2017-07-10, 6 files, -2/+198)

  With this infra, a new xattr is stored on each entry creation, as
  below:

      trusted.gfid2path.<xxhash> = <pargfid>/<basename>

  If there are hardlinks, multiple xattrs would be present.

  Fops which are impacted: create, mknod, link, symlink, rename, unlink

  Option to enable:

      gluster vol set <VOLNAME> storage.gfid2path on

  Updates: #139
  Change-Id: I369974cd16703c45ee87f82e6c2ff5a987a6cc6a
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: https://review.gluster.org/17488
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

* cluster/dht: Use size to calculate estimates (N Balachandran, 2017-07-10, 3 files, -24/+188)

  The earlier approach of using the number of files to determine when
  the rebalance would complete did not work well when file sizes
  differed widely. The new approach gets the total data size and uses
  that information to determine how long the rebalance is expected to
  take.

  Change-Id: I84e80a0893efab72ff06130e4596fa71c9c8c868
  BUG: 1467209
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17668
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

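  The estimate itself is simple rate arithmetic; a hedged sketch with
  made-up numbers (the real rebalance code aggregates sizes across
  nodes, and the variable names here are illustrative):

      #include <stdint.h>
      #include <stdio.h>

      int
      main (void)
      {
              uint64_t total_size     = 500ULL << 30;  /* 500 GiB to move */
              uint64_t processed_size = 120ULL << 30;  /* 120 GiB done    */
              uint64_t elapsed_secs   = 3600;          /* in one hour     */

              /* bytes/sec so far; remaining bytes divided by that rate
               * gives the time left, regardless of file counts */
              double   rate   = (double)processed_size / elapsed_secs;
              uint64_t remain = (uint64_t)((total_size - processed_size)
                                           / rate);

              printf ("estimated seconds left: %llu\n",
                      (unsigned long long)remain);
              return 0;
      }
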
* mgmt/core: use sha hash function for volfile check (Mohammed Rafi KC, 2017-07-10, 1 file, -0/+3)

  We are storing the entire volfile and using this to check for volfile
  changes. With brick multiplexing there will be a lot of graphs per
  process, which will increase the memory footprint of the process. So
  instead of storing the entire graph we can use sha256, and we can
  compare the hash to see whether a volfile change happened or not.

  Also, with brick multiplexing, direct comparison of the volfile is
  not correct. There are two problems.

  Problem 1: We are currently storing one single graph (the last
  updated volfile), whereas what we need is the entire graph with all
  attached bricks.

  If we fix this issue, we have a second problem.

  Problem 2: With multiplexing we have a graph that contains multiple
  bricks. But what we are checking as part of the reconfigure is
  comparing the entire graph with one single graph, which will always
  fail.

  Solution: We create a list in glusterfs_ctx_t that stores the sha256
  hashes of the individual brick graphs. When a graph change happens,
  we compare the stored hash and the current hash. If the hash matches,
  then there is no need to reconfigure. Otherwise we first do the
  reconfigure and then update the hash.

  For now, gfapi has not been changed this way. Meaning, when a gfapi
  volfile fetch or reconfigure happens, we still store the entire graph
  and compare it each time. This is fine, because libgfapi will not
  load brick graphs. But changing libgfapi the same way would make the
  code similar in both glusterfsd-mgmt and api, and it would also help
  to reduce some memory.

  Change-Id: I9df917a771a52b95622ab8f63af34ec390163a77
  BUG: 1467986
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: https://review.gluster.org/17709
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

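  The compare-digest-instead-of-text idea, sketched with OpenSSL's
  one-shot SHA256() (build with -lcrypto). The patch uses whatever hash
  helper libglusterfs provides; this standalone form is only to show
  the shape of the check.

      #include <openssl/sha.h>
      #include <stdio.h>
      #include <string.h>

      /* Returns 1 if the volfile changed compared to the stored digest,
       * updating the stored digest in that case. */
      static int
      volfile_changed (const char *volfile, size_t len,
                       unsigned char stored[SHA256_DIGEST_LENGTH])
      {
              unsigned char now[SHA256_DIGEST_LENGTH];

              SHA256 ((const unsigned char *)volfile, len, now);
              if (memcmp (now, stored, SHA256_DIGEST_LENGTH) == 0)
                      return 0;          /* unchanged: skip reconfigure */
              memcpy (stored, now, SHA256_DIGEST_LENGTH);
              return 1;                  /* changed: reconfigure graph  */
      }

      int
      main (void)
      {
              unsigned char digest[SHA256_DIGEST_LENGTH] = { 0 };
              const char *v1 = "volume brick-a ...";

              printf ("%d\n", volfile_changed (v1, strlen (v1), digest)); /* 1 */
              printf ("%d\n", volfile_changed (v1, strlen (v1), digest)); /* 0 */
              return 0;
      }
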
* glusterd: Introduce option to limit no. of muxed bricks per process (Samikshan Bairagya, 2017-07-10, 11 files, -58/+483)

  This commit introduces a new global option that can be set to limit
  the number of multiplexed bricks in one process.

  Usage:
      # gluster volume set all cluster.max-bricks-per-process <value>

  If this option is not set, then multiplexing happens for now with no
  limitations set; i.e. a brick process will have as many bricks
  multiplexed to it as possible. In other words, the current
  multiplexing behaviour won't change if this option isn't set to any
  value.

  This commit also introduces a brick process instance that contains
  information about brick processes, like the number of bricks handled
  by the process (which is 1 in non-multiplexing cases), the list of
  bricks, and the port number, which also serves as a unique identifier
  for each brick process instance. The brick process list is maintained
  in 'glusterd_conf_t'.

  Updates: #151
  Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17469
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* nfs: add permission checking for mounting over WebNFS (Niels de Vos, 2017-07-09, 4 files, -62/+99)

  Solaris 10 uses WebNFS and not the MOUNT protocol. All permission
  checks for allowing/denying clients to mount are done through the MNT
  handlers. These handlers will not give out a filehandle to the
  NFS-client when mounting is denied. This prevents clients from
  mounting successfully.

  However, over WebNFS a well-known 'root-filehandle' is used directly
  with the NFSv3 protocol. When WebNFS was used, no permission checks
  (the "nfs.export-dir" option) were applied. Now the WebNFS
  mount-handler in Gluster/NFS calls the mnt3_parse_dir_exports()
  function that takes care of the permission checking.

  BUG: 1468291
  Change-Id: Ic9dfd092473ba9c1c7b5fa38401cf9c0aa8395bb
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17718
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: soumya k <skoduri@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

* nfs/nlm: keep track of the call-state and frame for notifications (Niels de Vos, 2017-07-09, 2 files, -24/+84)

  When blocking locks are used, a new frame is allocated that is used
  to send the notification to the client once the lock becomes
  available. In all other cases, the frame that contains the request
  from the client will be used for the reply.

  Because there was no way to track the different clients with their
  requests (captured in the call-state), the call-state could be free'd
  before the notification was sent to the client. This caused a
  use-after-free of the call-state and could trigger segfaults of the
  Gluster/NFS server or incorrect replies on (un)lock requests.

  By introducing a nlm4_notify_args structure, the call-state and frame
  can be tracked better. This prevents the possibility of segfaulting
  when the call-state is used after being free'd.

  BUG: 1467313
  Change-Id: I285d2bc552f509e5145653b7a50afcff827cd612
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17700
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>

* nfs/nlm: use refcounting for nfs3_call_state_t (Niels de Vos, 2017-07-09, 1 file, -11/+35)

  In order to track down a potential use-after-free of the
  nfs3_call_state_t structure in the NLM component, add reference
  counting where the structure is used. This should prevent premature
  free'ing of the structure.

  Change-Id: Ib1f13b0463ab1e012b7b49a623c91f0f3e73e1fb
  BUG: 1467313
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17699
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* nfs/nlm: handle reconnect for non-NLM4_LOCK requests (Niels de Vos, 2017-07-09, 1 file, -22/+79)

  When a reply to an NLM procedure gets stuck, the NFS-client will
  resend the request. This can happen through a re-connect in case the
  connection was terminated (long delay in the reply to the initial
  request). Once that happens, not all NLM procedures are handled
  correctly.

  Testing this is difficult and time-consuming. There may still be
  problems with certain operations, but this definitely makes it behave
  much better than before.

  The problem occurred due to a problem in EC; change-id I18a782903ba
  addressed the root cause.

  Change-Id: I23b385568e27232951fa3fbd7198a0e5d775a8c2
  BUG: 1467313
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17698
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* uss/svc: fix double free on xdata dictionary (Mohammed Rafi KC, 2017-07-09, 2 files, -8/+20)

  We were taking an unref on the wrong dictionary, which results in
  invalid memory access.

  Change-Id: Ic25a6c209ecd72c9056dfcb79fabcfc650dd3c1e
  BUG: 1467513
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: https://review.gluster.org/17691
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* svc: send revalidate lookup on special dir (Mohammed Rafi KC, 2017-07-09, 2 files, -0/+81)

  The .snaps directory is a virtual directory that doesn't exist on the
  backend. Even though it is a special dentry, it doesn't have a
  dedicated inode, so the inode number is always random. This means it
  will get a different inode number when the snapd process restarts.

  Now, with the Windows client, the show-directory feature requires a
  lookup on the .snaps directory after the readdirp on root. If snapd
  restarted after a lookup, the subsequent lookup will fail, because
  the linked inode will be stale.

  This patch will do a revalidate lookup with a new inode.

  Change-Id: If97c07ecb307cefe7c86be8ebd05e28cbf678d1f
  BUG: 1467513
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: https://review.gluster.org/17690
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>

* svs: implement CHILD UP notify in snapview-server (Mohammed Rafi KC, 2017-07-09, 1 file, -0/+15)

  protocol/server expects a CHILD UP event to successfully configure
  the graph. In the actual brick graph, posix is the one that decides
  to initiate the notification to the parent that the child is up. But
  in the snapd graph there is no posix, hence the CHILD UP notification
  was missing.

  Ideally each xlator should initiate the CHILD UP event whenever it
  sees that it is the last child xlator.

  Change-Id: Icccdb9fe920c265cadaf9f91c040a0831b4b78fc
  BUG: 1467513
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: https://review.gluster.org/17689
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>

* cluster/ec: Don't try to heal when no sink is UP (Ashish Pandey, 2017-07-07, 1 file, -0/+6)

  Problem: In a 4 + 2 EC volume configuration, if an untar of linux is
  going on and we kill a brick, indices will be created for the
  files/dirs which need to be healed. ec_shd_index_sweep spawns threads
  to scan these entries and start heal. If in the middle of this we
  kill one more brick, we end up in a situation where we cannot heal an
  entry, as only "ec->fragment" number of bricks are UP. However, the
  scan will continue and it will trigger the heal for those entries.

  Solution: When a heal is triggered for an entry, check whether it
  *CAN* be healed or not. If not, come out with ENOTCONN.

  Change-Id: I305be7701c289f36bd7bde22491b71074771424f
  BUG: 1464359
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: https://review.gluster.org/17692
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>

* groups: don't allocate auxiliary gid list on stack (Csaba Henk, 2017-07-06, 3 files, -52/+27)

  When glusterfs wants to retrieve the list of auxiliary gids of a
  user, it typically allocates a sufficiently big gid_t array on the
  stack and calls getgrouplist(3) with it. However, "sufficiently big"
  means the maximum supported gid list size, which in GlusterFS is
  GF_MAX_AUX_GROUPS = 64k. That means a 64k * sizeof(gid_t) = 256k
  allocation, which is big enough to overflow the stack in certain
  cases.

  A further observation is that stack allocation of the gid list brings
  no gain, as in all cases the content of the gid list eventually gets
  copied over to a heap-allocated buffer.

  So we add a convenience wrapper for getgrouplist to libglusterfs,
  called gf_getgrouplist, which calls getgrouplist with a sufficiently
  big heap-allocated buffer (it takes care of the allocation too). We
  are porting all the getgrouplist invocations to gf_getgrouplist and
  thus eliminate the huge stack allocation.

  BUG: 1464327
  Change-Id: Icea76d0d74dcf2f87d26cb299acc771ca3b32d2b
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: https://review.gluster.org/17706
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

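  A minimal sketch of the heap-allocating wrapper described above. The
  real gf_getgrouplist lives in libglusterfs and also caps the count at
  GF_MAX_AUX_GROUPS; error handling here is trimmed for brevity.

      #include <grp.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Retry getgrouplist(3) until the buffer is large enough.
       * On success returns the group count; the caller frees *groups. */
      static int
      gf_getgrouplist (const char *user, gid_t group, gid_t **groups)
      {
              int    ngroups = 32;    /* small initial guess */
              gid_t *list    = NULL;

              for (;;) {
                      gid_t *tmp = realloc (list, ngroups * sizeof (gid_t));

                      if (!tmp) {
                              free (list);
                              return -1;
                      }
                      list = tmp;
                      if (getgrouplist (user, group, list, &ngroups) != -1)
                              break;  /* ngroups is the final count */
                      /* too small: getgrouplist stored the required
                       * size in ngroups; loop and retry */
              }
              *groups = list;
              return ngroups;
      }

      int
      main (void)
      {
              gid_t *groups = NULL;
              int    i, n   = gf_getgrouplist ("root", 0, &groups);

              for (i = 0; i < n; i++)
                      printf ("%d\n", (int)groups[i]);
              free (groups);
              return n < 0;
      }
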
* debug/io-stats: Append stats for each interval in the same file (Krutika Dhananjay, 2017-07-06, 1 file, -1/+1)

  ... instead of overwriting stats from the previous interval. This is
  so that consumers of this feature do not have to worry about
  monitoring when each 'ios-dump-interval' has passed and back up the
  resultant stats file well before the next interval has expired.

  Change-Id: Ide897237bf4d38e5d759f09911f7d9c817019edf
  BUG: 1458197
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: https://review.gluster.org/17452
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

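  The behavioural change amounts to opening the dump file in append
  mode. A sketch follows; that the previous mode was a truncating "w"
  variant is an assumption, and the file path is made up.

      #include <stdio.h>
      #include <time.h>

      int
      main (void)
      {
              /* was (overwrite each interval): fopen (path, "w+"); */
              FILE *fp = fopen ("/tmp/gluster-io-stats.dump", "a");

              if (!fp)
                      return 1;
              fprintf (fp, "=== interval ending %ld ===\n",
                       (long)time (NULL));
              /* ... dump per-fop counters for this interval here ... */
              fclose (fp);
              return 0;
      }
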
* nfs/nlm: unref fds in nlm_client_free() (Niels de Vos, 2017-07-06, 1 file, -13/+12)

  When a nlm_clnt is getting free'd, the FDs associated with this
  client should be unref'd as well.

  Change-Id: Ifa4ea4b7ed45a454413cfc0c820f2516c534a9aa
  BUG: 1467313
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17697
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

* nfs: make nfs3_call_state_t refcounted (Niels de Vos, 2017-07-06, 3 files, -39/+42)

  There is no refcounting done of the nfs3_call_state_t structure,
  which seems to result in use-after-free problems in the NLM part of
  Gluster/NFS. The structure is initialized with two different
  functions; it is easier to have a single place to do this.

  The Gluster/NFS part will not use the refcounting, for now. This is
  being added to make the NLM code more stable.
  nfs3_call_state_wipe() will behave as before for Gluster/NFS, but
  cleanup is triggered through the refcounting now. This prevents major
  changes to the stable part of the NFS-server, and makes it possible
  to improve the NLM component separately.

  Change-Id: I2e15bcf12af74e8a46c2727e4a160e9444d29ece
  BUG: 1467313
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: https://review.gluster.org/17696
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>

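  A generic sketch of the refcount pattern the commit introduces for
  nfs3_call_state_t. The structure and function names below are
  stand-ins, not the Gluster code; the point is that cleanup fires on
  the last unref, so a pending notification holding a ref keeps the
  state alive.

      #include <stdatomic.h>
      #include <stdlib.h>

      typedef struct call_state {
              atomic_int refcnt;
              /* ... request data ... */
      } call_state_t;

      static call_state_t *
      cs_init (void)
      {
              call_state_t *cs = calloc (1, sizeof (*cs));

              if (cs)
                      atomic_init (&cs->refcnt, 1);
              return cs;
      }

      static call_state_t *
      cs_ref (call_state_t *cs)
      {
              atomic_fetch_add (&cs->refcnt, 1);
              return cs;
      }

      static void
      cs_unref (call_state_t *cs)
      {
              /* the last unref triggers cleanup */
              if (atomic_fetch_sub (&cs->refcnt, 1) == 1)
                      free (cs);
      }

      int
      main (void)
      {
              call_state_t *cs = cs_init ();

              cs_ref (cs);    /* e.g. a blocked-lock notification */
              cs_unref (cs);  /* request path done                */
              cs_unref (cs);  /* notification done: frees         */
              return 0;
      }
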
* cluster/ec: correctly handle end of file for seek (Xavier Hernandez, 2017-07-06, 1 file, -0/+18)

  When a SEEK_HOLE was issued near the end of file, sometimes an offset
  beyond the end of file was returned. Another problem was that using
  some offsets greater than the end of file returned success instead of
  failing with ENXIO.

  Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5
  BUG: 1449348
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: https://review.gluster.org/17228
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

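  The expected semantics can be checked against a local file; per
  lseek(2), SEEK_HOLE/SEEK_DATA with an offset at or beyond end-of-file
  must fail with ENXIO, and a returned hole offset must never exceed
  the file size. Illustrative check, not the EC code; assumes a file
  named "testfile" exists.

      #define _GNU_SOURCE
      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int
      main (void)
      {
              int   fd = open ("testfile", O_RDONLY);
              off_t end, off;

              if (fd < 0)
                      return 1;
              end = lseek (fd, 0, SEEK_END);

              off = lseek (fd, end, SEEK_HOLE);   /* offset == EOF */
              if (off == -1 && errno == ENXIO)
                      printf ("correct: ENXIO at end of file\n");
              else
                      printf ("bug: got offset %lld past EOF\n",
                              (long long)off);
              close (fd);
              return 0;
      }
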
* core: assorted typos and spelling mistakes from Debian lintian (Kaleb S. KEITHLEY, 2017-07-03, 3 files, -7/+9)

  Plus minor readability improvements.

  Reported-by: pmatthaei@debian.org
  Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf
  BUG: 1466785
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/17660
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* posix: Avoid one extra l(get|list)xattr call after using a buffer in posix_getxattr (Mohit Agrawal, 2017-07-03, 2 files, -177/+300)

  Problem: In the posix xlator, posix_(f)getxattr calls the
  sys_lgetxattr system call twice to fetch an xattr value.

  Solution: By using an extra buffer on the first call, we can avoid
  the second sys_lgetxattr call in posix_getxattr for most xattrs.

  BUG: 1460659
  Change-Id: I0d8da776c5bc86653d874a4629a73bbf65c621b8
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/17520
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Kinglong Mee

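  A sketch of the one-call fast path: try a reasonably sized buffer
  first and only fall back to the size-query plus second call when the
  value doesn't fit (ERANGE). The 8192-byte buffer is an arbitrary
  illustration, not the size the patch picks.

      #include <errno.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/xattr.h>

      /* On success returns the value length and stores a malloc'd copy
       * in *out; the caller frees it. */
      static ssize_t
      get_xattr_value (const char *path, const char *key, char **out)
      {
              char    buf[8192];
              ssize_t size = lgetxattr (path, key, buf, sizeof (buf));

              if (size >= 0) {        /* fast path: one syscall */
                      *out = malloc (size ? size : 1);
                      if (!*out)
                              return -1;
                      memcpy (*out, buf, size);
                      return size;
              }
              if (errno != ERANGE)
                      return -1;

              /* slow path: value larger than the scratch buffer */
              size = lgetxattr (path, key, NULL, 0);
              if (size < 0)
                      return -1;
              *out = malloc (size);
              if (!*out)
                      return -1;
              return lgetxattr (path, key, *out, size);
      }

      int
      main (int argc, char **argv)
      {
              char   *val = NULL;
              ssize_t n;

              if (argc != 3)
                      return 2;
              n = get_xattr_value (argv[1], argv[2], &val);
              free (val);
              return n < 0;
      }
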
* Link against missed libraries to resolve symbols (Prashanth Pai, 2017-07-03, 3 files, -4/+6)

  When external programs perform a dlopen("..so", RTLD_LAZY|RTLD_LOCAL)
  on some shared objects like xlators, it can fail with dlerror set to
  the error string "undefined symbol <some-type>". This was observed
  for the following shared objects: fuse.so, quota.so, quotad.so,
  server.so, libgfrpc.so and socket.so.

  P.S: This was found while running a Go program which fetches the list
  of xlator options (volume_option_t) from an xlator's shared object.

  BUG: 1193929
  Change-Id: I7b958409cf11fb67c2be32a3f85a96fb1260236b
  Signed-off-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-on: https://review.gluster.org/17659
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* multiple: fix struct/typedef inconsistencies (Jeff Darcy, 2017-06-30, 3 files, -13/+13)

  The most common pattern, both in our code and elsewhere, is this:

      struct _xyz {
              ...
      };

      typedef struct _xyz xyz_t;

  These exceptions - especially call_frame/call_stack - have been
  slowing down code navigation for years. By converging on a single
  pattern, navigating from xyz_t in code to the actual definition of
  struct _xyz (i.e. without having to visit the typedef first) might
  even be automatable.

  Change-Id: I0e5dd1f51f98e000173c62ef4ddc5b21d9ec44ed
  Signed-off-by: Jeff Darcy <jdarcy@fb.com>
  Reviewed-on: https://review.gluster.org/17650
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Jeff Darcy <jeff@pl.atyp.us>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>

* features/shard: Remove ctx from LRU in shard_forget (Pranith Kumar K, 2017-06-30, 1 file, -0/+13)

  Problem: There is a race when the following two commands are executed
  on the mount in parallel from two different terminals on a sharded
  volume, which leads to use-after-free.

  Terminal-1: while true; do dd if=/dev/zero of=file1 bs=1M count=4; done
  Terminal-2: while true; do cat file1 > /dev/null; done

  In the normal case this is the life-cycle of a shard inode:
  1) Shard is added to LRU when it is first looked up.
  2) For every operation on the shard it is moved up in LRU.
  3) When "unlink of the shard"/"LRU limit is hit" happens, it is
     removed from LRU.

  But we are seeing a race where the inode stays in the shard LRU even
  after it is forgotten, which leads to use-after-free and then some
  memory corruption. These are the steps:
  1) Shard is added to LRU when it is first looked up.
  2) For every operation on the shard it is moved up in LRU.

  Reader-handler:
  1) Reader handler needs shard-x to be read.
  2) In shard_common_resolve_shards(), it does inode_resolve() and that
     leads to a hit in LRU, so it is going to call
     __shard_update_shards_inode_list() to move the inode to the top of
     LRU.
  3) When __shard_update_shards_inode_list() is called, it finds that
     the inode is not in LRU, so it adds it back to the LRU list.

  Truncate-handler (running in parallel):
  1) Truncate has just deleted shard-x.
  2) shard-x gets unlinked from the itable and inode_forget(inode, 0)
     is called to make sure the inode can be purged upon last unref.

  Both these operations complete and call inode_unref(shard-x), which
  leads to the inode getting freed and forgotten, even when it is in
  the shard LRU list. When more inodes are added to LRU,
  use-after-free will happen, and it leads to undefined behaviors.

  Fix: I see that the inode can be removed from LRU even by the
  protocol layers like gfapi/gNFS when the LRU limit is reached. So it
  is better to add a check in shard_forget() to remove itself from the
  LRU list if it exists.

  BUG: 1466037
  Change-Id: Ia79c0c5c9d5febc56c41ddb12b5daf03e5281638
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17644
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>

* cluster/dht: Fix crash in dht_rename_lock_cbk (N Balachandran, 2017-06-29, 1 file, -2/+4)

  Use a local variable to store the call count in the STACK_WIND for
  loop. Using frame->local is dangerous, as it could be freed while the
  loop is still being processed.

  Change-Id: Ie65cdcfb7868509b4a83bc2a5b5d6304eabfbc8e
  BUG: 1466110
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17645
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Nigel Babu <nigelb@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

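  The bug class is small enough to model standalone. The snippet below
  is a runnable illustration of the pattern only; dht's frame/local and
  STACK_WIND machinery are replaced by a plain struct and a synchronous
  callback.

      #include <stdio.h>
      #include <stdlib.h>

      struct local { int call_cnt; };

      static struct local *g_local;

      static void
      callback (void)
      {
              if (--g_local->call_cnt == 0) {  /* last reply frees local */
                      free (g_local);
                      g_local = NULL;
              }
      }

      int
      main (void)
      {
              int i, call_cnt;

              g_local = malloc (sizeof (*g_local));
              g_local->call_cnt = 3;

              /* BAD:  for (i = 0; i < g_local->call_cnt; i++) ...
               * reads freed memory once the final callback has run.
               * GOOD: snapshot the count into a stack variable first,
               * so the loop never touches local after the last wind. */
              call_cnt = g_local->call_cnt;
              for (i = 0; i < call_cnt; i++)
                      callback ();

              printf ("done\n");
              return 0;
      }
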
* cluster/dht: Check if fd is opened on dst subvol (N Balachandran, 2017-06-28, 6 files, -30/+543)

  If an fd is opened on a file, the file is migrated, and the cached
  subvol is updated in the inode_ctx before an fd-based fop is sent,
  the fop is sent to the dst subvol, on which the fd is not opened.
  This causes the FOP to fail with EBADF.

  Now, every fd-based fop will check that the fd has been opened on the
  dst subvol before winding it down.

  Change-Id: Id92ef5eb7a5b5226688e2d2868b15e383f5f240e
  BUG: 1465075
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17630
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Susant Palai <spalai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* ec: Increase notification in all the cases (Ashish Pandey, 2017-06-28, 1 file, -31/+20)

  Problem: "gluster v heal <volname> info" takes a long time to respond
  when a brick is down.

  RCA: The heal info command does a virtual mount. EC waits for 10
  seconds before sending the UP call to the upper xlator, to get a
  notification (DOWN or UP) from all the bricks.

  Currently, we increase ec->xl_notify_count based on the current
  status of the brick. So, if a DOWN event notification has come and
  the brick is already down, we do not increase ec->xl_notify_count in
  ec_handle_down.

  Solution: Handle the DOWN event notification irrespective of what the
  current status of the brick is.

  Change-Id: I0acac0db7ec7622d4c0584692e88ad52f45a910f
  BUG: 1464091
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: https://review.gluster.org/17606
  Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>

* glusterd: mark brickinfo to started on successful attach (Atin Mukherjee, 2017-06-28, 1 file, -5/+4)

  brickinfo's port & status should be filled in only when the brick
  attach is successful.

  Change-Id: I68b181be37cb94d176f0f4692e8d9dac5493181c
  BUG: 1465559
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17640
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* glusterd: brick process fails to restart after gluster pod failure (Mohit Agrawal, 2017-06-27, 1 file, -10/+31)

  Problem: In a container environment, sometimes the brick process
  doesn't come up after deleting a gluster pod and creating a new one.

  Solution: On the basis of the logs, it seems glusterd is trying to
  attach to a non-glusterfs process. Change the code of the function
  glusterd_get_sock_from_brick_pid to fetch the socketpath from the
  arguments of the running brick process.

  BUG: 1464072
  Change-Id: Ida6af00066341b683bbb4440d7a0d8042581656a
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/17601
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>