* extras/hooks: Do not blindly remove volume share from smb.conf (Anoop C S, 2018-05-07; 3 files changed, -9/+12)

  When Gluster volumes are shared via Samba, any extra smb.conf parameter
  settings done by the administrator for those shares are lost when the
  volume is restarted. Instead of removing the whole share from smb.conf
  (via hook scripts during volume stop), it is better to make it temporarily
  unavailable to end users until the volume is started again. Therefore we
  make use of a smb.conf parameter named 'available' [1] to achieve the
  above intent.

  [1] https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html

  Change-Id: I68a9055b50791f6ffd3b95a3c13d858a75fa6530
  fixes: bz#1558921
  BUG: 1558921
  Signed-off-by: Anoop C S <anoopcs@redhat.com>

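  A rough shell sketch of the idea, assuming the share section is named
  gluster-<volname>, that smb.conf lives at /etc/samba/smb.conf, and that an
  explicit "available = yes" line exists (all assumptions; the actual hook
  scripts may do this differently):

      # volume stop: hide the share instead of deleting it
      VOL=myvol
      sed -i "/^\[gluster-$VOL\]/,/^\[/ s/^\([[:space:]]*\)available = yes/\1available = no/" /etc/samba/smb.conf
      smbcontrol all reload-config

      # volume start: make the share visible to clients again
      sed -i "/^\[gluster-$VOL\]/,/^\[/ s/^\([[:space:]]*\)available = no/\1available = yes/" /etc/samba/smb.conf
      smbcontrol all reload-config
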
* gfapi: actually avoid recall callback when closed [v4.2dev] (Thomas Hindoe Paaboel Andersen, 2018-05-06; 1 file changed, -1/+2)

  Due to missing curly braces we end up calling the callback function even
  when the state is GLFD_CLOSE. This patch adds the curly braces so that
  both the log message and the actual callback are skipped.

  Introduced in 19568, in commit b04066721bf4a240f61b83bd87bbb27437c5fe4f.

  Change-Id: I0b15cfe222841cfcb12f17723284acb3838d64d7
  fixes: bz#1575294
  Signed-off-by: Thomas Hindoe Paaboel Andersen <phomes@gmail.com>

* glusterd: Fix for memory leak in volume tier status command (Sanju Rakonde, 2018-05-06; 1 file changed, -0/+8)

  Fixes: bz#1573220
  Change-Id: Ia60f40fa4f1e525cae6f571a24e5385ba1e004c0
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* glusterd/ctime: Provide option to enable/disable ctime feature (Kotresh HR, 2018-05-06; 6 files changed, -16/+41)

  Updates: #208
  Change-Id: If6f52b9b1b5b823ad64faeed662e96ceb848c54c
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

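  Hypothetical CLI usage of the new toggle; the option name under which
  glusterd exposes the feature is an assumption here, not taken from the
  patch:

      # assumption: the feature surfaces as a settable volume option named "ctime"
      gluster volume set myvol ctime on
      gluster volume set myvol ctime off
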
* glusterd/utime: glusterd utime changes (Kotresh HR, 2018-05-06; 2 files changed, -0/+21)

  Load the utime xlator on the client side just after (below) the
  performance xlators.

  Updates: #208
  Change-Id: Ie15f156943fa8e7dac7050e5479c906da747b568
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* utime: ctime client side xlator (Kotresh HR, 2018-05-06; 13 files changed, -1/+616)

  The client side utime xlator does two things:
  1. Update the unix epoch time in frame->root->ctime.
  2. Update frame->root->flags based on the fop, which indicates the time
     attributes that should be updated for the parent/entry.

  Credits: Rafi KC <rkavunga@redhat.com>
  Updates: #208
  Change-Id: I9cad297040c70798a0a8468a080eb4aeff73138d
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* posix/ctime: posix hook to set ctime xattr in relevant fops (Kotresh HR, 2018-05-06; 8 files changed, -34/+279)

  This patch uses the ctime posix APIs to set consistent time across the
  replica on disk. It also stores the time attributes in the inode context.

  Credits: Rafi KC <rkavunga@redhat.com>
  Updates: #208
  Change-Id: I1a8d74d1e251f1d6d142f066fc99258025c0bcdd
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* posix/ctime: posix hooks to get consistent time xattr (Kotresh HR, 2018-05-06; 11 files changed, -103/+233)

  This patch uses the ctime posix APIs to get consistent time across the
  replica. The time attributes are taken from the inode context, or from
  disk if not found there, and merged with the iatt to be returned.

  Credits: Rafi KC <rkavunga@redhat.com>
  Updates: #208
  Change-Id: Id737038ce52468f1f5ebc8a42cbf9c6ffbd63850
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* posix: APIs in posix to get and set time attributes (Kotresh HR, 2018-05-06; 10 files changed, -21/+683)

  This is part of the effort to provide consistent time across the
  distribute and replica set for the time attributes (ctime, atime, mtime)
  of an object. This patch contains the APIs to set and get the attributes
  on disk and in the inode context.

  Credits: Rafi KC <rkavunga@redhat.com>
  Updates: #208
  Change-Id: I5d3cba53eef90ac252cb8299c0da42ebab3bde9f
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* tests: Add lease test case (Poornima G, 2018-05-05; 2 files changed, -2/+704)

  Updates: #350
  Change-Id: Iee78ab4baf48c481de1e13ff2b0393bc106b7d0e
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>

* afr: Add lease() fop (Poornima G, 2018-05-05; 3 files changed, -0/+157)

  Change-Id: Ied047dd5ee44e9d5a5d3db214826f7df30332ef9
  updates: #350
  BUG: 1319992
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>

* mount,fuse: make fuse dumping available as mount option (Csaba Henk, 2018-05-04; 2 files changed, -0/+10)

  Updates: bz#1193929
  Change-Id: I4dd4d0e607f89650ebb74b893b911b554472826d
  Signed-off-by: Csaba Henk <csaba@redhat.com>

* fuse: add support for kernel writeback cache (Csaba Henk, 2018-05-04; 10 files changed, -4/+170)

  - Added the kernel-writeback-cache command line and xlator option for
    requesting utilisation of the writeback cache of the kernel in FUSE_INIT
    (see [1]).
  - Added the attr-times-granularity command line and xlator option via
    which the granularity of the {a,m,c}time in stat (attr) data that we
    support can be indicated to the kernel. This is a means to avoid
    divergence of the attr times between kernel and userspace that could
    occur with writeback-cache, while still maintaining the maximum time
    precision the FUSE server is capable of (see [2]).
  - Handling the FATTR_CTIME flag in FUSE_SETATTR that indicates presence of
    ctime in the setattr payload. Currently we cannot associate arbitrary
    ctimes to files on the backend, so we just touch them to update their
    ctimes to the current time. Having ctimes in the setattr payload is also
    a side effect of writeback cache (see [3] and [4]).

  [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4d99ff8, "fuse: Turn writeback cache on"
  [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e27c9d3, "fuse: add time_gran to INIT_OUT"
  [3]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1e18bda, "fuse: add .write_inode"
  [4]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ab9e13f, "fuse: allow ctime flushing to userspace"

  Updates: #435
  Change-Id: Id174c8e0c815c4456c35f8c53e41a6a507d91855
  Signed-off-by: Csaba Henk <csaba@redhat.com>

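  A hypothetical mount invocation for the two new knobs; the mount-option
  spellings are assumed to mirror the command line option names above and
  may not match the final syntax:

      # assumption: mount options follow the command line option names added here
      mount -t glusterfs \
            -o kernel-writeback-cache=yes,attr-times-granularity=1000000000 \
            server1:/myvol /mnt/myvol
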
* feature/leases : fixing bugs found while testing glfs_test.t (Jiffin Tony Thottan, 2018-05-04; 3 files changed, -14/+56)

  Change-Id: Iee8f431601ecda184108a079f665e05902b0f78b
  updates: #350
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>

* gfapi : RECALL_LEASE implementation (Soumya Koduri, 2018-05-04; 13 files changed, -28/+556)

  Right now there are two types of upcalls:
  * poll method
  * registering a callback

  But a callback can be registered per fs, and the same callback fn shall be
  used for any lease recall with the object handle as argument, as done for
  cache invalidation.

  TODO: RECALL_LEASE for each glfd (for future reference; may be needed for
  Samba as they do not deal with object handles). In case of RECALL_LEASE,
  we could associate a separate cbk function for each glfd either by
  - extending pub_glfs_lease to accept new args (recall_cbk_fn, cookie)
  - or by defining a new API "glfs_register_recall_cbk_fn (glfd, recall_cbk_fn, cookie)".
  In such cases, flag it and, instead of calling the upcall functions below,
  define a new one to go through the glfd list and invoke each of their
  recall_cbk_fn.

  In addition:
  * passed the lease id to dict in required arguments
  * added a flag check in pub_glfs_open

  Updates: #350
  Change-Id: I07a971f0f26ec6aae0b9f9a5613504317dee153b
  Signed-off-by: Soumya Koduri <skoduri@redhat.com>
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>

* features/bitrot: print the path of the corrupted objects (Raghavendra Bhat, 2018-05-04; 7 files changed, -12/+292)

  Currently "gluster volume bitrot <volume name> scrub status" gives the
  list of the corrupted objects (files as of now). But only the gfids of
  those corrupted objects are seen, and one has to do getfattr, find etc.
  operations to get the actual path of those objects for removal etc. This
  change makes an attempt to print the path of those files as much as
  possible.

  * Try to get the path using the on-disk gfid2path xattr.
  * If the above operation fails, then go for the in-memory path (provided
    that the object has its dentry properly created and linked in the inode
    table of the brick where the corrupted object is present).

  So the gfid to path resolution is a soft resolution, i.e. based on the
  inode and dentry cache in the brick's memory. If the path cannot be
  obtained via the inode table either, then only the gfid is printed.

  Change-Id: Ie9a30307f43a49a2a9225821803c7d40d231de68
  fixes: bz#1570962
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

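  For context, this is roughly the manual gfid-to-path chase the change
  saves administrators from; the brick path and gfid below are illustrative
  placeholders, and the .glusterfs layout is the brick's backend gfid store:

      BRICK=/bricks/brick1
      GFID=0f270a54-7b22-4ba5-a0d3-9d4b2ad6fabc
      GFIDPATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
      # dump the xattrs (including any gfid2path entries) stored on the backend entry
      getfattr -d -m . -e text "$GFIDPATH"
      # or locate the real file by following the hard link
      find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$GFIDPATH" -print
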
* use awk to get a specific line from the output instead of cut (Raghavendra Bhat, 2018-05-04; 1 file changed, -7/+1)

  cut -d$'\n' is not separating the xattrs shown as part of getfattr output.
  Hence use awk to get the nth line of getfattr output for the nth iteration
  in the for loop.

  Change-Id: I1a96cd3f72f4f407f9a783375f78d9a69d5d3885
  fixes: bz#1574606
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

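  The substitution being made, in isolation; the getfattr flags and loop
  variable here are illustrative, not copied from the test script:

      # old: cut -d$'\n' -f "$i" does not actually split the output on newlines
      # new: pick the i-th line of the getfattr output
      getfattr -d -m . -e hex "$file" | awk -v n="$i" 'NR == n'
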
* glusterd: enable self-heal in daemons (Ravishankar N, 2018-05-04; 3 files changed, -14/+0)

  ...like rebalance, quota and tier, because that seems to be the consensus
  (see BZ).

  Change-Id: I912336a12f4e33ea4ec55f804df403fab0dc89fc
  BUG: 1536024
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

* glusterfsd: initiate pmap_signout for all detach brick requests (Atin Mukherjee, 2018-05-04; 1 file changed, -0/+1)

  In glusterfs_handle_terminate, all bricks getting detached need to
  initiate a pmap_signout.

  Change-Id: Iacbd6fcd49215fe6a5210df7dfed1260fde9179a
  Fixes: bz#1570011
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

* cli/snapshot: ignore errors for snapshot status for ALL/VOL (Thomas Hindoe Paaboel Andersen, 2018-05-04; 1 file changed, -1/+1)

  Currently errors are reported for snapshot status of type ALL and VOL. The
  intention was to ignore those, but the code gets it wrong.

  The original condition for ignoring ALL/VOL was removed in Bug 1096610
  (Change-Id Ifc0ac31d2a9f91e136e87f3b51a629df7dba94e8), and the current
  logic was introduced in Bug 789278 (Change-Id
  I985cea1ef787d239b2632d5a7f467070846f92e4).

  Change-Id: Ic02ea98fb23b1149264e91b41f2fc2ca916d405f
  Fixes: bz#1574259
  Signed-off-by: Thomas Hindoe Paaboel Andersen <phomes@gmail.com>

* cli/glusterd: Add warning message in cli for user to check (Susant Palai, 2018-05-03; 4 files changed, -9/+27)

  force-migration config for remove-brick operation.

  The cli will take input from the user before starting the "remove-brick
  start" operation. The message/confirmation looks like the following:

  <Running remove-brick with cluster.force-migration enabled can result in
  data corruption. It is safer to disable this option so that files that
  receive writes during migration are not migrated. Files that are not
  migrated can then be manually copied after the remove-brick commit
  operation. Do you want to continue with your current
  cluster.force-migration settings? (y/n)>

  The question for COMMIT_FORCE is also changed.

  Fixes: bz#1572586
  Change-Id: Ifdb6b108a646f50339dd196d6e65962864635139
  Signed-off-by: Susant Palai <spalai@redhat.com>

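  The safer sequence the warning points users toward looks roughly like
  this; the volume and brick names are placeholders:

      gluster volume set myvol cluster.force-migration off
      gluster volume remove-brick myvol server1:/bricks/b1 start
      gluster volume remove-brick myvol server1:/bricks/b1 status   # wait for completion
      gluster volume remove-brick myvol server1:/bricks/b1 commit
      # files that were skipped during migration can then be copied off the
      # removed brick by hand
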
* posix: Avoid changelog retries for geo-rep (Mohit Agrawal, 2018-05-03; 1 file changed, -0/+33)

  Problem: geo-rep is slow to migrate directories from the master volume to
  the slave volume because of a lot of changelog retries.

  Solution: Update the condition in posix_getxattr to ignore
  MDS_INTERNAL_XATTR, as posix already ignores other internal xattrs.

  BUG: 1571069
  Change-Id: I4d91ec73e5b1ca1cb3ecf0825ab9f49e261da70e
  fixes: bz#1571069
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

* doc: updates Language Bindings for libgfapi-perl and gogfapi (Ji-Hyeon Gim, 2018-05-03; 1 file changed, -1/+6)

  libgfapi-perl provides declarations and linkage for the Gluster gfapi C
  library via FFI for Perl mongers. In addition, the gogfapi URI link is
  replaced with GitHub because the Forge is dead.

  Change-Id: I773e78beb201b48ca3fde0dc72d04b64dc9697d6
  Signed-off-by: Ji-Hyeon Gim <potatogim@potatogim.net>
  Updates: #447

* Don't use hardcoded /sbin, /usr/bin etc. paths. Fixes #1450546 (Niklas Hambüchen, 2018-05-03; 4 files changed, -22/+10)

  Instead, rely on programs to be in PATH, as gluster already does in many
  places across its code base.

  Change-Id: Id21152fe42f5b67205d8f1571b0656c4d5f74246
  BUG: 1450546
  Signed-off-by: Niklas Hambuechen <mail@nh2.me>

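  In shell-script terms the change amounts to the following; the tool name
  and mount point are just examples, not necessarily ones touched by this
  patch:

      # before (hardcoded): /usr/sbin/xfs_info "$mountpoint"
      # after (PATH lookup):
      mountpoint=/bricks/brick1
      command -v xfs_info >/dev/null || { echo "xfs_info not found in PATH" >&2; exit 1; }
      xfs_info "$mountpoint"
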
* cluster/dht: unwind if dht_selfheal_dir_mkdir returns an error (Raghavendra G, 2018-05-03; 1 file changed, -1/+5)

  If dht_selfheal_dir_mkdir returns an error, the cbk passed to
  dht_selfheal_directory is not invoked. So the current codepath leaves a
  frame that is never unwound, resulting in a fop that hangs forever.

  Change-Id: I422308b8a34a074301ca46b029ffe676f5e0f66c
  fixes: bz#1574305
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>

* protocol/server : unwind as per op version (Ashish Pandey, 2018-05-03; 3 files changed, -3/+11)

  Change-Id: Id6717640ac14881b490e512c4682e45ffffa7f5b
  fixes: bz#1570538
  BUG: 1570538
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>

* glusterd/geo-rep: Fix UNUSED_VALUE coverity issue (Varsha Rao, 2018-05-03; 2 files changed, -12/+12)

  The return value of glusterd_get_local_brickpaths is unused, as it is
  reinitialized outside the if block, so add a goto statement. Also change
  the if condition to check the failure case: when the return value is -1
  and path_list is NULL.

  Change-Id: I6b47d7751263f704bd69a6452a7e71bfcf226d49
  updates: bz#789278
  Signed-off-by: Varsha Rao <varao@redhat.com>

* core/various: python3 compat, prepare for python2 -> python3 (Kaleb S. KEITHLEY, 2018-05-02; 32 files changed, -341/+377)

  See https://review.gluster.org/#/c/19788/.

  Use the print function from __future__.

  Change-Id: If5075d8d9ca9641058fbc71df8a52aa35804cda4
  updates: #411
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>

* block-profile: enable cluster.eager-lock in block-profile (Prasanna Kumar Kalever, 2018-05-01; 1 file changed, -1/+1)

  Eager-lock gave a 2.5X perf improvement. On top of that, with the batching
  fix in tcmu-runner and client-io-threads, we are seeing close to a 3X perf
  improvement. But we don't want to include client-io-threads in the default
  profile option; it should be enabled on a case-by-case basis, so that
  option is not added.

  BUG: 1573119
  Fixes: bz#1573119
  Change-Id: Ida53c3ef9a041a73b65fdd06158ac082da437206
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>

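  Applying the profile presumably stays the same; only its contents change.
  The group name below is an assumption based on the gluster-block use case:

      # assumption: the block profile ships as the "gluster-block" settings group
      gluster volume set myvol group gluster-block
      # eager-lock can still be toggled per volume if a workload misbehaves
      gluster volume set myvol cluster.eager-lock off
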
* glusterd: Fix for memory leak in get-state detail (Sanju Rakonde, 2018-05-01; 1 file changed, -1/+8)

  Fixes: bz#1573066
  Change-Id: I76fe3bdde7351736b32eb3d6c4cc5f8f276257ed
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* dht: gf_defrag_settle_hash should ignore ENOENT and ESTALE error (Susant Palai, 2018-04-30; 1 file changed, -1/+8)

  Problem: A directory deletion can happen just before gf_defrag_settle_hash,
  which internally does a setxattr operation on a directory.

  Solution: Ignore ENOENT and ESTALE errors.

  Fixes: bz#1572581
  Change-Id: I2f91809f3b5e02976c4c3a5a596406a8b2f8f6f2
  Signed-off-by: Susant Palai <spalai@redhat.com>

* cluster/afr: shd changes for thin arbiter (karthik-us, 2018-04-30; 1 file changed, -0/+184)

  Updates #352
  Change-Id: I1bbb3c652ba33cec6aa37f3700370674077fb17d
  Signed-off-by: karthik-us <ksubrahm@redhat.com>

* afr: initial changes for thin arbiter (Ravishankar N, 2018-04-30; 6 files changed, -8/+229)

  1. Create thin arbiter index file during mount.
  2. Set pending marker in thin arbiter id file in case of failure.

  Change-Id: I269eb8d069f0323f1fc616175e5e5eb7b91d5f82
  updates: #352
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

* server/resolver: don't trust inode-table for RESOLVE_NOT (Raghavendra G, 2018-04-30; 1 file changed, -4/+23)

  There have been known races between fops which add a dentry (like lookup,
  create, mknod etc.) and fops that remove a dentry (like rename, unlink,
  rmdir etc.) due to which stale dentries are left out in the inode table
  even though the dentry doesn't exist on the backend.

  For example, consider a lookup (parent/bname) and an unlink (parent/bname)
  racing in the following order:
  * lookup hits storage/posix and finds that the dentry exists
  * unlink removes the dentry on storage/posix
  * unlink reaches protocol/server where the dentry (parent/bname) is
    unlinked from the inode
  * lookup reaches protocol/server and creates a dentry (parent/bname) on
    the inode

  Now we've a stale dentry (parent/bname) associated with the inode in the
  itable. This situation is bad for fops like link, create etc. which invoke
  the resolver with type RESOLVE_NOT. These fops fail with EEXIST even
  though there is no such dentry on the backend fs.

  This issue can be solved in two ways:
  * Enable the "dentry fop serializer" xlator [1]:
      # gluster volume set features.sdfs on
  * Make sure the resolver does a lookup on the backend when it finds a
    dentry in the itable and validates the state of the itable:
    - If a dentry is not found, unlink those stale dentries from the itable
      and continue with the fop.
    - If the dentry is found, fail the fop with EEXIST.

  This patch implements the second solution, as sdfs is not enabled by
  default in the brick xlator stack. Once sdfs is enabled by default, this
  patch can be reverted.

  [1] https://github.com/gluster/glusterfs/issues/397

  Change-Id: Ia8bb0cf97f97cb0e72639bce8aadb0f6d3f4a34a
  updates: bz#1543279
  BUG: 1543279
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>

* libglusterfs: Capture the dict response in syncop_xattrop_cbk (karthik-us, 2018-04-27; 5 files changed, -9/+30)

  Problem: Currently it is not possible to capture the xattr values which
  are set on the bricks by calling syncop_(f)xattrop, because the response
  dict is not being assigned to any of the dictionaries.

  Fix: In the xattrop callback, capture the response dict and send it back
  to the caller if it is requested.

  Change-Id: I9de9bcd97d6008091c9b060bcca3676cb9ae8ef9
  fixes: bz#1572076
  Signed-off-by: karthik-us <ksubrahm@redhat.com>

* feature/thin-arbiter: Implement thin-arbiter translator (Ashish Pandey, 2018-04-25; 11 files changed, -1/+843)

  Updates #352
  Change-Id: I3d8caa6479dc8e48bec62a09b056971bb061f0cf
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>

* performance/md-cache: purge cache on ENOENT/ESTALE errors (Raghavendra G, 2018-04-25; 1 file changed, -87/+438)

  If not, the next lookup could be served from the cache and succeed, which
  is wrong. This can affect the retry logic of VFS when it receives an
  ESTALE.

  Change-Id: Iad8e564d666aa4172823343f19a60c11e4416ef6
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Fixes: bz#1566303

* cluster/afr: Keep child-up until ping-event (Pranith Kumar K, 2018-04-25; 3 files changed, -25/+40)

  Problem: Say we have 2 bricks, brick-A within halo-max-latency and brick-B
  at more than halo-max-latency, and both halo-min and halo-max replicas are
  set to '1'. In this case brick-A comes online first and its ping-latency
  is updated. When brick-B comes online, we have 2 up-bricks, so the code
  tries to find the brick with the worst latency to mark it down. Since
  brick-B just came online it always had '0' latency, so brick-A used to be
  marked offline and brick-B would eventually be the one to stay online,
  even when brick-A is more suited.

  Fix: Consider the latency of a just-up child as HALO_MAX_LATENCY so that,
  until its ping-latency is known, the just-up brick is the one found as the
  worst child. Also keep ping-latency as -1 until child-up during
  initialization.

  BUG: 1567881
  fixes bz#1567881
  Change-Id: I148262fe505468190f0eb99225d0f6d57cdb6f04
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* libglusterfs/syncop: Handle barrier_{init/destroy} in error cases (Pranith Kumar K, 2018-04-23; 2 files changed, -4/+27)

  BUG: 1568521
  updates: bz#1568521
  Change-Id: I53e60cfcaa7f8edfa5eca47307fa99f10ee64505
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* features/shard: Add option to barrier parallel lookup and unlink of shards (Krutika Dhananjay, 2018-04-23; 2 files changed, -28/+89)

  Also move the common parallel unlink callback for GF_FOP_TRUNCATE and
  GF_FOP_FTRUNCATE into a separate function.

  Change-Id: Ib0f90a5f62abdfa89cda7bef9f3ff99f349ec332
  updates: bz#1568521
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>

* cluster/dht: Fix dht_rename lock order (N Balachandran, 2018-04-23; 1 file changed, -18/+47)

  Fixed dht_order_rename_lock to use the same inodelk ordering as that of
  the dht selfheal locks (dictionary order of lock subvolumes).

  Change-Id: Ia3f8353b33ea2fd3bc1ba7e8e777dda6c1d33e0d
  fixes: bz#1568348
  Signed-off-by: N Balachandran <nbalacha@redhat.com>

* server/auth: add option for strict authentication (Mohammed Rafi KC, 2018-04-20; 6 files changed, -12/+81)

  When this option is enabled, we will check for a matching username and
  password; if not found, the connection will be rejected. This also does a
  checksum validation of the volfile.

  The option is invalid when SSL/TLS is in use, at which point the SSL/TLS
  certificate user name is used to validate and hence authorize the right
  user. This expects TLS allow rules to be set up correctly rather than the
  default *.

  This option is not settable, and as a result cannot be enabled for volumes
  using the CLI. It is used with the shared storage volume to restrict
  access to the same in non-SSL/TLS environments to the gluster peers only.

  Tested:
  ./tests/bugs/protocol/bug-1321578.t
  ./tests/features/ssl-authz.t
  - Ran tests on volumes with and without strict auth checking (as the brick
    vol file needed to be edited to test, or rather to enable the option)
  - Ran tests on volumes to ensure existing mounts are disconnected when we
    enable strict checking

  Change-Id: I2ac4f0cfa5b59cc789cc5a265358389b04556b59
  fixes: bz#1568844
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: ShyamsundarR <srangana@redhat.com>

* shared storage: Prevent mounting shared storage from non-trusted client (Mohammed Rafi KC, 2018-04-20; 1 file changed, -0/+21)

  The gluster shared storage is a volume used as internal storage for
  various features including ganesha, geo-rep and snapshot. So this volume
  should not be exposed to the client, as it is a special volume for
  internal use. This fix won't generate a non-trusted volfile for the shared
  storage volume.

  Change-Id: I8ffe30ae99ec05196d75466210b84db311611a4c
  fixes: bz#1568844
  BUG: 1568844
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>

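  For context, shared storage is enabled cluster-wide and auto-mounted on
  the trusted peers roughly as follows; the mount path varies by version and
  is only a guess here:

      # creates and starts the internal gluster_shared_storage volume
      gluster volume set all cluster.enable-shared-storage enable
      # peers mount it internally, e.g. under /run/gluster/shared_storage (assumption)
      mount | grep gluster_shared_storage
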
* server: fix unresolved symbols by moving them to libglusterfs (Mohit Agrawal, 2018-04-20; 5 files changed, -104/+106)

  Problem: The glusterd2 build fails due to undefined symbols
  (xlator_mem_cleanup, glusterfsd_ctx) in server.so.

  Solution: To resolve this, make the two changes below:
  1) Move the xlator_mem_cleanup code from glusterfsd-mgmt.c to xlator.c so
     that it is part of libglusterfs.so.
  2) Replace glusterfsd_ctx with this->ctx, because the symbol glusterfsd_ctx
     is not part of server.so.

  BUG: 1544090
  Change-Id: Ie5e6fba9ed458931d08eb0948d450aa962424ae5
  fixes: bz#1544090
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

* cluster/afr: Need heal-timeout to be configured as low as 5 seconds (Pranith Kumar K, 2018-04-20; 1 file changed, -1/+1)

  In Halo replication, there are pending heals more often than not. It makes
  sense to give users the capability to configure it as low as 5 seconds.

  BUG: 1569489
  fixes bz#1569489
  Change-Id: I451c1975827f66398b903f659c981ef3121d5376
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

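  With this change the heal interval can be dropped to the new minimum (the
  volume name is a placeholder):

      gluster volume set myvol cluster.heal-timeout 5
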
* features/bitrot: show the corresponding brick for the corrupted objects (Raghavendra Bhat, 2018-04-20; 1 file changed, -3/+8)

  Currently the "gluster volume bitrot <volume name> scrub status" command
  shows the corrupted objects of a node, but the brick to which a corrupted
  object belongs is not shown. Showing the brick of the corrupted object
  will help in situations where a node hosts multiple bricks of a volume.

  Change-Id: I7fbdea1e0072b9d3487eb10757468bc02d24df21
  fixes: bz#1569198
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

* eventsapi: Handle Unicode string during signing (Aravinda VK, 2018-04-20; 1 file changed, -1/+1)

  Python 2.7 HMAC does not support Unicode strings. The secret is read from
  a file, so it is possible that glustereventsd reads the content as
  Unicode. This patch converts the secret to the `str` type before
  generating the HMAC signature.

  Fixes: bz#1568820
  Change-Id: I7daa64499ac4ca02544405af26ac8af4b6b0bd95
  Signed-off-by: Aravinda VK <avishwan@redhat.com>

* Make glusterfsd binary print statedump & xlator dir (Prashanth Pai, 2018-04-19; 5 files changed, -7/+49)

  glusterd2 needs the following options, some of which are provided by the
  gluster CLI today:

  --print-xlatordir
  --print-statedumpdir
  --print-logdir

  However, the CLI package need not be present on the machine running
  glusterd2. This change adds the above CLI options to the glusterfsd
  binary, which glusterd2 depends on.

  Reverts 9a1ae47c8d60836ae0628a04a153f28c1085c0e8

  Related changes:
  https://review.gluster.org/#/c/19882/
  https://github.com/gluster/glusterd2/pull/663

  Updates: bz#1193929
  Change-Id: I18c123b0d3350d2bd4f2400783e3b94e402a4e29
  Signed-off-by: Prashanth Pai <ppai@redhat.com>

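  Usage of the flags listed above, as added to the glusterfsd binary:

      glusterfsd --print-xlatordir
      glusterfsd --print-statedumpdir
      glusterfsd --print-logdir
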
* gluster: Sometimes Brick process is crashed at the time of stopping brick (Mohit Agrawal, 2018-04-19; 20 files changed, -112/+365)

  Problem: Sometimes the brick process crashes at the time of stopping a
  brick while brick mux is enabled.

  Solution: The brick process was crashing because the rpc connection was
  not being cleaned up properly while brick mux is enabled. With this patch,
  after sending the GF_EVENT_CLEANUP notification to the xlator (server), we
  wait for all rpc client connections to be destroyed for that specific
  xlator. Once the rpc connections are destroyed in server_rpc_notify for
  all clients associated with that brick, xlator_mem_cleanup is called for
  the brick xlator as well as all child xlators. To avoid races at the time
  of cleanup, two new flags are introduced on each xlator: cleanup_starting
  and call_cleanup.

  Note: All test cases were run in a separate build
  (https://review.gluster.org/#/c/19700/) with the same patch after
  forcefully enabling brick mux; all test cases passed.

  BUG: 1544090
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
  updates: bz#1544090

* glusterd: volume inode/fd status broken with brick mux (hari gowtham, 2018-04-19; 9 files changed, -87/+119)

  Problem: The values for inode/fd were populated from the ctx received from
  the server xlator. Without brick mux, every brick from a volume belonged
  to a single process, so searching the server and populating it worked.
  With brick mux, a number of bricks can be confined to a single process.
  These bricks can be from different volumes too (if we use the
  max-bricks-per-process option). If they are from different volumes, using
  the server xlator to populate causes problems.

  Fix: Use the brick to validate and populate the inode/fd status.

  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
  fixes: bz#1566067
