path: root/xlators
Commit message · Author · Age · Files · Lines
* POSIX/xlators: Unused and unchecked value coverity issue (yatipadia, 2019-10-11; 1 file, -2/+2)

  This patch addresses CID-1274094 and CID-1382354.

  Problem (1): "ret" was assigned a value which was never used; it was overwritten by 0 and hence had no use.
  Problem (2): a function was called without checking its return value.

  Solution (1): Removed the assignment and just called the function whose return value was being stored in "ret".
  Solution (2): There was no need to check the return value, as 0 is returned at the end, so the return value is cast to void.

  Change-Id: Iefd0e9000c466ef2428c754c31370263bf1ca0d0
  updates: bz#789278
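
  As a general illustration of the second fix (plain C, not the actual xlator code), casting a deliberately ignored return value to void makes the discard explicit and silences the unchecked-return finding:

      /* Hypothetical helper whose failure is non-fatal at this call site. */
      static int flush_cache(void)
      {
          return 0;
      }

      int do_work(void)
      {
          /* Before: ret = flush_cache();  -- "ret" was never read.
           * After: the void cast documents the intentional discard. */
          (void)flush_cache();

          return 0; /* 0 is returned unconditionally, so no check is needed */
      }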
* Posix: UNUSED VALUE coverity fix (Purna Pavan Chandra Aekkaladevi, 2019-10-11; 1 file, -2/+0)

  This patch fixes the Coverity issue with CID 1274206.

  Problem: -1 is assigned to op_ret, but that stored value is overwritten before it can be used.
  Fix: Removed the line that assigns -1 to op_ret, as it has no significance.

  Change-Id: Icb881549ac946003710551c9b9e88b33b6a06239
  Updates: bz#789278
  Signed-off-by: Purna Pavan Chandra Aekkaladevi <paekkala@redhat.com>
* performance/open-behind: seek fop should open_and_resume (Pranith Kumar K, 2019-10-11; 1 file, -0/+27)

  fixes: bz#1760187
  Change-Id: I4c6ad13194d4fc5c7705e35bf9a27fce504b51f9
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* performance/read-ahead: update expected offset before unwinding read response (Raghavendra G, 2019-10-11; 1 file, -2/+2)

  With the current code there is a window of time between unwinding the response to a read request and updating the internal offset to account for the read just done. If a new sequential read request arrives in this window, it is incorrectly identified as a non-sequential read. The fix is to update the file offset to account for a read request before sending back the response to it.

  Change-Id: Iff0c59c769e1eb15f262257763026657e2d4785d
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Fixes: bz#1753843
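
  A minimal sketch of the race and its fix (simplified, hypothetical names rather than the actual read-ahead code): the tracked offset must be advanced before the reply is handed back, otherwise a back-to-back sequential read can be compared against a stale offset:

      #include <pthread.h>
      #include <stddef.h>
      #include <sys/types.h>

      /* Simplified per-file read-ahead state. */
      struct ra_file {
          pthread_mutex_t lock;
          off_t expected_offset; /* where the next sequential read should start */
      };

      static void reply_to_client(off_t offset, size_t size)
      {
          (void)offset; (void)size; /* ... unwind the response ... */
      }

      void ra_read_done(struct ra_file *f, off_t offset, size_t size)
      {
          /* Fix: account for this read BEFORE unwinding the response. */
          pthread_mutex_lock(&f->lock);
          f->expected_offset = offset + size;
          pthread_mutex_unlock(&f->lock);

          /* If the reply were sent first, a new read at offset + size arriving
           * in the gap would see the stale expected_offset and be
           * misclassified as non-sequential. */
          reply_to_client(offset, size);
      }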
* client xlator: misc. cleanups (Yaniv Kaul, 2019-10-11; 6 files, -662/+474)

  - remove dead code
  - move functions to be static
  - move some code that only needs to be executed under an if branch
  - remove some dead assignments and redundant checks

  No functional change, I hope.

  Change-Id: I93d952408197ecd2fa91c3f812a73c54242342fa
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd-utils.c: attach_brick() - remove dead code (Yaniv Kaul, 2019-10-11; 1 file, -5/+0)

  pidfile1 and pidfile2 were not used anywhere. Removed the assignment and the variables.

  Change-Id: Ic5fe091ba28bb500c370410a63440953048fd0b7
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* posix-helpers.c: do not copy when reading xattr (Yaniv Kaul, 2019-10-11; 1 file, -47/+51)

  The code is simplified to avoid a needless copy, as well as simplified overall for readability. Such changes are needed elsewhere too (see https://github.com/gluster/glusterfs/issues/720). A few other minor changes here and there, nothing functional.

  Change-Id: Ia1167849f54d9cacbfe32ddd712dc1699760daf5
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: rebalance start should fail when quorum is not met (Sanju Rakonde, 2019-10-10; 1 file, -1/+2)

  rebalance start should not succeed if quorum is not met. This patch adds a condition to check whether quorum is met in the pre-validation stage.

  fixes: bz#1760467
  Change-Id: Ic7d0d08f69e4bc6d5e7abae713ec1881531c8ad4
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* cluster/afr: Heal entries when there is a source & no healed_sinks (karthik-us, 2019-10-09; 1 file, -0/+15)

  Problem: In a situation where B1 blames B2, B2 blames B1 and B3 doesn't blame anything for entry heal, heal will not complete even though we have a clear source and sinks. This happens because in afr_selfheal_find_direction() only the bricks which are blamed by non-accused bricks are considered as sinks. Later in __afr_selfheal_entry_finalize_source(), when it tries to mark all the non-sources as sinks, it fails to do so because there won't be any healed_sinks marked, no witness present, and there will be a source.

  Fix: If there is a source and no healed_sinks, then reset all the locked sources to 0 and healed sinks to 1 to do a conservative merge.

  Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548
  Fixes: bz#1749322
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
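
  A schematic of the finalize step described above (hypothetical flag arrays, not the actual AFR structures): when a source exists but the healed_sinks set is empty, the flags are reset so a conservative merge can run:

      /* 'sources' and 'healed_sinks' are per-brick flags; 'source' is the
       * chosen source brick index, or -1 if none was found. */
      void finalize_entry_sources(int *sources, int *healed_sinks,
                                  int child_count, int source)
      {
          int i, sink_count = 0;

          for (i = 0; i < child_count; i++)
              sink_count += healed_sinks[i];

          if (source >= 0 && sink_count == 0) {
              /* Source but no healed_sinks: reset the sources and mark all
               * bricks as sinks so entries are conservatively merged. */
              for (i = 0; i < child_count; i++) {
                  sources[i] = 0;
                  healed_sinks[i] = 1;
              }
          }
      }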
* afr: support split-brain CLI for replica 3 (Ravishankar N, 2019-10-09; 2 files, -2/+4)

  Ever since we added quorum checks for lookups in afr via commit bd44d59741bb8c0f5d7a62c5b1094179dd0ce8a4, the split-brain resolution commands would not work for replica 3 because there would be no readables for the lookup fop. The argument was that split-brains do not occur in replica 3, but we do see (data/metadata) split-brain cases once in a while, which indicates that there are a few bugs/corner cases yet to be discovered and fixed.

  Fortunately, commit 8016d51a3bbd410b0b927ed66be50a09574b7982 added GF_CLIENT_PID_GLFS_HEALD as the pid for all fops made by glfsheal. If we leverage this and allow lookups in afr when the pid is GF_CLIENT_PID_GLFS_HEALD, split-brain resolution commands will work for replica 3 volumes too.

  Likewise, the check is added in shard_lookup as well, to permit resolving split-brains by specifying "/.shard/shard-file.xx" as the file name (which previously used to fail with EPERM).

  Change-Id: I3c543dea79caf7cfbc1633e9089cb1cdd2538ba9
  Fixes: bz#1756938
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: replace afr_frame_return() when possible with direct call (Yaniv Kaul, 2019-10-07; 5 files, -15/+9)

  If you are already under lock, just decrement the call count directly instead of releasing the lock, re-taking it and decrementing.

  Implements https://github.com/gluster/glusterfs/issues/728

  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I3fa20b4651fbdb826655c5a03baeed46e99b5487
* xlators: fixes logically dead code (yatipadia, 2019-10-03; 1 file, -5/+0)

  This patch addresses CID-1124388.

  Problem: When we reach the "out" section in ra_priv_dump(), if the condition (ret && conf) holds true, then the value of "add_section" will always be true, so the condition (add_section == _gf_false) is dead code.

  Fix: "add_section" has no use in the whole block and was making part of the block logically dead code; hence, it was removed.

  Change-Id: Id7e0105fc9a5ca5b2c2d098c665e6e32ecc6b62b
  updates: bz#789278
  Signed-off-by: yatipadia <ypadia@redhat.com>
* cluster/dht: Correct fd processing loop (N Balachandran, 2019-10-02; 1 file, -22/+62)

  The fd processing loops in the dht_migration_complete_check_task and the dht_rebalance_inprogress_task functions were unsafe and could cause an open to be sent on an already freed fd. This has been fixed.

  Change-Id: I0a3c7d2fba314089e03dfd704f9dceb134749540
  Fixes: bz#1757399
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* glusterd: improve logging (Sanju Rakonde, 2019-10-01; 1 file, -3/+3)

  updates: bz#1193929
  Change-Id: I5b4a39fbdaa43642a322440d550ca24df815cae9
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* protocol/handshake: pass volume-id for extra check (Amar Tumballi, 2019-09-30; 5 files, -1/+77)

  With the added check of volume-id during handshake, we can be sure not to connect with a brick if it gets re-used in another volume. This prevents accidental issues which can happen with a stale client process lurking along.

  Also added a test case for testing the same volume name which would fetch a different volfile (ie, different bricks, different type), and a different volume name, but the same brick.

  For reference: Currently a client<->server handshake happens in glusterfs through the protocol/client translator (setvolume) to protocol/server using a dictionary which contains many keys. Rejection happens on the server side if some of the required keys are missing in the handshake dictionary. Till now, there was no single unique identifier for a client to validate that it is actually talking to the corresponding server. All we look at in protocol/client is a key called 'remote-subvolume', which should match a subvolume name in the server volume file, and for any volume with the same brick name (which can be present in the same cluster due to recreate), it would be the same. This could cause a major issue: a client connected to a given brick in one volume would be connected to another volume's brick if it is re-created/re-used. To prevent this behavior, we now pass along 'volume-id' in the handshake, which is preserved for the life of the client process and prevents these accidental connections.

  NOTE: This behavior isn't applicable for user-snapshot enabled volumes, as snapshotted volumes have different volume-ids.

  Fixes: bz#1620580
  Change-Id: Ie98286e94ce95ae09c2135fd6ec7d7c2ca1e8095
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
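
  A rough sketch of the server-side check (simplified strings, not the actual protocol/server code): the volume-id sent in the setvolume dictionary is compared against the brick's current volume, and the handshake is refused on mismatch:

      #include <string.h>

      /* Reject a client whose volume-id does not match the volume this
       * brick currently belongs to (hypothetical helper). */
      int server_validate_volume_id(const char *client_volume_id,
                                    const char *server_volume_id)
      {
          if (client_volume_id == NULL) {
              /* Older clients may not send the key; the check then falls
               * back to the 'remote-subvolume' name alone. */
              return 0;
          }

          if (strcmp(client_volume_id, server_volume_id) != 0)
              return -1; /* brick re-used by another volume: refuse setvolume */

          return 0;
      }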
* posix: heketidbstorage bricks go down during PVC creation (Mohit Agrawal, 2019-09-30; 1 file, -1/+1)

  Problem: In an OCS environment, heketidbstorage is detached because the health_check thread fails. Sometimes aio_write does not finish within the default health-check-timeout limit and the brick is detached.

  Solution: To avoid the issue, increase the default timeout to 20s.

  Change-Id: Idff283d5713da571f9d20a6b296274f69c3e5b7b
  Fixes: bz#1755900
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
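
  The patch changes the shipped default; environments that need even more headroom can raise it per volume (assuming the storage.health-check-timeout option name, worth confirming via `gluster volume set help` on your version):

      gluster volume set heketidbstorage storage.health-check-timeout 30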
* cluster/ec: Implement read-mask feature (Pranith Kumar K, 2019-09-27; 3 files, -0/+82)

  fixes: #725
  Change-Id: Iaaefe6f49c8193c476b987b92df6bab3e2f62601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: prevent filling shd log with "table not found" messages (Xavi Hernandez, 2019-09-26; 1 file, -2/+13)

  When the self-heal daemon receives an inodelk contention notification, it tries to locate the related inode using inode_find() and the inode table owned by the top-most xlator, which in this case doesn't have any inode table. This causes many messages to be logged by the inode_find() function because the inode table passed is NULL. This patch prevents this by making sure the inode table is not NULL before calling inode_find().

  Change-Id: I8d001bd180aaaf1521ba40a536b097fcf70c991f
  Fixes: bz#1755344
  Signed-off-by: Xavi Hernandez <jahernan@redhat.com>
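
  The shape of the added guard (stub declarations stand in for the libglusterfs types; only the NULL check is the point):

      typedef struct inode_table inode_table_t;
      typedef struct inode inode_t;
      typedef unsigned char gf_uuid[16];

      extern inode_t *inode_find(inode_table_t *table, gf_uuid gfid);

      /* Handle a contention notification; skip quietly when no table exists. */
      int handle_contention(inode_table_t *table, gf_uuid gfid)
      {
          inode_t *inode = NULL;

          if (table != NULL) /* the added guard */
              inode = inode_find(table, gfid);

          if (inode == NULL)
              return 0; /* nothing to do; avoids "table not found" log spam */

          /* ... forward the notification to the inode's owner ... */
          return 1;
      }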
* read-ahead/io-cache: turn off by default (Raghavendra Gowdappa, 2019-09-26; 1 file, -2/+2)

  We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel read-ahead, and at worst io-cache degrades performance for workloads that don't involve re-reads. Given that VFS already has both of these functionalities, this patch turns these two translators off by default for native fuse mounts. For non-native fuse mounts like gfapi (NFS-Ganesha/Samba) we can have these xlators on by using custom profiles.

  Change-Id: Ie7535788909d4c741844473696f001274dc0bb60
  Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
  fixes: bz#1676479
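
  Workloads that still benefit from these translators can turn them back on per volume with the long-standing option names:

      gluster volume set <volname> performance.read-ahead on
      gluster volume set <volname> performance.io-cache on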
* posix: Brick is going down unexpectedly (Mohit Agrawal, 2019-09-26; 1 file, -4/+10)

  Problem: In a brick_mux environment, while multiple volumes are created (1-1000), sometimes a brick goes down due to a health_check thread failure.

  Solution: Ignore the EAGAIN error in the health_check thread code to avoid the issue.

  Change-Id: Id44c59f8e071a363a14d09d188813a6633855213
  Fixes: bz#1751907
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
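
  In spirit, the change treats EAGAIN from the asynchronous write as transient rather than fatal; a simplified stand-alone illustration (not the actual posix-helpers code):

      #include <aio.h>
      #include <errno.h>
      #include <stdio.h>

      /* Submit one health-check write; treat EAGAIN as transient. */
      static int health_check_write(struct aiocb *cb)
      {
          if (aio_write(cb) == -1) {
              if (errno == EAGAIN) {
                  /* AIO resources momentarily exhausted under heavy
                   * brick-mux load: skip this round instead of taking
                   * the brick down. */
                  return 0;
              }
              perror("aio_write"); /* any other error is a real failure */
              return -1;
          }
          return 0;
      }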
* perf/write-behind: Clear frame->local on conflict error (N Balachandran, 2019-09-25; 1 file, -0/+4)

  WB saves the wb_inode in frame->local for the truncate and ftruncate fops. This value is not cleared in case of an error on a conflicting write request. FRAME_DESTROY finds a non-null frame->local and tries to free it using mem_put. However, wb_inode is allocated using GF_CALLOC, causing the process to crash.

  credit: vpolakis@gmail.com

  Change-Id: I217f61470445775e05145aebe44c814731c1b8c5
  Fixes: bz#1753592
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
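
  The general pattern behind the fix (a sketch in xlator style; wb_truncate_error is a hypothetical name, while call_frame_t and STACK_UNWIND_STRICT come from libglusterfs headers): anything stashed in frame->local that FRAME_DESTROY must not free has to be detached before unwinding the error:

      static int wb_truncate_error(call_frame_t *frame, int op_errno)
      {
          /* frame->local holds a borrowed wb_inode pointer allocated with
           * GF_CALLOC elsewhere. FRAME_DESTROY would mem_put() any non-NULL
           * local, so detach it before the frame is destroyed. */
          frame->local = NULL;

          STACK_UNWIND_STRICT(truncate, frame, -1, op_errno, NULL, NULL, NULL);
          return 0;
      }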
* glusterfs/fuse: Reduce the default lru-limit value (N Balachandran, 2019-09-24; 2 files, -2/+2)

  The current lru-limit value still uses memory for up to 128K inodes. Reduce the default value of lru-limit to 64K.

  Change-Id: Ica2dd4f8f5fde45cb5180d8f02c3d86114ac52b3
  Fixes: bz#1753880
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
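
  Mounts that prefer the previous behavior can override the default at mount time; the fuse option takes an inode count (131072 matches the old 128K default):

      mount -t glusterfs -o lru-limit=131072 <server>:/<volname> /mnt/glusterfs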
* glusterd/ganesha: fixing resource leak in tear_down_cluster() (Jiffin Tony Thottan, 2019-09-23; 1 file, -0/+8)

  CID: 1370947

  Updates: bz#789278
  Change-Id: Ib694056430ff0536ed705a0e77e5ace22486891e
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* mgmt/glusterd: Fixed typos and reworded logs (N Balachandran, 2019-09-23; 2 files, -8/+8)

  Fixed typos and reworded log messages for clarity.

  Change-Id: I46f616ce7d3eb993c77a5812e8bc044e5f283354
  Fixes: bz#1753859
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* afr-common.c, afr-self-heal.h: calloc/alloca0 -> malloc/alloca (Yaniv Kaul, 2019-09-20; 2 files, -5/+4)

  In 3 cases, there was a memory allocation and zeroing, followed directly by populating it with content. Replaced with a memory allocation that does not zero the memory.

  Change-Id: I4fbb5c924fb3a144e415d2368126b784dde760ea
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
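
  The pattern in miniature (generic C, not the AFR code itself): when every byte of the buffer is overwritten immediately after allocation, the zeroing pass is wasted work:

      #include <stdlib.h>
      #include <string.h>

      char *dup_region(const char *src, size_t len)
      {
          /* Before: char *buf = calloc(1, len);  -- zeroes len bytes ... */
          char *buf = malloc(len); /* ... that memcpy overwrites anyway */
          if (buf == NULL)
              return NULL;

          memcpy(buf, src, len); /* every byte written, so no zeroing needed */
          return buf;
      }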
* posix/ctime: Fix coverity issue (Kotresh HR, 2019-09-18; 1 file, -1/+1)

  posix-metadata.c: 462 in posix_set_mdata_xattr()
  ...
  460     GF_VALIDATE_OR_GOTO(this->name, time, out);
  461
  >>> CID 1405665: Control flow issues (DEADCODE)
  >>> Execution cannot reach the expression "flag->atime" inside this
  >>> statement: "if (update_utime && (flag->...".
  462     if (update_utime && (flag->ctime && !time) && (flag->atime && !u_atime) &&

  Change-Id: Id31d81d04ea2785a669eafe0dc1307303cb2271b
  updates: bz#789278
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
* cluster/dht: Handle file truncates during migration (N Balachandran, 2019-09-17; 1 file, -26/+34)

  File truncate operations during a migration were not handled properly. This has been fixed.

  Change-Id: Ic642d257e893641236a4a21ab69fcc7a569dd70a
  Fixes: bz#1745967
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* ctime/rebalance: Heal ctime xattr on directory during rebalance (Kotresh HR, 2019-09-16; 9 files, -52/+132)

  After add-brick and rebalance, the ctime xattr is not present on rebalanced directories on the new brick. This patch fixes the same. Note that ctime still doesn't support consistent time across the distribute sub-volume.

  This patch also fixes the in-memory inconsistency of time attributes when metadata is self-healed.

  Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
  fixes: bz#1734026
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
* protocol/client: don't reopen fds on which POSIX locks are held after a reconnect (Raghavendra G, 2019-09-12; 8 files, -6/+122)

  Bricks clean up any granted locks after a client disconnects, and currently these locks are not healed after a reconnect. This means that post reconnect a competing process could be granted a lock even though the first process which was granted locks has not unlocked. By not re-opening fds, subsequent operations on such fds will fail, forcing the application to close the current fd and reopen a new one. This way we prevent any silent corruption.

  A new option "client.strict-locks" is introduced to control this behaviour. This option is set to "off" by default.

  Change-Id: Ieed545efea466cb5e8f5a36199aa26380c301b9e
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  updates: bz#1694920
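
  Since the option defaults to "off", applications that depend on POSIX lock correctness across reconnects must opt in explicitly:

      gluster volume set <volname> client.strict-locks on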
* cluster/ec: Mark release only when it is acquired (Pranith Kumar K, 2019-09-12; 2 files, -2/+19)

  Problem (two mounts racing on the same directory in a 4+2 setup):

  Mount-1:
  1) Tries to acquire lock on 'dir1'.
  2) Lock is granted on brick-0.
  3) Gets a lock-contention notification, marks lock->release to true.
  4) A new fop comes on 'dir1', which is put in the frozen list because lock->release is set to true.
  5) The lock acquisition from step 2 fails because 3 bricks went down in the 4+2 setup.

  Mount-2:
  1) Tries to acquire lock on 'dir1'.
  2) Lock gets EAGAIN on brick-0, which leads to a blocking lock on brick-0. It doesn't matter what happens on mount-2 from here on.

  The fop on mount-1 which was put in the frozen list will hang, because no codepath will move it from the frozen list to any other list, and the lock will not be retried.

  Fix: Don't set lock->release to true if the lock is not acquired at the time of the lock-contention notification.

  fixes: bz#1743573
  Change-Id: Ie6630db8735ccf372cc54b873a3a3aed7a6082b7
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* glusterd, rpc, glusterfsd: fix coverity defects and put required annotations (Atin Mukherjee, 2019-09-10; 4 files, -3/+10)

  1404965 - Null pointer dereference
  1404316 - Program hangs
  1401715 - Program hangs
  1401713 - Program hangs

  Updates: bz#789278
  Change-Id: I6e6575daafcb067bc910445f82a9d564f43b75a2
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* cluster/ec: quorum-count implementation (Pranith Kumar K, 2019-09-08; 7 files, -59/+156)

  fixes: #721
  Change-Id: I5333540e3c635ccf441cf1f4696e4c8986e38ea8
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Fix coverity issues (Pranith Kumar K, 2019-09-07; 1 file, -12/+16)

  Fixed the following coverity issue in both flush/fsync:

  >>> CID 1404964: Null pointer dereferences (REVERSE_INULL)
  >>> Null-checking "fd" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.

      if (fd != NULL) {
          fop->fd = fd_ref(fd);
          if (fop->fd == NULL) {
              gf_msg(this->name, GF_LOG_ERROR, 0,
                     "Failed to reference a "
                     "file descriptor.");

  fixes: bz#1748836
  Change-Id: I19c05d585e23f8fbfbc195d1f3775ec528eed671
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
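
  The generic shape of a REVERSE_INULL defect and its resolution (illustrative C, not the ec code): either guard before the first dereference, or drop the redundant late check:

      #include <stddef.h>

      struct fdesc { int refcount; };

      int use_fd_buggy(struct fdesc *fd)
      {
          fd->refcount++;   /* dereferenced first: crashes if fd == NULL */

          if (fd != NULL)   /* ... so Coverity flags this late check */
              return fd->refcount;
          return -1;
      }

      int use_fd_fixed(struct fdesc *fd)
      {
          if (fd == NULL)   /* check before the first dereference */
              return -1;

          fd->refcount++;
          return fd->refcount;
      }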
* cluster/ec: Fail fsync/flush for files on update size/version failure (Pranith Kumar K, 2019-09-06; 5 files, -1/+80)

  Problem: If the update of size/version is not successful on the file, updates on the same stripe could lead to data corruption if the earlier unaligned write was not successful on all the bricks. The application won't have any knowledge of this, because the size/version update happens in the background.

  Fix: Fail fsync/flush on fds that were opened before the update of size/version went bad.

  fixes: bz#1748836
  Change-Id: I9d323eddcda703bd27d55f340c4079d76e06e492
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* glusterd: IPV6 hostname address is not parsed correctly (Mohit Agrawal, 2019-09-06; 1 file, -5/+11)

  Problem: An IPv6 hostname address is not parsed correctly in the function glusterd_check_brick_order.

  Solution: Update the code to parse the hostname address.

  Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
  Fixes: bz#1747746
  Credits: Amgad Saleh <amgad.saleh@nokia.com>
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
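
  A common pitfall such fixes address: brick strings look like host:/path, and splitting on the first ':' breaks IPv6 literals, which contain colons themselves. A stand-alone illustration of the safer split (hypothetical helper, not the glusterd code):

      #include <stdio.h>
      #include <string.h>

      /* Split "host:/brick/path" at the LAST colon so IPv6 literals such as
       * "f00d::1" stay intact; the first colon would truncate them. */
      static int split_brick(const char *brick, char *host, size_t hlen,
                             const char **path)
      {
          const char *sep = strrchr(brick, ':');

          if (sep == NULL || (size_t)(sep - brick) >= hlen)
              return -1;

          memcpy(host, brick, sep - brick);
          host[sep - brick] = '\0';
          *path = sep + 1;
          return 0;
      }

      int main(void)
      {
          char host[64];
          const char *path;

          if (split_brick("f00d::1:/bricks/b0", host, sizeof(host), &path) == 0)
              printf("host=%s path=%s\n", host, path); /* host=f00d::1 */
          return 0;
      }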
* graph/cleanup: Fix race in graph cleanup (Mohammed Rafi KC, 2019-09-05; 1 file, -2/+63)

  We were unconditionally cleaning up the graph when we got a child_down followed by a parent_down. But this is prone to a race condition when some of the bricks are already disconnected. In this case, even before the last child_down is executed in the client xlator code, we might have freed the graph, because the child_down event was already received.

  To fix this race, we have introduced a check to see if all client xlators have cleared their reconnect chain and called the child_down for the last time.

  Change-Id: I7d02813bc366dac733a836e0cd7b14a6fac52042
  fixes: bz#1727329
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* afr/lookup: Pass xattr_req in while doing a selfheal in lookup (Mohammed Rafi KC, 2019-09-05; 3 files, -5/+16)

  We were not passing xattr_req when doing a name self-heal or a metadata heal. Because of this, some xdata was missing, which caused I/O errors.

  Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
  Fixes: bz#1728770
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* posix*.c: remove unneeded strlen() calls (Yaniv Kaul, 2019-09-05; 7 files, -42/+38)

  In various places, we can re-use knowledge of the string length or the result of snprintf() and such, instead of calling strlen().

  Change-Id: I4c9b1decf1169b3f8ac83699a0afbd7c38fad746
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
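
  The idiom in isolation (generic C): snprintf() already returns the number of characters it wrote, so a follow-up strlen() re-scans the buffer for a value we already have:

      #include <stdio.h>

      int main(void)
      {
          char path[256];

          /* Before:
           *   snprintf(path, sizeof(path), "/bricks/%s", "b0");
           *   len = strlen(path);            -- second pass over the string
           * After: reuse snprintf()'s return value (characters written,
           * excluding the NUL, assuming no truncation). */
          int len = snprintf(path, sizeof(path), "/bricks/%s", "b0");
          if (len < 0 || (size_t)len >= sizeof(path))
              return 1; /* encoding error or truncation */

          printf("%s (%d chars)\n", path, len);
          return 0;
      }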
* glusterd-store.c: removal of dead code (Yaniv Kaul, 2019-09-05; 1 file, -130/+0)

  These functions do not seem to be in use.

  Change-Id: Ie76baf2a9727b9ba0e66f234226b1e62788245f2
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* debug/error-gen: Set count correctly for short-writes (Pranith Kumar K, 2019-08-30; 1 file, -0/+1)

  The vector count is reduced to 1, but this is not reflected at the time of winding the call. This leads to a crash when short-writes are done using error-gen. Set the correct count to fix the problem.

  fixes: bz#1746320
  Change-Id: I037b60b7e321f2f50b71fb52c43c64707cf114ca
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* build: Fix libglusterd Makefile target (Anoop C S, 2019-08-30; 1 file, -4/+0)

  - Fix the libglusterd.la target path in cli/src/Makefile.am
  - Like libglusterfs, libgfxdr and libgfrpc, libglusterd is also expected to be ready by the time the xlators/mgmt/glusterd sources are compiled. Therefore this change removes the additional mention of the libglusterd.la target in Makefile.am.

  Change-Id: I1b787316cfb6cd7487f49e661490b9788a0b80b3
  Updates: bz#1193929
  Signed-off-by: Anoop C S <anoopcs@redhat.com>
* afr: wake up index healer threads (Ravishankar N, 2019-08-30; 5 files, -11/+25)

  ...whenever shd is re-enabled after disabling, or there is a change in `cluster.heal-timeout`, without needing to restart shd or wait for the current `cluster.heal-timeout` seconds to expire. See BZ 1743988 for more details.

  Change-Id: Ia5ebd7c8e9f5b54cba3199c141fdd1af2f9b9bfe
  fixes: bz#1744548
  Reported-by: Glen Kiessling <glenk1973@hotmail.com>
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* glusterd: Fixed incorrect size argument (N Balachandran, 2019-08-27; 1 file, -2/+3)

  An incorrect size argument to snprintf caused the glusterd process to crash on startup. This has been fixed.

  Change-Id: Iddafb5468866d0182cd8239210c92c893e643285
  Fixes: bz#1745965
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
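
  A classic instance of this bug class (generic illustration, not the glusterd code): passing sizeof a pointer instead of the destination's capacity truncates silently, while a too-large size can overflow:

      #include <stdio.h>

      void build_sockpath(char *buf, size_t buflen, const char *dir)
      {
          /* Buggy variants:
           *   snprintf(buf, sizeof(buf), ...)   -- sizeof a pointer, not the array
           *   snprintf(buf, TOO_BIG, ...)       -- lies about the buffer size
           * Correct: pass the real capacity of the destination. */
          snprintf(buf, buflen, "%s/glusterd.socket", dir);
      }

      int main(void)
      {
          char path[108];
          build_sockpath(path, sizeof(path), "/var/run/gluster");
          printf("%s\n", path);
          return 0;
      }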
* glusterd: Unused value coverity fix (Sanju Rakonde, 2019-08-26; 1 file, -0/+5)

  CID: 1288765

  updates: bz#789278
  Change-Id: Ie6b01f81339769f44d82fd7c32ad0ed1a697c69c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: stop stale bricks during handshaking in brick mux mode (Atin Mukherjee, 2019-08-25; 4 files, -9/+55)

  This patch addresses two problems:
  1. During friend handshaking, if a volume is imported due to a change in the version, the old bricks were not stopped, which would lead to a situation where bricks run with old volfiles.
  2. As part of attaching the shd service in glusterd_attach_svc, there might be a case where the volume for which we're attempting to attach a shd service becomes stale and is in the process of deletion. Hence, on every retry (if the rpc connection isn't ready), check for the existence of the volume and only then attempt the further attach request.

  Fixes: bz#1733425
  Change-Id: I6bac6b871f7e31cb5bf277db979289dec196a03e
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd: Add warning and abort in case of failures in migration during remove-brick commit (Vishal Pandey, 2019-08-25; 1 file, -0/+11)

  Problem: Currently remove-brick commit goes through even though there were files that failed to migrate or were skipped. There is no warning raised to the user.

  Solution: Add a check in the remove-brick staging phase to verify whether the status of the rebalance process is complete but there were failures or some skipped files during migration. In this case the user will be given a warning and remove-brick commit will be aborted. The user will need to use the force option to remove the bricks.

  Fixes: bz#1514683
  Signed-off-by: Vishal Pandey <vpandey@redhat.com>
  Change-Id: I014d0f0afb4b2fac35ab0de52227f98dbae079d5
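
  The resulting operator workflow: check migration status first, and only force the commit once the failures or skips are understood:

      gluster volume remove-brick <volname> <brick> status
      gluster volume remove-brick <volname> <brick> commit   (warns and aborts on failures or skips)
      gluster volume remove-brick <volname> <brick> force    (overrides the check)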
* cluster/afr - Unused variables (Barak Sason, 2019-08-24; 2 files, -6/+9)

  - Minor change to the if-else structure to avoid code duplication.
  - Added logging in case a method call fails.

  CID: 1394654
  Updates: bz#789278
  Change-Id: Ibef4450dc89ddd3bf951303d5b87f503924fd250
  Signed-off-by: Barak Sason <bsasonro@redhat.com>
* Revert "glusterd: (storhaug) remove ganesha (843e1b0)" (Jiffin Tony Thottan, 2019-08-24; 13 files, -15/+1320)

  Please note, as an additional change, the macro GLUSTERD_GET_SNAP_DIR moved from glusterd-store.c to glusterd-snapshot-utils.h.

  Change-Id: I811efefc148453fe32e4f0d322e80455447cec71
  updates: #663
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* posix: log aio_error return codes in posix_fs_health_check (Mohit Agrawal, 2019-08-22; 1 file, -3/+2)

  Problem: Sometimes a brick goes down because the health check thread fails, without logging the error codes returned by the aio system calls. As per the aio_error man page, it returns a positive error number if the asynchronous I/O operation failed.

  Solution: Log the aio_error return codes in the error message.

  Change-Id: I2496b1bc16e602b0fd3ad53e211de11ec8c641ef
  Fixes: bz#1744519
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
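
  How aio_error() reports failures, in a stand-alone form (generic POSIX AIO, not the brick code): the return value itself is the errno-style code worth logging:

      #include <aio.h>
      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      /* Check an AIO request and log the precise failure code. */
      static int check_aio(const struct aiocb *cb)
      {
          int err = aio_error(cb);   /* 0, EINPROGRESS, or a positive errno */

          if (err == EINPROGRESS)
              return 1;              /* still running */

          if (err != 0) {
              /* Logging err (not just "failed") pinpoints EAGAIN vs EIO etc. */
              fprintf(stderr, "health check aio failed: %s (%d)\n",
                      strerror(err), err);
              return -1;
          }
          return 0;                  /* completed successfully */
      }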
* ctime: Fix incorrect realtime passed to frame->root->ctime (Kotresh HR, 2019-08-22; 1 file, -1/+1)

  On systems that don't support "timespec_get" (e.g., CentOS 6), it was using "clock_gettime" with "CLOCK_MONOTONIC" to get the unix epoch time, which is incorrect. This patch introduces "timespec_now_realtime", which uses "clock_gettime" with "CLOCK_REALTIME" and fixes the issue.

  Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  fixes: bz#1743652
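
  The distinction in one compilable example (generic POSIX C, not the libglusterfs helper): CLOCK_MONOTONIC counts from an arbitrary start point (often boot), while CLOCK_REALTIME is wall-clock time since the epoch, which is what file timestamps need:

      #include <stdio.h>
      #include <time.h>

      int main(void)
      {
          struct timespec mono, real;

          clock_gettime(CLOCK_MONOTONIC, &mono); /* seconds since an arbitrary
                                                  * start point, e.g. boot */
          clock_gettime(CLOCK_REALTIME, &real);  /* seconds since the Unix epoch:
                                                  * the right base for ctime */

          printf("monotonic: %lld s (NOT an epoch timestamp)\n",
                 (long long)mono.tv_sec);
          printf("realtime : %lld s (epoch timestamp)\n",
                 (long long)real.tv_sec);
          return 0;
      }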