path: root/tests/bugs/core
Commit message (Author, Date, Files, Lines -/+)
* tests: provide an option to mark tests as 'flaky' (Amar Tumballi, 2020-08-20, 1 file, -56/+0)

  * also add some time gap in other tests to see if we get things properly
  * create a directory 'tests/000/', which can host any tests that are flaky
  * move all the tests mentioned in the issue to the above directory
  * as the above dir gets tested first, all flaky tests would be reported quickly
  * change `run-tests.sh` to continue tests even if flaky tests fail

  Reference: gluster/project-infrastructure#72
  Updates: #1000
  Change-Id: Ifdafa38d083ebd80f7ae3cbbc9aa3b68b6d21d0e
  Signed-off-by: Amar Tumballi <amar@kadalu.io>
* tests: fix spurious failure of bug-1402841.t-mt-dir-scan-race.t (Ravishankar N, 2019-09-04, 1 file, -4/+5)

  Problem: Since commit 600ba94183333c4af9b4a09616690994fd528478, shd starts
  healing as soon as it is toggled from disabled to enabled. This was causing
  the following line in the .t to fail on a 'fast' machine (always on my
  laptop and sometimes on the jenkins slaves):

      EXPECT_NOT "^0$" get_pending_heal_count $V0

  because by the time shd was disabled, the heal was already completed.

  Fix: Increase the no. of files to be healed and make it a variable called
  FILE_COUNT, should we need to bump it up further because the machines
  become even faster. Also created pending metadata heals to increase the
  time taken to heal a file.

  fixes: bz#1748744
  Change-Id: I5a26b08e45b8c19bce3c01ce67bdcc28ed48198d
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* core: Brick is not able to detach successfully in brick_mux environment (Mohit Agrawal, 2019-04-14, 1 file, -0/+33)

  Problem: In a brick_mux environment, while volumes are stopped in a loop,
  bricks are not detached successfully. Bricks are not detached because
  xprtrefcnt has not become 0 for the detached brick. At the time of
  initiating the brick detach process, server_notify saves xprtrefcnt for
  the detached brick, and once the counter becomes 0, server_rpc_notify
  spawns a server_graph_janitor_threads to clean up brick resources.
  xprtrefcnt does not reach 0 because the socket framework is not working,
  due to 0 being assigned as an fd for the socket. In commit
  dc25d2c1eeace91669052e3cecc083896e7329b2 there was a change in changelog
  fini to close htime_fd if htime_fd is not negative; by default htime_fd
  is 0, so it closed fd 0 as well.

  Solution: Initialize htime_fd to -1 just after allocating changelog_priv
  with GF_CALLOC.

  Fixes: bz#1699025
  Change-Id: I5f7ca62a0eb1c0510c3e9b880d6ab8af8d736a25
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
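The failure mode is the classic zeroed-fd pitfall: GF_CALLOC zero-fills the struct, and 0 is a valid descriptor. A minimal sketch of the pattern, with hypothetical struct and function names rather than the actual changelog code:

```c
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical stand-in for changelog_priv; the real struct differs. */
typedef struct {
    int htime_fd;              /* descriptor for the htime file */
} changelog_priv_sketch_t;

changelog_priv_sketch_t *priv_alloc(void)
{
    /* calloc() zero-fills, so htime_fd silently becomes 0 -- a *valid*
     * descriptor (stdin, or whatever the process put on fd 0, e.g. a
     * socket). A "close if non-negative" check in fini would then close
     * a descriptor the changelog does not own. */
    changelog_priv_sketch_t *priv = calloc(1, sizeof(*priv));
    if (!priv)
        return NULL;
    priv->htime_fd = -1;       /* the fix: mark "no fd open" explicitly */
    return priv;
}

void priv_fini(changelog_priv_sketch_t *priv)
{
    if (priv->htime_fd >= 0)   /* now never closes fd 0 by accident */
        close(priv->htime_fd);
    free(priv);
}
```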
* test: Change glustershd_pid update in .t file (Mohit Agrawal, 2019-04-12, 1 file, -1/+2)

  Problem: bug-1650403.t && bug-858215.t are throwing an error at the time
  of accessing the glustershd pidfile.

  Solution: Use the ps command to find out the glustershd pid.

  Change-Id: I3477345b6220aa039e012e674cba21d741e9abab
  fixes: bz#1697486
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* tests: run nfs tests only if --enable-gnfs is provided (Amar Tumballi, 2019-01-24, 1 file, -0/+2)

  Fixes: bz#1665358
  Change-Id: Idbf88ec3ac683733b32c313377eeb72f2819bf0d
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* tests/bug-brick-mux-restart: add extra information (Amar Tumballi, 2019-01-24, 1 file, -1/+12)

  so that we can understand more about process memory and thread
  consumption. With this, we will also be able to understand more about
  the process details with brick-mux.

  updates: bz#1193929
  Change-Id: I147a3e3814fc37dfb635217d0a0f0184fae0994f
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* core: Resolve dict_leak at the time of destroying graph (Mohit Agrawal, 2019-01-14, 1 file, -0/+112)

  Problem: In some places the gluster code calls get_new_dict to create a
  dictionary without taking a reference, so at the time of dict_unref it
  becomes a leak.

  Solution: To resolve the same, call dict_new instead of get_new_dict.

  updates: bz#1650403
  Change-Id: I3ccbbf5af07079a4fa09aad2cd0458c8625b2f06
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
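The leak mechanics can be shown with a toy refcounted dictionary (illustrative names only; gluster's real dict API differs in detail). If destruction happens only when the count crosses exactly zero, an object created with a zero count is never destroyed:

```c
#include <stdlib.h>

/* Toy refcounted dictionary; not the real gluster dict API. */
typedef struct { int refcount; /* ... key/value storage ... */ } toy_dict_t;

toy_dict_t *toy_get_new_dict(void)   /* legacy: born with refcount 0 */
{
    return calloc(1, sizeof(toy_dict_t));
}

toy_dict_t *toy_dict_new(void)       /* preferred: born holding one ref */
{
    toy_dict_t *d = toy_get_new_dict();
    if (d)
        d->refcount = 1;
    return d;
}

void toy_dict_unref(toy_dict_t *d)
{
    if (!d)
        return;
    if (--d->refcount == 0)          /* destroy on the exact 0 crossing */
        free(d);
    /* A dict from toy_get_new_dict() starts at 0, so unref drops it to
     * -1 and the branch above never fires: the allocation leaks. */
}
```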
* tests: Brick is getting OOM in ./tests/bugs/core/bug-1432542-mpx-restart-crash.t (Mohit Agrawal, 2018-12-21, 1 file, -2/+3)

  The test case "tests/bugs/core/bug-1432542-mpx-restart-crash.t" creates
  20 2x3 volumes after enabling brick_mux. At the time of creating the last
  volume, the brick gets OOM-killed because brick memory consumption has
  increased from its previous level due to these patches:
  https://review.gluster.org/#/c/glusterfs/+/19997/,
  https://review.gluster.org/#/c/glusterfs/+/20362/

  To avoid the OOM, reduce NUM_VOLS to 15 so that brick consumption is
  reduced.

  Change-Id: Ib98b47a3db6b990ff22c7e57396d51e7fef5c7e8
  fixes: bz#1661214
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* iobuf: Get rid of pre allocated iobuf_pool and use per thread mem pool (Poornima G, 2018-12-18, 1 file, -0/+2)

  The current implementation of iobuf_pool has two problems:
  - prealloc of 12.5MB memory; this limits the scale factor of the gluster
    processes due to RAM requirements
  - lock contention, as the current implementation has one global
    iobuf_pool lock

  Credits for debugging and addressing the same go to Krutika Dhananjay
  <kdhananj@redhat.com>. Issue: #410

  Hence changing the iobuf implementation to use a per-thread mem pool.
  This may theoretically appear to cause a perf dip as there is no
  preallocation. But the per-thread mem pool will not have a significant
  perf impact, as the last allocated memory is kept alive for subsequent
  allocs for some time. The worst case would be if the iobufs requested
  are of random sizes each time; the best case is if we get iobuf requests
  of the same size. From the perf tests, this patch did not seem to cause
  any perf decrease.

  Note that, with this patch, rdma performance is going to degrade
  drastically. In one of the previous patchsets we had fixes to not
  degrade rdma perf, but rdma is not supported and also not tested [1].
  Hence the decision was to not have code in rdma that is not tested and
  not supported.

  [1] https://lists.gluster.org/pipermail/gluster-users.old/2018-July/034400.html

  Updates: #325
  Change-Id: Ic2ef3bd498f9250dea25f25ba0c01fde19584b27
  Signed-off-by: Poornima G <pgurusid@redhat.com>
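A minimal sketch of the keep-alive idea, with hypothetical names (this is not the actual iobuf code): each thread caches its most recently returned buffer, so a repeat request of the same size reuses it without taking any global lock:

```c
#include <stdlib.h>

/* One cached buffer per thread: no global lock, and a repeated
 * same-size request reuses the previous allocation. */
static __thread void  *tl_cached_buf = NULL;
static __thread size_t tl_cached_size = 0;

void *iobuf_get_sketch(size_t size)
{
    if (tl_cached_buf && tl_cached_size >= size) {
        void *buf = tl_cached_buf;      /* best case: reuse, no malloc */
        tl_cached_buf = NULL;
        return buf;
    }
    return malloc(size);                /* worst case: sizes never repeat */
}

void iobuf_put_sketch(void *buf, size_t size)
{
    free(tl_cached_buf);                /* keep only the latest buffer alive */
    tl_cached_buf = buf;
    tl_cached_size = size;
}
```

The worst and best cases described above fall out directly: random sizes always miss the per-thread cache, repeated sizes always hit it.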
* tests: brick-mux-fd-cleanup.t should be under core directory (Atin Mukherjee, 2018-10-31, 1 file, -0/+78)

  Fixes: bz#1637934
  Change-Id: I5f95beab62bd2bdde3bbee94c308b0ad03e94379
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* dht: utilize the framework to pass-through xlator tasks (Amar Tumballi, 2018-09-19, 1 file, -1/+1)

  Also fixes the issue caused by not converting the fn function back after
  getting its address: we wanted the value of the field, not the address
  of the pt_fop field.

  With this patch, DHT will always be started in pass-through mode if the
  number of subvols is just 1. Fixes some tests to make sure DHT is in
  full config (i.e., subvols > 1):
  - increased timeout of the brick-mux test as it was bordering on 300
    seconds
  - changed the volume type to the supported 'replica 3' from 'replica 2'
  - also, no DHT tests should assume the presence of DHT when there is
    just 1 brick in the volume

  Credits: Nithya B <nbalacha@redhat.com>
  fixes: #405
  Change-Id: I8e55239ce58d6ac6ae1901e2e384be1ecbd33d6e
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12, 1 file, -43/+42)

  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* io-stats: dump io-stats info in /var/run/gluster (Amar Tumballi, 2018-09-05, 1 file, -6/+6)

  It wouldn't make sense to allow the iostats file to be written in *any*
  directory. While the formatting makes sure we try to append the
  io-stats-name to the file, so the chance of overwriting an existing file
  is slim, in any case it makes sense to restrict dumping to one
  directory.

  Below are sample commands, and the files created for the corresponding
  values:

  $ setfattr -n trusted.io-stats-dump -v file-for-dump $M0
  In this case, the file would be /var/run/gluster/file-for-dump

  $ setfattr -n trusted.io-stats-dump -v /dir1/dir2/file-for-dump $M0
  In this case, the dump file is /var/run/gluster/dir1-dir2-file-for-dump

  Note that the value passed for this virtual xattr is treated as a file
  name, and even if the value has '/' in it, it is changed to '-' for
  sanity.

  Fixes: bz#1625106
  Change-Id: Id9ae6a40a190b8937c51662e6e1c2a0f6c86a0e0
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
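A sketch of the described sanitization (hypothetical helper name, not the actual io-stats code): confine the dump file to /var/run/gluster and flatten any '/' in the value to '-':

```c
#include <stdio.h>
#include <string.h>

static void build_dump_path(const char *value, char *out, size_t outlen)
{
    /* Always dump under /var/run/gluster, skipping a leading '/'. */
    snprintf(out, outlen, "/var/run/gluster/%s",
             *value == '/' ? value + 1 : value);
    /* Flatten any remaining '/' in the user-supplied part to '-'. */
    for (char *p = out + strlen("/var/run/gluster/"); *p; p++)
        if (*p == '/')
            *p = '-';
}

int main(void)
{
    char path[256];
    build_dump_path("/dir1/dir2/file-for-dump", path, sizeof(path));
    printf("%s\n", path);   /* /var/run/gluster/dir1-dir2-file-for-dump */
    return 0;
}
```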
* tests: Fix cleanup routine for some mux tests (ShyamsundarR, 2018-08-13, 1 file, -3/+2)

  Some of the mux tests set a trap to catch test exit and call cleanup.
  This causes cleanup to be invoked twice in case the test times out, or
  even otherwise, as include.rc also sets a trap to cleanup on exit (TERM
  and others).

  This leads to the tarballs generated on failures for these tests being
  empty, which does not aid debugging.

  This patch corrects this pattern across the tests to the more standard
  cleanup at the end.

  Fixes: bz#1615037
  Change-Id: Ib83aeb09fac2aa591b390b9fb9e1f605bfef9a8b
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* tests: Increase timeout for mpx restart crash test (ShyamsundarR, 2018-08-09, 1 file, -3/+6)

  In lcov based regression testing environments, all tests take more time
  than they do in centos7 regressions, possibly due to code
  instrumentation for lcov purposes. Due to this, the test
  bug-1432542-mpx-restart-crash.t constantly times out. This patch
  increases the timeout for the same to enable lcov tests to pass on a
  more regular basis.

  It was also noted by Nithya that the test at times generated an OOM kill
  on the regression machines. In order to reduce the runtime memory
  footprint of the tests, FUSE mounts are unmounted as soon as the
  required test is complete.

  Fixes: bz#1608568
  Change-Id: I37f8d4b45807a69c52c7c7df4923c0fc33fab4e4
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* core: Increase SCRIPT_TIMEOUT for bug-1432542-mpx-restart-crash.t (Mohit Agrawal, 2018-07-09, 1 file, -1/+1)

  BUG: 1599250
  Change-Id: I27bda2a0764580289e7154766e13a0c358cba3a8
  fixes: bz#1599250
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* tests: bring option of per test timeout (Amar Tumballi, 2018-02-15, 1 file, -0/+2)

  This uses the 'timeout' command with a 300 second default. Right now,
  there is just 1 test which takes more than that on a properly set up
  machine.

  Ideally, the best default would be something like 30 seconds, and if a
  test is supposed to take more than that, the owner should knowingly add
  a timeout line to the test. That way, it makes test writers think about
  a time limit too.

  Change-Id: I747005ce1f208aeb2ecbf899e8feea487ecd21a0
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* cluster/ec: OpenFD heal implementation for EC (Sunil Kumar Acharya, 2018-01-05, 1 file, -11/+1)

  Existing EC code doesn't try to heal the OpenFD to avoid unnecessary
  healing of the data later.

  The fix implements the healing of open FDs before carrying out file
  operations on them, by making an attempt to open the FDs on the required
  up nodes.

  BUG: 1431955
  Change-Id: Ib696f59c41ffd8d5678a484b23a00bb02764ed15
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* tests: fix bug-1432542-mpx-restart-crash.t spurious failure (Atin Mukherjee, 2017-11-28, 1 file, -1/+7)

  Change-Id: Ied1215bfec0ccf2ec8ee55a0aaf618517b67bd55
  BUG: 1517961
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* tests: Spurious failure multiplex-limit-issue-151.t (N Balachandran, 2017-11-27, 1 file, -1/+1)

  A timing issue caused the remove-brick commit to fail.
  Replaced 'remove-brick commit' with 'remove-brick force'.

  Change-Id: I69144b2f7be34095dbd3a7d182e0bf01b27fb0a4
  BUG: 1517904
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* glusterd: Introduce option to limit no. of muxed bricks per process (Samikshan Bairagya, 2017-07-10, 1 file, -0/+57)

  This commit introduces a new global option that can be set to limit the
  number of multiplexed bricks in one process. Usage:

      # gluster volume set all cluster.max-bricks-per-process <value>

  If this option is not set, then multiplexing will happen for now with no
  limitations set; i.e. a brick process will have as many bricks
  multiplexed to it as possible. In other words, the current multiplexing
  behaviour won't change if this option isn't set to any value.

  This commit also introduces a brick process instance that contains
  information about brick processes, like the number of bricks handled by
  the process (which is 1 in non-multiplexing cases), the list of bricks,
  and the port number, which also serves as a unique identifier for each
  brick process instance. The brick process list is maintained in
  'glusterd_conf_t'.

  Updates: #151
  Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17469
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tests: remove tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t (Atin Mukherjee, 2017-04-12, 1 file, -60/+0)

  bug-1421590-brick-mux-reuse-ports.t seems to be a bad test to me, and
  here is my reasoning:

  This test tries to check if ports are reused or not. When a volume is
  restarted, by the time glusterd tries to allocate a new port to one of
  the brick processes of the volume, there is no guarantee that the older
  port will be allocated, given the kernel might take some extra time to
  free up the port within this time frame. From
  https://build.gluster.org/job/regression-test-burn-in/2932/console we
  can clearly see that post restart of the volume, glusterd allocated
  ports 49153 & 49155 for brick1 & brick2 respectively, but the test was
  expecting the ports to match 49155 & 49156, which were allocated before
  the volume was restarted.

  Change-Id: Id887bf28445261d4de04fc7502e58057659c9512
  BUG: 1441035
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17033
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
* glusterd: hold off volume deletes while still restarting bricks (Jeff Darcy, 2017-03-30, 2 files, -0/+96)

  We need to do this because modifying the volume/brick tree while
  glusterd_restart_bricks is still walking it can lead to segfaults.
  Without waiting, we could accidentally "slip in" while attach_brick has
  released big_lock between retries and make such a modification.

  Change-Id: I30ccc4efa8d286aae847250f5d4fb28956a74b03
  BUG: 1432542
  Signed-off-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-on: https://review.gluster.org/16927
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
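The shape of the fix is a standard wait-for-completion pattern. A minimal pthread sketch under that assumption (illustrative names; glusterd's real synchronization revolves around its big_lock):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  restart_done = PTHREAD_COND_INITIALIZER;
static bool restarting = true;   /* set while restart_bricks walks the tree */

void restart_bricks_finished(void)
{
    pthread_mutex_lock(&lock);
    restarting = false;
    pthread_cond_broadcast(&restart_done);   /* wake any pending deletes */
    pthread_mutex_unlock(&lock);
}

void volume_delete(void)
{
    pthread_mutex_lock(&lock);
    while (restarting)                       /* hold off until the walk ends */
        pthread_cond_wait(&restart_done, &lock);
    /* ... now safe to modify the volume/brick tree ... */
    pthread_mutex_unlock(&lock);
}
```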
* core: Clean up pmap registry up correctly on volume/brick stop (Samikshan Bairagya, 2017-02-27, 1 file, -0/+55)

  This commit changes the following:

  1. In glusterfs_handle_terminate, send out individual pmap signout
     requests to glusterd for every brick.
  2. Add another parameter to the glusterfs_mgmt_pmap_signout function to
     pass the brickname that needs to be removed from the pmap registry.
  3. Make sure pmap_registry_search doesn't break out from the loop
     iterating over the list of bricks per port if the first brick entry
     corresponding to a port is whitespaced out (see the sketch after
     this entry).
  4. Make sure the pmap registry entries are removed for other daemons
     like snapd.

  Change-Id: I69949874435b02699e5708dab811777ccb297174
  BUG: 1421590
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/16689
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
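Point 3 can be illustrated generically (hypothetical data layout, not the real pmap registry): when a stopped brick's entry is blanked rather than removed, a search over the per-port brick list must skip blanks instead of stopping at them:

```c
#include <stdio.h>
#include <string.h>

/* Bricks registered on one port; the first one was blanked on stop. */
static const char *bricknames[] = { "", "vol0-brick1", "vol0-brick2" };

int port_has_brick(const char *brick)
{
    for (size_t i = 0; i < sizeof(bricknames) / sizeof(bricknames[0]); i++) {
        if (bricknames[i][0] == '\0')
            continue;               /* the fix: skip blanks, don't break */
        if (strcmp(bricknames[i], brick) == 0)
            return 1;
    }
    return 0;
}

int main(void)
{
    /* With 'break' on the blank first entry, this lookup would fail
     * and the still-live bricks on the port would become invisible. */
    printf("%d\n", port_has_brick("vol0-brick2"));   /* prints 1 */
    return 0;
}
```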
* glusterd: take conn->lock around operations on conn->reconnect (Jeff Darcy, 2017-02-18, 1 file, -0/+25)

  Failure to do this could lead to a race in which a timer would be
  removed twice concurrently, corrupting the timer list (because
  gf_timer_call_cancel has no internal protection against this) and
  possibly causing a crash.

  Change-Id: Ic1a8b612d436daec88fd6cee935db0ae81a47d5c
  BUG: 1421721
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/16662
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
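A minimal sketch of the locking pattern the fix implies (hypothetical types; the real code cancels conn->reconnect via gf_timer_call_cancel): claim and clear the timer pointer under conn->lock so that only one thread ever performs the cancel:

```c
#include <pthread.h>
#include <stdlib.h>

struct timer { int armed; };

static void timer_cancel(struct timer *t)   /* not safe to call twice */
{
    t->armed = 0;
    free(t);
}

struct conn {
    pthread_mutex_t lock;
    struct timer   *reconnect;   /* NULL when no timer is armed */
};

void conn_cancel_reconnect(struct conn *c)
{
    struct timer *t;

    pthread_mutex_lock(&c->lock);
    t = c->reconnect;            /* claim the timer under the lock... */
    c->reconnect = NULL;         /* ...so a concurrent caller sees NULL */
    pthread_mutex_unlock(&c->lock);

    if (t)
        timer_cancel(t);         /* at most one thread reaches here */
}
```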
* tests: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t (Krutika Dhananjay, 2016-12-19, 1 file, -0/+10)

  Check that shd is up before executing the 'volume heal' command.

  Change-Id: Ib510a5de06d732fd3a738e90fa16376698479897
  BUG: 1405554
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/16169
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
* syncop: fix conditional wait bug in parallel dir scan (Ravishankar N, 2016-12-09, 1 file, -0/+31)

  Problem: The issue as seen by the user is detailed in the BZ, but what
  is happening is that if the no. of items in the wait queue == max-qlen,
  syncop_mt_dir_scan() does a pthread_cond_wait until the launched
  synctask workers dequeue the queue. But if for some reason a worker
  fails, the queue is never emptied, due to which further invocations of
  syncop_mt_dir_scan() are blocked forever.

  Fix: Made some changes to _dir_scan_job_fn (the fixed pattern is
  sketched after this entry):
  - If a worker encounters an error while processing an entry, notify the
    readdir loop in syncop_mt_dir_scan() of the error, but continue to
    process other entries in the queue, decrementing the qlen as and when
    we dequeue elements, and ending only when the queue is empty.
  - If the readdir loop in syncop_mt_dir_scan() gets an error from the
    worker, stop the readdir+queueing of further entries.

  Change-Id: I39ce073e01a68c7ff18a0e9227389245a6f75b88
  BUG: 1402841
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/16073
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
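A minimal model of the fixed pattern, using hypothetical names rather than the actual syncop code: the worker keeps draining and signalling even after an error, so a producer blocked on a full queue always wakes up:

```c
#include <pthread.h>
#include <stdbool.h>

#define MAX_QLEN 8

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full = PTHREAD_COND_INITIALIZER;
static int  qlen = 0;           /* entries currently queued */
static bool scan_error = false; /* worker -> readdir-loop error flag */

static int process_entry(void) { return 0; /* stand-in for real work */ }

void producer_enqueue(void)
{
    pthread_mutex_lock(&m);
    while (qlen == MAX_QLEN && !scan_error)
        pthread_cond_wait(&not_full, &m);   /* blocks until a dequeue */
    if (!scan_error)
        qlen++;                             /* enqueue one entry */
    pthread_mutex_unlock(&m);
}

void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    while (qlen > 0) {
        qlen--;                          /* dequeue even after an error */
        pthread_mutex_unlock(&m);

        int ret = process_entry();       /* real work happens unlocked */

        pthread_mutex_lock(&m);
        if (ret != 0)
            scan_error = true;           /* tell the readdir loop to stop */
        pthread_cond_signal(&not_full);  /* always wake a blocked producer */
    }
    pthread_mutex_unlock(&m);
    return NULL;
}
```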
* logging: Fix per xl log level (Poornima G, 2016-08-15, 1 file, -0/+43)

  Currently the per-xlator loglevel setting doesn't work, due to a flaw in
  the loglevel checking. Fix the same.

  Per-xlator logging can be set using the below command, e.g.:

  setfattr -n trusted.glusterfs.patchy-md-cache.set-log-level -v TRACE /mnt/glusterfs/0

  Change-Id: I8ff1d15bd5693b6f682d99bee22a4bbb5eee646c
  BUG: 1362520
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Reviewed-on: http://review.gluster.org/15071
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* libglusterfs: fix glusterd statedump crash (Atin Mukherjee, 2016-08-04, 1 file, -1/+6)

  commit 3c04a91 removed setting typeStr to NULL if num_allocs is set to
  0; this has caused this regression. The code has been put back as it was
  earlier, and to avoid statedump printing all the NULL values, the check
  is modified to skip the records if num_allocs is 0 instead of
  total_allocs.

  Change-Id: Ib8bcc2fba908e88cf52b641c3f6bcba74f5e667c
  BUG: 1359190
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14987
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* io-stats: Fix io-stat dump to dump at all levels (Shyam, 2016-06-15, 1 file, -5/+56)

  A previous commit to fix the bug, where io-stat-dump was overwriting the
  dump file when the client and a brick were on the same host, failed to
  consider the existing behaviour where io-stats can help generate closely
  correlated sets of stats across clients and bricks, by triggering the
  dump using the same command. This was introduced in commit:
  0facb11220aea20a6573b656785922219c9650cf

  Further, by limiting the first io-stat to unwind the dump request, there
  is no way to trigger other io-stat xlators in the stack to dump their
  stat information.

  This bug is hence being fixed by this commit, keeping the following in
  mind:
  - We need to trigger io-stat-dump for all instances in the graph when
    this xattr is set
  - We need to write the output to different files, so that they do not
    overwrite each other's data
  - We need to prevent this xattr from being set on the path that is used
    to trigger the io-stat-dump information

  Change-Id: I31ec380f0d85e10313a9d7b977da0e1ec74638a6
  BUG: 1322825
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/14552
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: default value of nfs.disable, change from false to true (Kaleb S KEITHLEY, 2016-04-27, 1 file, -0/+1)

  Next step in the eventual deprecation of the glusterfs nfs server in
  favor of ganesha.nfsd. Also replace several open-coded strings with
  constants.

  Change-Id: If52f5e880191a14fd38e69b70a32b0300dd93a50
  BUG: 1092414
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/13738
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* statedump: Prevent (null) typestr to be printed (Pranith Kumar K, 2016-04-24, 1 file, -6/+2)

  Problem: After the commits:
  7e44c783ad731856956929f6614bbe045c26ea3a - lock: use spinlock only on multicore systems
  a6aecae2cd8171b8538bfe5d2800bdd157380b85 - nfs: fix lock variable type
  we see a lot of "[global.glusterfs - usage-type (null) memusage]" in
  statedump, because the lock status is not all-zeros after init, and the
  memcmp used to check that a datatype was never allocated is invalid.

  Fix: Changed the check for whether a datatype has ever been allocated to
  be based on total_allocs. Also removed setting typestr to NULL on
  gf_free even when num_allocs is 0, because even that was leading to a
  'null' memusage string being printed in statedump.

  BUG: 1329870
  Change-Id: If2b01a557cbdc787625db32e276e06cee3ac46ee
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/14054
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
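Why the memcmp check broke can be seen with a sketch of the record layout (hypothetical fields, not the real gluster accounting struct): once the embedded lock is initialized, the record is no longer all-zero bytes even if nothing was ever allocated from it:

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

struct mem_acct_rec_sketch {
    const char     *typestr;
    uint64_t        total_allocs;   /* only ever grows */
    uint64_t        num_allocs;     /* current live allocations */
    pthread_mutex_t lock;           /* init may write non-zero bytes */
};

int record_was_used_broken(const struct mem_acct_rec_sketch *rec)
{
    static const struct mem_acct_rec_sketch zero;
    /* Invalid once the lock is initialized: compares lock bytes too. */
    return memcmp(rec, &zero, sizeof(*rec)) != 0;
}

int record_was_used_fixed(const struct mem_acct_rec_sketch *rec)
{
    /* The fix: a counter that only ever grows tells the whole story. */
    return rec->total_allocs > 0;
}
```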
* qemu-block: mop leftover code (Prasanna Kumar Kalever, 2016-04-24, 1 file, -1/+1)

  This patch cleans off the code that was left over by '6860968', which
  removed qemu-block from the gluster code repo.

  Also updates 'bug-1168803-snapd-option-validation-fix.t', which
  previously used 'features.file-snapshot' for checking 'volume set' for
  some reason.

  Change-Id: I2c4f28e186b74a4ce55d48c0fa7f3f79ca1901b5
  BUG: 1198849
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/13964
  Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* io-stats: Fix overwriting of client profile by the bricks (Poornima G, 2016-04-12, 1 file, -0/+16)

  Issue: When the user executes the following command to generate the
  client perf profile, if the client is on the same node as the bricks,
  the bricks overwrite the profile info written by the clients. Also, the
  xattr "trusted.io-stats-dump" gets set on the mount point.

  setfattr -n trusted.io-stats-dump -v /tmp/iostat.log /mnt/fuse

  Fix: Unwind from setxattr when the xattr is 'io-stats-dump'.

  Change-Id: Iba0e5df2f25f4ba3b1399ac176a3f8a916ff372e
  BUG: 1322825
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Reviewed-on: http://review.gluster.org/13872
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* tests: Disable flush behind so that file will be closed (Pranith Kumar K, 2015-05-03, 1 file, -0/+1)

  Change-Id: I97eb8d236448e8783f09b4922aa465a2b1b3979b
  BUG: 1217788
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/10488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
* tests: move bug-1168875.t to tests/bugs/snapshot folder (Atin Mukherjee, 2015-04-06, 1 file, -96/+0)

  Change-Id: Ie4e960002b8de7e31f91365785c44df3ac04c88d
  BUG: 1178685
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10131
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* snapshot: append timestamp with snapname (Mohammed Rafi KC, 2015-03-10, 1 file, -1/+1)

  Appending a GMT timestamp to the snapname by default. If the
  no-timestamp flag is given during snapshot creation, then the timestamp
  will not be appended to the snapname.

  The initial consumer of this feature is Samba's Shadow Copy feature,
  which allows a Windows user to get previous revisions of a file. For
  this feature to work, snapshot names under the .snaps folder (USS)
  should have a timestamp appended in the following format:

  @GMT-YYYY.MM.DD-hh.mm.ss

  PS: https://www.samba.org/samba/docs/man/manpages/vfs_shadow_copy2.8.html

  This format is configurable via the Samba conf file. Due to a limitation
  in Windows directory access, the exact format cannot be used by USS.
  Therefore we have modified the format to:

  _GMT-YYYY.MM.DD-hh.mm.ss

  Snapshot scheduling also requires appending a timestamp to the snapshot
  name, therefore the timestamp is appended in snapshot creation itself
  instead of doing the changes in the snapview server.

  More info: https://www.mail-archive.com/gluster-users@gluster.org/msg18895.html

  Change-Id: Idac24670948cf4c0fbe916ea6690e49cbc832d07
  BUG: 1189473
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9597
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
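For illustration, a minimal sketch of producing the USS-friendly suffix described above from the current GMT time (this is not the actual glusterd code):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    char suffix[64];
    time_t now = time(NULL);
    struct tm gmt;

    gmtime_r(&now, &gmt);                 /* GMT, not local time */
    strftime(suffix, sizeof(suffix), "_GMT-%Y.%m.%d-%H.%M.%S", &gmt);
    printf("snap1%s\n", suffix);          /* e.g. snap1_GMT-2015.03.10-11.22.33 */
    return 0;
}
```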
* uss: disable memory accounting for the snapshot daemon (Raghavendra Bhat, 2015-01-28, 1 file, -1/+3)

  * Bring in an option to disable memory accounting for a glusterfs
    process. This reverses the changes done by commit
    7fba3a88f1ced610eca0c23516a1e720d75160cd.
  * Change the key from "memory-accounting" to "no-memory-accounting", as
    by default all glusterfs processes enable memory accounting now. So to
    disable memory accounting for some process, the "no-mem-accounting"
    argument has to be passed.

  Change-Id: I39c7cefb0fe764ea3e48f4e73e1305b084c5f497
  BUG: 1184366
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/9469
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* tests: move all test-cases into component subdirectories (Niels de Vos, 2015-01-06, 18 files, -0/+629)

  There are around 300 regression tests, 250 being in tests/bugs. Running
  a partial set of tests/bugs is not easy because this is a flat directory
  with almost all tests inside. It would be valuable to make partial
  tests/bugs runs easier, and to allow the use of multiple build hosts for
  a single commit, each running a subset of the tests for a quicker
  result.

  Additional changes made:
  - correct the include path for *.rc shell libraries and *.py utils
  - make the testcases pass checkpatch
  - arequal-checksum in afr/self-heal.t was never executed, now it is
  - include.rc now complains loudly if it fails to find env.rc

  Change-Id: I26ffd067e9853d3be1fd63b2f37d8aa0fd1b4fea
  BUG: 1178685
  Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reported-by: Atin Mukherjee <amukherj@redhat.com>
  URL: http://www.gluster.org/pipermail/gluster-devel/2014-December/043414.html
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/9353
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>