path: root/tests/bugs/glusterd
Commit log (newest first). Each entry: commit message (Author, Age; Files changed, Lines -deleted/+added)
* glusterd: glusterd fails to start if peers file has blank line (Gaurav Yadav, 2017-08-29; 1 file changed, -0/+29)

  Problem: On start of the glusterd service, glusterd fetches data from the
  store. While parsing that data, if the peers file contains a blank line,
  glusterd fails to start.

  Fix: With this fix, glusterd skips any blank lines while parsing the
  peers file.

  >Reviewed-on: https://review.gluster.org/18066
  >Tested-by: Gaurav Yadav <gyadav@redhat.com>
  >Smoke: Gluster Build System <jenkins@build.gluster.org>
  >Reviewed-by: Prashanth Pai <ppai@redhat.com>
  >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  >Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  >Reviewed-by: Niels de Vos <ndevos@redhat.com>

  Change-Id: I53cd65a54de5f57baef292b2118b70ffb7f99388
  BUG: 1486107
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-on: https://review.gluster.org/18124
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
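  The 29-line regression test added here isn't reproduced in this log. A
  minimal sketch of the idea, using the standard test harness (include.rc,
  cluster.rc) and assuming cluster.rc keeps each glusterd's working
  directory under $B<N>/glusterd:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../cluster.rc

    cleanup;

    TEST launch_cluster 2
    TEST $CLI_1 peer probe $H2
    EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

    # Inject a blank line into the first node's peers files, then
    # restart its glusterd; it must come up and still see its peer.
    TEST kill_glusterd 1
    for f in $B1/glusterd/peers/*; do printf '\n' >> "$f"; done
    TEST start_glusterd 1
    EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

    cleanup;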
* glusterd: replace-brick executes successfully when quorum is not met (Gaurav Yadav, 2017-08-29; 1 file changed, -0/+51)

  Problem: The replace-brick command executes successfully on a setup
  where server quorum is not met.

  Fix: With the fix, glusterd validates whether the server is in quorum
  during replace-brick staging.

  >Reviewed-on: https://review.gluster.org/18068
  >Smoke: Gluster Build System <jenkins@build.gluster.org>
  >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  >Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

  Change-Id: I8017154bb62bdcc6c6490e720ecfe9cde090c161
  BUG: 1486110
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-on: https://review.gluster.org/18125
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
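  A sketch of how the staging check can be exercised (two-node cluster,
  server quorum enabled, one node down; helper names are the standard
  cluster.rc ones, not the literal test added by this commit):

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../cluster.rc

    cleanup;

    TEST launch_cluster 2
    TEST $CLI_1 peer probe $H2
    EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

    TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
    TEST $CLI_1 volume set $V0 cluster.server-quorum-type server
    TEST $CLI_1 volume start $V0

    # Quorum is lost once the second glusterd goes down, so
    # replace-brick staging must now reject the operation.
    TEST kill_glusterd 2
    TEST ! $CLI_1 volume replace-brick $V0 $H1:$B1/$V0 $H1:$B1/${V0}_new commit force

    cleanup;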
* glusterd: disallow volume-specific options to be set with "all" as volume name (Atin Mukherjee, 2017-08-21; 1 file changed, -0/+25)

  Every .validate_fn defined in the volume map entry table refers to a
  volinfo object, so if we end up trying to set a volume-level option
  cluster-wide, glusterd crashes.

  >Reviewed-on: https://review.gluster.org/18052
  >Smoke: Gluster Build System <jenkins@build.gluster.org>
  >Reviewed-by: Prashanth Pai <ppai@redhat.com>
  >Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  >Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  >(cherry picked from commit 01abf7ee37702407403afcf9aa6c9019a0316e1d)

  Change-Id: I7c877aee0ff5c8c1d8c95662fdc8c8923355ae7b
  BUG: 1482804
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/18060
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd: Gluster should keep PID file in correct location (Gaurav Kumar Garg, 2017-08-12; 2 files changed, -3/+2)

  Currently Gluster keeps the process pid information of all the daemons
  and brick processes in the Gluster configuration file directory
  (i.e., /var/lib/glusterd/*). These pid files should be separate from the
  configuration files: deletion of the configuration file directory might
  result in serious problems, and /var/run/gluster is the default
  placeholder directory for pid files. So, with this fix, Gluster keeps
  the pid information of all processes in the /var/run/gluster/*
  directory.

  > BUG: 1258561
  > Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  > Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  > Reviewed-on: https://review.gluster.org/13580
  > Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  > cherry pick from commit 220d406ad13d840e950eef001a2b36f87570058d

  BUG: 1480459
  Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/18023
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
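  A sketch of how the new location can be asserted from a test; the
  vols/<volname> layout under /var/run/gluster is an assumption about the
  pidfile directory structure:

    #!/bin/bash
    . $(dirname $0)/../../include.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}1
    TEST $CLI volume start $V0

    # Brick pid files should now live under /var/run/gluster, not under
    # the glusterd config directory (the layout below is assumed).
    TEST ls /var/run/gluster/vols/$V0/

    cleanup;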
* glusterfs: Not able to mount running volume after enabling brick mux and stopping any volume (Mohit Agrawal, 2017-05-31; 1 file changed, -0/+3)

  Problem: After enabling brick mux, if any volume is down and a mount of
  a running volume is then attempted, the mount command hangs.

  Solution: With brick mux enabled, the server shares one data structure
  (server_conf) across all associated subvolumes. After any subvolume goes
  down in an ungraceful manner (e.g. its brick directory is removed), the
  posix xlator sends a GF_EVENT_CHILD_DOWN event to the parent xlators,
  and server notify updates child_up to false in server_conf. When a
  client tries to communicate with the server through a mount, it checks
  conf->child_up, finds it FALSE, and throws the message "translator are
  not yet ready". This patch updates the server_conf structure to save the
  child_up status per xlator. Another important correction in this patch
  is the cleanup of threads from the server-side xlators after a volume is
  stopped.

  BUG: 1453977
  Change-Id: Ic54da3f01881b7c9429ce92cc569236eb1d43e0d
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/17356
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* libglusterfs: Fix crash in glusterd while peer probing (Gaurav Yadav, 2017-05-26; 1 file changed, -0/+25)

  glusterd crashes when a port is explicitly set to a range outside what
  the short data type can hold, e.g.:

  sysctl net.ipv4.ip_local_reserved_ports="49152-49156"

  In the above case glusterd crashes while parsing the port. With this fix
  glusterd is able to handle port values in the range INT_MIN to INT_MAX.

  Change-Id: I7c75ee67937b0e3384502973d96b1c36c89e0fe1
  BUG: 1454418
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-on: https://review.gluster.org/17359
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
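  A sketch of a regression test for this, assuming a Linux host where the
  sysctl is available (the reserved range is saved and restored around the
  check):

    #!/bin/bash
    . $(dirname $0)/../../include.rc

    cleanup;

    old_ports=$(sysctl -n net.ipv4.ip_local_reserved_ports)

    # Reserve a range that does not fit in a short; glusterd used to
    # crash while parsing it.
    TEST sysctl -w net.ipv4.ip_local_reserved_ports="49152-49156"
    TEST glusterd
    TEST pidof glusterd

    TEST sysctl -w net.ipv4.ip_local_reserved_ports="$old_ports"

    cleanup;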
* glusterd: Don't spawn new glusterfsds on node reboot with brick-mux (Samikshan Bairagya, 2017-05-18; 1 file changed, -0/+54)

  With brick multiplexing enabled, upon a node reboot new bricks were not
  being attached to the first spawned brick process even though there
  weren't any compatibility issues. The reason is that upon a glusterd
  restart after a node reboot, since brick services aren't running,
  glusterd starts the bricks in a "no-wait" mode. So after a brick process
  is spawned for the first brick, there isn't enough time for the
  corresponding pid file to be populated with a value before the
  compatibility check is made for the next brick.

  This commit solves this by iteratively waiting for the pidfile to be
  populated in the brick compatibility comparison stage before checking if
  the brick process is alive.

  Change-Id: Ibd1f8e54c63e4bb04162143c9d70f09918a44aa4
  BUG: 1451248
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17307
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
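  A sketch of the reboot scenario, assuming the count_brick_processes
  helper from volume.rc (a pgrep-based glusterfsd count) and simulating
  the reboot with pkill:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume set all cluster.brick-multiplex on
    TEST $CLI volume create $V0 $H0:$B0/${V0}{1..3}
    TEST $CLI volume start $V0
    EXPECT 1 count_brick_processes

    # Simulate a node reboot: take down glusterd and every brick.
    TEST pkill gluster
    TEST glusterd

    # All three bricks must re-attach to a single brick process.
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_processes

    cleanup;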
* glusterd: remove useless options from glusterd's volume set table (Zhou Zhengping, 2017-05-17; 1 file changed, -10/+10)

  These options cause the brick's log to complain:
  _log_if_unknown_option] 0-patchy-quota: option 'timeout' is not recognized
  _log_if_unknown_option] 0-patchy-server: option 'ping-timeout' is not recognized

  Change-Id: Ida2add13f792736a4e52bfaf38d1169309283a3f
  BUG: 1449008
  Signed-off-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
  Reviewed-on: https://review.gluster.org/17213
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* glusterd: Make reset-brick work correctly if brick-mux is on (Samikshan Bairagya, 2017-05-10; 1 file changed, -0/+79)

  Reset-brick currently kills off the corresponding brick process.
  However, with brick multiplexing enabled, stopping the brick process
  would render all bricks attached to it unavailable. To handle this
  correctly, we need to make sure that the brick process is terminated
  only if brick multiplexing is disabled. Otherwise, we should send the
  GLUSTERD_BRICK_TERMINATE rpc to the respective brick process to detach
  the brick that is to be reset.

  Change-Id: I69002d66ffe6ec36ef48af09b66c522c6d35ac58
  BUG: 1446172
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17128
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: socketfile & pidfile related fixes for brick multiplexing feature (Mohit Agrawal, 2017-05-09; 3 files changed, -0/+116)

  Problem: With brick multiplexing on, after restarting glusterd the CLI
  does not show the pid of all brick processes in all volumes.

  Solution: While brick mux is on, all local brick processes communicate
  through one UNIX socket, but as per the current code
  (glusterd_brick_start) glusterd tries to communicate with a separate
  UNIX socket for each volume, populated based on the brick name and
  volume name. Because of the multiplexing design only one UNIX socket is
  opened, so glusterd throws a poller error and is not able to fetch the
  correct status of the brick process through the cli process. To resolve
  the problem, a new function glusterd_set_socket_filepath_for_mux is
  added, called by glusterd_brick_start to validate the existence of the
  socket path. To avoid continuous EPOLLERR errors in the logs, the
  socket_connect code is updated as well.

  Test: To reproduce the issue, follow these steps (sketched below):
  1) Create two distributed volumes (dist1 and dist2)
  2) Set cluster.brick-multiplex on
  3) Kill glusterd
  4) Run the command: gluster v status
  After applying the patch it shows the correct pid for all volumes.

  BUG: 1444596
  Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Reviewed-on: https://review.gluster.org/17101
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
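  A sketch of the reproduction steps above with the harness helpers; the
  real test verifies the pid column of the status output, while this
  sketch only asserts that status still succeeds after the restart:

    #!/bin/bash
    . $(dirname $0)/../../include.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume set all cluster.brick-multiplex on
    TEST $CLI volume create dist1 $H0:$B0/dist1
    TEST $CLI volume create dist2 $H0:$B0/dist2
    TEST $CLI volume start dist1
    TEST $CLI volume start dist2

    # Kill only glusterd (the multiplexed brick process stays up).
    TEST pkill glusterd
    TEST glusterd

    TEST $CLI volume status dist1
    TEST $CLI volume status dist2

    cleanup;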
* Fixes quota aux mount failure (Sanoj Unnikrishnan, 2017-05-08; 1 file changed, -1/+0)

  The aux mount is created on the first limit/remove_limit/list command
  and remains until the volume is stopped / deleted / quota is disabled,
  at which point we do a lazy unmount. If the process is uncleanly
  terminated, the mount entry remains and we get a (Transport
  disconnected) error on subsequent attempts to run quota
  list/limit-usage/remove commands.

  Second issue: there is also a risk of an inadvertent rm -rf on
  /var/run/gluster causing data loss for the user. Ideally, /var/run is a
  temp path for application use and should not cause any data loss to
  persistent storage.

  Solution:
  1) Unmount the aux mount after each use.
  2) Clean any stale mount before mounting.

  One caveat with doing a mount/unmount on each command is that we cannot
  use the same mount point for both list and limit commands. The reason is
  that the list command needs the mount to be accessible in the cli after
  the response from glusterd, so it could be unmounted by a limit command
  executed in parallel (had we used the same mount point). Hence we use
  separate mount points for the list and limit commands.

  Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
  BUG: 1433906
  Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
  Reviewed-on: https://review.gluster.org/16938
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Disallow peer detach if snapshot bricks exist on it (Gaurav Yadav, 2017-03-31; 1 file changed, -0/+36)

  Problem:
  - Deploy gluster on 2 nodes, one brick each, one volume replicated
  - Create a snapshot
  - Lose one server
  - Add a replacement peer and new brick with a new IP address
  - replace-brick the missing brick onto the new server (wait for
    replication to finish)
  - peer detach the old server
  After the above steps, glusterd fails to restart.

  Solution: With the fix, peer detach reports an error: "N2 is part of
  existing snapshots. Remove those snapshots before proceeding". While
  doing so we force the user either to stay with that peer or to delete
  all snapshots.

  Change-Id: I3699afb9b2a5f915768b77f885e783bd9b51818c
  BUG: 1322145
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-on: https://review.gluster.org/16907
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* rpc: bump up conn->cleanup_gen in rpc_clnt_reconnect_cleanup (Atin Mukherjee, 2017-03-20; 1 file changed, -0/+14)

  Commit 086436a introduced a generation number (cleanup_gen) to ensure
  that the rpc layer doesn't end up cleaning up the connection object if
  the application layer has already destroyed it. Bumping up cleanup_gen
  was done only in rpc_clnt_connection_cleanup(). However, the same is
  needed in rpc_clnt_reconnect_cleanup() too, because without it, if the
  object gets destroyed through the reconnect event in the application
  layer, the rpc layer will still try to delete the object, resulting in a
  double free and a crash.

  Peer probing an invalid host/IP was the basic test to catch this issue.

  Change-Id: Id5332f3239cb324cead34eb51cf73d426733bd46
  BUG: 1433578
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/16914
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Milind Changire <mchangir@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
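  The "basic test" mentioned above translates naturally into the harness:
  probing an unresolvable host must fail cleanly and leave glusterd alive
  (it used to crash through the double free). A minimal sketch:

    #!/bin/bash
    . $(dirname $0)/../../include.rc

    cleanup;

    TEST glusterd
    TEST pidof glusterd

    # Deliberately invalid peer address.
    TEST ! $CLI peer probe invalid.host.example

    # glusterd must survive the failed probe.
    TEST pidof glusterd

    cleanup;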
* TESTS/TIER: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t (hari gowtham, 2017-02-23; 1 file changed, -1/+2)

  Problem: spurious failure of the test.

  Cause: the function "rebalance_run_time" calculates the total time the
  tier has been running for. This being a test case, the run time of the
  tier can be 0, and when the function adds up zero the result is zero,
  so the test starts to fail.

  Fix: give it some time for the function to add up the values.

  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: Ie270f3f3c8942081cca85dc49ef8fec76f3a261a
  BUG: 1425743
  Reviewed-on: https://review.gluster.org/16711
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd: ignore return code of glusterd_restart_bricks (Atin Mukherjee, 2017-02-09; 1 file changed, -0/+40)

  When GlusterD is restarted on a multi-node cluster, while syncing the
  global options from another GlusterD it checks for quorum, based on
  which it decides whether to stop or start a brick. However, we handle
  the return code of this function, so in the case where we don't want to
  start any bricks the return value is non-zero, and we end up failing the
  import, which is incorrect.

  The fix is simply to ignore the return code of glusterd_restart_bricks().

  Change-Id: I37766b0bba138d2e61d3c6034bd00e93ba43e553
  BUG: 1420637
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/16574
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* tests: turn off nfs.disable in bug-1238706-daemons-stop-on-peer-cleanup.t (Atin Mukherjee, 2017-02-07; 1 file changed, -0/+2)

  To validate this test and remove it from the list of bad tests, turn off
  the nfs.disable option so that the nfs daemon can come up.

  Change-Id: I8146c2d7f72ac53cac7e395dbb9e819d729eb6a9
  BUG: 1257792
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/16514
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: double-check brick liveness for remove-brick validation (Jeff Darcy, 2017-02-02; 1 file changed, -2/+4)

  Same problem as https://review.gluster.org/#/c/16509/ in a different
  place. Tests detach bricks without glusterd's knowledge, so glusterd's
  internal brick state is out of date and we have to re-check (via the
  brick's pidfile) as well.

  BUG: 1385758
  Change-Id: I169538c1c62d72a685a49d57ef65fb6c3db6eab2
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/16529
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
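  A sketch of the scenario: a brick is killed behind glusterd's back, and
  remove-brick validation must consult the pidfile instead of trusting
  stale in-memory state (kill_brick is the volume.rc helper):

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}{1..3}
    TEST $CLI volume start $V0

    # Detach a brick without glusterd's knowledge.
    TEST kill_brick $V0 $H0 $B0/${V0}3

    # remove-brick start needs the brick up for data migration, so the
    # pidfile re-check should now reject this.
    TEST ! $CLI volume remove-brick $V0 $H0:$B0/${V0}3 start

    cleanup;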
* core: run many bricks within one glusterfsd process (Jeff Darcy, 2017-01-30; 3 files changed, -9/+27)

  This patch adds support for multiple brick translator stacks running in
  a single brick server process. This reduces our per-brick memory usage
  by approximately 3x, and our appetite for TCP ports even more. It also
  creates potential to avoid process/thread thrashing, and to improve QoS
  by scheduling more carefully across the bricks, but realizing that
  potential will require further work.

  Multiplexing is controlled by the "cluster.brick-multiplex" global
  option. By default it's off, and bricks are started in separate
  processes as before. If multiplexing is enabled, then *compatible*
  bricks (mostly those with the same transport options) will be started in
  the same process.

  Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
  BUG: 1385758
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/14763
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
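  A sketch of the observable effect of the option, assuming the
  count_brick_processes helper from volume.rc:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}{1..3}
    TEST $CLI volume start $V0
    EXPECT 3 count_brick_processes   # default: one process per brick

    TEST $CLI volume stop $V0
    TEST $CLI volume set all cluster.brick-multiplex on
    TEST $CLI volume start $V0
    EXPECT 1 count_brick_processes   # compatible bricks share a process

    cleanup;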
* glusterd: daemon restart logic should adhere to server-side quorum (Atin Mukherjee, 2017-01-27; 1 file changed, -0/+57)

  Just like brick processes, other daemon services should also follow the
  same quorum-check logic to decide whether a particular service needs to
  come up when glusterd is restarted or an incoming friend add/update
  request is received (in the glusterd_restart_bricks() function).

  Change-Id: I54a1fbdaa1571cc45eed627181b81463fead47a3
  BUG: 1383893
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/15626
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
* glusterd: bypass add-brick validation with force (Atin Mukherjee, 2017-01-18; 1 file changed, -2/+2)

  Commit c916a2f added a validation to restrict the add-brick operation if
  a replica configuration is changed and any of the bricks belonging to
  the volume is down. However, we should bypass this validation with a
  force option if users really want add-brick to go through, even at the
  risk of the corner-case data loss issue. The original problem of
  add-brick failing when the layout is not set will still be a problem
  with a force option, as the issue has to be taken care of in the DHT
  layer.

  Change-Id: I0ed3df91ea712f77674eb8afc6fdfa577f25a7bb
  BUG: 1406411
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/16358
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* tier: Tier as a service (hari gowtham, 2017-01-16; 1 file changed, -1/+1)

  tierd is implemented by separating it from the rebalance process.
  The commands affected:
  1) Attach tier will trigger this process instead of the old one.
  2) tier start and tier start force will also trigger this process.
  3) volume status [tier] will show the tier daemon as a process instead
     of a task, and normal tier status and tier detach status work.
  4) tier stop implemented.
  5) detach tier implemented separately, along with a new detach tier
     status.
  6) volume tier volname status will work using the changes.
  7) volume set works.

  This patch has separated the tier translator from the legacy DHT
  rebalance code. It now sends the RPCs from the CLI to glusterd
  separately from the DHT rebalance code. The daemon is now a service,
  similar to the snapshot daemon, and can be viewed using the volume
  status command. The code for the validation and commit phases is the
  same as the earlier tier validation code in DHT rebalance. The "brickop"
  phase has been changed so that the status command can use this
  framework.

  The service management framework is now used; DHT rebalance does not use
  this framework. This service framework takes care of:
  *) spawning the daemon, killing it, and other such processes.
  *) volume set options, which are written to the volfile.
  *) restart and reconfigure functions. Restart is to restart the daemon
     at two points: 1) after gluster goes down and comes up, and
     2) to stop detach tier.
  *) reconfigure is used to make immediate volfile changes. By doing this,
     we don't restart the daemon. It also has the code to rewrite the
     volfile for topological changes (which come into play during add and
     remove brick).

  With this patch the log, pid, and volfile are separated and put into
  their respective directories.

  Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
  BUG: 1313838
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/13365
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Fail add-brick on replica count change, if brick is down (karthik-us, 2017-01-06; 1 file changed, -0/+40)

  Problem:
  1. Have a replica 2 volume with bricks b1 and b2.
  2. Before setting the layout, b1 goes down.
  3. Set the layout and write some data, which gets populated on b2.
  4. b2 goes down, then b1 comes up.
  5. Add another brick b3; heal takes place from b1 to b3, which
     basically has no data.
  6. Write some data. Both b1 and b3 will mark b2 for pending writes.
  7. b1 goes down, and b2 comes up.
  8. b2 gets healed from b1. During heal it removes the data which is
     already in b2, considering it stale data. This leads to data loss.

  Solution:
  1. In the glusterd stage-op, while adding bricks, check whether the
     replica count is being increased.
  2. If yes, check whether any of the bricks are down at that time.
  3. If yes, fail the add-brick to avoid such data loss.
  4. Else continue the normal operation.
  This check also works when converting a plain distribute volume to
  replicate.

  Test (sketched below):
  1. Create a replica 2 volume.
  2. Kill one brick from the volume.
  3. Try adding a brick to the volume.
  4. It should fail with an "all bricks are not up" error.
  5. Create a distribute volume and kill one of the bricks.
  6. Try to convert it to a replicate volume by adding bricks.
  7. This should also fail.

  Change-Id: I9c8d2ab104263e4206814c94c19212ab914ed07c
  BUG: 1406411
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
  Reviewed-on: http://review.gluster.org/16330
  Tested-by: Ravishankar N <ravishankar@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
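  A sketch of the test steps above, with kill_brick from volume.rc:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{1,2}
    TEST $CLI volume start $V0

    TEST kill_brick $V0 $H0 $B0/${V0}1

    # Increasing the replica count while a brick is down must fail.
    TEST ! $CLI volume add-brick $V0 replica 3 $H0:$B0/${V0}3

    cleanup;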
* tests: fix tests/bugs/glusterd/bug-913555.t spurious failures (Atin Mukherjee, 2017-01-01; 1 file changed, -5/+8)

  Mainly replaced EXPECT instances with EXPECT_WITHIN.

  Change-Id: If48f444f6b2ba6713fdc5e31ff3a642092e62ada
  BUG: 1408758
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/16289
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
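  The pattern behind the fix, as a sketch: EXPECT asserts once and can
  race an asynchronous state change, while EXPECT_WITHIN polls until a
  timeout. Names below are the standard harness ones, not the literal
  bug-913555.t contents:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../cluster.rc

    cleanup;

    TEST launch_cluster 3
    TEST $CLI_1 peer probe $H2
    TEST $CLI_1 peer probe $H3

    # Racy:   EXPECT 2 peer_count
    # Robust: poll until the probes settle.
    EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count

    cleanup;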
* tests: fix bug-963541.t spurious failure (Atin Mukherjee, 2016-09-11; 1 file changed, -1/+2)

  Wait for remove-brick to complete before attempting a commit.

  Change-Id: I66ea6c48b6a69fe33d79f9d9080b6f2c1462578e
  BUG: 1374993
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/15457
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Fix volume restart issue upon glusterd restart (Samikshan Bairagya, 2016-08-17; 2 files changed, -1/+41)

  http://review.gluster.org/#/c/14758/ introduced a check in
  glusterd_restart_bricks that makes sure that if server quorum is enabled
  and the glusterd instance has been restarted, the bricks do not get
  started. This prevents bricks which have been brought down purposely,
  say for maintenance, from getting started upon a glusterd restart.
  However, this change introduced a regression for a situation involving
  multiple volumes: the bricks from the first volume get started, but for
  the subsequent volumes the bricks do not get started. This patch fixes
  that by setting the value of conf->restart_done to _gf_true only after
  bricks are started correctly for all volumes.

  Change-Id: I2c685b43207df2a583ca890ec54dcccf109d22c3
  BUG: 1367478
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/15183
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: skip non-directories inside /var/lib/glusterd/vols (Jiffin Tony Thottan, 2016-08-08; 1 file changed, -0/+31)

  Right now glusterd won't come up if the vols directory contains an
  invalid entry. With this change, a message is logged instead and the
  entry is skipped.

  Change-Id: I665b5c35291b059cf054622da0eec4db44ec5f68
  BUG: 1318591
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/13764
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
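  A sketch of the regression check ($GLUSTERD_WORKDIR is provided by the
  harness environment):

    #!/bin/bash
    . $(dirname $0)/../../include.rc

    cleanup;

    TEST glusterd
    TEST pidof glusterd

    # Drop a stray regular file where only volume directories belong.
    TEST touch $GLUSTERD_WORKDIR/vols/stray-file

    # glusterd must restart despite the invalid entry.
    TEST pkill glusterd
    TEST glusterd
    TEST pidof glusterd

    cleanup;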
* tests: fix spurious failure in tests/bugs/glusterd/bug-1089668.t (Atin Mukherjee, 2016-08-04; 1 file changed, -2/+1)

  Instead of rebalance stop, it's always better to wait for rebalance to
  complete, as the former doesn't serve any purpose here.

  Change-Id: Ia1bc2a34d937a0a96543bebd257dcda619f12474
  BUG: 1363948
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/15085
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
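  The wait-for-completion pattern, as a sketch using the
  rebalance_status_field helper from volume.rc:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}{1,2}
    TEST $CLI volume start $V0
    TEST $CLI volume rebalance $V0 start

    # Poll until rebalance finishes instead of issuing a racy stop.
    EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" rebalance_status_field $V0

    cleanup;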
* glusterd: clean up old port and allocate new one on every restart (Atin Mukherjee, 2016-08-03; 1 file changed, -47/+0)

  GlusterD was blindly assuming that the brick port which was already
  allocated would be available to be reused, and that assumption is
  absolutely wrong.

  Solution: On first attempt, we thought GlusterD should check if the
  already-allocated brick ports are free and, if not, allocate a new port
  and pass it to the daemon. But with that approach there is a possibility
  that if a PMAP_SIGNOUT is missed, the stale port will be given back to
  the clients, where connections will keep on failing. Given that port
  allocation always starts from base_port, even if a new port has to be
  allocated for the daemons every time, the port range will still be under
  control. So this fix cleans up the old port using
  pmap_registry_remove(), if any, and then goes for pmap_registry_alloc().

  Change-Id: If54a055d01ab0cbc06589dc1191d8fc52eb2c84f
  BUG: 1221623
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/15005
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
* afr: some coverity fixes (Ravishankar N, 2016-07-26; 1 file changed, -3/+3)

  Thanks to Krutika for a cleaner way to track inode refs in
  afr_set_split_brain_choice().

  Change-Id: I2d968d05b815ad764b7e3f8aa9ad95a792b3c1df
  BUG: 1355604
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/14895
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* tests: Fix spurious failure of tests/bugs/glusterd/bug-1111041.t (Avra Sengupta, 2016-07-20; 1 file changed, -36/+0)

  On a faster machine the ps check was returning two pids, including that
  of the glusterfsd process forked right after it. Hence removing that ps:
  for the scope of this test, verifying the snapd pid from the status
  command itself is enough.

  Change-Id: I8bd8fc4ea406d96e3a47f952cfe44560b615dbe6
  BUG: 1358195
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/14963
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* Revert "tests: remove tests for clear-locks"Pranith Kumar K2016-07-182-0/+91
| | | | | | | | | | | | | | | | | | This reverts commit 0086a55bb7de1ef5dc7a24583f5fc2b560e835fd. As part of Richard's patch for lock-revocation feature this bug is completely fixed (I think at least ;-) ). So bringing these back so that we will find out if there are anymore things we need to address in this code path. BUG: 1350867 Change-Id: If1440fc83b376576ae1a77b1156188a6bf53fe3a Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14817 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* tests: fix rebalance timing issue (Sakshi Bansal, 2016-07-11; 1 file changed, -0/+2)

  With a start-and-stop rebalance, the stop command may fail because by
  that time the rebalance process may not have come up. Use the rebalance
  status command to ensure that the rebalance process is up before
  stopping rebalance.

  Change-Id: I3d5123cd5dfabde2720428455b257d11b980ce21
  BUG: 1354372
  Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/14885
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: Don't start bricks if server quorum is not met (Samikshan Bairagya, 2016-07-05; 1 file changed, -0/+62)

  Upon glusterd restart, if it is observed that server quorum isn't met
  anymore due to changes to the "server-quorum-ratio" global option, the
  bricks should be stopped if they are running. Also, if glusterd has been
  restarted and server quorum is not applicable for a volume, do not
  restart the bricks corresponding to the volume, to make sure that bricks
  that have been brought down purposely, say for maintenance, are not
  brought up. This commit moves this check, previously inside
  "glusterd_spawn_daemons", to "glusterd_restart_bricks" instead.

  Change-Id: I0a44a2e7cad0739ed7d56d2d67ab58058716de6b
  BUG: 1345727
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/14758
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: spawn daemons from init() on a single or two node setup (Atin Mukherjee, 2016-07-05; 1 file changed, -0/+37)

  Allow glusterd to spawn the daemons at initialization time when the peer
  count is less than 2. This is required if a user wants to set up a
  two-node cluster without server-side quorum and wants the bricks to come
  up on a node while the other node is down. This behaviour is overridden
  when server-side quorum is enabled.

  Change-Id: I21118e996655822467eaf329f638eb9a8bf8b7d5
  BUG: 1352277
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14848
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: glusterd must store all rebalance-related information (Sakshi Bansal, 2016-07-04; 1 file changed, -0/+59)

  Change-Id: I8404b864a405411e3af2fbee46ca20330e656045
  BUG: 1351021
  Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/14827
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cli: different status output for rebalance fix-layout (Sakshi, 2016-07-03; 1 file changed, -1/+1)

  Change-Id: I6ded40a1b1cff5c72e5b61fd353db3d8c688efd8
  BUG: 1225718
  Signed-off-by: Sakshi <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/10956
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* Fix opRet value for volume info --xml call on non-existent volume (Samikshan Bairagya, 2016-06-21; 1 file changed, -0/+24)

  The opRet field was being assigned 0 in the XML output when a gluster
  volume info --xml call was made on a non-existent volume. This change
  assigns a value of -1 to opRet for volume info calls on non-existent
  volumes. Other fields like opErrno and opErrstr are also assigned
  relevant values.

  Change-Id: I3920c602328f74252c87bb521f5a43d4bdc7d44d
  BUG: 1321836
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/13843
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: darshan n <dnarayan@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
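  A sketch of how the XML field can be checked from a test; the sed-based
  extraction is an assumption about how to probe the XML, not the real
  test's method:

    #!/bin/bash
    . $(dirname $0)/../../include.rc

    cleanup;

    TEST glusterd
    TEST pidof glusterd

    # Extract <opRet> from the XML output for a volume that was never
    # created; it should now be -1 rather than 0.
    function get_opret {
        $CLI volume info nosuchvol --xml | \
            sed -n 's|.*<opRet>\(.*\)</opRet>.*|\1|p'
    }
    EXPECT "-1" get_opret

    cleanup;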
* tests: fix bug-1344407-volume-delete-on-node-down.t (Atin Mukherjee, 2016-06-12; 1 file changed, -1/+0)

  The test was earlier starting the volume, which will always make volume
  delete fail, so it was actually not validating BZ 1344407.

  Change-Id: I6761be16e414bb7b67694ff1a468073bfdd872ac
  BUG: 1344407
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14693
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: fail volume delete if one of the nodes is down (Atin Mukherjee, 2016-06-10; 1 file changed, -0/+20)

  Deleting a volume on a cluster where one of the nodes in the cluster is
  down is buggy, since once that node comes back the resync of the same
  volume will happen. Till we bring in the soft delete feature tracked in
  http://review.gluster.org/12963, this is a safeguard to block the volume
  deletion.

  Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
  BUG: 1344407
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14681
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
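  A sketch with the cluster.rc helpers; the volume is deliberately not
  started, as the test fix in the entry above notes:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../cluster.rc

    cleanup;

    TEST launch_cluster 2
    TEST $CLI_1 peer probe $H2
    EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

    TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0

    # With the second node down, volume delete must be rejected.
    TEST kill_glusterd 2
    TEST ! $CLI_1 volume delete $V0

    cleanup;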
* glusterd: default value of nfs.disable, change from false to true (Kaleb S KEITHLEY, 2016-04-27; 4 files changed, -11/+19)

  Next step in the eventual deprecation of the glusterfs nfs server in
  favor of ganesha.nfsd. Also replace several open-coded strings with
  constants.

  Change-Id: If52f5e880191a14fd38e69b70a32b0300dd93a50
  BUG: 1092414
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/13738
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* glusterd: fix validation of lower op-version check in volume set (Atin Mukherjee, 2016-04-26; 1 file changed, -0/+22)

  Commit 2d87a98 introduced a validation to fail lowering the
  cluster.op-version. Commit 2eb8758 then changed the variable value from
  the cluster's op-version to the volume's op-version, which made the
  logic go for a toss.

  Change-Id: I70df32b75c3a3fe47dc840c4a655059e5b124bca
  BUG: 1315186
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14069
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* Revert "glusterd: Allocate fresh port on brick (re)start"Gaurav Kumar Garg2016-04-141-0/+47
| | | | | | | | | | | | | | | | | | | | | This reverts commit 34899d7 Commit 34899d7 introduced a change, where restarting a volume or rebooting a node result into fresh allocation of brick port. In production environment generally administrator makes firewall configuration for a range of ports for a volume. With commit 34899d7, on rebooting of node or restarting a volume might result into volume start fail because firewall might block fresh allocated port of a brick and also it will be difficult in testing because of fresh allocation of port. Change-Id: I7a90f69e8c267a013dc906b5228ca76e819d84ad BUG: 1322805 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/13989 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* extras: Add namespace for options in group-virt.example (Vijay Bellur, 2016-04-09; 1 file changed, -0/+14)

  Commit 23ccabbeb7 introduced a new key "disperse.eager-lock" which
  causes a conflict with the key "cluster.eager-lock" when the option is
  used without the qualifying namespace. group-virt.example, which gets
  installed as /var/lib/glusterd/groups/virt, contains options without
  namespace qualifiers. This patch adds the appropriate namespace to all
  options in group-virt.example.

  Change-Id: I2c09dd10d44138410d889ddeb805f01c641c6780
  BUG: 1314649
  Signed-off-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/13929
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: fill real_path variable in brickinfo during volume import (Mohammed Rafi KC, 2016-04-05; 1 file changed, -0/+39)

  The "real_path" variable in brickinfo is used to store the absolute
  path, with which we check the availability of newly added bricks. But we
  were not populating the variable when importing a volume from peers.
  That reset the real_path variable to zero, which resulted in validation
  failures for all new brick creation.

  Change-Id: I62be7bf452f0dcdf6aec3a4ec33c2e1fba2951ca
  BUG: 1323287
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/13890
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Allocate fresh port on brick (re)start (Atin Mukherjee, 2016-04-01; 1 file changed, -47/+0)

  There is no point in using the same port through the entire volume life
  cycle for a particular brick process, since there is no guarantee that
  the same port would still be free and that no other application would
  consume it in between the glusterd/volume restart. We hit a race where,
  on glusterd restart, the daemon services start followed by the brick
  processes, and by the time a brick process tries to bind to the port
  which was allocated by glusterd before the restart, it has already been
  consumed by some other client like NFS/SHD/...

  Note: This is a short-term solution; here we reduce the race window but
  don't eliminate it completely. As a long-term solution the port
  allocation has to be done by glusterfsd and communicated back to
  glusterd for bookkeeping.

  Change-Id: Ibbd1e7ca87e51a7cd9cf216b1fe58ef7783aef24
  BUG: 1322805
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/13865
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: Always copy old brick ports when importing (Kaushal M, 2016-03-09; 1 file changed, -0/+47)

  When an updated volinfo is imported, the brick ports from the old
  volinfo should always be copied. Earlier, this was done only if the old
  volinfo was stopped and the new volinfo was started. This could lead to
  brick ports changing when the following sequence of steps happened:
  - A volume is stopped
  - GlusterD is stopped on a peer
  - The stopped volume is started
  - The stopped GlusterD is started

  This sequence would lead to bricks on the peer with the restarted
  GlusterD getting new ports, which could break firewall rules and could
  prevent client access. This sequence could be hit when enabling
  management encryption in a Gluster trusted storage pool.

  Change-Id: I808ad478038d12ed2b19752511bdd7aa6f663bfc
  BUG: 1313628
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/13578
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tests: Add mechanism for disabled tests (Raghavendra Talur, 2016-03-09; 1 file changed, -1/+2)

  Requirements:
  - It should be possible to skip tests from a run-tests.sh run.
  - It should be granular enough to disable on a subset of OSes.

  Solution: Tests can have special comment lines with comma-separated
  values in them. The key names used to determine test status are:
  G_TESTDEF_TEST_STATUS_CENTOS6
  G_TESTDEF_TEST_STATUS_NETBSD7

  Some examples:
  G_TESTDEF_TEST_STATUS_CENTOS6=BAD_TEST,BUG=123456
  G_TESTDEF_TEST_STATUS_NETBSD7=KNOWN_ISSUE,BUG=4444444
  G_TESTDEF_TEST_STATUS_CENTOS6=BAD_TEST,BUG=123456;555555

  You can change the status of a test to enabled, or delete the line, only
  if all the bugs are closed or modified or if the patch fixes it.

  Change-Id: Idee21fecaa5837fd4bd06e613f5c07a024f7b0c2
  BUG: 1295704
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: http://review.gluster.org/13393
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
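  A sketch of how such markers might sit in a test file header; the
  key/value lines follow the examples from the commit message, and the
  test body is a placeholder:

    #!/bin/bash
    # Flagged bad on CentOS 6 and a known issue on NetBSD 7:
    # G_TESTDEF_TEST_STATUS_CENTOS6=BAD_TEST,BUG=123456
    # G_TESTDEF_TEST_STATUS_NETBSD7=KNOWN_ISSUE,BUG=4444444
    . $(dirname $0)/../../include.rc

    cleanup;

    TEST glusterd
    TEST pidof glusterd

    cleanup;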
* afr: Add throttled background client-side heals (Ravishankar N, 2016-03-01; 1 file changed, -1/+3)

  If a heal is needed after inode refresh (lookup, read_txn), launch it in
  the background instead of blocking the fop (that triggered the refresh)
  until the heal happens. afr_replies_interpret() is modified such that
  the heal is launched only if at least one sink brick is up. The maximum
  number of heals that can happen in parallel is configurable via the
  'background-self-heal-count' volume option. Any number greater than that
  is put in a wait queue whose length is configurable via the
  'heal-wait-queue-leng' volume option. If the wait queue is also full,
  further heals will be ignored.

  Default values: background-self-heal-count=8, heal-wait-queue-leng=128

  Change-Id: I1d4a52814cdfd43d90591b6d2ad7b6219937ce70
  BUG: 1297172
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/13207
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* tests/glusterd: Parse the new time format (Mohammed Rafi KC, 2016-02-25; 1 file changed, -2/+2)

  A recent change in the cli changed the elapsed time format, which broke
  a test. This patch fixes the issue with parsing.

  Change-Id: I9a4a4b28f654cf2ac223e25abfc9df6570607d74
  BUG: 1312036
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/13524
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
* tests: remove-brick commit getting executed before migration has completed (Sakshi Bansal, 2016-02-24; 1 file changed, -0/+2)

  Remove-brick commit will fail when it is executed while rebalance is in
  progress. Hence added a rebalance timeout check before remove-brick
  commit to ensure that rebalance has completed.

  Change-Id: Ic12f97cbba417ce8cddb35ae973f2bc9bde0fc80
  BUG: 1225716
  Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/13191
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
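  A sketch of the commit-after-completion pattern;
  remove_brick_status_completed_field is assumed to be the volume.rc
  helper for this check, and the real test may phrase it differently:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}{1,2}
    TEST $CLI volume start $V0
    TEST $CLI volume remove-brick $V0 $H0:$B0/${V0}2 start

    # Wait for migration to finish before committing.
    EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" \
        remove_brick_status_completed_field $V0 "$H0:$B0/${V0}2"
    TEST $CLI volume remove-brick $V0 $H0:$B0/${V0}2 commit

    cleanup;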