Added further checks to ensure we do not go beyond prevalidate
when trying to restore a snapshot that has an nfs-ganesha export conf
file, in a cluster where nfs-ganesha is not enabled.
The error message for the particular scenario is:
"Snapshot(<snapname>) has a nfs-ganesha export conf
file. cluster.enable-shared-storage and nfs-ganesha
should be enabled before restoring this snapshot."
Change-Id: I1b87e9907e0a5e162f26ef1ca89fe76e8da8610f
BUG: 1404118
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/16116
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
The "gluster volume reset" should first unexport the volume and then delete
export configuration file. Also reset option is not applicable for ganesha.enable
if volume value is "all".
This patch also changes the name of create_export_config into manange_export_config
Change-Id: Ie81a49e7d3e39a88bca9fbae5002bfda5cab34af
BUG: 1397795
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/15914
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
The brick path of snapshot clones contained the clone name, so
creating a new clone with the same name failed after the original
clone had been deleted.
This fix creates the brick path with the clone's vol id
instead of the clone's name, so future clones with the
same name will not have a namespace clash.
Change-Id: I262712adc576122f051b5d1ce171d020efaefd1a
BUG: 1387160
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/15683
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
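
A minimal C sketch of the idea, assuming a uuid_t volume id; the base
directory and helper name here are illustrative, not the actual
glusterd layout:

    #include <stdio.h>
    #include <uuid/uuid.h>

    /* Build the clone's brick directory from the volume UUID rather than
     * the clone name, so a future clone reusing the name cannot collide. */
    static void
    clone_brick_path(char *buf, size_t len, const uuid_t clone_vol_id)
    {
            char id_str[37]; /* 36 characters + NUL for a formatted UUID */

            uuid_unparse(clone_vol_id, id_str);
            snprintf(buf, len, "/var/run/gluster/snaps/%s/brick1", id_str);
    }

    int
    main(void)
    {
            uuid_t id;
            char path[256];

            uuid_generate(id);
            clone_brick_path(path, sizeof(path), id);
            printf("%s\n", path); /* unique per volume id, not per name */
            return 0;
    }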
Check if S32gluster_enable_shared_storage.sh is present
at /var/lib/glusterd/hooks/1/set/post/ during staging
before proceeding with the command. Fail the command
with an appropriate error message in case it is not
present.
Change-Id: I84e3912f1cdffb927f8a40d74d52be43ee69388b
BUG: 1388348
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/15718
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
On a volume stop trigger, glusterd issues a brick-op to terminate the
brick process during the brick-op phase; however, in the commit-op
phase glusterd once again tries to kill the same process if it exists,
and then marks the brickinfo->status flag as GF_BRICK_STOPPED. In the
former case, if the brick is successfully killed, there is a
possibility that GlusterD will receive RPC_CLNT_DISCONNECT from the
said brick process before the commit-op phase is even executed, and by
that time brickinfo->status will still be set to GF_BRICK_STARTED.
The BRICK_DISCONNECT event should be sent only if a brick has been
killed, and not through a volume stop/remove brick trigger; however,
due to this race, the event is also sent out on a volume stop.
The fix is to introduce an intermediate state, GF_BRICK_STOPPING, which
can be used to mark the brick status at the brick-op phase of volume
stop/remove brick, to avoid sending spurious BRICK_DISCONNECT events on
a volume stop trigger.
Change-Id: Ieed4450e1c988715e0f9958be44faa6b14be81e1
BUG: 1387652
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/15699
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaushal M <kaushal@redhat.com>
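
A minimal C sketch of the resulting brick-status lifecycle;
GF_BRICK_STOPPING comes from this change, the enum framing around it is
an assumption:

    /* Brick status states: the intermediate state lets the disconnect
     * handler tell an administrative stop apart from a real brick death. */
    typedef enum {
            GF_BRICK_STOPPED,  /* finalised at the commit-op phase */
            GF_BRICK_STARTED,  /* brick process believed to be running */
            GF_BRICK_STOPPING, /* set at the brick-op phase of volume stop /
                                  remove-brick; a later RPC_CLNT_DISCONNECT
                                  must not emit BRICK_DISCONNECT */
    } gf_brick_status_t;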
The command basically allows replace-brick with the src and dst
bricks being the same.
Usage:
gluster v reset-brick <volname> <hostname:brick-path> start
This command kills the brick to be reset. Once this command is run,
the admin can do any other manual operations they need to do,
like configuring some options for the brick. Once this is done,
resetting the brick can be continued with the following command.
gluster v reset-brick <vname> <hostname:brick> <hostname:brick> commit {force}
Does the job of resetting the brick. The 'force' option should be used
when the brick already contains the volinfo id.
Problem: On doing a disk replacement of a brick in a replicate volume,
the following two scenarios may occur:
a) reads may be served from this replaced-disk brick, which leads to
empty reads.
b) potential data loss if subsequent writes succeed only on the
replaced brick, and heal is done to the other bricks from this one.
Solution: After disk replacement, make sure that the reset-brick
command is run for that brick so that pending markers are set for the
brick and it is not chosen as a source for reads and heal. But, as of
now, replace-brick for the same brick path is not allowed. In order to
fix the above-mentioned problem, same-brick-path replace-brick is
needed.
With this patch, reset-brick commit {force} will be allowed even when
source and destination <hostname:brickpath> are identical, as long as
1) the destination brick is not alive
2) the source and destination bricks have the same brick uuid and path.
Also, the destination brick after replace-brick will use the same port
as the source brick.
Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
BUG: 1266876
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/12250
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
http://review.gluster.org/14085 fixes the "leak" - via the
generated rpc/xdr headers - of pragmas that mask these warnings.
However, 14085 won't pass the smoke test until all the warnings are
fixed.
Change-Id: Id3577872ef720787796f7bfe6772734a3b26fef0
BUG: 1369124
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/15286
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Currently there is no CLI that can be used to get the
local state representation of the cluster as maintained in glusterd
in a readable as well as parseable format.
The CLI added has the following usage:
# gluster get-state [daemon] [odir <path/to/output/dir>] [file <filename>]
This dumps data points that reflect the local state
representation of the cluster as maintained in glusterd (no other
daemons are supported as of now) to a file inside the specified
output directory. The default output directory and filename are
/var/run/gluster and glusterd_state_<timestamp> respectively. The
option for specifying the daemon name leaves room to add support for
other daemons in the future. Following are the data points captured
as of now to represent the state from the local glusterd point of view:
* Peer:
- Primary hostname
- uuid
- state
- connection status
- List of hostnames
* Volumes:
- name, id, transport type, status
- counts: bricks, snap, subvol, stripe, arbiter, disperse,
redundancy
- snapd status
- quorum status
- tiering related information
- rebalance status
- replace bricks status
- snapshots
* Bricks:
- Path, hostname (this information is shown for all bricks)
- port, rdma port, status, mount options, filesystem type and
signed in status for bricks running locally.
* Services:
- name, online status for initialised services
* Others:
- Base port, last allocated port
- op-version
- MYUUID
Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
This is the second patch that moves export-related configuration for
a volume into shared storage. The main changes are in the scripts
create-export-ganesha.sh and dbus-send.sh, and in the handling of the
volume set command "ganesha.enable". The manipulation of EXPORT_ID
moves from dbus-send.sh to create-export-ganesha.sh.
In volume set handling, the following is performed:
stage | commit
----------------------------------------------------------
1.) gluster v set <volname> ganesha.enable on
None | create export file
| in node where cli executed,
| then export volume via dbus
2.) gluster v set <volname> ganesha.enable off
unexport volume via dbus | remove export file from the
| shared storage
-----------------------------------------------------------
More details can be found at http://review.gluster.org/#/c/15105/
Change-Id: Ia8b0e89bc8fff24b0bc5d20a538a89212894a8e4
BUG: 1355956
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/14908
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: soumya k <skoduri@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
reboot"
This reverts commit f71e2fa49af185779b9f43e146effd122d4e9da0.
Reason:
As part of node-reboot sync-up, the reverted patch copies the ganesha
export conf file from a source node. This change is no longer required
if the export files are available in shared storage.
Change-Id: Id9c1ae78377bbd7d5d80aa1c14f534e30feaae97
BUG: 1355956
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/14907
Reviewed-by: soumya k <skoduri@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
The parameter HA_VOL_SERVER was initially introduced in ganesha-ha.conf
to specify the gluster server from which to mount the shared data
volume. But after introducing a new cli for the same purpose, it became
unnecessary. The existence of that parameter can confuse users. This
patch removes/replaces all instances of HA_VOL_SERVER from the code.
Change-Id: I638c61dcd2c21ebdb279bbb141d35bb806bd3ef0
BUG: 1350371
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/14812
Tested-by: Kaleb KEITHLEY <kkeithle@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: soumya k <skoduri@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
The bitrot scrubber takes 'hourly/daily/biweekly/monthly'
as the values for 'scrub-frequency'. There is no way
to schedule the scrubbing when the admin wants it.
Ondemand scrubbing brings in the new option 'ondemand',
with which the admin can start scrubbing on demand;
it starts the scrubbing immediately.
Ondemand scrubbing succeeds only if the scrubber
is in the 'Active (Idle)' state (waiting for its next frequency
cycle to start scrubbing). It is not entertained when
the scrubber is 'Paused' or already running.
Here is the command line syntax.
gluster volume bitrot <vol name> scrub ondemand
Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
BUG: 1366195
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/15111
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Events-related sources are not loaded in libglusterfs when
configure is run with the --disable-events option. Because of this,
every call to gf_event had to be guarded with the USE_EVENTS macro.
To prevent this, the USE_EVENTS macro was included in events.c
itself (Patch #15054).
Instead of disabling the build of the entire "events" directory, this
change selectively disables the code, so that the constants and an
empty gf_event function stay exposed. Code will not fail even if
gf_event is called when events are disabled.
BUG: 1368042
Change-Id: Ia6abfe9c1e46a7640c4d8ff5ccf0e9c30c87f928
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/15198
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
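
A sketch of the selective-disable idea; the exact gf_event signature
here is an assumption:

    /* With --disable-events, keep the gf_event symbol visible as a no-op
     * stub so callers never need their own USE_EVENTS guards. */
    #ifdef USE_EVENTS
    int gf_event(int event, const char *fmt, ...);
    #else
    static inline int
    gf_event(int event, const char *fmt, ...)
    {
            (void)event; /* events disabled at configure time: */
            (void)fmt;   /* accept the call and ignore it */
            return 0;
    }
    #endif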
log file names are based on:
a. user provided input (through -l switch while starting a gluster process)
b. mount point paths in the case of native clients
c. volume/configuration files used for starting a gluster process
d. volume server used for starting a gluster process
Currently glusterd uses scheme c. to have a log file name that reads as
INSTALL_PREFIX-etc-glusterfs-glusterd.log
Since glusterd has a well known configuration file, it does not make much
sense to have log file name based on scheme c. This patch changes the name of
glusterd's log file to "glusterd.log". Hopefully this enables users to identify
glusterd's log file more easily.
Change-Id: I2d04179c4b9b06271b50eeee3909ee259e8cf547
BUG: 1348944
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/13426
Tested-by: Atin Mukherjee <amukherj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
glusterd creates the export conf file for ganesha using a hook script
during volume start, and via ganesha_manage_export() for the volume set
command. But this routine is not invoked in the glusterd restart
scenario.
Consider the following case: in a three-node cluster a volume got
exported via ganesha while one of the nodes was offline (glusterd not
running). When the node comes back online, that volume is not exported
on that node due to the above-mentioned issue.
Also, unused variables have been removed from
glusterd_handle_ganesha_op().
For this patch to work, the pcs cluster should be running on that node.
Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
BUG: 1330097
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/14063
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Previously the quota crawl was done from a single mount point; this is
a very slow process if a huge number of files exist in the volume.
This RFE now spawns a crawl process for each brick in the volume, and
files are looked up in parallel, independently for each brick. This
improves the speed of the crawling process for the entire file system.
This patch also fixes the problems below:
* Previously, the mount dir was created under '/tmp'. If someone
tries to clean up the '/tmp' directory, there is a serious risk of
losing volume data. So create the mount point under
/var/run/gluster/tmp instead.
* Previously, the file-system crawl was performed from all the nodes,
which is a redundant operation and degrades performance. This
problem is fixed with this patch.
Change-Id: Icabedeb44182139ace9c8106793803122388cab8
BUG: 1290766
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12952
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
As per community consensus, we have decided to rename
nsr to jbr(Journal-Based-Replication). This is the patch
to rename the "nsr" code to "jbr"
Change-Id: Id2a9837f2ec4da89afc32438b91a1c302bb4104f
BUG: 1328043
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/13899
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
glusterd_is_brickpath_available
glusterd_is_brickpath_available () used to call realpath() to check
whether the new brick path matches the existing ones. The problem with
this is that if the underlying file system is bad for any one of the
existing bricks, then realpath() would fail, and we wouldn't allow
creating the new brick even when it should be allowed.
The fix is to use string comparison, with a new field real_path in
brickinfo to store the absolute path.
Change-Id: I1250ea5345f00fca0f6128056ebd08750d604f0a
BUG: 1299710
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13258
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
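
A minimal sketch of the idea, with assumed struct and helper names:
resolve the brick path once at creation time, store it, and later
compare strings instead of calling realpath() on every existing brick:

    #include <limits.h>
    #include <string.h>

    struct brickinfo {
            char path[PATH_MAX];
            char real_path[PATH_MAX]; /* canonical path, filled once at
                                         brick-creation time */
    };

    /* A plain string compare needs no filesystem access, so a dead
     * underlying filesystem on an old brick can no longer fail the check. */
    static int
    brickpath_in_use(const char *new_real_path,
                     const struct brickinfo *bricks, size_t count)
    {
            for (size_t i = 0; i < count; i++) {
                    if (strcmp(new_real_path, bricks[i].real_path) == 0)
                            return 1;
            }
            return 0;
    }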
op on glusterd restart
If glusterd is restarted while a remove brick is in progress, the
decommission flag is not persisted in the store, so its value is not
retained, resulting in glusterd not blocking the remove brick commit
when rebalance is already in progress.
Change-Id: Ibbf12f3792d65ab1293fad1e368568be141a1cd6
BUG: 1303269
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13323
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
When glusterd restarts during a rebalance, glusterd does not reconnect
to the rebalance process, because the defrag variable in volinfo is
null.
Initializing the variable will let the rpc connect.
Change-Id: Id820cad6a3634a9fc976427fbe1c45844d3d4b9b
BUG: 1303028
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/13319
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Allows the user to convert an afr volume to an nsr volume
by using the cluster.nsr option in the volume set command:
gluster volume set <volname> cluster.nsr <on/off>
Change-Id: Ia1c5aa89d27535f7275d474cf312dc5efb8e222f
BUG: 1158654
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/12943
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
The start command doesn't restart the tier daemon if the daemon
is running on at least one node. Hence, to bring up tierd on the nodes
where the daemon is down, the force command is implemented.
It skips the check for a running tierd.
Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436
BUG: 1292112
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12983
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
When exporting/importing volinfo during handshake, the quota conf
version and the quota xattr version were using the same key,
'quota-version', and wrong values were updated when importing the
quota version values.
Change-Id: If939d6f5bc4851d4114963877be72dda21834f0f
BUG: 1287996
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12865
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CLI command for bitrot scrub status will be :
gluster volume bitrot <volname> scrub status
The above command will show the statistics of the bitrot scrubber.
Upon execution of this command it will show some common
scrubber tunable values of volume <VOLNAME>, followed by
scrubber statistics of the individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just infrastructure. The list of bad files, last scrub
time, and error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Currently the new tiering feature has the GD_OP_DETACH_TIER and
GD_OP_TIER_MIGRATE enums in the middle of the glusterd_op_ enum array.
In a multi-node cluster, when one of the nodes is upgraded from a lower
version to a higher one, executing a command can end up with a mismatch
in enum ops at the receiver's end, causing command execution to fail.
The fix is to put every new-feature glusterd operation enum code at the
end of the enum array.
Change-Id: I640f811065e8c84add624237aa80fed43fde5967
BUG: 1276643
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12473
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
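
A C sketch of the compatibility rule; names other than the two ops
above are placeholders:

    /* New glusterd ops are appended at the tail so the numeric values of
     * existing ops never shift between releases. */
    typedef enum {
            GD_OP_NONE = 0,
            /* ... existing, frozen op codes ... */
            GD_OP_DETACH_TIER,  /* new-feature ops go at the end */
            GD_OP_TIER_MIGRATE,
            GD_OP_MAX,
    } glusterd_op_t;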
When quota is disabled, the clean-up process may terminate without
completely cleaning up the quota xattrs. When quota is enabled again,
this can mess up the accounting.
A version number is now suffixed to all quota xattrs, and this version
number is specific to the marker xlator, i.e., when quota xattrs are
requested by quotad/client, marker will remove the version suffix from
the key before sending the response.
Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
BUG: 1272411
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12386
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
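
A small, runnable C sketch of the suffixing scheme; the xattr key is a
real quota key, the helper is illustrative:

    #include <stdio.h>
    #include <string.h>

    /* Suffix quota xattrs with a version so stale xattrs left behind by an
     * aborted cleanup are ignored after quota is re-enabled. */
    static void
    make_versioned_key(char *buf, size_t len, const char *key, int version)
    {
            snprintf(buf, len, "%s.%d", key, version);
    }

    int
    main(void)
    {
            char key[256];

            make_versioned_key(key, sizeof(key),
                               "trusted.glusterfs.quota.size", 2);
            printf("%s\n", key); /* trusted.glusterfs.quota.size.2 */

            /* marker strips the suffix again before answering quotad/clients */
            *strrchr(key, '.') = '\0';
            printf("%s\n", key); /* trusted.glusterfs.quota.size */
            return 0;
    }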
While taking a snapshot, the export file used by the volume should
be copied to the snap directory, so that when the snapshot is restored
the volume can retain all its configuration for exporting via
nfs-ganesha. The export file is stored at "/etc/ganesha/export" in
the format "export.<volname>.conf".
The fix handles the given cases in the following manner:
case a: nfs-ganesha (global) is ON during snapshot and restore.
i.) The volume was exported during snapshot. When we restore the
snapshot, the volume should be exported back with the old
configuration file.
ii.) The volume was unexported during snapshot. When we restore the
snapshot, the volume should be unexported again.
case b: nfs-ganesha is ON during snapshot and OFF during restore.
The volume was exported during snapshot. When we restore the snapshot,
the conf file will be copied to the corresponding location, and if
nfs-ganesha is enabled again, the volume will be exported.
For the clones, the export conf file will be created in
/etc/ganesha/export and the clone then exported via ganesha.
Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
BUG: 1257709
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/12034
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
In glusterd_snapshot_clone_postvalidate(), we were deleting the
snap object and snap vol by looking up the snapname. Hence, it
was deleting the original snapshot from which the clone was
being created.
Instead, it should fetch the clonename, the respective
clone vol, and its corresponding snap object, and delete those.
Also, glusterd_snap_remove() needs to differentiate a clone
snap object from a snapshot snap object: in the case of a clone
snap object, we don't have any persisted data in
/var/run/gluster/snaps/, and hence it shouldn't try to delete
anything there.
Change-Id: I02bb22a3898d5720e318a02d6cc32d25f75d317d
BUG: 1272339
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/12364
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
afr uses the translator name for locking purposes,
so it is mandatory to keep afr/ec xlator names constant
across graph changes.
Currently, when a tier is attached, afr names are appended
with either hot or cold, which breaks the above-mentioned
constraint.
Change-Id: I3699dcdaa8190bab3ba81cbc01e8fa126d37ba0d
BUG: 1261276
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/12134
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
When we trigger a detach tier start on a tiered volume,
the volume status task shows "remove brick" instead of "Detach tier":
Status of volume: vol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.42.171:/data/gluster/hbr1 49154 0 Y 25098
Cold Bricks:
Brick 10.70.42.171:/data/gluster/p1 49152 0 Y 25101
Brick 10.70.42.171:/data/gluster/p2 49153 0 Y 25112
NFS Server on localhost N/A N/A N N/A
Task Status of Volume vol1
------------------------------------------------------------------------------
Task : Tier migrate
ID : e11d5a3d-b1ae-4c3f-8f95-b28993c60939
Status : in progress
Status of volume: vol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.42.171:/data/gluster/hbr1 49154 0 Y 25098
Cold Bricks:
Brick 10.70.42.171:/data/gluster/p1 49152 0 Y 25101
Brick 10.70.42.171:/data/gluster/p2 49153 0 Y 25112
NFS Server on localhost N/A N/A N N/A
Task Status of Volume vol1
------------------------------------------------------------------------------
Task : Detach tier
ID : 76d700b1-5bbd-43ed-95fd-1640b2b4af31
Status : completed
Change-Id: I4bd3b340d4e700e8afed00e1478b8a8b54dfe2e2
BUG: 1261837
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12149
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
The number of epoll worker threads can be configured
by adding the following option to glusterd.vol:
option event-threads <NUM-OF-EPOLL_WORKERS>
BUG: 1242421
Change-Id: I2a9e2d81c64beaf54872081f9ce45355cf4dfca7
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/11630
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
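
A trimmed-down glusterd.vol sketch with the option in place; the
surrounding options and the chosen value are illustrative:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option event-threads 4
    end-volume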
brick
The brick path we use to create shared storage is
/var/run/gluster/ss_brick.
The problem with using this brick path is that /var/run/gluster is a
tmpfs, and all the brick/shared-storage data will be wiped off when the
node restarts. Hence we now use /var/lib/glusterd/ss_brick as the brick
path for the shared storage volume, as this brick and the shared
storage volume are internally created by us (albeit on the user's
request), and contain only internal state data and no user data.
Change-Id: I808d1aa3e204a5d2022086d23bdbfdd44a2cfb1c
BUG: 1218573
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11533
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
This patch is part one of the change to prevent data loss
in a replicate volume on doing a replace-brick commit
force operation.
Problem: After doing replace-brick commit force, there is a
chance that self heal happens from the replaced (sink) brick
rather than the source brick leading to data loss.
Solution: During the commit phase of replace brick, after old
brick is brought down, create a temporary mount and perform
setfattr operation (on virtual xattr) indicating AFR to mark
the replaced brick as sink.
As a part of this change replace-brick command is being changed
to use mgmt_v3 framework rather than op-state-machine framework.
Many thanks to Krishnan Parthasarathi for helping me out on this.
Change-Id: If0d51b5b3cef5b34d5672d46ea12eaa9d35fd894
BUG: 1207829
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/10076
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem: In GLUSTERD_GET_DEFRAG_PROCESS we are using PATH_MAX (4096)
as the max size of the input for the target path, but we have allocated
a NAME_MAX (255) sized buffer for the target.
Now this crash is not seen with a source install, but is seen with
RPMs. The reason is _fortify_fail: this check happens when the
_FORTIFY_SOURCE flag is enabled. That option tries to detect possible
overflow scenarios, like the bug here, and crashes the process.
Change-Id: I26261be85936d2e94a526fdcaa8d3249f8af11c3
BUG: 1228093
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/11090
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
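
A minimal C reproduction of the pattern, with the sizing comment
showing the fix; the socket-path format here is an assumption:

    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
            /* was NAME_MAX (255): too small for a full path, so with
             * -D_FORTIFY_SOURCE=2 (as in RPM builds) glibc would abort
             * the process on an overflowing snprintf; sizing the buffer
             * by PATH_MAX fixes it */
            char sockpath[PATH_MAX];

            snprintf(sockpath, sizeof(sockpath),
                     "%s/rebalance.sock", "/var/run/gluster");
            printf("%s\n", sockpath);
            return 0;
    }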
shared storage
Introducing a global volume set option(cluster.enable-shared-storage)
which helps create and set-up the shared storage meta volume.
gluster volume set all cluster.enable-shared-storage enable
On enabling this option, the system analyzes the number of peers
in the cluster which are currently connected, and chooses three
such peers (including the node the command is issued from). From these
peers a volume (gluster_shared_storage) is created. Depending on the
number of peers available, the volume is either a replica 3
volume (if there are 3 connected peers) or a replica 2 volume (if there
are 2 connected peers). "/var/run/gluster/ss_brick" serves as the
brick path on each node for the shared storage volume. We also mount
the shared storage at "/var/run/gluster/shared_storage" on all the nodes
in the cluster as part of enabling this option. If there is only one
node in the cluster, or only one node is up, then the command will
fail.
Once the volume is created and mounted, the maintenance of the
volume (adding bricks, removing bricks, etc.) is expected to
be the onus of the user.
On disabling the option, we provide the user a warning, and on
affirmation from the user we stop the shared storage volume, and unmount
it from all the nodes in the cluster.
gluster volume set all cluster.enable-shared-storage disable
Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
BUG: 1222013
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/10793
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
ENUM RETCODE ERROR
-------------------------------------------------------------
EG_INTRNL 30800 Internal Error
EG_OPNOTSUP 30801 Gluster Op Not Supported
EG_ANOTRANS 30802 Another Transaction in Progress
EG_BRCKDWN 30803 One or more brick is down
EG_NODEDWN 30804 One or more node is down
EG_HRDLMT 30805 Hard Limit is reached
EG_NOVOL 30806 Volume does not exist
EG_NOSNAP 30807 Snap does not exist
EG_RBALRUN 30808 Rebalance is running
EG_VOLRUN 30809 Volume is running
EG_VOLSTP 30810 Volume is not running
EG_VOLEXST 30811 Volume exists
EG_SNAPEXST 30812 Snapshot exists
EG_ISSNAP 30813 Volume is a snap volume
EG_GEOREPRUN 30814 Geo-Replication is running
EG_NOTTHINP 30815 Bricks are not thinly provisioned
Change-Id: I49a170cdfd77df11fe677e09f4e063d99b159275
BUG: 1212413
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/10588
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
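
The same table as a C enum sketch; the type name is an assumption:

    enum gf_cli_error {
            EG_INTRNL    = 30800, /* Internal Error */
            EG_OPNOTSUP  = 30801, /* Gluster Op Not Supported */
            EG_ANOTRANS  = 30802, /* Another Transaction in Progress */
            EG_BRCKDWN   = 30803, /* One or more brick is down */
            EG_NODEDWN   = 30804, /* One or more node is down */
            EG_HRDLMT    = 30805, /* Hard Limit is reached */
            EG_NOVOL     = 30806, /* Volume does not exist */
            EG_NOSNAP    = 30807, /* Snap does not exist */
            EG_RBALRUN   = 30808, /* Rebalance is running */
            EG_VOLRUN    = 30809, /* Volume is running */
            EG_VOLSTP    = 30810, /* Volume is not running */
            EG_VOLEXST   = 30811, /* Volume exists */
            EG_SNAPEXST  = 30812, /* Snapshot exists */
            EG_ISSNAP    = 30813, /* Volume is a snap volume */
            EG_GEOREPRUN = 30814, /* Geo-Replication is running */
            EG_NOTTHINP  = 30815, /* Bricks are not thinly provisioned */
    };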
Instead of including config.h in each file, have config.h included
from the compiler command line (the -include option).
When a .c file tested for a certain #define and config.h was not
included, incorrect assumptions were made. With this change, that can
not happen again.
BUG: 1222319
Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10808
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
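
A small C illustration of the failure mode; HAVE_SYS_EXTATTR_H is just
an example macro:

    #include <stdio.h>

    /* If config.h is never included, the #ifdef silently picks the
     * fallback branch; -include config.h on the compiler command line
     * makes that mistake impossible. */
    #ifdef HAVE_SYS_EXTATTR_H
    #define XATTR_BACKEND "extattr"
    #else
    #define XATTR_BACKEND "fallback"
    #endif

    int
    main(void)
    {
            printf("xattr backend: %s\n", XATTR_BACKEND);
            return 0;
    }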
When the promotion/demotion daemon starts, it uses the same pidfile
as rebalance. This patch introduces a separate pid file
for it.
Change-Id: Ic484c53f51e00ae6b2d697748a9600b14829e23b
BUG: 1221970
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10792
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
The key concept here is to determine whether a directory is "clean" by
comparing its last-known-good topology to the current one for the
volume. These are stored as "commit hashes" on the directory and the
volume root respectively. The volume's commit hash changes whenever a
brick is added or removed, and a fix-layout is done. A directory's
commit hash changes only when a full rebalance (not just fix-layout)
is done on it. If all bricks are present and have a directory
commit hash that matches the volume commit hash, then we can assume
that every file is in its "proper" place. Therefore, if we look for
a file in that proper place and don't find it, we can assume it's not
on any other subvolume and *safely* skip the global (broadcast to all)
lookup.
Change-Id: Id6ce4593ba1f7daffa74cfab591cb45960629ae3
BUG: 1219637
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/7702
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
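
A C sketch of the "clean directory" test described above; the structure
and names are assumptions, the comparison logic follows the commit
message:

    #include <stdbool.h>
    #include <stdint.h>

    struct subvol_layout {
            bool     present;     /* brick reachable, layout read */
            uint32_t commit_hash; /* directory's last-known-good topology */
    };

    /* If every subvolume is present and carries the volume's current
     * commit hash, a file missing from its hashed location cannot be
     * hiding elsewhere, so the broadcast lookup can be skipped safely. */
    static bool
    can_skip_global_lookup(const struct subvol_layout *s, int n,
                           uint32_t vol_commit_hash)
    {
            for (int i = 0; i < n; i++) {
                    if (!s[i].present || s[i].commit_hash != vol_commit_hash)
                            return false;
            }
            return true;
    }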
>> gluster volume info patchy
Volume Name: patchy
Type: Tier
Volume ID: 8bf1a1ca-6417-484f-821f-18973a7502a8
Status: Created
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: hostname:/home/brick30
Brick2: hostname:/home/brick31
Cold Bricks:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick3: hostname:/home/brick20
Brick4: hostname:/home/brick21
Brick5: hostname:/home/brick23
Brick6: hostname:/home/brick24
Brick7: hostname:/home/brick25
Brick8: hostname:/home/brick26
Change-Id: I7b9025af81263ebecd641b4b6897b20db8b67195
BUG: 1212400
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10339
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
This fix adds support to view the number of promoted or demoted
files from the CLI. The mechanism is isomorphic to checking
the status of volumes being rebalanced:
gluster volume rebalance <vol> tier status
Change-Id: I1b11ca27355ceec36c488967c23531202030e205
BUG: 1213063
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/10292
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The replace-brick operation with data migration support has been
deprecated from gluster.
With this fix, the replace-brick command will support only one form:
gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1094119
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10101
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The command gluster volume status <VOLNAME> should show the status of
the bitrot and scrubber daemons and their pid information.
Along with displaying bitrot and scrubber daemon information in the
gluster volume status command, there should be commands to show their
individual status separately. Those commands are the following:
command to show only bitd daemon information:
gluster volume status <VOLNAME> bitd
command to show only scrubber daemon information:
gluster volume status <VOLNAME> scrub
Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
BUG: 1209752
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10175
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
When ganesha.enable is set to on and features.ganesha is
enabled, there are a few behaviour changes that should
be seen in other volume operations.
1. ganesha.enable can be set to 'on' only
when features.ganesha is set to 'enable'
2. When a gluster vol is started, and the ganesha.enable key was set
to 'on', it should automatically export the volume via NFS-Ganesha.
3. When ganesha.enable is set to 'on' and a volume is stopped, that
volume should be unexported via NFS-Ganesha.
4. gluster vol reset <volname>
If ganesha.enable was set to on, then unexport the volume via
NFS-Ganesha.
5. gluster vol reset all
If features.ganesha is set to enable, then as part of reset all, set
it to disable. This translates to tearing down the cluster.
All the above problems are fixed by checking the global key
and value; depending on the value, specific functions are called.
Also, functions related to global commands
are moved to cli-cmd-global.c.
Commit phase of features.ganesha enable/disable
runs the ganesha-ha.sh setup/teardown respectively.
Before the script begins, it is important that the
NFS-Ganesha service starts on all the HA nodes.
Having the start service commands in the
commit phase could lead to problems.
Moving the pre-requisite service start
commands to the 'stage' phase.
Change-Id: I5a256f94f8e1310ddcd5369f329b7168b2a24c47
BUG: 1200265
Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/10283
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
On restart, glusterd first starts all the bricks present in the volume
and then starts all the services. While starting the services it may
pass volinfo as NULL, which causes an assert failure in the
glusterd_bitdsvc_manager function and crashes glusterd.
Change-Id: Ia14cf5022da88516cdd576eb2d1e0e7b17a3782b
BUG: 1207029
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10241
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Using a uint64_t for the peerinfo generation number was overkill for how
the generation number is used within GlusterD. It also prevented
GlusterD from running on 32-bit architectures, as uatomic_add_return
doesn't support 64-bit values on 32-bit architectures.
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
b78dba4 Use 32-bit generation number
2c37e4b Change other generation number variables to uint32_t
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: I0f310f56a4fb97d6bcbc23255a379ed5bb1ed9e1
BUG: 1205186
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/10425
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
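
A minimal C sketch of the 32-bit counter using liburcu, whose
uatomic_add_return() works on 32-bit architectures for 32-bit values;
the variable names are assumptions:

    #include <stdint.h>
    #include <urcu/uatomic.h>

    static uint32_t peer_generation; /* global peer-list generation */

    /* Atomically bump and return the new generation; each new peerinfo
     * records the value returned here. */
    static uint32_t
    bump_generation(void)
    {
            return uatomic_add_return(&peer_generation, 1);
    }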
Logic for adding the 'glusterd_brickinfo->group' member and using it to
find the brick position has been taken from http://review.gluster.org/#/c/9919.
Thanks to Jeff Darcy for that.
This patch is a part of the arbiter logic implementation for 3 way AFR
details of which can be found at http://review.gluster.org/#/c/9656/
Change-Id: Idbfe4f29ee8e098e0102def8f38b32314316b188
BUG: 1199985
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/10257
Tested-by: NetBSD Build System
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Transaction peer lists were used in GlusterD to track peers belonging
to a transaction. This was needed to prevent newly added peers from
performing partial transactions, which could be incorrect.
This was accomplished by creating a seperate transaction peers list at
the beginning of every transaction. A transaction peers list referenced
the peerinfo data structures of the peers which were present at the
beginning of the transaction. RCU protection of peerinfos referenced by
the transaction peers list is a hard problem and difficult to do
correctly.
To have proper RCU protection of peerinfos, the transaction peers lists
have been replaced by an alternative method to identify peers that
belong to a transaction: using the global peers list along with
generation numbers.
This change introduces a global peer list generation number, and a
generation number for each peerinfo object. Whenever a peerinfo object
is created, the global generation number is bumped, and the peerinfo's
generation number is set to the bumped global generation.
With the above changes, the algorithm to identify peers belonging to a
transaction with RCU protection is as follows,
- At the beginning of a transaction, the current global generation
number is saved
- To identify whether a peer belongs to the transaction,
- Start an RCU read critical section
- For each peer in the global peers list,
- If the peer's generation number is not greater than the saved
generation number, continue with the action on the peer
- End the RCU read critical section
The above algorithm guarantees that,
- The peer list is not modified when a transaction is iterating through
it
- The transaction actions are only done on peers that were present when
the transaction started
But, as a transaction could iterate over the peers list multiple times,
the algorithm cannot guarantee that the same set of peers will be
selected every time. A peer could get deleted between two iterations of
the list within a transaction. This problem existed with the transaction
peers lists as well, but unlike before it will not lead to invalid
memory access and potential crashes. This problem will be addressed
separately.
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
52ded5b Add timespec_cmp
44aedd8 Add create timestamp to peerinfo
7bcbea5 Fix some silly mistakes
13e3241 Add start time to opinfo
17a6727 Use timestamp comparisions to identify xaction peers instead
of a xaction peer list
3be05b6 Correct check for peerinfo age
70d5b58 Use read-critical sections for peer list iteration
ba4dbca Use peerinfo timestamp checks in op-sm instead of xaction peer
list
d63f811 Add more peer status checks when iterating peers list in
glusterd-syncop
1998a2a Timestamp based peer list traversal of mgmtv3 xactions
f3c1a42 Remove transaction peer lists
b8b08ee Remove unused labels
32e5f5b Remove 'npeers' usage
a075fb7 Remove 'npeers' from mgmt-v3 framework
12c9df2 Use generation number instead of timestamps.
9723021 Remove timespec_cmp
80ae2c6 Remove timespec.h include
a9479b0 Address review comments on 10147/4
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: I9be1033525c0a89276f5b5d83dc2eb061918b97f
BUG: 1205186
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/10147
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
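
A C sketch of the selection algorithm using liburcu primitives; the
struct and list names are assumptions:

    #include <stdint.h>
    #include <urcu.h>
    #include <urcu/rculist.h>

    struct peerinfo {
            uint32_t             generation; /* global counter value at the
                                                time the peer was added */
            struct cds_list_head list;
    };

    static CDS_LIST_HEAD(peers); /* global peers list */

    static void
    foreach_xaction_peer(uint32_t saved_generation,
                         void (*action)(struct peerinfo *))
    {
            struct peerinfo *peer;

            rcu_read_lock(); /* peers may be added/removed concurrently */
            cds_list_for_each_entry_rcu(peer, &peers, list) {
                    /* act only on peers that existed when the transaction
                     * began, i.e. generation <= saved generation */
                    if (peer->generation <= saved_generation)
                            action(peer);
            }
            rcu_read_unlock();
    }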
On Linux systems we should use the libuuid from the distribution and not
bundle and statically link the contrib/uuid/ bits.
libglusterfs/src/compat-uuid.h has been introduced and should become an
abstraction layer for different UUID APIs. Non-Linux operating systems
should implement their compatibility layer there.
Once all operating systems have an implementation in compat-uuid.h, we
can remove contrib/uuid/ from the repository completely.
Change-Id: I345e5357644be2521685e00358bb8c83c4ea0577
BUG: 1206587
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10129
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
glusterfs relies on the Linux uuid implementation, whose API is
incompatible with most other systems' uuid. As a result, libglusterfs
has to embed contrib/uuid, which is the Linux implementation, on
non-Linux systems. This implementation is incompatible with the
system's built-in one, but the symbols have the same names.
Usually this is not a problem because when we link with -lglusterfs,
libc's symbols are trumped. However, there is a problem when a program
not linked with -lglusterfs dlopen()s a glusterfs component. In such a
case, libc's uuid implementation is already loaded in the calling
program, and it will be used instead of libglusterfs's implementation,
causing crashes.
A possible workaround is to pre-load libglusterfs in the calling
program (using LD_PRELOAD on NetBSD for instance), but such a mechanism
is neither portable nor flexible. A much better approach is to rename
libglusterfs's uuid_* functions to gf_uuid_* to avoid any possible
conflict. This is what this change does.
BUG: 1206587
Change-Id: I9ccd3e13afed1c7fc18508e92c7beb0f5d49f31a
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/10017
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
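
A sketch of the renamed API as thin wrappers; the real change renames
the implementation itself, so these static-inline shims are only
illustrative:

    #include <uuid/uuid.h>

    /* gf_-prefixed names cannot collide with a system libuuid that a
     * dlopen()ing program has already loaded. */
    static inline void
    gf_uuid_generate(uuid_t out)
    {
            uuid_generate(out);
    }

    static inline int
    gf_uuid_is_null(const uuid_t u)
    {
            return uuid_is_null(u);
    }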