| Commit message | Author | Age | Files | Lines |
| |
If a volume uses quota, the volume delete operation should unmount the
auxiliary quota mount using glusterd_remove_auxiliary_mount(). This
may fail with EBADF if the mount is already gone. In that situation,
ignore the error so that volume delete succeeds.
This fixes a spurious failure on NetBSD in tests/basic/quota.t 74-75
Backport of I69325f71fc2c8af254db46f696c8669a4e6bd7e4
BUG: 1138897
Change-Id: If0d382d44a956bb9fd8c41299f82affdf2ee0618
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/9484
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
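
A standalone sketch of the tolerant cleanup described above (hypothetical helper name and mount path; the real change lives in glusterd_remove_auxiliary_mount()):

#include <errno.h>
#include <stdio.h>
#include <sys/mount.h>

/* Treat "mount already gone" errors as success, so volume delete
 * is not failed spuriously (illustrative, not the actual patch). */
static int
cleanup_aux_mount (const char *path)
{
        if (umount (path) == 0)
                return 0;
        if (errno == EBADF || errno == ENOENT || errno == EINVAL)
                return 0;   /* nothing mounted there any more: fine */
        return -1;
}

int
main (void)
{
        if (cleanup_aux_mount ("/var/run/gluster/aux-quota") != 0)
                perror ("cleanup_aux_mount");
        return 0;
}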
| |
> Change-Id: Ie5eaa2beb4446640b22873f91e17da90d1cd8fad
> BUG: 1174625
> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
> Reviewed-on: http://review.gluster.org/9280
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Kaushal M <kaushal@redhat.com>
> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Change-Id: Ic5049f919fb444b45b3372a3b486183ed46d60f8
BUG: 1180404
Reviewed-on: http://review.gluster.org/9425
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
* For samba export, the entry point is also added to the readdir response.
Change-Id: I825c017e0f16db1f1890bb56e086f36e6558a1c2
BUG: 1175742
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/9218
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9344
| |
Create a new rebalance volfile which will not contain
snap-view client translators, irrespective of the status
of USS.
This volfile will be created and regenerated every time
the fuse-volfile is generated, and will be consumed
by the rebalance process.
Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
BUG: 1175758
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9190
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9339
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
glusterd_handle_snapd_option.
glusterd_handle_snapd_option was returning failure if snapd was not
running, because of which gluster commands were failing.
Change-Id: I22286f4ecf28b57dfb6fb8ceb52ca8bdc66aec5d
BUG: 1175765
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9206
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9311
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
Backport of http://review.gluster.org/9146
For rdma-only volumes, daemons like snapd, glustershd etc. make
use of the tcp transport for their operations. This patch introduces
rdma support by default for those daemons on rdma-only volumes.
To accommodate this change we rename the tcp client volfile
labels from
<volname>-fuse.vol
to
<volname>.tcp-fuse.vol
Change-Id: Id5e5db0680a07fa6b6d003bad45748464cd7658e
BUG: 1166515
Signed-off-by: Anoop C S <achiraya@redhat.com>
Reviewed-on: http://review.gluster.org/9146
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/9183
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
Backport of http://review.gluster.org/8498
As of now, for both tcp-only and rdma-only volumes, volfile
names are in the format <volname>-fuse.vol. This patch changes
the client volfile naming as shown below.
* TCP mounts always use <volname>-fuse.vol
* RDMA mounts always use <volname>.rdma-fuse.vol
Following the above naming convention, for tcp,rdma volumes both
volfiles will be present under /var/lib/glusterd/vols/<volname>/
such that rdma only volume can be mounted as
mount -t glusterfs -o transport=rdma <server/ip>:/<volname> <mount-point>
OR
mount -t glusterfs <server/ip>:/<volname>.rdma <mount-point>
The above command format can also be used to fuse mount a tcp,rdma
volume via rdma transport.
Previously, when we tried to fuse mount a tcp,rdma volume with
transport-type rdma, it silently mounted via tcp. This change also
makes sure that the correct volfile is fetched based on the
transport-type specified on the client side.
Change-Id: Id8b74c1c3e1e7fd323463061f8b13dd623fa6876
BUG: 1166515
Signed-off-by: Anoop C S <achiraya@redhat.com>
Reviewed-on: http://review.gluster.org/8498
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/9182
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
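
A sketch of the transport-dependent volfile path selection implied by the naming scheme above (standalone C; the workdir layout is illustrative):

#include <stdio.h>
#include <string.h>

/* tcp  -> <volname>-fuse.vol
 * rdma -> <volname>.rdma-fuse.vol */
static void
client_volfile_path (char *out, size_t len, const char *workdir,
                     const char *volname, const char *transport)
{
        if (strcmp (transport, "rdma") == 0)
                snprintf (out, len, "%s/vols/%s/%s.rdma-fuse.vol",
                          workdir, volname, volname);
        else
                snprintf (out, len, "%s/vols/%s/%s-fuse.vol",
                          workdir, volname, volname);
}

int
main (void)
{
        char path[512];

        client_volfile_path (path, sizeof path, "/var/lib/glusterd",
                             "myvol", "rdma");
        puts (path);   /* /var/lib/glusterd/vols/myvol/myvol.rdma-fuse.vol */
        return 0;
}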
| |
Backport of http://review.gluster.org/8934
For rdma-only volumes, client connection establishment with the
server takes more than three seconds. A tcp,rdma volume has two
ports, one for tcp and one for rdma: the tcp port is stored under
the brick name and the rdma port is stored as "brickname.rdma"
during pmap_signin.
During the handshake, when trying to get the brick port for rdma
clients, we are not aware of the server transport type, so we
append '.rdma' to the brick name. For a tcp,rdma volume there is
an entry with '.rdma', but the lookup fails for an rdma-only
volume. We then retry without the '.rdma' suffix, using a flag
variable need_different_port, and that succeeds, but the
reconnection happens only after three seconds.
With this patch, rdma-only volumes also append '.rdma' during
pmap_signin, so the handshake gets the correct port on the first
try. Since no retry is needed, the need_different_port flag
variable can be removed.
Change-Id: I82a8a27f0e65a2e287f321e5e8292d86c6baf5b4
BUG: 1166515
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8934
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/9177
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
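
A sketch of the sign-in name construction the commit above describes: an rdma listener registers with the '.rdma' suffix so the client's first port-map lookup succeeds (standalone C, hypothetical names):

#include <stdio.h>

/* rdma listeners sign in as "<brickpath>.rdma", so clients can look
 * the port up directly, with no suffix-less second attempt. */
static void
pmap_signin_name (char *out, size_t len,
                  const char *brickpath, int is_rdma)
{
        snprintf (out, len, "%s%s", brickpath, is_rdma ? ".rdma" : "");
}

int
main (void)
{
        char name[256];

        pmap_signin_name (name, sizeof name, "/bricks/b1", 1);
        puts (name);   /* /bricks/b1.rdma */
        return 0;
}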
| |
Backport of http://review.gluster.org/8762
When we try to FUSE mount a tcp,rdma volume over the rdma
transport, the mount hangs if the brick is down. When a brick
process is killed, the glusterfsd process receives the signal
and calls pmap_signout only for the port listening on tcp.
For a tcp,rdma brick there are two ports, and the port listening
for rdma is never signed out.
The mount process therefore keeps trying to connect to a port
that is not open.
This patch calls pmap_signout for the rdma port as well, so when
the mount tries to get the brick port, it fails cleanly.
Change-Id: I73f90d7340afa3b0b1278924206f1488e4094a62
BUG: 1166515
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8762
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/9176
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
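
A toy model of the fix above: on shutdown, every listener the brick registered, tcp and rdma alike, gets signed out (standalone C; types and names are illustrative):

#include <stdio.h>

struct listener { const char *name; int port; };

/* Sign out every registered listener, not just the tcp one. */
static void
brick_signout_all (const struct listener *ls, int n)
{
        for (int i = 0; i < n; i++)
                printf ("pmap_signout %s (port %d)\n",
                        ls[i].name, ls[i].port);
}

int
main (void)
{
        struct listener ls[] = {
                { "/bricks/b1",      49152 },   /* tcp  */
                { "/bricks/b1.rdma", 49153 },   /* rdma */
        };

        brick_signout_all (ls, 2);
        return 0;
}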
| |
When we mount an rdma-only or tcp,rdma volume over the nfs
protocol using newly peer-probed IPs (nfs-server on new nodes),
the mount fails for rdma-only volumes, and for tcp,rdma volumes
it falls back to the tcp protocol. In other words, newly added
servers always get the transport type "socket".
The cause is that nfs_transport_type is exported correctly but
imported wrongly.
This can be verified as follows:
* Create an rdma-only or tcp,rdma volume.
* Add a new server into the trusted pool.
* Check the client transport type specified in the nfs-server
volgraph. It is always tcp (socket type) instead of rdma.
* For an rdma-only volume, the nfs log shows a 'connection
refused' message for every reconnect between the nfs server
and glusterfsd.
Backport of http://review.gluster.org/8975
cherry picked from commit f380e2029d608f97e3ba9a728605e1d798b09e8d
>BUG: 1157381
>Change-Id: I6bd4979e31adfc72af92c1da06a332557b6289e2
>Author: Jiffin Tony Thottan <jthottan@redhat.com>
>Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
>Reviewed-on: http://review.gluster.org/8975
>Reviewed-by: Meghana M <mmadhusu@redhat.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>Reviewed-by: Niels de Vos <ndevos@redhat.com>
>Tested-by: Niels de Vos <ndevos@redhat.com>
Change-Id: I328c17b07e877fe3b29ca832bf6f2291cea16bbe
BUG: 1166505
Reviewed-on: http://review.gluster.org/9172
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
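
A toy illustration of the export/import symmetry the fix restores: both sides must agree on the dictionary key, so it is defined once (standalone C; the one-entry "dict" stands in for glusterd's real dict API):

#include <stdio.h>
#include <string.h>

#define NFS_TRANSPORT_KEY "nfs-transport-type"

static const char *g_key, *g_val;        /* toy one-entry dictionary */

static void set (const char *k, const char *v) { g_key = k; g_val = v; }

static const char *
get (const char *k)
{
        /* a mismatched key silently falls back to "socket" (the bug) */
        return (g_key && strcmp (g_key, k) == 0) ? g_val : "socket";
}

int
main (void)
{
        set (NFS_TRANSPORT_KEY, "rdma");   /* export side */
        puts (get (NFS_TRANSPORT_KEY));    /* import side: prints "rdma" */
        return 0;
}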
| |
Problem : When glusterd is down on one of the nodes, and USS is
disabled during that time, snapd will still be running
on the node where glusterd was down.
Solution : During glusterd restart, check whether USS is disabled;
if so, issue a kill for snapd.
NOTE : The test case which I wrote in my previous patchset
is facing some spurious failures, hence I thought of removing
that test case. I'll add the test case once the issue is resolved.
Change-Id: I2870ebb4b257d863cdfc319e8485b19e932576e9
BUG: 1175735
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9062
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9307
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
BUG: 1175755
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8133
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9304
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
original brick already has this option
Change-Id: I2841d2ac371a3e9505f6061f35d1d447946c0bae
BUG: 1175732
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8526
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9303
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
Check whether the LV is present before deleting it. In the case
where the LV is absent (already deleted?), there is no need to fail
the snap delete operation.
Also check whether the LV is mounted before trying to umount it.
If it isn't mounted, only remove the LV.
Change-Id: I0f5b2674797299d8748c6fac5b091f0caba65ca4
BUG: 1175754
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8954
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9299
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
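
A sketch of the defensive teardown described above: probe before acting, and don't fail when the work is already done (standalone C driving the LVM tools via system(); paths and helper are illustrative, not glusterd's code):

#include <stdio.h>
#include <stdlib.h>

static int
remove_snap_lv (const char *lv_path, const char *mount_point)
{
        char cmd[512];

        snprintf (cmd, sizeof cmd, "lvs %s >/dev/null 2>&1", lv_path);
        if (system (cmd) != 0)
                return 0;          /* LV already gone: nothing to do */

        snprintf (cmd, sizeof cmd, "mountpoint -q %s && umount %s",
                  mount_point, mount_point);
        (void) system (cmd);       /* umount only if actually mounted */

        snprintf (cmd, sizeof cmd, "lvremove -f %s", lv_path);
        return system (cmd) == 0 ? 0 : -1;
}

int
main (void)
{
        return remove_snap_lv ("/dev/vg0/snap_lv",
                               "/var/run/gluster/snaps/snap1") ? 1 : 0;
}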
| |
For USS we have one snapd log per volume and as many snap logs as
there are snaps of that volume. For example, if there are 4 volumes
with 256 snaps each and USS is enabled, the total number of USS logs
under /var/log/glusterfs would be 1028:
Total logs = (4 (snapd per volume) + 4 (volumes) * 256 (snaps)) = 1028
Hence, it makes sense to move them into a sub-folder structure like
/var/log/glusterfs/snaps/<vol-name>/<snapd + snaps logs>
Change-Id: I29262e6458c3906916923cd67d1145d6ae10bec3
BUG: 1175728
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/9050
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9298
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
Instead of displaying all the snapshots in the uss world,
it is better if we display only the activated snapshots.
Change-Id: I70d3ec212b62ec15956ae3e826bc4201d8dedd17
BUG: 1170548
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8958
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9242
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
By default a snapshot should be deactivated when created, and
this should be a configurable option.
This behaviour can be configured by the command below:
gluster snapshot config activate-on-create <enable|disable>
Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
BUG: 1170921
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8985
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9241
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
set help"
gluster volume set help for uss shows "User Servicable Snapshots"
whereas it should be "User Serviceable Snapshots"
> Change-Id: I3cc8b3ea2cb6d209e1a12678eb7d0e68f4160d99
> BUG: 1160236
> Signed-off-by: vmallika <vmallika@redhat.com>
> Reviewed-on: http://review.gluster.org/9041
> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Change-Id: Id2de0e353d3307023da9239f6dee8b59e8eb0d8f
BUG: 1175645
Reviewed-on: http://review.gluster.org/9295
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Raghavendra Bhat <raghavendra@redhat.com>
| |
Change-Id: Ibc75713d35c9cbafd493c8cf6b5294eaf29f05d4
BUG: 1163920
Signed-off-by: Petr Medonos <petr.medonos@etnetera.cz>
Reviewed-on: http://review.gluster.org/9126
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Problem: glusterd crashes in non-originator slave node during geo-rep
create push-pem.
Cause: In glusterd_op_copy_file, the value of the key "common_pem_contents"
is freed explicitly even after dict_set succeeds, although it is then
taken care of by dict_free.
Solution: Free it only in failure cases, before dict_set.
BUG: 1159210
Change-Id: I726f923915fc24de6588469c27f2cc996c20c59d
Reviewed-On: http://review.gluster.org/9018/
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9026
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
| |
PROBLEM:
Geo-rep misses syncing a few files when I/O happened during
geo-rep start.
ANALYSIS:
To use the available changelogs to handle deletes/renames,
an 'xsync upper limit' was introduced, which limits the xsync
crawl to the changelog register time. But there is a small
time interval between the changelog register time and the
time the changelog is actually enabled. If there is I/O in
this interval, it is synced neither through xsync, as it is
beyond the changelog register time, nor through the changelog,
as the changelog is not enabled yet.
SOLUTION:
Enable changelog and marker during geo-rep create instead
of geo-rep start, so that entries are captured in the changelog
and the interval above is nullified.
BUG: 1159205
Change-Id: If5203eb1cfcbde3999f97a5f1a6a1af4875ac358
Reviewed-on: http://review.gluster.org/8650
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9023
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
| |
When geo-rep is in paused state and a node in the cluster
is rebooted, the geo-rep status goes to "faulty (Paused)"
and no worker processes are started on that node yet. In
this state, when geo-rep is resumed, there is a race between
glusterd and gsyncd in updating the status file, since geo-rep
is resumed first and the status is updated afterwards.
glusterd tries to update it to the previous state, while gsyncd
on restart tries to update it to "Initializing...(Paused)"
because it was paused previously. If gsyncd wins, the state
always reads paused even though the process is not actually
paused. The solution is for glusterd to update the status file
first and then resume.
BUG: 1159195
Change-Id: I4c06f42226db98f5a3c49b90f31ecf6cf2b6d0cb
Reviewed-on: http://review.gluster.org/8911
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9021
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
| |
When GlusterD starts the brick processes, these will listen on all
interfaces. When the 'transport.socket.bind-address' option is set in
glusterd.vol, the brick processes should only listen on the specified
hostname or IP-address.
Cherry picked from commit 430b874c4f1a171c106a9e1e6507e14e79805a1d:
> Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
> BUG: 1149863
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8910
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
BUG: 1151745
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8951
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
When the transport.socket.bind-address option is set to a hostname or
ip-address, the services started by GlusterD fail to connect to the
management daemon. GlusterD always forces the services to connect to the
"localhost" hostname, even if it is not listening on that address.
GlusterD should take the transport.socket.bind-address option into
consideration, and pass it to the glusterfs clients with the -s or
--volfile-server command-line parameter.
Note that this is not a change that removes all hard-coded dependencies
on "localhost". This change merely makes it possible to start required
services when the transport.socket.bind-address option is set.
Cherry picked from commit 283fa797f4bf98130b42c36972305b8cb6e5aaaf:
> Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
> BUG: 1149863
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8908
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
BUG: 1151745
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8950
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
1) Use a system-dependent macro for the umount(8) location instead of
relying on $PATH to find it, for security and portability's sake.
2) Introduce gf_umount_lazy() to replace umount -l invocations
(-l for lazy), which are only supported on Linux. On Linux, behavior
is unchanged; on other systems, we fork an external process (umountd)
that periodically attempts to unmount, and optionally rmdir, the target.
Backport of Ia91167c0652f8ddab85136324b08f87c5ac1edd51d
BUG: 1138897
Change-Id: I9d82c87e85af0dee79f2de39bc697c486b7103c8
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8863
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Csaba Henk <csaba@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
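
A Linux-flavored sketch of the lazy-unmount idea (standalone C; a simplified stand-in for gf_umount_lazy()/umountd, and note BSD spells the call unmount(path, 0)):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

static int
umount_lazy (const char *path)
{
#ifdef __linux__
        return umount2 (path, MNT_DETACH);   /* one-shot lazy unmount */
#else
        if (fork () == 0) {                  /* detached helper a la umountd */
                while (umount (path) != 0 && errno == EBUSY)
                        sleep (1);           /* retry until unreferenced */
                _exit (0);
        }
        return 0;
#endif
}

int
main (void)
{
        if (umount_lazy ("/mnt/gluster") != 0)
                perror ("umount_lazy");
        return 0;
}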
| |
This is a backport of http://review.gluster.org/#/c/8878/
The pgfid extended attributes are used to construct the ancestry path
(from the file to the volume root) for nameless lookups on files.
As NFS relies on nameless lookups heavily, quota enforcement through NFS
would be inconsistent if quota were to be enabled on a volume with
existing data.
The solution is to heal the pgfid extended attributes as part of the
lookup performed by the quota-crawl process: in a posix lookup, check
for the pgfid xattr and, if it is missing, set it.
BUG: 1147953
Change-Id: I707d91a056e07452bfd1e070af5eddaa752a84ac
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8890
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
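
A sketch of the heal-on-lookup step (standalone C using the Linux xattr calls; the key and value here are placeholders, not GlusterFS's exact on-disk format):

#include <errno.h>
#include <stdio.h>
#include <sys/xattr.h>

static void
heal_pgfid (const char *path, const char *key,
            const void *parent_gfid, size_t len)
{
        char buf[16];

        if (lgetxattr (path, key, buf, sizeof buf) >= 0)
                return;                  /* xattr already present */
        if (errno == ENODATA)            /* missing: set it on the fly */
                lsetxattr (path, key, parent_gfid, len, XATTR_CREATE);
}

int
main (void)
{
        unsigned char gfid[16] = { 0 };

        heal_pgfid ("/bricks/b1/file", "trusted.pgfid.example", gfid, 16);
        return 0;
}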
| |
Today, when glusterd's internal locking mechanism fails
with an invalid type, or when another competing lock is being
held, the log message doesn't directly provide enough information
as to which command saw this (first). Adding that information
greatly assists debugging. The following snippet shows how a
failure would look in the log file:
[2014-09-03 04:57:58.549418] E
[glusterd-locks.c:520:glusterd_mgmt_v3_lock]
(-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(__glusterd_handle_create_volume+0x801)
[0x7f30b071e651]
(-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x2c)
[0x7f30b072e19c]
(-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x55d)
[0x7f30b072de6d]))) 0-management: Invalid entity. Cannot perform locking
operation on vol types
Change-Id: I0595f49d60e620e8b065f3506bdb147ccee383a7
BUG: 1145093
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8842
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Introduced in "1f6e992f1aaa676be5bd47d17e58f1171825cf43"
Change-Id: Id684e2f082def7d01ef3c258ea6598da6205591f
BUG: 1117822
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8840
Reviewed-by: Justin Clift <justin@gluster.org>
Tested-by: Justin Clift <justin@gluster.org>
| |
Turn the setfattr(1) absolute path into an OS-dependent macro. Let a
compiler option override it to fit custom installations if needed.
Backport of I8f469c5741a85b6e8d8f6299a9540b3d64611d2f
BUG: 1138897
Change-Id: I279752f2ec5db1abc25830cb9a23290cc401d517
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8828
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Also, moved the backtrace fetching logic to a separate function, and
modified it to work under memory-pressure conditions.
Change-Id: Ie38bea425a085770f41831314aeda95595177ece
BUG:1145093
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8794
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Backport of
371bb42 glusterd: Authenticate management handshake requests
from master.
Management handshake requests, which are used to validate op-version
supported by the peers, are now only allowed if,
- the glusterd doesn't have any other peer, or
- the request was sent by another peer.
This prevents the op-version of a peer being changed because of a
connection attempt by an invalid peer.
BUG: 1144978
Change-Id: I5a909dad37e9873efe8b75dad41b7af71ce91c3d
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/8819
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
The default 'open FD limit' is 1024. As the number of volumes/bricks
increases, the number of brick-to-glusterd socket FDs in glusterd also
increases and runs past the limit.
The solution is to set the open-FD limit to a higher value in glusterd.
Change-Id: Iaa60b2155df2fa5a0759e054bdebffbc09f63ec1
BUG: 1145095
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8578
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8807
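
A standalone sketch of raising the open-FD soft limit at daemon startup (the target value of 65536 is illustrative, not necessarily the value glusterd uses):

#include <stdio.h>
#include <sys/resource.h>

int
main (void)
{
        struct rlimit rl;

        if (getrlimit (RLIMIT_NOFILE, &rl) != 0)
                return 1;
        /* raise the soft limit as far as the hard limit allows */
        rl.rlim_cur = (rl.rlim_max == RLIM_INFINITY || rl.rlim_max > 65536)
                          ? 65536 : rl.rlim_max;
        if (setrlimit (RLIMIT_NOFILE, &rl) != 0)
                perror ("setrlimit");
        printf ("open-fd soft limit now %llu\n",
                (unsigned long long) rl.rlim_cur);
        return 0;
}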
| |
As one of the recommendations for taking a snapshot is not to have
an active geo-replication session, it is better to display an error
saying the session is active when the snapshot create command is issued.
Change-Id: I94593dbd2659610e033ca316176dda1ac8dc5ce6
BUG: 1145091
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8461
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8804
| |
creating snapshots.
When creating a snapshot, an LVM volume is created at the backend
and is mounted under /var/run/gluster/snaps/... However, this mount
does not inherit the mount options of the original brick acting as
the parent for the snap.
If the snap is restored, this could lead to performance degradations,
functional limitations, or in extreme scenarios even potential data
loss.
Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b
BUG: 1145088
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8394
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8802
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
aligned with error messages of info and list.
When a snapshot operation like status, info, or list is performed
on a non-existent snapshot, the error messages differ:
for status the message is 'Snap not found',
while for list and info it is 'Snapshot does not exist'.
Use a consistent error message in all places.
Change-Id: I7b241217dba62fda844481731a6858e4ecb12897
BUG: 1145087
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8309
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8801
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
Problem: The snapshot command fails if one or more bricks are not
thinly provisioned, but the error message is a generic one that is
confusing to the user.
Fix: Provide a correct error message in case of failure.
Change-Id: Iad247f966423a8f73ef6da57cab7ed6cddc05861
BUG: 1145086
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/8377
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8800
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
as-well-as to a particular volume
Problem :
With the current design we can only delete a single snapshot,
and deleting a volume which contains snapshots is not allowed.
Because of that, the user may be forced to delete all the snapshots
manually before being allowed to delete the volume.
Solution:
The following interface lets the user delete all the snapshots
in the system, or all those belonging to a particular volume.
Syntax : gluster snapshot delete all
*Deletes all the snapshots present in the system.
Syntax : gluster snapshot delete volume <volname>
*Deletes all the snapshots present in the specified volume.
========================================================================
Sample Output:
Case 1 : Deleting a single snapshot.
[root@snapshot-24 glusterfs]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully
-----------------------------------------------------------------
Case 2 : Deleting all the snapshots in a Volume.
[root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
Volume (vol1) contains 9 snapshot(s).
Do you still want to continue and delete them? (y/n) y
snapshot delete: snap2: snap removed successfully
snapshot delete: snap3: snap removed successfully
snapshot delete: snap4: snap removed successfully
snapshot delete: snap5: snap removed successfully
.
.
.
-----------------------------------------------------------------
Case 3 : Deleting all the snapshots in a system.
[root@snapshot-24 glusterfs]# gluster snapshot delete all
System contains 4 snapshot(s).
Do you still want to continue and delete them? (y/n) y
snapshot delete: snap7: snap removed successfully
snapshot delete: snap8: snap removed successfully
snapshot delete: snap9: snap removed successfully
snapshot delete: snap10: snap removed successfully
========================================================================
Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
BUG: 1145083
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8162
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8798
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
| |
performed on a cluster with op-version less than 30600.
Currently the cli reports 'Another transaction is in progress.
Please try again after sometime' when a snapshot operation is performed
on a cluster with op-version less than 30600.
We need to print the correct error message in this case.
Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141
BUG: 1145068
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8371
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8796
| |
set explicitly.
Problem : Even when the snap-max-hard-limit, snap-max-soft-limit and
auto-delete values were not set explicitly, they were shown
in the output of gluster volume info.
Solution : Check whether the value is already present in the dictionary
(meaning it has been set); if the value is not present, use
the default value.
NOTE : This patch doesn't solve the problem where values
which are set globally are displayed in gluster volume info.
Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050
BUG: 1145020
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8191
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8793
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
| |
notifications
* As of now, snapview-server polls glusterd (sending rpc requests) at
regular, non-configurable intervals to get the latest list of snapshots.
Instead, register a callback with glusterd so that glusterd notifies
snapd whenever a snapshot is created/deleted,
and snapview-server can reconfigure itself.
rebase of the patch http://review.gluster.org/#/c/8150/
Change-Id: Iee2582b1a823d50c79233a41cf2106f458b40691
BUG: 1143961
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8767
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Requirement:
Snapshot needs an API to fail the CLI if any geo-rep session is active
for that volume.
Solution:
A function "gd_vol_is_geo_rep_active" is provided to check if any
geo-rep session is active for that volume. An in memory dict called
'gsync_running_slaves' is maintained in 'volinfo' structure to keep
track of active geo-rep session for the volume. The key
'slavenode::slavevol' with value 'running' is added whenever geo-rep
is started/resumed into the dict and the same is removed if
stopped/paused. So the 'count' in dict is used to decide whether the
geo-rep is active or not for that volume.
Also added "this->name" to gf_log in the routines this patch
touches.
BUG: 1138952
Change-Id: Ib13aeb509a56edf510651b77e20bf3cc43a3e763
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/8459
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8645
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
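
A toy model of the bookkeeping described above: each started/resumed session adds one entry keyed "slavenode::slavevol", and the volume is geo-rep active iff the count is non-zero (standalone C; the types stand in for glusterd's volinfo/dict):

#include <stdio.h>

struct volinfo { int gsync_running_count; };

static int
gd_vol_is_geo_rep_active (const struct volinfo *v)
{
        return v->gsync_running_count > 0;
}

int
main (void)
{
        struct volinfo vol = { 0 };

        vol.gsync_running_count++;   /* geo-rep start/resume adds an entry */
        printf ("active: %d\n", gd_vol_is_geo_rep_active (&vol));
        vol.gsync_running_count--;   /* geo-rep stop/pause removes it */
        printf ("active: %d\n", gd_vol_is_geo_rep_active (&vol));
        return 0;
}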
| |
Problem:
Geo-replication does a full xsync crawl after snapshot
restoration of slave and master. It does not do a history crawl.
Analysis:
Marker creates the 'marker.tstamp' file when geo-rep is started
for the first time. The virtual extended attribute
'trusted.glusterfs.volume-mark' is maintained, and whenever
it is queried on a gluster mount point, marker fills it on
the fly and returns a combination of the uuid, the ctime of
marker.tstamp and other fields. So the ctime of marker.tstamp,
in other words the 'volume-mark', marks the geo-rep start time
when the session is freshly created.
Hence, after the first filesystem crawl (xsync) done during the
first geo-rep start, stime should always be less than
'volume-mark'. Whenever stime is less than volume-mark,
a full filesystem crawl (xsync) is done.
Root Cause:
When a snapshot is restored, the marker.tstamp file is freshly
created, losing the timestamps it was originally created with.
Solution:
1. Change is made to depend on mtime instead of ctime.
2. The mtime and atime of marker.tstamp are restored when a
snapshot is created and restored.
BUG: 1138952
Change-Id: I0e19e1cb2593171b9a2b41d0d303330feb7fd2b3
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/8401
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/8642
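
A sketch of restoring marker.tstamp's times after the file is recreated (standalone C; the brick path is illustrative):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/time.h>

/* Put back the atime/mtime recorded before the snapshot operation,
 * so the 'volume-mark' comparison keeps working. */
static int
restore_tstamp_times (const char *path, const struct stat *orig)
{
        struct timeval tv[2];

        tv[0].tv_sec = orig->st_atime;  tv[0].tv_usec = 0;
        tv[1].tv_sec = orig->st_mtime;  tv[1].tv_usec = 0;
        return utimes (path, tv);
}

int
main (void)
{
        const char *path = "/bricks/b1/.glusterfs/marker.tstamp";
        struct stat st;

        if (stat (path, &st) == 0)              /* record the times ... */
                restore_tstamp_times (path, &st);   /* ... then restore */
        return 0;
}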
| |
Added a script, check_goto.pl, that, when run from
the source code root, scans all .c files to match
the following pattern:
label:
        if (condition)
                goto label;
On finding such a pattern, the script prints the file name
and the line number. There are certain cases where the above
recursive pattern is intended; those labels are added to
ignore-labels. Thanks to Vijaikumar Mallikarjuna for the perl
script.
Also fixed all such existing errors.
BUG: 1138952
Change-Id: Ie6b75621711736e7e30f2f9d25e50435d58fc1e2
Signed-off-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8307
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8637
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
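
For reference, a compilable instance of the pattern the script flags (deliberately bad code; with a persistently failing call the guarded goto would spin forever):

#include <unistd.h>

static int
risky_cleanup (int fd)
{
        int ret;
out:
        ret = close (fd);
        if (ret)
                goto out;   /* flagged: 'out' jumps back to itself */
        return ret;
}

int
main (void)
{
        return risky_cleanup (0);   /* fd 0 closes fine; an invalid fd
                                     * would loop here forever */
}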
| |
Linux defines ENODATA and ENOATTR with the same value, which means
code can mix the two up without breaking.
FreeBSD does not have ENODATA, and GlusterFS defines it as ENOATTR,
just like Linux does.
On NetBSD, ENODATA != ENOATTR, hence we need to check for both values
to get portable behavior.
This is a backport of I003a3af055fdad285d235f2a0c192c9cce56fab8
BUG: 1138897
Change-Id: I272cd53e637993c7fd2ac74bd607001d3581ced7
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8634
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
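
A portable missing-xattr errno check along the lines described above (standalone C; the fallback defines mirror what the commit says about each platform):

#include <errno.h>

#ifndef ENODATA
#define ENODATA ENOATTR   /* FreeBSD has no ENODATA */
#endif
#ifndef ENOATTR
#define ENOATTR ENODATA   /* plain Linux headers have no ENOATTR */
#endif

static int
is_missing_xattr_errno (int err)
{
#if ENODATA != ENOATTR
        return err == ENODATA || err == ENOATTR;   /* NetBSD: two values */
#else
        return err == ENODATA;                     /* Linux/FreeBSD */
#endif
}

int
main (void)
{
        return is_missing_xattr_errno (ENODATA) ? 0 : 1;
}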
| |
NetBSD's FUSE being a pure userland implementation, there is no /dev/fuse
to open. Test /dev/puffs (the kernel fs-in-userland subsystem supporting
FUSE) instead.
This is a backport of Ia65e95c246dc31ea2839cf64d7c851430828542e
BUG: 1138897
Change-Id: I9beb673cff08d429c8ae66a819266f6037086b3e
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8633
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
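
A sketch of the portable device probe (standalone C; the real test may differ in how it opens the device):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main (void)
{
#ifdef __NetBSD__
        const char *dev = "/dev/puffs";   /* kernel fs-in-userland device */
#else
        const char *dev = "/dev/fuse";
#endif
        int fd = open (dev, O_RDWR);

        if (fd < 0) {
                perror (dev);
                return 1;
        }
        close (fd);
        printf ("%s is available\n", dev);
        return 0;
}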
| |
On NetBSD and FreeBSD, doing a 'gluster volume start $volume force'
causes the NFS server, quotad, snapd and glustershd to go undetected
by glusterd once the volume has restarted. 'gluster volume status'
shows the processes as 'N' in the online column, although they have
been launched successfully.
This happens because glusterd attempts to connect to its child
processes in the window between the child's unlink() on the socket in
__socket_server_bind() and its calls to bind() and listen().
A different scheduling policy may explain why the problem does not
show up on Linux, but it may pop up some day since no guarantees
are made here.
This patchset works around the problem by introducing a boolean
transport.socket.ignore-enoent option, set by nfs and glustershd,
which keeps ENOENT from being fatal and lets glusterd retry and
succeed later. The behavior of other clients is unaffected.
This is a backport of Ifdc4d45b2513743ed42ee235a5c61a086321644c
BUG: 1138897
Change-Id: I04472f045249c99a9492218ceebfab847474db2d
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8630
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
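
A standalone sketch of the tolerated race: if the server is between its unlink() and bind() on the unix socket, connect() fails with ENOENT, and with ignore-enoent semantics the client retries instead of giving up (socket path and retry policy are illustrative):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main (void)
{
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        int fd, tries;

        strncpy (sa.sun_path, "/var/run/gluster/demo.socket",
                 sizeof sa.sun_path - 1);
        fd = socket (AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
                return 1;
        for (tries = 0; tries < 10; tries++) {
                if (connect (fd, (struct sockaddr *) &sa, sizeof sa) == 0)
                        return 0;        /* connected */
                if (errno != ENOENT)
                        break;           /* a real failure: give up */
                usleep (200000);         /* socket not there *yet*: retry */
        }
        perror ("connect");
        return 1;
}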
| |
Backport from master branch - http://review.gluster.org/#/c/8246/
- Break away from the previously hard-coded '/var/lib/glusterd';
instead rely on the 'configure' value of 'localstatedir'
- Provide 's/lib/db' as the default working directory for the gluster
management daemon on BSD and Darwin based installations
- loff_t is really off_t on Darwin
- Fixed the warnings generated by clang on FreeBSD/Darwin
- Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all
platforms.
- Define a proper environment for running tests: set the correct PATH
and LD_LIBRARY_PATH when running tests, so that the desired version
of glusterfs is used, regardless of where it is installed.
(Thanks to manu@netbsd.org for this additional work)
Change-Id: I06e684ac4c26d1e74c9daf76753403ad15f79276
BUG: 1130308
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8486
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
This patch adds xml output for the geo-replication status
and status detail commands.
Sample:
--------------------------------------------------------------
<geoRep>
<volume>
<name>master</name>
<sessions>
<session>
<session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave>
<pair>
<master_node>localhost.localdomain</master_node>
<master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid>
<master_brick>/root/master_b1</master_brick>
<slave>ssh://XXXXXXXXXXX::slave</slave>
<status>faulty</status>
<checkpoint_status>N/A</checkpoint_status>
<crawl_status>N/A</crawl_status>
</pair>
</session>
</sessions>
</volume>
</geoRep>
-------------------------------------------------------------
Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f
BUG: 1133464
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/8089
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/8532
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Change-Id: I012899be08a06d39ea5c9fb98a66acf833d7213f
BUG: 1120589
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/8323
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
| |
Change-Id: I01afe64685a5794cce9265580c6c5de57a045201
BUG: 1119582
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8310
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|