| Commit message | Author | Age | Files | Lines |
|
Using a uint64_t for the peerinfo generation number was overkill for how
the generation number is used within GlusterD. It also prevented
GlusterD from running on 32-bit architectures, as uatomic_add_return
doesn't support 64-bit values on 32-bit architectures.
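A minimal sketch of the resulting pattern, assuming liburcu's uatomic API;
the variable and function names are illustrative, not the actual GlusterD
symbols:

    #include <stdint.h>
    #include <urcu/uatomic.h>

    /* Global peer-list generation counter. Kept 32 bits wide so that
     * uatomic_add_return() also works on 32-bit architectures. */
    static uint32_t peer_generation;

    /* Bump the global generation and return the new value; a newly
     * created peerinfo stores this as its own generation number. */
    static uint32_t
    bump_peer_generation (void)
    {
            return uatomic_add_return (&peer_generation, 1);
    }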
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
b78dba4 Use 32-bit generation number
2c37e4b Change other generation number variables to uint32_t
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: I0f310f56a4fb97d6bcbc23255a379ed5bb1ed9e1
BUG: 1218031
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/10426
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Transaction peer lists were used in GlusterD to track the peers belonging
to a transaction. This was needed to prevent newly added peers from
performing partial transactions, which could be incorrect.
This was accomplished by creating a separate transaction peers list at
the beginning of every transaction. A transaction peers list referenced
the peerinfo data structures of the peers which were present at the
beginning of the transaction. RCU protection of peerinfos referenced by
a transaction peers list is a hard problem and difficult to do
correctly.
To have proper RCU protection of peerinfos, the transaction peers lists
have been replaced by an alternative method to identify peers that
belong to a transaction. The alternative method is to use the global
peers list along with generation numbers to identify peers that should
belong to a transaction.
This change introduces a global peer list generation number, and a
generation number for each peerinfo object. Whenever a peerinfo object
is created, the global generation number is bumped, and the peerinfo's
generation number is set to the bumped global generation.
With the above changes, the algorithm to identify peers belonging to a
transaction with RCU protection is as follows:
- At the beginning of a transaction, the current global generation
number is saved
- To identify if a peer belongs to the transaction:
- Start an RCU read critical section
- For each peer in the global peers list:
- If the peer's generation number is not greater than the saved
generation number, continue with the action on the peer
- End the RCU read critical section
The above algorithm guarantees that,
- The peer list is not modified when a transaction is iterating through
it
- The transaction actions are only done on peers that were present when
the transaction started
But, as a transaction could iterate over the peers list multiple times,
the algorithm cannot guarantee that the same set of peers will be selected
every time. A peer could get deleted between two iterations of the list
within a transaction. This problem existed with the transaction peers lists
as well, but unlike before it will no longer lead to invalid memory access
and potential crashes. This problem will be addressed separately.
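A minimal sketch of the iteration pattern described above, assuming
liburcu's rculist API; the structure and field names are illustrative, not
glusterd's actual definitions:

    #include <stdint.h>
    #include <urcu.h>
    #include <urcu/rculist.h>

    struct peerinfo {
            uint32_t             generation; /* set when the peer was added */
            struct cds_list_head list;       /* node on the global peers list */
    };

    static CDS_LIST_HEAD (global_peers);

    /* Visit every peer that already existed when the transaction began,
     * i.e. whose generation is not greater than the saved generation. */
    static void
    xaction_foreach_peer (uint32_t saved_generation,
                          void (*action) (struct peerinfo *))
    {
            struct peerinfo *peer;

            rcu_read_lock ();
            cds_list_for_each_entry_rcu (peer, &global_peers, list) {
                    if (peer->generation <= saved_generation)
                            action (peer);
            }
            rcu_read_unlock ();
    }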
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
52ded5b Add timespec_cmp
44aedd8 Add create timestamp to peerinfo
7bcbea5 Fix some silly mistakes
13e3241 Add start time to opinfo
17a6727 Use timestamp comparisions to identify xaction peers instead
of a xaction peer list
3be05b6 Correct check for peerinfo age
70d5b58 Use read-critical sections for peer list iteration
ba4dbca Use peerinfo timestamp checks in op-sm instead of xaction peer
list
d63f811 Add more peer status checks when iterating peers list in
glusterd-syncop
1998a2a Timestamp based peer list traversal of mgmtv3 xactions
f3c1a42 Remove transaction peer lists
b8b08ee Remove unused labels
32e5f5b Remove 'npeers' usage
a075fb7 Remove 'npeers' from mgmt-v3 framework
12c9df2 Use generation number instead of timestamps.
9723021 Remove timespec_cmp
80ae2c6 Remove timespec.h include
a9479b0 Address review comments on 10147/4
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: I9be1033525c0a89276f5b5d83dc2eb061918b97f
BUG: 1205186
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/10147
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
On Linux systems we should use the libuuid from the distribution and not
bundle and statically link the contrib/uuid/ bits.
libglusterfs/src/compat-uuid.h has been introduced and should become an
abstraction layer for different UUID APIs. Non-Linux operating systems
should implement their compatibility layer there.
Once all operating systems have an implementation in compat-uuid.h, we
can remove contrib/uuid/ from the repository completely.
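An illustrative fragment of such an abstraction layer; the gf_-prefixed
wrapper names are an assumption, not necessarily the final API:

    /* compat-uuid.h sketch: on Linux, forward to the distribution's
     * libuuid; other platforms add their own wrappers here. */
    #ifdef __linux__

    #include <uuid/uuid.h>

    static inline void
    gf_uuid_generate (uuid_t out)
    {
            uuid_generate (out);
    }

    static inline int
    gf_uuid_compare (uuid_t u1, uuid_t u2)
    {
            return uuid_compare (u1, u2);
    }

    #else
    /* Non-Linux operating systems implement their compatibility
     * wrappers here, or keep using the bundled contrib/uuid/ bits. */
    #endif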
Change-Id: I345e5357644be2521685e00358bb8c83c4ea0577
BUG: 1206587
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10129
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
http://review.gluster.org/9269 addresses maintaining a local xaction_peers
list in the syncop and mgmt_v3 frameworks. This patch maintains a local
xaction_peers list for the op-sm framework as well.
Change-Id: Idd8484463fed196b3b18c2df7f550a3302c6e138
BUG: 1204727
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9972
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
- Include xattrop64-watchlist for index xlator for disperse volumes.
- Change the existing functions to also consider disperse volumes when
sending commands to disperse xlators in the self-heal daemon.
Change-Id: Iae75a5d3dd5642454a2ebf5840feba35780d8adb
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9793
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
This patch replaces usage of the libglusterfs lists data structures and
API in glusterd with the lists data structures and API from liburcu. The
liburcu data structures and APIs are a drop-in replacement for
libglusterfs lists.
All usages have been changed to keep the code consistent and free from
confusion.
NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data
structures and API, as it holds rpc_transport_t objects, which are not a
part of glusterd and are not being changed in this patch.
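A short sketch of how direct the mapping between the two APIs is; the
element type and names are illustrative:

    #include <urcu/list.h>

    struct peer {
            int                  id;
            struct cds_list_head list;
    };

    static void
    example (void)
    {
            struct cds_list_head head;
            struct peer          p = { .id = 1 };
            struct peer         *iter;

            /* libglusterfs: INIT_LIST_HEAD (&head);    */
            CDS_INIT_LIST_HEAD (&head);
            /* libglusterfs: list_add (&p.list, &head); */
            cds_list_add (&p.list, &head);
            /* libglusterfs: list_for_each_entry (...)  */
            cds_list_for_each_entry (iter, &head, list) {
                    (void) iter; /* same iteration semantics as before */
            }
    }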
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
6dac576 Replace libglusterfs lists with liburcu lists
a51b5ab Fix compilation issues
d98a06f Fix merge issues
a5d918e Remove merge remnant
1cca113 More style cleanup
1917be3 Address review comments on 9624/1
8d10f13 Use cds_lists for glusterd_svc_t
524ad5d Add rculist header in glusterd-conn-helper.c
646f294 glusterd: add list_add_order API honouring rcu
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9624
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
|
Replace brick:
If geo-replication was configured on a volume, replace brick
used to fail. This patch allows replace brick to go through
if all geo-rep sessions corresponding to that volume are stopped.
Remove brick:
There was no check for geo-replication for remove brick. Enforce
'remove brick commit' to fail if a geo-rep session corresponding
to the volume is running. Allow 'remove brick commit' only if all of
the geo-rep sessions corresponding to that volume are stopped.
Code is re-organized for better readability.
Change-Id: I02282c2764d8b81e319489c977847e6e437511a4
BUG: 1179638
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9402
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: ajeet jha <ajha@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
glusterd's op-sm infrastructure has some loopholes in handling error cases
in the locking/unlocking phases, which end up leaving stale locks and
restrict further transactions from going through.
This patch still doesn't handle all possible unlocking error cases, as the
framework has neither a retry mechanism nor a lock timeout. For example, if
unlocking fails on one of the peers, the cluster-wide lock is not released
and no further transaction can be made until the originator node, or the
node where unlocking failed, is restarted.
Following test cases were executed (with the help of gdb) after applying this
patch:
* RPC times out in lock cbk
* Decoding of RPC response in lock cbk fails
* RPC response is received from unknown peer in lock cbk
* Setting peerinfo in dictionary fails while sending lock request for the
first peer in the list
* Setting peerinfo in dictionary fails while sending lock request for other
peers
* Lock RPC could not be sent to peers
For all the above test cases, the success criterion is that no stale locks
are left behind.
Change-Id: Ia1550341c31005c7850ee1b2697161c9ca04b01a
BUG: 1154635
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9012
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
While creating a volume or adding a brick, validation against _POSIX_PATH_MAX
is done on the absolute pathname instead of the relative pathname, due to
which a brickpath shorter than _POSIX_PATH_MAX may also fail the validation
if the directory length is greater than
(_POSIX_PATH_MAX - strlen(brickpath/volume name)).
This fix also corrects one cli response message, to say that the volume file
is too long instead of the brick path being too long (when brickpath length
validation doesn't fail but the volfile length validation fails).
It is also important to note that with the current design of volfile naming,
it can not be guaranteed that volname and brickpath can have a max of
_POSIX_PATH_MAX characters.
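A hedged sketch of the corrected check, validating the length of the brick
path as supplied rather than the directory-prefixed absolute form; the
function is illustrative, not glusterd's actual validation:

    #include <limits.h>
    #include <string.h>

    static int
    validate_brickpath_len (const char *brickpath)
    {
            /* Compare the user-supplied path, not the absolute one,
             * against the _POSIX_PATH_MAX limit. */
            if (strlen (brickpath) >= _POSIX_PATH_MAX)
                    return -1; /* reported as "brick path is too long" */

            return 0;
    }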
Change-Id: I1283d1f9dea96ae797620002c8723719f26a866d
BUG: 1085330
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7420
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
If entries like state_file or pid-file are missing in the gsyncd.conf,
or if the gsyncd.conf itself is missing, glusterd looks for the missing
configs in the gsyncd_template.conf.
Status will display "Config Corrupted" as long as the entry is missing in
the config file. A missing state-file entry in both the config and the
template will not allow starting a geo-rep session.
However, stop force will successfully stop an already running session,
if the state-file entries are missing in both the config file and
the template, as long as either of them has a pid-file entry.
If the pid-file entry is missing in the gsyncd.conf file, starting a
geo-rep session will not be allowed.
If the pid-file entry is missing in an already started session, then
stop force will fetch it from the config template and stop the session.
If the pid-file entry is missing in both the config and the template,
stop force will fail with an appropriate error stating the pid-file entry
is missing.
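A minimal sketch of the lookup-with-fallback behaviour described above;
conf_lookup and georep_conf_get are hypothetical helpers, not glusterd
functions:

    #include <stddef.h>

    /* Hypothetical helper: return the value of `key` from the named
     * config file, or NULL when the key (or the file) is missing.
     * Stubbed out so the sketch stands alone. */
    static const char *
    conf_lookup (const char *file, const char *key)
    {
            (void) file;
            (void) key;
            return NULL;
    }

    /* Consult gsyncd.conf first, then gsyncd_template.conf. A NULL
     * state-file result shows up as "Config Corrupted" in status; a
     * NULL pid-file result blocks 'start'. */
    static const char *
    georep_conf_get (const char *key)
    {
            const char *val = conf_lookup ("gsyncd.conf", key);

            if (!val)
                    val = conf_lookup ("gsyncd_template.conf", key);

            return val;
    }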
Change-Id: I81d7cbc4af085d82895bbef46ca732555aa5365d
BUG: 1059092
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/6856
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Moved globals (the vol_lock and txn_opinfo dicts, and global_txn_id) into
glusterd priv.
Moved glusterd_op_send_cli_response() out of gd_unlock_op_phase,
as gd_unlock_op_phase and glusterd_clear_txn_opinfo should only
be called if the txn id has been successfully generated. The
cli resp should be sent irrespective of that.
Changed log levels from ERROR to WARNING for some volume lock logs
where the logs are expected and are not errors.
Added logs for better transparency of transaction ids.
Change-Id: Ifac9b23aa9f1648c9ae252cfd3ac50bb2ed46728
BUG: 1011470
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/6976
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
With this patch we are replacing the existing cluster-wide
lock taken on glusterds across the cluster, with volume locks
which are also taken on glusterds across the cluster, but are
volume specific. So with the volume locks we are able to perform
more than one gluster operation at the same time, as long as the
operations are being performed on different volumes.
We maintain a global list of volume-locks (using a dict for a list)
where the key is the volume name, and which saves the uuid of the
originator glusterd. These locks are held and released per volume
transaction.
In order to achieve multiple gluster operations occurring at the
same time, we also separate opinfos in the op-state-machine, as a
part of this patch. To do so, we generate a unique transaction-id
(uuid) per gluster transaction. An opinfo is then associated with
this transaction id, which is used throughout the transaction. We
maintain a run-time global list(using a dict) of transaction-ids,
and their respective opinfos to achieve this.
Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks
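A hedged sketch of the volume-lock dict described above, assuming the
libglusterfs dict_t API (dict_get_bin, dict_set_bin, dict_del); the function
shapes are illustrative, not glusterd's actual locking code:

    #include <stdlib.h>
    #include <uuid/uuid.h>

    #include "dict.h" /* libglusterfs dict_t */

    /* vol_locks maps a volume name to the uuid of the originator
     * glusterd. Locking fails if the key is already present. */
    static int
    volume_lock (dict_t *vol_locks, char *volname, uuid_t originator)
    {
            void   *held  = NULL;
            uuid_t *owner = NULL;

            if (dict_get_bin (vol_locks, volname, &held) == 0)
                    return -1; /* another transaction holds this volume */

            owner = malloc (sizeof (uuid_t));
            if (!owner)
                    return -1;

            uuid_copy (*owner, originator);
            return dict_set_bin (vol_locks, volname, owner, sizeof (uuid_t));
    }

    /* Released at the end of the per-volume transaction. */
    static void
    volume_unlock (dict_t *vol_locks, char *volname)
    {
            dict_del (vol_locks, volname);
    }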
Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
BUG: 1011470
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5994
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
In the staging phase of volume stop, code is added to read the state_file
for each slave of the master to which the volume belongs. If any of the
geo-rep sessions is active with at least one slave, the volume is not
allowed to stop; otherwise it is allowed.
Change-Id: I4a01a357fc86b872e9635b3d19998cdbd9545114
BUG: 1049727
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/6663
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: I2210f1ac7de04c6025c0ec02d998b626d41466ae
BUG: 1028672
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6303
Reviewed-by: M. Mohan Kumar <mohan@in.ibm.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Current BD xlator (block backend) has a few limitations such as
* Creation of directories not supported
* Supports only single brick
* Does not use extended attributes (and client gfid) like posix xlator
* Creation of special files (symbolic links, device nodes etc) not
supported
The basic limitation of not allowing directory creation is blocking
oVirt/VDSM from consuming the BD xlator as part of a Gluster domain, since
VDSM creates multi-level directories when GlusterFS is used as the storage
backend for storing VM images.
To overcome these limitations a new BD xlator with following
improvements is suggested.
* New hybrid BD xlator that handles both regular files and block device
files
* The volume will have both POSIX and BD bricks. Regular files are
created on POSIX bricks, block devices are created on the BD brick (VG)
* BD xlator leverages the existing POSIX xlator for most POSIX calls and
hence sits above the POSIX xlator
* Block device file is differentiated from regular file by an extended
attribute
* The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a
posix file to Logical Volume (LV).
* When a client sends a request to set BD_XATTR on a posix file, a new
LV is created and mapped to the posix file. So every block device will
have a representative file in the POSIX brick with 'user.glusterfs.bd'
(BD_XATTR) set.
* Hereafter, all operations on this file result in LV-related
operations.
For example, opening a file that has BD_XATTR set results in opening
the LV block device, and reading results in reading the corresponding LV
block device.
When the BD xlator gets a request to set BD_XATTR via a setxattr call, it
creates an LV and places information about this LV in the xattr of the
posix file. The xattr "user.glusterfs.bd" is used to identify that a posix
file is mapped to a BD.
Usage:
Server side:
[root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
It creates a distributed gluster volume 'bdvol' with Volume Group vg1
using posix brick /storage/vg1_info in host1 and Volume Group vg2 using
/storage/vg2_info in host2.
[root@host1 ~]# gluster volume start bdvol
Client side:
[root@node ~]# mount -t glusterfs host1:/bdvol /media
[root@node ~]# touch /media/posix
It creates regular posix file 'posix' in either host1:/vg1 or host2:/vg2 brick
[root@node ~]# mkdir /media/image
[root@node ~]# touch /media/image/lv1
It also creates regular posix file 'lv1' in either host1:/vg1 or
host2:/vg2 brick
[root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
[root@node ~]#
The above setxattr results in creating a new LV in the corresponding brick's
VG and sets 'user.glusterfs.bd' with the value 'lv:<default-extent-size>'
[root@node ~]# truncate -s5G /media/image/lv1
This results in resizing LV 'lv1' to 5G
New BD xlator code is placed in xlators/storage/bd directory.
Also add a volume-uuid to the VG so that the same VG can't be used for other
bricks/volumes. After deleting a gluster volume, one has to manually
remove the associated tag using vgchange <vg-name> --deltag
<trusted.glusterfs.volume-id:<volume-id>>
Changes from previous version V5:
* Removed support for delayed deleting of LVs
Changes from previous version V4:
* Consolidated the patches
* Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR.
Changes from previous version V3:
* Added support in FUSE to support full/linked clone
* Added support to merge snapshots and provide information about origin
* bd_map xlator removed
* iatt structure used in inode_ctx. iatt is cached and updated during
fsync/flush
* aio support
* Type and capabilities of volume are exported through getxattr
Changes from version 2:
* Used inode_context for caching BD size and to check if loc/fd is BD or
not.
* Added GlusterFS server offloaded copy and snapshot through setfattr
FOP. As part of this libgfapi is modified.
* BD xlator supports stripe
* During unlinking, if an LV file is already opened, it is added to a delete
list and bd_del_thread tries to delete from this list when the last
reference to that file is closed.
Changes from previous version:
* gfid is used as name of LV
* ? is used to specify VG name for creating BD volume in volume
create, add-brick. gluster volume create volname host:/path?vg
* open-behind issue is fixed
* A replicate brick can be added dynamically and LVs from source brick
are replicated to destination brick
* A distribute brick can be added dynamically and rebalance operation
distributes existing LVs/files to the new brick
* Thin provisioning support added.
* bd_map xlator support retained
* setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and
setfattr -n user.glusterfs.bd -v "thin" creates thin LV
* Capability and backend information added to gluster volume info (and
--xml) so that management tools can exploit the BD xlator.
* tracing support for bd xlator added
TODO:
* Add support to display snapshots for a given LV
* Display posix filename for list-origin instead of gfid
Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/4809
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
The quota volume reset command without the "force"
option is fixed and doesn't fail anymore. It resets
unprotected fields and not the protected ones.
Also, an appropriate message is provided to the user
for the following cases:
1. Only unprotected fields are reset; the "force" option
should be used to reset protected fields.
2. Both protected and unprotected fields are reset.
3. No field was reset; the "force" option is required.
A test case for the same is also added.
Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
BUG: 1022905
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/6135
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Currently, to know the number of files to be healed, either the user
has to go to the backend and check the number of entries present in the
indices/xattrop directory. But if a volume consists of a large
number of bricks, going to each backend and counting the number
of entries is a time-taking task. Otherwise, the user can give the
gluster volume heal vol-name info command, but with this
approach, if the number of entries in the indices/xattrop directory is
very huge, it will consume time.
So as a feature, a new command is implemented.
Command 1: gluster volume heal vn statistics heal-count
This command will get the number of entries present in
every brick of a volume. The output displays only entries
count.
Command 2: gluster volume heal vn statistics heal-count
replica 192.168.122.1:/home/user/brickname
Here, if we are concerned with just one replica,
providing any one brick of that replica will get
the number of entries to be healed for that replica only.
Example:
Replicate volume with replica count 2.
Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918
NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
entries, so the actual no. of entries to be healed is
1916.
[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop
Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful
Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available
Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916
Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status
The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.
geo-rep: Collecting status detail related data
Added persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced
Changes in the status information in socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: Ic6659335f18a3befcf9b8b3ca067883a2c889d03
BUG: 852147
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/4493
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Change-Id: Id4062799104e5831467ced65a43bfe377b6163f4
BUG: 852147
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4297
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Change-Id: Ib4c4794563a5a694fab16f17c642f788399462f6
BUG: 852147
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4295
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
PROBLEM:
When a transaction is already in progress, and the user tries to
execute another glusterd operation, the second operation fails as
glusterd fails to acquire lock. But to the user, a message like
"Operation failed" does not give ample information about why the
operation failed.
FIX:
Made glusterd_op_txn_begin use and initialise error string, which is
needed to capture failure in the "lock" phase.
Also made gd_sync_task_begin set error string appropriately when
locking fails.
In the process, I had to introduce error string in some glusterd_handle_*
functions. And because I introduced error string in these handlers, I
decided to also set them in places where these handlers could possibly
fail.
HOW I TESTED IT:
For want of a better idea, I "commented out" the call to
"glusterd_unlock", recompiled glusterd and ran two glusterd volume
operations, one after the other. The second operation fails with the
message "Another transaction is in progress. Please try again after
sometime." as expected.
The tests were performed on two volume ops: one of them
synctask'ized (volume start) and the other NOT (volume create).
Change-Id: Ia862972929872ae2f053707a544824d9cadc37be
BUG: 873549
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/4197
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Include the hostname of the node where geo-rep start was
initiated. This helps any consumers of the status
command identify, and possibly issue commands on, those
node(s).
Change-Id: I005083878a3a4794da3b7f3f7d2cc9d28f004e3f
BUG: 858218
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/4218
Reviewed-by: Csaba Henk <csaba@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
- Added volume-id validation to glusterd-syncop code.
- All daemons are restarted using synctasks in init().
- glusterd_brick_start has wait/nowait variants to support
volume commands using synctask framework and those that aren't.
Change-Id: Ieec26fe1ea7e5faac88cc7798d93e4cc2b399d34
BUG: 862834
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/3969
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
License message changed for server-side, dual license GPLv2 and LGPLv3+.
Change-Id: Ia9e53061b9d2df3b3ef3bc9778dceff77db46a09
BUG: 852318
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/3940
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
The license message is changed to
Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com>
This file is part of GlusterFS.
This file is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3 or
later), or the GNU General Public License, version 2 (GPLv2), in all
cases as published by the Free Software Foundation.
Change-Id: I07d2b63ed5fbbbd1884f1e74f2dd56013d15b0f4
BUG: 852318
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/3858
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
- Retained apparently redundant checks in the stage and commit phases of set
volume for the help options, for backward compatibility
Change-Id: Iaefe3805d6b5eeeced2e7e4870830edf3e61dc87
BUG: 844696
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.com/3761
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
|
- properly resolve shortened key names
- make sure user gets decent feedback
Change-Id: I94b75f34b29cb71fb1a2edf17c3f1bf841bb552a
Signed-off-by: Csaba Henk <csaba@redhat.com>
BUG: 826958
Reviewed-on: http://review.gluster.com/3500
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Change-Id: I0c0b500bcb0b183ae445800fd334cd838b8af0d3
BUG: 764890
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: http://review.gluster.com/3455
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
|
Fixes glusterd_op_build_payload() to,
1. take account of the status cmd type when building payload for "volume
status", to prevent "volume status all" from failing.
2. take account of volname being "help/help-xml" in volume set, to prevent
"volume set help/help-xml" from failing.
3. obtain volname using the key "master", to prevent "volume geo-replication"
commands from failing.
Also, fails the op and sets the correct op_errstr if the volume is not found
during glusterd_dict_set_volid(), to make sure cli displays a proper message.
Change-Id: I40ded15c50b54a82ee61bf6d6e9d07f571679c8c
BUG: 812801
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/3157
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
|
The major changes are,
* "volume status" now supports getting details of the self-heal daemon
processes for replica volumes. A new cli option "shd", similar to "nfs", has
been introduced for this. "detail", "fd" and "clients" status ops are not
supported for self-heal daemons.
* The default/normal output of "volume status" has been enhanced to contain
information about the nfs-server and self-heal daemon processes as well.
Some tweaks have been done to the cli output to show appropriate output.
Also, changes have been done to rebalance/remove-brick status, so that
hostnames are displayed instead of uuids.
Change-Id: I3972396dcf72d45e14837fa5f9c7d62410901df8
BUG: 803676
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/3016
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
|
Change-Id: I5424ebfadb5b2773ee6f7370cc2867a555aa48dd
BUG: 800352
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.com/2962
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Change-Id: Id128417219bdb7146253618a5f8f31ef35013894
BUG: 801322
Signed-off-by: shishir gowda <shishirng@gluster.com>
Reviewed-on: http://review.gluster.com/2942
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
|
Enables usage of volume monitoring operations "volume status", "volume top" and
"volume profile" for nfs servers. These operations can be performed on
nfs-servers by passing "nfs" as an option in cli. The output is similar to the
normal brick outputs for these commands.
The new syntaxes for the changed commands are as below,
#gluster volume profile <VOLNAME> {start|info|stop} [nfs]
#gluster volume top <VOLNAME> {[open|read|write|opendir|readdir [nfs]]
|[read-perf|write-perf [nfs|{bs <size> count <count>}]]}
[brick <brick>] [list-cnt <count>]
#gluster volume status [all | <VOLNAME> [nfs|<BRICK>]]
[detail|clients|mem|inode|fd|callpool]
Change-Id: Ia6eb50c60aecacf9b413d3ea993f4cdd90ec0e07
BUG: 795267
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/2820
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
|
Change-Id: Id92d3276e65a6c0fe61ab328b58b3954ae116c74
BUG: 763820
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.com/2775
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
|
* add-brick, stop-volume, remove-brick are the operations that are explicitly
'failed' when attempted while replace-brick is in progress.
* we attach the volume-id to the dst_brick volfile ensuring that the replace-brick
operation holds 'claim' on it.
Change-Id: If60b2af566ca940b2add600b473c99730e06ab47
BUG: 765470
Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-on: http://review.gluster.com/2740
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
|
Earlier, only DELETE_VOLUME had the volume name as its context, whereas
all other OPs used to have a dictionary.
Change-Id: I5bfcc458bff3295374eb4f0b0a31f6134745debd
BUG: 3158
Reviewed-on: http://review.gluster.com/718
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
|
Earlier we waited for brick disconnect or 5s, whichever was smaller, before
moving the op sm from the brick-op stage to the commit stage. This involved
a race where both the above-mentioned events could happen 'concurrently'
and result in a double free.
Change-Id: I8b1524afded84c20d55e29cfe2579ca872d2ac26
BUG: 3700
Reviewed-on: http://review.gluster.com/575
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
|
Allows resetting of single options using the 'volume reset' command.
The new syntax of volume reset is: 'volume reset [option] [force]'.
Giving "all" as the option, or not specifying an option, causes all options
to be reset.
Change-Id: Ib9e220f326adeb1be1a774737a0b12c910012cea
BUG: 2980
Reviewed-on: http://review.gluster.com/450
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
|
This cmd is used in the context of proactive self-heal for replicated
volumes. User invokes the following cmd when (s)he suspects that self-heal
needs to be done on a particular volume,
gluster volume heal <VOLNAME>.
Change-Id: I3954353b53488c28b70406e261808239b44997f3
BUG: 3602
Reviewed-on: http://review.gluster.com/454
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
|
If there are no bricks of a volume running 'local' to glusterd
where the 'profile info' command is issued, glusterd incorrectly
reports that all bricks of the volume are down.
Change-Id: Idd703c991f0bcf59b76b9ef8f4ad8cd71960a55b
BUG: 3553
Reviewed-on: http://review.gluster.com/430
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
|
This change contains,
- removal of the local cli lock used to serialize
cli ops to a glusterd.
- glusterd's state-machine can handle competing 'lockers' with
guaranteed progress.
- flush cluster lock on 'owner' disconnecting and as 'owner',
send unlock to all on first peer disconnect.
Change-Id: I25961436b0790b4196f2b3438b105c37279399ad
BUG: 3320
Reviewed-on: http://review.gluster.com/123
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
|
I.e. instead of writing out the fully expanded canonical URL like
ssh://root@192.168.3.4:gluster://127.0.0.1:bar
we just display
ssh://root@starship::bar
Change-Id: I2bd70650cbc9973d925f652bccb163d391e406c9
BUG: 2536
Reviewed-on: http://review.gluster.com/79
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushik BV <kaushikbv@gluster.com>
|
Change-Id: Iea835b9e448e736016da2e44e3c9bfff93f2fa78
BUG: 3439
Reviewed-on: http://review.gluster.com/259
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
|
Change-Id: I2d10f2be44f518f496427f257988f1858e888084
BUG: 3348
Reviewed-on: http://review.gluster.com/200
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
|
Change-Id: I3914467611e573cccee0d22df93920cf1b2eb79f
BUG: 3348
Reviewed-on: http://review.gluster.com/182
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
|
The master is referred to as a volume name rather than a URL scheme.
old syntax:
> volume geo-replication start :vol-foo /bar/boo
new syntax:
> volume geo-replication start vol-foo /bar/boo
Signed-off-by: Kaushik BV <kaushikbv@gluster.com>
Signed-off-by: Anand Avati <avati@gluster.com>
BUG: 2786 (Having to prepend geo-replication master vol with colon spoils the UI)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2786
|
gsync session is active.
Signed-off-by: Kaushik BV <kaushikbv@gluster.com>
Signed-off-by: Anand Avati <avati@gluster.com>
BUG: 2559 (provide two options in CLI for gluster volume gsync indexing <volname> <enable|disable>)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2559
|
Use GEOREP macro if you want to refer to the feature in code.
Signed-off-by: Csaba Henk <csaba@gluster.com>
Signed-off-by: Anand Avati <avati@gluster.com>
BUG: 2757 (refactory gsync/gsyncd/syncdaemon/whatever to geo-replication)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2757
|
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Signed-off-by: Anand Avati <avati@gluster.com>
BUG: 2761 (Restart gsyncd processes on glusterd restart)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2761