Problem
=======
When quota is enabled on 3.6, quota.conf carries quota conf version
v1.1. When this node is upgraded to 3.7, it keeps version v1.1 until a
quota enable/disable/set-limit operation is initiated. If no such
operation is run and this node tries to peer probe a node that is a
fresh install of 3.7 (which will have quota conf version v1.2), the
probe ends in the "Peer rejected" state. This patch fixes the issue.
Solution
========
When an upgrade happens from 3.6 to 3.7, quota.conf needs to be
modified as well: with 3.6 the version in quota.conf is v1.1, and from
3.7 it must become v1.2, because 3.7 introduces the inode-quota
feature. So, when an op-version bump-up happens, quota.conf needs to be
upgraded to quota conf version v1.2, and every 16-byte UUID entry needs
to be rewritten as a 17-byte entry as well.
Previously, quota.conf was upgraded when the cluster version was raised
to 3.7, but only when a quota enable/disable/set-limit was performed.
With this patch, the upgrade is also done during a cluster op-version
bump-up.
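For illustration, here is a minimal sketch of such a conversion,
assuming a textual header line followed by fixed-size binary records.
The header strings and the value of the appended type byte are
assumptions for this sketch, not the actual glusterd code.

#include <stdio.h>
#include <string.h>

#define OLD_HEADER "GlusterFS Quota conf | version: v1.1\n" /* assumed */
#define NEW_HEADER "GlusterFS Quota conf | version: v1.2\n" /* assumed */
#define TYPE_USAGE 1 /* assumed marker: entry describes a usage limit */

static int upgrade_quota_conf(const char *old_path, const char *new_path)
{
    char header[sizeof(OLD_HEADER)] = {0};
    unsigned char gfid[16];
    int ret = -1;
    FILE *in = fopen(old_path, "rb");
    FILE *out = fopen(new_path, "wb");

    if (!in || !out)
        goto out;
    /* only v1.1 files need the conversion */
    if (fread(header, 1, sizeof(OLD_HEADER) - 1, in) !=
            sizeof(OLD_HEADER) - 1 ||
        strcmp(header, OLD_HEADER) != 0)
        goto out;

    fputs(NEW_HEADER, out);
    while (fread(gfid, 1, sizeof(gfid), in) == sizeof(gfid)) {
        fwrite(gfid, 1, sizeof(gfid), out); /* the 16-byte UUID, as before */
        fputc(TYPE_USAGE, out);             /* plus the new 17th type byte */
    }
    ret = 0;
out:
    if (in)
        fclose(in);
    if (out)
        fclose(out);
    return ret;
}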
>Reviewed-on: http://review.gluster.org/15352
>Tested-by: Atin Mukherjee <amukherj@redhat.com>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: Idb5ba29d3e1ea0e45c85d87c952c75da9e0f99f0
BUG: 1392715
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/15790
Tested-by: sanoj-unnikrishnan <sunnikri@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
Backport of http://review.gluster.org/#/c/14018/
snap status --xml errors out if a brick is down and doesn't have a pid.
The snap status CLI already handles this scenario by displaying "N/A";
the same is now handled in the xml output.
snap status <snapname> --xml fails because the writer is not
initialised for it. GF_SNAP_STATUS_TYPE_ITER is now used instead of
GF_SNAP_STATUS_TYPE_SNAP when reporting the status of all snaps, to
differentiate between the two scenarios.
Added testcase volume-snapshot-xml.t to check the xml output of all
snapshot commands.
> Reviewed-on: http://review.gluster.org/14018
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Change-Id: I99563e8f3e84f1aaeabd865326bb825c44f5c745
BUG: 1369363
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/15290
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
back port of : http://review.gluster.org/#/c/12883/
For detach-tier, the validation was done using the string "detach-tier",
but the new commands use the string "tier". Comparing against the
string "tier" creates a problem, as both tier status and tier detach
contain the keyword "tier"; so tier detach and tier status were
separated, and strtok was used to prevent the condition from passing
when the volume name contains "tier" as a substring (only the second
word of the string is fetched and checked to see whether the feature
is tier).
Problem: the new detach-tier command doesn't throw warnings like
"not a tier volume" or "detach tier not started"; instead it prints
empty output.
Fix: during validation, the volume is checked to see whether it is a
tiered volume; if yes, it is checked whether detach-tier has been
started, and otherwise the respective warning is thrown.
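A minimal sketch of that second-word check follows; the exact layout of
the parsed string is an assumption here.

#include <stdio.h>
#include <string.h>

/* return 1 only when the *second* word is exactly "tier", so a volume
 * name that merely contains "tier" as a substring cannot match */
static int second_word_is_tier(const char *line)
{
    char buf[256];
    char *saveptr = NULL;

    snprintf(buf, sizeof(buf), "%s", line); /* strtok_r modifies its input */
    strtok_r(buf, " ", &saveptr);           /* skip the first word */
    char *second = strtok_r(NULL, " ", &saveptr);
    return second != NULL && strcmp(second, "tier") == 0;
}

int main(void)
{
    printf("%d\n", second_word_is_tier("volume tier detach")); /* 1 */
    printf("%d\n", second_word_is_tier("volume mytiervol"));   /* 0 */
    return 0;
}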
>Change-Id: I94246d53b18ab0e9406beaf459eaddb7c5b766c2
>BUG: 1288517
>Signed-off-by: hari gowtham <hgowtham@redhat.com>
>Reviewed-on: http://review.gluster.org/12883
>Tested-by: NetBSD Build System <jenkins@build.gluster.org>
>Tested-by: Gluster Build System <jenkins@build.gluster.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: I1ac3b6baaec644dbc2025085a7f17abd56ba169d
BUG: 1291970
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12976
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
This patch is backport of: http://review.gluster.org/10231
The CLI command for bitrot scrub status will be:
gluster volume bitrot <volname> scrub status
The above command will show the statistics of the bitrot scrubber. On
execution it will show some common scrubber tunable values of volume
<VOLNAME>, followed by the scrubber statistics of individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just infrastructure. The list of bad files, last scrub
time and error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
>> Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
>> BUG: 1207627
>> Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
>> Reviewed-on: http://review.gluster.org/10231
>> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>> Tested-by: NetBSD Build System <jenkins@build.gluster.org>
>> Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I45ed94e5e0e78a1e007c30eb0b252f74cf3c9187
BUG: 1283881
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12704
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
This is a backport of 12304.
Snaps of tiered volumes cannot handle files undergoing migration, so we
implement a helper mechanism to "pause" migration. Any files undergoing
migration are aborted, clean-up is done to remove the sticky bits and
the data at the destination, and migration is restarted after the snap
completes.
For testing, an internal switch is added. It is not exposed externally:
gluster volume set vol1 tier-pause [true|false]
> Change-Id: Ia85bbf89ac142e9b7e73fcbef98bb9da86097799
> BUG: 1267950
> Signed-off-by: Dan Lambright <dlambrig@redhat.com>
> Reviewed-on: http://review.gluster.org/12304
> Reviewed-by: N Balachandran <nbalacha@redhat.com>
> Tested-by: NetBSD Build System <jenkins@build.gluster.org>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Conflicts:
xlators/mgmt/glusterd/src/glusterd-messages.h
Change-Id: I5f039d8d38a4c915bd873969f336b96755a0b8f1
BUG: 1274101
Reviewed-on: http://review.gluster.org/12411
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
Backport of http://review.gluster.org/#/c/12027/
Problem: snapshot delete all command fails with --xml option
Fix: Provided xml support for delete all command
Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
BUG: 1258113
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/12027
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
(cherry picked from commit fd47635a4ffab621a2357c99cd1edd0482940bd5)
Reviewed-on: http://review.gluster.org/12042
Currently bitrot uses a 120-second wait for an object to be signed
after all fops on it are released. This signing wait time should be
tunable. The command for changing the signing wait time will be:
#gluster volume bitrot <VOLNAME> signing-time <waiting time value in seconds>
Change-Id: I89f3121564c1bbd0825f60aae6147413a2fbd798
BUG: 1231832
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/11105
(cherry picked from commit 554fa0c1315d0b4b78ba35a2d332d7ac0fd07d48)
Reviewed-on: http://review.gluster.org/11235
Tested-by: Gluster Build System <jenkins@build.gluster.com>
back port of http://review.gluster.org/10284
Fix for handling cli output for attach-tier and
detach-tier
>Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
>BUG: 1211562
>Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
>Reviewed-on: http://review.gluster.org/10284
>Reviewed-by: Dan Lambright <dlambrig@redhat.com>
>Tested-by: Gluster Build System <jenkins@build.gluster.com>
>Tested-by: NetBSD Build System
>Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Ic30c8f0a104d89bf98f5d0069937a34674ee3f2d
BUG: 1220052
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10713
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Back port of http://review.gluster.org/10108
These commands work in a manner analogous to rebalancing when removing a
brick. The existing migration daemon detects "detach start" and switches
to moving data off the hot tier. While in this state all lookups are
directed to the cold tier.
gluster v detach-tier <vol> start
gluster v detach-tier <vol> commit
The status and stop cli commands shall be submitted separately.
>Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
>BUG: 1205540
>Signed-off-by: Dan Lambright <dlambrig@redhat.com>
>Signed-off-by: root <root@localhost.localdomain>
>Signed-off-by: Dan Lambright <dlambrig@redhat.com>
>Reviewed-on: http://review.gluster.org/10108
>Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Change-Id: I212d748d077fb5870ee84b316c653acbafbea3f7
BUG: 1220047
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10708
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Inode quota is a new feature implemented in glusterfs-3.7.
If quota is enabled on an older version and the cluster is upgraded
to the new version, we can hit a setxattr spike during the self-heal
of inode quotas. So, when quota is enabled, inode-quotas are turned
off with an xlator option.
With this patch, we still account for inode quotas, but only when a
write operation is performed on a particular file. The user will be
able to query inode quotas once the inode-quota xlator option is
enabled.
Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a
BUG: 1218243
Signed-off-by: vmallika <vmallika@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/10152
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/10621
The replace-brick operation with data migration support has been
deprecated in gluster.
With this fix, the replace-brick command supports only one form:
gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1218602
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10577
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaushal M <kaushal@redhat.com>
The command gluster volume status <VOLNAME> should show the status of
the bitrot and scrubber daemons along with their pid information.
Besides displaying bitrot and scrubber daemon information in the
gluster volume status command, there should be commands to show their
individual status separately.
The commands to show the individual status of the bitrot and scrubber
daemons are the following.
Command to show only the bitd daemon information:
gluster volume status <VOLNAME> bitd
Command to show only the scrubber daemon information:
gluster volume status <VOLNAME> scrub
Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
BUG: 1218123
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10175
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
(cherry picked from commit da1416051d19d612d131acfde8589bc8658979b5)
Reviewed-on: http://review.gluster.org/10519
Tested-by: Gluster Build System <jenkins@build.gluster.com>
This is a backport of http://review.gluster.org/#/c/10261/
> In a heterogeneous cluster with op_version less than 3.7, inode quotas
> will be accounted only on those bricks which have glusterfs version 3.7,
> and this is not available in the older version.
> This will result in incorrect values being displayed when the user
> queries the inode count from the CLI.
>
> This patch will display an error when inode-quota commands
> are executed with a cluster version less than 3.7.
>
> Change-Id: Ia0e6d5635d1d8e7b2e2cfc3daa7b7f9e314a263a
> BUG: 1212253
> Signed-off-by: vmallika <vmallika@redhat.com>
> Reviewed-on: http://review.gluster.org/10261
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I1ddd605e5b87a248aa85f0eab14c404895751083
BUG: 1215907
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/10415
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
xml parsing of voltype should be in line with the cli
Backport of http://review.gluster.org/10271
Change-Id: I41ddddac00d07f03b56a041e1c3f5a132fbd7393
BUG: 1215517
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10271
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
(cherry picked from commit 4c3724f195240e40994b71add255f85ee1b025fb)
Reviewed-on: http://review.gluster.org/10396
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
A tiered volume is a normal volume with some number of new bricks
representing "hot" storage. The "hot" bricks can be attached or
detached dynamically to a normal volume. When this happens, a new graph
is constructed. The root of the new graph is an instance of the tier
translator. One subvolume of the tier translator leads to the old volume,
and another leads to the new hot bricks.
attach-tier <VOLNAME> [<replica> <COUNT>] <NEW-BRICK> ... [force]
volume detach-tier <VOLNAME> [replica <COUNT>] <BRICK>
... <start|stop|status|commit|force>
gluster volume rebalance <volume> tier start
gluster volume rebalance <volume> tier stop
gluster volume rebalance <volume> tier status
The "tier start" CLI command starts a server side daemon. The daemon
initiates file level migration based on caching policies. The daemon's
status can be monitored and stopped.
Note development on the "tier status" command is incomplete. It will be
added in a subsequent patch.
When the "hot" storage is detached, the tier translator is removed
from the graph and the tiered volume reverts to its original state as
described in the volume's info file.
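For illustration, the resulting graph could be expressed in a volfile
sketch along these lines; the translator and subvolume names are
illustrative assumptions, not output of the actual volgen code.

volume vol1-cold-dht
    type cluster/distribute
    subvolumes vol1-client-0 vol1-client-1
end-volume

volume vol1-hot-dht
    type cluster/distribute
    subvolumes vol1-client-2 vol1-client-3
end-volume

# the tier translator is the root of the graph: one subvolume leads to
# the old (cold) volume and the other to the new hot bricks
volume vol1-tier-dht
    type cluster/tier
    subvolumes vol1-cold-dht vol1-hot-dht
end-volume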
For more background and design see the feature page [1].
[1]
http://www.gluster.org/community/documentation/index.php/Features/data-classification
Change-Id: Ic8042ce37327b850b9e199236e5be3dae95d2472
BUG: 1194753
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/9753
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
CLI command for bitrot features.
volume bitrot <volname> enable|disable
Above command will enable/disable bitrot feature for particular volume.
BUG: 1170075
Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Anand nekkunti <anekkunt@redhat.com>
Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com>
Reviewed-on: http://review.gluster.org/9866
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
==========================================================================
Inode quota
==========================================================================
= Currently, the only way to retrieve the number of files/objects in a =
= directory or volume is to do a crawl of the entire directory/volume. =
= This is expensive and is not scalable. =
= =
= The proposed mechanism will provide an easier alternative to determine =
= the count of files/objects in a directory or volume. =
= =
= The new mechanism proposes to store count of objects/files as part of =
= an extended attribute of a directory. Each directory's extended =
= attribute value will indicate the number of files/objects present =
= in a tree with the directory being considered as the root of the tree. =
= =
= The count value can be accessed by performing a getxattr(). =
= Cluster translators like afr, dht and stripe will perform aggregation =
= of count values from various bricks when getxattr() happens on the key =
= associated with file/object count. =
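Since the count is exposed through getxattr(), reading it could look
like the sketch below; the xattr key name here is a placeholder
assumption (the quota translator defines the real key).

#include <stdio.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    /* placeholder key for illustration only */
    const char *key = "trusted.glusterfs.quota.object-count";
    char value[256] = {0};

    ssize_t n = getxattr(path, key, value, sizeof(value) - 1);
    if (n < 0) {
        perror("getxattr");
        return 1;
    }
    printf("got %zd bytes for %s on %s\n", n, key, path);
    return 0;
}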
A new interface is introduced:
------------------------------
limit-objects : limit the number of inodes at directory level
list-objects : list the directories where the limit is set
remove-objects : remove the limit from the directory
==========================================================================
CLI COMMAND:
gluster volume quota <volname> limit-objects <path> <number> [<percent>]
* <number> is the hard limit on the number of objects for path "<path>".
If the hard limit is exceeded, creation of files/directories is no
longer permitted.
* <percent> is the soft limit on the number of objects created for path
"<path>". If the soft limit is exceeded, a warning is issued for each
creation.
CLI COMMAND:
gluster volume quota <volname> remove-objects [path]
==========================================================================
CLI COMMAND:
gluster volume quota <volname> list-objects [path] ...
Sample output:
------------------
Path   Hard-limit   Soft-limit   Used   Available   Soft-limit   Hard-limit
                                                    exceeded?    exceeded?
----------------------------------------------------------------------------
/dir   10           80%          10     0           Yes          Yes
==========================================================================
[root@snapshot-28 dir]# ls
a b file11 file12 file13 file14 file15 file16 file17
[root@snapshot-28 dir]# touch a1
touch: cannot touch `a1': Disk quota exceeded
* Nine files are created in directory "dir", and the directory is
included in the count too. Hence the limit "10" is reached and further
file creation fails.
==========================================================================
Note: We have also done some re-factoring in the cli for volume name
validation. A new function, cli_validate_volname, is created.
==========================================================================
Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b
BUG: 1190108
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/9769
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
snapshot clone will allow us to take a snapshot of a snapshot.
The newly created clone volume will be a regular volume with read/write
permissions.
CLI command
snapshot clone <clonename> <snapname>
Change-Id: Icadb993fa42fff787a330f8f49452da54e9db7de
BUG: 1199894
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9750
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Adds the ability to delete all the snapshots present in a system, as
well as those belonging to a particular volume.
Problem:
With the current design we can only delete a single snapshot, and the
deletion of a volume which contains snapshots is not allowed. Because
of that, the user might be forced to delete all the snapshots manually
before being allowed to delete a volume.
Solution:
Following is the interface with which the user can delete all the
snapshots of a system, or those belonging to a particular volume.
Syntax : gluster snapshot delete all
* Deletes all the snapshots present in the system.
Syntax : gluster snapshot delete volume <volname>
* Deletes all the snapshots present in the specified volume.
========================================================================
Sample Output:
Case 1 : Deleting a single snapshot.
[root@snapshot-24 glusterfs]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully
-----------------------------------------------------------------
Case 2 : Deleting all the snapshots in a Volume.
[root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
Volume (vol1) contains 9 snapshot(s).
Do you still want to continue and delete them? (y/n) y
snapshot delete: snap2: snap removed successfully
snapshot delete: snap3: snap removed successfully
snapshot delete: snap4: snap removed successfully
snapshot delete: snap5: snap removed successfully
.
.
.
-----------------------------------------------------------------
Case 3 : Deleting all the snapshots in a system.
[root@snapshot-24 glusterfs]# gluster snapshot delete all
System contains 4 snapshot(s).
Do you still want to continue and delete them? (y/n) y
snapshot delete: snap7: snap removed successfully
snapshot delete: snap8: snap removed successfully
snapshot delete: snap9: snap removed successfully
snapshot delete: snap10: snap removed successfully
========================================================================
Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
BUG: 1112613
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8162
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Two new options have been added to the 'create' command of the cli
interface:
disperse [<count>] redundancy <count>
Both are optional. A dispersed volume is created by specifying at
least one of them. If 'disperse' is missing, or it is present but
'<count>' is not, the number of bricks enumerated in the command
line is taken as the disperse count.
If 'redundancy' is missing, the lowest optimal value is assumed. A
configuration is considered optimal (for most workloads) when the
disperse count - redundancy count is a power of 2. If the resulting
redundancy is 1, the volume is created normally, but if it's greater
than 1, a warning is shown to the user and he/she must answer yes/no
to continue volume creation. If there isn't any optimal value for
the given number of bricks, a warning is also shown and, if the user
accepts, a redundancy of 1 is used.
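To make the "lowest optimal value" rule concrete, here is a small
standalone sketch. The 2 * redundancy < disperse bound used below is an
assumption about the valid redundancy range; this is not the CLI's
actual code.

#include <stdio.h>

static int is_power_of_two(int n)
{
    return n > 0 && (n & (n - 1)) == 0;
}

int main(void)
{
    for (int disperse = 3; disperse <= 12; disperse++) {
        int redundancy = 0;
        /* lowest redundancy leaving a power-of-two number of data
           bricks, assuming 2 * redundancy must stay below disperse */
        for (int r = 1; 2 * r < disperse; r++) {
            if (is_power_of_two(disperse - r)) {
                redundancy = r;
                break;
            }
        }
        if (redundancy == 0)
            redundancy = 1; /* no optimal value: warn and fall back */
        printf("disperse=%2d -> redundancy=%d (%soptimal)\n", disperse,
               redundancy,
               is_power_of_two(disperse - redundancy) ? "" : "not ");
    }
    return 0;
}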
If 'redundancy' is specified and the resulting volume is not optimal,
another warning is shown to the user.
A distributed-disperse volume can be created using a number of bricks
multiple of the disperse count.
Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4
BUG: 1118629
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/7782
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Made changes to save the port used by snapd in the info file for the
volume, i.e. <glusterd-working-directory>/vols/<volname>/info
This is how the gluster volume status output looks for a volume with
the uss feature enabled.
[root@tatooine ~]# gluster volume status vol
Status of volume: vol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick tatooine:/export1/vol 49155 Y 5041
Snapshot Daemon on localhost 49156 Y 5080
NFS Server on localhost 2049 Y 5087
Task Status of Volume vol
------------------------------------------------------------------------------
There are no active volume tasks
Change-Id: I8f3e5d7d764a728497c2a5279a07486317bd7c6d
BUG: 1111041
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8114
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
This patch introduces pause and resume cli commands
for geo-replication.
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Change-Id: I4f5e58e9175fe85077d56088473252391fb57de7
BUG: 1093602
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/7643
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Previously, snapshots by default were activated on creation and there was
no option to activate or deactivate them on demand.
This will allow the user to activate and deactivate on demand.
The CLI goes as follows
1) Activate the snap using a command "gluster snapshot activate <snapname> [force]"
2) Deactivate the snap using a command "gluster snapshot deactivate <snapname>"
Note: Even now the snapshot will be activated during creation.
Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f
BUG: 1061685
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/7476
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Avoid modifying autogenerated files and keeping them in
repository - autogenerate them on demand from ".x" files
Change-Id: I2cdb1fe9b99768ceb80a8cb100fa00bd1d8fe2c6
BUG: 1090807
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7526
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
"volume profile info" automatically clears incremental stats. There
isn't a command to:
- fetch stats without clearing incremental stats and
- clear cumulative and incremental stats
This change introduces two arguments (i.e. peek and clear). 'clear'
will wipe both incremental and cumulative stats. 'peek' fetches stats
without wiping incremental stats.
'volume profile info peek' - fetches incremental and cumulative stats
without wiping incremental stats
'volume profile info incremental peek' - fetches incremental stats
without wiping incremental stats
'volume profile info clear' - clears both incremental and cumulative
stats
Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
BUG: 1047416
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6620
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
BUG: 1031887
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6337
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Following are the cli commands that are new/re-worked:
======================================================
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
for a given volume are stored in
/var/lib/glusterd/vols/<volname>/quota.conf file in binary format,
and whose cksum and version is stored in
/var/lib/glusterd/vols/<volname>/quota.cksum.
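As a rough sketch, setting a limit as an extended attribute from the
aux mount could look like the following; the xattr key name and the
16-byte big-endian {hard-limit, soft-limit} value layout are
assumptions for illustration.

#include <stdint.h>
#include <string.h>
#include <sys/xattr.h>

/* serialize a 64-bit value in big-endian (network) byte order */
static void put_be64(unsigned char *buf, uint64_t v)
{
    for (int i = 0; i < 8; i++)
        buf[i] = (unsigned char)(v >> (56 - 8 * i));
}

int set_quota_limit(const char *dir_on_aux_mount,
                    uint64_t hard_limit_bytes, int64_t soft_limit_pct)
{
    unsigned char value[16];

    put_be64(value, hard_limit_bytes);
    put_be64(value + 8, (uint64_t)soft_limit_pct);
    /* assumed key name for the quota limit xattr */
    return setxattr(dir_on_aux_mount, "trusted.glusterfs.quota.limit-set",
                    value, sizeof(value), 0);
}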
Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.
The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.
Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.
Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.
Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Provides detailed status info in the following format
MASTER <master-vol> SLAVE <slave-vol>
NODE HEALTH UPTIME FILES SYNCD FILES PENDING BYTES PENDING DELETES PENDING
-----------------------------------------------------------------------------------
This patch introduces a "status detail" command to show crawl-related
information in CLI. These values are "pulled" from gsyncd when
"status detail" is executed.
Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
BUG: 990420
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5590
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status
The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.
geo-rep: Collecting status detail related data
Added persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced
Changes in the status information in socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Ia8e1af082078f2f791708ba4faa4992bf291dd6e
BUG: 961339
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/5023
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch adds an op_errstr member to the gf1_cli_deprobe_rsp structure to
enable return of an errstr to cli.
Change-Id: I0cbb6805b05d7cc0603c13d1c1550bb2bd062a7a
BUG: 816840
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/3307
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Presently glusterd only returns an errno to cli for peer probe command. This
patch allows glusterd to return an errstr as well to cli. An op_errstr member
has been added to gf1_cli_probe_rsp and gd1_mgmt_probe_rsp structs to allow
this.
In case of an error, cli will display the errstr if it was set. If
errstr is not set, cli will display the error message based on errno.
Also, to allow for return of errstr in cases such as handshake failure, an
errstr member has been added to the glusterd_peerctx_t struct.
Change-Id: Iece2b44a7181555e960d9fe4517ec6cda4cdb385
BUG: 816840
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/3262
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Remove-brick stop now invokes rebalance stop. This leads
to a graceful stop of decommissioning.
The volfile is also updated (removal of decommission)
Change-Id: I5a8f725c0f54439b810ce32d988c21c02229c703
BUG: 811513
Signed-off-by: shishir gowda <shishirng@gluster.com>
Reviewed-on: http://review.gluster.com/3126
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Decommissioning through rebalance has no pause option.
Change-Id: I90f165cdb2eccfaefc99365ae4b48d81320fb753
BUG: 811459
Signed-off-by: shishir gowda <shishirng@gluster.com>
Reviewed-on: http://review.gluster.com/3123
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
The major changes are,
* "volume status" now supports getting details of the self-heal daemon processes
for replica volumes. A new cli option "shd", similar to "nfs", has been
introduced for this. "detail", "fd" and "clients" status ops are not supported
for self-heal daemons.
* The default/normal output of "volume status" has been enhanced to contain
information about nfs-server and self-heal daemon processes as well. Some tweaks
have been done to the cli output to show appropriate output.
Also, changes have been done to rebalance/remove-brick status, so that hostnames
are displayed instead of uuids.
Change-Id: I3972396dcf72d45e14837fa5f9c7d62410901df8
BUG: 803676
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/3016
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Enables usage of volume monitoring operations "volume status", "volume top" and
"volume profile" for nfs servers. These operations can be performed on
nfs-servers by passing "nfs" as an option in cli. The output is similar to the
normal brick outputs for these commands.
The new syntaxes for the changed commands are as below,
#gluster volume profile <VOLNAME> {start|info|stop} [nfs]
#gluster volume top <VOLNAME> {[open|read|write|opendir|readdir [nfs]]
|[read-perf|write-perf [nfs|{bs <size> count <count>}]]}
[brick <brick>] [list-cnt <count>]
#gluster volume status [all | <VOLNAME> [nfs|<BRICK>]]
[detail|clients|mem|inode|fd|callpool]
Change-Id: Ia6eb50c60aecacf9b413d3ea993f4cdd90ec0e07
BUG: 795267
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/2820
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Rebalance will not use any maintenance clients; they are replaced by
syncops, with the volfile. Brickop (communication between the
glusterd <-> glusterfs processes) is used for the status and stop
commands.
Depth-first traversal of directories is maintained, but data is
migrated as and when encountered:
fix-layout (dir)
do
    complete migrate-data of dir
    fix-layout (subdir)
done
Rebalance state is saved in the vol file, for restartability.
A disconnect event and the pidfile state determine the defrag status.
Signed-off-by: shishirng <shishirng@gluster.com>
Change-Id: Iec6c80c84bbb2142d840242c28db3d5f5be94d01
BUG: 763844
Reviewed-on: http://review.gluster.com/2540
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
* The method of getting the mount details of a brick has been
changed from directly reading /etc/mtab to using
libc's <mntent.h>, providing a fairly portable
version independent of the different Linux distributions
(see the sketch after this list). It is only supported on Linux,
though.
* The wrong fs type (rootfs for /) reported on Fedora-based
distributions has been fixed.
* Allows options (detail, mem, fd, et al.) on "all" volumes.
* Use of fnmatch's GNU extension flag,
FNM_LEADING_DIR, is restricted to Linux hosts only.
On non-Linux hosts, the partial match functionality
is absent.
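A minimal sketch of the <mntent.h> approach from the first bullet,
assuming /proc/mounts as the mount table and an exact mount-point match
(real code would look for the longest matching prefix of a brick path):

#include <mntent.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/";
    FILE *fp = setmntent("/proc/mounts", "r");
    if (!fp) {
        perror("setmntent");
        return 1;
    }

    struct mntent *ent;
    while ((ent = getmntent(fp)) != NULL) {
        /* report the entry whose mount point matches the given path */
        if (strcmp(ent->mnt_dir, path) == 0)
            printf("%s on %s type %s\n", ent->mnt_fsname, ent->mnt_dir,
                   ent->mnt_type);
    }
    endmntent(fp);
    return 0;
}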
Change-Id: I102ce808c192ef635c2536a2167101be0aa0fc50
BUG: 786367
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.com/2705
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
This patch enhances and extends the "volume status" command with information
obtained from the statedump of the bricks of volumes.
Adds new status types : clients, inode, fd, mem, callpool
The new syntax of "volume status" is,
#gluster volume status [all|{<volname> [<brickname>]
[misc-details|clients|inode|fd|mem|callpool]}]
Change-Id: I8d019718465bbc3de727653a839de7238f45da5c
BUG: 765495
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/2637
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
* Support "gluster volume status (all)" option to display all
volumes' status.
* On option "detail" appended to "gluster volume status *",
amount of storage free, total storage, and backend filesystem
details like inode size, inode count, free inodes, fs type,
device name of each brick is displayed.
* One can also obtain [detailed]status of only one brick.
* Format of the enhanced volume status command is:
"gluster volume status [all|<vol>] [<brick>] [detail]"
* Some generic functions have been added to common-utils:
    skipword
    get_nth_word
  These functions enable parsing and fetching of words in a sentence
  (see the sketch after this list).
* glusterd_get_brick_root (in glusterd) has also been added.
These are self-explanatory.
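An illustrative reimplementation of what such helpers could look like
(not the actual common-utils code):

#define _GNU_SOURCE /* for strndup */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* advance past the current word and the whitespace that follows it */
static const char *skipword(const char *str)
{
    while (*str && !isspace((unsigned char)*str))
        str++;
    while (*str && isspace((unsigned char)*str))
        str++;
    return str;
}

/* return a copy of the nth (1-based) word of the sentence */
static char *get_nth_word(const char *str, int n)
{
    while (--n > 0)
        str = skipword(str);
    return strndup(str, strcspn(str, " \t"));
}

int main(void)
{
    char *word = get_nth_word("gluster volume status all detail", 3);
    printf("%s\n", word); /* prints "status" */
    free(word);
    return 0;
}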
Change-Id: I6f40c1e19810f8504cd3b1786207364053d82e28
BUG: 765464
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.com/777
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
Change-Id: Ibda7e9d7425ecea8c7c673b42bc9fd3489a3a042
BUG: 3158
Reviewed-on: http://review.gluster.com/726
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
By using only one xdr struct for requests and one xdr struct for
responses, we will be able to scale better and also be able to parse
the output better.
For requests use:
gf1_cli_req - contains dict
For responses use:
gf1_cli_rsp - contains op_ret, op_errno, op_errstr, dict
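In .x (rpcgen) terms, the pair described above would look roughly like
this sketch; the exact field order is an assumption.

struct gf1_cli_req {
    opaque dict<>;                 /* serialized request dictionary */
};

struct gf1_cli_rsp {
    int    op_ret;
    int    op_errno;
    string op_errstr<>;
    opaque dict<>;                 /* serialized response dictionary */
};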
Change-Id: I94b034e1d8fa82dfd0cf96e7602d4039bc43fef3
BUG: 3720
Reviewed-on: http://review.gluster.com/662
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
Rotating geo-replication master/monitor log files from cli.
On invocation, the log file for a given master-slave session
is backed up with the current timestamp suffixed to the file
name, and a signal is sent to gsyncd to start logging to a new
log file.
Sample commands:
* Rotate log file for this <master>:<slave> session:
gluster volume geo-replication <master> <slave> log-rotate
* Rotate log files for all session for master volume <master>
gluster volume geo-replication <master> log-rotate
* Rotate log files for all sessions:
gluster volume geo-replication log-rotate
Change-Id: I75f641b4e082a04d5373c18583ca4a1d9651d27a
BUG: 3519
Reviewed-on: http://review.gluster.com/529
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Csaba Henk <csaba@gluster.com>
Adds a 'force' option to 'peer detach' to forcefully detach a peer from a
cluster, even when the cluster contains volumes with bricks on the peer.
Change-Id: I134df51c16a07345c8869b318141d427b572eba5
BUG: 3549
Reviewed-on: http://review.gluster.com/429
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Changes:
1. Add a new 'volume statedump' command that performs statedumps of
all the bricks in the volume and saves them in a specified location.
2. Add new server option 'server.statedump-path'.
3. Remove multiple function definitions in glusterd.h
Statedump Information:
The 'volume statedump' command performs statedumps on all the bricks in
a given volume. The syntax of the command is,
gluster volume statedump <VOLNAME> [type]......
Types include,
* all
* mem
* iobuf
* callpool
* priv
* fd
* inode
Defaults to 'all' when no type is specified.
The statedump files are created by default in /tmp directory of the
server on which the bricks are present.
This path can be changed by setting the 'server.statedump-path' option.
The statedump files will be named as,
<brick-name>.<pid of brick process>.dump
Change-Id: I01c0e1a8aad490da818e086d89f292bd2ed06fd4
BUG: 1964
Reviewed-on: http://review.gluster.com/321
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
This cmd is used in the context of proactive self-heal for replicated
volumes. User invokes the following cmd when (s)he suspects that self-heal
needs to be done on a particular volume,
gluster volume heal <VOLNAME>.
Change-Id: I3954353b53488c28b70406e261808239b44997f3
BUG: 3602
Reviewed-on: http://review.gluster.com/454
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Change-Id: I8a88d2b9228c3614ee7cbaf48782a419e6aee0f6
BUG: 3482
Reported-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-on: http://review.gluster.com/432
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>