| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
If one of the paths given to _is_prefix is 0-length, then it is not a
prefix of the other. Hence, _is_prefix should return false.
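A minimal sketch of the check in plain C types (the real helper in glusterd uses gluster's own types and also verifies the path-component boundary, which is omitted here):

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical, simplified shape of _is_prefix: an empty path is
     * never reported as a prefix of the other. */
    static bool
    _is_prefix(const char *str1, const char *str2)
    {
            size_t len1 = strlen(str1);
            size_t len2 = strlen(str2);
            size_t small = len1 < len2 ? len1 : len2;

            if (small == 0)
                    return false;   /* the fix: 0-length => not a prefix */

            return strncmp(str1, str2, small) == 0;
    }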
Change-Id: I54aa577a64a58940ec91872d0d74dc19cff9106d
fixes: bz#1599785
Signed-off-by: Kaushal M <kaushal@redhat.com>
|
|
|
|
|
|
|
|
|
| |
Instead, rely on programs to be in PATH, as gluster already
does in many places across its code base.
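As a rough illustration (the program name here is only an example, not necessarily one touched by this patch), the exec*p family resolves a bare command name via PATH, so no hardcoded /sbin or /usr/sbin prefix is needed:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* execvp() searches PATH for "lvs"; no absolute path needed */
            char *argv[] = { "lvs", "--noheadings", NULL };
            execvp(argv[0], argv);
            perror("execvp");   /* reached only if the exec failed */
            return 1;
    }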
Change-Id: Id21152fe42f5b67205d8f1571b0656c4d5f74246
BUG: 1450546
Signed-off-by: Niklas Hambuechen <mail@nh2.me>
|
|
|
|
|
|
|
|
|
|
|
| |
The return value of glusterd_get_local_brickpaths is unused, as it is
reinitialized outside the if block, so add a goto statement. Also
change the if condition to check the failure case, where the return
value is -1 and path_list is NULL.
Change-Id: I6b47d7751263f704bd69a6452a7e71bfcf226d49
updates: bz#789278
Signed-off-by: Varsha Rao <varao@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The gluster-block project needs a dependency check to see if all the bricks
are online before bringing up the relevant gluster-block services. While
the patch https://review.gluster.org/#/c/19785/ attempts to add such a
script, a brick should be marked as online only when the pmap_signin is
completed.
While this is perfectly fine without brick multiplexing, with brick
multiplexing this patch still doesn't eliminate the race completely, as
the attach_req call is asynchronous and glusterd immediately marks the
port as registered.
Change-Id: I81db54b88f7315e1b24e0234beebe00de6429f9d
Fixes: bz#1563273
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
glusterd maintains a boolean flag 'port_registered' which is used to determine
if a brick has completed its portmap sign-in process. This flag is (re)set on
pmap_signin and pmap_signout events. With brick multiplexing, this flag
identifies whether the very first brick with which the process was spawned
has completed its sign-in. However, on a glusterd restart, when a brick is
already identified as running, glusterd does a pmap_registry_bind to ensure
its portmap table is updated, but this flag isn't set. That is fine without
brick multiplexing, but it causes an issue if the very first brick of the
process is replaced, as the subsequent brick attach will then fail. One way
to reproduce this is to create and start a volume, remove the first brick,
and then add-brick a new one. The add-brick operation will take a very long
time, and afterwards volume status will show the new brick as down while the
remaining bricks show as up.
Solution is to set brickinfo->port_registered to true for all the
running bricks when brick multiplexing is enabled.
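A minimal sketch of that solution, with an illustrative struct standing in for glusterd's real brickinfo type:

    #include <stdbool.h>

    struct brickinfo {
            bool started;           /* brick already identified as running */
            bool port_registered;   /* portmap sign-in completed */
    };

    /* On glusterd restart: a running brick must also count as signed in
     * when brick multiplexing is enabled, or later attaches fail. */
    static void
    mark_running_brick(struct brickinfo *b, bool brick_mux_enabled)
    {
            if (brick_mux_enabled && b->started)
                    b->port_registered = true;
    }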
Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
Fixes: bz#1560957
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b.
This commit was causing multiple tests to time out when brick
multiplexing is enabled. With further debugging, it was found that even
though the volume stop transaction was converted into mgmt_v3 to allow
the remote nodes to follow the synctask framework while processing the
command, there are other callers of glusterd_brick_stop () which are not
synctask based.
Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693
updates: bz#1545048
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: There's a race between the last glusterfs_handle_terminate()
response sent to glusterd and the kill that happens immediately if the
terminated brick is the last brick.
Solution: When it is the last brick for the brick process, instead of glusterfsd
killing itself, glusterd will kill the process in the brick multiplexing case.
The gf_attach utility is also changed accordingly.
Change-Id: I386c19ca592536daa71294a13d9fc89a26d7e8c0
fixes: bz#1545048
BUG: 1545048
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. If a replica volume created on glusterfs-3.8 was upgraded to
glusterfs-3.12, `gluster vol get volname client-io-threads` displayed
'on' even though it wasn't enabled, and the xlator wasn't loaded on
the client graph. This was due to removing certain checks in
glusterd_get_default_val_for_volopt as a part of commit
47604fad4c2a3951077e41e0c007ceb979bb2c24. Fix it.
2. Also, as a part of op-version bump-up, client-io-threads was being
loaded on the clients during volfile regeneration. Prevent it.
3. AFR assumes quorum-type to be auto in newly created replica 3 (odd
replica count in general) volumes, but `gluster vol get quorum-type`
displays 'none'. Fix it.
Change-Id: I19e586361ed1065c70fb378533d3b4dac1095df9
BUG: 1545056
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
|
|
|
|
|
|
|
|
|
| |
If the above is not done, bricks created with different IP/hostname will
not be compatible with brick multiplexing.
Change-Id: I508eb59b0632df4b48466cca411c7ec6cc6bd577
BUG: 1547068
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
With brick multiplexing, to attach a brick to an existing brick process
the prerequisite is that a compatible brick has finished its
initialization and portmap sign-in, and hence the thread might have to go
to sleep and context-switch the synctask to allow the brick process to
communicate with glusterd. In the normal code path this works fine, as
glusterd_restart_bricks () is launched through a separate synctask.
When there's a mismatch of a volume while glusterd restarts,
glusterd_import_friend_volume is invoked, and it then tries to call
glusterd_start_bricks () from the main thread, which may eventually land
in the same situation. Since this is not done through a separate
synctask, the first brick will never get its turn to finish all of its
handshaking, and as a consequence all the bricks will fail to get
attached to it.
Solution: Execute the volume import and glusterd's restart of bricks in a
separate synctask. Importing snapshots also had to be done through a
synctask, as the snapshot import depends on the parent volume being
available.
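A sketch of the dispatch, assuming libglusterfs's synctask_new() API; the worker symbol and opaque argument names are illustrative, and the surrounding function is omitted:

    /* Run the friend-volume import in its own synctask so it can yield
     * while bricks complete their handshake. */
    ret = synctask_new(this->ctx->env,
                       glusterd_import_friend_volumes_synctask,
                       NULL,        /* no completion callback */
                       NULL,        /* no frame */
                       peer_data);  /* opaque: dict from the friend update */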
Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
BUG: 1540607
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
| |
When a version mismatch was detected for one of the volumes, glusterd
ended up updating all the volumes, which is overkill.
Change-Id: I6df792db391ce3a1697cfa9260f7dbc3f59aa62d
BUG: 1539510
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
| |
Change-Id: I09f348ed7ae6cd481f8c4d8b4f65f2f2f6aad84e
BUG: 1537364
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
First of all, this patch reverts commit 635c1c3, as it was causing a
regression with bricks not coming up on time when a node is rebooted.
This patch tries to fix the problem in a different way, by just trying to
connect to an existing running brick when the quorum status is
NOT_APPLICABLE_QUORUM.
Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
BUG: 1509845
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: after brick reset/replace, snapshot creation fails.
Solution: During brick reset/replace, when we validate and aggregate
dictionary data from another node, the 'mount_dir' value,
which is critical for snapshot creation, was being rewritten
to NULL.
Change-Id: Iabefbfcef7d8ac4cbd2a241e821c0e51492c093e
BUG: 1512451
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
|
|
|
|
|
|
|
|
|
| |
rchecksum uses MD5, which is not FIPS compliant. Hence
use SHA256 for the same.
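A minimal sketch of a FIPS-allowed digest over a data block, using OpenSSL's one-shot SHA256() (the actual rchecksum wiring is not shown):

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char *data = "block contents";
            unsigned char md[SHA256_DIGEST_LENGTH];
            int i;

            SHA256((const unsigned char *)data, strlen(data), md);
            for (i = 0; i < SHA256_DIGEST_LENGTH; i++)
                    printf("%02x", md[i]);
            printf("\n");
            return 0;
    }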
Updates: #230
Change-Id: I7fad016fcc2a9900395d0da919cf5ba996ec5278
Signed-off-by: Kotresh HR <khiremat@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
md5sum is not FIPS compliant. Use xxhash64 instead of
md5sum for socket file name generation in glusterd and
changelog to enable FIPS support.
NOTE: MD5 is a 128-bit hash; the xxhash variant used is 64-bit.
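A minimal sketch of the idea using the xxhash library's one-shot API (the exact name layout glusterd derives is not shown; paths here are examples):

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>
    #include <xxhash.h>   /* libxxhash */

    int main(void)
    {
            const char *brickpath = "/bricks/b1";   /* example input */
            char sockpath[128];
            uint64_t h = XXH64(brickpath, strlen(brickpath), 0);

            snprintf(sockpath, sizeof(sockpath),
                     "/var/run/gluster/%016" PRIx64 ".socket", h);
            printf("%s\n", sockpath);
            return 0;
    }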
Updates: #230
Change-Id: I1bf2ea05905b9151cd29fa951f903685ab0dc84c
Signed-off-by: Kotresh HR <khiremat@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
The size of the description of each option in gluster has become
greater than the hardcoded maximum size, causing a buffer overflow
when requesting a list of all options.
This patch uses dynamic memory to store all the information.
Change-Id: I115cb9b55fc440434638bf468a50db055cdf271d
BUG: 1526402
Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: glusterd eats a huge amount of memory during volume set/stop/start.
Solution: At the time of comparing graph topology, create a graph and
populate key values in the dictionary, and destroy the new graph once the
comparison is finished. At graph-construction time we don't take any
reference; for server xlators the reference is taken in server_setvolume,
so in glusterd we take a reference after preparing the new graph that is
created to compare graph topology.
BUG: 1520245
Change-Id: I573133d57771b7dc431a04422c5001a06b7dda9a
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
- Sometimes the process that glusterd is trying to kill is already dead.
- In that case, if it can't find the pid, it should just continue on and not fail the entire operation.
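A minimal sketch of the tolerant kill (helper name hypothetical):

    #include <errno.h>
    #include <signal.h>
    #include <sys/types.h>

    /* A pid that no longer exists (ESRCH) is treated as success
     * instead of failing the whole operation. */
    static int
    kill_if_alive(pid_t pid)
    {
            if (kill(pid, SIGTERM) == -1 && errno != ESRCH)
                    return -1;   /* a real error, e.g. EPERM */
            return 0;            /* killed, or already gone */
    }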
Change-Id: Ic96952a8d31927446f648830ede6ccd82512663f
BUG: 1522968
Reviewed-on: https://review.gluster.org/18234
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Signed-off-by: Ana M. Neri <amnerip@fb.com>
|
|
|
|
|
|
|
|
|
|
| |
Daemons like snapd, tierd and gfproxyd are maintained on a per-volume
basis, so on a volume delete we should destroy the RPC connections
established for them.
Change-Id: Id1440e39da07b990fdb9b207df18da04b1ca8014
BUG: 1522775
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: tierd was stopped only after detach commit.
This makes the detach take a longer time: the detach
demotes the files to the cold bricks, and if the promotion
frequency is hit, tierd starts to promote files to the
hot tier again.
Fix: stop tierd after detach start so the files get
demoted faster.
Note: is_tier_enabled was not maintained properly.
That has been fixed too, and some code cleanup has been done.
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: I532f7410cea04fbb960105483810ea3560ca149b
BUG: 1446381
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: snapshot creation was failing after brick reset/replace.
Fix: changed the code to set the mount_dir value in rsp_dict during the
prerequisites phase, i.e. the glusterd_brick_op_prerequisites call, and
removed it from the prevalidate phase.
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: Ief5d0fafe882a7eb1a7da8535b7c7ce6f011604c
BUG: 1512451
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
function: glusterd_volume_rebalance_use_rsp_dict
problem: Execution cannot reach this statement: "goto out;"
fix: removed the condition 'if (!ctx_dict)' and the corresponding action 'goto out;' because they will never be executed.
reason: if execution reaches this condition, '!ctx_dict' is always false; otherwise execution would already have reached the 'out' label, skipping this conditional statement.
html link of issue: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-10-0f524f07/html/1/99glusterd-utils.c.html#error
Change-Id: I7ab6b2386bb01c54edd872f9f83bb8d2a4cd499f
BUG: 789278
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
|
|
|
|
|
|
|
|
|
|
| |
issue: 'recursive_rmdir' was called without checking its return value.
fix: cast the return value of 'recursive_rmdir' to void.
Change-Id: Ie95c2a2c503bb247afa69823d0043c3af5e036e8
BUG: 789278
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
function: glusterd_import_volinfo
problem: Calling strncpy with a maximum size argument of 256 bytes on the destination array "new_volinfo->parent_volname" of size 256 bytes might leave the destination string unterminated.
fix: The third argument of strncpy specifies the number of characters copied from the source string to the destination. To make sure the final string in the destination is always null-terminated, copy one character less than the total capacity of the destination array, leaving the last element for the terminating '\0'.
html link of issue: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-10-0f524f07/html/1/39glusterd-utils.c.html#error
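A minimal standalone example of the pattern; the explicit terminator covers the case where the source fills the buffer, since strncpy only pads with '\0' when the source is shorter than the count:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char parent_volname[256];       /* mirrors the field's size */
            const char *src = "parent-vol"; /* example source string */

            strncpy(parent_volname, src, sizeof(parent_volname) - 1);
            parent_volname[sizeof(parent_volname) - 1] = '\0';
            printf("%s\n", parent_volname);
            return 0;
    }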
Change-Id: I76b0d10e6a932b0885531c9be3c4f4ce7239f3e1
BUG: 789278
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: when server-quorum-type is server, after restarting glusterd
on the node which is up, gluster volume status gives incorrect
information.
Fix: check whether the server is blank before adding other keys into the
dictionary.
Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
BUG: 1511339
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439,
440, 441, 442, 443
Issue: Event include_recursion
Removed redundant, recursive includes from the files.
Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
BUG: 789278
Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
glusterd's brick restart logic is not always sequential, as there are
at least three different ways bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.
In a brick multiplexing setup, glusterd could end up trying to spawn the
same brick process a couple of times, as within a fraction of a
millisecond two threads may hit glusterd_brick_start (), leaving glusterd
no way to reject either of them since the brick start criteria are met
in both cases.
As a solution, this is controlled with two different guards: one is a
boolean called start_triggered, which indicates that a brick start has
been triggered and remains true until the brick dies or is killed; the
second is a mutex lock to ensure that for a particular brick we don't
end up in glusterd_brick_start () more than once at the same point in
time.
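A minimal sketch of the two guards (field names illustrative, not the exact glusterd struct layout):

    #include <pthread.h>
    #include <stdbool.h>

    struct brickinfo {
            bool start_triggered;           /* true until brick dies/is killed */
            pthread_mutex_t restart_mutex;  /* serializes brick starts */
    };

    static int
    brick_start(struct brickinfo *b)
    {
            pthread_mutex_lock(&b->restart_mutex);
            if (b->start_triggered) {
                    /* another thread already triggered this brick */
                    pthread_mutex_unlock(&b->restart_mutex);
                    return 0;
            }
            b->start_triggered = true;
            pthread_mutex_unlock(&b->restart_mutex);

            /* ... actually spawn or attach the brick process here ... */
            return 0;
    }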
Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1506513
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
Consider a case where a node reboot is performed and, prior to the reboot,
the brick was listening on 49153. Post reboot, glusterd assigned 49152
to the brick and started the brick process, but the new port was never
persisted. Now when glusterd restarts, it always reads the port
from its persisted store, i.e. 49153, whereas the pmap signin happens with
the correct port, i.e. 49152.
Fix:
Make sure when glusterd_brick_start is called, glusterd_store_volinfo is
eventually invoked.
Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1506589
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Coverity warns about "Using uninitialized value "ret".", which is
indeed the case. If we can't attach after 15 tries, attach_brick
returns an uninitialized return code, which is likely hard to trigger,
but bad.
-1 is used to follow the UNIX convention, but no specific error
code is tested by the calling code.
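A minimal sketch of the fix (the retry helper is a stand-in, not the real attach path):

    /* stand-in for one attach attempt; always fails in this sketch */
    static int try_attach(void) { return -1; }

    static int
    attach_brick_with_retries(void)
    {
            int ret = -1;   /* initialize: failure, per UNIX convention */
            int tries;

            for (tries = 15; tries > 0; tries--) {
                    ret = try_attach();
                    if (ret == 0)
                            break;   /* attached */
            }
            return ret;   /* still -1 if every attempt failed */
    }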
Change-Id: I43ad60d25ab169f9cea0db600805ced7f77c37ba
BUG: 789278
Signed-off-by: Michael Scherer <misc@redhat.com>
|
|
|
|
|
|
|
| |
Updates: #242
BUG: 1428063
Change-Id: Iaaf2edf99b2ecc75f6d30762c752a6d445c1c826
Signed-off-by: Poornima G <pgurusid@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Adds a new server-side daemon called gfproxyd and a new FUSE client
called gfproxy-client.
Updates: #242
BUG: 1428063
Change-Id: I83210098d3a381922bc64fed1110ae1b76e6519f
Tested-by: Shreyas Siravara <sshreyas@fb.com>
Reviewed-by: Kevin Vigor <kvigor@fb.com>
Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
Signed-off-by: Poornima G <pgurusid@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled loading
client-io-threads for replicate volumes (it was set to on by default in
commit e068c1997314046658dd502e9118dab32decf879) due to performance
issues, but in doing so it inadvertently failed to load the xlator even if
the user explicitly enabled the option using the volume set command.
This was despite returning success for the volume set.
Fix:
Modify the check in perfxl_option_handler() and add checks in volume
create/add-brick/remove-brick code paths, tying it all to
GD_OP_VERSION_3_12_2.
Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
BUG: 1498570
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
command: gluster volume status <volname/all> client-list
output:
Client connections for volume v1
Name          count
-----         ------
fuse          2
tierd         1
total clients for volume v1 : 3
-----------------------------------------------------------------
Client connections for volume v2
Name          count
-----         ------
tierd         1
fuse.gsync    1
total clients for volume v2 : 2
-----------------------------------------------------------------
Updates: #178
Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: https://review.gluster.org/18095
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
glusterd shouldn't concern itself with creating directories specific to
certain xlators.
The index xlator will now proceed with creating the '.glusterfs/indices'
dir only if the parent '.glusterfs' directory exists, which still fixes the
originally reported problem, i.e. the 'volume start force' command shouldn't
create the brick path if it doesn't exist (BUG 1457202).
This reverts most of the changes done by the commit
b58a15948fb3fc37b6c0b70171482f50ed957f42
Change-Id: I7fc52ad64dce220e336c218fb4d85933ca2e61c0
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Reviewed-on: https://review.gluster.org/18003
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The glusterd.vol file always had an option (commented out) to indicate the
base-port from which portmapper allocation starts. This patch brings in the
max-port configuration, with which one can limit the range of ports that
gluster is allowed to bind.
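For illustration, the two options would sit together in glusterd.vol; the values shown are the conventional defaults and are an assumption here:

    volume management
        type mgmt/glusterd
        option base-port 49152
        option max-port  65535
    end-volume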
Fixes: #305
Change-Id: Id7a864f818227b9530a07e13d605138edacd9aa9
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/18016
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: currently we can't identify which process is running and
how many instances of it are available.
Fix: name the process when it is spawned, send the name to the server,
and save it in the client_t.
The processes that abide by this change from this patch are:
1) fuse mount,
2) rebalance,
3) selfheal,
4) tier,
5) quota,
6) snapshot,
7) brick.
8) gfapi (by default. gfapi.<processname> if processname is found)
Note: fuse gets the process name native-fuse-client by default.
If the user gives a name for the fuse mount when spawning it, it will be
of the form --process-name native-fuse-client.<name_specified>.
This can be made use of by processes like the aux mount done by quota,
geo-rep, etc., by adding another option to the aux mount: "-o
process-name=gsync_mount".
Updates: #178
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: Ie4d02257216839338043737691753bab9a974d5e
Reviewed-on: https://review.gluster.org/17957
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently Gluster keeps the process pid information of all the daemons
and brick processes in the Gluster configuration file directory
(i.e., /var/lib/glusterd/*).
These pid files should be separate from the configuration files, since
deletion of the configuration file directory might result in serious problems.
Also, /var/run/gluster is the default placeholder directory for pid files.
So, with this fix, Gluster will keep the pid information of all
processes in the /var/run/gluster/* directory.
Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
BUG: 1258561
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
Reviewed-on: https://review.gluster.org/13580
Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: In a multiplexing setup in a container environment, we hit a race
where, before the first brick finishes its handshake with glusterd, the
subsequent attach requests went through; they actually failed, and
glusterd has no mechanism to realize it. This resulted in all such
bricks staying inactive and clients not being able to connect.
Solution: Introduce a new flag port_registered in glusterd_brickinfo
to make sure pmap_signin has finished before the subsequent
attach-brick requests are processed.
Test: To reproduce the issue followed below steps
1) Create 100 volumes on 3 nodes(1x3) in CNS environment
2) Enable brick multiplexing
3) Reboot one container
4) Run below command
for v in `gluster v list`
do
    glfsheal $v | grep -i "transport"
done
After apply the patch command should not fail.
Note: A big thanks to Atin for suggesting the fix.
BUG: 1478710
Change-Id: I8e1bd6132122b3a5b0dd49606cea564122f2609b
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17984
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Despite the fact that appliances generally use UTC, some
users really want log entries in localtime.
fixes gluster/glusterfs#272
feature page: https://review.gluster.org/17807
Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/16911
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
glusterd fails to start on nodes where it tries to come up even
before the network is up.
Fix:
On startup glusterd tries to resolve the brick path, which is based on
hostname/IP, but in the above scenario, when the network interface is not
up, glusterd cannot resolve the brick path using the IP address or
hostname. With this fix glusterd will use the UUID to resolve the brick path.
Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
BUG: 1472267
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Reviewed-on: https://review.gluster.org/17813
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
gcc 7 (default in Fedora 26) complains about the following:
glusterd-utils.c: In function ‘glusterd_volinfo_copy_brickinfo’:
glusterd-utils.c:4279:54: warning: comparison between pointer and zero character constant [-Wpointer-compare]
if (old_brickinfo->real_path == '\0') {
^~
glusterd-utils.c:4279:29: note: did you mean to dereference the pointer?
if (old_brickinfo->real_path == '\0') {
^
Comparing a char* with a char is not correct in any case. Instead,
compare it to NULL and the char[0] with '\0'.
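A minimal sketch of the corrected test (for a char * field; an array field would only need the first-character check):

    /* Correct: compare the pointer to NULL and the first character
     * to '\0', never the pointer itself to '\0'. */
    static int
    path_is_empty(const char *real_path)
    {
            return real_path == NULL || real_path[0] == '\0';
    }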
Change-Id: Ie5b925cd200416a1e2fa035046005f421994e641
Updates: #259
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17847
Tested-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently the 'storage/posix' xlator has an option
`export-statfs-size no`, which exports zero as the value of a few
fields in `struct statvfs`. In the case of a backend brick shared
between multiple brick processes, the values of these fields
should be `field_value / number-of-bricks-at-node`. This way,
even the issue of 'min-free-disk' etc. at different layers is
also handled properly when the statfs() syscall is made.
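A minimal sketch of that adjustment on a statvfs result; the fields are from <sys/statvfs.h>, and the divisor is the per-node brick count described above:

    #include <sys/statvfs.h>

    static void
    share_statvfs(struct statvfs *buf, unsigned long bricks_at_node)
    {
            if (bricks_at_node > 1) {
                    buf->f_blocks /= bricks_at_node;  /* total blocks */
                    buf->f_bfree  /= bricks_at_node;  /* free blocks */
                    buf->f_bavail /= bricks_at_node;  /* free for unprivileged */
            }
    }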
Fixes #241
Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://review.gluster.org/17618
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
Another race: when glusterd was restarted, glusterd_brick_start () was called
multiple times due to friend handshaking, and in one instance, when one of the
bricks was attempted to be attached to the existing brick process,
send_attach_req failed as the first brick itself was still not up. We then did
a synclock_unlock () followed by a sleep of 1 sec; before the same thread
woke up, another thread tried to start the same brick process and then
assumed that it had to start a fresh brick process.
Solution:
1. If brick is in starting phase (brickinfo->status ==
GF_BRICK_STARTING), no need for a reattempt to
start the brick.
2. While initiating attach_req set brickinfo->status to
GF_BRICK_STARTING
Change-Id: Ib007b6199ec36fdab4214a1d37f99d7f65ef64da
BUG: 1465559
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17840
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When brick-multiplexing is enabled, and
"cluster.max-bricks-per-process" isn't explicitly set, multiplexing
happens without any limit set. But the default value set for that
tunable is 1, which is confusing. This commit sets the default
value to 0, and prevents the user from being able to set this value
to 1 when brick-multiplexing is enabled. The default value of 0
denotes that brick-multiplexing can happen without any limit on the
number of bricks per process.
Change-Id: I4647f7bf5837d520075dc5c19a6e75bc1bba258b
BUG: 1472417
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: https://review.gluster.org/17819
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit introduces a new global option that can be set to limit
the number of multiplexed bricks in one process.
Usage:
`# gluster volume set all cluster.max-bricks-per-process <value>`
If this option is not set then multiplexing will happen for now
with no limitations set; i.e. a brick process will have as many
bricks multiplexed to it as possible. In other words the current
multiplexing behaviour won't change if this option isn't set to
any value.
This commit also introduces a brick process instance that contains
information about brick processes, like the number of bricks
handled by the process (which is 1 in non-multiplexing cases), list
of bricks, and port number, which also serves as a unique identifier
for each brick process instance. The brick process list is
maintained in 'glusterd_conf_t'.
Updates: #151
Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: https://review.gluster.org/17469
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
brickinfo's port & status should be filled in only when the attach-brick
request is successful.
Change-Id: I68b181be37cb94d176f0f4692e8d9dac5493181c
BUG: 1465559
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17640
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: In a container environment, sometimes after deleting a gluster pod
and creating a new gluster pod, the brick process doesn't seem
to come up.
Solution: On the basis of the logs, glusterd seems to be trying to attach
to a non-glusterfs process. Change the code of the function
glusterd_get_sock_from_brick_pid to fetch the socket path from the
arguments of the running brick process.
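A minimal sketch of the approach: /proc/<pid>/cmdline holds the process's argv as NUL-separated strings, so the socket path can be read from the argument following glusterfsd's "-S" flag (buffer sizes and the flag handling here are simplified assumptions):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int
    sock_from_brick_pid(pid_t pid, char *sock, size_t len)
    {
            char proc[64], buf[4096];
            ssize_t n;
            size_t i;
            int fd;

            snprintf(proc, sizeof(proc), "/proc/%d/cmdline", (int)pid);
            fd = open(proc, O_RDONLY);
            if (fd < 0)
                    return -1;
            n = read(fd, buf, sizeof(buf) - 1);
            close(fd);
            if (n <= 0)
                    return -1;
            buf[n] = '\0';
            /* walk the NUL-separated argv; "-S" precedes the socket path */
            for (i = 0; i < (size_t)n; i += strlen(buf + i) + 1) {
                    if (strcmp(buf + i, "-S") == 0 && i + 3 < (size_t)n) {
                            snprintf(sock, len, "%s", buf + i + 3);
                            return 0;
                    }
            }
            return -1;
    }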
BUG: 1464072
Change-Id: Ida6af00066341b683bbb4440d7a0d8042581656a
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17601
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
..or else when a volume start force is given, we end up creating
/brick-path/.glusterfs/indices folder and various subdirs under it and
eventually starting the brick process.
As a part of this patch, glusterd_get_index_basepath() is added in
glusterd, who will then use it to create the basepath during
volume-create, add-brick, replace-brick and reset-brick. It also uses this
function to set the 'index-base' xlator option for the index translator.
Change-Id: Id018cf3cb6f1e2e35b5c4cf438d1e939025cb0fc
BUG: 1457202
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17426
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit tries to handle a race where we might end up trying to spawn
the brick process twice with two different sets of ports, resulting in the
glusterd portmapper holding the same brick entry on two different ports,
which causes clients to fail to connect to bricks because incorrect ports
are communicated back by glusterd.
In glusterd_brick_start (), checking the brickinfo->status flag to identify
whether a brick has been started by glusterd is not sufficient, as there
might be cases where, while glusterd restarts, glusterd_restart_bricks ()
is called through glusterd_spawn_daemons () in a synctask, and immediately
glusterd_do_volume_quorum_action () with server-side quorum set to on will
again try to start the brick. If the RPC_CLNT_CONNECT event for the same
brick hasn't been processed by glusterd by that time, brickinfo->status
will still be marked GF_BRICK_STOPPED, resulting in a reattempt to start
the brick on a different port; that sends the portmap for a toss and makes
clients fetch an incorrect port.
The fix is to introduce another enum value, GF_BRICK_STARTING, for
brickinfo->status, which is set when a brick start is attempted by glusterd
and changed to started through the RPC_CLNT_CONNECT event. For brick
multiplexing this value has no effect, as on an attach-brick request
brickinfo->status is marked started directly.
This patch also removes the started_here flag, as it is redundant with
brickinfo->status.
Change-Id: I9dda1a9a531b67734a6e8c7619677867b520dcb2
BUG: 1457981
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17447
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
|