path: root/rpc
Commit message | Author | Age | Files | Lines
* rpcsvc: Add rpchdr and proghdr to iobref before submitting to transport
  Poornima G, 2017-02-16 (2 files changed, -3/+33)

    Backport of https://review.gluster.org/16613

    Issue:
    When fio is run on multiple clients (each client writes to its own files), and meanwhile the clients do a readdirp, the client which did the readdirp will now receive the upcalls. In this scenario the client disconnects with an "rpc decode failed" error.

    RCA:
    Upcall calls rpcsvc_request_submit to submit the request to the socket. rpcsvc_request_submit currently:

    rpcsvc_request_submit ()
    {
       iobuf = iobuf_new
       iov = iobuf->ptr
       fill iobuf to contain xdrised upcall content - proghdr
       rpcsvc_callback_submit (..iov..)
       ...
       if (iobuf)
           iobuf_unref (iobuf)
    }

    rpcsvc_callback_submit (... iov ...)
    {
       ...
       iobuf = iobuf_new
       iov1 = iobuf->ptr
       fill iobuf to contain xdrised rpc header - rpchdr
       msg.rpchdr = iov1
       msg.proghdr = iov
       ...
       rpc_transport_submit_request (msg)
       ...
       if (iobuf)
           iobuf_unref (iobuf)
    }

    rpcsvc_callback_submit assumes that once rpc_transport_submit_request() returns, the msg has been written to the socket and thus the buffers (rpchdr, proghdr) can be freed, which is not the case. Especially under high workload, rpc_transport_submit_request() may not be able to write to the socket immediately; it then adds the message to its own queue and returns success. Thus we have a use-after-free of rpchdr and proghdr. The client therefore gets garbage rpchdr and proghdr and fails to decode the rpc, resulting in a disconnect.

    To prevent this, we need to add the rpchdr and proghdr to an iobref and send it in msg:
       iobref_add (iobref, iobufs)
       msg.iobref = iobref;
    The socket layer takes a ref on msg.iobref if it cannot write to the socket and is adding the message to its queue. Thus we do not have a use-after-free.

    Thank You for discussing, debugging and fixing along:
    Prashanth Pai <ppai@redhat.com>
    Raghavendra G <rgowdapp@redhat.com>
    Rajesh Joseph <rjoseph@redhat.com>
    Kotresh HR <khiremat@redhat.com>
    Mohammed Rafi KC <rkavunga@redhat.com>
    Soumya Koduri <skoduri@redhat.com>

    > Reviewed-on: https://review.gluster.org/16613
    > Reviewed-by: Prashanth Pai <ppai@redhat.com>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: soumya k <skoduri@redhat.com>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

    Change-Id: Ifa6bf6f4879141f42b46830a37c1574b21b37275
    BUG: 1422363
    Signed-off-by: Poornima G <pgurusid@redhat.com>
    Reviewed-on: https://review.gluster.org/16623
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
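The gist of the fix, written in the same pseudocode style as the RCA above (a sketch, not the actual patch): the iobufs backing rpchdr and proghdr are attached to an iobref that travels with the message, so a queued write keeps them alive. iobref_new/iobref_unref are the usual iobref lifetime calls; the variable names are illustrative.

    rpcsvc_callback_submit (... iov, iobuf ...)   /* iobuf holds proghdr */
    {
        ...
        hdr_iobuf = iobuf_new ()                  /* holds rpchdr */
        iobref = iobref_new ()
        iobref_add (iobref, iobuf)                /* keep proghdr alive */
        iobref_add (iobref, hdr_iobuf)            /* keep rpchdr alive  */
        msg.rpchdr  = iov1
        msg.proghdr = iov
        msg.iobref  = iobref        /* socket layer refs this if it queues */
        rpc_transport_submit_request (msg)
        ...
        iobref_unref (iobref)       /* drop our ref; a queued write holds its own */
        iobuf_unref (iobuf)
        iobuf_unref (hdr_iobuf)
    }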
* glusterd: add a cli command to trigger a statedump on a client
  Poornima G, 2017-02-13 (3 files changed, -1/+5)

    With this, we will be able to trigger statedumps on remote Gluster clients, mainly targeted at applications using libgfapi.

    Design:
    The SIGUSR signal is the most common way of taking a statedump in Gluster. But it cannot be used for libgfapi based processes, as the process loading the library might have already consumed the SIGUSR signal. Hence going by the command way. One has to issue a Gluster command to initiate a statedump on the libgfapi based client. The command takes hostname and PID as arguments. All the glusterds in the cluster check if they are connected to the specified hostname, and send an RPC request to all the connected clients from that hostname (via the mgmt connection).

    > URL: http://review.gluster.org/16357
    > BUG: 1169302
    > Signed-off-by: Poornima G <pgurusid@redhat.com>
    > [ndevos: minor fixes and split patch in smaller pieces]
    > Reviewed-on-master: https://review.gluster.org/9228
    > Reviewed-by: Niels de Vos <ndevos@redhat.com>
    > Tested-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    > Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>

    BUG: 1418981
    Change-Id: Icbe4d2f026b32a2c7d5535e1bfb2cdaaff042e91
    Signed-off-by: Shyam <srangana@redhat.com>
    Reviewed-on: https://review.gluster.org/16601
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* Tier: remove warning related to the enum
  hari gowtham, 2017-02-08 (1 file changed, -1/+2)

    back-port of: https://review.gluster.org/#/c/16539/

    PROBLEM: In the tier as a service patch the enums for tier (from gf1_op_command and gf_defrag_command) are put into a single enum gf_defrag_command, which causes a warning that will make the build fail.

    FIX: send both the enum and eliminate the warning.

    >Change-Id: I899ff622dfb07134e6459aa65f65ea7252765293
    >BUG: 1418973
    >Signed-off-by: hari gowtham <hgowtham@redhat.com>
    >Reviewed-on: https://review.gluster.org/16539
    >Smoke: Gluster Build System <jenkins@build.gluster.org>
    >Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    >Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

    Change-Id: I8d2ec89b8689066091ab58406d225e1058f435cf
    BUG: 1419846
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: https://review.gluster.org/16553
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* socket: GF_REF_PUT should be called outside lock
  Rajesh Joseph, 2017-02-07 (1 file changed, -2/+4)

    GF_REF_PUT was called inside a lock; it can call socket_poller_mayday, which in turn tries to take the same lock. This can lead to a deadlock.

    >Reviewed-on: https://review.gluster.org/16343
    >Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    >Smoke: Gluster Build System <jenkins@build.gluster.org>
    >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>

    BUG: 1419503
    Change-Id: Ib3b161bcfeac810bd3593dc04c10ef984f996b17
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: https://review.gluster.org/16548
    Tested-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
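A minimal, self-contained illustration of the deadlock and the fix: any release path that may re-take the object's own lock (as socket_poller_mayday does here) must be invoked only after that lock has been dropped. All names below are hypothetical stand-ins, not the actual socket code.

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct {
        pthread_mutex_t lock;
        int refcount;
    } conn_t;

    /* stands in for socket_poller_mayday: needs conn->lock itself */
    static void conn_mayday(conn_t *c)
    {
        pthread_mutex_lock(&c->lock);
        /* ... wake waiters, mark the connection for cleanup ... */
        pthread_mutex_unlock(&c->lock);
    }

    static void conn_put(conn_t *c)        /* stands in for GF_REF_PUT */
    {
        int last;
        pthread_mutex_lock(&c->lock);
        last = (--c->refcount == 0);
        pthread_mutex_unlock(&c->lock);
        if (last)
            conn_mayday(c);
    }

    static void worker(conn_t *c)
    {
        pthread_mutex_lock(&c->lock);
        /* ... use the connection ... */
        pthread_mutex_unlock(&c->lock);

        /* The bug pattern: calling conn_put() before the unlock above lets
         * conn_mayday() try to re-acquire c->lock and self-deadlock.
         * The fix is simply to drop the reference outside the lock. */
        conn_put(c);
    }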
* socket: retry connect immediately if it fails
  Jeff Darcy, 2017-02-02 (1 file changed, -2/+36)

    Previously we relied on a complex dance of setting flags, shutting down the socket, tearing stuff down, getting an event, tearing more stuff down, and waiting for a higher-level retry. What we really need, in the case where we're just trying to connect prematurely e.g. to a brick that hasn't fully come up yet, is a simple retry of the connect(2) call. This was discovered by observing failures in ec-new-entry.t with multiplexing enabled, but probably fixes other random failures as well.

    Backport of:
    > Change-Id: Ibedb8942060bccc96b02272a333c3002c9b77d4c
    > BUG: 1385758
    > Reviewed-on: https://review.gluster.org/16510

    BUG: 1418091
    Change-Id: I4bac26929a12cabcee4f9e557c8b4d520948378b
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: https://review.gluster.org/16533
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
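A self-contained sketch of the idea, not the actual socket.c change: retry connect(2) itself on a transient failure instead of tearing the whole transport down. Treating ECONNREFUSED as the "brick not up yet" case is an assumption made for the example.

    #include <errno.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Try to connect a few times, recreating the socket per attempt. */
    static int connect_with_retry(const struct sockaddr *addr, socklen_t len,
                                  int attempts)
    {
        while (attempts-- > 0) {
            int sock = socket(addr->sa_family, SOCK_STREAM, 0);
            if (sock < 0)
                return -1;
            if (connect(sock, addr, len) == 0)
                return sock;               /* connected */
            close(sock);
            if (errno != ECONNREFUSED)     /* assumption: only "refused"   */
                break;                     /* counts as try-again-right-now */
            usleep(100000);                /* brief pause, then retry now   */
        }
        return -1;
    }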
* rpc/socket.c : Bonnie++ hangs during rewrites in ganesha + SSL
  Mohit Agrawal, 2017-02-02 (1 file changed, -0/+5)

    Problem: The Bonnie++ rewrite operation hangs in a ganesha + SSL environment.

    Solution: Bonnie++ hangs during the rewrite operation because it blocks on the poll call in ssl_do, as no POLLOUT event is delivered on the socket. The socket does not get a POLLOUT event because all other threads are waiting to take the lock, and the lock is not released by ssl_do since it never gets an event on poll. To correct it, update the condition in ssl_do to match the handling of the SSL_ERROR_WANT_READ error.

    Test: To test the patch the following procedure was followed:
    1) Set up a 2X2 ganesha + SSL environment.
    2) Run bonnie from 3 nfs clients in parallel.
    3) The "Rewrite" operation run by bonnie hangs.
    4) After applying the patch it no longer hangs.

    > BUG: 1418213
    > Change-Id: I5985cbbc4cfdac5d287268d791e31c274abc3c8d
    > Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    > Reviewed-on: https://review.gluster.org/16501
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    > (cherry picked from commit d7077bca4b372a056d23416294e729637e9af94e)

    Change-Id: Id029c71382025477bb5ff31f28ec537e4fe58b03
    BUG: 1418541
    Reviewed-on: https://review.gluster.org/16513
    Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
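A generic, self-contained sketch of the non-blocking OpenSSL retry pattern the fix relies on (not the actual ssl_do code): SSL_ERROR_WANT_WRITE has to be waited out with a POLLOUT poll, just as SSL_ERROR_WANT_READ is waited out with POLLIN.

    #include <openssl/ssl.h>
    #include <poll.h>

    static int ssl_write_all(SSL *ssl, int fd, const char *buf, int len)
    {
        while (len > 0) {
            int n = SSL_write(ssl, buf, len);
            if (n > 0) { buf += n; len -= n; continue; }

            int err = SSL_get_error(ssl, n);
            struct pollfd pfd = { .fd = fd, .events = 0 };
            if (err == SSL_ERROR_WANT_READ)
                pfd.events = POLLIN;       /* renegotiation may need a read */
            else if (err == SSL_ERROR_WANT_WRITE)
                pfd.events = POLLOUT;      /* wait until the socket drains  */
            else
                return -1;                 /* real error */
            if (poll(&pfd, 1, -1) < 0)
                return -1;
        }
        return 0;
    }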
* libglusterfs+transport+io-threads: fix 256KB stack abuse
  Jeff Darcy, 2017-02-02 (2 files changed, -12/+12)

    Some functions were allocating 64K booleans, which are (crazily) mapped to 4-byte ints, for a total of 256KB per call. Changed to use bitfields instead, so usage is now only 8KB per call. This was the impediment to changing the io-threads stack size, so that has been adjusted too.

    Backport of:
    > Change-Id: I8781c4f2c8f2b830f4535e366995fac8dd0a8653
    > BUG: 1418095
    > Reviewed-on: https://review.gluster.org/15745

    Change-Id: Ia5dada61703e6bea95f2511da71feb573fc9a429
    BUG: 1418536
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: https://review.gluster.org/16511
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
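A self-contained illustration of the space difference (names are hypothetical, not the patched functions): 64K flags stored as 4-byte ints cost 256KB of stack per call, while the same flags packed one per bit cost 8KB.

    #include <stdint.h>
    #include <string.h>

    #define NFLAGS 65536

    /* Before: 64K flags as 4-byte ints -> 65536 * 4 = 256 KB per call.   */
    /* int flags[NFLAGS];                                                  */

    /* After: the same 64K flags packed one per bit -> 65536 / 8 = 8 KB.  */
    typedef struct { uint8_t bits[NFLAGS / 8]; } flagset_t;

    static inline void flag_set(flagset_t *f, unsigned i)
    { f->bits[i >> 3] |= (uint8_t)(1u << (i & 7)); }

    static inline int flag_test(const flagset_t *f, unsigned i)
    { return (f->bits[i >> 3] >> (i & 7)) & 1; }

    /* usage: flagset_t seen; memset(&seen, 0, sizeof(seen));
     *        flag_set(&seen, 4242); if (flag_test(&seen, 4242)) ...       */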
* core: run many bricks within one glusterfsd process
  Jeff Darcy, 2017-02-01 (3 files changed, -3/+1)

    This patch adds support for multiple brick translator stacks running in a single brick server process. This reduces our per-brick memory usage by approximately 3x, and our appetite for TCP ports even more. It also creates potential to avoid process/thread thrashing, and to improve QoS by scheduling more carefully across the bricks, but realizing that potential will require further work.

    Multiplexing is controlled by the "cluster.brick-multiplex" global option. By default it's off, and bricks are started in separate processes as before. If multiplexing is enabled, then *compatible* bricks (mostly those with the same transport options) will be started in the same process.

    Backport of:
    > Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
    > BUG: 1385758
    > Reviewed-on: https://review.gluster.org/14763

    Change-Id: I4bce9080f6c93d50171823298fdf920258317ee8
    BUG: 1418091
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: https://review.gluster.org/16496
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* tier : Tier as a service
  hari gowtham, 2017-01-16 (2 files changed, -4/+11)

    tierd is implemented by separating from the rebalance process. The commands affected:
    1) Attach tier will trigger this process instead of the old one.
    2) tier start and tier start force will also trigger this process.
    3) volume status [tier] will show the tier daemon as a process instead of a task, and normal tier status and tier detach status work.
    4) tier stop implemented.
    5) detach tier implemented separately along with new detach tier status.
    6) volume tier volname status will work using the changes.
    7) volume set works.

    This patch has separated the tier translator from the legacy DHT rebalance code. It now sends the RPCs from the CLI to glusterd separate to the DHT rebalance code. The daemon is now a service, similar to the snapshot daemon, and can be viewed using the volume status command. The code for the validation and commit phase are the same as the earlier tier validation code in DHT rebalance. The "brickop" phase has been changed so that the status command can use this framework.

    The service management framework is now used. DHT rebalance does not use this framework. This service framework takes care of:
    *) spawning the daemon, killing it and other such processes.
    *) volume set options, which are written on the volfile.
    *) restart and reconfigure functions. Restart is to restart the daemon at two points: 1) after gluster goes down and comes up; 2) to stop detach tier.
    *) reconfigure is used to make immediate volfile changes. By doing this, we don't restart the daemon. It has the code to rewrite the volfile for topological changes too (which comes into place during add and remove brick).

    With this patch the log, pid, and volfile are separated and put into respective directories.

    Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
    BUG: 1313838
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/13365
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterfs : Removing logically dead code
  Muthu-vigneshwaran, 2017-01-07 (1 file changed, -6/+0)

    CID = 1356502

    BUG: 789278
    Change-Id: I11a814addc6607902c12aca8f4efec5741cbd7d3
    Signed-off-by: Muthu-vigneshwaran <mvignesh@redhat.com>
    Reviewed-on: http://review.gluster.org/16028
    Tested-by: Muthu Vigneshwaran <muthuvigneshwaran77@gmail.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* socket: socket disconnect should wait for poller thread exit
  Rajesh Joseph, 2016-12-21 (8 files changed, -34/+106)

    When SSL is enabled, or if the "transport.socket.own-thread" option is set, socket_poller runs as a separate thread. Currently during disconnect or the PARENT_DOWN scenario we don't wait for this thread to terminate. PARENT_DOWN will disconnect the socket layer and clean up resources used by socket_poller. Therefore before disconnect we should wait for the poller thread to exit.

    Change-Id: I71f984b47d260ffd979102f180a99a0bed29f0d6
    BUG: 1404181
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: http://review.gluster.org/16141
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* socket: Keepalives should happen on IPv6 as well as IPv4
  Shreyas Siravara, 2016-12-16 (1 file changed, -1/+1)

    Summary:
    - Check for AF_INET *and* AF_INET6.
    - This is a cherry-pick of D3057373 to 3.8

    Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
    Change-Id: I53eb79284eddfee6e13821c6570809f575b96769
    BUG: 1405478
    Reviewed-on: http://review.gluster.org/16167
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Tested-by: Jeff Darcy <jdarcy@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
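A minimal sketch of the family check (helper name hypothetical, not the patched socket code): keepalive is configured for AF_INET and AF_INET6 peers, and skipped only for address families where it does not apply.

    #include <sys/socket.h>
    #include <netinet/in.h>

    static int enable_keepalive(int sock)
    {
        struct sockaddr_storage ss;
        socklen_t len = sizeof(ss);
        int on = 1;

        if (getsockname(sock, (struct sockaddr *)&ss, &len) != 0)
            return -1;

        /* Before the fix only AF_INET was checked, so IPv6 connections
         * silently skipped keepalive configuration. */
        if (ss.ss_family != AF_INET && ss.ss_family != AF_INET6)
            return 0;   /* e.g. AF_UNIX: keepalive does not apply */

        return setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    }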
* rpc: fix for race between rpc and protocol/client
  Rajesh Joseph, 2016-12-05 (2 files changed, -40/+59)

    It is possible that the notification thread which notifies the protocol/client layer about the disconnection is put to sleep and meanwhile a fuse thread or a timer thread initiates and completes reconnection to the brick. The notification thread is then woken up and the protocol/client layer updates its flags to indicate that the network is disconnected. No reconnection is initiated because reconnection is the rpc-lib layer's responsibility and its flags indicate that the connection is connected.

    Fix: Serialize connect and disconnect notify

    Credit: Raghavendra Talur <rtalur@redhat.com>

    Change-Id: I8ff5d1a3283b47f5c26848a42016a40bc34ffc1d
    BUG: 1386626
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: http://review.gluster.org/15916
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* dht/md-cache: Filter invalidate if the file is made a linkto file
  Poornima G, 2016-12-02 (1 file changed, -1/+9)

    Upcall, as a part of setattr, sends an invalidation, and the invalidation carries the resulting stat value. When a file is converted to a linkto file, even then an invalidation is sent, and as a result the mountpoint shows the sticky bit in the stat of the file.
    eg: ---------T. 945 root root 0 Nov 8 10:14 hardlink.999

    Fix: When dht receives a notification of a sticky bit change, it updates the flag, to indicate md-cache to send the subsequent lookup.

    Change-Id: Ic2fd7a5b196db0754f9b97072e644e6bf69da606
    BUG: 1392713
    Signed-off-by: Poornima G <pgurusid@redhat.com>
    Reviewed-on: http://review.gluster.org/15789
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Susant Palai <spalai@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* cluster/afr: CLI for granular entry heal enablement/disablement
  Krutika Dhananjay, 2016-11-28 (1 file changed, -0/+2)

    When there are already existing non-granular indices created that are yet to be healed, if the granular-entry-heal option is toggled from 'off' to 'on', AFR self-heal, whenever it kicks in, will try to look for granular indices in 'entry-changes'. Because of the absence of name indices, granular entry healing logic will fail to heal these directories, and worse yet unset pending extended attributes with the assumption that there are no entries that need heal.

    To get around this, a new CLI is introduced which will invoke the glfsheal program to figure out whether, at the time an attempt is made to enable granular entry heal, there are pending heals on the volume OR there are one or more bricks that are down. If either of them is true, the command will be failed with the appropriate error.

    New CLI: gluster volume heal <VOL> granular-entry-heal {enable,disable}

    Change-Id: I1f4fe8162813b9068e198965d94169fee4adc099
    BUG: 1370410
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/15747
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* afr,dht,ec: Replace GF_EVENT_CHILD_MODIFIED with event SOME_DESCENDENT_DOWN/UP
  Poornima G, 2016-11-21 (1 file changed, -3/+6)

    Currently these are the events related to child_up/down:
    GF_EVENT_CHILD_UP : Issued when any of the protocol clients connects.
    GF_EVENT_CHILD_MODIFIED : Issued by afr/dht/ec
    GF_EVENT_CHILD_DOWN : Issued when any of the protocol clients disconnects.

    These events get modified at the dht/afr/ec layers. Here is a brief on the same.

    DHT:
    - All the subvolumes reported once, and at least one child came up, then GF_EVENT_CHILD_UP is issued
    - connect: GF_EVENT_CHILD_UP is issued
    - disconnect: GF_EVENT_CHILD_MODIFIED is issued
    - All the subvolumes disconnected: GF_EVENT_CHILD_DOWN is issued

    AFR:
    - First subvolume came up, then GF_EVENT_CHILD_UP is issued
    - Subsequent subvolumes coming up results in GF_EVENT_CHILD_MODIFIED
    - Any of the subvolumes goes down, then GF_EVENT_SOME_CHILD_DOWN is issued
    - Last up subvolume goes down, then GF_EVENT_CHILD_DOWN is issued

    Until the patch [1] introduced GF_EVENT_SOME_CHILD_UP, GF_EVENT_CHILD_MODIFIED was issued by afr/dht when any of the subvolumes went up or down. Now with the md-cache changes, there is a necessity to differentiate between child up and down. Hence, introducing GF_EVENT_SOME_DESCENDENT_DOWN/UP and getting rid of GF_EVENT_CHILD_MODIFIED.

    [1] http://review.gluster.org/12573

    Change-Id: I704140b6598f7ec705493251d2dbc4191c965a58
    BUG: 1396038
    Signed-off-by: Poornima G <pgurusid@redhat.com>
    Reviewed-on: http://review.gluster.org/15764
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* Revert "rpc: Fix the race between notification and reconnection"Pranith Kumar Karampuri2016-11-161-4/+3
| | | | | | | | | | | | | | | | This reverts commit a6b63e11b7758cf1bfcb67985e25ec02845f0995. Nithya and Rajesh found that the mount fails sometimes after this patch was merged so reverting it. BUG: 1386626 Change-Id: I959a5b6c7da61368cf4c67c98193c6e8fdd1755d Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/15838 Reviewed-by: N Balachandran <nbalacha@redhat.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org>
* glusterd/quota: upgrade quota.conf file during an upgrade
  Manikandan Selvaganesh, 2016-11-07 (1 file changed, -0/+1)

    Problem
    =======
    When quota is enabled on 3.6, it will have the quota conf version in quota.conf as v1.1. This node gets upgraded to 3.7, but it will still have the quota conf version as v1.1 until a quota enable/disable/set limit is initiated. When this is not initiated and this node tries to peer probe a node which is a fresh install of 3.7 (which will have quota conf version as v1.2), this results in a "Peer rejected" state. This patch fixes the issue.

    Solution
    ========
    When an upgrade happens from 3.6 to 3.7, the quota.conf file needs to be modified as well. With 3.6, the version in quota.conf will be v1.1 and it needs to be changed to v1.2 from 3.7. This is because the inode quota feature is introduced in 3.7. So when an op-version bump-up happens, quota.conf needs to be upgraded to quota conf version v1.2 and all the 16-byte uuids need to be changed to 17-byte uuids as well. Previously, when the cluster version was upgraded to 3.7, quota.conf got upgraded as well, but the upgrade was done only when quota enable/disable/set limit was done. With this patch, the upgrade is done during a cluster op-version bump-up as well.

    Change-Id: Idb5ba29d3e1ea0e45c85d87c952c75da9e0f99f0
    BUG: 1371539
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/15352
    Tested-by: Atin Mukherjee <amukherj@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* rpc: Fix the race between notification and reconnection
  Pranith Kumar K, 2016-10-24 (1 file changed, -3/+4)

    Problem: There was a hang because unlock on an entry failed with ENOTCONN. The client thinks the connection is down whereas the server thinks the connection is up. This is the race we are seeing:
    1) Connection from client to the brick disconnects.
    2) Saved frames unwind is called, which unwinds all frames that were wound before disconnect.
    3) Connection from client to the brick happens, and setvolume.
    4) Disconnect notification for the connection in 1) comes now and calls client_rpc_notify(), which marks the connection to be offline even when the connection is up.

    This is happening because I/O can retrigger the connection before the disconnect notification is sent to the higher layers in rpc.

    Fix: Notify the higher layers that a disconnect happened and then go ahead with the reconnect logic.

    For the logs which point to the information above check: https://bugzilla.redhat.com/show_bug.cgi?id=1386626#c1

    Thanks to Raghavendra G for suggesting the correct fix.

    BUG: 1386626
    Change-Id: I3c84ba1f17010bd69049fa88ec5f0ae431f8cda9
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/15681
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* compound fops: Fix file corruption issue
  Krutika Dhananjay, 2016-10-24 (4 files changed, -15/+1)

    1. Address of a local variable @args is copied into state->req in server3_3_compound(). But even after the function has gone out of scope, in server_compound_resume() this pointer is accessed and dereferenced. This patch fixes that.

    2. Compound fops, by virtue of NOT having a vector sizer (like the one writev has), end up having both the header and the data (in case one of its member fops is WRITEV) in the same hdr_iobuf. This buffer was not being preserved through the lifetime of the compound fop, causing it to be overwritten by a parallel write fop, even when the writev associated with the currently executing compound fop is yet to hit the disk, thereby corrupting the file's data. This is fixed by associating the hdr_iobuf with the iobref so its memory remains valid through the lifetime of the fop.

    3. Also fixed a use-after-free bug in protocol/client in the compound fops cbk, missed by Linux but caught by NetBSD.

    Finally, big thanks to Pranith Kumar K and Raghavendra Gowdappa for their help in debugging this file corruption issue.

    Change-Id: I6d5c04f400ecb687c9403a17a12683a96c2bf122
    BUG: 1378778
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/15654
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* rpc/socket.c : Modify socket_poller code in case of ENODATA error code.
  Mohit Agrawal, 2016-10-23 (1 file changed, -1/+1)

    Problem: Continuous warning messages (ENODATA) are coming in socket_rwv while SSL is enabled.

    Solution: To avoid the warning message, update one condition in the socket_poller loop code before breaking out of the loop in case of an error returned by the poll functions.

    BUG: 1386450
    Change-Id: I19b3a92d4c3ba380738379f5679c1c354f0ab9b1
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: http://review.gluster.org/15677
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* rpc/socket: Close pipe on disconnection
  Kaushal M, 2016-10-23 (1 file changed, -1/+8)

    Encrypted connections create a pipe, which isn't closed when the connection disconnects. This leaks fds, and gluster eventually ends up in a situation with fd starvation which leads to operation failures.

    Change-Id: I144e1f767cec8c6fc1aa46b00cd234129d2a4adc
    BUG: 1336371
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/14356
    Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* rpc/socket.c : Modify gf_log message in socket_poller code in case of error
  Mohit Agrawal, 2016-10-12 (1 file changed, -3/+6)

    Problem: In case of SSL, after stopping the volume, if the client (mount point) is still trying to write data to the socket, it will throw an EIO error on that socket, and since this log message is captured at every attempt it would flood the log file.

    Solution: To reduce the frequency of the stored log message, use GF_LOG_OCCASIONALLY instead of gf_log.

    BUG: 1381115
    Change-Id: I66151d153c2cbfb017b3ebc4c52162278c0f537c
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: http://review.gluster.org/15605
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* socket: log the client identifier in ssl connect
  Mohit Agrawal, 2016-10-05 (1 file changed, -2/+2)

    Problem: The client identifier is not logged in the message in ssl_setup_connection.

    Solution: In ssl_setup_connection, xl_private is not available in rpc_transport, so changed to use this peerinfo.identifier instead.

    BUG: 1380275
    Change-Id: I05006a3d63e46de8c388298c22faa9a3329eb6f3
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: http://review.gluster.org/15596
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* build: rpc/xdr .c and .h files are regenerated unnecessarily
  Kaleb S. KEITHLEY, 2016-09-20 (1 file changed, -4/+8)

    .c and .h files are blindly (re)generated. If you run `make` followed by a second `make` or `make install`, the subsequent `make` will unnecessarily recompile everything, because the redundantly regenerated .c and .h files are now newer than the prior build's .o files.

    Change-Id: I7e477bcdcc20869e144ada7e6d91e7221b8ee71f
    BUG: 1377341
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/15530
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Niels de Vos <ndevos@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Anoop C S <anoopcs@redhat.com>
* socket: pollerr event shouldn't trigger socket_connnect_finish
  Atin Mukherjee, 2016-09-19 (2 files changed, -1/+44)

    If connect fails with any error other than EINPROGRESS, we cannot get the error status using getsockopt (... SO_ERROR ...). Hence we need to remember the state of connect and take appropriate action in the event_handler for the same.

    As an added note, an event can come where poll_err is HUP and we have poll_in as well (i.e. some status was written to the socket), so for such cases we need to finish the connect, process the data and then the poll_err, as is the case in the current code.

    Special thanks to Kaushal M & Raghavendra G for figuring out the issue.

    Change-Id: Ic45ad59ff8ab1d0a9d2cab2c924ad940b9d38528
    BUG: 1372356
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Signed-off-by: Shyam <srangana@redhat.com>
    Reviewed-on: http://review.gluster.org/15440
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* rpc: increase RPC/XID with each callback
  Niels de Vos, 2016-09-19 (2 files changed, -3/+19)

    The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In Wireshark these RPC-calls are marked as "RPC retransmissions" because of the repeating RPC/XID. This is most confusing when verifying the callbacks that the upcall framework sends. There is no way to see the difference between real retransmissions and new callbacks.

    This change was verified by creation and removal of files through different Gluster clients. The RPC/XID is increased on a per connection (or client) basis. The expectations of the RPC protocol are met this way.

    Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    BUG: 1377097
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/15524
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* build: out-of-tree builds generates files in the wrong directory
  Kaleb S KEITHLEY, 2016-09-18 (16 files changed, -74/+202)

    And minor cleanup of a few of the Makefile.am files while we're at it.

    Rewrite the make rules to do what xdrgen does. Now we can get rid of xdrgen.

    Note 1. netbsd6's sed doesn't do -i. Why are we still running smoke tests on netbsd6 and not netbsd7? We barely support netbsd7 as it is.

    Note 2. Why is/was libgfxdr.so (.../rpc/xdr/src/...) linked with libglusterfs? A cut-and-paste mistake? It has no references to symbols in libglusterfs.

    Note 3. "/#ifndef\|#define\|#endif/" (note the '\'s) is a _basic_ regex that matches the same lines as the _extended_ regex "/#(ifndef|define|endif)/". To match the extended regex sed needs to be run with -r on Linux; with -E on *BSD. However NetBSD's and FreeBSD's sed helpfully also provide -r for compatibility. Using a basic regex avoids having to use a kludge in order to run sed with the correct option on OS X.

    Note 4. Not copying the bit of xdrgen that inserts copyright/license boilerplate. AFAIK it's silly to pretend that machine generated files like these can be copyrighted or need license boilerplate. The XDR source files have their own copyright and license; and their copyrights are bound to be more up to date than old boilerplate inserted by a script. From what I've seen of other Open Source projects -- e.g. gcc and its C parser files generated by yacc and lex -- IIRC they don't bother to add copyright/license boilerplate to their generated files.

    It appears that it's a long-standing feature of make (SysV, BSD, gnu) for out-of-tree builds to helpfully pretend that the source files it can find in the VPATH "exist" as if they are in the $cwd. rpcgen doesn't work well in this situation and generates files with "bad" #include directives.

    E.g. if you `rpcgen ../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.x`, you get an #include directive in the generated .c file like this:
      ...
      #include "../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.h"
      ...
    which (obviously) results in compile errors on an out-of-tree build because the (generated) header file doesn't exist at that location. Compared to `rpcgen ./glusterfs3-xdr.x` where you get:
      ...
      #include "glusterfs3-xdr.h"
      ...
    which is what we need.

    We have to resort to some Stupid Make Tricks like the addition of various .PHONY targets to work around the VPATH "help".

    Warning: When doing an in-tree build, -I$(top_builddir)/rpc/xdr/... looks exactly like -I$(top_srcdir)/rpc/xdr/... Don't be fooled though. And don't delete the -I$(top_builddir)/rpc/xdr/... bits.

    Change-Id: Iba6ab96b2d0a17c5a7e9f92233993b318858b62e
    BUG: 1330604
    Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/14085
    Tested-by: Niels de Vos <ndevos@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* io-stats: Add stats for upcall notifications  [v3.10dev]
  Poornima G, 2016-08-31 (1 file changed, -0/+12)

    With this patch, there will be additional entries seen in the profile info:

    UPCALL : Total number of upcall events that were sent from the brick (in brick profile), and number of upcall notifications received by the client (in client profile)

    Cache invalidation events:
    -------------------------
    CI_IATT : Number of upcalls that were cache invalidation and had one of the IATT_UPDATE_FLAGS set. This indicates that one of the iatt values was changed.
    CI_XATTR : Number of upcalls that were cache invalidation, and had one of UP_XATTR or UP_XATTR_RM set. This indicates that an xattr was updated or deleted.
    CI_RENAME : Number of upcalls that were cache invalidation, resulting from the renaming of a file or directory
    CI_UNLINK : Number of upcalls that were cache invalidation, resulting from the unlink of a file.
    CI_FORGET : Number of upcalls that were cache invalidation, resulting from the forget of an inode on the server side.

    Lease events:
    ------------
    LEASE_RECALL : Number of lease recalls sent by the brick (in brick profile), and number of lease recalls received by the client (in client profile)

    Note that the sum of CI_IATT, CI_XATTR, CI_RENAME, CI_UNLINK, CI_FORGET, LEASE_RECALL may not be equal to UPCALL. This is because each cache invalidation can carry multiple flags. Eg:
    - Every CI_XATTR will have CI_IATT
    - Every CI_UNLINK will also increment CI_IATT as link count is an iatt attribute.
    Also UP_PARENT_DENTRY_FLAGS is currently not accounted for, as CI_RENAME and CI_UNLINK will always have the flag UP_PARENT_DENTRY_FLAGS.

    Change-Id: Ieb8cd21dde2c4c7618f12d025a5e5156f9cc0fe9
    BUG: 1371543
    Signed-off-by: Poornima G <pgurusid@redhat.com>
    Reviewed-on: http://review.gluster.org/15193
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd : Introduce reset brick
  Anuradha Talur, 2016-08-29 (1 file changed, -0/+1)

    The command basically allows replace brick with src and dst bricks as the same.

    Usage:
    gluster v reset-brick <volname> <hostname:brick-path> start

    This command kills the brick to be reset. Once this command is run, admin can do other manual operations that they need to do, like configuring some options for the brick. Once this is done, resetting the brick can be continued with the following options.

    gluster v reset-brick <vname> <hostname:brick> <hostname:brick> commit {force}

    Does the job of resetting the brick. The 'force' option should be used when the brick already contains the volinfo id.

    Problem: On doing a disk-replacement of a brick in a replicate volume the following 2 scenarios may occur:
    a) there is a chance that reads are served from this replaced-disk brick, which leads to empty reads.
    b) potential data loss if next writes succeed only on the replaced brick, and heal is done to other bricks from this one.

    Solution: After disk-replacement, make sure that the reset-brick command is run for that brick so that pending markers are set for the brick and it is not chosen as a source for reads and heal. But, as of now, replace-brick for the same brick-path is not allowed. In order to fix the above mentioned problem, same brick-path replace-brick is needed.

    With this patch reset-brick commit {force} will be allowed even when source and destination <hostname:brickpath> are identical as long as
    1) the destination brick is not alive
    2) the source and destination brick have the same brick uuid and path.

    Also, the destination brick after replace-brick will use the same port as the source brick.

    Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
    BUG: 1266876
    Signed-off-by: Anuradha Talur <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/12250
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* rpc: dead code fix
  ankit, 2016-08-29 (1 file changed, -2/+2)

    CID: 1357866

    Change-Id: I117cfd455b05df8d2bd9c8df4a081ffe4d053d09
    BUG: 789278
    Signed-off-by: ankit <anraj@redhat.com>
    Reviewed-on: http://review.gluster.org/15310
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: ankitraj
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* cli,rpc : Removing logically dead code
  Muthu-vigneshwaran, 2016-08-29 (1 file changed, -2/+0)

    CID = 1357865, 1357866

    Change-Id: I05eb345e9357d889872d259a600759392b188bb3
    BUG: 789278
    Signed-off-by: Muthu-vigneshwaran <mvignesh@redhat.com>
    Reviewed-on: http://review.gluster.org/15148
    Tested-by: Muthu Vigneshwaran
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
* rpc: fix unused variable warnings/errors
  Kaleb S. KEITHLEY, 2016-08-29 (2 files changed, -5/+2)

    http://review.gluster.org/14085 fixes a/the "leak" - via the generated rpc/xdr headers - of pragmas that mask these warnings. However 14085 won't pass the smoke test until all the warnings are fixed.

    Change-Id: I20d91091bee0bf8f198a307ebba4b284bc3817ff
    BUG: 1369124
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/15240
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
* glusterd/cli: cli to get local state representation from glusterd
  Samikshan Bairagya, 2016-08-26 (1 file changed, -0/+1)

    Currently there is no existing CLI that can be used to get the local state representation of the cluster as maintained in glusterd in a readable as well as parseable format.

    The CLI added has the following usage:
    # gluster get-state [daemon] [odir <path/to/output/dir>] [file <filename>]

    This would dump data points that reflect the local state representation of the cluster as maintained in glusterd (no other daemons are supported as of now) to a file inside the specified output directory. The default output directory and filename are /var/run/gluster and glusterd_state_<timestamp> respectively. The option for specifying the daemon name leaves room to add support for other daemons in the future.

    Following are the data points captured as of now to represent the state from the local glusterd pov:

    * Peer:
      - Primary hostname
      - uuid
      - state
      - connection status
      - List of hostnames

    * Volumes:
      - name, id, transport type, status
      - counts: bricks, snap, subvol, stripe, arbiter, disperse, redundancy
      - snapd status
      - quorum status
      - tiering related information
      - rebalance status
      - replace bricks status
      - snapshots

    * Bricks:
      - Path, hostname (for all bricks this info will be shown)
      - port, rdma port, status, mount options, filesystem type and signed-in status for bricks running locally.

    * Services:
      - name, online status for initialised services

    * Others:
      - Base port, last allocated port
      - op-version
      - MYUUID

    Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
    BUG: 1353156
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: http://review.gluster.org/14873
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* feature/bitrot: Ondemand scrub option for bitrot
  Kotresh HR, 2016-08-25 (2 files changed, -0/+2)

    The bitrot scrubber takes 'hourly/daily/biweekly/monthly' as the values for 'scrub-frequency'. There is no way to schedule the scrubbing when the admin wants it. Ondemand scrubbing brings in the new option 'ondemand' with which the admin can start scrubbing on demand. It starts the scrubbing immediately.

    Ondemand scrubbing is successful only if the scrubber is in 'Active (Idle)' state (waiting for its next frequency cycle to start scrubbing). It is not entertained when the scrubber is 'Paused' or already running.

    Here is the command line syntax:
    gluster volume bitrot <vol name> scrub ondemand

    Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
    BUG: 1366195
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/15111
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
* snapshot/cli: Fix snapshot status xml output
  Avra Sengupta, 2016-08-23 (1 file changed, -2/+7)

    snap status --xml errors out if a brick is down and doesn't have a pid. It is handled in the cli of snap status, where "N/A" is displayed in such a scenario. Handled the same in xml.

    snap status <snapname> --xml fails as the writer is not initialised for the same. Using GF_SNAP_STATUS_TYPE_ITER instead of GF_SNAP_STATUS_TYPE_SNAP for all snaps' status to differentiate between the two scenarios.

    Added testcase volume-snapshot-xml.t to check all snapshot commands' xml outputs.

    Change-Id: I99563e8f3e84f1aaeabd865326bb825c44f5c745
    BUG: 1325831
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/14018
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* socket: SSL_shutdown should be called before socket shutdown
  Rajesh Joseph, 2016-08-16 (1 file changed, -13/+28)

    SSL_shutdown shuts down an active SSL connection. But we are calling this after the underlying socket is disconnected.

    Change-Id: Ia943179d23395f42b942450dbcf26336d4dfc813
    BUG: 1362602
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: http://review.gluster.org/15072
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
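A small sketch of the ordering this commit is about (not the gluster socket code): send the TLS close_notify while the transport is still usable, then shut the socket down.

    #include <openssl/ssl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void tls_teardown(SSL *ssl, int sock)
    {
        SSL_shutdown(ssl);          /* 1. close_notify while the socket still works;
                                     *    a second call would wait for the peer's
                                     *    close_notify, omitted here */
        shutdown(sock, SHUT_RDWR);  /* 2. now disconnect the transport */
        SSL_free(ssl);
        close(sock);
    }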
* rpc: fixing illegal memory access
  Muthu-vigneshwaran, 2016-08-11 (1 file changed, -1/+1)

    CID: 1357876

    Change-Id: I34eee0bf0367f963ddf6e9d6b28b7d3a9af12c90
    BUG: 789278
    Signed-off-by: Muthu-vigneshwaran <mvignesh@redhat.com>
    Reviewed-on: http://review.gluster.org/15094
    Tested-by: Muthu Vigneshwaran
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* changelog/rpc: Fix rpc_clnt_t mem leaks
  Kotresh HR, 2016-07-22 (2 files changed, -6/+99)

    PROBLEM:

    1. Freeing up the rpc_clnt object might lead to crashes. Well, it was not a necessity to free the rpc-clnt object till now because all the existing use cases need to reconnect back on disconnects. Hence the timer code was not taking a ref on the rpc-clnt object. Glusterd had some use-cases that led to crashes due to the ping-timer, and they fixed only those code paths that involve the ping-timer. Now, since changelog has a use-case where rpc-clnt needs to be freed up, we need to fix the timer code to take refs.

    2. In changelog, because of issue 1, only mydata was being freed, which is incorrect. And there are races where the rpc-clnt object would access the freed mydata, which would lead to crashes. Since the changelog xlator resides on the brick side and is a long-living process, if multiple libgfchangelog consumers register to changelog and disconnect/reconnect multiple times, it would result in a leak of the 'rpc-clnt' object for every connect/disconnect.

    SOLUTION:

    1. Handle ref/unref of the 'rpc_clnt' structure in timer functions properly.
    2. In changelog, unref 'rpc_clnt' in RPC_CLNT_DISCONNECT after disabling timers, and free mydata on RPC_CLNT_DESTROY.

    RPC SETUP IN CHANGELOG:

    1. changelog xlator initiates an rpc server, say 'changelog_rpc_server'
    2. libgfchangelog initiates one rpc server, say 'libgfchangelog_rpc_server'
    3. libgfchangelog initiates an rpc client and connects to 'changelog_rpc_server'
    4. In return changelog_rpc_server initiates an rpc client and connects back to 'libgfchangelog_rpc_server'

    REF/UNREF HANDLING IN TIMER FUNCTIONS:

    Let's say rpc clnt refcount = 1
    1. Take the ref before registering a callback to the timer queue
       >>>> rpc_clnt_ref (say ref count becomes = 2)
    2. Register a callback to the timer, say 'callback1'
    3. If register fails:
       >>>> rpc_clnt_unref (ref count = 1)
    4. On timer expiration, 'callback1' gets called. So unref rpc clnt at the end in 'callback1'. This corresponds to the ref taken in step 1
       >>>> rpc_clnt_unref (ref count = 1)
    5. The cycle from step-1 to step-4 continues... until a timer cancel event happens
    6. Timer cancel of, say, 'callback1':
       If timer cancel fails: Do nothing, step-4 would have unrefd
       If timer cancel succeeds:
       >>>> rpc_clnt_unref (ref count = 1)

    Change-Id: I91389bc511b8b1a17824941970ee8d2c29a74a09
    BUG: 1316178
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/13658
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
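The ref/unref protocol above, condensed into a compilable sketch. The refcounted client and the one-shot timer API (arm_timer/cancel_timer) are hypothetical stand-ins, not the rpc-clnt or gf_timer APIs.

    #include <stdlib.h>

    typedef struct clnt { int refcount; } clnt_t;

    static clnt_t *clnt_ref(clnt_t *c)   { c->refcount++; return c; }
    static void    clnt_unref(clnt_t *c) { if (--c->refcount == 0) free(c); }

    typedef void (*timer_cb)(void *data);
    static int arm_timer(timer_cb cb, void *data)    { (void)cb; (void)data; return 0; }
    static int cancel_timer(timer_cb cb, void *data) { (void)cb; (void)data; return 0; }

    static void on_timeout(void *data)          /* step 4: timer fired */
    {
        clnt_t *c = data;
        /* ... do the periodic work ... */
        clnt_unref(c);                          /* drop the ref taken when arming */
    }

    static int start_timer(clnt_t *c)
    {
        clnt_ref(c);                            /* step 1: ref before arming */
        if (arm_timer(on_timeout, c) != 0) {    /* steps 2-3 */
            clnt_unref(c);                      /* arming failed: give the ref back */
            return -1;
        }
        return 0;
    }

    static void stop_timer(clnt_t *c)
    {
        if (cancel_timer(on_timeout, c) == 0)   /* step 6 */
            clnt_unref(c);                      /* cancelled before firing: drop ref */
        /* if cancel fails, the callback already ran and did the unref */
    }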
* rpc/socket.c : Modify socket code to avoid fd leak in case of socket_poller thread
  Mohit Agrawal, 2016-07-20 (1 file changed, -4/+3)

    Update socket code to avoid an fd leak in the case of the socket_poller thread.

    Change-Id: Ifa9718fbaf065988a299cda7ba0282dfd6b10a32
    BUG: 1356888
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: http://review.gluster.org/14929
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
* rpc: fix several problems in failure handle logic
  Zhou Zhengping, 2016-07-14 (1 file changed, -9/+8)

    Once a dynstr is set into a dict by the function dict_set_dynstr, its free operation will be called by this dict when the dict is destroyed.

    Signed-off-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
    Change-Id: Idd2bd19a041bcb477e1c897428ca1740fb75c5f3
    BUG: 1354141
    Reviewed-on: http://review.gluster.org/14882
    Tested-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
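A fragment illustrating the ownership rule the fix depends on, assuming glusterfs' dict_t API (dict_set_dynstr, gf_strdup, GF_FREE); it is a sketch of the rule, not the patched code.

    /* On success dict_set_dynstr() makes the dict the owner of 'value' and
     * the dict frees it when destroyed, so the caller must NOT free it
     * again; the caller only frees it when the set itself fails. */
    static int set_owned_string(dict_t *dict)
    {
        char *value = gf_strdup("some-value");
        if (!value)
            return -1;
        if (dict_set_dynstr(dict, "key", value) != 0) {
            GF_FREE(value);   /* the dict did not take ownership */
            return -1;
        }
        return 0;             /* dict owns 'value' now; freeing it here
                               * would cause a double free later */
    }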
* rpc: invalid argument when function setsockopt sets option TCP_USER_TIMEOUT
  Zhou Zhengping, 2016-07-10 (1 file changed, -0/+2)

    If the option "transport.tcp-user-timeout" hasn't been set, glusterd's priv->timeout will be -1, which causes an invalid-argument error when setting TCP_USER_TIMEOUT.

    Change-Id: Ibc16264ceac0e69ab4a217ffa27c549b9fa21df9
    BUG: 1349657
    Signed-off-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
    Reviewed-on: http://review.gluster.org/14785
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
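A sketch of the guard implied by the fix (helper name hypothetical): skip the setsockopt() call entirely when no timeout was configured, since -1 is not a valid TCP_USER_TIMEOUT value.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* TCP_USER_TIMEOUT takes milliseconds; it is a Linux-specific option. */
    static int set_tcp_user_timeout(int sock, int timeout_ms)
    {
        unsigned int val;

        if (timeout_ms < 0)
            return 0;              /* option not configured: keep kernel default */

        val = (unsigned int)timeout_ms;
        return setsockopt(sock, IPPROTO_TCP, TCP_USER_TIMEOUT, &val, sizeof(val));
    }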
* rpc/socket: pthread resources are not cleaned up
  N Balachandran, 2016-07-08 (1 file changed, -4/+6)

    A socket_connect failure creates a new pthread which is not a detached thread. As no pthread_join is called, the thread resources are not cleaned up, causing a memory leak. Now, socket_connect creates a detached thread to handle failure.

    Change-Id: Idbf25d312f91464ae20c97d501b628bfdec7cf0c
    BUG: 1343374
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/14875
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
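A self-contained sketch of the approach (helper name hypothetical): create the failure-handling thread detached, so its resources are reclaimed on exit without a pthread_join().

    #include <pthread.h>

    static int spawn_detached(void *(*fn)(void *), void *arg)
    {
        pthread_t tid;
        pthread_attr_t attr;
        int ret;

        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        ret = pthread_create(&tid, &attr, fn, arg);
        pthread_attr_destroy(&attr);
        return ret;   /* 0 on success; no join needed (or possible) later */
    }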
* rpc/socket.c : Modify socket_poller code in case of ENODATA error code.
  Mohit Agrawal, 2016-07-08 (1 file changed, -1/+1)

    Problem: Polling failure errors keep appearing until the volume has come up, while SSL is enabled.

    Solution: To avoid the message, update one condition in the socket_poller code: it will not exit from the thread in case ENODATA is returned by the ssl_do function.

    Change-Id: Ia514e99b279b07b372ee950f4368ac0d9c702d82
    BUG: 1349709
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: http://review.gluster.org/14786
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* mgmt: remove unused and misleading pmap_signup
  Jeff Darcy, 2016-07-07 (2 files changed, -11/+5)

    We use signin, but not signup. Having both just introduces confusion. The proc number has been retained to avoid changes to the numbering of other procs, and the mapping to a name has similarly been retained as a placeholder, but the code and structure definitions have been removed.

    Change-Id: I60f64f3b5d71ba6ed6862b36a38f90a9c8271c9f
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/14792
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* socket: log the client identifier in ssl connect
  Raghavendra Bhat, 2016-06-30 (1 file changed, -1/+6)

    Change-Id: I4b463ecafb66de16cbe7ed23fae800bb1204f829
    BUG: 1333912
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/14242
    Tested-by: Vijay Bellur <vbellur@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
* rpc/socket.c: Modify approach to cleanup threads of socket_poller in socket_spawn.
  Mohit Agrawal, 2016-06-24 (1 file changed, -141/+45)

    Problem: The current approach to cleaning up socket_poller threads is not appropriate.

    Solution: Enable the detach flag at the time of thread creation in socket_spawn.

    Fix: Write a new wrapper (gf_create_detach_thread) to create a detachable thread instead of storing thread ids in a queue.

    Test: The fix is verified on gluster processes. To test the patch the following procedure was followed:
    Enable the client.ssl and server.ssl options on the volume.
    Start the volume and count anon segments in the pmap output for the glusterd process:
      pmap -x <glusterd-pid> | grep "\[ anon \]" | wc -l
    Stop the volume and check the count of anon segments again; it should not increase.

    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Change-Id: Ib8f7ec7504ec8f6f74b45ce6719b6fb47f9fdc37
    BUG: 1336508
    Reviewed-on: http://review.gluster.org/14694
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* protocol: Add framework to send transaction id with recall
  Poornima G, 2016-06-10 (2 files changed, -7/+25)

    Issue: The upcall (cache invalidation/recall) event is sent from the bricks to clients. In an AFR/EC setup, it can so happen that all the bricks will send the upcall for the same event, and if AFR/EC doesn't filter out these duplicate notifications, the logic above cluster xlators can fail.

    Solution: Use a transaction id to filter out duplicate notifications. This patch adds the framework for duplicate notifications. AFR/EC can build up on this patch for deduping the notifications.

    Change-Id: I66b08e63b8799bc5932f2b2545376138a5701168
    BUG: 1319992
    Signed-off-by: Poornima G <pgurusid@redhat.com>
    Reviewed-on: http://review.gluster.org/14647
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* xdr/nfs: free complete groupnode structure
  Bipin Kunal, 2016-06-07 (1 file changed, -4/+14)

    The groupnode->gr_next pointer is not traversed upon free. This is currently not a problem, because the pointer is never used. However the correct way to free a groupnode should check the ->gr_next pointer and free any of the groups that it encounters.

    This problem was identified while correcting a problem with the MOUNT protocol. The change "nfs: build exportlist with multiple groups" starts to use ->gr_next.

    Change-Id: I9d04eaf4c65bdb8db136321d60e70789da1739d7
    BUG: 1343286
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/14666
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: bipin kunal <kunalbipin@gmail.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
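A simplified sketch of the complete free described above (the struct is reduced to the two relevant fields and is a stand-in for the XDR-generated type): walk gr_next and free every node in the chain, not just the first one.

    #include <stdlib.h>

    struct groupnode {
        char *gr_name;
        struct groupnode *gr_next;
    };

    static void free_groupnodes(struct groupnode *grp)
    {
        while (grp) {
            struct groupnode *next = grp->gr_next;   /* save before freeing */
            free(grp->gr_name);
            free(grp);
            grp = next;
        }
    }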
* libglusterfs (timer): race conditions, illegal mem access, mem leak
  Kaleb S KEITHLEY, 2016-06-01 (1 file changed, -2/+2)

    While investigating gfapi memory consumption with valgrind, valgrind reported several memory access issues. Also see the timer 'registry' being recreated (shortly) after being freed during teardown due to the way it's currently written.

    Passing ctx as data to gf_timer_proc() is prone to memory access issues if ctx is freed before gf_timer_proc() terminates. (And in fact this does happen, at least in valgrind.) gf_timer_proc() doesn't need ctx for anything, it only needs ctx->timer, so just pass that.

    Nothing ever calls gf_timer_registry_init(). Nothing outside of timer.c, that is. Making it and gf_timer_proc() static.

    Change-Id: Ia28454dda0cf0de2fec94d76441d98c3927a906a
    BUG: 1333925
    Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/14247
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Poornima G <pgurusid@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>