path: root/rpc
* rpc-transport: socket_poller fixes for proper working of mgmt encryption (Kaushal M, 2015-07-17; 1 file, -5/+8)

    Backport of 8c39f14 from master.

    socket_poller, the polling function used by the SSL own_thread, had two issues which led to GlusterD crashes when using management encryption.

    Issue 1
    -------
    socket_poller calls functions which require THIS to be set, but THIS was being set only conditionally, so those functions could sometimes be called without it. For example, rpc_transport_notify could be called for an accepted client socket without THIS being set, as THIS was only set if the transport wasn't yet connected. This would cause the process to crash when THIS was accessed by the called functions. To fix this, THIS is now set unconditionally at the start of socket_poller.

    Issue 2
    -------
    The DISCONNECT notify was being sent on the listener transport instead of the client transport. The DISCONNECT event was converted to a LISTENER_DEAD event in rpcsvc_handle_disconnect, which could not find a listener for the listener socket itself. GlusterD was therefore notified of a LISTENER_DEAD event instead of a DISCONNECT and failed to remove the client transport from its xprt_list. The transport would subsequently be freed, leaving the xprt_list with a corrupted/invalid entry. Later, when GlusterD iterated over the xprt_list to send notifications, it would crash on the invalid entry. To fix this, the DISCONNECT notification in socket_poller is now sent on the client socket, as is done in the epoll handler.

    Change-Id: I0370b7c6d7eb13de10ebf08d91a4a39dc7d64c7a
    BUG: 1243700
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/11690
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* socket: fix multithreading-related build problems on RHEL5 (Jeff Darcy, 2015-05-27; 1 file, -0/+13)

    Change-Id: I3e145137c3ef738c4459c8d3098d6094ccfb008a
    BUG: 1225072
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/10921
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* socket: use OpenSSL multi-threading interfaces (Jeff Darcy, 2015-05-19; 2 files, -3/+70)

    OpenSSL isn't thread-safe unless you register these locking and thread ID functions. Most often the crashes would occur around X509_verify_cert, even though it's insane that the certificate parsing functions wouldn't be thread-safe. The bug for this was filed over two years ago, but it didn't seem like a high priority because the bug didn't bite anyone until it caused a spurious regression-test failure. Ironically, that was on a test for a *different* spurious regression-test failure, which I guess is just deserts for leaving this on the to-do list so long.

    This is a backport of 8830e90fa1b131057e4ee1742cb83d78102714c0.

    > Change-Id: I2a6c0e9b361abf54efa10ffbbbe071404f82b0d9
    > BUG: 906763
    > Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    > Reviewed-on: http://review.gluster.org/10075
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    > Reviewed-by: Vijay Bellur <vbellur@redhat.com>

    Change-Id: I66de5f2830936ccc8d447fed6d1a83ebe7e93460
    BUG: 1218167
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/10591
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
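    For illustration only (not the patch's code): a minimal sketch of the registration the message describes, using the pre-1.1.0 OpenSSL locking API; the function names here are hypothetical, and OpenSSL 1.1.0+ makes these calls unnecessary because it locks internally.

        #include <pthread.h>
        #include <stdlib.h>
        #include <openssl/crypto.h>

        static pthread_mutex_t *ssl_locks;

        /* OpenSSL calls this around every internal critical section. */
        static void
        ssl_locking_cb (int mode, int n, const char *file, int line)
        {
                (void) file; (void) line;
                if (mode & CRYPTO_LOCK)
                        pthread_mutex_lock (&ssl_locks[n]);
                else
                        pthread_mutex_unlock (&ssl_locks[n]);
        }

        static unsigned long
        ssl_thread_id_cb (void)
        {
                return (unsigned long) pthread_self ();
        }

        void
        ssl_setup_locking (void)
        {
                int i, n = CRYPTO_num_locks ();

                ssl_locks = calloc (n, sizeof (*ssl_locks));  /* NULL check elided */
                for (i = 0; i < n; i++)
                        pthread_mutex_init (&ssl_locks[i], NULL);

                CRYPTO_set_id_callback (ssl_thread_id_cb);
                CRYPTO_set_locking_callback (ssl_locking_cb);
        }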
* socket: fix segfaults when TLS management connections fail (Jeff Darcy, 2015-04-21; 1 file, -11/+19)

    Backport of 0b9a6a6 from master.

    BUG: 1212684
    Change-Id: Iec5410b937af150621b757924836cec638b5eb55
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/10280
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: smoke failure on NetBSD if RDMA_OPTION_ID_REUSEADDR is not defined (Mohammed Rafi KC, 2015-04-16; 1 file, -0/+2)

    With the change http://review.gluster.org/#/c/10014/, we only set the reuseaddr option if the macro RDMA_OPTION_ID_REUSEADDR is defined in rdma_cma.h. The variable optval is only used in the section that is conditionally compiled on that macro, so if the macro is not defined, the compiler throws an unused-variable warning for optval.

    Change-Id: I162143c928e84b40c6fd6108d3aa5b045dd9de95
    BUG: 1201484
    Reviewed-on: http://review.gluster.org/10208
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: Raghavendra Bhat <raghavendra@redhat.com>
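    For illustration only (function name hypothetical): the conditional-compile pattern the message describes, keeping optval inside the guarded region so no unused-variable warning is emitted when the macro is absent.

        #include <rdma/rdma_cma.h>

        static int
        gf_rdma_set_reuseaddr (struct rdma_cm_id *cm_id)
        {
        #ifdef RDMA_OPTION_ID_REUSEADDR
                int optval = 1;  /* declared only inside the guarded region,
                                    so no warning when the macro is absent */
                return rdma_set_option (cm_id, RDMA_OPTION_ID,
                                        RDMA_OPTION_ID_REUSEADDR,
                                        &optval, sizeof (optval));
        #else
                return 0;        /* librdmacm too old: skip silently */
        #endif
        }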
* rdma: RDMA_OPTION_ID_REUSEADDR undeclared on Ubuntu Precise (LTS) (kkeithle, 2015-03-27; 1 file, -0/+3)

    Very old release of Ubuntu LTS.

    Change-Id: Ib6fb4493f1f34ba853bd74c8037da7663639f40e
    BUG: 1201484
    Signed-off-by: kkeithle <kkeithle@precise.kkeithle.usersys.redhat.com>
    Reviewed-on: http://review.gluster.org/10014
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: changing list iteration to safe mode (Mohammed Rafi KC, 2015-03-27; 1 file, -5/+10)

    Backport of: http://review.gluster.org/9872

    Change-Id: I2299378f02a5577a8bf2874664ba79e92c3811b5
    BUG: 1202212
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9872
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/9891
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: setting wrong remote memory (Mohammed Rafi KC, 2015-03-27; 1 file, -2/+2)

    Backport of: http://review.gluster.org/9794

    When we send more than one work request in a single call, the remote addr is always set to the first address of the vector.

    Change-Id: I55aea7bd6542abe22916719a139f7c8f73334d26
    BUG: 1202212
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9794
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9890
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: pre-register iobuf_pool with rdma devices (Mohammed Rafi KC, 2015-03-27; 2 files, -14/+196)

    Backport of: http://review.gluster.org/9506

    Registering buffers with the rdma device is a time-consuming operation, so performing registration in the I/O code path decreases performance. Using pre-registered memory gives better performance, i.e., register the iobuf_pool during rdma initialization. A dynamically created arena can be registered with all the devices.

    Change-Id: Ic79183e2efd014c43faf5911fdb6d5cfbcee64ca
    BUG: 1202212
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9506
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9889
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
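    For illustration only (a hedged sketch of the idea, not the patch's code): registering one large arena with a device's protection domain up front via ibv_reg_mr(), instead of registering per-I/O buffers in the hot path.

        #include <infiniband/verbs.h>

        /* Register a whole arena once at init time; later I/O on any
         * buffer inside it reuses the returned memory region. */
        static struct ibv_mr *
        register_arena (struct ibv_pd *pd, void *base, size_t len)
        {
                return ibv_reg_mr (pd, base, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
        }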
* rdma: read multiple wr from cq and ack them in one call (Mohammed Rafi KC, 2015-03-27; 1 file, -71/+97)

    Backport of: http://review.gluster.org/9329

    We were reading one work completion at a time from the cq, though we can read multiple work completions in a single call, and can also acknowledge them in one call. Both give better performance because fewer mutual-exclusion locks are taken.

    Change-Id: Ib5664cab25c87db7f575d482eee4dcd2b5005c04
    BUG: 1202212
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9329
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9888
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
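    For illustration only (a hedged standalone sketch, not the patch's code): draining several completions per ibv_poll_cq() call and acknowledging all received CQ events with a single ibv_ack_cq_events() call.

        #include <infiniband/verbs.h>

        static int
        drain_cq (struct ibv_cq *cq, unsigned int nevents)
        {
                struct ibv_wc wc[16];
                int           n, i;

                /* Up to 16 work completions per poll instead of one. */
                while ((n = ibv_poll_cq (cq, 16, wc)) > 0) {
                        for (i = 0; i < n; i++) {
                                /* handle wc[i]; check wc[i].status first */
                        }
                }

                ibv_ack_cq_events (cq, nevents);  /* one ack for many events */
                return n;  /* 0 when drained, negative on poll error */
        }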
* rdma: post multiple work requests in a single call (Mohammed Rafi KC, 2015-03-27; 1 file, -55/+65)

    Backport of: http://review.gluster.org/9327

    ibv_post_send allows sending multiple work requests in a single call by posting them as a linked list. So if the payload count > 1, we can perform the data operation in a single call to ibv_post_send.

    Change-Id: Ib2e485cbbe6887919109e73e17d4fab595d5e65e
    BUG: 1202212
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9327
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9887
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
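    For illustration only (a hedged standalone sketch): posting two work requests with one ibv_post_send() by chaining them through the ->next pointer.

        #include <infiniband/verbs.h>
        #include <string.h>

        static int
        post_chain (struct ibv_qp *qp, struct ibv_sge *sge0, struct ibv_sge *sge1)
        {
                struct ibv_send_wr wr[2], *bad = NULL;

                memset (wr, 0, sizeof (wr));

                wr[0].wr_id      = 0;
                wr[0].sg_list    = sge0;
                wr[0].num_sge    = 1;
                wr[0].opcode     = IBV_WR_SEND;
                wr[0].next       = &wr[1];        /* chain to the second WR */

                wr[1].wr_id      = 1;
                wr[1].sg_list    = sge1;
                wr[1].num_sge    = 1;
                wr[1].opcode     = IBV_WR_SEND;
                wr[1].send_flags = IBV_SEND_SIGNALED;
                wr[1].next       = NULL;          /* end of chain */

                return ibv_post_send (qp, wr, &bad);  /* one call, two WRs */
        }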
* rdma: aggregate a vectored read as one (Mohammed Rafi KC, 2015-03-27; 1 file, -0/+10)

    Backport of: http://review.gluster.org/9321

    A vectored read with payload count > 1 makes two read requests, and a single contiguous memory region is allocated to hold them. So after completing the read request, instead of sending them as a vector we aggregate all the reads into one.

    Change-Id: I15e7d7bddc1a62d5097a39392575f47cfff3d3a8
    BUG: 1202212
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9321
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9886
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: vectored write fails for rdma (Mohammed Rafi KC, 2015-01-06; 2 files, -3/+3)

    Backport of http://review.gluster.org/9247

    An rdma write with payload count greater than one fails due to insufficient memory to hold the buffers in the rpc transport layer. It expected only one vector in the payload, so it could only decode the first iovec from the payload, and the rest were discarded. Thanks to Raghavendra Gowdappa for fixing the same.

    BUG: 1166515
    Change-Id: I5e44d240366af78df35df326fe0660ef8497147b
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9247
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9392
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: client process will hang if the server starts sending requests before completing connection establishment (Mohammed Rafi KC, 2015-01-06; 1 file, -0/+21)

    Backport of http://review.gluster.org/9003

    In rdma, client and server exchange their available buffer counts during the handshake, to post incoming messages. Initially the available buffer count is set to one, for the first handshake message; when that first message is received, the buffer quota is set to its proper value. If the server starts sending messages before that message is received, the buffer reserved for the handshake is consumed and the handshake then fails for lack of buffers. So we should block the server from sending messages before connection establishment is complete.

    Change-Id: I1797ae8a25163864c338641d74cd4eef6fb34bb5
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9003
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9178
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: client connection establishment takes more time (Mohammed Rafi KC, 2015-01-06; 1 file, -1/+1)

    Backport of http://review.gluster.org/8934

    For an rdma-only volume, client connection establishment with the server takes more than three seconds. A tcp,rdma volume has two ports, one for tcp and one for rdma; during pmap_signin the tcp port is stored under the brick name and the rdma port is stored as "brickname.rdma". During the handshake, when trying to get the brick port for rdma clients, we are not aware of the server transport type, so we append '.rdma' to the brick name. For a tcp,rdma volume there is an entry with '.rdma', but for an rdma-only volume the lookup fails. We then try again, this time without appending '.rdma' (using a flag variable need_different_port), and it succeeds, but the reconnection happens only after 3 seconds.

    With this patch, for rdma-only volumes we append '.rdma' during pmap_signin, so during the handshake we get the correct port on the first try. Since we no longer need to retry, the need_different_port flag variable can be removed.

    Change-Id: I82a8a27f0e65a2e287f321e5e8292d86c6baf5b4
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/8934
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9177
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: rdma fuse mount hangs for tcp,rdma volumes if brick is down (Mohammed Rafi KC, 2015-01-06; 2 files, -5/+3)

    Backport of http://review.gluster.org/8762

    When we try to mount a tcp,rdma volume as rdma transport using the FUSE protocol, the mount will hang if the brick is down. When we kill a brick process, the signal is received in the glusterfsd process and it calls pmap_signout only for the port listening on tcp. For tcp,rdma there are two ports, and the port listening for rdma is never signed out. So the mount process tries to connect to a port which is not open and keeps trying to connect.

    This patch calls pmap_signout for the rdma port as well, so when the mount tries to get the brick port, it will fail.

    Change-Id: I73f90d7340afa3b0b1278924206f1488e4094a62
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/8762
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9176
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: setting the rdma REUSEADDR flag on the rdma id (Mohammed Rafi KC, 2015-01-06; 1 file, -0/+10)

    Backport of http://review.gluster.org/9005

    When we restart the process, the socket goes into TIME_WAIT state to make sure that all the data in the transport is successfully delivered. REUSEADDR allows the server to bind to an address which is in TIME_WAIT state.

    Change-Id: Ifb191967930d23bac31ceef1b1745d6b9fb0007b
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9005
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9175
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: glusterd crash if rdma_disconnect is called as soon as a connect request arrives (Mohammed Rafi KC, 2015-01-06; 1 file, -9/+17)

    Backport of http://review.gluster.org/8925

    We initialize the connection on the server side immediately after rdma_accept is called, but we delay adding the transport to the listener list until the RDMA_CM_EVENT_ESTABLISHED event is received. If a disconnect arrives before that event, glusterd tries to remove a transport from the list that was never added; if the list is empty, this causes a glusterd crash. With this patch we call the function that initializes the connection as soon as rdma_accept is called.

    Change-Id: Ie783fdbb4342459e5bc162bf8600bf47c1e2e909
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/8925
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9174
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: mount hangs for rdma transport type (Mohammed Rafi KC, 2015-01-06; 1 file, -46/+46)

    Backport of http://review.gluster.org/8850

    An rdma-transport mount will hang if there is a delay in the network. We set the transport as connected when we get an RDMA_CM_EVENT_ESTABLISHED event, but we cannot be sure whether the client or the server gets the event first; the only requirement is that the side which sends the first request must wait for the event. If the client gets the event first, it sends a DUMP request; the request reaches the server, but the server rejects the rpc request since it hasn't yet received RDMA_CM_EVENT_ESTABLISHED. So on the server we now set the connected flag as soon as rdma_accept is called.

    Change-Id: Ia15884ab28a0c51cccfef84258c3671a468054d9
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/8850
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9173
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* build: FreeBSD 11-Current causes libtool to fail with '-shared' (Harshavardhana, 2014-12-12; 1 file, -1/+1)

    Thanks to Markiyan Kushnir <markiyan.kushnir@gmail.com> for reporting this.

    Change-Id: Ia0272e51be4ddede1e6bc188dfb892979626a7cd
    BUG: 1171524
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/9252
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: Raghavendra Bhat <raghavendra@redhat.com>
* socket: disallow CBC cipher modes (Jeff Darcy, 2014-10-29; 1 file, -1/+67)

    This is related to CVE-2014-3566, a.k.a. POODLE.

        http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566

    POODLE is specific to CBC cipher modes in SSLv3. Because there is no way to prevent SSLv3 fallback on a system with an unpatched version of OpenSSL, users of such systems can only be protected by disallowing CBC modes. The default cipher-mode specification in our code has been changed accordingly. Users can still set their own cipher modes if they wish. To support them, the ssl-authz.t test script provides an example of how to combine the CBC exclusion with other criteria in a script.

    Change-Id: Ib1fa547082fbb7de9df94ffd182b1800d6e354e5
    BUG: 1157659
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/8962
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/8987
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Niels de Vos <ndevos@redhat.com>
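    For illustration only: the OpenSSL call through which such a cipher-mode specification takes effect. The CBC-excluding spec string itself is elided here (the patch's ssl-authz.t shows one way to derive it with the openssl ciphers command), and the function name is hypothetical.

        #include <openssl/ssl.h>

        /* Install a caller-supplied cipher specification; a spec that
         * filters out CBC-mode suites protects peers running unpatched
         * OpenSSL even if they fall back to SSLv3. */
        static int
        install_ciphers (SSL_CTX *ctx, const char *cipher_spec)
        {
                if (SSL_CTX_set_cipher_list (ctx, cipher_spec) != 1)
                        return -1;  /* spec matched no usable ciphers */
                return 0;
        }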
* Use sane OS-dependent defaults for SSL configuration (Emmanuel Dreyfus, 2014-09-26; 1 file, -3/+17)

    The current code assumes /etc/ssl exists, which may not be the case. Attempt to guess sane defaults for a few OSes.

    Backport of I0f3168f79b8f4275636581041740dfcaf25f3edd

    BUG: 1138897
    Change-Id: I972c26236cbf070f15c3846add059bd33d60216d
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8861
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* socket: Fixed parsing of RPC records containing multiple fragments (Gu Feng, 2014-09-25; 1 file, -3/+20)

    In __socket_proto_state_machine(), when parsing RPC records containing multiple fragments, only the state of the parsing process was changed; the memory was never processed to coalesce the fragments.

    Change-Id: I5583e578603bd7290814a5d26885b31759c73115
    BUG: 1146200
    Signed-off-by: Gu Feng <flygoast@126.com>
    Reviewed-on: http://review.gluster.org/8662
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/8838
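    For illustration only (a hedged standalone sketch, not the __socket_proto_state_machine() code): the RPC record-marking header that delimits the fragments in question (RFC 5531). A complete record is the concatenation of every fragment body up to and including the one with the last-fragment bit set.

        #include <stdint.h>

        #define RPC_LASTFRAG 0x80000000u

        /* The 4-byte big-endian header: top bit flags the final
         * fragment, the low 31 bits give the fragment length. */
        static void
        parse_fragment_header (const unsigned char h[4],
                               uint32_t *frag_len, int *is_last)
        {
                uint32_t raw = ((uint32_t) h[0] << 24) |
                               ((uint32_t) h[1] << 16) |
                               ((uint32_t) h[2] << 8)  |
                                (uint32_t) h[3];

                *is_last  = !!(raw & RPC_LASTFRAG);
                *frag_len = raw & ~RPC_LASTFRAG;
        }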
* feature/snapshot: Interface to delete all snapshots belonging to a system as well as to a particular volume (Sachin Pandit, 2014-09-23; 1 file, -0/+6)

    Problem: With the current design we can only delete a single snapshot, and deletion of a volume which contains snapshots is not allowed. Because of that, a user may be forced to delete all the snapshots manually before being allowed to delete a volume.

    Solution: The following interfaces let the user delete all the snapshots of a system, or all those belonging to a particular volume.

    Syntax: gluster snapshot delete all
        (deletes all the snapshots present in a system)

    Syntax: gluster snapshot delete volume <volname>
        (deletes all the snapshots present in the specified volume)

    Sample output:

    Case 1: Deleting a single snapshot.

        [root@snapshot-24 glusterfs]# gluster snapshot delete snap1
        Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
        snapshot delete: snap1: snap removed successfully

    Case 2: Deleting all the snapshots in a volume.

        [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
        Volume (vol1) contains 9 snapshot(s). Do you still want to continue and delete them? (y/n) y
        snapshot delete: snap2: snap removed successfully
        snapshot delete: snap3: snap removed successfully
        snapshot delete: snap4: snap removed successfully
        snapshot delete: snap5: snap removed successfully
        ...

    Case 3: Deleting all the snapshots in a system.

        [root@snapshot-24 glusterfs]# gluster snapshot delete all
        System contains 4 snapshot(s). Do you still want to continue and delete them? (y/n) y
        snapshot delete: snap7: snap removed successfully
        snapshot delete: snap8: snap removed successfully
        snapshot delete: snap9: snap removed successfully
        snapshot delete: snap10: snap removed successfully

    Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
    BUG: 1145083
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8162
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8798
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* snapview-server: register a callback with glusterd to get notifications (Raghavendra Bhat, 2014-09-18; 1 file, -0/+1)

    As of now snapview-server polls glusterd (sending rpc requests) at a fixed, non-configurable interval to get the latest list of snapshots. Instead, register a callback with glusterd so that glusterd sends notifications to snapd whenever a snapshot is created or deleted, and snapview-server can configure itself.

    Rebase of the patch http://review.gluster.org/#/c/8150/

    Change-Id: Iee2582b1a823d50c79233a41cf2106f458b40691
    BUG: 1143961
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/8767
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Do not call rpc_transport_unref() on NULL trans (Emmanuel Dreyfus, 2014-09-16; 1 file, -2/+4)

    rpc_clnt_disable() sets rpc->conn->trans to NULL, hence we should not call rpc_transport_unref() afterwards. I moved it before the rpc_clnt_disable() call, but I am not sure it should be called at all; perhaps it should just go away.

    This is a backport of I488d0207494e3a3fad52e64e67b2e740b236b864

    BUG: 1138897
    Change-Id: I001ab73b37f74ba98463bd9c32c01c670e9c7ad8
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8629
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Fix glustershd detection on volume restart (Emmanuel Dreyfus, 2014-09-07; 1 file, -0/+12)

    On NetBSD and FreeBSD, doing a 'gluster volume start $volume force' causes the NFS server, quotad, snapd and glustershd to go undetected by glusterd once the volume has restarted: 'gluster volume status' shows the processes as 'N' in the online column, even though they were launched successfully.

    This happens because glusterd attempts to connect to its child processes in the window between the child's unlink() of the socket in __socket_server_bind() and the time it calls bind() and listen(). A different scheduling policy may explain why the problem does not happen on Linux, but it may pop up some day since we have no scheduling guarantees here.

    This patchset works around the problem by introducing a boolean transport.socket.ignore-enoent option, set by nfs and glustershd, which prevents ENOENT from being fatal and lets glusterd retry and succeed later. The behavior of other clients is unaffected.

    This is a backport of Ifdc4d45b2513743ed42ee235a5c61a086321644c

    BUG: 1138897
    Change-Id: I04472f045249c99a9492218ceebfab847474db2d
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8630
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
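    For illustration only (a hedged standalone sketch; the ignore_enoent parameter mirrors the option's intent, all names hypothetical): treating ENOENT from connect() on a unix socket as transient rather than fatal, so the caller's reconnect timer retries.

        #include <errno.h>
        #include <sys/socket.h>
        #include <sys/un.h>

        static int
        unix_connect (int sock, struct sockaddr_un *addr, int ignore_enoent)
        {
                if (connect (sock, (struct sockaddr *) addr, sizeof (*addr)) == 0)
                        return 0;
                if (errno == ENOENT && ignore_enoent)
                        return 1;   /* server is between unlink() and bind();
                                       transient, retry later */
                return -1;          /* fatal */
        }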
* build: make GLUSTERD_WORKDIR rely on localstatedir (Harshavardhana, 2014-09-03; 3 files, -2/+16)

    Backport from master branch - http://review.gluster.org/#/c/8246/

    - Break away from the previously hard-coded '/var/lib/glusterd'; instead rely on the 'configure' value of 'localstatedir'
    - Provide 's/lib/db' as the default working directory for the gluster management daemon on BSD- and Darwin-based installations
    - loff_t is really off_t on Darwin
    - Fix off the warnings generated by clang on FreeBSD/Darwin
    - Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all platforms
    - Define a proper environment for running tests: set PATH and LD_LIBRARY_PATH so that the desired version of glusterfs is used, regardless of where it is installed (thanks to manu@netbsd.org for this additional work)

    Change-Id: I06e684ac4c26d1e74c9daf76753403ad15f79276
    BUG: 1130308
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/8486
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Xml output for geo-replication status command (ndarshan, 2014-08-28; 1 file, -0/+3)

    This patch adds xml output for the geo-replication status and status detail commands. Sample:

        <geoRep>
          <volume>
            <name>master</name>
            <sessions>
              <session>
                <session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave>
                <pair>
                  <master_node>localhost.localdomain</master_node>
                  <master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid>
                  <master_brick>/root/master_b1</master_brick>
                  <slave>ssh://XXXXXXXXXXX::slave</slave>
                  <status>faulty</status>
                  <checkpoint_status>N/A</checkpoint_status>
                  <crawl_status>N/A</crawl_status>
                </pair>
              </session>
            </sessions>
          </volume>
        </geoRep>

    Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f
    BUG: 1133464
    Signed-off-by: ndarshan <dnarayan@redhat.com>
    Reviewed-on: http://review.gluster.org/8089
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: ndarshan <dnarayan@redhat.com>
    Reviewed-on: http://review.gluster.org/8532
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* porting: OSX/Darwin 10.9 porting issues (Harshavardhana, 2014-08-15; 2 files, -4/+32)

    xdrproc_t() arguments are variadic on some platforms and non-variadic on others:

    On OSX > 10.9:
        typedef bool_t (*xdrproc_t)(XDR *, void *, unsigned int);
    On OSX < 10.9:
        typedef bool_t (*xdrproc_t)(XDR *, ...);
    FreeBSD (all versions):
        typedef bool_t (*xdrproc_t)(XDR *, ...);
    NetBSD 6.1.4:
        typedef bool_t (*xdrproc_t)(XDR *, const void *);
    Linux (all versions):
        typedef bool_t (*xdrproc_t)(XDR *, void *, ...);

    These weird and inconsistent declarations across platforms should be handled properly.

    Change-Id: I49fab73cba0d965c78c71da1beba1ffb2d58b8f8
    BUG: 1130307
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/8488
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* rdma: glusterfsd SEGV at volume start (Kaleb S. KEITHLEY, 2014-08-13; 1 file, -0/+1)

    glusterfsd NULL-pointer dereference in proto/server, get_frame_from_request(), with a 'transport rdma' volume.

    No test case: our regression test framework doesn't have Infiniband. If it did, the test case would be to create a 'transport rdma' volume, start it, and create/write/read/delete files on the volume.

    Change-Id: I8dd4bea08bdecbbdf0115d3badccb1594fa69a27
    BUG: 1129710
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/8480
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: Added support for dispersed volumes (Xavier Hernandez, 2014-07-11; 1 file, -1/+2)

    Two new options have been added to the 'create' command of the cli interface:

        disperse [<count>]
        redundancy <count>

    Both are optional. A dispersed volume is created by specifying at least one of them. If 'disperse' is missing, or it's present but '<count>' is not, the number of bricks enumerated on the command line is taken as the disperse count.

    If 'redundancy' is missing, the lowest optimal value is assumed. A configuration is considered optimal (for most workloads) when the disperse count minus the redundancy count is a power of 2 (see the sketch after this entry). If the resulting redundancy is 1, the volume is created normally, but if it's greater than 1, a warning is shown and the user must answer yes/no to continue volume creation. If there isn't any optimal value for the given number of bricks, a warning is also shown and, if the user accepts, a redundancy of 1 is used. If 'redundancy' is specified and the resulting volume is not optimal, another warning is shown to the user.

    A distributed-disperse volume can be created using a number of bricks that is a multiple of the disperse count.

    Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4
    BUG: 1118629
    Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-on: http://review.gluster.org/7782
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
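    For illustration only (a hedged sketch of the criterion stated above, not the cli code): the optimality test reduces to a power-of-two check on the data-brick count.

        /* Optimal (for most workloads) when the number of data bricks,
         * disperse count minus redundancy count, is a power of 2. */
        static int
        is_optimal (int disperse, int redundancy)
        {
                int data = disperse - redundancy;

                return data > 0 && (data & (data - 1)) == 0;
        }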
* socket/glusterd/client: enable SSL for management (Jeff Darcy, 2014-07-10; 2 files, -20/+56)

    The feature is controlled by presence of the following file:

        /var/lib/glusterd/secure-access

    See the comment near the definition of SECURE_ACCESS_FILE in glusterfs.h for the rationale. With this enabled, the following rules apply to connections:

        UNIX-domain sockets never have SSL.
        Management-port sockets (both connecting and accepting, in daemons and CLI) have SSL based on presence of the file.
        Other IP sockets have SSL based on the existing client.ssl and server.ssl volume options.

    Transport multi-threading is explicitly turned off in glusterd (it would otherwise be turned on when SSL is) due to multi-threading issues. Tests have been elided to avoid the risk of leaving a file which would cause all subsequent tests to run with management SSL still enabled.

    IMPLEMENTATION NOTE
    The implementation is a bit messy, and consists of two stages. First we decide whether to set the relevant fields in our context structure, based on presence of the sentinel file OR a command-line override. Later we decide whether a particular connection should actually use SSL, based on the context flags plus what kind of connection we're making [1] and what kind of daemon we're in [2].

    [1] inbound, outbound to glusterd port, other outbound
    [2] glusterd, glusterfsd, other

    TESTING NOTE
    Instead of just running one special test for this feature, the ideal would be to run all tests with management SSL enabled. However, it would be inappropriate or premature to set up an optional feature in the patch itself. Therefore, the method of choice is to submit a separate patch on top, which modifies "cleanup" in include.rc to recreate the secure-access file and associated SSL certificate/key files before each test.

    Change-Id: I0e04d6d08163893e24ec8c031748c5c447d7f780
    BUG: 1114604
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/8094
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
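    For illustration only (function name hypothetical): since the feature is keyed purely off the presence of the sentinel file named above, the check amounts to a stat() on that path.

        #include <sys/stat.h>

        /* Enable management SSL iff the sentinel file exists;
         * no option parsing is involved. */
        static int
        mgmt_ssl_enabled (void)
        {
                struct stat st;

                return stat ("/var/lib/glusterd/secure-access", &st) == 0;
        }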
* rpc: Do not reset @ping_started to 0 in ping callback (Krutika Dhananjay, 2014-07-09; 3 files, -12/+27)

    This is to avoid indefinite recursion of the following kind, which could lead to a stack overflow:

        rpc_clnt_start_ping() -> rpc_clnt_ping() -> rpc_clnt_submit()
          -> rpc_clnt_start_ping() -> rpc_clnt_ping() -> rpc_clnt_submit()
          -> ... and so on,

    since it is possible that, before rpc_clnt_start_ping() is called a second time by the thread executing this codepath, the response to the previous ping request could ALWAYS come by and cause the epoll thread to reset conn->ping_started to 0.

    This patch also fixes the issue of excessive ping traffic, which was due to the client sending one ping rpc for every fop in the worst case. Also removed dead code in glusterd.

    Change-Id: I7c5e6ae3b1c9d23407c0a12a319bdcb43ba7a359
    BUG: 1116243
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/8257
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
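    For illustration only (a plain-pthreads sketch of the invariant the patch enforces; all names are hypothetical, not the rpc-clnt code): ping_started is tested-and-set before submitting and is no longer reset by the reply callback, so the start/submit path cannot recurse into itself.

        #include <pthread.h>

        struct conn {
                pthread_mutex_t lock;
                int             ping_started;  /* stays set while a ping is
                                                  outstanding; NOT cleared
                                                  in the reply callback */
        };

        static void
        start_ping (struct conn *c, void (*submit_ping) (struct conn *))
        {
                int start = 0;

                pthread_mutex_lock (&c->lock);
                if (!c->ping_started) {
                        c->ping_started = 1;
                        start = 1;
                }
                pthread_mutex_unlock (&c->lock);

                if (start)
                        submit_ping (c);  /* cannot re-enter: the flag is
                                             already set at this point */
        }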
* socket: add certificate-depth and cipher-list options for SSL (Jeff Darcy, 2014-07-04; 1 file, -3/+26)

    Change-Id: I82757f8461807301a4a4f28c4f5bf7f0ee315113
    BUG: 1114604
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/8040
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* porting: Port for FreeBSD rebased from Mike Ma's efforts (Harshavardhana, 2014-07-02; 2 files, -2/+2)

    - Provides a working Gluster Management Daemon and CLI
    - Provides a working GlusterFS server and GlusterNFS server
    - Provides a working GlusterFS client
    - execinfo port from FreeBSD is moved into ./contrib/libexecinfo for ease of portability on NetBSD (FreeBSD 10 and OSX provide execinfo natively)
    - More portability cleanups for Darwin, FreeBSD and NetBSD
    - Provides a new rc script for FreeBSD

    Change-Id: I8dff336f97479ca5a7f9b8c6b730051c0f8ac46f
    BUG: 1111774
    Original-Author: Mike Ma <mikemandarine@gmail.com>
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/8141
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* rpc/auth: allow SSL identity to be used for authorization (Jeff Darcy, 2014-07-02; 3 files, -7/+35)

    Access to a volume is now controlled by the following options, based on whether SSL is enabled or not:

        server.ssl-allow: get identity from certificate, no password needed
        auth.allow: get identity and matching password from command line

    It is not possible to allow both simultaneously, since the connection itself is either using SSL or it isn't.

    Change-Id: I5a5be66520f56778563d62f4b3ab35c66cc41ac0
    BUG: 1114604
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/3695
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* mgmt/glusterd: display snapd status as part of volume status (Raghavendra Bhat, 2014-06-30; 1 file, -1/+2)

    Made changes to save the port used by snapd in the info file for the volume, i.e. <glusterd-working-directory>/vols/<volname>/info.

    This is how 'gluster volume status' looks for a volume with the uss feature enabled:

        [root@tatooine ~]# gluster volume status vol
        Status of volume: vol
        Gluster process                 Port    Online  Pid
        ----------------------------------------------------------
        Brick tatooine:/export1/vol     49155   Y       5041
        Snapshot Daemon on localhost    49156   Y       5080
        NFS Server on localhost         2049    Y       5087

        Task Status of Volume vol
        ----------------------------------------------------------
        There are no active volume tasks

    Change-Id: I8f3e5d7d764a728497c2a5279a07486317bd7c6d
    BUG: 1111041
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/8114
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* gNFS: Fix multi-homed m/c issue in NFS subdir auth (Santosh Kumar Pradhan, 2014-06-25; 1 file, -27/+18)

    NFS subdir authentication doesn't correctly handle multi-homed (a host with multiple NICs having multiple IP addresses) or multi-protocol (IPv4 and IPv6) network addresses. When a user/admin sets a HOSTNAME in the gluster CLI for NFS subdir auth, the mnt3_verify_auth() routine does not iterate over all the resolved network addresses returned by the getaddrinfo() API; instead, it just tests the first one returned.

    1. Iterate over all the network addresses (linked list) returned by getaddrinfo(); see the sketch after this entry.
    2. Move the network-mask calculation to mnt3_export_fill_hostspec() instead of doing it in mnt3_verify_auth(), i.e. calculating it for each MOUNT request; it does not change per request.
    3. Integrate the "subnet support code rpc-auth.addr.<volname>.allow" and the "NFS subdir auth code" to remove code duplication.

    Change-Id: I26b0def52c22cda35ca11766afca3df5fd4360bf
    BUG: 1102293
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/8048
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
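    For illustration only (a hedged standalone sketch, not the mnt3_verify_auth() code): comparing the client address against every address the hostname resolves to, not just the first getaddrinfo() result, for both IPv4 and IPv6.

        #include <netdb.h>
        #include <netinet/in.h>
        #include <string.h>

        static int
        host_matches (const char *host, const struct sockaddr *client)
        {
                struct addrinfo *res = NULL, *ai = NULL;
                int              match = 0;

                if (getaddrinfo (host, NULL, NULL, &res) != 0)
                        return 0;

                /* Walk the whole linked list of resolved addresses. */
                for (ai = res; ai != NULL && !match; ai = ai->ai_next) {
                        if (ai->ai_family != client->sa_family)
                                continue;
                        if (ai->ai_family == AF_INET) {
                                const struct sockaddr_in *a =
                                        (const struct sockaddr_in *) ai->ai_addr;
                                const struct sockaddr_in *b =
                                        (const struct sockaddr_in *) client;
                                match = (a->sin_addr.s_addr == b->sin_addr.s_addr);
                        } else if (ai->ai_family == AF_INET6) {
                                const struct sockaddr_in6 *a =
                                        (const struct sockaddr_in6 *) ai->ai_addr;
                                const struct sockaddr_in6 *b =
                                        (const struct sockaddr_in6 *) client;
                                match = (memcmp (&a->sin6_addr, &b->sin6_addr,
                                                 sizeof (a->sin6_addr)) == 0);
                        }
                }

                freeaddrinfo (res);
                return match;
        }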
* glusterd: Fail peer probe/detach commands when peer detach is ongoing (Krishnan Parthasarathi, 2014-06-16; 1 file, -0/+2)

    Change-Id: Ifd8099bc235eb395e8fd9ead3197bef71c78042b
    BUG: 1109812
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/8079
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Get snapshot info dynamically via new rpc, and infra for snapview-server to refresh the snaplist (Anand Subramanian, 2014-06-15; 2 files, -0/+19)

    BUG: 1105439
    Change-Id: I4bb312a53d88f6f4955e69a3ef2b4955ec17f26d
    Signed-off-by: Anand Subramanian <anands@redhat.com>
    Reviewed-on: http://review.gluster.org/8001
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cleanup: Fix order of arguments passed in log message (Krutika Dhananjay, 2014-06-13; 1 file, -1/+1)

    Change-Id: Iae85cdfc223875688ea17155fffcf2a3a435d245
    BUG: 764890
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/8044
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cleanup: Fix domain in log message (Krutika Dhananjay, 2014-06-12; 1 file, -1/+1)

    Change-Id: I554b9bcacf6c8acd6dffea0a485fc50e82c3dc04
    BUG: 764890
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/8043
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* rpc: Reconfigure() does not work for auth-reject (Santosh Kumar Pradhan, 2014-06-09; 1 file, -6/+17)

    Problem: If a volume is set with rpc-auth.addr.<volname>.reject = "host1", an NFS mount from "host1" should FAIL, and it works as expected. But when the volume is RESET, the value previously set for auth-reject should be cleared, and a further NFS mount from "host1" should PASS. It FAILs instead, because of a stale value in the dict for the key "rpc-auth.addr.<volname>.reject". This does not impact the rpc-auth.addr.<volname>.allow key because each time the NFS volfile gets generated, the allow key gets "*" as its default value; the reject key has no default.

    FIX: Delete the OLD value for the key unconditionally. Add the NEW value for the key if and only if it is SET in the reconfigured new volfile.

    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Change-Id: Ie80bd16cd1f9e32c51f324f2236122f6d118d860
    BUG: 1103050
    Reviewed-on: http://review.gluster.org/7931
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
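    For illustration only (a hedged fragment against the libglusterfs dict API, dict_del() and dict_set_str(); the key is shown literally but is built from the volume name in real code, and new_reject is a hypothetical variable): delete unconditionally, then set only if the reconfigured volfile carries a value, so a RESET cannot leave a stale entry.

        /* 1. Unconditionally drop the old value. */
        dict_del (options, "rpc-auth.addr.<volname>.reject");

        /* 2. Re-add only when the new volfile actually sets it. */
        if (new_reject != NULL)
                ret = dict_set_str (options,
                                    "rpc-auth.addr.<volname>.reject",
                                    new_reject);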
* gNFS: Make NFS DRC off by default (Niels de Vos, 2014-06-09; 1 file, -10/+9)

    DRC in NFS causes memory bloat and there are known memory corruptions. It would be good to disable DRC by default until the feature is stable.

    Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
    BUG: 1105524
    Original-patch-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/8004
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Handle rpc_connect failure in the event handler (Vijaikumar M, 2014-06-05; 4 files, -94/+169)

    Currently rpc_connect calls the notification function on failure in the same thread; the glusterd notification holds the big_lock, and hence the big_lock is released before rpc_connect. In snapshot creation, releasing the big-lock before completing the operation can cause problems like deadlock or memory corruption. Bricks are started as part of the snapshot-create operation; brick_start releases the big_lock when doing brick_connect, and this might cause a glusterd crash. There is a similar issue in bug# 1088355.

    The solution is to let the event handler handle the failure rather than doing it in rpc_connect.

    Change-Id: I088d44092ce845a07516c1d67abd02b220e08b38
    BUG: 1101507
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/7843
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rpc: make sure we use relative path (Harshavardhana, 2014-05-19; 1 file, -36/+36)

    In commit "618d465295df02ae6d53be1327947a210bb8b47d" we made a change for NetBSD make by replacing `$<`; fix it accordingly.

    Change-Id: Ief82887253ede8216efd0ae7d5f73329f1492846
    BUG: 1089172
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/7787
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Justin Clift <justin@gluster.org>
    Tested-by: Justin Clift <justin@gluster.org>
* glusterd: Disable ping-timer between glusterd and brick process (Vijaikumar M, 2014-05-19; 3 files, -5/+9)

    When there is too much I/O happening, the brick process's epoll thread gets busy and fails to respond to the glusterd ping packet within 30 seconds. The epoll thread can also be blocked by the big-lock.

    The solution is to disable the ping-timer by default and enable it only wherever required. Later, when the epoll thread model is changed and made lighter, we need to revert this change; http://review.gluster.com/3842 is one such approach.

    Change-Id: I7f80ad3eb00f7d9c4d4527305932f7cf4920e73f
    BUG: 1097224
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/7753
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rpcsvc: Validate RPC procedure number before fetch (Santosh Kumar Pradhan, 2014-05-17; 1 file, -6/+16)

    While accessing the procedures of a given RPC program in rpcsvc_get_program_vector_sizer(), boundary conditions were not checked, which could cause a buffer overflow and subsequently a SEGV. Make sure rpcsvc_actor_t arrays hold numactors number of actors.

    FIX: Validate the RPC procedure number before fetching the actor.

    Special thanks to: Murray Ketchion, Grant Byers

    Change-Id: I8b5abd406d47fab8fca65b3beb73cdfe8cd85b72
    BUG: 1096020
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/7726
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
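    For illustration only (a hedged standalone sketch with stand-in structs, not the rpcsvc code): validate the wire-supplied procedure number against the actor count before indexing the actor table.

        #include <stdint.h>
        #include <stddef.h>

        struct actor   { int (*fn) (void *req); };  /* stand-in type */
        struct program {
                struct actor *actors;
                uint32_t      numactors;
        };

        static struct actor *
        get_actor (struct program *prog, uint32_t procnum)
        {
                if (procnum >= prog->numactors)
                        return NULL;  /* reject out-of-range procedure
                                         numbers instead of indexing
                                         past the end of the array */
                return &prog->actors[procnum];
        }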
* NetBSD build fixes (Emmanuel Dreyfus, 2014-05-17; 2 files, -39/+39)

    - Shell scripts: '==' is specific to bash and ksh; use '=' instead.
    - Shell scripts: use sh instead of bash if bash functionality is not used.
    - Shell scripts: ${var/search/replace} is specific to bash.
    - sed: the -i option is specific to GNU sed.
    - Makefiles: $< outside of generic rules only works in GNU make.
    - xdrproc_t() is not universally declared as variadic; do not specify a third argument if it is not used.
    - NetBSD FUSE specific: only include <perfuse.h> in FUSE client code; it harms in other locations.
    - configure: search for gettext() in libintl, as NetBSD stores it there.
    - Like MacOS X, NetBSD has unmount(2), not umount(2) ("un" vs "u").

    Some other build issues previously included in this change were removed:

    - __THROW macro, addressed in http://review.gluster.com/#/c/7757/
    - getmntent() compat shared with MacOS X, in http://review.gluster.com/#/c/7722/

    This patchset also adds warning fixes for mount_glusterfs.

    BUG: 764655
    Change-Id: I2f1faf8ff96362d3e2baf237b943df619011f1f4
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/7783
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>