path: root/rpc
Commit message | Author | Age | Files | Lines
* rdma: changing list iteration to safe mode | Mohammed Rafi KC | 2015-03-15 | 1 | -5/+10
Change-Id: I2299378f02a5577a8bf2874664ba79e92c3811b5 BUG: 1201621 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9872 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
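The entry above carries no description, so purely as orientation: a minimal sketch of the "safe" list-iteration pattern the subject refers to, assuming the kernel-style list macros shipped in libglusterfs; the peer structure and its 'dead' flag are hypothetical.

    #include "list.h"              /* libglusterfs list macros (assumed path) */

    struct peer {
        struct list_head list;
        int              dead;
    };

    /* Safe mode: the iterator caches the next node before the body runs, so
     * the current node can be unlinked and freed without breaking the walk. */
    static void
    purge_dead_peers (struct list_head *peers)
    {
        struct peer *trav = NULL;
        struct peer *next = NULL;

        list_for_each_entry_safe (trav, next, peers, list) {
            if (trav->dead) {
                list_del_init (&trav->list);
                /* free trav here */
            }
        }
    }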
* rdma: Free resources related to iobuf in fini | Mohammed Rafi KC | 2015-03-10 | 2 | -0/+19
If the rdma transport is destroyed for any reason, rdma.so is unloaded, but the iobuf registration function was not being set to NULL. If an iobuf request comes in after that, we end up calling a function that is no longer loaded. Change-Id: I3293f9974e16d8e865131785ee697ea02be8cdfc BUG: 1187456 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9697 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
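A rough sketch of the idea described above, with hypothetical field names (the real iobuf_pool layout differs): fini() must clear any registration hook the pool holds into the module before rdma.so is unloaded.

    /* Hypothetical pool-side hook; illustrative only. */
    struct arena_hooks {
        void (*register_arena) (void *arena, void *device);
        void  *device;
    };

    static void
    rdma_fini_hooks (struct arena_hooks *hooks)
    {
        /* Clear the hook first so no new arena registration can call into
         * code that dlclose() is about to unmap. */
        hooks->register_arena = NULL;
        hooks->device         = NULL;
        /* ... then release protection domains, CQs, memory regions ... */
    }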
* rdma: enhance logging when a connection error occurs | Mohammed Rafi KC | 2015-03-10 | 1 | -1/+3
Change-Id: I6146307949a3d852d3af5f8b273004ad6b27451b BUG: 1196584 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9756 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com> Reviewed-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rdma: return proper data type | Humble Devassy Chirammal | 2015-03-09 | 1 | -1/+1
Change-Id: I9bb0898af96cfcfaba0f0c976a7808bc6ea08e6a Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com> Reviewed-on: http://review.gluster.org/9838 Reviewed-by: mohammed rafi kc <rkavunga@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/ec: Add self-heal-daemon command handlers | Pranith Kumar K | 2015-03-09 | 1 | -14/+14
This patch introduces the changes required in the ec xlator to handle index/full heal. Index healer threads: the ec xlator starts an index healer thread per local brick. This thread wakes up every minute to check whether there are any files to be healed, based on the indices kept in the index directory. Whenever a child_up event comes in, the index healer thread also wakes up, crawls the indices and triggers heal. When self-heal-daemon is disabled on this particular volume, the healer thread waits until it is enabled again to perform heals. Full healer threads: the ec xlator starts a full healer thread for the local subvolume provided by glusterd, to perform a full crawl of the directory hierarchy and heal what it finds. Once the crawl completes, the thread exits if no more full heals are issued. Changed the xl-op prefix GF_AFR_OP to GF_SHD_OP to make it more generic. Change-Id: Idf9b2735d779a6253717be064173dfde6f8f824b BUG: 1177601 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/9787 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Ravishankar N <ravishankar@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rdma: 'list', 'wr' and 'new' memory has to be verified | Humble Devassy Chirammal | 2015-03-09 | 1 | -1/+22
Change-Id: I29a8825107b8f4cefe4f4c59296e98fe675ee943 BUG: 1199053 Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com> Reviewed-on: http://review.gluster.org/9811 Reviewed-by: mohammed rafi kc <rkavunga@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: setting wrong remote memory | Mohammed Rafi KC | 2015-03-04 | 1 | -2/+2
When we send more than one work request in a single call, the remote address is always set to the first address of the vector. Change-Id: I55aea7bd6542abe22916719a139f7c8f73334d26 BUG: 1197548 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9794 Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
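A hedged sketch of the shape of the fix: when a chain of RDMA-write work requests is built from an iovec, each request must carry the remote address of its own slice rather than the base of the vector. The libibverbs fields are real; the helper is a simplified stand-in for the transport code and assumes the wrs/sges arrays are zero-initialized by the caller.

    #include <infiniband/verbs.h>
    #include <sys/uio.h>
    #include <stdint.h>

    static void
    build_rdma_writes (struct ibv_send_wr *wrs, struct ibv_sge *sges,
                       const struct iovec *iov, int count,
                       uint64_t remote_base, uint32_t rkey, uint32_t lkey)
    {
        uint64_t remote = remote_base;

        for (int i = 0; i < count; i++) {
            sges[i].addr   = (uintptr_t) iov[i].iov_base;
            sges[i].length = iov[i].iov_len;
            sges[i].lkey   = lkey;

            wrs[i].sg_list      = &sges[i];
            wrs[i].num_sge      = 1;
            wrs[i].opcode       = IBV_WR_RDMA_WRITE;
            wrs[i].wr.rdma.rkey = rkey;
            /* The bug: every WR pointed at remote_base.
             * The fix: advance the remote address per slice. */
            wrs[i].wr.rdma.remote_addr = remote;
            remote += iov[i].iov_len;

            wrs[i].next = (i + 1 < count) ? &wrs[i + 1] : NULL;
        }
    }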
* rdma: segfault trying to call ibv_dealloc_pd on a null pointer if ibv_alloc_pd failed | Mark Lipscombe | 2015-03-03 | 1 | -1/+3
If creating an IB protection domain fails, a segfault occurs during the cleanup because trav->pd is NULL. Bug: 1197260 Change-Id: I21b867c204c4049496b1bf11ec47e4139610266a Signed-off-by: Mark Lipscombe <mlipscombe@gmail.com> Reviewed-on: http://review.gluster.org/9774 Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com>
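A minimal illustration of the guard this patch adds (the real cleanup path tears down more state than shown):

    #include <infiniband/verbs.h>

    /* 'pd' may be NULL if ibv_alloc_pd() failed earlier in setup. */
    static void
    device_cleanup (struct ibv_pd *pd)
    {
        if (pd != NULL)
            (void) ibv_dealloc_pd (pd);   /* calling this with NULL segfaults */
    }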
* epoll: Fix broken RPC throttling due to MT epoll | Shyam | 2015-03-01 | 1 | -11/+3
The RPC throttle, which kicks in by setting the poll-in event on a socket to false, is broken with the MT epoll commit. This is because the poll-in event handler attempts to read as much out of the socket as it can until it receives an EAGAIN, which may never happen, and hence we would be processing far more RPCs than we want to. This is fixed by changing the epoll mode from ET to LT and reading request by request, so that we honor the throttle. The downside is that we do not drain the socket but go back to epoll_wait before reading the next request; however, when the throttle kicks in we need to do that anyway, so a busy connection would degrade to LT behavior anyway to maintain the throttle. As a result this change should not cause much deviation in performance for busy connections. Change-Id: I522d284d2d0f40e1812ab4c1a453c8aec666464c BUG: 1192114 Signed-off-by: Shyam <srangana@redhat.com> Reviewed-on: http://review.gluster.org/9726 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
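A simplified, generic sketch of the behavioural difference, outside the GlusterFS event layer; process_request() is a hypothetical handler.

    #include <unistd.h>
    #include <sys/types.h>

    extern void process_request (char *buf, ssize_t len);   /* hypothetical */

    /* Edge-triggered: must drain until EAGAIN, so a throttle flag checked
     * once per wakeup cannot bound how many requests get processed. */
    static void
    handler_et (int sock)
    {
        char    buf[4096];
        ssize_t n;

        while ((n = read (sock, buf, sizeof (buf))) > 0)
            process_request (buf, n);
    }

    /* Level-triggered: service one request and return; if data remains,
     * epoll_wait() reports the fd again, and a throttled socket can simply
     * be skipped until the throttle is lifted. */
    static void
    handler_lt (int sock)
    {
        char    buf[4096];
        ssize_t n = read (sock, buf, sizeof (buf));

        if (n > 0)
            process_request (buf, n);
    }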
* rpcsvc: New rpc routines defined to send callback requests | Soumya Koduri | 2015-03-01 | 2 | -0/+47
Change-Id: I7f95682faada16308314bfbf84298b02d1198efa BUG: 1188184 Signed-off-by: Poornima G <pgurusid@redhat.com> Signed-off-by: Soumya Koduri <skoduri@redhat.com> Reviewed-on: http://review.gluster.org/9534 Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* socket: allow only one epoll thread to read msg fragments | Krishnan Parthasarathi | 2015-02-27 | 2 | -0/+13
The __socket_read_reply function briefly releases the socket's priv->lock while notifying higher layers of the message's xid. This could allow other epoll threads that are processing events on this socket to read further fragments of the same message, which may lead to incorrect fragment processing and result in a crash. Change-Id: I915665b2e54ca16f2ad65970e51bf76c65d954a4 BUG: 1197118 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Signed-off-by: Shyam <srangana@redhat.com> Reviewed-on: http://review.gluster.org/9742 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: Fix failure to call rdma_bind_addr if unable to bind privileged port | Mark Lipscombe | 2015-02-26 | 1 | -1/+8
When unable to bind a privileged port, rdma_bind_addr is not called. This patch fixes that. Change-Id: I175884a5d6a08b93dc62653ee0a6622bfc06e618 Bug: 1195907 Signed-off-by: Mark Lipscombe <mlipscombe@gmail.com> Reviewed-on: http://review.gluster.org/9737 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: mohammed rafi kc <rkavunga@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
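A rough sketch of the intended fallback using the librdmacm call named in the subject; the port-walking loop is illustrative, not the actual transport code.

    #include <rdma/rdma_cma.h>
    #include <netinet/in.h>

    /* Try privileged ports first; if none can be bound, still call
     * rdma_bind_addr() with an ephemeral port instead of skipping it. */
    static int
    bind_rdma_id (struct rdma_cm_id *id, struct sockaddr_in *addr)
    {
        for (int port = 1023; port >= 512; port--) {
            addr->sin_port = htons (port);
            if (rdma_bind_addr (id, (struct sockaddr *) addr) == 0)
                return 0;
        }
        addr->sin_port = 0;   /* let the kernel pick an unprivileged port */
        return rdma_bind_addr (id, (struct sockaddr *) addr);
    }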
* glusterd: nfs,shd,quotad,snapd daemons refactoring | Atin Mukherjee | 2015-02-20 | 2 | -0/+44
This patch ports nfs, shd, quotad & snapd with the approach suggested in http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2 BUG: 1191486 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/9428 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
* rdma: free rdma priv data if init fails | Atin Mukherjee | 2015-02-19 | 1 | -0/+2
Change-Id: I57b38c8783666e806836dacf3f74cf9f6876070a BUG: 1164079 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/9687 Reviewed-by: mohammed rafi kc <rkavunga@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: pre-register iobuf_pool with rdma devices | Mohammed Rafi KC | 2015-02-17 | 2 | -14/+196
Registering buffers with an rdma device is a time-consuming operation, so performing the registration in the I/O code path degrades performance. Using pre-registered memory gives better performance, i.e. register the iobuf_pool during rdma initialization. Dynamically created arenas can be registered with all the devices. Change-Id: Ic79183e2efd014c43faf5911fdb6d5cfbcee64ca BUG: 1187456 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9506 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
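A hedged sketch of what pre-registration with libibverbs looks like; the arena arrays stand in for the real iobuf_pool structures.

    #include <infiniband/verbs.h>
    #include <stddef.h>

    /* Register every buffer arena with the device's protection domain once,
     * at transport init, instead of per I/O request. */
    static int
    preregister_arenas (struct ibv_pd *pd, void **arenas, size_t *sizes,
                        struct ibv_mr **mrs, int count)
    {
        for (int i = 0; i < count; i++) {
            mrs[i] = ibv_reg_mr (pd, arenas[i], sizes[i],
                                 IBV_ACCESS_LOCAL_WRITE |
                                 IBV_ACCESS_REMOTE_READ |
                                 IBV_ACCESS_REMOTE_WRITE);
            if (!mrs[i])
                return -1;   /* caller deregisters whatever succeeded */
        }
        return 0;
    }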
* rdma: reduce log level from E to W | Mohammed Rafi KC | 2015-02-17 | 3 | -2/+29
The glusterd process, when trying to initialize the default volfile, always throws an error if there is no rdma device. Change the log levels and log messages to something more appropriate. Change-Id: I75b919581c6738446dd2d5bddb7b7658a91efcf4 BUG: 1188232 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9559 Reviewed-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: read multiple wr from cq and ack them in one call | Mohammed Rafi KC | 2015-02-17 | 1 | -71/+97
We were reading one work completion at a time from the CQ, even though multiple work completions can be read from the CQ in one call, and they can also be acknowledged in a single call. Both give better performance because fewer mutual-exclusion locks are taken. Change-Id: Ib5664cab25c87db7f575d482eee4dcd2b5005c04 BUG: 1164079 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9329 Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
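A sketch of the batched pattern with the two libibverbs calls involved; completion handling is elided.

    #include <infiniband/verbs.h>

    #define WC_BATCH 16

    /* Drain up to WC_BATCH completions per poll and acknowledge the CQ
     * events once, instead of one poll/ack pair per work completion. */
    static int
    drain_cq (struct ibv_cq *cq, unsigned int pending_events)
    {
        struct ibv_wc wc[WC_BATCH];
        int           n;

        while ((n = ibv_poll_cq (cq, WC_BATCH, wc)) > 0) {
            for (int i = 0; i < n; i++) {
                if (wc[i].status != IBV_WC_SUCCESS)
                    return -1;
                /* handle wc[i] ... */
            }
        }

        ibv_ack_cq_events (cq, pending_events);   /* one ack for all events */
        return n;   /* 0 when drained, negative if the poll itself failed */
    }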
* rdma: post multiple work requests in a single call | Mohammed Rafi KC | 2015-02-12 | 1 | -55/+65
ibv_post_send allows sending multiple work requests in a single call by posting them as a linked list. So if the payload count is greater than 1, we can perform the data operation with a single call to ibv_post_send. Change-Id: Ib2e485cbbe6887919109e73e17d4fab595d5e65e BUG: 1164079 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9327 Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
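A short sketch of posting a pre-built chain (see the per-slice build sketch a few entries above); ibv_post_send() reports the first request it could not queue through bad_wr.

    #include <infiniband/verbs.h>

    static int
    post_wr_chain (struct ibv_qp *qp, struct ibv_send_wr *head)
    {
        struct ibv_send_wr *bad_wr = NULL;

        if (ibv_post_send (qp, head, &bad_wr)) {
            /* bad_wr points at the first WR that was not queued */
            return -1;
        }
        return 0;
    }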
* rdma: aggregate a vectored read as one | Mohammed Rafi KC | 2015-02-12 | 1 | -0/+10
A vectored read with payload count > 1 makes two read requests, and a single contiguous memory region is allocated to hold them. So after the read request completes, instead of sending it up as a vector, we aggregate all the reads into one. Change-Id: I15e7d7bddc1a62d5097a39392575f47cfff3d3a8 BUG: 1164079 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9321 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* epoll: edge triggered and multi-threaded epoll | Vijaikumar M | 2015-02-07 | 1 | -5/+28
- edge triggered (oneshot) polling with epoll
- pick one event to avoid multiple events getting picked up by the same thread, and so get better distribution of events across multiple threads
- wire support for multiple poll threads to call epoll_wait in parallel
- evdata stores the absolute index, not a hint, for epoll
- store index and generation of the slot instead of fd and index hint
- perform fd close asynchronously inside event.c for multi-thread safety
- poll is still single threaded
Change-Id: I536851dda0ab224c5d5a1b130a571397c9cace8f BUG: 1104462 Signed-off-by: Anand Avati <avati@redhat.com> Signed-off-by: Vijaikumar M <vmallika@redhat.com> Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Signed-off-by: Shyam <srangana@redhat.com> Reviewed-on: http://review.gluster.org/3842 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
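A generic sketch of the oneshot pattern listed above, outside the GlusterFS event framework: with EPOLLONESHOT the descriptor is disarmed after one event, so exactly one thread handles it, and the handler re-arms it when done.

    #include <sys/epoll.h>

    /* The thread woken by epoll_wait() owns the fd until it re-arms it. */
    static int
    handle_and_rearm (int epfd, int fd, void *slot)
    {
        struct epoll_event ev = {
            .events   = EPOLLIN | EPOLLONESHOT,
            .data.ptr = slot,        /* slot index/generation, not the raw fd */
        };

        /* ... read and process one event's worth of data here ... */

        return epoll_ctl (epfd, EPOLL_CTL_MOD, fd, &ev);
    }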
* rpc: fix ref leak in ping timer | Krishnan Parthasarathi | 2015-02-04 | 1 | -1/+16
Change-Id: I4ddc371d01ec763706a168a215410015ee2a3787 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/9578 Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* geo-rep: Adding Slave user field to georep status | Aravinda VK | 2015-02-02 | 1 | -0/+1
A new column, "SLAVE USER", is introduced in the status output; the slave user is not "root" in a non-root geo-replication setup. Also added an additional <slave_user> tag to the XML output. BUG: 1180459 Change-Id: Ia48a5a8eb892ce883b9ec114be7bb2d46eff8535 Signed-off-by: Aravinda VK <avishwan@redhat.com> Reviewed-on: http://review.gluster.org/9409 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kotresh HR <khiremat@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Venky Shankar <vshankar@redhat.com> Tested-by: Venky Shankar <vshankar@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com>
* socket: fix segfaults when TLS management connections fail | Jeff Darcy | 2015-01-27 | 1 | -11/+19
Change-Id: I1fd085b04ad1ee68c982d3736b322c19dd12e071 BUG: 1160900 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/9059 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* mgmt/glusterd: Implement Volume heal enable/disable | Pranith Kumar K | 2015-01-20 | 1 | -0/+2
For volumes with replicate or disperse xlators, the self-heal daemon should do the healing. This patch provides enable/disable functionality for these xlators to be part of self-heal-daemon. Replicate already had this functionality with 'gluster volume set cluster.self-heal-daemon on/off', but this patch makes it uniform for both volume types. Internally it still does a 'volume set' based on the volume type. Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29 BUG: 1177601 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/9358 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cluster/afr: split-brain resolution CLI | Ravishankar N | 2015-01-15 | 1 | -0/+2
Extend the AFR heal command to include automated split-brain resolution. This patch [3/3] is the final patch of the afr automated split-brain resolution implementation.
gluster volume heal <VOLNAME> [full | statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] | info [healed | heal-failed | split-brain] | split-brain {bigger-file <FILE> | source-brick <HOSTNAME:BRICKNAME> [<FILE>]}]
The new additions being:
1. gluster volume heal <VOLNAME> split-brain bigger-file <FILE>: locates the replica containing FILE, selects the bigger file as source and completes the heal.
2. gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>: selects <FILE> present in <HOSTNAME:BRICKNAME> as source and completes the heal.
3. gluster volume heal <VOLNAME> split-brain <HOSTNAME:BRICKNAME>: selects all split-brained files in <HOSTNAME:BRICKNAME> as source and completes the heal.
Note: <FILE> can be either the full file name as seen from the root of the volume or the gfid-string representation of the file, which sometimes gets displayed in the heal info command's output. Entry/gfid split-brain resolution is not supported. An example can be found in the test case. Change-Id: I4649733922d406f14f28ee9033a5cb627b9538b3 BUG: 1136769 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/9377 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* rpc: initialise transport's list on creation | Krishnan Parthasarathi | 2015-01-15 | 1 | -0/+2
Initialising the transport's list, meant to hold the clients connected to it, on the first connection event is prone to races, especially with the introduction of the multi-threaded event layer. BUG: 1181203 Change-Id: I6a20686a2012c1f49a279cc9cd55a03b8c7615fc Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/9413 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* build: FreeBSD 11-Current causes libtool to fail with '-shared' | Harshavardhana | 2014-12-12 | 1 | -1/+1
Thanks to Markiyan Kushnir <markiyan.kushnir@gmail.com> for reporting this. Change-Id: I7f637295c7c2d54c33a4c16e29daf0b518874911 BUG: 1111774 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/9251 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com>
* rpc/rpcsvc: add peername to log messages | Krishnan Parthasarathi | 2014-12-10 | 2 | -13/+22
This allows users/developers to associate rpc-layer log messages with the corresponding connection. Change-Id: I040f79248dced7174a4364d9f995612ed3540dd4 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/8535 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: vectored write fails for rdma | Mohammed Rafi KC | 2014-12-07 | 2 | -3/+3
An rdma write with a payload count greater than one fails due to insufficient memory to hold the buffers in the rpc transport layer. It expected only one vector in the payload, so it could only decode the first iovec from the payload and the rest were discarded. Thanks to Raghavendra Gowdappa for fixing the same. Change-Id: I82a649a34abe6320d6216c8ce73e69d9b5e99326 BUG: 1171142 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9247 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: client process will hang if the server starts to send a request before completing connection establishment | Mohammed Rafi KC | 2014-11-18 | 1 | -0/+21
In rdma, the client and server exchange their available buffers during the handshake so that incoming messages can be posted. Initially the available buffer count is set to one, for the first message of the handshake; when that first message is received, the buffer quota is set to its proper value. If the server starts sending messages before the first message is received, the buffer reserved for the handshake is used up and the handshake fails for lack of buffers. So the server should be blocked from sending messages before the connection is properly established. Change-Id: I68ef44998f5df805265d3f42a5df7c31cb57f136 BUG: 1158746 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9003 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: client connection establishment takes more time | Mohammed Rafi KC | 2014-11-18 | 1 | -1/+1
For an rdma-only volume, client connection establishment with the server takes more than three seconds. A tcp,rdma volume has two ports, one for tcp and one for rdma; during pmap_signin the tcp port is stored under the brick name and the rdma port is stored as "brickname.rdma". During the handshake, when trying to get the brick port for rdma clients, we do not know the server transport type, so we append '.rdma' to the brick name. For a tcp,rdma volume there is an entry with '.rdma', but for an rdma-only volume the lookup fails. We then retry, this time without appending '.rdma', using the flag variable need_different_port, and it succeeds, but the reconnection happens only after 3 seconds. With this patch, for an rdma-only volume we append '.rdma' during pmap_signin itself, so the handshake gets the correct port on the first try. Since we no longer need to retry, the need_different_port flag variable can be removed. Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c BUG: 1153569 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/8934 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: rdma fuse mount hangs for tcp,rdma volumes if brick is down | Mohammed Rafi KC | 2014-11-17 | 2 | -5/+3
When we try to mount a tcp,rdma volume as rdma transport using the FUSE protocol, the mount hangs if the brick is down. When we kill a process, the signal is received by the glusterfsd process and it calls pmap_signout only for the port listening on tcp. In the tcp,rdma case there are two ports, and the port listening for rdma is never signed out. So the mount process tries to connect to a port that is not open and keeps trying to connect. This patch calls pmap_signout for the rdma port as well, so when the mount tries to get the brick port it will fail. Change-Id: I23676f65f96eb90b69b76478f7a21412a6aba70f BUG: 1143886 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/8762 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: glusterd crash if rdma_disconnect is called as soon as a connect request arrives | Mohammed Rafi KC | 2014-11-14 | 1 | -9/+17
We initialize the connection on the server side immediately after rdma_accept is called, but we delay adding the transport to the listener list until the RDMA_CM_EVENT_ESTABLISHED event is received. If a disconnect arrives before that event, glusterd tries to remove a transport that was never added to the list, and if the list is empty this causes a glusterd crash. With this patch, the function that initializes the connection is called as soon as rdma_accept is called. Change-Id: I019480297a85349ede3101ee9c7c1596dc5c73e2 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> BUG: 1164079 Reviewed-on: http://review.gluster.org/8925 Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* Build fix: xdrgen | Emmanuel Dreyfus | 2014-11-13 | 1 | -54/+19
As discovered in https://review.gluster.org/8762, BSD systems fail to run xdrgen during the glusterfs build. This seems to be caused by a difference between BSD make and GNU make with implicit targets. The former seems to use an absolute path here, which means we should not prepend it with the current directory path, otherwise we have the directory path twice and the files cannot be found by make. This is a second attempt after I178123bf6f3d9e963ff5b78839d498f530c05a97, which was broken and reverted in I3c8966288f66d0eafa2e94490e3b64a057b4f2c0. BUG: 1157839 Change-Id: I797c536c319a156b71a42c82cbaf80bbf17b7234 Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/9046 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rdma: setting rdma REUSEADDR flag on the rdma id | Mohammed Rafi KC | 2014-11-13 | 1 | -0/+10
When we restart the process, the old connection goes into TIME_WAIT state to make sure that all the data in the transport is successfully delivered. REUSEADDR allows the server to bind to an address which is in TIME_WAIT state. Change-Id: Ic7deb0d7442c29494fe088598ffe9c87977c04ff Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9005 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
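A short sketch of the librdmacm call this refers to; error handling is trimmed.

    #include <rdma/rdma_cma.h>

    /* Allow binding to an address that is still in TIME_WAIT after a restart. */
    static int
    set_rdma_reuseaddr (struct rdma_cm_id *id)
    {
        int reuse = 1;

        return rdma_set_option (id, RDMA_OPTION_ID, RDMA_OPTION_ID_REUSEADDR,
                                &reuse, sizeof (reuse));
    }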
* Revert "Build fix: xdrgen"Niels de Vos2014-11-041-36/+36
| | | | | | | | | | | | | | | | This reverts commit 12bc39c144aa41a097435f2aab304ddfbbb9b625 This causes all regression tests to fail. Reverting for now, we can include a correct fix later. Change-Id: I3c8966288f66d0eafa2e94490e3b64a057b4f2c0 URL: http://supercolony.gluster.org/pipermail/gluster-devel/2014-November/042759.html BUG: 1157839 Reported-by: Emmanuel Dreyfus <manu@netbsd.org> Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: http://review.gluster.org/9040 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Build fix: xdrgen | Emmanuel Dreyfus | 2014-11-02 | 1 | -36/+36
As discovered in https://review.gluster.org/8762, BSD systems fail to run xdrgen during the glusterfs build. This seems to be caused by a difference between BSD make and GNU make with implicit targets. The former seems to use an absolute path here, which means we should not prepend it with the current directory path, otherwise we have the directory path twice and the files cannot be found by make. BUG: 1157839 Change-Id: I178123bf6f3d9e963ff5b78839d498f530c05a97 Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/9016 Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
* socket: disallow CBC cipher modes | Jeff Darcy | 2014-10-27 | 1 | -1/+67
This is related to CVE-2014-3566, a.k.a. POODLE. http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566 POODLE is specific to CBC cipher modes in SSLv3. Because there is no way to prevent SSLv3 fallback on a system with an unpatched version of OpenSSL, users of such systems can only be protected by disallowing CBC modes. The default cipher-mode specification in our code has been changed accordingly. Users can still set their own cipher modes if they wish; to support them, the ssl-authz.t test script provides an example of how to combine the CBC exclusion with other criteria in a script. Change-Id: Ib1fa547082fbb7de9df94ffd182b1800d6e354e5 BUG: 1155328 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/8962 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
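A hedged illustration of applying a cipher-list policy with OpenSSL. The string below is illustrative only and is not the default GlusterFS actually ships (that lives in socket.c); excluding the SSLv3-era suites is one way to leave an SSLv3 client with no CBC suite to negotiate.

    #include <openssl/ssl.h>

    /* Placeholder policy string: real deployments should pick their own. */
    #define EXAMPLE_CIPHERS "HIGH:!SSLv3:!RC4:!aNULL:!eNULL"

    static int
    apply_cipher_policy (SSL_CTX *ctx, const char *user_ciphers)
    {
        const char *ciphers = user_ciphers ? user_ciphers : EXAMPLE_CIPHERS;

        /* SSL_CTX_set_cipher_list() returns 1 when at least one cipher matched. */
        return SSL_CTX_set_cipher_list (ctx, ciphers) == 1 ? 0 : -1;
    }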
* rdma: mount hangs for rdma type transport | Mohammed Rafi KC | 2014-10-08 | 1 | -46/+46
An rdma-transport mount hangs if there is a delay in the network. We set the transport as connected when we get an RDMA_CM_EVENT_ESTABLISHED event, but we cannot be sure whether the client or the server will get the event first; the only requirement is that the side which sends the first request should wait for the event. If the client gets the event first, it sends the DUMP request; the request reaches the server, but the server rejects the rpc request since it has not yet received RDMA_CM_EVENT_ESTABLISHED. So on the server we now set the connected flag as soon as rdma_accept is called. Change-Id: Iac5845e3592666daa575c727822889779b5bd203 BUG: 1146492 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/8850 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* gNFS: Subdir mount does not work on UDP proto | Santosh Kumar Pradhan | 2014-10-07 | 2 | -40/+9
After enabling nfs.mount-udp, mounting a subdir of a volume over NFS fails, because mountudpproc3_mnt_3_svc() invokes nfs3_rootfh(), which internally calls mnt3_mntpath_to_export() to resolve the mount path. mnt3_mntpath_to_export() only works if the requested mount path is the volume itself; it cannot resolve a path that is a subdir inside the volume. MOUNT over TCP uses mnt3_find_export() to resolve subdir paths, but UDP can't use that routine because mnt3_find_export() needs the request data (of type rpcsvc_request_t), which is available only for the TCP version of the RPC.
FIX:
(1) Use the syncop_lookup() framework to resolve the MOUNT PATH by breaking it into components and resolving it component by component, i.e. the glfs_resolve_at() API from libgfapi.
(2) If the MOUNT PATH is a subdir, make sure subdir export is not disabled.
(3) Add an auth mechanism to respect nfs.rpc-auth-allow/reject and subdir auth, i.e. nfs.export-dir.
(4) Enhanced error handling for MOUNT over UDP.
Change-Id: I42ee69415d064b98af4f49773026562824f684d1 BUG: 1118311 Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com> Reviewed-on: http://review.gluster.org/8346 Reviewed-by: soumya k <skoduri@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com>
* Sane default for SSL on OSX | Harshavardhana | 2014-09-29 | 1 | -1/+1
- /opt/local is not preferred anymore; use /usr/local. Change-Id: I30cad4cbd28850063f26121cace05371e13bb314 BUG: 1129939 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8872 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Use sane OS-dependent defaults for SSL configuration | Emmanuel Dreyfus | 2014-09-26 | 1 | -3/+17
The current code assumes /etc/ssl exists, which may not be the case. Attempt to guess a sane default for a few OSes. BUG: 1129939 Change-Id: I0f3168f79b8f4275636581041740dfcaf25f3edd Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8790 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* socket: Fixed parsing RPC records containing multi fragments | Gu Feng | 2014-09-19 | 1 | -3/+20
In __socket_proto_state_machine(), when parsing RPC records containing multiple fragments, only the state of the parsing process was changed; the memory was never processed to coalesce the fragments. Change-Id: I5583e578603bd7290814a5d26885b31759c73115 BUG: 1139598 Signed-off-by: Gu Feng <flygoast@126.com> Reviewed-on: http://review.gluster.org/8662 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
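For context, a sketch of RPC-over-TCP record marking (RFC 5531): each fragment is preceded by a 4-byte header whose top bit marks the last fragment, so a reader must keep accumulating fragment payloads until that bit is seen. This is a generic illustration, not the gluster state machine itself.

    #include <stdint.h>
    #include <arpa/inet.h>

    #define RPC_LASTFRAG 0x80000000u
    #define RPC_FRAGSIZE 0x7fffffffu

    /* Decode one fragment header: returns the payload size and sets *last
     * when this is the final fragment of the record. */
    static uint32_t
    rpc_frag_header (uint32_t raw_be, int *last)
    {
        uint32_t h = ntohl (raw_be);

        *last = (h & RPC_LASTFRAG) != 0;
        return h & RPC_FRAGSIZE;
    }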
* snapview-server: register a callback with glusterd to get notifications | Raghavendra Bhat | 2014-09-08 | 1 | -0/+1
As of now snapview-server polls glusterd (by sending rpc requests) at a fixed, non-configurable interval to get the latest list of snapshots. Instead, register a callback with glusterd so that glusterd sends notifications to snapd whenever a snapshot is created or deleted, and snapview-server can configure itself. Change-Id: I17a274fd2ab487d030678f0077feb2b0f35e5896 BUG: 1119628 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8150 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: Support of volume get for a specific volume option | Atin Mukherjee | 2014-08-26 | 1 | -0/+1
This patch introduces a cli command to display a specific volume option/all volume options of a specific volume. Usage: volume get <VOLNAME> <key|all> Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d BUG: 983317 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8305 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Fix glustershd detection on volume restart | Emmanuel Dreyfus | 2014-08-25 | 1 | -0/+12
On NetBSD and FreeBSD, doing a 'gluster volume start $volume force' causes the NFS server, quotad, snapd and glustershd to go undetected by glusterd once the volume has restarted. 'gluster volume status' shows the processes as 'N' in the online column, while they have been launched successfully. This happens because glusterd attempts to connect to its child processes in the window between the child's unlink() on the socket in __socket_server_bind() and the time it calls bind() and listen(). A different scheduling policy may explain why the problem does not happen on Linux, but it may pop up some day since we make no guaranteed assumptions here. This patch works around the problem by introducing a boolean transport.socket.ignore-enoent option, set by nfs and glustershd, which prevents ENOENT from being fatal and lets glusterd retry and succeed later. Behavior of other clients is unaffected. BUG: 1129939 Change-Id: Ifdc4d45b2513743ed42ee235a5c61a086321644c Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8403 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
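A sketch of the race window and the tolerant client described above; the ignore_enoent flag mirrors the new option, and the helpers are simplified stand-ins for the socket transport.

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <errno.h>

    /* Server: between unlink() and bind() the path does not exist, so a
     * client connecting in that window gets ENOENT, not ECONNREFUSED. */
    static int
    server_bind_unix (int sock, const struct sockaddr_un *addr)
    {
        unlink (addr->sun_path);                  /* window opens  */
        if (bind (sock, (const struct sockaddr *) addr, sizeof (*addr)) < 0)
            return -1;
        return listen (sock, 10);                 /* window closes */
    }

    /* Client: with ignore-enoent set, treat ENOENT as transient and let the
     * caller reschedule the connect attempt instead of giving up. */
    static int
    client_connect_unix (int sock, const struct sockaddr_un *addr,
                         int ignore_enoent)
    {
        if (connect (sock, (const struct sockaddr *) addr, sizeof (*addr)) == 0)
            return 0;
        if (errno == ENOENT && ignore_enoent)
            return 1;   /* not fatal: retry later */
        return -1;
    }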
* rdma: glusterfsd SEGV at volume start | Kaleb S. KEITHLEY | 2014-08-13 | 1 | -0/+1
glusterfsd NULL pointer dereference in proto/server: get_frame_from_request() with a 'transport rdma' volume. No test case: our regression test framework doesn't have InfiniBand. If it did, the test case would be to create a 'transport rdma' volume, start it, and create/write/read/delete files on the volume. Change-Id: I91a6956bdf8f61f3853e0c0951744460ba138576 BUG: 1129708 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com> Reviewed-on: http://review.gluster.org/8479 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* porting: OSX/Darwin 10.9 porting issues | Harshavardhana | 2014-08-12 | 2 | -4/+32
xdrproc_t() arguments are variadic on some platforms and non-variadic on others:
On OSX > 10.9: typedef bool_t (*xdrproc_t)(XDR *, void *, unsigned int);
On OSX < 10.9: typedef bool_t (*xdrproc_t)(XDR *, ...);
FreeBSD (all versions): typedef bool_t (*xdrproc_t)(XDR *, ...);
NetBSD 6.1.4: typedef bool_t (*xdrproc_t)(XDR *, const void *);
Linux (all versions): typedef bool_t (*xdrproc_t)(XDR *, void *, ...);
These weird and inconsistent implementations across the various platforms have to be handled properly. Change-Id: Iad8b7da2e5b82526bf3708cff31ab10ce09f59c9 BUG: 1128820 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8458 Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* build: make GLUSTERD_WORKDIR rely on localstatedir | Harshavardhana | 2014-08-07 | 3 | -2/+16
- Break away from the previously hard-coded '/var/lib/glusterd'; instead rely on the 'configure' value of 'localstatedir'
- Provide 's/lib/db' as the default working directory for the gluster management daemon on BSD and Darwin based installations
- loff_t is really off_t on Darwin
- Fix the warnings generated by clang on FreeBSD/Darwin
- Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all platforms
- Define a proper environment for running tests: set PATH and LD_LIBRARY_PATH when running tests so that the desired version of glusterfs is used, regardless of where it is installed (thanks to manu@netbsd.org for this additional work)
Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7 BUG: 1111774 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8246 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* socket: add boundary checks for iobuf_get2 over rpc_hdr_bytes | Harshavardhana | 2014-08-04 | 1 | -4/+12
A malformed packet can cause an OOM when performing iobuf_get2() on a large enough packet size. Such a scenario was observed when running vulnerability tests: one of those tests, apparently based on DoS (Denial of Service) attacks, hand-crafts an RPC packet of a very large size. Since we do not verify the size and provide no boundary checks, there are scenarios where this leads to OOM. Consistent reproduction with those tests showed that we should ideally add a boundary check. Limit such an allocation to 1 gigabyte, which should be sufficient for all varieties of RPC packets. Change-Id: I5f1411dd96d6f167993d28a1718ffef2fb4e9923 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8384 Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
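A hedged sketch of the boundary check described above; the constant and helper are hypothetical, and the real check sits in the socket transport before iobuf_get2() is called.

    #include <stddef.h>

    #define RPC_MAX_FRAGMENT_SIZE (1UL << 30)   /* 1 GiB cap, as described above */

    /* Validate the size decoded from the RPC record header before allocating. */
    static int
    validate_frag_size (size_t claimed_size)
    {
        if (claimed_size == 0 || claimed_size > RPC_MAX_FRAGMENT_SIZE)
            return -1;   /* reject the request instead of an OOM-sized alloc */
        return 0;        /* safe to pass claimed_size to iobuf_get2() */
    }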