path: root/xlators/features/quota/src/quotad-helpers.c
* stack: Make sure to have unique call-stacks in all cases (Pranith Kumar K, 2019-05-30, 1 file changed, -3/+0)
At the moment a new stack doesn't populate frame->root->unique in all cases, which makes it difficult to debug hung frames by examining successive state dumps. The fuse and server xlators populate it whenever they can, but other xlators cannot assign 'unique' when they need to create a new frame/stack, because they don't know which 'unique' values the fuse/server xlators have already used. What we need is for 'unique' to be correct: if a stack with the same 'unique' is present in successive statedumps, the same operation is still in progress. This makes the 'finding hung frames' part of debugging hung frames easier.

fixes bz#1714098
Change-Id: I3e9a8f6b4111e260106c48a2ac3a41ef29361b9e
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
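A minimal sketch of the approach described above, assuming a process-wide atomic counter is acceptable: hand every new call stack an ID from one shared counter so no two stacks can collide on 'unique'. The counter and helper names are hypothetical, not the actual GlusterFS code.

    #include <stdint.h>

    /* Hypothetical process-wide counter; in real code this would sit in
     * libglusterfs next to the stack-allocation helpers. */
    static uint64_t global_stack_unique;

    static inline uint64_t
    next_stack_unique(void)
    {
        /* Atomically hand out the next ID so every stack created in this
         * process gets a distinct 'unique', no matter which xlator asks. */
        return __atomic_add_fetch(&global_stack_unique, 1, __ATOMIC_SEQ_CST);
    }

    /* At stack-creation time (field name from the commit message):
     *     frame->root->unique = next_stack_unique();
     */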
* quotad: fix passing GF_DATA_TYPE_STR_OLD dict data to v4 protocol (Kinglong Mee, 2019-03-04, 1 file changed, -0/+3)
quotad prints many logs such as:

    [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
    [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]

For quota there is a daemon named quotad, whose rpcsvc_program quotad_aggregator_prog only supports v3 right now. Quotad has two actors (LOOKUP, GETLIMIT) that carry a dict in the request. Quotad decodes that dict with dict_unserialize, and the dict data's type is GF_DATA_TYPE_STR_OLD, a type that is not supported by the glusterfs v4 protocol.

Change-Id: Ib649d7a2e3c68c32dc26bc0f88923a0ba967ebd7
Updates: bz#1596787
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
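A hedged sketch of the kind of fix the message implies: after dict_unserialize, walk the dict and retag legacy GF_DATA_TYPE_STR_OLD values as GF_DATA_TYPE_STR so that dict_to_xdr will put them on the wire. This illustrates the idea only; the helper name is made up and the actual patch may differ.

    /* Sketch against libglusterfs' dict_t/data_t types; "retag_old_str"
     * is a hypothetical helper, not the actual patch. */
    static int
    retag_old_str(dict_t *this, char *key, data_t *value, void *unused)
    {
        /* dict_to_xdr() refuses GF_DATA_TYPE_STR_OLD entries ("is not
         * sent on wire"); retag them as plain strings. */
        if (value->data_type == GF_DATA_TYPE_STR_OLD)
            value->data_type = GF_DATA_TYPE_STR;
        return 0;
    }

    /* After dict_unserialize(buf, buflen, &dict):
     *     dict_foreach(dict, retag_old_str, NULL);
     */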
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12, 1 file changed, -57/+57)
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu <nigelb@redhat.com>
* rpc: Maintain separate xlator pointer in 'rpcsvc_state' (Kotresh HR, 2015-05-04, 1 file changed, -1/+1)
The structure 'rpcsvc_state', which maintains rpc server state, had no separate pointer to track the translator; it was using the mydata pointer itself. Callers were therefore forced to send the xlator pointer as mydata, which is opaque (a void pointer) in the function prototype. 'rpcsvc_register_init' sets svc->mydata to the xlator pointer, 'rpcsvc_register_notify' overwrites svc->mydata with the mydata pointer, and rpc internally interprets svc->mydata as an xlator pointer. If someone passes a non-xlator structure pointer to rpcsvc_register_notify, as libgfchangelog currently does, it might corrupt mydata. Interpreting the opaque mydata as an xlator pointer is incorrect, since it is the caller's choice to send mydata as any type of data to 'rpcsvc_register_notify'. Maintaining two different pointers in 'rpcsvc_state', one for the xlator and one for mydata, solves the issue.

Change-Id: I7874933fefc68f3fe01d44f92016a8e4e9768378
BUG: 1215161
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/10366
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
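A minimal sketch of the data-structure change described above: keep the owning translator and the caller's opaque data in separate fields so the rpc layer never reinterprets one as the other. The struct and field shapes are illustrative assumptions, not the real rpcsvc definitions.

    /* Illustrative shape only; the real rpcsvc_state carries many more
     * fields, and 'xl' would be an xlator_t * rather than void *. */
    struct rpcsvc_state_sketch {
        void *xl;     /* owning translator, set once at registration */
        void *mydata; /* caller-supplied opaque data for notify callbacks */
        /* ... */
    };

    /* With two fields, a caller such as libgfchangelog can hand any
     * pointer to the notify registration without the rpc layer later
     * misreading it as an xlator:
     *
     *     svc->xl     = this;    // rpcsvc_register_init keeps the xlator
     *     svc->mydata = mydata;  // rpcsvc_register_notify keeps caller data
     */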
* quotad: Remove dead code (Pranith Kumar K, 2014-06-30, 1 file changed, -6/+0)
client_t is created by the server xlator for managing connection-related resources. Quotad doesn't do that, so there is no need to handle anything related to it.

Change-Id: I83e6f9e1c57458d60529dc62086bb63642932d49
BUG: 1113403
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8180
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* features/quota: Improvements to quota (Raghavendra G, 2013-11-26, 1 file changed, -0/+113)
* Two stages of quota enforcement are done: soft and hard quota. Upon reaching the soft quota limit on a directory, an alert is logged in the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log), and no more writes are allowed after the hard quota limit is reached. After the soft limit is crossed, the daemon alerts the user/admin repeatedly every 'alert-time', which is configurable.

* The quota enforcer is moved to the server side and takes care of enforcing quota. Since the enforcer doesn't have the cluster view, it relies on another service called the quota aggregator, which on query can return the size of a directory based on the cluster view. The enforcer is always loaded in the server graph and is bypassed if the feature is not enabled. Options specific to the enforcer:

  server-quota - specifies whether the feature is on/off; used to bypass quota when turned off.

  deem-statfs - if set to on, quota limits are taken into consideration while estimating fs size (df command). The algorithm followed is (see the sketch after this entry):
  i. Adjust statvfs based on the limit configured on root.
  ii. If a limit is set on the inode passed, use the size/limits on that inode to populate statvfs; otherwise, use the size/limits configured on root.
  iii. Upon statvfs, update ctx->size on the inode.
  iv. Don't let DHT aggregate; instead take the maximum of the usages from the subvols of DHT, since each of them contains the complete information.

  The enforcer also makes use of the gfid-to-path conversion functionality to work correctly when a client like nfs relies predominantly on nameless lookups.

* The quota aggregator acts as a thin client to provide the cluster view. It is a lightweight *gluster client* process with no mount point, started upon enabling quota or restarting the volume. A single such process runs on each brick and can answer queries on all volumes in the cluster. Its volfile is stored in GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.

Credits: Raghavendra Bhat <rabhat@redhat.com>
         Varun Shastry <vshastry@redhat.com>
         Shishir Gowda <sgowda@redhat.com>
         Kruthika Dhananjay <kdhananj@redhat.com>
         Brian Foster <bfoster@redhat.com>
         Krishnan Parthasarathi <kparthas@redhat.com>

Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/5952
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
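A hedged sketch of the deem-statfs idea from the list above: clamp the statvfs reply so 'df' reports the quota limit as the filesystem size and the remaining quota as free space. The helper name and the blocks arithmetic are illustrative assumptions, not the actual quota xlator code.

    #include <stdint.h>
    #include <sys/statvfs.h>

    /* Hypothetical helper: rewrite a statvfs result against a quota
     * limit. 'limit' and 'used' are byte counts for the directory, taken
     * from the configured limit on root or on the queried inode, per the
     * algorithm in the commit message. */
    static void
    deem_statfs_sketch(struct statvfs *buf, uint64_t limit, uint64_t used)
    {
        uint64_t blocks = limit / buf->f_bsize;  /* quota size in blocks */
        uint64_t avail = (limit > used)          /* remaining quota */
                             ? (limit - used) / buf->f_bsize
                             : 0;

        /* Report no more than the quota allows, and never inflate beyond
         * the real filesystem numbers. */
        if (blocks < buf->f_blocks)
            buf->f_blocks = blocks;
        if (avail < buf->f_bfree)
            buf->f_bfree = avail;
        if (avail < buf->f_bavail)
            buf->f_bavail = avail;
    }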