author    Ravishankar N <>    2016-01-18 12:16:31 +0000
committer Pranith Kumar Karampuri <>    2016-02-17 01:49:28 -0800
commit    45301bcd97825206f7f19b25a4ad722e7dc13cc6 (patch)
tree      841a3321def4d18d615daf30ae24c4b80eaa3a0e /cli
parent    d132a4704d9b1df55c4c4e56a8389078b80897bd (diff)
cli/afr: op_ret for index heal launch
Backport of

Problem: If index heal is launched when some of the bricks are down, glustershd of that node sends a -1 op_ret to glusterd, which eventually propagates it to the CLI. Also, glusterd sometimes sends an err_str and sometimes not (depending on whether the failure happens in the brick-op phase or the commit-op phase), so the message that gets displayed varies in each case:

"Launching heal operation to perform index self heal on volume testvol has been unsuccessful"
(OR)
"Commit failed on <host>. Please check log file for details."

Fix:
1. Modify afr_xl_op() to return -1 if index healing of at least one brick fails.
2. Ignore glusterd's error string in gf_cli_heal_volume_cbk and print a more meaningful message.

The patch also fixes a bug in glusterfs_handle_translator_op() where, if we encounter an error in the notify of one xlator, we break out of the loop instead of sending the notify to the other xlators.

Change-Id: I957f6c4b4d0a45453ffd5488e425cab5a3e0acca
BUG: 1306922
Signed-off-by: Ravishankar N <>
Reviewed-on:
Smoke: Gluster Build System <>
NetBSD-regression: NetBSD Build System <>
CentOS-regression: Gluster Build System <>
Reviewed-by: Pranith Kumar Karampuri <>
Diffstat (limited to 'cli')
1 file changed, 4 insertions, 7 deletions
diff --git a/cli/src/cli-rpc-ops.c b/cli/src/cli-rpc-ops.c
index b4fbd29c1f1..416b1e09539 100644
--- a/cli/src/cli-rpc-ops.c
+++ b/cli/src/cli-rpc-ops.c
@@ -8475,13 +8475,10 @@ gf_cli_heal_volume_cbk (struct rpc_req *req, struct iovec *iov,
         if (rsp.op_ret) {
-                if (strcmp (rsp.op_errstr, "")) {
-                        cli_err ("%s", rsp.op_errstr);
-                } else {
-                        cli_err ("%s%s on volume %s has been unsuccessful",
-                                 operation, heal_op_str, volname);
-                }
+                cli_err ("%s%s on volume %s has been unsuccessful on "
+                         "bricks that are down. Please check if all brick "
+                         "processes are running.",
+                         operation, heal_op_str, volname);
                 ret = rsp.op_ret;
                 goto out;
         } else {