<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt, branch v3.12.3</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: restart the brick if quorum status is NOT_APPLICABLE_QUORUM</title>
<updated>2017-11-10T08:52:14+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-11-06T07:53:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ae9d80461a1f95097b7b406d29c020f64c56ffb4'/>
<id>ae9d80461a1f95097b7b406d29c020f64c56ffb4</id>
<content type='text'>
If a volume does not have server quorum enabled and, in a trusted storage
pool, all the glusterd instances from the other peers are down, then on
restarting glusterd the brick start is never triggered and the brick does
not come up.
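
A rough sketch of the decision this change affects (simplified enum and a
hypothetical helper, not the glusterd source): bricks must be (re)started
both when server quorum is met and when quorum simply does not apply.

    enum quorum_status {
        MEETS_QUORUM,
        DOESNT_MEET_QUORUM,
        NOT_APPLICABLE_QUORUM   /* server quorum not enabled for the volume */
    };

    /* Hypothetical helper: start bricks when quorum is met or not applicable,
       e.g. when every other glusterd in the pool is down at restart time. */
    static int should_start_bricks(enum quorum_status status)
    {
        return status == MEETS_QUORUM || status == NOT_APPLICABLE_QUORUM;
    }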

&gt; mainline patch : https://review.gluster.org/#/c/18669/

Change-Id: If1458e03b50a113f1653db553bb2350d11577539
BUG: 1511301
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 635c1c3691a102aa658cf1219fa41ca30dd134ba)
</content>
</entry>
<entry>
<title>glusterd: introduce timer in mgmt_v3_lock</title>
<updated>2017-11-06T12:16:15+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-05T18:14:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1a1bdfbffc81981a80af40ebf000194d9bcd1bf0'/>
<id>1a1bdfbffc81981a80af40ebf000194d9bcd1bf0</id>
<content type='text'>
Problem:
In a multi-node environment, if two op-sm transactions are initiated on one
of the receiver nodes at the same time, glusterd may end up holding a stale
lock.

Solution:
During mgmt_v3_lock, a callback is registered with gf_timer_call_after
which releases the lock after a certain period of time.
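
A minimal, self-contained sketch of the idea (plain pthreads standing in for
gf_timer_call_after, hypothetical names throughout): the expiry is armed at
the same time the lock is taken, so a holder that never returns cannot leave
the lock stale forever; a normal unlock would cancel the expiry.

    #include &lt;pthread.h&gt;
    #include &lt;unistd.h&gt;

    #define LOCK_TIMEOUT_SECS 180        /* hypothetical expiry period */

    static pthread_mutex_t guard = PTHREAD_MUTEX_INITIALIZER;
    static int lock_held;                /* stand-in for the mgmt_v3 lock entry */

    static void *expire_lock(void *arg)  /* stand-in for the timer callback */
    {
        (void)arg;
        sleep(LOCK_TIMEOUT_SECS);
        pthread_mutex_lock(&amp;guard);
        lock_held = 0;                   /* force-release the stale lock */
        pthread_mutex_unlock(&amp;guard);
        return NULL;
    }

    int take_mgmt_lock(void)
    {
        pthread_t timer;

        pthread_mutex_lock(&amp;guard);
        if (lock_held) {
            pthread_mutex_unlock(&amp;guard);
            return -1;                   /* somebody else already holds it */
        }
        lock_held = 1;
        pthread_mutex_unlock(&amp;guard);

        /* Arm the expiry alongside the lock acquisition. */
        pthread_create(&amp;timer, NULL, expire_lock, NULL);
        pthread_detach(timer);
        return 0;
    }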

&gt;mainline patch : https://review.gluster.org/#/c/18437/

Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
BUG: 1503239
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: clean up portmap on brick disconnect</title>
<updated>2017-11-06T06:09:45+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-17T16:02:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=213dfd853bfa8c46d4032698a10208516437ad6a'/>
<id>213dfd853bfa8c46d4032698a10208516437ad6a</id>
<content type='text'>
GlusterD's portmap entry for a brick is cleaned up when a PMAP_SIGNOUT event is
initiated by the brick process at shutdown. But if the brick process crashes or
is killed through SIGKILL, this event is never initiated and glusterd ends up
with a stale port. Since GlusterD's portmap traversal happens both ways, forward
for allocation and backward for registry search, glusterd might end up running
with a stale port for a brick, which eventually causes clients to fail to
connect to the bricks.

The solution is to clean up the port entry as part of the brick disconnect
event when the process is down. Although this makes handling the PMAP_SIGNOUT
event redundant in most cases, it remains a safeguard against glusterd running
into stale port issues.
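
A rough, self-contained sketch of the cleanup (hypothetical flat registry,
not glusterd's pmap code): on a brick disconnect where the process is
confirmed dead, the port entry is dropped so it can be re-allocated safely.

    #include &lt;string.h&gt;

    #define PMAP_BASE  49152
    #define PMAP_COUNT 16384

    /* Hypothetical registry: slot i holds the brick path bound to port
       PMAP_BASE + i; an empty string means the port is free. */
    static char pmap[PMAP_COUNT][256];

    void pmap_cleanup_on_disconnect(const char *brickpath, int process_alive)
    {
        int i;

        if (process_alive)
            return;                  /* transient disconnect: keep the entry */

        for (i = 0; i &lt; PMAP_COUNT; i++) {
            if (strcmp(pmap[i], brickpath) == 0) {
                pmap[i][0] = '\0';   /* what PMAP_SIGNOUT would normally do */
                break;
            }
        }
    }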

&gt; mainline patch : https://review.gluster.org/#/c/18541/

Change-Id: I04c5be6d11e772ee4de16caf56dbb37d5c944303
BUG: 1507747
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 30e0b86aae00430823f2523c6efa3c4ebbf0a478)
</content>
</entry>
<entry>
<title>glusterd: fix brick restart parallelism</title>
<updated>2017-11-06T06:09:21+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-26T08:56:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=44e3c3b5c813168d72f10ecb3c058ac3489c719c'/>
<id>44e3c3b5c813168d72f10ecb3c058ac3489c719c</id>
<content type='text'>
glusterd's brick restart logic is not always sequential, as there are at
least three different ways in which bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the same
brick process a couple of times, because within a fraction of a millisecond
two threads hit glusterd_brick_start () and glusterd had no way of rejecting
either of them, as the brick start criteria were met in both cases.

As a solution, this is controlled with two different flags: a boolean called
start_triggered, which indicates that a brick start has been triggered and
remains true until the brick dies or is killed, and a mutex lock to ensure
that for a particular brick we don't get into glusterd_brick_start () more
than once at the same point in time.
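
A minimal sketch of that scheme (hypothetical types and helpers, not the
glusterd code): the mutex serializes concurrent callers and start_triggered
keeps the start idempotent until the brick actually goes down.

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;

    typedef struct brick {
        bool            start_triggered; /* true until the brick dies or is killed */
        pthread_mutex_t restart_mutex;   /* guards the start attempt for this brick */
    } brick_t;

    static int spawn_brick_process(brick_t *brick)
    {
        (void)brick;
        return 0;                        /* stub: fork/exec the real brick here */
    }

    int brick_start_once(brick_t *brick)
    {
        int ret = 0;

        pthread_mutex_lock(&amp;brick-&gt;restart_mutex);
        if (!brick-&gt;start_triggered) {
            ret = spawn_brick_process(brick);
            if (ret == 0)
                brick-&gt;start_triggered = true;
        }
        pthread_mutex_unlock(&amp;brick-&gt;restart_mutex);

        return ret;
    }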

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1508283
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 82be66ef8e9e3127d41a4c843daf74c1d8aec4aa)
</content>
</entry>
<entry>
<title>glusterd: delete source brick only once in reset-brick commit force</title>
<updated>2017-11-02T09:53:37+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-30T10:25:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4e6dc4f134ed81005bb91f9cb4e18bf5836dffb5'/>
<id>4e6dc4f134ed81005bb91f9cb4e18bf5836dffb5</id>
<content type='text'>
While stopping the brick which is to be reset and replaced, the delete_brick
flag was passed as true, which caused glusterd to free up the source brick
before the actual operation. As a result, commit force fails because it
cannot find the source brickinfo.

&gt; mainline patch : https://review.gluster.org/#/c/18581/

Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
BUG: 1507877
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 0fb8acaa6ff80c43e46deac0ce66b29ae0df0ca4)
</content>
</entry>
<entry>
<title>glusterd: persist brickinfo's port change into glusterd's store</title>
<updated>2017-11-02T09:52:58+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-27T10:34:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9e3244cd177f7ddd4f9697e07bd119e2becb982d'/>
<id>9e3244cd177f7ddd4f9697e07bd119e2becb982d</id>
<content type='text'>
Problem:
Consider a case where a node reboot is performed and prior to the reboot
the brick was listening on 49153. Post reboot, glusterd assigned 49152 to
the brick and started the brick process, but the new port was never
persisted. Now when glusterd restarts, it always reads the port from its
persisted store, i.e. 49153, whereas the pmap sign-in happens with the
correct port, i.e. 49152.

Fix:
Make sure that when glusterd_brick_start is called, glusterd_store_volinfo
is eventually invoked.
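
A tiny sketch of the fix's shape (hypothetical file format and names, not
glusterd's store code): whatever port the brick start hands out is written
back to the persisted store immediately, so a later restart reads the port
the brick is actually listening on.

    #include &lt;stdio.h&gt;

    typedef struct brick {
        int  port;
        char path[256];
    } brick_t;

    static int store_brick_port(const char *statefile, const brick_t *brick)
    {
        FILE *fp = fopen(statefile, "w");
        if (fp == NULL)
            return -1;
        fprintf(fp, "listen-port=%d\n", brick-&gt;port); /* persist the live port */
        fclose(fp);
        return 0;
    }

    int brick_start(brick_t *brick, int assigned_port, const char *statefile)
    {
        brick-&gt;port = assigned_port;               /* e.g. 49152 after reboot */
        /* ... spawn the brick process here ... */
        return store_brick_port(statefile, brick); /* the step that was missing */
    }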

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1507748
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: documenting server.allow-insecure</title>
<updated>2017-10-25T11:47:45+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2017-10-18T06:10:59+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=7d72d5de2f978db30072f72caa559c1f5f543ef6'/>
<id>7d72d5de2f978db30072f72caa559c1f5f543ef6</id>
<content type='text'>
Problem: "server.allow-insecure" is not visible in gluster volume set
help.

Fix: "server.allow-insecure" is defined as NO_DOC type; changing it to
DOC type solves the problem.
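
Illustrative only (hypothetical struct, not glusterd's actual option table):
the same key with its documentation type flipped so that "volume set help"
lists it.

    enum opt_doc_type { NO_DOC, DOC };  /* DOC entries show up in "volume set help" */

    struct vol_opt {
        const char        *key;
        const char        *voltype;
        enum opt_doc_type  type;
    };

    static struct vol_opt opts[] = {
        /* before: { "server.allow-insecure", "protocol/server", NO_DOC }, */
        {  "server.allow-insecure", "protocol/server", DOC },
    };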

Change-Id: I327f1e4c1684ff846deb8b7df07d4d8a09073274
BUG: 1505373
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit c0b08f10ed07bfe06309e31a7fff85cadb733ce2)
</content>
</entry>
<entry>
<title>glusterd: marking all the brick statuses as stopped when a process goes down in brick multiplexing</title>
<updated>2017-10-12T18:49:37+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2017-10-06T22:03:40+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8aa0c34c5301a15a87c0cb168a89cb291e85d741'/>
<id>8aa0c34c5301a15a87c0cb168a89cb291e85d741</id>
<content type='text'>
In a brick multiplexing environment, if a brick process goes down, i.e., if
we kill it with SIGKILL, only the status of the brick for which the process
originally came up changes to stopped; all other brick statuses remain
started. This happens because the process was killed abruptly with SIGKILL,
so the signal handler wasn't invoked and no further cleanup was triggered.

When we try to start a volume using force, it shows an error saying
"Request timed out": since all the brickinfo-&gt;status entries are still in
the started state, we wait for one of the brick processes to come up, which
is never going to happen since the brick process was killed.

To resolve this, in the disconnect event we check all the processes to find
which process the disconnected brick belongs to. Once we get the process, we
call a function named glusterd_mark_bricks_stopped_by_proc(), passing the
brick_proc_t object as an argument.

From the glusterd_brick_proc_t we can get all the bricks attached to that
process, but these are duplicated copies. To get the original brickinfo we
read the volinfo from the brick; the volinfo holds the original brickinfo
copies. We change brickinfo-&gt;status to stopped for all of these bricks.
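
A condensed sketch of the marking step (hypothetical miniature structs; the
real code first resolves the original brickinfo through the volinfo as
described above):

    #include &lt;stddef.h&gt;

    enum brick_status { BRICK_STOPPED, BRICK_STARTED };

    typedef struct brickinfo {
        enum brick_status status;
        struct brickinfo *next;      /* next brick attached to the same process */
    } brickinfo_t;

    typedef struct brick_proc {
        brickinfo_t *bricks;         /* all bricks multiplexed into this process */
    } brick_proc_t;

    /* Rough analogue of glusterd_mark_bricks_stopped_by_proc(): when the shared
       process dies (e.g. via SIGKILL), every brick it carried is marked stopped. */
    void mark_bricks_stopped_by_proc(brick_proc_t *proc)
    {
        brickinfo_t *b;

        for (b = proc-&gt;bricks; b != NULL; b = b-&gt;next)
            b-&gt;status = BRICK_STOPPED;
    }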

&gt;Change-Id: Ifb9054b3ee081ef56b39b2903ae686984fe827e7
&gt;BUG: 1499509
&gt;Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/#/c/18444/
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit 9422446d72bc054962d72ace9912ecb885946d49)

Change-Id: Ifb9054b3ee081ef56b39b2903ae686984fe827e7
BUG: 1501154
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: disallow replace brick for dist only volumes</title>
<updated>2017-10-12T05:38:57+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-09-07T13:44:23+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=08e083ac43f3b35892df808f41d4f9fbe6c2154b'/>
<id>08e083ac43f3b35892df808f41d4f9fbe6c2154b</id>
<content type='text'>
Allowing replace-brick on distribute-only volumes will lead to data loss.
This patch makes replace-brick commit force fail if a volume is distribute
only.
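
A rough sketch of the check (hypothetical names and error text, not the
actual staging code): a plain distribute volume has no replica to rebuild
the data from, so the commit force is refused up front.

    #include &lt;stdio.h&gt;

    enum vol_type { VOL_DISTRIBUTE, VOL_REPLICATE, VOL_DISPERSE };

    int validate_replace_brick(enum vol_type type, char *errstr, size_t len)
    {
        if (type == VOL_DISTRIBUTE) {
            snprintf(errstr, len,
                     "replace-brick is not permitted on distribute-only volumes");
            return -1;               /* fail the commit force */
        }
        return 0;
    }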

Also removing tests/basic/pump.t as it is of no use, as per the discussion
in
http://lists.gluster.org/pipermail/gluster-devel/2017-September/053652.html

&gt;Reviewed-on: https://review.gluster.org/18226
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;(cherry picked from commit 7f70d38b66ce755f848ff0197814457a28b321df)

Change-Id: Iabb0c16f865f3fc361b64a19bfcf0c0fbb5c2682
BUG: 1493975
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli/afr: gluster volume heal info "healed" command output is not appropriate</title>
<updated>2017-10-11T10:02:57+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2016-10-25T14:27:02+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c49fcf570439e47a5e1224436bbaf3f8dd580105'/>
<id>c49fcf570439e47a5e1224436bbaf3f8dd580105</id>
<content type='text'>
Problem: The "gluster volume heal info [healed] [heal-failed]" command
          output on the terminal is not appropriate when a volume is down.

Solution: To make the message more appropriate, change the condition
          in the function "gd_syncop_mgmt_brick_op".

Test: To verify the fix, the following procedure was followed
       1) Create a 2x3 distribute-replicate volume
       2) Set the self-heal daemon off
       3) Kill two bricks (3, 6)
       4) Create some files on the mount point
       5) Bring bricks 3 and 6 up
       6) Kill the other two bricks (2 and 4)
       7) Set the self-heal daemon on
       8) Run "gluster v heal &lt;vol-name&gt;"

Note: After applying the patch, the (healed | heal-failed) options will be
      deprecated from the command line.

&gt; BUG: 1388509
&gt; Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; (Cherry pick from commit d1f15cdeb609a1b720a04a502f7a63b2d3922f41)

BUG: 1500662
Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
</feed>
