<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs-snapshot.git/tests/bugs, branch upstream</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/'/>
<entry>
<title>fuse: Check the return status from state-&gt;resolve_now</title>
<updated>2013-11-15T01:47:59+00:00</updated>
<author>
<name>Vijaykumar M</name>
<email>vmallika@redhat.com</email>
</author>
<published>2013-11-14T07:42:09+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=f21cefed298ba21f4739d6ab4ceea81b97d2aab8'/>
<id>f21cefed298ba21f4739d6ab4ceea81b97d2aab8</id>
<content type='text'>
Change-Id: I85fc6dd393449d365bb908b38c2827b58cb08171
BUG: 1030208
Signed-off-by: Vijaykumar M &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6262
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I85fc6dd393449d365bb908b38c2827b58cb08171
BUG: 1030208
Signed-off-by: Vijaykumar M &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6262
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Release big-lock after log-rotate handler returns</title>
<updated>2013-10-31T05:19:13+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2013-10-22T14:48:28+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=5e96e7d4a976975d8eac0bbf8d0d7ea663665bce'/>
<id>5e96e7d4a976975d8eac0bbf8d0d7ea663665bce</id>
<content type='text'>
... so that subsequent volume commands don't block forever, waiting
for the lock to be released.

Change-Id: I24b5ec47f6982900ab74ff1b492d523f31ecfb7f
BUG: 1022055
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6122
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
... so that subsequent volume commands don't block forever, waiting
for the lock to be released.

Change-Id: I24b5ec47f6982900ab74ff1b492d523f31ecfb7f
BUG: 1022055
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6122
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Improved quota volume reset command</title>
<updated>2013-10-28T18:39:21+00:00</updated>
<author>
<name>Anuradha</name>
<email>atalur@redhat.com</email>
</author>
<published>2013-10-24T09:33:48+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=fc86b3a22ab0519652f74ef8a75cf1cbfa290fb8'/>
<id>fc86b3a22ab0519652f74ef8a75cf1cbfa290fb8</id>
<content type='text'>
The quota volume reset command without the "force"
option has been fixed and no longer fails. It now resets
the unprotected fields and leaves the protected ones untouched.

Also, an appropriate message is provided to the user
for the following cases:
1. Only unprotected fields were reset; the "force" option
must be used to reset protected fields.
2. Both protected and unprotected fields were reset.
3. No field was reset; the "force" option is required.

A test case for this has also been added.

Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
BUG: 1022905
Signed-off-by: Anuradha &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6135
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The quota volume reset command without the "force"
option has been fixed and no longer fails. It now resets
the unprotected fields and leaves the protected ones untouched.

Also, an appropriate message is provided to the user
for the following cases:
1. Only unprotected fields were reset; the "force" option
must be used to reset protected fields.
2. Both protected and unprotected fields were reset.
3. No field was reset; the "force" option is required.

A test case for this has also been added.

Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
BUG: 1022905
Signed-off-by: Anuradha &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6135
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
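<p>
For illustration only, a hedged sketch of how the reset behaviour described
above might be exercised from the CLI. It assumes the standard
'gluster volume reset' syntax and a hypothetical volume named 'testvol';
the messages printed will follow the three cases listed in the commit
message.
</p>
<pre>
# Reset only the unprotected options; protected (e.g. quota-related)
# options are left untouched and the user is told to use "force".
gluster volume reset testvol

# Reset the protected options as well.
gluster volume reset testvol force
</pre>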
</div>
</content>
</entry>
<entry>
<title>cli,glusterd: Changes to cli-glusterd communication</title>
<updated>2013-10-17T18:26:45+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2013-07-03T11:01:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=fc637b14cfad4d08e72bee7064194c8007a388d0'/>
<id>fc637b14cfad4d08e72bee7064194c8007a388d0</id>
<content type='text'>
Glusterd changes:
With this patch, glusterd creates a socket file at
DATADIR/run/glusterd.socket and listens on it for CLI requests. It
serves two RPC programs on the socket file:
- the glusterd CLI RPC program, for all CLI commands
- a reduced glusterd handshake program, just for the 'system:: getspec'
  command

The location of the socket file can be changed with the glusterd option
'glusterd-sockfile'.

To retain compatibility with the '--remote-host' CLI option, glusterd
also listens for CLI requests on port 24007. For the sake of security,
however, it serves a reduced CLI RPC program on that port. The reduced
RPC program contains only the read-only procedures used by the 'volume
(info|list|status)', 'peer status' and 'system:: getwd' CLI commands.

CLI changes:
By default, the gluster CLI now communicates with glusterd over the
glusterd socket file. A new option, '--gluster-sock', has been added to
allow specifying the socket file used to connect. Using the
'--remote-host' option makes the CLI connect to the given host and port.

Test changes:
cluster.rc has been modified to make use of socket files and a
separate log file for each glusterd.
Some of the tests that use cluster.rc have been fixed.

Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53
BUG: 980754
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5280
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Glusterd changes:
With this patch, glusterd creates a socket file at
DATADIR/run/glusterd.socket and listens on it for CLI requests. It
serves two RPC programs on the socket file:
- the glusterd CLI RPC program, for all CLI commands
- a reduced glusterd handshake program, just for the 'system:: getspec'
  command

The location of the socket file can be changed with the glusterd option
'glusterd-sockfile'.

To retain compatibility with the '--remote-host' CLI option, glusterd
also listens for CLI requests on port 24007. For the sake of security,
however, it serves a reduced CLI RPC program on that port. The reduced
RPC program contains only the read-only procedures used by the 'volume
(info|list|status)', 'peer status' and 'system:: getwd' CLI commands.

CLI changes:
By default, the gluster CLI now communicates with glusterd over the
glusterd socket file. A new option, '--gluster-sock', has been added to
allow specifying the socket file used to connect. Using the
'--remote-host' option makes the CLI connect to the given host and port.

Test changes:
cluster.rc has been modified to make use of socket files and a
separate log file for each glusterd.
Some of the tests that use cluster.rc have been fixed.

Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53
BUG: 980754
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5280
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
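<p>
A hedged usage sketch of the connection options described above. The socket
path is the default location named in the commit message (DATADIR depends on
how glusterfs was configured at build time), and the exact
'--gluster-sock=PATH' spelling is an assumption based on the option name
given above.
</p>
<pre>
# Default: the CLI talks to the local glusterd over its socket file.
gluster volume info

# Explicitly point the CLI at a socket file.
gluster --gluster-sock=DATADIR/run/glusterd.socket volume info

# Fall back to TCP port 24007 on a (possibly remote) host; only the
# read-only procedures listed above are served there.
gluster --remote-host=server1 volume status
</pre>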
</div>
</content>
</entry>
<entry>
<title>cluster/afr: [Feature] Command implementation to get heal-count</title>
<updated>2013-10-14T21:41:54+00:00</updated>
<author>
<name>Venkatesh Somyajulu</name>
<email>vsomyaju@redhat.com</email>
</author>
<published>2013-10-07T08:17:47+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=75caba63714c7f7f9ab810937dae69a1a28ece53'/>
<id>75caba63714c7f7f9ab810937dae69a1a28ece53</id>
<content type='text'>
Currently, to find out the number of files to be healed, the user
either has to go to the backend and check the number of entries
present in the indices/xattrop directory, or run the
'gluster volume heal vol-name info' command. If a volume consists
of a large number of bricks, visiting each backend and counting the
entries is time-consuming; likewise, the info command takes a long
time when the number of entries in the indices/xattrop directory is
very large.

So, as a feature, a new command is implemented.

Command 1: gluster volume heal vn statistics heal-count
This command gets the number of entries present in
every brick of a volume. The output displays only the
entry count.

Command 2: gluster volume heal vn statistics heal-count
           replica 192.168.122.1:/home/user/brickname

           This form is used when we are concerned with just one
replica. Providing any one brick of a replica gets the number of
entries to be healed for that replica only.

Example:
Replicate volume with replica count 2.

Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918

NOTE: Out of 1918, 2 entries are &lt;xattrop-gfid&gt; dummy
entries, so the actual number of entries to be healed is
1916.

[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop

Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful

Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available

Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916

Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently, to find out the number of files to be healed, the user
either has to go to the backend and check the number of entries
present in the indices/xattrop directory, or run the
'gluster volume heal vol-name info' command. If a volume consists
of a large number of bricks, visiting each backend and counting the
entries is time-consuming; likewise, the info command takes a long
time when the number of entries in the indices/xattrop directory is
very large.

So, as a feature, a new command is implemented.

Command 1: gluster volume heal vn statistics heal-count
This command gets the number of entries present in
every brick of a volume. The output displays only the
entry count.

Command 2: gluster volume heal vn statistics heal-count
           replica 192.168.122.1:/home/user/brickname

           This form is used when we are concerned with just one
replica. Providing any one brick of a replica gets the number of
entries to be healed for that replica only.

Example:
Replicate volume with replica count 2.

Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918

NOTE: Out of 1918, 2 entries are &lt;xattrop-gfid&gt; dummy
entries, so the actual number of entries to be healed is
1916.

[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop

Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful

Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available

Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916

Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cli,glusterd: Implement 'volume status tasks'</title>
<updated>2013-10-09T06:13:16+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2013-09-24T11:31:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=e51ca3c1c991416895e1e8693f7c3e6332d57464'/>
<id>e51ca3c1c991416895e1e8693f7c3e6332d57464</id>
<content type='text'>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.

The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.

Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.

The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.

Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</pre>
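<p>
A minimal sketch of the polling use-case described above, assuming the
'all' keyword accepted by other 'volume status' forms; the output fields
themselves are not shown here.
</p>
<pre>
# Poll async task information for all volumes every 10 seconds.
while true; do
    gluster volume status all tasks
    sleep 10
done
</pre>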
</div>
</content>
</entry>
<entry>
<title>Tests: Enable foreground self-heal</title>
<updated>2013-10-04T04:28:28+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2013-10-03T06:22:53+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=a25bd2d7695760c9fe35fec39065c9326f2952d6'/>
<id>a25bd2d7695760c9fe35fec39065c9326f2952d6</id>
<content type='text'>
Change-Id: Ibfca8ddb7c663d44ed447be13b2eabb7bd393bb3
BUG: 993981
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6028
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Ibfca8ddb7c663d44ed447be13b2eabb7bd393bb3
BUG: 993981
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6028
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: Create a regression-tests package for distribution</title>
<updated>2013-09-19T16:28:44+00:00</updated>
<author>
<name>Harshavardhana</name>
<email>harsha@harshavardhana.net</email>
</author>
<published>2013-08-21T07:50:41+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=5d94695e9ad42343e72918024c046f22fe4941a0'/>
<id>5d94695e9ad42343e72918024c046f22fe4941a0</id>
<content type='text'>
As of today, the regression tests are an in-house breed; making
them a new package and distributing it ensures that a larger set
of people use them and contribute to them. The package can also be
used by any consumer/user to build their own environment for
glusterfs regression testing, which is today limited only to
'upstream' 'glusterfs' releases and build.gluster.org.

Change-Id: I4f7e9fd1c49982dcf0d788ef6a83ffe895a956ac
BUG: 764966
Signed-off-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Reviewed-on: http://review.gluster.org/5674
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
As of today, the regression tests are an in-house breed; making
them a new package and distributing it ensures that a larger set
of people use them and contribute to them. The package can also be
used by any consumer/user to build their own environment for
glusterfs regression testing, which is today limited only to
'upstream' 'glusterfs' releases and build.gluster.org.

Change-Id: I4f7e9fd1c49982dcf0d788ef6a83ffe895a956ac
BUG: 764966
Signed-off-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Reviewed-on: http://review.gluster.org/5674
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cli/glusterd: improve rebalance fix-layout status reporting</title>
<updated>2013-09-19T16:22:36+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2013-09-13T13:18:38+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=c550ae69526ad60b2f288ddc98a59141b9e64dcc'/>
<id>c550ae69526ad60b2f288ddc98a59141b9e64dcc</id>
<content type='text'>
Problem:
Currently, the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.

Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.

Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Currently, the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.

Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.

Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
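<p>
For context, a short sketch of the two operations whose status output this
change distinguishes; 'testvol' is a hypothetical volume name.
</p>
<pre>
# Full rebalance: layouts are fixed and data is migrated.
gluster volume rebalance testvol start
gluster volume rebalance testvol status

# Fix-layout only: directory layouts are recalculated, no data is moved.
gluster volume rebalance testvol fix-layout start
gluster volume rebalance testvol status
</pre>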
</div>
</content>
</entry>
<entry>
<title>mgmt/glusterd: Update sub_count on remove brick</title>
<updated>2013-09-12T05:51:36+00:00</updated>
<author>
<name>Vijay Bellur</name>
<email>vbellur@redhat.com</email>
</author>
<published>2013-09-10T19:56:13+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-snapshot.git/commit/?id=643533c77fd49316b7d16015fa1a008391d14bb2'/>
<id>643533c77fd49316b7d16015fa1a008391d14bb2</id>
<content type='text'>
Change-Id: I7c17de39da03c6b2764790581e097936da406695
BUG: 1002556
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5893
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I7c17de39da03c6b2764790581e097936da406695
BUG: 1002556
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5893
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
