<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster/dht/src/dht-common.h, branch v3.4.7beta2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/dht: Rename should not fail post hardlink creation</title>
<updated>2014-10-20T14:56:00+00:00</updated>
<author>
<name>Shyam</name>
<email>srangana@redhat.com</email>
</author>
<published>2014-09-10T04:25:11+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3ade63522322d3c09b603a802eb1058cf4d0b50d'/>
<id>3ade63522322d3c09b603a802eb1058cf4d0b50d</id>
<content type='text'>
In the rename path, we wind the creation of the newname hardlink and
the linkto file on the destination hashed subvolume at the same time.
If the linkto creation fails but the link creation succeeds, we enter
the failure code path and clean up the created newname hardlink.

In the interim, if another client looks up newname via FUSE and finds
it as a hardlink, it could send an unlink for oldname instead of a
rename. Combined with the cleanup code above, this could end up
removing all copies of the file, thereby losing data.

This fix separates these steps into two parts, creating the linkto
file first and the link file only afterwards, so that once the link
file is created no failure cleans up the newname file. If the linkto
creation fails, the link is not attempted, so the namespace is not
polluted with newname.
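
A minimal sketch of the new ordering, using hypothetical helper names
(create_linkto, create_newname_link) in place of the actual wound fops:

    /* Hypothetical stand-ins for the wound fops; not the real dht API. */
    int create_linkto (const char *newname);
    int create_newname_link (const char *oldname, const char *newname);

    /* Create the linkto on the destination hashed subvolume first; only
     * if that succeeds is the newname hardlink created. A linkto failure
     * never requires cleaning up a visible newname, and once the
     * hardlink exists no failure path unlinks it. */
    int
    rename_prepare (const char *oldname, const char *newname)
    {
            if (create_linkto (newname) != 0)
                    return -1;  /* newname never existed; nothing to clean */
            return create_newname_link (oldname, newname);
    }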

Change-Id: I61da8e906060da16a31ea1076eec2f01fd617f44
BUG: 1139998
Signed-off-by: Shyam &lt;srangana@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8570
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8683
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: synchronize rename and file-migration</title>
<updated>2014-10-20T14:55:36+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2014-09-10T03:35:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c527449c0504828378f33ed32d1e7fe2e2c51c6f'/>
<id>c527449c0504828378f33ed32d1e7fe2e2c51c6f</id>
<content type='text'>
Change-Id: I4f243c946f76d440680b651235f925e3d0ebf0fd
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8523
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
BUG: 1139998
Reviewed-on: http://review.gluster.org/8681
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: introduce locking api.</title>
<updated>2014-10-20T14:55:11+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2014-08-11T04:44:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fcd256faa951f459e344ced46f0cc409a5bbc147'/>
<id>fcd256faa951f459e344ced46f0cc409a5bbc147</id>
<content type='text'>
Change-Id: I41389ba91951d3e63e617aa32cd0bee848261c72
BUG: 1139998
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8521
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8679
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Prevent dht_access from going into a loop.</title>
<updated>2014-10-20T14:54:49+00:00</updated>
<author>
<name>shishir gowda</name>
<email>sgowda@redhat.com</email>
</author>
<published>2013-07-11T08:14:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=91175b38c9264676d75a275c16add45f7c64f4c1'/>
<id>91175b38c9264676d75a275c16add45f7c64f4c1</id>
<content type='text'>
If access fails with ENOTCONN, do not wind to the same subvolume.
We wind to the first-up subvolume when access fails with ENOTCONN.
In some cases, if dht has only one subvolume and access fails with
ENOTCONN, we go into an infinite loop of winding to the same subvolume.

The fix is to check whether we previously wound to the same subvolume,
and to fail if the first-up subvolume is the same.
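
A minimal sketch of the guard, with a hypothetical opaque subvol type
standing in for the translator's subvolume objects:

    #include &lt;errno.h&gt;
    #include &lt;stddef.h&gt;

    typedef struct subvol subvol_t;  /* opaque stand-in for a subvolume */

    /* Pick the subvolume to re-wind an access call to after ENOTCONN.
     * Returns NULL (and sets *op_errno) when retrying would wind to the
     * subvolume we already tried, which is what caused the loop. */
    subvol_t *
    access_retry_target (subvol_t *prev, subvol_t *first_up, int *op_errno)
    {
            if (first_up == NULL || first_up == prev) {
                    *op_errno = ENOTCONN;
                    return NULL;
            }
            return first_up;
    }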

Change-Id: Ib5d3ce7d33e8ea09147905a7df1ed280874fa549
BUG: 1139996
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5319
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8677
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Fix races to avoid deletion of linkto file</title>
<updated>2014-10-20T14:53:56+00:00</updated>
<author>
<name>Venkatesh Somyajulu</name>
<email>vsomyaju@redhat.com</email>
</author>
<published>2014-09-25T11:54:09+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b3387c83a0d968db7428920f46434f09f4922e53'/>
<id>b3387c83a0d968db7428920f46434f09f4922e53</id>
<content type='text'>
Explanation of the race between the rebalance processes:
https://bugzilla.redhat.com/show_bug.cgi?id=1110694#c4

scenario-1:
===========

STATE 1:                          BRICK-1
only one brick                   Cached File
in the system

STATE 2:
Add brick-2                       BRICK-1                BRICK-2

STATE 3:                                       Lookup of File on brick-2
                                               by this node's rebalance
                                               will fail because hashed
                                               file is not created yet.
                                               So dht_lookup_everywhere is
                                               about to get called.

STATE 4:                         As part of lookup
                                 link file at brick-2
                                 will be created.

STATE 5:                         getxattr to check that
                                 cached file belongs to
                                 this node is done

STATE 6:

                                            dht_lookup_everywhere_cbk detects
                                            the link created by rebalance-1.
                                            It will unlink it.

STATE 7:                        getxattr on the link
                                file with the "pathinfo" key
                                will be called and will fail,
                                as the link file was deleted
                                by rebalance on node-2

Fix:
In STATE 6 we should avoid deleting the link file. Every time
dht_lookup_everywhere gets called, a lookup is performed on all the nodes.
So, to avoid STATE 6, a linkto file that is found is not deleted until a
valid case is identified in dht_lookup_everywhere_done.

Case 1: If the linkto file points to the cached node and the cached file
        exists, unwind with success.

Case 2: If the linkto file does not point to the current cached node and
        the cached file exists:
        a) Unlink the stale linkto file
        b) Create a new linkto file

Case 3: Only the linkto file exists:
        Delete the linkto file

Case 4: Only the cached file exists:
        Create the linkto file (handled even without this patch)

Case 5: Neither the cached nor the hashed file is present:
        Return ENOENT (handled even without this patch)
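
A minimal sketch of the resulting decision table in
dht_lookup_everywhere_done, with invented names for the inputs (what the
lookups found) and for the actions:

    #include &lt;stdbool.h&gt;

    typedef enum {
            ACT_UNWIND_OK,      /* Case 1 */
            ACT_RELINK,         /* Case 2: unlink stale linkto, create new */
            ACT_DELETE_LINKTO,  /* Case 3 */
            ACT_CREATE_LINKTO,  /* Case 4 */
            ACT_UNWIND_ENOENT   /* Case 5 */
    } action_t;

    action_t
    lookup_done_action (bool cached_found, bool linkto_found,
                        bool linkto_points_to_cached)
    {
            if (cached_found) {
                    if (!linkto_found)
                            return ACT_CREATE_LINKTO;       /* Case 4 */
                    if (linkto_points_to_cached)
                            return ACT_UNWIND_OK;           /* Case 1 */
                    return ACT_RELINK;                      /* Case 2 */
            }
            if (linkto_found)
                    return ACT_DELETE_LINKTO;               /* Case 3 */
            return ACT_UNWIND_ENOENT;                       /* Case 5 */
    }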

Reviewed-on: http://review.gluster.org/8231

******************************************************************************

scenario-2:
===========
cluster/dht: Modified logic of linkto file deletion on non-hashed

Currently, whenever dht_lookup_everywhere gets called and
dht_lookup_everywhere_cbk finds a linkto file on a non-hashed
subvolume, the file is unlinked. But there are cases where this file
is under migration, and under such conditions we should avoid
deleting it.

When some other rebalance process changes the layout of the parent
such that dst_file (w.r.t. migration) falls on a non-hashed node,
lookup may have found it as a linkto file, but just before the unlink
the file may be under migration or already migrated. In such cases
the unlink can be avoided.

Race:
-------
If we have two bricks (brick-1 and brick-2) with an initial file "a"
under BaseDir, which is both hashed and cached on brick-1.

Assume "a"  hashing gives 44.

                              Brick-1              Brick-2

Initial Setup:               BaseDir/a             BaseDir
                             [1-50]                [51-100]

Now add a new brick, Brick-3.

1. Rebalance-1 on Node-1 (the brick-1 node) will reset
the BaseDir layout.

2. After that it will perform:
a)  Creation of the linkto file on the new hashed subvolume (brick-2)
b)  File migration.

1. Rebalance-1 fixes the base layout:
                 Brick-1             Brick-2           Brick-3
                 ---------         ----------         ------------
                 BaseDir/a            BaseDir           BaseDir
                  [1-33]              [34-66]           [67-100]

2. Only a) is     BaseDir/a          BaseDir/a(linkto)   BaseDir
   performed                         Create linkto file

Now rebalance-2 on node-2 jumps in and will perform
steps 1 and 2-a.

After (rebalance-2, step-1), it changes the layout of BaseDir:
                    BaseDir/a     BaseDir/a(link)    BaseDir
                    [67-100]           [1-33]        [34-66]

For (rebalance-2, step-2), it will perform a lookup on Brick-3, since
in the new layout 44 falls on brick-3. But the lookup will fail, so
dht_lookup_everywhere gets called.

NOTE: A linkto file was created on brick-2 by rebalance-1.

Currently that linkto file gets deleted by the rebalance-2 lookup, as
it is considered a stale linkto file. But with this patch, if the
rebalance is already in progress or is over, the linkto file will not
be unlinked: if the rebalance is in progress an fd will be open on the
file, and if the rebalance is over the linkto xattr will no longer be
set.

Reviewed-on: http://review.gluster.org/8345

*******************************************************************************

scenario-3:
===========

cluster/dht: Added keys in dht_lookup_everywhere_done

This covers the case where both the cached (C1) and hashed files are
found, but the hashed file does not point to the above cached node (C1):
do not unlink if either an fd is open on the hashed file or the
linkto xattr is not found on it.
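
A minimal sketch of the combined guard from scenarios 2 and 3, again
with invented names; the two conditions model the open fd (migration in
progress) and the missing linkto xattr (migration complete):

    #include &lt;stdbool.h&gt;

    /* A linkto file found where the layout says it should not be may
     * belong to an in-flight migration: leave it alone if an fd is
     * still open on it, or if the linkto xattr is gone because it has
     * already been turned into a data file. */
    bool
    may_unlink_stale_linkto (bool fd_open_on_hashed, bool linkto_xattr_present)
    {
            if (fd_open_on_hashed)
                    return false;  /* rebalance is mid-migration */
            if (!linkto_xattr_present)
                    return false;  /* migration finished; now a data file */
            return true;
    }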

Reviewed-on: http://review.gluster.org/8429
BUG: 1139995
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Change-Id: I86d0a21d4c0501c45d837101ced4f96d6fedc5b9
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: susant palai &lt;spalai@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8674
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>dht: fix rename race</title>
<updated>2014-10-20T14:52:48+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2014-07-09T01:56:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a7356806f9f7c148d4e0a972f6a418d1ca82bcd0'/>
<id>a7356806f9f7c148d4e0a972f6a418d1ca82bcd0</id>
<content type='text'>
If two clients try to rename the same file at the same time, we
sometimes end up with *no file at all* in either the old or new
location.  That's kind of bad.  The culprit seems to be some overly
aggressive cleanup code.  AFAICT, based on today's study of the code,
the intent of the changed section is to remove any linkfile we might
have created before the actual rename.  However, what we're removing
might not be our extra link.  If we're racing with another client that's
also doing a rename, it might be the only remaining link to the user's
data.  The solution, which is good enough to pass this test but almost
certainly still not complete, is to be more selective about when we do
this unlink.  Now, we only do it if we know that, at some point, we did
in fact create the link without error (notably ENOENT on the source or
EEXIST on the destination) ourselves.
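
A minimal sketch of the idea, with a hypothetical per-call flag; the
real patch presumably tracks the equivalent state in the rename's call
context:

    #include &lt;stdbool.h&gt;

    void unlink_newname_link (void);  /* hypothetical cleanup fop */

    /* Only remove the newname link if this client created it. A link we
     * did not create may be the last remaining copy of the user's data,
     * planted by a racing rename from another client. */
    void
    rename_cleanup (bool we_created_link)
    {
            if (we_created_link)
                    unlink_newname_link ();
    }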

Change-Id: I8d8cce150b6f8b372c9fb813c90be58d69f8eb7b
BUG: 1139988
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8269
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8672
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>DHT/readdirp: Directory not shown/healed on mount point if it exists on a single brick (non-first-up subvolume).</title>
<updated>2014-10-20T14:52:14+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2014-05-13T16:56:17+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bc75418e1e8eacb1978bdf7f7154d49895ffd544'/>
<id>bc75418e1e8eacb1978bdf7f7154d49895ffd544</id>
<content type='text'>
Problem: If a snapshot is taken when mkdir has succeeded only on the
hashed subvolume, then after restoring the snapshot the directory is
not shown on the mount point.

Why: dht_readdirp takes into account only those directory entries
which are present on the first_up_subvolume. Hence, if the hashed
subvolume is not the same as the first_up_subvolume, the directory
won't be listed on the mount point and also won't be healed.

Solution:
Case 1: (Rebalance not running) If the hashed subvolume is NULL or down,
then filter in first_up_subvolume; otherwise the corresponding hashed
subvolume will take care of the directory entry.

Case 2: If the readdirp-optimize option is turned on, then read from
first_up_subvol.
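
A minimal sketch of the resulting per-entry filter, with invented
parameter names for the state dht_readdirp would consult:

    #include &lt;stdbool.h&gt;

    /* Decide whether this subvolume should report a directory entry.
     * With readdirp-optimize only the first-up subvolume reports it
     * (Case 2); otherwise the hashed subvolume owns the entry, falling
     * back to first-up when the hashed subvolume is NULL or down
     * (Case 1), so the directory is still listed and can be healed. */
    bool
    report_dir_entry (bool subvol_is_first_up, bool subvol_is_hashed,
                      bool hashed_null_or_down, bool readdirp_optimize)
    {
            if (readdirp_optimize)
                    return subvol_is_first_up;   /* Case 2 */
            if (hashed_null_or_down)
                    return subvol_is_first_up;   /* Case 1 */
            return subvol_is_hashed;
    }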

Change-Id: Idaad28f1c9f688dbfb1a8a3ab8b244510c02365e
BUG: 1139986
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7599
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8671
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Tested-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>Revert "core: fix errno for non-existent GFID"</title>
<updated>2013-12-24T11:31:53+00:00</updated>
<author>
<name>Vijay Bellur</name>
<email>vbellur@redhat.com</email>
</author>
<published>2013-12-23T10:39:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=63a3a0dd297f0615517de399f54df99041577a35'/>
<id>63a3a0dd297f0615517de399f54df99041577a35</id>
<content type='text'>
This reverts commit 837422858c2e4ab447879a4141361fd382645406

Change-Id: I0909f26ce088454bb14b3694b489c672286a4ae6
Reviewed-on: http://review.gluster.org/6575
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Del GF_READDIR_SKIP_DIRS key from dict for first_up</title>
<updated>2013-12-14T13:28:32+00:00</updated>
<author>
<name>shishir gowda</name>
<email>gowda.shishir@gmail.com</email>
</author>
<published>2013-12-10T09:12:05+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=6677b97f2d17699c74779922cf310adf8bff5558'/>
<id>6677b97f2d17699c74779922cf310adf8bff5558</id>
<content type='text'>
Currently, we send GF_READDIR_SKIP_DIRS for all subvolumes if
first_subvol != first_up_subvolume.

Also, first_up_subvolume can change within the lifetime of a call and
its callback, so we save the first_up_subvol in dht_local for the checks.

Backport of fix http://review.gluster.org/5577
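
A minimal sketch of the snapshotting idea, with hypothetical names; the
point is that the callback compares against the value captured at wind
time rather than a fresh, possibly changed, first-up lookup:

    typedef struct subvol subvol_t;  /* opaque stand-in for a subvolume */

    subvol_t *first_up_subvol (void);  /* hypothetical; result can change */

    struct dir_local {
            subvol_t *first_up;  /* snapshot taken once, at wind time */
    };

    void
    wind_readdir (struct dir_local *local)
    {
            local-&gt;first_up = first_up_subvol ();  /* capture once */
            /* ... wind to subvolumes; the cbk then tests against
             * local-&gt;first_up, so call and callback agree even if the
             * first-up subvolume changes in between. */
    }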

BUG: 996474
Change-Id: I67b5bbe781e12812557b569b7d0a0beba4224159
Signed-off-by: shishir gowda &lt;gowda.shishir@gmail.com&gt;
Reviewed-on: http://review.gluster.org/6468
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: fix errno for non-existent GFID</title>
<updated>2013-11-26T19:49:05+00:00</updated>
<author>
<name>Anand Avati</name>
<email>avati@redhat.com</email>
</author>
<published>2013-11-21T14:48:17+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=837422858c2e4ab447879a4141361fd382645406'/>
<id>837422858c2e4ab447879a4141361fd382645406</id>
<content type='text'>
When clients refer to a GFID which does not exist, the errno to be
returned is ESTALE (and not ENOENT). Even though ENOENT might look
"proper" most of the time, since the application eventually expects
ENOENT even if a parent directory does not exist, not returning ESTALE
causes the resolvers (FUSE and GFAPI) not to retry the resolution in
uncached mode. This can result in spurious ENOENTs during concurrent
path-modification operations.
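
A minimal sketch of the distinction, with invented names; the point is
which errno a handle-based (GFID) miss returns versus a name-based miss:

    #include &lt;errno.h&gt;
    #include &lt;stdbool.h&gt;

    /* A missing GFID means the client's handle is stale: return ESTALE
     * so the FUSE/GFAPI resolvers retry the resolution in uncached mode.
     * A missing basename under a live parent is an ordinary ENOENT. */
    int
    resolve_errno (bool gfid_exists, bool name_exists)
    {
            if (!gfid_exists)
                    return ESTALE;
            if (!name_exists)
                    return ENOENT;
            return 0;
    }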

Change-Id: I7a06ea6d6a191739f2e9c6e333a1969615e05936
BUG: 1032894
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6322
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
</feed>
