<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs-nsr.git/xlators/mgmt/glusterd, branch master</title>
<subtitle>[no description]</subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/'/>
<entry>
<title>Merge branch 'upstream'</title>
<updated>2014-04-28T14:18:50+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2014-04-28T14:18:50+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=e139b4d0ba2286c0d4d44ba81260c2b287016019'/>
<id>e139b4d0ba2286c0d4d44ba81260c2b287016019</id>
<content type='text'>
Conflicts:
	rpc/xdr/src/glusterfs3-xdr.c
	rpc/xdr/src/glusterfs3-xdr.h
	xlators/features/changelog/src/Makefile.am
	xlators/features/changelog/src/changelog-helpers.h
	xlators/features/changelog/src/changelog.c
	xlators/mgmt/glusterd/src/glusterd-sm.c

Change-Id: I9972a5e6184503477eb77a8b56c50a4db4eec3e2
</content>
</entry>
<entry>
<title>glusterd/snapshot: Compare and update snapshots during peer handshake</title>
<updated>2014-04-28T11:02:22+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-22T00:52:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=54a5a42848870ee17b923c6c37d65fdfe4a5fec9'/>
<id>54a5a42848870ee17b923c6c37d65fdfe4a5fec9</id>
<content type='text'>
During a peer handshake, after the volumes and the list of missed
snapshots have synced, the node performs the pending deletes and
restores on this list. At this point the node's current snapshot list
is up to date, so if a conflict arises during the snapshot handshake,
the peer hosting the bricks is given precedence.
However, if there is a conflict and both peers are in the same state,
i.e. either both host bricks or neither does, then a decision can't be
taken and a peer-reject happens.

glusterd_compare_and_update_snap() implements the following algorithm to
perform the above task:
Step  1: Start.
Step  2: Check if the peer is missing a delete on the said snap.
         If yes, goto step 6.
Step  3: Check if there is a conflict between the peer's data and the
         local snap. If no, goto step 5.
Step  4: As there is a conflict, check if both the peer and the local nodes
         are hosting bricks. Based on the results perform the following:
         Peer Hosts Bricks    Local Node Hosts Bricks       Action
               Yes                     Yes                Goto Step 7
               No                      No                 Goto Step 7
               Yes                     No                 Goto Step 8
               No                      Yes                Goto Step 6
Step  5: Check if the local node is missing the peer's data.
         If yes, goto step 9.
Step  6: It's a no-op. Goto step 10.
Step  7: Peer Reject. Goto step 10.
Step  8: Delete local node's data.
Step  9: Accept Peer Data.
Step 10: Stop.
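
A minimal C sketch of how the steps above pick an action; the function
and parameter names are illustrative placeholders, not the actual
internals of glusterd_compare_and_update_snap():

/* Illustrative only: maps the inputs to the step the algorithm reaches. */
static int
next_step (int peer_missed_delete, int conflict,
           int peer_hosts_bricks, int local_hosts_bricks,
           int local_missing_peer_data)
{
        if (peer_missed_delete)                      /* Step 2 */
                return 6;                            /* no-op             */
        if (!conflict)                               /* Step 3 */
                return local_missing_peer_data       /* Step 5 */
                       ? 9                           /* accept peer data  */
                       : 6;                          /* no-op             */
        if (peer_hosts_bricks == local_hosts_bricks) /* Step 4: Yes/Yes or No/No */
                return 7;                            /* peer reject       */
        if (peer_hosts_bricks)                       /* Yes/No            */
                return 8;                            /* delete local data */
        return 6;                                    /* No/Yes: no-op     */
}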

Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7525
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Rename the export dictionary as peer_data</title>
<updated>2014-04-28T11:02:03+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-21T03:32:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=a7c8d514c0487019d218c327deb52f7d09645875'/>
<id>a7c8d514c0487019d218c327deb52f7d09645875</id>
<content type='text'>
During a glusterd handshake, a dictionary is passed among
the peers which contains info of volumes, global opts,
and now also info of snaps and the list of missed snaps.

As it now contains more than just volume-specific data,
the dict is renamed throughout the code-base from "vols" to "peer_data".

Change-Id: Ib457172789ddd0d8978b08bceab0988c48e9eea7
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7524
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: Recreate the mount dirs and mount the lvm snapshots on node reboot.</title>
<updated>2014-04-28T10:51:29+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-03T03:36:28+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=b46d0ba04901ebca81d0f477e3e9ac6ba8607946'/>
<id>b46d0ba04901ebca81d0f477e3e9ac6ba8607946</id>
<content type='text'>
The LVM snapshots of the bricks are mounted at /var/run/gluster/snaps/ or
/run/gluster/snaps. As these paths are on a tmpfs, they are removed on reboot.
So when glusterd starts, we need to recreate these paths, activate the
respective logical volumes (the LVM snapshots of the bricks), and mount
them back at their respective paths.
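
A rough, hypothetical sketch of the per-brick recovery steps; the real
glusterd code uses its own runner and store helpers rather than system(3):

#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;sys/stat.h&gt;

/* Illustrative only; error handling and parent-directory creation are
 * simplified. */
static int
remount_snap_brick (const char *vg, const char *lv, const char *brick_path)
{
        char cmd[4096];

        /* 1. Recreate the mount directory lost from tmpfs on reboot. */
        mkdir (brick_path, 0755);

        /* 2. Activate the snapshot logical volume. */
        snprintf (cmd, sizeof (cmd), "lvchange -a y /dev/%s/%s", vg, lv);
        if (system (cmd))
                return -1;

        /* 3. Mount the logical volume back at the brick path. */
        snprintf (cmd, sizeof (cmd), "mount /dev/%s/%s %s", vg, lv, brick_path);
        return system (cmd) ? -1 : 0;
}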

Change-Id: Ic5ef61e79a25d9830df717c592391965fe09db62
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7452
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: Perform missed snap deletes and restores.</title>
<updated>2014-04-28T10:50:18+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-07T06:02:10+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=5d9172e0b3e14795db7aba321cfcac428a201399'/>
<id>5d9172e0b3e14795db7aba321cfcac428a201399</id>
<content type='text'>
Replacing is_volume_restored(gf_boolean_t) with
restored_from_snap(uuid_t) in glusterd_volinfo_

Also moved gd_restore_snap_volume from glusterd-volgen.c
to glusterd-snapshot.c.

Change-Id: Ic615a1658cfaffa98d4590506ac82f20bf709ad6
BUG: 1089906
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7455
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>Barrier: Barrier translator options configuration</title>
<updated>2014-04-28T05:25:32+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2014-03-03T12:30:59+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=22f47322d246c94d0bec8e893e4837a67d39f544'/>
<id>22f47322d246c94d0bec8e893e4837a67d39f544</id>
<content type='text'>
Add barrier enable/disable and barrier-timeout configuration options to the barrier translator.

Change-Id: I7cbf9cd4f5e55d42dcc6b7cd6827234566c7b6f3
BUG: 1060002
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7177
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: Adding snap_vol_id and snap_uuid to missed_snap_list</title>
<updated>2014-04-28T05:00:20+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-07T05:25:28+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=d7b3e068290c41b13ecd664771814202d7d26881'/>
<id>d7b3e068290c41b13ecd664771814202d7d26881</id>
<content type='text'>
Persisting missed snapshot info on disk as well as in memory in
the following format:
-------------NODE-UUID--------------:--------------SNAP-UUID-------------=---------SNAP-VOL-ID------------:BRICKNUM:-------BRICKPATH--------:OPERATION:STATUS
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13:   3    :/brick/brick-dirs/brick3:    1    :   1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13:   3    :/brick/brick-dirs/brick3:    3    :   1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=83a3cc05453b46b2a7eda4c9a9208638:   3    :/brick/brick-dirs/brick3:    1    :   1

This data will be stored on disk at /var/lib/glusterd/snaps/missed_snaps_list

In memory we maintain the data as a list of glusterd_missed_snap_info
in conf; the key for this list is the first two fields,
i.e. NODE-UUID:SNAP-UUID.

For every NODE-UUID:SNAP-UUID there can be multiple operations missed
on multiple bricks, so we maintain a list of glusterd_snap_op_t
for every node of glusterd_missed_snap_info.
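
Roughly, the in-memory records could be pictured as below; the type and
field names here are illustrative only and differ from the actual
definitions in glusterd:

/* Illustrative layout only, not the real glusterd structs. */
typedef struct snap_op {
        char           *snap_vol_id;  /* SNAP-VOL-ID                        */
        int             brick_num;    /* BRICKNUM                           */
        char           *brick_path;   /* BRICKPATH                          */
        int             op;           /* OPERATION (create/delete/restore)  */
        int             status;       /* STATUS (e.g. pending, done)        */
        struct snap_op *next;         /* next missed op for this key        */
} snap_op_t;

typedef struct missed_snap {
        char               *node_uuid; /* NODE-UUID: first half of the key  */
        char               *snap_uuid; /* SNAP-UUID: second half of the key */
        snap_op_t          *snap_ops;  /* ops missed on this node's bricks  */
        struct missed_snap *next;      /* next entry in the in-memory list  */
} missed_snap_info_t;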

This list is maintained or updated during snapshot create, delete, and restore
operations, which are the only operations that, if missed, are recorded in this
list.

During snapshot create, if a node or a brick is down, we don't
receive its mount point info. The snap_status of such bricks is marked as
-1, and their brick details are added to this list.

During snapshot delete, the originator node checks whether any other
nodes holding bricks of the said snap are down; those are also added to the list.
Likewise, if a node is up but the snapshot was pending for one of its snap
bricks (its snap_status is -1), that brick is added to the list too.
When a subsequent delete entry is processed for an already existing
create entry, we just mark the create entry's status as done (2) and don't
add the delete entry to the list.

During snapshot restore, the originator node checks whether any other
nodes holding bricks of the said snap are down; those are also added to the list.
Likewise, if a node is up but the snapshot was pending for one of its snap
bricks (its snap_status is -1), that brick is added to the list too.
As with delete, when a subsequent restore entry is processed for an already
existing create entry, we just mark the create entry's status as done (2) and
don't add the restore entry to the list.

Change-Id: I54f63e28d3c40555d0f84528f38227103171f594
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7454
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot-handshake: Perform handshake of missed_snaps_list.</title>
<updated>2014-04-26T06:53:42+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-02T05:39:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=0af287791f0d50b5d2975cb2e2c902c797b05860'/>
<id>0af287791f0d50b5d2975cb2e2c902c797b05860</id>
<content type='text'>
In a handshake, create a union of the missed_snap_lists of the two peers.
If an entry is already present, it's a no-op.
If an entry is pending and the peer's entry is done, mark our own entry as done.
If an entry is done and the peer's entry is pending, it's a no-op.
If it's a new entry, add it.
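
A compact sketch of the merge rule applied to each entry during the
handshake; the status codes and names are placeholders, not the actual
glusterd code:

/* Illustrative only. Returns the status our own list should hold for an
 * entry after comparing it with the peer's copy of the same entry. */
enum { STATUS_ABSENT = 0, STATUS_PENDING = 1, STATUS_DONE = 2 };

static int
merge_missed_snap_status (int local_status, int peer_status)
{
        if (local_status == STATUS_ABSENT)       /* new entry: take peer's    */
                return peer_status;
        if (local_status == STATUS_PENDING)
                if (peer_status == STATUS_DONE)  /* peer already performed it */
                        return STATUS_DONE;
        return local_status;                     /* otherwise it's a no-op    */
}

Conceptually, the union is just this rule applied to every entry gathered
from both peers' lists.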

Change-Id: Idbfa49cc34871631ba8c7c56d915666311024887
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7453
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>build: MacOSX Porting fixes</title>
<updated>2014-04-24T21:41:48+00:00</updated>
<author>
<name>Harshavardhana</name>
<email>harsha@harshavardhana.net</email>
</author>
<published>2014-04-17T22:54:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=a3cb38e3edf005bef73da4c9cfd958474a14d50f'/>
<id>a3cb38e3edf005bef73da4c9cfd958474a14d50f</id>
<content type='text'>
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

Working functionality on MacOSX

 - GlusterD (management daemon)
 - GlusterCLI (management cli)
 - GlusterFS FUSE (using OSXFUSE)
 - GlusterNFS (without NLM - issues with rpc.statd)

Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Signed-off-by: Dennis Schafroth &lt;dennis@schafroth.com&gt;
Tested-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Tested-by: Dennis Schafroth &lt;dennis@schafroth.com&gt;
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'upstream'</title>
<updated>2014-04-22T15:37:09+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2014-04-22T15:37:09+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs-nsr.git/commit/?id=a827c5eab32a43ade5551259ea56a6a1af7e861b'/>
<id>a827c5eab32a43ade5551259ea56a6a1af7e861b</id>
<content type='text'>
Conflicts:
	glusterfs.spec.in
	xlators/mgmt/glusterd/src/Makefile.am
	xlators/mgmt/glusterd/src/glusterd-utils.c
	xlators/mgmt/glusterd/src/glusterd.h

Change-Id: I27bdcf42b003cfc42d6ad981bd2bf8180176806d
</content>
</entry>
</feed>
