Until now we had a simple makefile for checking dependencies and
building. Using libtool gives more control over dependency checks
and more flexibility.
This patch also introduces an RPM build target.
Compiling:
$ ./autogen.sh
$ ./configure
$ make -j
$ make install
Building RPMs:
$ make rpms
Running:
$ systemctl start gluster-blockd.service
Using the CLI:
$ gluster-block help
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
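
For context, a minimal sketch of the kind of configure.ac that
./autogen.sh and ./configure build on; the macro set and checks in the
project's real file will differ:

# configure.ac -- illustrative sketch only; the project's actual file
# carries its own dependency checks.
AC_INIT([gluster-block], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
LT_INIT                          # pull in libtool support
AC_CONFIG_FILES([Makefile])
AC_OUTPUT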
|
Better naming of variables and functions, variable initialization;
also fix a few leaks.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
|
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
|
From now on we have two RPC connections:
1. Between the gluster-block CLI and the local gluster-blockd.
   This connection uses the UNIX/local netid, listening on the
   /var/run/gluster-blockd.socket file. The CLI always sends and
   receives commands to/from the local gluster-blockd over this
   local RPC connection.
2. Between gluster-blockd's, i.e. the gluster-blockd local to the CLI
   and the gluster-blockd's running on the remote block hosts.
   This is a TCP connection; the RPC server listens on port 24006.
Also, gluster-blockd is now multi-threaded (two threads, as of now).
Let's walk through a Create request to see what each thread handles.
Thread 1 (the CLI thread):
* Listens on the local RPC socket.
* Generates the GBID (a UUID) and creates an entry named GBID in the
  given volume, with the requested size.
* Sends the configuration requests to the remote hosts and waits for
  the replies.
  (Hint: at this point read Thread 2, then come back.)
* Returns the result to the CLI.
Thread 2 (the server thread):
* Listens on port 24006.
* On receiving a request, reads the request structure.
* Executes the required "targetcli ..." command locally.
* Fills the command's exit code and output into the RPC reply
  structure and sends the reply.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
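
To make the two-listener layout concrete, here is a minimal sketch in
classic Sun RPC (libtirpc); GB_PROG, GB_VERS, and the dispatch routine
gluster_block_prog_1() are hypothetical placeholders rather than the
project's real identifiers, error handling is trimmed, and running
svc_run() in both threads assumes a thread-safe RPC library:

/* Sketch of the two dispatch loops described above. */
#include <pthread.h>
#include <rpc/rpc.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>
#include <unistd.h>

#define GB_PROG     0x20000001   /* hypothetical program number */
#define GB_VERS     1
#define GB_TCP_PORT 24006
#define GB_SOCKFILE "/var/run/gluster-blockd.socket"

/* rpcgen-style dispatch routine, assumed to be generated elsewhere. */
extern void gluster_block_prog_1(struct svc_req *rqstp, SVCXPRT *transp);

/* Thread 1: the CLI thread, serving the local UNIX-domain socket. */
static void *cli_thread(void *arg)
{
    int sock = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sun = { .sun_family = AF_UNIX };

    strncpy(sun.sun_path, GB_SOCKFILE, sizeof(sun.sun_path) - 1);
    unlink(GB_SOCKFILE);               /* remove a stale socket file */
    bind(sock, (struct sockaddr *)&sun, sizeof(sun));
    listen(sock, 10);

    SVCXPRT *transp = svcunix_create(sock, 0, 0, GB_SOCKFILE);
    /* last arg 0: do not register with the portmapper */
    svc_register(transp, GB_PROG, GB_VERS, gluster_block_prog_1, 0);
    svc_run();                         /* dispatch loop, never returns */
    return NULL;
}

/* Thread 2: the server thread, serving remote peers on TCP 24006. */
static void *server_thread(void *arg)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin = { .sin_family      = AF_INET,
                               .sin_port        = htons(GB_TCP_PORT),
                               .sin_addr.s_addr = htonl(INADDR_ANY) };

    bind(sock, (struct sockaddr *)&sin, sizeof(sin));
    listen(sock, 10);

    SVCXPRT *transp = svctcp_create(sock, 0, 0);
    svc_register(transp, GB_PROG, GB_VERS, gluster_block_prog_1, 0);
    svc_run();
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, cli_thread, NULL);
    pthread_create(&t2, NULL, server_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}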
|
Starting gluster-blockd:
$ make install
$ systemctl daemon-reload
$ systemctl start gluster-blockd.service
Checking status:
$ systemctl status gluster-blockd.service
● gluster-blockd.service - Gluster block storage utility
   Loaded: loaded (gluster-blockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 01-16 17:53:23 IST; 3min 42s ago
 Main PID: 27552 (gluster-blockd)
    Tasks: 1 (limit: 512)
   CGroup: /system.slice/gluster-blockd.service
           └─27552 /usr/local/sbin/gluster-blockd
Jan 16 17:53:23 local systemd[1]: Started Gluster block storage utility.
gluster-blockd.service in turn brings up the services below, in order:
1. rpcbind.service
2. target.service
3. tcmu-runner.service
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
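
A sketch of what such a unit might look like; the ExecStart path
matches the status output above, but the dependency directives here
are assumptions rather than the unit actually shipped:

# gluster-blockd.service -- illustrative sketch, not the shipped unit.
[Unit]
Description=Gluster block storage utility
# Encode the "brings up rpcbind, target, tcmu-runner, in order" behaviour:
Requires=rpcbind.service target.service tcmu-runner.service
After=network.target rpcbind.service target.service tcmu-runner.service

[Service]
Type=simple
ExecStart=/usr/local/sbin/gluster-blockd

[Install]
WantedBy=multi-user.target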
|
This patch deprecates the ssh way of communicating between server
nodes/pods.
Reason: the ssh approach is hard to get accepted in the container
world (Kubernetes). The other option, kubeExec, seems a bit awkward;
to have a uniform way of communicating in both the container and
non-container worlds, we prefer RPC.
From now on we communicate via RPC, using the static port 24009.
Hence, we have two components:
server component -> gluster-blockd (daemon)
client component -> gluster-block (CLI)
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
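
On the client side, connecting to that static port with Sun RPC could
look roughly like the sketch below; GB_PROG and GB_VERS are
hypothetical placeholders. Passing a nonzero sin_port to
clnttcp_create() skips the portmapper lookup entirely, which is what a
fixed, well-known port buys us:

/* Sketch: open an RPC client handle to a remote gluster-blockd. */
#include <rpc/rpc.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define GB_PROG     0x20000001   /* hypothetical program number */
#define GB_VERS     1
#define GB_TCP_PORT 24009

CLIENT *gb_connect(const char *host)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(GB_TCP_PORT) };
    int sock = RPC_ANYSOCK;

    if (inet_pton(AF_INET, host, &addr.sin_addr) != 1)
        return NULL;

    /* Nonzero sin_port: connect directly, no portmapper involved. */
    CLIENT *clnt = clnttcp_create(&addr, GB_PROG, GB_VERS, &sock, 0, 0);
    if (clnt == NULL)
        clnt_pcreateerror(host);
    return clnt;
}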
|
gluster block storage CLI.
As of now, gluster-block is capable of creating tcmu-based gluster
block devices across multiple nodes.
All you need is a gluster volume (on one set of nodes) and tcmu-runner
(https://github.com/open-iscsi/tcmu-runner) running on the same set of
nodes as gluster or on a different set.
From another (or the same) node where gluster-block is installed, you
can create iSCSI-based gluster block devices.
What can it do?
---------------
1. Create a file (named by a UUID) in the gluster volume.
2. Create the iSCSI LUN and export the target via tcmu-runner on
   multiple nodes (--block-host IP1,IP2 ...).
3. List the available LUNs across multiple nodes.
4. Get info about a LUN across multiple nodes.
5. Delete a given LUN across all given nodes.
$ gluster-block --help
gluster-block (Version 0.1)
-c, --create <name>          Create the gluster block
-v, --volume <vol>           gluster volume name
-h, --host <gluster-node>    node addr from gluster pool
-s, --size <size>            block storage size in KiB|MiB|GiB|TiB..
-l, --list                   List available gluster blocks
-i, --info <name>            Details about gluster block
-m, --modify <RESIZE|AUTH>   Modify the metadata
-d, --delete <name>          Delete the gluster block
[-b, --block-host <IP1,IP2,IP3...>] block servers, clubbed with any option
Typically gluster-block, the gluster volume, and tcmu-runner can
coexist on a single node (or set of nodes), or can be split across
different sets of nodes.
Install:
--------
$ make -j install (hopefully that should get you set up.)
Points to remember:
-------------------
1. Set up a gluster volume.
2. Run the tcmu-runner service.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
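
For illustration, a few invocations built from the options above; the
volume name, node addresses, block name, and size here are made up:

$ gluster-block --create sample-block --volume block-test \
                --host 192.168.1.11 --size 1GiB \
                --block-host 192.168.1.11,192.168.1.12
$ gluster-block --list --block-host 192.168.1.11,192.168.1.12
$ gluster-block --info sample-block --block-host 192.168.1.11,192.168.1.12
$ gluster-block --delete sample-block --block-host 192.168.1.11,192.168.1.12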