> gluster volume create vol01 replica 2 transport tcp adm01:/mnt/addVol/gfs01/ adm02:/mnt/addVol/gfs01/
volume create: vol01: failed: The brick adm01:/mnt/addVol/gfs01 is is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
As the message says, appending the 'force' option overrides the check:
> gluster volume create vol01 replica 2 transport tcp adm01:/mnt/addVol/gfs01/ adm02:/mnt/addVol/gfs01/ force
volume create: vol01: success: please start the volume to access data
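As the success message notes, the volume has to be started before clients can mount it. A typical invocation (the output is what GlusterFS prints on success):

> gluster volume start vol01
volume start: vol01: success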
The healthy state of an environment where a replica volume has been created across the two nodes adm01 and adm02:
> gluster vol status
Status of volume: vol01
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick adm01:/mnt/addVol/gfs01                   49152   Y       2554
Brick adm02:/mnt/addVol/gfs01                   49152   Y       2507
NFS Server on localhost                         2049    Y       2566
Self-heal Daemon on localhost                   N/A     Y       2567
NFS Server on adm02                             N/A     N       N/A
Self-heal Daemon on adm02                       N/A     N       N/A

There are no active volume tasks
An unhealthy state (a failure has occurred on adm02):
> gluster vol status
Status of volume: vol01
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick adm01:/mnt/addVol/gfs01                   49152   Y       2554
NFS Server on localhost                         2049    Y       2566
Self-heal Daemon on localhost                   N/A     Y       2567

There are no active volume tasks
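When a node drops out of the status listing like this, gluster peer status (run on the surviving node) helps distinguish a dead brick process from a disconnected peer. Illustrative output, with the UUID as a placeholder:

> gluster peer status
Number of Peers: 1

Hostname: adm02
Uuid: <uuid of adm02>
State: Peer in Cluster (Disconnected)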
Remove the failed brick from the volume (dropping the replica count to 1):
> gluster volume remove-brick vol01 replica 1 adm02:/mnt/addVol/gfs01/
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
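To confirm the volume is now running on a single brick, gluster volume info can be checked. The output below is a sketch of what to expect (after dropping to replica 1, the type is reported as plain Distribute):

> gluster volume info vol01

Volume Name: vol01
Type: Distribute
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: adm01:/mnt/addVol/gfs01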
Detach the node from the trusted pool:
> gluster peer detach adm02
peer detach: success
Register the node as a peer again:
> gluster peer probe adm02
peer probe: success
Add the brick back to the volume (restoring the replica count to 2):
> gluster volume add-brick vol01 replica 2 adm02:/mnt/addVol/gfs01/ force
volume add-brick: success
Then trigger a full self-heal so the restored brick catches up with the surviving one:

> gluster vol heal vol01 full
Launching Heal operation on volume vol01 has been successful
Use heal info commands to check status
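As that output suggests, the heal info subcommand shows what still needs healing. A sketch of a fully healed pair (the exact wording varies by GlusterFS version; the output here is illustrative):

> gluster vol heal vol01 info
Brick adm01:/mnt/addVol/gfs01
Number of entries: 0

Brick adm02:/mnt/addVol/gfs01
Number of entries: 0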
The add-brick step does not always go this smoothly; it can also fail with an empty error message:

> gluster volume add-brick vol01 replica 2 adm02:/mnt/addVol/gfs01/
volume add-brick: failed:
The CLI log shows only this:

[2014-04-08 01:38:03.178829] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
[2014-04-08 01:38:03.182308] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
[2014-04-08 01:38:03.182423] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
[2014-04-08 01:38:03.254488] I [cli-rpc-ops.c:1695:gf_cli_add_brick_cbk] 0-cli: Received resp to add brick
[2014-04-08 01:38:03.254575] I [input.c:36:cli_batch] 0-: Exiting with: -1
Retrying with 'force' reveals the actual cause:

> gluster volume add-brick vol01 replica 2 adm02:/mnt/addVol/gfs01/ force
volume add-brick: failed: /mnt/addVol/gfs01 or a prefix of it is already part of a volume
The directory was used as a brick at some point in the past, so GlusterFS extended attributes are still present on it.
If reusing the brick is acceptable, delete those extended attributes.
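Before deleting anything, you can inspect which GlusterFS attributes are actually set on the directory. A hedged example with getfattr (the values shown are placeholders, except the brick root gfid, which is always the all-zeros-then-1 UUID):

> getfattr -d -m . -e hex /mnt/addVol/gfs01
# file: mnt/addVol/gfs01
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.volume-id=0x<uuid of the old volume>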
A community-published script makes the cleanup easy:
> vi cleanbrick.sh
#!/bin/bash
# Remove GlusterFS metadata from a brick directory so it can be reused.
# (Shebang changed from /bin/sh to /bin/bash: the [[ ]] test is a bashism.)
if [[ ! -d $1 ]]; then
    echo "usage: $0 <brickdir>"
    exit 1
fi

# List the extended attribute names and strip every GlusterFS-owned one.
getfattr -m . "$1" 2> /dev/null |
grep -E '^trusted\.(glusterfs|gfid|afr|dht|hsrepl)' |
while read xa; do
    echo "removing $xa on $1"
    setfattr -x "$xa" "$1"
done

# Drop the hidden metadata directory GlusterFS keeps inside the brick.
echo "removing $1/.glusterfs"
rm -rf "$1/.glusterfs"
> chmod +x cleanbrick.sh
> ./cleanbrick.sh /mnt/addVol/gfs01
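With the stale attributes and the .glusterfs directory gone, the add-brick and heal steps from earlier should now succeed (a recap of the same commands used above):

> gluster volume add-brick vol01 replica 2 adm02:/mnt/addVol/gfs01/ force
volume add-brick: success
> gluster vol heal vol01 full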