
GlusterFS - Transport endpoint is not connected

Hi folks,
I am running three virtual machines in the same LAN (network 192.168.88.0/24).
These three virtual servers are running Debian 11, with all available updates installed.
Their purpose is a GlusterFS deployment:
  • Two GlusterFS nodes as storage (hostnames docker01 and docker02)
  • One GlusterFS node as arbiter (hostname swarm01)


The installed GlusterFS version is glusterfs 11.1 (as reported by gluster --version).
I have already edited the /etc/hosts file on each machine so that the three hostnames resolve correctly.
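For reference, the /etc/hosts entries on each node look roughly like this (the last octet of each IP is a placeholder, not necessarily my real addressing inside 192.168.88.0/24):

Code:

192.168.88.11   docker01
192.168.88.12   docker02
192.168.88.13   swarm01
Here is what I see in the mount log and in gluster volume status on docker01: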

Code:

root@docker01:~# tail -f /var/log/glusterfs/mnt.log
[2024-02-13 22:16:23.623527 +0000] W [fuse-bridge.c:1403:fuse_attr_cbk] 0-glusterfs-fuse: 1020: LOOKUP() / => -1 (Transport endpoint is not connected)
[2024-02-13 22:16:23.623571 +0000] W [fuse-bridge.c:1403:fuse_attr_cbk] 0-glusterfs-fuse: 1021: LOOKUP() / => -1 (Transport endpoint is not connected)
[2024-02-13 22:16:23.792071 +0000] W [fuse-bridge.c:1403:fuse_attr_cbk] 0-glusterfs-fuse: 1022: LOOKUP() / => -1 (Transport endpoint is not connected)
[2024-02-13 22:16:23.792118 +0000] W [fuse-bridge.c:1403:fuse_attr_cbk] 0-glusterfs-fuse: 1023: LOOKUP() / => -1 (Transport endpoint is not connected)
[2024-02-13 22:16:23.908092 +0000] W [fuse-bridge.c:1403:fuse_attr_cbk] 0-glusterfs-fuse: 1024: LOOKUP() / => -1 (Transport endpoint is not connected)
[2024-02-13 22:16:23.908136 +0000] W [fuse-bridge.c:1403:fuse_attr_cbk] 0-glusterfs-fuse: 1025: LOOKUP() / => -1 (Transport endpoint is not connected)
[2024-02-13 22:16:26.241343 +0000] E [MSGID: 114058] [client-handshake.c:946:client_query_portmap_cbk] 0-storegfs-client-1: failed to get the port number for remote subvolume. Please run gluster volume status on server to see if brick process is running []
[2024-02-13 22:16:29.696653 +0000] W [fuse-bridge.c:1403:fuse_attr_cbk] 0-glusterfs-fuse: 1044: LOOKUP() / => -1 (Transport endpoint is not connected)
[2024-02-13 22:16:45.654162 +0000] I [socket.c:835:__socket_shutdown] 0-storegfs-client-2: intentional socket shutdown(6)
[2024-02-13 22:17:02.259513 +0000] I [socket.c:835:__socket_shutdown] 0-storegfs-client-0: intentional socket shutdown(6)
^C
root@docker01:~# gluster volume status
Status of volume: storegfs
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick docker01:/mnt/disk1/br0               N/A       N/A        N       N/A
Brick docker02:/mnt/disk2/br0               N/A       N/A        N       N/A
Brick swarm01:/mnt/disk3/br0                N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       906
Self-heal Daemon on swarm01                 N/A       N/A        Y       940
Self-heal Daemon on docker02                N/A       N/A        Y       1765

Task Status of Volume storegfs
------------------------------------------------------------------------------
There are no active volume tasks
These three virtual servers can ping each other, and iptables has no rules set on any of them.
The Gluster service is running, but I keep getting the "Transport endpoint is not connected" errors shown above, and the bricks are reported as offline.
I collected this data from the first of the three VMs in my GlusterFS deployment.
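This is roughly how I checked connectivity and the daemon on each node (I have not pasted the output here, but ping works and glusterd reports active/running on all three machines):

Code:

ping -c 3 docker02
iptables -L -n
systemctl status glusterd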

I created the volume with the following command (I did not get any error):

Code:

gluster volume create storegfs replica 3 arbiter 1 docker01:/mnt/disk1/br0 docker02:/mnt/disk2/br0 swarm01:/mnt/disk3/br0
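I then started the volume and mounted it on docker01; the commands were along these lines (the /mnt mount point is my assumption, based on the mnt.log file name above):

Code:

gluster volume start storegfs
mount -t glusterfs docker01:/storegfs /mnt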
Eventually I would like to use this GlusterFS deployment with Docker Swarm.
I have not tried Docker Swarm on it yet, because GlusterFS itself is not working properly at the moment.

Any suggestions for me?

Statistics: Posted by coppolino97 — 2024-02-13 22:26 — Replies 1 — Views 46


