Not sure what I am missing. I'm seeing slower-than-expected file transfers on 10 Gb Ethernet from NVMe to NVMe over SMB and NFS. A test transfer of a 30 GB folder containing mixed files runs at 226 MB/s.
My setup: two HP Z620 workstations (dual E5-2670), each with a Broadcom 57810S dual-port 10 Gb RJ45 NIC (MTU 9000) and one WD Green SN350 1 TB NVMe drive.
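Both NICs are set to MTU 9000; one check I still want to run is whether jumbo frames actually pass end to end without fragmenting (a quick sketch, using the addresses from the iperf3 output further down):
Code:
# 8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header; -M do forbids fragmentation
ping -c 4 -M do -s 8972 10.0.40.169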
The first HP Z620 runs Proxmox VE 8.1.4 (Debian 12 based) as a hypervisor, with OpenMediaVault 6.5.0-3 (Shaitan) running as a VM acting as the SMB server.
OpenMediaVault SMB shares:
The first SMB share is a RAID 5 of three HDDs.
Read speed on a 30 GB folder containing mixed files: 196 MB/s. I expected around 220, so that is only slightly low.
The second SMB share is the NVMe WD Green SN350 1TB.
Read speed on the same 30 GB folder of mixed files: 226 MB/s. I expected it to get close to saturating the 10 Gb link at 900+ MB/s.
Only a 30 MB/s difference between RAID 5 and NVMe?
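To narrow down whether this is per-file overhead or a hard throughput cap, I also plan to read a single large file from the share instead of the mixed folder (a rough sketch; /mnt/smbshare is a placeholder cifs mount point and the test file would need to be created on the share first):
Code:
# drop the client page cache so the read actually goes over the wire
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
# time a single large sequential read from the mounted share
dd if=/mnt/smbshare/bigfile.bin of=/dev/null bs=1M status=progress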
The second HP Z620 runs a Debian 12 desktop.
On the first machine, the NVMe is PCIe passed through to the VM, so there is no way to test its speed on the Proxmox node itself; the results below are from inside the VM.
Drive speed, Proxmox VM:
Code:
sudo hdparm -Tt /dev/nvme0n1
Timing cached reads: 16896 MB in 1.99 seconds = 8487.12 MB/sec
Timing buffered disk reads: 3784 MB in 3.00 seconds = 1260.86 MB/sec
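hdparm -Tt only measures a short sequential read, so it may not reflect a mixed-file workload; a fio run inside the VM should be more representative (a sketch only; /srv/nvme/fio-testfile is a placeholder path on the NVMe filesystem):
Code:
# 4 GB sequential read with direct I/O, bypassing the page cache; fio creates the file first
fio --name=seqread --filename=/srv/nvme/fio-testfile --rw=read --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=8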
Drive speed, Debian 12 desktop:
Code:
sudo hdparm -Tt /dev/nvme0n1
Timing cached reads: 17450 MB in 1.99 seconds = 8772.46 MB/sec
Timing buffered disk reads: 2904 MB in 3.00 seconds = 967.67 MB/sec
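Since a copy is ultimately limited by the destination drive's write speed, and the SN350 is an entry-level drive whose sustained writes can drop once its SLC cache fills, I also want to benchmark writes on the receiving NVMe; a sketch (writes to a scratch file on the filesystem, never to the raw device; the path is a placeholder):
Code:
# 20 GB sequential write with direct I/O; sized to run past a typical SLC cache
fio --name=seqwrite --filename=/path/on/nvme/fio-testfile --rw=write --bs=1M --size=20G --direct=1 --ioengine=libaio --iodepth=8
rm /path/on/nvme/fio-testfile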
iperf3 results between the two PCs:
Code:
root@jaime-hpz620workstation:/etc/samba# iperf3 -c 10.0.40.169
Connecting to host 10.0.40.169, port 5201
[ 5] local 10.0.40.175 port 40206 connected to 10.0.40.169 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.12 GBytes 9.62 Gbits/sec 0 3.15 MBytes
[ 5] 1.00-2.00 sec 1.15 GBytes 9.86 Gbits/sec 0 3.15 MBytes
[ 5] 2.00-3.00 sec 1.14 GBytes 9.82 Gbits/sec 0 3.15 MBytes
[ 5] 3.00-4.00 sec 1.14 GBytes 9.82 Gbits/sec 0 3.15 MBytes
[ 5] 4.00-5.00 sec 1.15 GBytes 9.87 Gbits/sec 0 3.15 MBytes
[ 5] 5.00-6.00 sec 1.15 GBytes 9.85 Gbits/sec 0 3.15 MBytes
[ 5] 6.00-7.00 sec 1.15 GBytes 9.88 Gbits/sec 0 3.15 MBytes
[ 5] 7.00-8.00 sec 1.15 GBytes 9.85 Gbits/sec 0 3.15 MBytes
[ 5] 8.00-9.00 sec 1.13 GBytes 9.71 Gbits/sec 0 3.15 MBytes
[ 5] 9.00-10.00 sec 1.15 GBytes 9.90 Gbits/sec 0 3.15 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 11.4 GBytes 9.82 Gbits/sec 0 sender
[ 5] 0.00-10.04 sec 11.4 GBytes 9.77 Gbits/sec receiver
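That run only measures one direction (this box sending to 10.0.40.169); checking the reverse direction and parallel streams is quick if needed:
Code:
# -R reverses the test (server sends to client); -P 4 uses four parallel streams
iperf3 -c 10.0.40.169 -R
iperf3 -c 10.0.40.169 -P 4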
Although iperf3 shows excellent throughput, I swapped the cable anyway to rule it out; no change. I also tested from a Windows 10 device with an SSD and the same 30 GB folder: it started at around 800 MB/s and after about 10 seconds dropped to about 150 MB/s, so I am guessing the SSD cache was exhausted.
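On the Samba side I have not tuned anything yet. These are a few smb.conf [global] options I have seen suggested for 10 Gb links and plan to experiment with (just a sketch of things to try, not a known fix):
Code:
[global]
   # allow multiple TCP connections per SMB3 session (client must support multichannel)
   server multi channel support = yes
   # hand reads/writes larger than this size (bytes) to asynchronous I/O
   aio read size = 16384
   aio write size = 16384
   # let Samba use sendfile() where possible
   use sendfile = yes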
Any suggestions appreciated.