Tuesday, August 25, 2009

Linux NFS and SCP Performance

I have been messing around with my Linux boxen.  I have two, connected through an HP gigabit wire-speed switch.  The profiles are as follows:

Marklar: 'client'

2x Opteron 8358 SE in an ASUS KFN5-D SLI with 8GB of RAM, a Hitachi Ultrastar 1TB drive, and a Broadcom 5751 on a PCIe x1 card.

Mash: 'server'

2x Opteron 8347 in a Tyan S2915 with 16GB of RAM, a 3ware 9500S-8 with six Seagate 750GB SATA disks as the array, dual Fujitsu 147GB 15k RPM SAS disks in RAID 0 as root, and the onboard forcedeth controller with a Marvell PHY.

I ran four different benchmarks.  In each case, the copy was initiated on the client, and the file copied was the same 3.1GB Windows 7 beta ISO.  After the first copy the file was almost entirely in the page cache, so the first copy was thrown out.
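One quick way to confirm the file really was cached is to watch the kernel's page-cache counter before and after a copy; the drop_caches knob (root only) forces cold-cache runs instead:

```shell
# How much data the kernel currently holds in the page cache;
# this grows by roughly the file size after the first copy.
grep -E '^Cached:' /proc/meminfo

# To force a cold-cache run instead (requires root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
```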

The four benchmarks were: scp from the client to the array; a copy to the array mounted over NFS, tuned for large files; scp from the client to my home directory on the RAID 0 SAS disks; and an NFS copy to the SAS disks, mounted with general-purpose options.
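For the record, the four copies looked roughly like this.  Hostnames, paths, and the exact mount options below are placeholders, not a transcript of what I typed:

```shell
# NFS mount of the array, tuned for large files (rsize/wsize values are examples):
mount -t nfs -o rsize=32768,wsize=32768 mash:/export/array /mnt/array
time cp win7.iso /mnt/array/

# scp straight to the array:
time scp win7.iso mash:/export/array/

# NFS mount of the RAID 0 SAS disks with general-purpose (default) options:
mount -t nfs mash:/home /mnt/home
time cp win7.iso /mnt/home/

# scp to my home directory on the SAS disks:
time scp win7.iso mash:
```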

The results were as follows:

Copy to array via NFS: 58MB/s average

SCP to array: 42MB/s average

Copy to RAID 0 SAS via NFS: 88MB/s average

SCP to RAID 0: 42MB/s average
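For what it's worth, the averages are just file size over wall-clock time; for example, a 3.1GB (~3174MB) copy finishing in about 55 seconds works out to roughly 58MB/s.  The numbers here are illustrative, not my measured times:

```shell
# Back-of-the-envelope MB/s: size in MB divided by elapsed seconds.
# (3174MB and 55s are illustrative, not measured values.)
size_mb=3174
secs=55
awk -v s="$size_mb" -v t="$secs" 'BEGIN { printf "%.1f\n", s / t }'   # → 57.7
```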

Several things come to mind.  I have managed over 100MB/s on this link between two Linux boxes before, and 88MB/s on a large file is respectable.  The RAID 0 SAS disks ought to manage nearly 200MB/s using Linux's MD driver, and the local disk, a Hitachi Ultrastar terabyte drive, is perfectly capable of averaging over 100MB/s.  As noted above, the whole file was in memory at the time, confirmed by watching disk activity with saidar during the copies.  So essentially, the OpenSSL crypto code limits ssh transfers to about 42MB/s on the Opteron 8347, which clocks at 1.9GHz.  I believe the 8358 SE in the client is capable of substantially higher crypto throughput; in most benchmarks I've tried it is around 50% faster, with even bigger gains if SSE is used.
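A quick way to sanity-check that theory is to benchmark the cipher itself with openssl, and then retry the copy with a cheaper cipher; arcfour was the usual choice on hardware of this era (scp's default cipher and the exact numbers will vary by OpenSSH build):

```shell
# Measure raw cipher throughput on each box; this is roughly what caps scp:
openssl speed -elapsed aes-128-cbc

# Retry the copy with a cheaper cipher; if scp speeds up markedly,
# crypto, not the disks or the network, was the bottleneck:
scp -c arcfour win7.iso mash:
```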

Another observation: it is quite difficult to reach the theoretical 125MB/s limit of gigabit networking.  Although most modern systems have more than enough throughput and processing power to handle it, many don't have low enough latency.  For instance, my Macbook can only do around 35MB/s over NFS.  Part of that is the very slow internal disk, which, although 7200 RPM, doesn't come near the Hitachi, but a lot of it is the relatively high latency of the system.
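On the latency point: sustaining 125MB/s requires the TCP window to cover the link's bandwidth-delay product, so the usual first step is raising the kernel's socket buffer limits.  These sysctl values are illustrative examples, not tuned recommendations:

```shell
# Illustrative socket buffer limits for gigabit (run as root; values are examples):
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```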
