Wednesday, August 26, 2009
Escape from Teh Suck
I decided to consolidate Marklar and Mash, my two duals, into a single box. Marklar has the faster procs and video; Mash has the faster disk subsystem. I also decided to install Linux on my Intel X25-E 32GB SLC SSD.
The very first thing I did was to pull the procs while Marklar was still on. Something was in the way of the power light, and I didn't notice until I went to pull the video cards. Fortunately, only the idle power and the SMU were powered up, and no damage was done. Off to install the CPUs in Mash...
No joy. Dreaded quiet fan spin. No beeps, nothing. Well, maybe the CPUs did burn up. Pull them back out. Whoops, how did that pin get bent? Hmm. Go to fix it with an awl, slip, bend a bunch more. Pull the array controller card (meaning all the SATA cables now have to be rerouted), pull the video cards, pull the power, pull the motherboard tray, take out all the RAM, spend half an hour with an awl and a doublet I have lying around straightening pins. Put it all back together. I used the last of the silver thermal compound, with just enough, so if I get this wrong, it'll be a while before I can try again.
Flip the switch with just one video card, two RAM sticks, and both CPUs. Works. Whew. Put everything back together, plug the DVD drive into the LSI 1068 controller, fire her up. #$^@#$%@#$. The DVD drive won't work with the LSI 1068. If I plug the DVD into the MCP-55 NVRAID SATA controller, I know from long experience that the 3ware and LSI controllers will both quit working.
So, after much experimentation, I got the NVRAID running with the 3ware pulled. Yay. Install Debian. Reboot, hit the power switch, plug the 3ware back in: GRUB error 17. !#$^%@#$%@$#. The dual SAS disks are still in the system; with the 3ware card removed, they are sda and sdb. Use the old /boot partition and get ready to get GRUBby. However, if you put the 3ware back in and make sure its array (sda now) is set last in the boot order, the machine boots, then panics because the filesystems aren't where it left them. No problem: off to edit fstab and menu.lst, and I have a perfectly working machine. Then some messing around with X, and I have one video card working with two monitors attached. Later, after some shut-eye, I will see about the other card and the other monitor.
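For anyone following along, the fstab and menu.lst edits boil down to pointing both files at the shifted device names. The device names and partition numbers below are made up for illustration; yours will differ depending on what the kernel and BIOS settle on:

```
# /etc/fstab -- root shifted when the 3ware card came back
/dev/sdb1  /     ext3  defaults,errors=remount-ro  0  1
/dev/sdb2  none  swap  sw                          0  0

# /boot/grub/menu.lst -- GRUB legacy counts disks in BIOS boot order,
# so with the 3ware array set last, the SAS boot disk stays (hd0)
# kopt=root=/dev/sdb1
kernel /vmlinuz root=/dev/sdb1 ro
```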
Some impressions so far: the 8358se processors are seriously fast, lots faster than the 8347s. I use the 8000 series because I have this idea in the back of my mind to build a four socket monster at some point. I buy all my processors used on ebay, so there is a limit to the amount I'm willing to spend, and when they get that cheap, the 8000 and 2000 series are normally about the same price.
The addition of the ssd has pretty much ended any jerkiness the os may have had. It simply never stops to think. Installs fly by. Downloads are faster, too, which is odd, but explainable given the lower latency of the ssd.
Normal machine-buying seems to be simply picking up the fastest processor, largest disk and most memory money can buy. I've always thought that a machine should start with the disk, because no matter how fast your processor is, if your disk can't keep up, the processor will be idle most of the time. So I've had a lot of SCSI hardware, some SAS hardware, and now SSDs. I won't be buying any more SAS hardware; that is how impressive these disks are. They are hideously expensive, but the price is dropping, and fast.
I think that in the end, people are going to see what a huge speed bump an SSD is and begin to appreciate what I've been saying all along. But maybe not; I've used other machines that were considered, like, seriously hot, dude, and they seem slow, laggy and easily hammered. My machines seem slow to most other people. However, with any of my machines, if you have a hundred things to do in a hurry, it will get them done and leave enough performance lying around to surf the web.
And that's what the bandwidth and the dual sockets and the really fast disk are all about: getting lots of things done at once rather than getting one or two things done really fast.
So, finally, I have XFCE4, Opera, Flash, Firefox and Thunderbird working on this machine, making a reasonable dev box. Oh, and there's Sun's Java 6 and Eclipse as well. Oh, and the full gcc/g++ dev stack and the whole kernel tree. 3.4GB.
I just had this SSD in my MacBook for a bit; it made the MacBook quite snappy, but the install took up 16GB! That left around 14GB for me to do things with, which rapidly turned into 8 after a bit. I began to fear the dreaded running-out-of-disk crash with OS X that so often leads to the trash-all-preferences-on-dot-mac-and-reinstall solution. Here in Linuxland I have a full desktop with all the tools necessary to develop applications, and it's taking less than a quarter of the space OS X took. Hmm.
Tuesday, August 25, 2009
Linux NFS and SCP Performance
I have been messing around with my Linux boxen. I have two, connected through an HP gigabit wire speed switch. The profiles are as follows:
Marklar: 'client'
2x Opteron 8358se in an ASUS KFN5-D SLI with 8GB of RAM, an Hitachi Ultrastar 1TB, and a Broadcom 5751 on a PCIe x1 card.
Mash: 'server'
2x Opteron 8347 in a Tyan S2915 with 16GB of RAM, a 3ware 9500S-8 with six Seagate 750GB SATA disks as the array, dual Fujitsu 147GB 15krpm SAS disks in raid0 as root, and the built-in forcedeth controller with a Marvell PHY.
I tried four different benchmarks. For all the benchmarks, the copy was initiated on the client. The file copied was the same 3.1gb Windows 7 beta ISO. After the first copy, it was pretty much all in memory, so the first copy was thrown out.
I tried scp from the client to the array; a copy to the array mounted through NFS, tuned for large files; scp from the client to my home directory on the raid0 SAS disks; and an NFS copy to the SAS disks, mounted with general-purpose options.
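The large-file NFS tuning above comes down to mount options. This fstab line is a sketch; the hostname, export path, and transfer sizes are assumptions for illustration, not my actual settings:

```
# /etc/fstab on the client -- larger rsize/wsize favor big sequential copies
mash:/array  /mnt/array  nfs  rw,hard,intr,rsize=32768,wsize=32768  0  0
```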
The results were as follows:
Copy to array via nfs: 58MB/s average
SCP to array: 42MB/s average
Copy to raid0 SAS via nfs: 88MB/s average
SCP to raid0: 42MB/s
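The timing method itself is nothing fancy; this sketch shows the idea on a small local temp file (the real runs copied the 3.1GB ISO, and a cold run would also want the page cache flushed first with `echo 3 > /proc/sys/vm/drop_caches` as root):

```shell
#!/bin/sh
# Copy a file, time it, and report average MB/s.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=64 2>/dev/null    # 64MB test file
start=$(date +%s.%N)
cp "$src" "$dst"                                        # the copy being timed
end=$(date +%s.%N)
mbps=$(awk -v s="$start" -v e="$end" 'BEGIN { printf "%.0f", 64 / (e - s) }')
echo "average: ${mbps} MB/s"
rm -f "$src" "$dst"
```

For the real benchmarks, swap the `cp` for a copy onto the NFS mount or an `scp` to the server.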
Several things come to mind. I have managed over 100MB/s over this link between two Linux boxes before, so 88MB/s on a large file is respectable. The raid0 SAS disks ought to be able to do nearly 200MB/s easily using Linux's MD driver, and the local disk, an Hitachi Ultrastar terabyte drive, is perfectly capable of averaging over 100MB/s. As said above, the whole file was in memory at the time, as confirmed by watching disk activity with saidar during the copies. Essentially, the SSL library limits ssh transfer to about 42MB/s on the Opteron 8347, which clocks at 1.9GHz. I believe the 8358se in the client is capable of substantially higher SSL throughput; most benchmarks I've messed with show it around 50% faster, with even higher gains if SSE is activated.
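An easy way to sanity-check that ceiling is to benchmark the cipher on its own core; `openssl speed` does exactly that. This is a sketch, and the cipher name is an assumption about what your scp session negotiates:

```shell
# Single-core cipher throughput -- roughly the ceiling one scp stream hits.
# (EVP interface; available cipher names depend on your OpenSSL build.)
out=$(openssl speed -evp aes-128-cbc 2>/dev/null | tail -n 2)
echo "$out"
```

In the same spirit, picking a cheaper cipher with `scp -c` was the usual workaround on OpenSSH of that era when the CPU, not the wire, was the bottleneck.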
Another thing: it is quite difficult to get near the theoretical 125MB/s limit of gigabit networking. Even though most modern systems have more than enough bandwidth and processing power to handle it, many don't have low enough latency. For instance, my MacBook can only do around 35MB/s to NFS. Part of that is the very slow internal disk, which, although 7200rpm, doesn't come near the Hitachi, but a lot of it is the relatively high latency of the system.
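The 125MB/s figure is just the raw line rate. A sketch of the arithmetic, where the few-percent framing overhead is a typical rule of thumb, not a measured value:

```shell
# 1 gigabit = 10^9 bits/s; divide by 8 bits/byte for the raw byte rate.
raw=$(awk 'BEGIN { printf "%.0f", 1e9 / 8 / 1e6 }')
echo "raw: ${raw} MB/s"    # 125 MB/s
# Ethernet/IP/TCP headers eat a few percent of that before payload.
awk 'BEGIN { printf "usable: ~%.0f MB/s after typical framing overhead\n", 125 * 0.94 }'
```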