10Gb networking and DAM

Posted by Rob Pelmas on Wed, Jun 03, 2009 @ 09:56 AM

We're a bunch of performance geeks here. We've been tweaking block sizes, stripe, and interleave settings on disk since SGI first gave you access to 'em. Tuning and re-tuning swap size, location, and type is in our blood. A few percentage points here, double-digit gains there, all without more capex. Gotta love it.

Now, anytime a paradigm shift in technology comes out, there's a steep cost differential to it, right? 10Gb networking had only a tiny little blip of time when it was out of reach of the masses, which is a refreshing change. You can kit out most servers with a card and an acceptable managed switch with a 10Gb port or two at a very reasonable cost.

Why go to 10? Our desktops have had Gb cards and very fast CPUs for what seems like forever. With just a couple of 'power' users you could swamp the networking capabilities of a server. Of course, a handful of years ago disks could only cough up 150MB/sec or so of sustained data, so the network tended not to be the gating factor in server performance. Modern disk starts at well over 300MB/sec, and if you stripe or otherwise use some common-sense design principles you can achieve multiples of that.
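
To put rough numbers on the disk-versus-network question, here's a back-of-the-envelope sketch in Python. The disk figures are the ones above; the usable link rates and the effective_cap helper are illustrative assumptions, not anything out of Xinet or NAPC:

    # A minimal sketch of the bottleneck math: a server can't serve data faster
    # than the slower of its storage and its network link. The usable link
    # rates below are rough assumptions, not measurements.

    def effective_cap(disk_mb_s, nic_mb_s):
        """Whichever side is slower caps what users actually see."""
        return min(disk_mb_s, nic_mb_s)

    ONE_GBE = 110   # roughly what a single 1Gb link delivers in practice (assumed)
    TEN_GBE = 800   # the ~800MB/sec usable figure for 10Gb quoted below

    print(effective_cap(300, ONE_GBE))   # modern striped disk, 1Gb NIC -> 110; the wire chokes it
    print(effective_cap(300, TEN_GBE))   # same disk, 10Gb NIC -> 300; storage is the limit again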

Xinet and NAPC both use the 1-to-6 rule for users and performance: with 6 retouchers (or 'power' users), you can assume 1 of them will be accessing the server at any one time. 12 gives you 2, 18 gives you 3. It's a rough rule of thumb, but one that seems to stand up over time. 12 heavy hitters can thus drain 120MB/sec out of a server, which is the better part of two 1Gb cards bonded together. Add in the other users doing layout, OPI printing (yep, some folks still use an OPI workflow), and Portal access, and you've got a saturated pipe. 10Gb gives you a good 800MB/sec of access speed, which will sate all but the most demanding organizations' needs for data.
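
Here's that rule of thumb as a quick sizing sketch, again in Python and again purely illustrative. The ~60MB/sec per concurrent user is simply what the 12-users-to-120MB/sec math above implies; the helper names and usable link rates are my assumptions:

    import math

    def concurrent_users(power_users, ratio=6):
        """The 1-to-6 rule: about one of every six power users hits the server at once."""
        return math.ceil(power_users / ratio)

    def peak_demand_mb_s(power_users, per_user_mb_s=60):
        """Peak server-side demand, assuming ~60MB/sec per concurrent heavy user."""
        return concurrent_users(power_users) * per_user_mb_s

    ONE_GBE = 110   # usable MB/sec on a single 1Gb link (assumed)
    TEN_GBE = 800   # usable MB/sec on 10Gb, per the figure above

    for users in (6, 12, 18):
        demand = peak_demand_mb_s(users)
        print(users, concurrent_users(users), demand, demand <= ONE_GBE, demand <= TEN_GBE)
    # 12 heavy hitters -> 2 concurrent -> ~120MB/sec: more than a single 1Gb link
    # can carry, but only a sliver of a 10Gb pipe.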

Next, of course, we can talk about teaming 10Gb interfaces! (Insert evil chortle of delight here.)

Tags: knowledge, how to, DAM Systems, Portal, workflow