I was planning to comment on the blog after reading the latest entry: http://www.getrichslowly.org/blog/2007/ ... h-success/
But database optimization is such a complex topic that it felt too cramped trying to type all the possible solutions into that tiny comment box, so I decided to start a new thread here.
Option 1: Add more memory, turn on write caching, and turn off database fsync. This stops flushing every write immediately to disk -- instead the database uses the server's memory as its working space and periodically flushes to disk.
* Pros: Cheap, fast -- no disk configuration you can come up with even comes close to running from memory caching.
* Cons: Power outages and OS crashes can corrupt your database. To combat this, you'd have to run backups more often so you can recover from hiccups. Or you could set up real-time replication to a second machine -- that box can be much slower, since all it does is receive the replicated data.
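As a concrete sketch of what Option 1 looks like -- assuming PostgreSQL, since the post doesn't say which database engine is in use (other engines have equivalent knobs, e.g. MySQL's innodb_flush_log_at_trx_commit):

```shell
# postgresql.conf fragment -- a sketch, not a tuned config.
# These settings trade crash safety for speed, exactly the trade-off above.
fsync = off                # don't force every commit to hit the platters
synchronous_commit = off   # let commits return before the WAL is flushed
shared_buffers = 2GB       # use more of the new RAM as working space
```

With fsync off, a power cut can leave the database unrecoverable, so pair this with the more frequent backups mentioned above.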
Option 2: Upgrade from IDE/SATA to SCSI. It sounds like you have a server with consumer IDE or SATA drives. SCSI drives are much faster for database use because they have command-queuing logic that reorders disk writes to optimize for multiple simultaneous users.
* Pros: Fast
* Cons: Expensive -- big & fast disk drives are $$$. A possible option here is to buy 6 refurbs for the price of 2 new ones and just swap in spares if a refurb dies sooner than expected. Also requires the additional cost of a SCSI adapter -- 2 interfaces recommended, so you can dedicate an entire channel per drive, which greatly simplifies SCSI termination issues.
Option 3: Upgrade to faster SATA drives. Western Digital makes the Raptor line, which spins 40% faster than regular consumer drives and includes command re-ordering and queuing logic similar to what SCSI drives have. I currently have 2 databases on 150GB 10K Raptors and they run about 90% as fast as our databases on 136GB 15K SCSIs.
* Pros: Fast. Drop-in replacement for your current drives -- no need for an extra SCSI adapter.
* Cons: Expensive. WD Raptors cost almost as much as comparable new SCSI drives. Refurb SCSIs beat Raptors on price even after buying extras as spares.
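If you go the Raptor route, you can check on Linux whether the drive is actually offering the queuing logic mentioned above (/dev/sda is an assumed device name -- substitute your own):

```shell
# Ask the drive for its identify data and look for the NCQ queue depth.
sudo hdparm -I /dev/sda | grep -i "queue depth"
# A reported depth of 32 means native command queuing is available.
```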
Option 4: Upgrade to WAY bigger SATA drives. There are two ways a drive can be fast. It can spin faster and use built-in logic to reorder reads & writes to reduce disk seeks. Or it can be so huge that the heads only ever travel over the fast outer edge of the platters and rarely have to seek at all. On a disk, the outer edge has a higher linear velocity than the inner edge -- so the first bytes written to a drive can transfer twice as fast as the last bytes. If you can keep disk usage to 5%-10%, it's extremely fast.
* Pros: Drop-in replacement for your current drives.
* Cons: Will slow down as you use up more disk space.
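One way to apply Option 4 deliberately (sometimes called short-stroking) is to partition only the outer slice of the disk and never touch the rest. A sketch with parted, assuming a hypothetical 1TB drive at /dev/sdb:

```shell
# Carve out just the first ~10% of the disk; the start of the disk maps
# to the fast outer tracks, so the heads never travel inward.
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 1MiB 100GiB
# Leave the remaining ~90% unpartitioned on purpose.
```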
Option 5: Add more hard drives. You have RAID1 right now? That means you read at 2X a single disk and write at 1X. If you bumped it up to a 4-drive RAID10, you would read at 4X and write at 2X. There's also a decent reduction in disk seeks, for the same reason described in Option 4. RAID5/6 would let you string more drives together and use more of the total disk space, but at the cost of slower writes, since every small write requires reading back existing data and parity to recompute the checksum. With enough disk drives and a caching controller, this penalty can be overcome.
* Pros: More disk space, faster reads
* Cons: Slower writes
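On Linux software RAID, the 4-drive RAID10 described above can be built with mdadm (the device names /dev/sdb through /dev/sde are assumptions):

```shell
# Stripe over two mirrored pairs: ~4X read, ~2X write vs. a single disk.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mdadm --detail /dev/md0   # verify the layout and watch the initial sync
```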
Option 6: Add a caching controller with onboard memory and battery backup. In this case, all writes immediately land in the controller's memory, and the controller decides when to flush them to disk. If there's a power outage or OS crash, the battery backup unit keeps that data alive in the controller's memory and flushes it to disk as soon as power returns.
* Pros: The 2nd-fastest option, next to running your database from memory.
* Cons: Expensive
If it were me, I'd go with option #1 and increase the backup frequency. More memory is always useful -- even if you have to pick another option later, it won't be wasted.