Some concepts proved themselves over the years, back when speed and capacity weren't what they are today. Things that worked well under constrained circumstances still work well when speed and capacity increase. I have collected my thoughts and experiences in a white paper, available for download.
Most of my high-performance designs rely on parallel or concurrent processing. This
requires a computer with many processors. In benchmark tests on mainframes, Unix, AIX, and Windows,
I have shown that one efficient process can run a processor at 90% capacity or more. And if you try
to run more processes than processors, overall throughput decreases because the operating system
and database manager spend their time swapping work in and out. If you want to do massively
parallel processing, as large databases require, then you need massively parallel hardware. This
applies to disk drives, too. Table partitions or containers are not truly independent unless there
are no conflicts between them, even at the disk drive level.
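The sizing rule above (one worker process per processor, no oversubscription) can be sketched in Python. The worker function and workload sizes here are illustrative assumptions, not taken from the original benchmarks:

```python
import multiprocessing as mp
import os

def busy_work(n):
    # A CPU-bound task (sum of squares), standing in for one
    # "efficient process" that can keep a processor near full capacity.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Match the worker count to the processor count. Running more
    # workers than processors adds operating-system scheduling overhead
    # instead of throughput, per the observation above.
    workers = os.cpu_count()
    with mp.Pool(processes=workers) as pool:
        results = pool.map(busy_work, [100_000] * workers)
    print(len(results))
```

The same principle extends to I/O: partitions processed in parallel only help if they do not contend for the same drives.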
Real-world case studies are one of the best ways to
learn from other people's experiences. Hopefully you can learn from mine:
transactions at a large regional bank
transactions at a large department store chain
billion rows in one table
Or 4? Or 5?
LOCKSIZE (ANY)? Or (PAGE)? Or
COMPRESS YES? Or COMPRESS NO?