For our most heavily accessed data set, the read/write ratio was extremely high, so we were able to fan out from a single master to about 20 slaves. This particular database held several hundred million rows, which pushed the limits of our hardware (periodically, we had to clean out stale data when it grew too large), so one trick we used was index segmentation: different sets of slaves carried different indexes, and our database access layer picked a cluster based on the index a query needed. Specifically, the tables in this database generally had an ID and a string, but the string index was only necessary for some queries, so on some slaves we simply omitted it. That allowed those machines to keep the entire ID index in memory, which was a huge performance boost.
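The routing logic behind index segmentation can be sketched roughly as follows. This is a minimal illustration, not our actual access layer; the cluster names, host names, and function are all invented for the example:

```python
import random

# Hypothetical index-segmented replica pools: "full" slaves carry both the
# ID index and the string index, while "id_only" slaves drop the string
# index so the entire ID index fits in memory.
CLUSTERS = {
    "id_only": ["slave-id-1", "slave-id-2", "slave-id-3"],
    "full": ["slave-full-1", "slave-full-2"],
}

def pick_slave(needs_string_index: bool) -> str:
    """Route a query to a replica that has the index it needs."""
    cluster = "full" if needs_string_index else "id_only"
    return random.choice(CLUSTERS[cluster])

# Lookups by ID can go to the lean replicas; string searches must
# hit a replica that still has the string index.
id_host = pick_slave(needs_string_index=False)
search_host = pick_slave(needs_string_index=True)
```

The payoff is that the common case (ID lookups) is served by machines whose working set fits entirely in memory, while the rarer string queries are isolated to a smaller pool.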
We used sharding to scale our databases in other areas.
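A common way to pick a shard is to hash a stable key and take it modulo the shard count. The text doesn't say which scheme we used, so the sketch below is a generic hash-based example with an assumed shard count and naming:

```python
import hashlib

# Assumed shard count and naming convention, purely for illustration.
NUM_SHARDS = 4

def shard_for(user_id: int) -> str:
    """Map a user ID to a shard deterministically via a hash."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return f"db-shard-{int(digest, 16) % NUM_SHARDS}"
```

The same key always maps to the same shard, which is what lets the access layer find a row again after it has been written. The trade-off is that changing `NUM_SHARDS` remaps most keys, which is why resharding is painful with naive modulo schemes.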