I’ve been working with a client who has been having performance problems with their Nimble array for one very specific workload. This workload was a replication distributor receiving a huge number of inserts and only a very small number of updates, and the inserted rows are very wide. Insert performance was fantastic, as expected, since those writes were going to flash. The problem showed up when SQL Server had to read existing pages from spinning disk so that the new rows could be written to them. Those reads were apparently happening in a pattern the Nimble couldn’t identify, so it wasn’t prefetching those pages into its SSD cache.
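If you suspect a similar symptom on your own system, one way to see slow reads like this is SQL Server's cumulative file I/O stats DMV. This query is my own sketch for illustration, not something from the troubleshooting described here; unusually high average read latency on the distribution database's data files would point at the same kind of cache-miss behavior:

```sql
-- Sketch: average read latency per database file, from
-- sys.dm_io_virtual_file_stats (cumulative since instance start).
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads
       END AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;
```

Because the counters are cumulative, sampling the DMV twice and diffing gives a better picture of current latency than a single snapshot.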
Well, apparently this was a known issue, because at Nimble’s recommendation the firmware on the array was upgraded and the new caching algorithm enabled. As soon as that happened the array started prefetching the correct blocks into SSD cache, and replication performance improved dramatically. Instead of losing about 3-4 hours per day (completing only 20-21 hours’ worth of replication per 24-hour day) we were able to blaze through the backlog. At the time the firmware was upgraded, replication was about 8 or 9 days behind. Within 1-2 days it had completely caught up.
I’d say that’s pretty good.