Today was the second day of the SQL PASS Summit. The first item of business was awarding the PASSion Award to Wendy Pastrick (blog | twitter). After Bill Graziano gave out the award, he talked about the PASS budget. Bill also promised to wear a kilt on Wednesday of next year's summit, something I did for the first time this year in support of Women In Technology (WIT).

Today was also the first time that I used my iPad to blog. I went totally laptop-free for the day to see how well it would work for me.

Quentin Clark was our next keynote speaker for today. Quentin started by talking about the high-level features of the new Denali product (also called SQL 11). SQL Server Denali will bring in an “Always On” concept which will allow every database to meet the ultra-high-availability requirements that many database applications have. This new “Always On” solution doesn’t require any sort of shared storage to work. It replicates data to multiple servers, with the ability to have some servers available for automatic failover as well as having some servers available for reading the data and for taking backups, so you don’t have to take the backups from the active server (not in the current CTP). The awesome part is that the setup for this feature doesn’t require you to copy backups around; it will handle the moving of those backups automatically. I have a lot of questions about how this happens and how it affects the system cache.
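The exact DDL wasn’t shown in the keynote, but based on what was described, setting up one of these availability groups might look something like the sketch below. The server names, database name, and option names here are my own guesses, and the syntax could easily change before release (or even before it lands in a CTP).

-- A rough sketch of defining an "Always On" availability group; names and
-- options here are assumptions, not confirmed Denali syntax.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    'SQL01' WITH (ENDPOINT_URL = 'TCP://SQL01.contoso.local:5022',
                  AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = AUTOMATIC),    -- synchronous copy, automatic failover
    'SQL02' WITH (ENDPOINT_URL = 'TCP://SQL02.contoso.local:5022',
                  AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = MANUAL,
                  SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)); -- readable secondary

Note that there’s no shared storage anywhere in that picture: each replica keeps its own copy of the data, which is what makes the readable secondaries and off-server backups possible.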

On the T-SQL side we now have paging, sequences, and much better error handling. Paging allows you to specify the size of each page and which page to display, with the database engine doing the hard work. Sequences add some more flexibility to the engine by allowing the developer to use the same sequence of numbers across multiple tables.
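Based on what was shown, the paging and sequence syntax should look roughly like this. The table and sequence names are just placeholders of mine, so don’t take them as gospel.

-- Paging: return the third page of 25 rows; the engine skips the first 50.
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate, OrderID
OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY;

-- A sequence shared across two tables, so both draw from one number range
-- (something an IDENTITY column can't do).
CREATE SEQUENCE dbo.OrderNumber AS INT START WITH 1 INCREMENT BY 1;

INSERT INTO dbo.WebOrders   (OrderID) VALUES (NEXT VALUE FOR dbo.OrderNumber);
INSERT INTO dbo.PhoneOrders (OrderID) VALUES (NEXT VALUE FOR dbo.OrderNumber);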

The new filetable feature is a feature which I really don’t see the use for. This feature allows you to create a table which is exposed as a network share so that applications can write files into the database without any application changes. Under the hood it uses the filestream feature to handle the IO. We have spent years keeping this blob data out of the database because of the performance issues of having terabytes of data within the database. Some questions which need answering about the filetable: how does it affect the buffer pool when pulling files in and out of the database engine? Is there a point at which this feature can’t handle the file load; in other words, how many files can I shove into a single filetable before it slows down performance?
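For what it’s worth, creating one of these tables is supposed to be about a one-liner. Something along these lines, assuming filestream is already enabled on the instance and a filestream filegroup exists on the database (the table and directory names are mine):

-- A minimal filetable sketch; requires filestream to be configured first.
CREATE TABLE dbo.Documents AS FILETABLE
WITH (FILETABLE_DIRECTORY = 'Documents',
      FILETABLE_COLLATE_FILENAME = database_default);
-- Files dropped into the instance's filestream share under \Documents
-- show up as rows in this table with no application changes.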

Denny

(Sorry this was so late coming out, I’m such a slacker.)
