SQL PASS 2010 Day 2 Keynote

Published On: 2010-11-11

Today was the second day of the SQL PASS Summit. The first item of business was awarding the PASSion Award to Wendy Pastrick (blog | twitter). After the award was given out by Bill Graziano, Bill talked about the PASS budget. Bill also promised to wear a kilt on Wednesday of next year's summit, something which I did for the first time this year in support of Women In Technology (WIT).

Today was also the first time that I used my iPad to blog for the day. I went totally laptop-free for the day to see how well it would work for me.

Quentin Clark was our keynote speaker for today. Quentin started by talking about the high-level features of the new Denali product (also called SQL 11). SQL Server Denali will bring in an "Always On" concept which will allow every database to have the ultra-high-availability guarantees that many database applications require. This new "Always On" solution doesn't require any sort of shared storage to work. It will replicate data to multiple servers, with the ability to have some servers available for automatic failover as well as some servers available for reading the data and for taking backups, so you don't have to take backups from the active server (not in the current CTP). The awesome part is that the setup for this feature doesn't require you to copy backups around; it will handle the moving of those backups automatically. I have a lot of questions about how this happens and how it affects the system cache.
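As a rough idea of what defining one of these replicated, shared-nothing groups looks like, here is a hedged sketch using the availability group DDL roughly as it later shipped in SQL Server 2012; the server names, endpoint URLs, and database name are hypothetical, and the exact syntax in the Denali CTP may differ:

```sql
-- Hedged sketch of "Always On" availability group DDL (SQL Server 2012-era
-- syntax); all names here are made up for illustration.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),           -- synchronous partner for automatic failover
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode2.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));  -- readable secondary
```

Note how no shared disk appears anywhere in the definition; each replica keeps its own copy of the database and the engine ships the changes between them.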

On the T-SQL side we now have paging, sequences, and much better error handling. Paging allows you to specify the size of each page to display and which page to display, with the database engine doing the hard work. Sequences add some more flexibility to the engine by allowing the developer to use the same sequence of numbers across multiple tables.
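Sketched out, the new syntax looks roughly like this (table and sequence names are hypothetical; this is the syntax as it later shipped, which may differ slightly from the CTP):

```sql
-- Paging: let the engine skip to page 3 when pages are 25 rows each.
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate
OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY;

-- Sequences: one number generator shared across multiple tables.
CREATE SEQUENCE dbo.OrderNumber AS INT START WITH 1 INCREMENT BY 1;

INSERT INTO dbo.WebOrders   (OrderID, CustomerID)
VALUES (NEXT VALUE FOR dbo.OrderNumber, 42);
INSERT INTO dbo.PhoneOrders (OrderID, CustomerID)
VALUES (NEXT VALUE FOR dbo.OrderNumber, 43);

-- Better error handling: THROW re-raises the original error, with its
-- original error number and line number, instead of a RAISERROR rewrite.
BEGIN TRY
    DELETE FROM dbo.Orders WHERE OrderID = 1;
END TRY
BEGIN CATCH
    THROW;
END CATCH;
```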

The new FileTable feature is a feature which I really don't see the use for. This feature allows you to create a table which is exposed as a network share so that applications can write files into the database without any application changes. Under the hood it uses the FILESTREAM feature to handle the IO. We have spent years keeping this blob data out of the database because of the performance issues of having terabytes of data within the database. Some questions which need answering about FileTable: how does it affect the buffer pool when pulling files in and out of the database engine? Is there a point at which this feature can't handle the file load, in other words how many files can I shove into a single FileTable before it'll slow down performance?
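For reference, creating one of these tables is a one-statement affair. This is a hedged sketch using the DDL roughly as it later shipped in SQL Server 2012; it assumes the database already has FILESTREAM enabled and a FILESTREAM filegroup, and the table and directory names are hypothetical:

```sql
-- Hedged sketch of FileTable DDL (SQL Server 2012-era syntax); assumes
-- FILESTREAM is already configured on the instance and database.
CREATE TABLE dbo.DocumentStore AS FILETABLE
WITH (
    FILETABLE_DIRECTORY        = 'DocumentStore',
    FILETABLE_COLLATE_FILENAME = database_default
);
-- Files dropped on the exposed share show up as rows in this table,
-- with the file contents stored in the FILESTREAM-backed column.
```

The schema is fixed by the engine (file name, path, attributes, and the file data itself), which is what lets existing applications write to the share with no changes.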


(Sorry this was so late coming out, I’m such a slacker.)
