I’ve seen a couple of conversations recently about companies that want to script out their database schema on a daily basis so that they have a current copy of it, or systems whose permissions within the database change frequently, so they need to export a copy of those permissions in order to have a record of those settings.
My question to follow up on these sorts of situations is, why aren’t these settings in Source Control?
Pushing these changes to production requires a change control process (and the approvals that go with it). That means you have to document the change in order to put it into the change control ticket, so why aren’t these changes also pushed into your source control system?
Anything and everything that goes into your production systems should be stored in your source control system. If the server burns down, I should be able to rebuild SQL Server (for example) from the ground up, from source control. Instance-level settings, database properties, indexes, permissions, tables (and views, and procedures) should all be in your source control system. Once everything is stored in source control, the need to export the database schema goes away, as does the need to export the permissions regularly. Since those exports no longer serve any purpose, there is no need to keep running them.
Think I’m wrong, convince me in the comments.
The post If It Requires A Change Control Ticket To Change It, It Should Be In The Change Control System appeared first on SQL Server with Mr. Denny.
In a previous blog post, I discussed two new methods to determine exactly which page a request might be waiting for when there is contention. One of these new methods involves a new function, fn_pagerescracker. Naturally, I wanted to see how this function operates. Let’s look at the Master database to investigate how it works!
The function consumes a BINARY(8) parameter and returns a table. This parameter represents a hexadecimal value that is constructed from the database ID, page ID, and file ID.
-- Name: sys.fn_PageResCracker
-- Cracks the output of wait_resource page info
create function [sys].[fn_PageResCracker] (@physical_locator binary (8))
returns @dumploc_table table
(
    [db_id]   int not null,
    [file_id] int not null,
    [page_id] int not null
)
as
begin
    if @physical_locator is not null
    begin
        declare @db_id   binary (2)
        declare @file_id binary (2)
        declare @page_id binary (4)

        -- Page ID is the first four bytes, then 2 bytes of file ID, then 2 bytes of slot
        select @page_id = convert (binary (4), reverse (substring (@physical_locator, 1, 4)))
        select @file_id = convert (binary (2), reverse (substring (@physical_locator, 5, 2)))
        select @db_id   = convert (binary (2), reverse (substring (@physical_locator, 7, 2)))

        insert into @dumploc_table values (@db_id, @file_id, @page_id)
    end

    return
end
You can see from the code that the parameter is “diced up” with SUBSTRING commands and then converted into small binary values. Note that using SUBSTRING against a binary value (which is what the parameter is) returns the specified number of bytes rather than the number of characters. In this case, the page ID is the first 4 bytes, the file ID is the following 2 bytes, and the last 2 bytes represent the database ID. When the pieces are inserted into the table (which is returned to the calling query), each is implicitly converted into an integer value.
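For example, on SQL Server 2019 you can CROSS APPLY the function against sys.dm_exec_requests to resolve the page_resource column for any request that is currently waiting on a page:

-- Resolve page_resource for currently waiting requests (SQL Server 2019+)
select er.session_id, er.wait_type, pc.[db_id], pc.[file_id], pc.[page_id]
from sys.dm_exec_requests as er
cross apply sys.fn_PageResCracker (er.page_resource) as pc
where er.page_resource is not null;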
Shown above, the page_resource value is 0x5001000001000600. Now that we know how the function works, we can do the math ourselves and confirm things. Keep in mind that SQL Server stores these values in little-endian byte order, so we will have to do some byte reversing to read them.
0x5001000001000600
└─ bytes 1–4: Page ID (0x50010000)
└─ bytes 5–6: File ID (0x0100)
└─ bytes 7–8: Database ID (0x0600)
Page ID = 0x50010000, byte reversed is 0x00000150 (or 0x150), which when converted to an integer = 336
File ID = 0x0100, again byte reversed is 0x0001 (or 0x1), which when converted to an integer is 1.
Database ID = 0x0600, reversed once again, 0x0006 (or 0x6), which converted to an integer is 6.
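The byte-reversal arithmetic above can also be reproduced directly in T-SQL (the variable name here is just illustrative):

declare @page_resource binary (8) = 0x5001000001000600

select
    convert (int, convert (binary (2), reverse (substring (@page_resource, 7, 2)))) as [db_id],   -- 6
    convert (int, convert (binary (2), reverse (substring (@page_resource, 5, 2)))) as [file_id], -- 1
    convert (int, convert (binary (4), reverse (substring (@page_resource, 1, 4)))) as [page_id]  -- 336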
We can see below that we have now confirmed that the function is working as expected!
One of the things I enjoy the most is digging into the internals of how SQL Server works. It continues to teach me new and exciting things, and it helps make me a better DBA in my opinion. In this case the value add isn’t huge, but it’s still fun tearing apart the logic.
© 2019, John Morehouse. All rights reserved.
Yesterday, I wrote my monthly column for Redmond Magazine about dealing with situations where your management doesn’t want to invest in the resources to provide reliability for your hardware and software environment. Whether it’s redundant hardware or offsite backups, many IT organizations fail at these basic tasks because the business side of the organization fails to see the benefit, or more likely doesn’t perceive the total risk of a major outage.
Is this your hardware?
As I was writing the column (and as mentioned in it), AMC Theatres had a system meltdown the other day during the early sale of tickets for the premiere of the new Avengers movie. The next day, Arizona Iced Tea (which I didn’t realize was still in business) got ransomwared.
While I agree with Andy about testing your restores, I wanted to address a couple of other things. If you are running an old OS like Windows Server 2003, your business is at risk. If for some reason you absolutely have to run a 16-year-old operating system in your environment, you should ensure that it is isolated enough on your network that its exposure is limited.
Additionally, as an IT organization it’s your job to be yelling up and down at your software vendors who won’t support modern versions of infrastructure components like operating systems and database systems. And yes, while I’m a consultant now, I’ve had many real jobs, and I understand that the business chooses the software packages it wants to run. I also understand that when the org gets ransomwared because “SomeShittyApp” needed to run on an unpatched XP box with an SMBv1 share open to the internet, IT are going to be the folks whose jobs are on the line.
One of the other things I brought up in my column is how to handle the PR aspects of a system outage. Let’s face it, if your site and services are down, you need to be able to explain to your customers why, and what your timeline for repair is. When you are investing in your systems and doing all of the right things, it is very easy to be transparent. Explaining “we had redundancy in place, but the failure happened in such a way that we incurred an outage” sounds a lot better than “yeah, we had to run an OS that’s older than our interns because we didn’t have the budget for an upgrade.”
Finally, if you are on really old hardware (hint: if your servers were originally beige and are now a much yellower shade of beige, you’re on really old hardware), it’s probably cheaper and more efficient to do a cloud migration. You can move to Azure IaaS (or AWS), or if you’re a VMware shop, their cloud offering on AWS provides a very simple migration path, especially if your cloud experience is limited. Just get off that really old hardware and software and onto something that gets patched regularly.
One of the easiest things to fix when performance tuning queries is Key Lookups or RID Lookups. The Key Lookup operator occurs when the query optimizer performs an index seek against a specific table and that index does not have all of the columns needed to fulfill the result set. SQL Server is forced to go back to the clustered index, using the primary key, to retrieve the remaining columns it needs to satisfy the request. A RID Lookup is the same operation but is performed on a table with no clustered index, otherwise known as a heap; it uses a row identifier (RID) instead of a primary key to do the lookup.
As you can see, these can be very expensive and can result in substantial performance hits in both I/O and CPU. Imagine a query that runs thousands of times per minute and includes one or more key lookups. The extra reads they generate can create tremendous overhead that affects overall engine performance.
Let’s look at an example that requests a few columns not covered by the ProductID index.

SELECT [CarrierTrackingNumber], [OrderQty], [UnitPrice], [ModifiedDate]
FROM [Sales].[SalesOrderDetail]
WHERE [ModifiedDate] > '2014-01-01' AND [ProductID] = 772
The cost of the Key Lookup operator is 99% of the query. You can see it did an index seek against IX_SalesOrderDetail_ProductID, which is very efficient, but that index did not have all the columns needed to satisfy the query. The optimizer then went out to the clustered index PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID to retrieve the additional columns it needed. You can see what it retrieved by hovering over the Key Lookup in the query plan window.
The good thing about Key and RID Lookups is that they are super easy to fix. With a little modification to the non-clustered index IX_SalesOrderDetail_ProductID, we can change the query plan from an index seek plus a key lookup to a single small index seek. All we have to do is recreate that index and add the Output List fields as included columns.
CREATE NONCLUSTERED INDEX [IX_SalesOrderDetail_ProductID]
ON [Sales].[SalesOrderDetail]([ProductID] ASC)
INCLUDE ([CarrierTrackingNumber],[UnitPrice], [ModifiedDate], [OrderQty])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
And as you can see, we now have an Index Seek only and a more efficient plan.
Key Lookups can cause performance headaches, especially for queries that run many times a day. Do yourself and your environment a favor and start hunting these down and getting them fixed. Jonathan Kehayias (B|T) has a great blog post on how to locate these in the plan cache, so be sure to check it out.
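As a starting point, here is a minimal sketch of my own (a simplified stand-in, not Jonathan’s query) for searching the plan cache; it relies on the showplan XML marking lookups with a Lookup="1" attribute on the IndexScan element:

-- Find the most frequently reused cached plans that contain a lookup
with xmlnamespaces (default 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
select top (10)
    st.[text],
    cp.usecounts,
    qp.query_plan
from sys.dm_exec_cached_plans as cp
cross apply sys.dm_exec_query_plan (cp.plan_handle) as qp
cross apply sys.dm_exec_sql_text (cp.plan_handle) as st
where qp.query_plan.exist('//IndexScan[@Lookup="1"]') = 1
order by cp.usecounts desc;

Be aware that shredding the entire plan cache with XQuery like this can be expensive on a busy server, so run it off-hours the first time.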