You might be asking why on earth you would want to get a database into an undesirable state, more specifically into a Recovery Pending state. Well, in my case, I had a client database that had entered this state due to a storage failure. Thankfully, this database was not being used for production, so I had some time to determine the best way to fix the issue.
A phrase that was often used during my time in the fire service was “Try Before You Pry”. Does the front door of the house need to be kicked in? It may be unlocked, and by trying before prying (with my boot) I can prevent damage to the door. In these types of scenarios, this philosophy holds true. Trying things out on non-critical databases will help prevent any further damage.
In this instance, I wanted to try it before forcing something that might be damaging. This meant I had to get a test database into a Recovery Pending state. Once it is in the state I need, I can attempt different methods to recover the database properly. Once a successful solution is determined, I can confidently pry against the damaged database in production, knowing that I am using a validated solution.
DISCLAIMER: DO NOT DO THIS ON A PRODUCTION SQL SERVER INSTANCE.
You have been warned.
DISCLAIMER: DO NOT DO THIS ON A PRODUCTION SQL SERVER INSTANCE.
You have been warned. Again.
How do you get a database into a recovery pending state?
Here is how I did this:
Start a new transaction.
Create a new table (leave the transaction open; do not commit).
Stop the SQL Server service.
Rename or delete the database log file.
Restart the SQL Server service.
The database will be in a recovery pending state upon the restart of the SQL Server service.
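The steps above can be sketched in PowerShell like this. This is a hypothetical walkthrough, not a prescription: the instance name, database name, and log file path are assumptions you would replace with your own, and again, only do this on a throwaway test instance.

```powershell
# Steps 1-2: in a query window, start a transaction and create a table,
# then leave the transaction OPEN (do not commit). For example:
#   BEGIN TRANSACTION;
#   CREATE TABLE dbo.RecoveryTest (Id INT);

# Step 3: stop the SQL Server service (default instance name assumed)
Stop-Service -Name 'MSSQLSERVER' -Force

# Step 4: rename the test database's log file so recovery cannot find it
# (path and file name are placeholders)
Rename-Item -Path 'C:\SQLData\TestDB_log.ldf' -NewName 'TestDB_log.ldf.old'

# Step 5: restart the service; the database should come back in Recovery Pending
Start-Service -Name 'MSSQLSERVER'
```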
Why is the database in recovery pending?
When the database attempts to come back online, it will be put into a Recovery Pending state because the log file is not present but there was an open transaction when the service was shut down. In normal operations, even with an open transaction, SQL Server would go through the recovery phases of the transaction log. In the rollback phase of recovery, SQL Server would attempt to roll back any transaction that was open at the point of restart and undo its changes. Since the log file no longer exists, it is unable to do so.
Therefore, the database is now in the recovery pending status. It’s pending recovery because there was an open transaction, but SQL Server is unable to bring the database into a consistent state.
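You can confirm the state for yourself by querying sys.databases, shown here via Invoke-Sqlcmd from the SqlServer PowerShell module (the instance and database names are placeholders from my test setup):

```powershell
# Check the current state of the test database
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
SELECT name, state_desc
FROM sys.databases
WHERE name = 'TestDB';
"@
# state_desc should report RECOVERY_PENDING
```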
When this occurs, you will see something like this in the error log:
If the database is shut down cleanly and the transaction log file is then removed or renamed, SQL Server will simply rebuild the log file for you.
Sometimes it is useful to be able to put a database into a specific state of being so that you can validate solutions before attempting to perform an action in a Production environment. Just remember to try it before prying it. Not doing so could just make things worse so being overly cautious is not a bad thing.
OCEANSIDE, CA, July 29, 2021 /24-7PressRelease/ — Today, award-winning Microsoft Gold Data and Cloud Solutions Partner Denny Cherry & Associates Consulting [DCAC] announced the hire of two new team members: Elijah Rice as the company’s new Director of Sales, and Bill Fellows, who will join DCAC’s elite team of database and cloud solutions consultants.
“We couldn’t be prouder to welcome both of them to the team.” – Denny Cherry
During his brief tenure, Elijah Rice has already managed to assist in increasing DCAC’s revenue by 90%. No stranger to sales, he previously served as a dedicated sales professional for household brand name companies such as Nordstrom and Lennar Homes. At Lennar, he was perennially ranked in the top 10% of sales nationally. A strong proponent of company culture, he was a founding member of Lennar’s LMC Cultural Ambassadors and is the recipient of multiple sales and customer service awards.
With the hiring of Bill Fellows, DCAC continues to expand its impressive roster of Microsoft Data Platform MVPs. Fellows is an architect and five-time Microsoft Data Platform MVP with a focus on converting data into actionable intelligence. He has been a database developer for the past 20 years, renowned for his depth of knowledge in automation, Extract, Transform and Load [ETL], and Business Intelligence. He is the co-author of “The BIML Book”, maintains the SSIS tag on StackOverflow, and is the organizer of Kansas City’s SQL Saturdays.
DCAC Founder and CEO Denny Cherry commented, “With Elijah joining our team we’ve been able to focus on customer service while continuing to grow our customer base. Elijah’s experience in the sales and marketing fields has propelled our sales processes to the next level, allowing for a large amount of automation in our processes. Bill brings experience in languages like Python and BIML and will assist DCAC in growing our big data and advanced analytics practices using Microsoft Azure features such as Azure Data Factory, Databricks, and Power BI.”
About Denny Cherry & Associates Consulting
Microsoft Partner and Gold Platform certified Denny Cherry and Associates Consulting offers companies a cost-effective way to achieve a deft, high-speed IT environment that maximizes every aspect of their platform: from architecture, to infrastructure, to network. DCAC services include Azure and AWS Architecture / Migrations, Azure Cloud Solution Partner (CSP) and Azure Support, Database High Availability, Database and Cloud scalability, SQL Server virtualization and acceleration, and Power BI report development. DCAC has been named IT Company of the Year two years in a row at the American Business Awards and is ranked on the Inc. 5000 List of America’s Fastest Growing Private Companies.
Each resource type in Azure has a naming scope within which the resource name must be unique. For PaaS resources such as Azure SQL Server (the server for Azure SQL DB) and Azure Data Factory, the name must be globally unique within the resource type. This means that you can’t have two data factories with the same name, but you can have a data factory and a SQL server with the same name. Virtual machine names must be unique within the resource group. Azure Storage account names must be globally unique. Azure SQL Database names must be unique within the server.
Since Azure allows you to create a data factory and a SQL server with the same resource name, you may think this is fine. But you may want to avoid this, especially if you plan on using system-defined managed identities or using Azure PowerShell/CLI. And if you aren’t planning on using these things, you might want to reconsider.
I ran into this issue of resources with the same name in a client environment and then recreated it in my Azure subscription to better understand it.
I already had a data factory named adf-deploydemo-dev so I made an Azure SQL server named adf-deploydemo-dev and added a database with the same name.
Azure Data Factory should automatically create its system-assigned managed identity. It will use the resource name for the name of the service principal. When you go to create a linked service in Azure Data Factory Studio and choose to use Managed Identity as the authentication method, you will see the name and object ID of the managed identity.
For the Azure SQL Server, we can create a managed identity using PowerShell. The Set-AzSqlServer cmdlet has an -AssignIdentity parameter, which creates the system-assigned managed identity.
If you use Get-AzSqlServer to retrieve the information and assign the Identity property to a variable, you can then see the system-assigned managed identity and its application ID.
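As a sketch, assuming a resource group named rg-deploydemo-dev (both the resource group and server names here are placeholders from my demo):

```powershell
# Create the system-assigned managed identity for the Azure SQL server
Set-AzSqlServer -ResourceGroupName 'rg-deploydemo-dev' `
    -ServerName 'adf-deploydemo-dev' -AssignIdentity

# Retrieve the server and inspect the identity that was created
$server = Get-AzSqlServer -ResourceGroupName 'rg-deploydemo-dev' `
    -ServerName 'adf-deploydemo-dev'
$server.Identity
```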
Now when I look in Active Directory, I can see both managed identities have the same name but different application IDs and object IDs.
Everything is technically working right now, but I have introduced some needless ambiguity that can cause misunderstandings and issues.
Let’s say that I want to grant the Storage Blob Data Reader role to my data factory. I go to the storage account, choose to add a role assignment, select the role, and then go to add members. This is what I see:
Or let’s say that I use PowerShell to get lists of resources by name. I may be locating resources to add tags, add a resource lock, or move the resource to another region or resource group.
If I don’t specify the resource type, I will get my data factory, my database, and my server in the results. You may be saying “Well, I would always specify the type.” Even if that is true, are you sure all coworkers and consultants touching your Azure resources would do the same?
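For example, a name-only lookup like this one (the resource name is from my demo) returns every resource type sharing that name:

```powershell
# Without a -ResourceType filter, every resource with this name comes back:
# the data factory, the SQL server, and the SQL database
Get-AzResource -Name 'adf-deploydemo-dev' |
    Select-Object Name, ResourceType, ResourceGroupName
```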
Why introduce this ambiguity when there is no need to do so?
There are some good tips about naming conventions in the Cloud Adoption Framework in Microsoft Docs. Your organization probably wants to decide up front what names are acceptable and then use Azure Policy, as well as good processes, to ensure adherence to your defined conventions. And if I were the consultant advising you, I would suggest that resource names within your tenant be unique across resource types. The suggestion in Docs is to use a resource type abbreviation at the beginning of your resource name, which would avoid the issue I have demonstrated above. Naming conventions should be adjusted to your organization’s needs, but the ones suggested in Docs are a good place to start if you need some help. It is beneficial to have some kind of resource naming convention beyond just whatever is allowed by Azure.