SQL Server Replication and AlwaysOn Availability Groups

Published On: 2021-04-12

Using SQL Server Replication with AlwaysOn Availability Groups is a pretty straightforward process at this point. Having the publisher be a member of an Availability Group has been supported for years now. You can, however, run into trouble when trying to create a new replication publication on a server that wasn't the primary replica when you set up the first publication. When using a different server in the Availability Group, you'll get an error back from SQL Server Management Studio which says “Could not find server {old primary server} in sys.servers. Verify that the correct server name was specified. If necessary, execute the stored procedure sp_addlinkedserver to add the server to sys.servers. Error 7202”.

Let me explain what is happening, and how to work around it.

sql1a – server that was the active replica when the initial publication was set up

sql1b – server that is now the active replica

If you try to set up a new publication on sql1b, you’ll get the error message that is described above. If you write out the replication commands manually you won’t get this error. This error only happens within the GUI.

The easy solution is to fail the Availability Group back to sql1a and create the publication. But what if that isn’t an option because that server no longer exists? In that case, create a linked server on all the servers in the AG using the name of the server that was used to set up replication initially, in this case sql1a, and configure that linked server to point to the Availability Group listener name. That way the linked server, on any replica, always points to the active server. If you are setting up replication you are already on the active server, so the linked server just points back to itself.

Setting up a linked server with this name that points back to the listener allows replication publications to be created while sql1b is the active replica within the Availability Group.
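Here’s a minimal sketch of what that linked server looks like in T-SQL. The listener name AG1-Listener is a placeholder, and the security configuration is only one option; adjust both to your environment.

-- Run on the replicas in the AG (not needed on a server actually named sql1a,
-- since sys.servers already contains that entry there).
-- The linked server is named after the original primary, but points at the
-- Availability Group listener, so it always resolves to the current primary.
EXEC sp_addlinkedserver
    @server     = N'sql1a',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'AG1-Listener';

-- Have the linked server connect using the caller's own credentials.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'sql1a',
    @useself    = N'TRUE';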

It’s a quick and easy fix that’ll get you past the error message that SQL Server Management Studio is throwing.

Denny


Using Azure Site Recovery for VM DR, and a SQL Managed Instance Failover Group

Published On: 2021-04-06

I wanted to name this post “I built NAT between Azure Virtual Machines, this makes me an Azure Networking God”, but that title seemed a little wordy. I was working with a client recently with an interesting Azure setup and some interesting requirements. The database tier for this application is using Azure SQL Managed Instance and will be part of a Failover Group which allows the SQL Managed Instance to be failed over between regions as needed. The web and application tier of the application will be within Azure Virtual Machines which will be protected with Azure Site Recovery (ASR).

For Disaster Recovery tests, and for an actual Disaster Recovery failover, we wanted to make as few networking changes as possible.

The Setup

In general, everything was pretty easy to get set up. We set up ASR, and ASR created an Azure virtual network (vNet) automatically as part of the replication setup process. We then set up our production Managed Instance and connected it to the production vNet. When we went to set up the Disaster Recovery SQL Managed Instance we ran into the root of our problem. In order to put the two Managed Instances within a Failover Group so that they could move data between them, the Managed Instances need to be able to talk to each other. But the vNets had the same IP space, which means that we couldn’t peer the networks, as one of the requirements for connecting vNets to each other is that the networks have different IP spaces.

To get around this requirement we configured a separate Azure vNet for the Disaster Recovery Managed Instance, which we could then peer to the production vNet. This allowed the Production and Disaster Recovery Managed Instances to talk to each other.

The Big Problem

The next requirement was that the Disaster Recovery Virtual Machines needed to be able to connect to the Disaster Recovery Managed Instance with as few network changes as possible. We couldn’t just peer the ASR-created vNet to the Disaster Recovery Managed Instance vNet, because we’d have an IP conflict: the Disaster Recovery Managed Instance vNet can’t connect to two vNets with the same IP ranges. The solution was to create an additional vNet containing a machine with the Windows Routing and Remote Access Service (RRAS) installed on it. This gives me a total of 4 Azure vNets peered together as shown in the picture below.

To make RRAS happy, the NatvNet needed two subnets, and the NAT machine needed two vNICs, one in each subnet. I set up two subnets in this vNet, 192.168.21.0/26 and 192.168.21.64/26. The first vNIC gets the IP address 192.168.21.4 and the second vNIC gets 192.168.21.68. When RRAS is set up you have to specify an IP range that will be used for the NAT; this range should be part of the IP range of the second vNIC. In this case, I used the range 192.168.21.70-192.168.21.75. To make Azure happy (I’ll explain why in a minute), I also needed to add the IP addresses being used for the NAT to the vNIC that sits in the subnet they come from.

The final change in Azure was to create a route table that directs traffic destined for the 192.168.20.0 network to a next hop of 192.168.21.4. This route table is then applied to all the subnets within the vNet that ASR created (called ProductionAsr in the image above). The network design and VM layout shown below should give a good view of the design.

The IP Requirement

One of the things to keep in mind is that Azure networking is a bit stubborn. When traffic uses IP addresses that aren’t known to the Azure environment, Azure doesn’t know which machine on the network to send the traffic to. I could see this easily in Wireshark when I ran it on the NAT server (the machine in the NatvNet that was running RRAS).

I was using a machine in my ProductionAsr network with an IP address of 192.168.4.5. When I accessed 192.168.20.4 (the IP address of my Managed Instance) and looked at the network traffic (without the IP addresses 192.168.21.70-192.168.21.75 assigned to the vNIC), I could see the traffic going from 192.168.4.5 to 192.168.21.4, and I could see the traffic going from 192.168.21.68 to the Managed Instance IP address of 192.168.20.4, but the traffic wasn’t coming back from the Managed Instance.

I did some additional testing by putting a Virtual Machine in the Managed Instance network and giving it an IP address of 192.168.20.5. With Wireshark on this machine I could see the traffic arriving, and I could see the machine sending the traffic back to 192.168.21.68, but it was never arriving at that machine. I noticed that the NAT machine wasn’t actually sending from 192.168.21.68; it was instead sending the traffic to the Managed Instance network from 192.168.21.70. So on a hunch, I added 192.168.21.70 as an IP address which the server on the NatvNet could use. About 15 seconds after the Azure Portal said that it had made the change, I was able to make my connection to the Managed Instance as I needed to.

Final Configuration

The final setup was a bunch of Virtual Machines in a vNet, all protected with ASR. For the database we’re using an Azure SQL Managed Instance which is set up in a Failover Group for Disaster Recovery.

With this NAT configuration we were able to keep data movement between the two SQL Managed Instances working, while allowing the production servers to access either SQL Managed Instance. The Disaster Recovery Virtual Machines were able to access the SQL Managed Instance when doing a DR test, or in a DR failover. If a DR test requires that we be able to write to the DR Managed Instance, we can do this by simply removing the DR Managed Instance from the failover group and adding a hosts file entry so that the production server name resolves to the DR Managed Instance IP address. This connects all the Disaster Recovery VMs to the Disaster Recovery Managed Instance with just a couple of simple changes.

A Failover Event

For an actual failover event the process is even easier. The VMs are failed over using ASR. The SQL Managed Instance failover group is failed over. And that’s it. At that point everything is failed over and working.

If you’d like to talk more about setting up your cloud environment, we’d love to set up a meeting with you.

Denny


Issues When Using Temporary Tables in Nested Stored Procedures

Published On: 2021-03-26

SQL Server offers a lot of flexibility when working with objects like stored procedures and temporary tables.  One of the capabilities is to allow for stored procedures to call other stored procedures.  This is called “nesting” one stored procedure within the context of the calling stored procedure.  In addition, you can instantiate a temporary table from within the parent procedure that can be utilized within the context of the child procedure.

But wait! I’m not done yet!

You can also instantiate a temporary table with the same name as the parent temporary table within the child procedure.  But there is a catch!

From Microsoft Documentation:

Nested stored procedures can also create temporary tables with the same name as a temporary table that was created by the stored procedure that called it. However, for modifications to resolve to the table that was created in the nested procedure, the table must have the same structure, with the same column names, as the table created in the calling procedure. 

In other words, the name and the schema of the child temporary table must be identical to the name and schema of the parent temporary table.  Otherwise, mass confusion reigns supreme and you could get errors when attempting to do any data modifications in the child temporary table.

Let’s look at some code so that we can see the behavior.

Examining the behavior

First, here’s the code to show the example:

USE Test
GO

CREATE OR ALTER PROCEDURE dbo.parent
AS
BEGIN
       CREATE TABLE #test (name sysname)
       INSERT #test (name) VALUES ('Joey')

       EXEC dbo.child

       SELECT * FROM #test
END
GO

CREATE OR ALTER PROCEDURE dbo.child
AS
BEGIN
       CREATE TABLE #test (fname sysname)
       INSERT #test (fname) VALUES ('John')

       SELECT * FROM #test
END
GO

-- Execute the parent procedure
EXEC dbo.parent

Now, let’s step through it so that we have a clear understanding of what it is doing.

  1. Create a parent procedure that creates a temporary table called #test with “name” as the column name.
    1. Insert a row into the parent temporary table
    2. Call a child stored procedure
    3. Select from the parent temporary table
  2. Create a child stored procedure that creates a temporary table also called #test with “fname” as the column name. Note that this column name is different from the parent temporary table.
    1. Insert a row into the child temporary table
    2. Select from the child temporary table

Below is the error that is returned when executing the code block.

SSMS Screen Shot showing an error

The error above states that the “fname” column does not exist within the temporary table, but we can see from the code block that it most definitely does.  This means that SQL Server is attempting to modify the parent temporary table and NOT the child temporary table as one might expect.

Summary

When working with nested procedures and nested temporary tables, make sure the child temporary table either has the identical name and schema as the parent table or uses a completely different name.  By doing so you help to eliminate any issues with SQL Server not resolving to the appropriate temporary table.
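For example, here is a minimal sketch of the second option, based on the code above: give the child temporary table its own name (the name #test_child is just an illustration) so there is no collision to resolve.

CREATE OR ALTER PROCEDURE dbo.child
AS
BEGIN
       -- A distinct name avoids colliding with the parent's #test,
       -- so the INSERT resolves to this table and its fname column
       CREATE TABLE #test_child (fname sysname)
       INSERT #test_child (fname) VALUES ('John')

       SELECT * FROM #test_child
END
GO

With that change, executing dbo.parent returns one row from the child procedure and one row from the parent procedure instead of raising the error.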


© 2021, John Morehouse. All rights reserved.


Control Flow Limitations in Data Factory

Published On: 2021-03-25

Control Flow activities in Data Factory involve orchestration of pipeline activities including chaining activities in a sequence, branching, defining parameters at the pipeline level, and passing arguments while invoking the pipeline. They also include custom-state passing and looping containers.

The activities list in the ADF Author & Manage app, showing Lookup, Set variable, Filter, For Each, Switch, and more.
Control Flow activities in the Data Factory user interface

If you’ve been using Azure Data Factory for a while, you might have hit some limitations that don’t exist in tools like SSIS or Databricks. Knowing these limitations up front can help you design better pipelines, so I’m listing a few here of which you’ll want to be aware.

  1. You cannot nest For Each activities.
    Within a pipeline, you cannot place a For Each activity inside of another For Each activity. If you need to iterate through two datasets you have two main options. You can combine the two datasets before you iterate over them. Or you can use a parent/child pipeline design where you move the inner For Each activity into the child pipeline. Fun fact: currently the Data Factory UI won’t stop you from nesting For Each activities. You won’t find out until you try to execute the pipeline.
  2. You cannot put a For Each activity or Switch activity inside of an If activity.
    The Data Factory UI will prevent you from doing this by removing the For Each and Switch from the activity list. You can redesign the pipeline to put the inner activity inside a child pipeline. Also note that you can put an If activity inside of a For Each activity.
  3. You cannot use a Set Variable activity inside a For Each activity that runs in parallel.
    The Data Factory UI won’t stop you, but you’ll quickly learn that the scope of the variable is the pipeline and not the For Each or any other activity. So you’ll just overwrite the value in no particular order as the activities execute in parallel. The workaround for this is specific to your use case. You might try using an existing attribute of the item you are iterating on instead of setting a variable. Append Variable works fine, since each loop could add a value. But again, don’t count on the order being meaningful.
  4. You cannot nest If activities.
    The Data Factory UI will prevent you from nesting the If activities. If you need to have two sets of conditions, you can either combine conditions or move the inner condition to a child pipeline.
  5. You cannot nest Switch activities.
    Similar to the If activity, the Data Factory UI will prevent you from nesting Switch activities. And again, you can either combine conditions or move the inner condition to a child pipeline.
  6. You cannot put a For Each or If activity inside a Switch activity.
    The Data Factory UI will prevent you from doing this. You can move the inner activity to a child pipeline if needed.
  7. You cannot use an expression to populate the pipeline in an Execute Pipeline activity.
    It would be great to design a truly dynamic pipeline where you could have a dataset that defines which pipelines to execute, but you can’t do that natively in the Data Factory UI. The Invoked Pipeline property doesn’t allow dynamic expressions. If you need to dynamically execute pipelines, you can use Logic Apps or Azure Functions to execute the pipeline.
  8. You cannot dynamically populate the variable name in Set Variable and Append Variable activities.
    The Data Factory UI only allows you to choose from a list of existing variables. As a workaround, you could use an If activity to determine which variable you will populate.
  9. The Lookup activity has a maximum of 5,000 rows and a maximum size of 4 MB.
    If you need to iterate over more than 5,000 rows, you’ll need to split your list between a child and parent pipeline; one way to do that for a SQL-backed Lookup is sketched just after this list.
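To illustrate that last point: if the Lookup happens to read from a SQL source, one way to split the list is to page the source query and have the parent pipeline pass a page number to each child pipeline run. This is only a sketch under that assumption; the dbo.FilesToLoad table and the @PageNumber parameter are hypothetical names.

-- Query used by the child pipeline's Lookup activity.
-- @PageNumber is supplied by the parent pipeline for each child run;
-- the page size keeps each Lookup result under the 5,000 row limit.
DECLARE @PageNumber int = 0;
DECLARE @PageSize   int = 5000;

SELECT FileName, LastModified
FROM dbo.FilesToLoad           -- hypothetical driver table
ORDER BY FileName
OFFSET @PageNumber * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;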

In addition to the items mentioned above, also note these resource limits listed in Microsoft Docs. Limits like 40 activities per pipeline (including inner activities for containers) can bite you if you aren’t careful about implementing a modular design. And if you do have a modular design with lots of pipelines calling other pipelines, be aware that you are limited to 100 queued runs per pipeline and 1,000 concurrent pipeline activity runs per subscription per Azure Integration Runtime region. I don’t hit these limits too often, but I have hit them.

This is not to say you can’t create good solutions in Azure Data Factory—you absolutely can. But Data Factory has some limitations that you might not expect if you have experience working with other data integration/orchestration tools.

Have you hit any other limits that caused you to design your pipelines differently and that you would like to share with others? If so, leave me a comment.
