Making WordPress a globally distributed application on Azure WebApps

Published On: 2019-05-19

Before I start, I’m just going to say that if I can do this with WordPress, then it can be done with anything.

The goal I was going for was to have a WordPress website running on multiple continents (I only used two regions, but this can be scaled out to other regions as needed) so that response time for users on the other side of the planet would be similar to response time for users on the same side of the planet as me.  I also wanted failover, if an Azure region goes offline, to be as quick and painless as possible.

Here’s what I had to get set up. I set up two Azure WebApps, both of which were running Windows. One was set up in US Central while the other was set up in North Europe.

Website Failover

I tried doing this with Linux WebApps, but some of the underlying features that I needed to make this all work aren’t available there.  Specifically, I needed Site Extensions, which are only available on Windows WebApps today.

After setting up the two web apps, I uploaded the website content to the first Azure WebApp.  Once the site was uploaded and the custom DNS was working on the WebApp, it was time to start making the replication of the web traffic work.

In a new web browser I opened the .scm.azurewebsites.net version of my site.  The SCM site for an Azure WebApp is https://{Azure WebApp}.scm.azurewebsites.net.  Since my Azure WebApp is named www-dcac-com-central, my SCM site is https://www-dcac-com-central.scm.azurewebsites.net/. (You’re going to have to go to this site a few times, so keep the URL handy.)

Be very careful about the order in which you set up the replication. If you set up replication from a blank website to your website, it’ll replicate the blank website over your content. So before doing this, make sure that you have a proper backup of your WebApp BEFORE you configure the mirroring.

Once you have the SCM website open, click on Site Extensions on the right of the menu at the top.

You’ll see a Gallery option.  Select the Gallery option and find the “Site Replicator” extension. Enable this extension by clicking the plus sign.  A window will pop up to complete the install; click Install.

Once the installation is complete, go back to the Azure Portal. From the Azure Portal, stop the WebApp and then start it again (clicking Restart will not work).

Again in the Azure Portal, select the second site (in my case the second region is the North Europe region).  From here you need to download the publish profile for the WebApp.  To do this, from the Overview tab select the “Get publish profile” option at the top right.

Just download the file; we’re going to need it in a second.

Go back to the first site’s SCM website (https://www-dcac-com-central.scm.azurewebsites.net/ in my case) and click the play button on the Site Replicator extension.

This is going to give you the configuration screen for Site Replication (it may take a minute to open). The settings screen is pretty simple.  There’s a Browse button on the bottom left of the screen; click that and navigate to the publish profile file that you downloaded earlier.

Give the site some time to replicate all the changes to the other region.  When it says “Succeeded” it should be just about done. The larger the website, the longer this will take.  I FTPed into both WebApps and watched the files appear until they were all there.  On a standard WordPress install, this took about 10 minutes.

Once this was all finished, I repeated the process in the other direction: I downloaded the publish profile from the US Central version, configured the Site Replicator extension on the North Europe region, and uploaded the US Central publish profile there.  I then gave the process about 10-15 minutes to settle down and do any replication that needed to be completed.

Once this was all finished, I was able to upload files, pictures, WordPress updates, etc. from either site and the change would be replicated to the other region within a second or two.

Once the website replication was handled, it was time to set up Traffic Manager. This would allow people to connect to their local version of our website depending on where in the world they are connecting from. Getting the endpoints set up was pretty simple. I used basic geographic load balancing and pointed North/Central/South America to the US Central version, and Asia/Europe/Middle East/Africa/Antarctica to the North Europe version.

The only hard part was that, because WordPress issues a ton of redirects, you can’t do normal HTTP monitoring. Normally you would have Traffic Manager monitor “/” as the path, but WordPress didn’t like this. I changed the monitoring path to “/license.txt” instead, which allowed the Traffic Manager endpoints to come online correctly. It isn’t a perfect solution, but it works well enough.
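If you script the Traffic Manager setup, the probe path is just a property of the profile. Here’s a sketch using the Azure CLI — the profile and resource group names are my own examples, and the exact flags can vary by CLI version, so treat this as illustrative rather than copy/paste:

```
# Geographic-routing profile that health-checks /license.txt over HTTPS
az network traffic-manager profile create \
    --name www-dcac-com \
    --resource-group dcac-rg \
    --unique-dns-name www-dcac-com \
    --routing-method Geographic \
    --protocol HTTPS --port 443 --path "/license.txt"
```

This only creates the profile; the endpoints for each region still get added separately, as described above.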

Once everything is set up and Traffic Manager is happy and working, we can point public DNS to the site.  In our DNS configuration for www.dcac.com we added a CNAME record. A CNAME record in DNS redirects the request to another record.  In our case we pointed www.dcac.com to www-dcac-com.trafficmanager.net. This allows the Traffic Manager service to resolve www.dcac.com to the correct version of the website.
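In zone-file terms, the record looks something like this (the TTL is my example value; your DNS provider’s UI may not expose the raw zone file):

```
; fragment of the dcac.com zone
www    300    IN    CNAME    www-dcac-com.trafficmanager.net.
```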

We can test that this is working as expected by looking at the output of the nslookup command.

By running nslookup on my laptop (which is currently sitting at my house in San Diego, CA), we can see that I’m resolving www.dcac.com to www-dcac-com-central.azurewebsites.net.

If we do the same nslookup command from an Azure VM that’s running in Singapore, I get back www-dcac-com-northeurope.azurewebsites.net for www.dcac.com.

From these outputs, I can see that each user is served the version of the website that’s closer to them.

I’ve now got two duplicate copies of the website running in two different Azure Regions.

Database Failover

On the database side of things, we need to set up some replication as well. “Azure Database for MySQL servers” now supports multi-region replicas, but there’s no auto-failover available yet (hopefully it’ll be coming at some point soon).  For the database copy I did basically the same thing as I did for the websites (and that I’ve done tons of times for Azure SQL DB).

For the database side of things, I set up WordPress to use the dcac-mysqlcentral copy. From there I clicked the Add Replica button, which made a copy of the data to a new server called dcac-mysqlnortheurope that I set up in North Europe.

Since there’s no automatic failover at the database level today, if I need to do a failover I need to edit the wp-config.php file on the webserver, and that’ll kick the connection over to the other server.  I still need to write the PowerShell to do the failover. My next step in this process is going to be to set up some Azure Automation to handle all the database-level failover, but so far this is a pretty good setup, as there’s no website-level failover to worry about.
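The failover edit itself is small. Here’s a sketch of the relevant wp-config.php lines — the database name and host names are my examples; the real host names come from the server’s connection strings in the Azure Portal:

```
/* wp-config.php database settings (names here are my examples). */
define( 'DB_NAME', 'wordpress' );

/* Normal operation: the US Central server. */
define( 'DB_HOST', 'dcac-mysqlcentral.mysql.database.azure.com' );

/* To fail over, comment out the line above and point at the North
   Europe replica instead (after promoting it to read/write):
define( 'DB_HOST', 'dcac-mysqlnortheurope.mysql.database.azure.com' );
*/
```

Note that on Azure Database for MySQL the DB_USER value may embed the server name (user@servername), so it may need updating at the same time.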

The End Result

The end result of all of this is that our website is set up and running in two different places for better availability and better performance of our application.

Denny

The post Making WordPress a globally distributed application on Azure WebApps appeared first on SQL Server with Mr. Denny.

Contact the Author | Contact DCAC

Turning a Corporate Color Palette into a Data Visualization Color Palette

Published On: 2019-05-16By:

Last week, I had a conversation on twitter about dealing with corporate color palettes that don’t work well for data visualization. Usually, this happens because corporate palettes are designed with websites and/or marketing collateral in mind rather than information graphic design. This often results in colors being too bright, dark, or dull to be used together in a report. Sometimes the colors aren’t easily distinguishable from each other. Other times, the colors needed for various situations (main color, ancillary colors, highlight color, error color, KPIs, text, borders) aren’t available in the corporate palette.

You can still stay on brand and create a consistent user experience with a color palette optimized for data visualization. But you may not be using the exact hex values as defined in the corporate palette. I like to say the data viz color palette is “inspired by” the marketing color palette.

I asked on twitter if anyone had a corporate color palette they needed to convert into a data visualization palette, and someone volunteered theirs. So this post is my walk-through of how I went about creating the palette.

Step 1: Identify a Main Color

There is often a main color in the corporate color palette. If that color is a medium intensity color, I usually include that color in my color palette as is. If it is excessively dark, light, or gray, I’ll either tweak the color a bit or use the second color in the color palette.

Step 2: Choose a Color Scheme

Next, I need to decide what kind of relationship the other colors will have with the main color. In other words, I have to decide what type of color scheme I want to use. I tend to go for monochromatic or analogous color schemes. Complementary color schemes can be difficult, depending on your main color. I generally try to stay away from using reds and greens together in the same palette because it’s hard to stay colorblind-friendly and because the primary colors together can make it feel like a Christmas or kindergarten theme. I often try to reserve reds and oranges to draw attention to specific data points, but that isn’t a hard and fast rule.

I need 2 – 4 ancillary colors to go with my main color. I rarely need to use all 4 colors together in one chart, but there are some cases such as line charts with 4 series where that will be necessary. People can preattentively distinguish up to about 7 colors at once, so I need to use fewer than 7 colors in a single chart. If I encounter a situation where I feel like I need more than 4 colors together, I re-evaluate my choice of chart type and my use of color within the chart.

Also, I want the colors to be roughly the same level of brightness and intensity. Most importantly, the colors need to be easily distinguishable from each other.

RGB Color Wheel

Step 3: Choose Highlight and Error Colors

We often need to draw attention to specific data points to indicate that they require attention. This is usually because a value is outside of the expected range. KPIs are common in Power BI reports, so I need to make sure I have a color to indicate “bad” statuses. I also like to have a highlight color that doesn’t necessarily mean “bad”, just “look here”. These highlight and error colors need to be noticeably different from my other colors so that they draw attention to the data points where they are used.

Step 4: Add Border and Background Colors

I like to add grays and browns to go with my color scheme. I’ll use them mostly for borders, grid lines, text, and light background shades. But also, I want to make sure I have 8 colors in my palette. If I have fewer than 8 colors, Power BI will add colors from the default palette at the end of my colors to fill out the full 8 columns.

Color Palette Creation Example

The original corporate color palette that I was given had a lot of colors.

Primary Corporate Colors
Secondary Corporate Colors

The primary colors go all the way around the color wheel. I definitely don’t want to use them all together. The secondary colors have the beginnings of a monochromatic blue palette, an analogous blue/green palette, or an analogous orange/red/purple palette.

I don’t need all of these different hues. I need 8 medium-intensity colors. Power BI will add black and white and provide the shades and tints for me.

Main Color

I’m keeping the main color as it is. It is bright and saturated enough to not be dull/boring and also not so bold as to leave no room for bolder colors to be used to highlight specific data points.

Chart using the main color of my palette

Color Scheme

I choose an analogous color scheme, which means I pick colors that are next to my main color on the color wheel. Since blue is my main color, I stick with cool colors for the ancillary colors.

Main color plus 3 ancillary colors

I want my 4 colors to be easily distinguishable from each other, and I want them to be roughly the same intensity and brightness.

Highlight and Error Colors

I’m adding yellow and red to my palette. The yellow can be a generic highlight color as well as a “caution” color. The red can be my “bad” color. I’m checking that my colors are easily distinguishable for various types of color vision deficiency.

I confirm that my highlight and error colors are easily distinguishable from the other colors for the most common types of color vision deficiency. I can also see here that my second and fourth colors look a bit similar on the deuteranopia line, so I’ll have to be careful how I use them together, perhaps switching to a shade or tint of the fourth color if needed.

Border and Background Colors

Now I add my grays and browns to use for formatting. This completes my color palette.

Full color palette for Power BI Theme

Power BI Theme

I can take the hex values for my colors and drop them in the color theme generator on PowerBI.Tips to get my JSON theme file.

{
  "name": "Analogous",
  "dataColors": ["#00aaff", "#00c2b2", "#213dca", "#7514ff", "#ffd500", "#ff002b", "#768389", "#987665"]
}

When I import my theme file into my Power BI report, I get the additional tints and shades from the colors I provided.

Next I try out my new color theme in a report to see if I need to tweak any colors. This is the true test. The colors may look great in little boxes, but they might need to be altered to work on a full report page. The shade of purple that I used originally (not shown in this blog post) was a bit too intense compared to the other colors, so I replaced it with a slightly muted tint that better matched the other colors. That is the type of thing you will notice when applying your theme to a report. Don’t get too stuck on finding the exact perfect colors. Colors look slightly different on different screens. Just make sure nothing is inadvertently distracting.

Helpful Color Tools

I’m currently using https://color.mediaandme.be to create my color palettes. It’s free, and it allows me to add many (> 6) colors to my palette. Other benefits:

  • It shows me what all the colors look like together
  • It provides a colorblindness simulator
  • It lets me easily tweak hue, saturation, and brightness
  • It generates a link for the color palette I create so I can easily share it with others for feedback

When I need ideas for how to tweak a color, I use https://www.colorhexa.com. I picked the gray color in my palette by getting the grayest tone of my main color from ColorHexa.

Further Reading on Color in Data Visualization

Tiger Color: Basic Color Schemes – Introduction to Color Theory
Maureen Stone: Expert Color Choices
Billie Gray: Another post about colours for data visualisation. Part 3 — DIY Palettes
Christopher Healy: Perception in Visualization


DMVs for the Beginner

Published On: 2019-05-15

I give performance presentations at many different events to all levels of SQL Server professionals. Over time I’ve noticed that some DBAs and developers have never looked at the dynamic management views (DMVs) that have been available within SQL Server since SQL Server 2005. These DMVs are a useful means to determine what is going on with a particular SQL Server instance. So, in this post I am going to list a few of my very favorite DMVs as a reference for those who may be new to them.

Performance Tuning

These dynamic management views are what I use first when looking into any performance issue, or when I need to know what is going on right now with my SQL Server instance. The usage examples are borrowed from docs.microsoft.com.

sys.dm_exec_sessions – This returns current session information such as session IDs and login information like name, host, and client info. It also returns some performance information like memory usage, CPU time, reads, writes, and lots of other useful info.

SELECT login_name ,COUNT(session_id) AS session_count
FROM sys.dm_exec_sessions
GROUP BY login_name;

 

sys.dm_exec_connections – This DMV gives you information on the sessions currently connected to SQL Server: when the connection started, last read, last write, client net address, the most recent sql_handle (the token that uniquely identifies the SQL batch), and other network-connection-related information.

SELECT c.session_id, c.net_transport, c.encrypt_option, c.auth_scheme, s.host_name, s.program_name, 
s.client_interface_name, s.login_name, s.nt_domain, s.nt_user_name, s.original_login_name, c.connect_time,
s.login_time
FROM sys.dm_exec_connections AS c 
JOIN sys.dm_exec_sessions AS s
ON c.session_id = s.session_id 
WHERE c.session_id = @@SPID -- @@SPID returns your current session SPID

 

sys.dm_exec_requests – This one I use a lot when looking at blocking and I don’t have sp_WhoIsActive available. It’s great for looking into blocking and wait times, and for seeing what is currently running on your SQL Server. It will provide wait types; backup/restore/DBCC CHECKDB percent complete; and database and session information. It gives specifics like the statement command (SELECT, INSERT, UPDATE, DELETE, etc.) as well as execution plan information with reads/writes and resource usage.

USE master
GO
SELECT * FROM sys.dm_exec_requests
WHERE session_id = 54;
GO

 

sys.dm_os_wait_stats – This gives you all the waits that have occurred in your system. It gives you a great trail for diagnosing the problems the system is having. It provides wait types and times as well as how many tasks are waiting.

USE master
GO
SELECT * FROM sys.dm_os_wait_stats
GO
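SELECT * works, but the numbers are cumulative, so in practice I order by wait time and filter out some of the harmless system waits. Here’s a sketch — the exclusion list below is only a small sample of the benign wait types, not a complete list:

```sql
-- Top waits by total wait time, excluding a few benign system waits
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms  -- time waiting on the resource itself
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK', N'BROKER_TO_FLUSH',
                        N'XE_TIMER_EVENT', N'CHECKPOINT_QUEUE', N'WAITFOR',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC;
```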

 

sys.dm_os_performance_counters – Looking into the OS through SQL Server is always handy. With this one you can get to the performance monitor counters captured by SQL Server. This does not replace Perfmon for detailed performance analysis, but it includes things like buffer counts, locks, file growth events, columnstore counters, cache hit ratios… lots of goodies in this one.

USE master
GO
SELECT * FROM sys.dm_os_performance_counters
GO

 

Query Specific Tuning

When starting to look at query tuning, you need to dive into query plans and statements. These are what I use to find out what queries are running and to get to their query plans.

sys.dm_exec_query_stats -This is an important one. It returns summed performance statistics for query plans that currently exist in the plan cache. It gives statistics tied to a single plan like reads, writes, total rows, time it took to complete, plan hash, memory grant info,  threads, column store stats, spills, and degree of parallelism (DOP, which was added in SQL Server 2016).

SELECT TOP 5 query_stats.query_hash AS "Query Hash",
 SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
 MIN(query_stats.statement_text) AS "Statement Text" 
 FROM (SELECT QS.*, 
 SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
 ((CASE statement_end_offset
  WHEN -1 THEN DATALENGTH(ST.text)
 ELSE QS.statement_end_offset END
 - QS.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS QS 
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash 
ORDER BY 2 DESC;

 

sys.dm_exec_query_plan – This gives you the actual execution plan in XML format.

USE master
GO
SELECT * FROM sys.dm_exec_query_plan (your plan handle goes here)
GO

 

sys.dm_exec_cached_plans – These are the plans that are currently available in your cache for reuse. You can use this to retrieve the plan handles needed to get the actual plan from sys.dm_exec_query_plan. Here you can also see the use counts of how many times a plan has been reused while in cache.

USE master
GO
SELECT * FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle);
GO

 

sys.dm_exec_sql_text – This returns the actual query text for an execution plan, and is referred to as a dynamic management function instead of a view.

-- acquire sql_handle
SELECT sql_handle FROM sys.dm_exec_requests 
WHERE session_id = 59  -- modify this value with your actual spid

-- pass the sql_handle to sys.dm_exec_sql_text
SELECT * 
FROM sys.dm_exec_sql_text(your sql_handle goes here)

 

Index

sys.dm_db_index_usage_stats – Ever wonder if your index is actually getting used? This one will tell you all the scans, seeks, and writes done on your index, and the last time it was used.

USE master
GO
SELECT * FROM sys.dm_db_index_usage_stats
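The raw view only gives you object_id and index_id, so a join to sys.indexes makes the output readable. A sketch for the current database (run it in the database you’re interested in; like the other stats here, these numbers reset on restart):

```sql
-- Index usage by name for the current database
SELECT
    OBJECT_NAME(ius.object_id) AS table_name,
    i.name                     AS index_name,
    ius.user_seeks, ius.user_scans, ius.user_lookups,
    ius.user_updates,          -- writes that had to maintain the index
    ius.last_user_seek
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id  = ius.index_id
WHERE ius.database_id = DB_ID()
ORDER BY ius.user_seeks + ius.user_scans + ius.user_lookups DESC;
```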

 

sys.dm_db_missing_index_details – This will give you a list of all the missing indexes in your SQL Server. This DMV should not be considered a list of indexes to add, as it will record many potentially overlapping indexes. However, this DMV gives you a great starting point into your server’s indexing needs for you to dive into.

USE master
GO
SELECT * FROM sys.dm_db_missing_index_details
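To turn the raw list into something you can prioritize, the details view is usually joined to the related group-stats view. The “improvement measure” below is a rough community convention, not an official formula:

```sql
-- Missing index suggestions, highest estimated impact first
SELECT TOP 10
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) AS improvement_measure
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY improvement_measure DESC;
```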

 

These are just a few of the many DMVs that SQL Server has to offer. But if you are just starting out, these are definitely DMVs you should take a look at and add to your arsenal for performance tuning and monitoring your SQL Servers. Lastly, keep one thing in mind when reading the data from these queries: like many other things inside SQL Server, this data is only as good as your last reboot or service restart.

 


SQL Server Performance Monitor Objects

Published On: 2019-05-13

Monitoring SQL Server Performance Monitor objects (Perf Mon for those in the know) can be an important part of monitoring your SQL Server instance. Finding information about the performance monitor objects that SQL Server exposes can be tricky, even though the SQL Server product team has documented what these objects all mean.

You can find the documents about the SQL Server Perf Mon objects online on the Microsoft docs website.

If you haven’t had a chance to check it out, I’d very much recommend it. If you are looking for which objects you should be monitoring, this can answer a lot of questions for you.

If you need more help past that, let us know, we can help you out.

Denny

 

The post SQL Server Performance Monitor Objects appeared first on SQL Server with Mr. Denny.


Globally Recognized Expertise

As Microsoft MVPs and Partners as well as VMware experts, we are summoned by companies all over the world to fine-tune and problem-solve the most difficult architecture, infrastructure and network challenges.

And sometimes we’re asked to share what we did, at events like Microsoft’s PASS Summit 2015.
