Improving TweetDeck with Better TweetDeck

Published On: 2019-10-18 By: John Morehouse

If you are involved with the #sqlfamily, you are bound to hear about the benefits of social media platforms such as Twitter.  Twitter helps us engage each other in near real-time discussions on a multitude of topics, including, of course, SQL Server.  One of the tools that many folks use in conjunction with Twitter is TweetDeck.  TweetDeck provides several options to enhance your Twitter experience. One of its main benefits is the ability to have different columns which represent different streams (or channels) of tweets.

If you have never looked at TweetDeck, here’s what it essentially looks like:

On the left is a stream of tweets from people that I follow.  The middle stream shows tweets where people are either replying to me or liking one of my tweets.  The third stream is the #sqlhelp hashtag.  I like to help out the community, so I watch this particular hashtag to try to answer questions about SQL Server whenever possible.

One of the tools I use to tweet is Better TweetDeck.  This is a browser add-on that allows you to do a couple of things that the native TweetDeck doesn’t have baked in yet.  Things like:

  • Edit a previously sent tweet to fix typos.  No more deleting and re-tweeting it
  • Quickly embed emojis and GIFs
  • Easily paste images into your tweets (no more uploading)
  • Advanced muting abilities to help keep your Twitter feed clean
    • including muting based on biography content
    • or even muting accounts that still have a default avatar

Make sure to check out the Better TweetDeck website to see all of its capabilities.

They currently support these browsers:

  • Chrome
  • Firefox
  • Opera

If you are looking to enhance your Twitter experience and are using a supported browser, take a look at Better TweetDeck.  I’ve been using it for a while and have had a really good experience with it.  The ability to quickly add animated GIFs and pictures removes the hassle of having to download or save a file and then attach it to a tweet, which makes me much more efficient.

© 2019, John Morehouse. All rights reserved.



HammerDB for Azure SQL DB

Published On: 2019-09-27 By: John Morehouse

Benchmarking your environment is an important step when introducing new hardware, which is accomplished by running a test workload against the hardware.  There are multiple ways to accomplish this and gather SQL Server performance data.  One of these methods is HammerDB, a free tool that provides TPC-standard benchmarking metrics for multiple database systems, including Microsoft SQL Server.  These metrics are an industry standard and are defined by the Transaction Processing Performance Council (TPC).  The results from benchmarking will help you ensure that the new infrastructure will be able to support the expected workload.

Azure introduces ways to quickly implement new hardware.  However, if the Azure environment isn’t set up correctly, you can introduce issues that could potentially degrade performance.  Thankfully, HammerDB is Azure aware, which allows you to easily benchmark your cloud environment.

Let’s see how to configure HammerDB for Azure.

HammerDB

HammerDB is open source, so you can download it from here.  It’s available for both Windows and Linux.  For our purposes, I will showcase the Windows version.  Go ahead and install it with the default options.  Running the application requires launching a batch file, as it does not include an actual EXE file.  If you accepted the defaults, you should now have a C:\Program Files\HammerDB-3.2 directory.  Within the directory, there should be a HammerDB.bat file.  This will launch the UI for the application.  You can create a shortcut to this batch file on your desktop for ease of future use.
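If you’d like to script that shortcut, here’s a minimal PowerShell sketch (it assumes the default install path shown above):

# Create a desktop shortcut to HammerDB.bat (assumes the default install location)
$batPath  = "C:\Program Files\HammerDB-3.2\HammerDB.bat"
$linkPath = Join-Path ([Environment]::GetFolderPath("Desktop")) "HammerDB.lnk"

$shell = New-Object -ComObject WScript.Shell
$link  = $shell.CreateShortcut($linkPath)
$link.TargetPath       = $batPath
$link.WorkingDirectory = Split-Path $batPath    # run from the install folder so relative paths resolve
$link.Save()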

Go ahead and launch the HammerDB batch file.  Once the UI has initialized, double click on the SQL Server option in the left-hand menu tree.  In the resulting Benchmark Options dialog window, select SQL Server; you can leave the default TPC-C option.

Click OK.  You’ll get a confirmation window; just click OK again.  Now you’ll notice that under SQL Server there is a TPC-C tree that you can expand.  Expand the tree.


For the purpose of this blog post, I’m only going to focus on how to configure HammerDB to connect to and utilize Azure SQL DB.  I will let you play around with the other configuration settings, or I will blog about those at a later point in time.  Expand the Schema Build branch.  You will see an Options and a Build selection.  Double click on Options.

In the resulting window, you can see that there are some parameters that need to be supplied.

First, enter the name of the SQL Server.  Since we want to go to an Azure SQL DB, this is the name of the server that will host the TPCC database.  In this case, I have a demo server, sqldbdemo-east.database.windows.net, which resides in the East region.

Next, select the Azure check box.

Thirdly, you might have to change the version of the ODBC driver.  In my case, I’ve got the latest version of SSMS installed, so ODBC Driver 17 for SQL Server is what I needed to use.  You can check which drivers are installed by looking in the ODBC Data Source Administrator.  If you need a different ODBC driver, you can find them here.
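You can also check from the command line with the Get-OdbcDriver cmdlet (available on Windows 8 / Server 2012 and later); this is just a quick sketch of that approach:

# List the 64-bit SQL Server ODBC drivers installed on this machine
Get-OdbcDriver -Name "*SQL Server*" -Platform "64-bit" | Select-Object Name, Platform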

Finally, supply your credentials.  In this case I am using SQL Server authentication, so I supplied the appropriate user name and password.

Click OK.  Now HammerDB is configured to work in conjunction with Microsoft Azure.  We are just about ready to build the schema for the database that will be used to perform the benchmarking.

There’s one catch, however: you have to create the shell of the database first.  The schema build process will not create the database if it does not already exist.  You can easily go to the server and execute a CREATE DATABASE statement.

CREATE DATABASE [tpcc];
GO

Now that the database shell is present, we can build the schema into the shell.  You can accomplish this either by double clicking “Build” under Options or clicking on the Build button in the toolbar.

Start the schema build.  It’ll prompt you to confirm with OK.


The build process will also load the data required for the actual benchmarking process.  This process could take a few minutes depending on which service tier your Azure SQL DB is sitting on.

Once it has completed, we can confirm that the schema and data are present.
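One quick way to do that check is to query the row counts for each table in the new database.  Here’s a rough sketch using Invoke-Sqlcmd from the SqlServer PowerShell module, with the same SQL authentication used above (the login and password below are placeholders):

# Sanity check: row counts per table in the tpcc database
$params = @{
    ServerInstance = "sqldbdemo-east.database.windows.net"
    Database       = "tpcc"
    Username       = "myUser"        # placeholder SQL login
    Password       = "myPassword"    # placeholder password
    Query          = "SELECT t.name AS TableName, SUM(p.rows) AS TotalRows
                       FROM sys.tables t
                       JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0,1)
                       GROUP BY t.name
                       ORDER BY t.name;"
}
Invoke-Sqlcmd @params | Format-Table -AutoSize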

Now we can commence benchmarking our Azure environment!

Summary

Benchmarking is a process that doesn’t occur as frequently as it probably should.  However, tools like HammerDB continue to evolve to keep pace with cloud technology and help ensure we have the means to do it.  Even if you are moving to the cloud, make sure to do your due diligence and benchmark things.  You might be surprised by the results.

© 2019, John Morehouse. All rights reserved.



Using PowerShell for SQL Saturday Logos

Published On: 2019-09-13 By: John Morehouse

Out of necessity are born the tools of laziness.  This is a good thing.  I have found that organizing and running a SQL Saturday event is a great way to create scripts and processes that make an organizer’s life that much easier.  The 2019 Louisville SQL Saturday helped me create a quick script that downloads all of my sponsor logos into a single folder.

The script is written in PowerShell simply because PowerShell has great versatility and it suited my needs.  The goal was to download the logos and send them off to one of my volunteers, who was going to put them on signage and get them printed.  At the time, I had no easy way to do this without manually pulling each one off the site.

Let’s get to it!

The Script

First, I need to set some variables to make the script easier to use.  I could make these into parameters for usability; however, since I only run this once a year, plain variables are acceptable for my needs.

The event number is the number that corresponds to the particular SQL Saturday event whose logos you want to download.  Note that this would work for any event, not just the one you might be organizing.

$eventNum = "883"
$outputfolder = "c:\temp\SponsorLogos"
$compress = $True

Next, I need to fetch the XML feed from the event.  The XML feed has a wealth of information in it, including the image URL for all of the sponsors.

I will also get the sponsor name, what level they sponsored at (that’s the label column) and the URL for their logo.

#let's get the XML from the SQL Saturday website
[xml] $xdoc = Invoke-WebRequest -Uri "http://www.sqlsaturday.com/eventxml.aspx?sat=$eventNum" -UseBasicParsing

#we only need a subset of each node of the XML, mainly the sponsors
$sponsors = $xdoc.guidebookxml.sponsors.sponsor | select name, label, imageURL

We want to ensure that our output folder (the path from the variable above) exists; otherwise, the process won’t work.  If the folder doesn’t exist, it will be created for us.

If there is an error, there is a CATCH block that will capture the error and react accordingly.

#let's make sure the folder exists
"Checking folder existence...."
if (-not (test-path $outputFolder)) {
    try {
        New-Item -Path $outputFolder -ItemType Directory -ErrorAction Stop | Out-Null #-Force
    }
    catch {
        Write-Error -Message "Unable to create directory '$outputFolder'. Error was: $_" -ErrorAction Stop
    }
    "Successfully created directory '$outputFolder'."
}else{
    "Folder already exists...moving on"
}

Now that I have a destination folder, I can begin to download the logos into the folder.   In this block, I will loop through all of the sponsors.

First, I need to do some cleanup of the sponsor names.  Some sponsors have commas or “.com” in their name, and I wanted to use the sponsor name as the file name so I knew who the logo belonged to.  Once the cleanup is done, I used the INVOKE-WEBREQUEST cmdlet to fetch the file from the respective URL and output the file into the destination directory.

#give me all of the logos
foreach ($sponsor in $sponsors){
    $filename = $sponsor.imageURL | split-path -Leaf

    #get the file name and clean up spaces, commas, and the dot coms
    $sponsorname = $sponsor.name.replace(" ", "").replace(",","").replace(".com","")
    invoke-webrequest -uri $sponsor.imageURL -outfile $outputfolder\$($sponsorName)_$($sponsor.label.ToUpper())_$($fileName)
}

Since I will be sending this to a volunteer to utilize, I wanted the process to automatically zip up the folder to make it easier.  I’ll name the archive the same as the folder, so I can use the SPLIT-PATH cmdlet to get the leaf of the directory path, which is the folder name.

Using the COMPRESS-ARCHIVE cmdlet, I can then compress the folder and put the archive into that same folder.

# zip things up if desired
If ($compress -eq $true){
    $filename = $outputfolder | split-path -Leaf
    compress-archive -path $outputFolder -DestinationPath $outputfolder\$($filename).zip
}

Finally, I wanted the process to open the folder when it was done.  This is simply accomplished by calling “explorer” along with the destination folder name, which will launch the folder in Windows Explorer.

# show me the folder
explorer $outputfolder

Summary

PowerShell is a great way to quickly and easily accomplish tasks.  Whether that is working with online data or manipulating things on your local computer, PowerShell made my life as a SQL Saturday event organizer that much easier.

You can also find the full script on my GitHub Repository.

Enjoy!

© 2019, John Morehouse. All rights reserved.



Moving to Azure SQL Database Serverless

Published On: 2019-08-30 By: John Morehouse

In a previous post, I discussed the public preview of Azure SQL Database Serverless.  This is a newer product released by Microsoft for the Azure ecosystem.  Moving to this new product is really easy to do, and I thought I’d show you how.

Moving the Database

From the screenshot below, you can see that I have a standard Azure SQL DB, Demo1, on the sqldbdemo-east server.  In order to move this to serverless, first click on Configure, located on the left-hand menu under Settings.

On the next screen, we will need to make some decisions about how we want the database configured in this new compute tier.  If the database is currently in the DTU price offering, you will need to move to vCore-based pricing, since that is the only pricing model that offers serverless.

Once we are in the vCore-based pricing, we can continue to configure the database.

The serverless compute tier is only available through the General Purpose service tier, so, if necessary, you will need to first click on that service tier to be able to see the Serverless compute tier.  Once you are in the appropriate tier, just select Serverless compute as shown in the image below.

Next is the compute generation.  For this particular tier, you are limited to the Gen5 generation, which offers up to 16 vCores and a maximum of 48 GB of memory.  You can also select the maximum number of vCores that you want; the limit is 16, and it is easily adjustable with the slider.  In the image below, you will see that I have set this database to a maximum of 4 vCores.

You can also specify a minimum number of vCores to ensure that the database always has at least that much compute available.  I’ve set it to a minimum of 1 vCore.

Next, we can adjust the auto-pause delay.  This is a valuable feature which allows the database to be automatically paused after a period of inactivity.  You can set this delay up to 7 days and 23 hours, which provides quite a bit of flexibility.  Pausing the database helps save costs, because when the database is paused you are only charged for the storage and not the compute.

Next select the appropriate data max size.  Notice that you can go up to 1 terabyte in size which gives you quite a bit of room to grow.

Finally, since the feature is still in public preview, you must accept the preview terms.  Once you have done that, you can click the Apply button and the scaling process will commence.
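If you prefer to script the change rather than click through the portal, the same configuration can be applied with the Az.Sql PowerShell module.  This is only a sketch; it assumes a recent module version that exposes the serverless parameters, and the resource group name is a placeholder:

# Move Demo1 to the serverless compute tier (auto-pause delay is in minutes; 60 = 1 hour)
$params = @{
    ResourceGroupName       = "myResourceGroup"   # placeholder
    ServerName              = "sqldbdemo-east"
    DatabaseName            = "Demo1"
    Edition                 = "GeneralPurpose"
    ComputeModel            = "Serverless"
    ComputeGeneration       = "Gen5"
    VCore                   = 4    # maximum vCores
    MinimumCapacity         = 1    # minimum vCores
    AutoPauseDelayInMinutes = 60
}
Set-AzSqlDatabase @params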

Once the scaling has been completed, the database is shown in SSMS like it was before.

Summary

The process to move from Azure SQL DB on a provisioned server to the new serverless compute tier is quick and easy to accomplish.  This new service tier might be a good fit for your particular workloads, and I highly suggest you take a look at it.  Keep in mind, however, that it is still in public preview, and as such I would not use it for production workloads until it is fully available.  It is a great fit for non-production environments, especially if those environments are not used 24×7, as you can help save some costs by pausing them!

© 2019, John Morehouse. All rights reserved.

