Ever had users come to you and request another version of a report just to add another field and group the data differently? Today was such a day for me. I really don't like having multiple versions of the same report out there. So, I got a little fancy with the current version of the report: I added a parameter, then used expressions to group the data differently and hide columns. For those new to SSRS, I've embedded some links to MSDN to help you along the way.
The report gives summarized counts by invoice date. It currently has a row group on date_invoiced, and the detail row is hidden from the user.
To complete the user's request to add Item Codes and Descriptions to the report, I needed to find a way to group the data by Item and show the Item columns without disturbing the version of the report currently used by many consumers.
Add Parameter
Set Available Values
Set Default Values
Add New Columns
Change Grouping to group data using the parameter
Step 1: Add Parameter
Step 2: Set Available Values
Step 3: Set Default Values. I want to make sure my current users get their version of the report unchanged, so I set the default to No (N).
Step 4: Add Columns. I was lucky that the fields the user requested (Item Code, Item Desc) were already part of the dataset used, so no additional coding was needed in the stored procedure.
Step 5: Change the Visibility attributes. You want to HIDE the columns when the IncludeItemDetails parameter is not Yes (Y). I did this for both item columns.
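As a rough sketch, the Hidden property of each item column can use an expression along these lines (the parameter name comes from this post; adjust the comparison to however your parameter values are defined):

```
=IIF(Parameters!IncludeItemDetails.Value = "Y", False, True)
```

When the parameter is not "Y", Hidden evaluates to True and the column disappears for existing consumers of the report.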
Step 6: Change the grouping. The report is currently grouped by date_invoiced only. To make the data total by Item, I needed to group it by Item only when the IncludeItemDetails parameter is Yes (Y). I did this using an IIF expression: if IncludeItemDetails = Y, group on the field value; otherwise use a constant (0). Again, I did this for both fields.
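A sketch of that conditional group expression, added to the existing row group's Group On list (the field name item_code here is a placeholder; use your dataset's actual field):

```
=IIF(Parameters!IncludeItemDetails.Value = "Y", Fields!item_code.Value, 0)
```

When the parameter is "N", the expression returns the constant 0 for every row, so the group has no effect and the original date-only grouping is preserved.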
As you can see, it's relatively simple to do, and it prevents a whole new report version from being created. For beginners out there, it's a very easy way to start minimizing the number of reports you have to maintain. Try it.
OK everyone, here goes my first crack at replying to a T-SQL Tuesday. For those that don't know what it is, it's a monthly blog topic hosted by a member of the SQL community, started originally by Adam Machanic (t | b).
This month’s topic hosted by Andy Yun (t | b) is on Growing New Speakers, which I find to be a perfect topic for me to leap off from, since this was my first year speaking and blogging.
How did I get started?
I 100% blame Derik Hammer (t | b), who at the time was running my local user group. After attending just one meeting, I was "volun-told" I would be presenting in August. Yep, my name was now on the speaking calendar, and I hadn't even thought of a topic, let alone ever contemplated speaking.
My First Steps to Presenting
After the shock wore off, I sat back and began to think of anything of value I could talk about. Since it would be my first time speaking, I wanted a topic I could speak to comfortably, not necessarily a technical talk. Thus my Lone DBA talk was born. Everyone has something of value in their career to talk about; for me this seemed logical.
Simple Steps to Get Started
Where to begin is always the hardest part after choosing a topic. This was my approach. Of course there is a lot more to it, but getting this far is a huge step forward.
Jot down a list of things you want to talk about
Then put them in a logical order
Then write a sentence or two about each line item
Just taking the time to do this will get you going.
Don’t Be Nervous (HA! Yeah Right)
It's very hard not to be nervous. The way I "try" to get around this is to strike up a conversation with some attendees prior to the start of the session while I am standing up front. After the session begins, I pretend I am still having that one-on-one conversation with them. For me it creates a "friendly" atmosphere rather than a teacher/student one. Now, my biggest problem is talking fast; I try REALLY hard not to, but it's bound to happen as I get excited about the topic. My point is that nobody is a perfect speaker; everyone has their faults. Don't let that discourage you.
Start with your user group, listen to feedback, have someone else review your slide deck, and most of all, enjoy it. There is nothing like a "speaker high". Being able to share your knowledge and influence just one person is very rewarding.
My life for the last 2 years has been a constant battle of putting out fires with system performance; user complaints have finally moved getting this resolved to the top of my priority list.
Let’s see how I tackled the problem…
Very high disk latency; spikes as high as 300,000 milliseconds (ms) are not unusual
Average: 900 – 15,000 ms
Slow User Experience
Over-provisioned VM hosts (what happens on one VM affects the others)
Old NetApp SAN
No infrastructure budget for new hardware
Challenge: Make the system viable with no hardware changes or tweaks
Step 1: Brain Storming (in no particular order)
I can probably tune a ton of old stored procedures
I need to do a full review of all indexes
Investigate daily data loads
How is the data loaded?
Can it be improved?
Step 2: Reduce I/O & Investigate daily data loads
After doing some research, I found that we were truncating 48 tables with over 120 million records daily as part of our morning load. The process took over 2 hours to complete each morning and would often cause blocking. During this time users would run reports and complain that data was not returned in a timely manner. So I thought this would be a great place to start.
I also noticed we were loading 8 tables to keep them “real time for reports” once every hour. This resulted in a total of 9.6 million records being truncated and subsequently reloaded, taking approximately 17 minutes of every hour.
Solution: Implement transactional replication instead of the hourly and morning truncate-and-reload of tables.
Outcome: Once implemented, disk I/O dropped drastically and disk latency fell to an average of 200ms. The morning load time dropped from 2 hours to 9 minutes, and the hourly load went from 17 minutes down to 5 seconds. The disk latency is still not optimal, but it is better; best practices say it should be below 20ms.
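For those who haven't set up transactional replication before, the publication side can be sketched roughly as follows. This is a minimal outline only: the database, publication, article, and subscriber names are placeholders, and a real setup also involves configuring the snapshot and log reader agents.

```sql
-- Enable the source database for publishing (names are placeholders)
USE SourceDB;
EXEC sp_replicationdboption
     @dbname  = N'SourceDB',
     @optname = N'publish',
     @value   = N'true';

-- Create a transactional publication
EXEC sp_addpublication
     @publication = N'ReportTables',
     @status      = N'active';

-- Add one article per table that was previously truncate-and-reloaded
EXEC sp_addarticle
     @publication   = N'ReportTables',
     @article       = N'SalesDetail',       -- placeholder table name
     @source_object = N'SalesDetail';

-- Push the data to the reporting server
EXEC sp_addsubscription
     @publication       = N'ReportTables',
     @subscriber        = N'ReportServer',  -- placeholder server name
     @destination_db    = N'ReportDB',
     @subscription_type = N'push';
```

Once the subscription is initialized, changes flow continuously instead of being bulk-reloaded, which is where the I/O savings come from.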
This solution took a lot of work to accomplish. Once the replicated tables were stable, I first identified which stored procedures were utilizing those tables (I used Idera's SQL Search for this). Then I changed each procedure to read the tables from their new location.
Next, I had to change any SSRS reports that had hard-coded calls to those old tables (note: don't do this; always use a stored procedure). Finally, I looked for any views that called the tables and adjusted those as well.
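If you don't have a search tool handy, a simple catalog query can find most of the affected objects. This is a sketch only; 'OldTableName' is a placeholder, and it won't catch references hidden in dynamic SQL or in report RDL files.

```sql
-- Find procedures, views, functions, and triggers whose definitions
-- mention the old table name anywhere in their source text.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects     AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%OldTableName%'
ORDER BY o.type_desc, object_name;
```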
In two weeks’ time, over 500 stored procedures, reports and views were manually changed.
It is probably worth noting that this was all done in production, simply because we do not have a test environment for this system. Yes, I did get a few bumps and bruises from missing a few table calls in stored procedures, from typos, and from nasty collation errors that arose. These were bound to happen, and some changes I was not able to test during the day. All in all it went really well. Having a test environment would have alleviated these issues, but not all of us have that luxury.
I started by reviewing duplicate indexes and deleted/adjusted them accordingly where needed. Then I went on to looking for missing indexes (where some magic happens). This reduced I/O because proper indexing lessened the number of records that had to be read.
Now, just because these scripts said an index was missing, I didn't just create it; I evaluated its usefulness and determined whether it was worth the extra storage space and overhead. Glenn's script gives you a lot of information to help decide on an index's effectiveness. As you can see with the first one in the result set, if the index were added, over 45,000 user seeks would have utilized it and the query cost would drop on average by 98.43%. Again, I didn't arbitrarily add this index just because it was in the list. Once I determined I would not be creating a duplicate or similar index on the table, and given the potential for better performance with the suggested index, it was added.
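The figures above (user seeks, average impact) come from SQL Server's missing-index DMVs. A stripped-down sketch of the kind of query Glenn Berry's diagnostic scripts build on looks like this:

```sql
-- Missing-index suggestions, roughly ordered by potential benefit.
-- Treat these as candidates to evaluate, not indexes to blindly create.
SELECT  migs.user_seeks,
        migs.avg_user_impact,            -- estimated % query cost reduction
        mid.statement AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups      AS mig
      ON mig.index_group_handle = migs.group_handle
JOIN sys.dm_db_missing_index_details     AS mid
      ON mid.index_handle = mig.index_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```

Keep in mind these DMVs reset on instance restart and never suggest removing anything, which is why the manual evaluation step matters.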
Oh one more OOPS…(why not, learn from my mistakes)
After going through the index exercise and adding indexes to the tables (on the subscriber), I lost all of them except the primary keys. Yep, I made one change to a replicated table and the replication reinitialized; all my indexes were dropped. Needless to say, I was not a happy camper that day. Lucky for me, each index I added was scripted and put into a help desk ticket. I was able to go back through all my tickets and resurrect each index I needed. Now, to be smart, I have scripted all of them and placed them into one file, so I can re-add them all if needed in the future. I haven't found a way around this yet, so if anyone has any information on how to, feel free to let me know.
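If you'd rather generate that "re-add everything" file from the catalog than collect it by hand, something like the following sketch can produce the CREATE INDEX statements. It is deliberately simplified: it ignores included columns, uniqueness, and filtered indexes, and the STRING_AGG function assumes SQL Server 2017 or later.

```sql
-- Generate basic CREATE INDEX statements for all nonclustered,
-- non-primary-key indexes in the current database.
SELECT 'CREATE NONCLUSTERED INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
     + ' (' + STRING_AGG(QUOTENAME(c.name), ', ')
              WITHIN GROUP (ORDER BY ic.key_ordinal) + ');'
FROM sys.indexes        AS i
JOIN sys.tables         AS t  ON t.object_id  = i.object_id
JOIN sys.index_columns  AS ic ON ic.object_id = i.object_id
                             AND ic.index_id  = i.index_id
                             AND ic.is_included_column = 0
JOIN sys.columns        AS c  ON c.object_id  = ic.object_id
                             AND c.column_id  = ic.column_id
WHERE i.type_desc = 'NONCLUSTERED'
  AND i.is_primary_key = 0
GROUP BY t.schema_id, t.name, i.name;
```

Save the output to a file and you have a one-script recovery path the next time a reinitialization wipes the subscriber-side indexes.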
Step 4: Performance Tune Slow Stored Procedures (the fun part for me)
Armed with Grant Fritchey's (B|T) book on execution plans for reference, I began tuning any stored procedure I was aware of that was taking more than 2 minutes to run. In total, I tuned about 77 of them; most were report-related or part of data loads. I found many benefited from indexes being placed on temp tables within the procedures. Others were doing too many reads because of bad WHERE clauses or joins.
Another thing I ran across was functions used in WHERE clauses or joins. One example was date conversion functions applied to both the From and To dates used in a BETWEEN clause. The functions caused each date value to be processed by the function before being evaluated by the WHERE clause, causing many more reads than necessary. To work around this, I read the data in and converted the dates into a temp table, then did my JOINs and WHEREs on the already-converted data. Alternatively, depending on the statement, I converted the value once and placed it in a variable for later evaluation.
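A hedged before/after sketch of the variable approach. The table, column, and function names here (dbo.JulianToDate, dbo.DateToJulian, invoice_table, date_invoiced) are placeholders, not the system's actual objects.

```sql
-- Before: the function wraps the column, so it runs for every row
-- and the predicate cannot seek on an index over date_invoiced.
SELECT *
FROM dbo.invoice_table
WHERE dbo.JulianToDate(date_invoiced) BETWEEN @FromDate AND @ToDate;

-- After: convert the two boundary values once into variables,
-- then compare the raw column so an index seek is possible.
DECLARE @FromJulian int = dbo.DateToJulian(@FromDate);
DECLARE @ToJulian   int = dbo.DateToJulian(@ToDate);

SELECT *
FROM dbo.invoice_table
WHERE date_invoiced BETWEEN @FromJulian AND @ToJulian;
```

The key idea is to move the conversion off the column and onto the constants, so the predicate stays sargable.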
There were so many more things I came across and tuned, such as implicit conversions, table spools, and suboptimal sorts. All of these were fixed with little code changes. I won't go into all of that because this post would get quite long, but you get the point.
Happy side effects: After cleaning up the tables and implementing replication, I actually freed up 300 GB of storage and greatly reduced our backup and restore times.
Things are running much better now; introducing replication reduced enough disk I/O to keep the system viable. Latency now hovers on average between 2 and 200 milliseconds, which is a vast improvement. I do, however, still see spikes in the thousands of milliseconds, and users still complain of slowness when they run large ad-hoc queries within the application (JD Edwards E1). Unfortunately, that goes back to hardware and the application itself, which are things I cannot improve upon. The good news is I am hearing a rumor that we will be installing a SimpliVity solution soon, which I am very excited about. I'll blog again once that solution is in place and let you know how it goes.
This year has been a whirlwind so far, thanks to the Idera ACE program. For those that don’t know what that is …
What is an Idera ACE? (According to Idera)
“ACEs (Advisors & Community Educators) are active community members who have shown a passion for helping the community and sharing their knowledge. We help the ACEs pursue that passion by sponsoring travel to select events and offering guidance for soft skill training.”
Requirements to become an Idera ACE:
Enthusiastic members & leaders of the SQL community
Accomplished contributors to the SQL community
Good speakers, writers, and presenters
Demonstrated a passion for educating fellow community members
Being an ACE has been both a very busy and very rewarding experience for me. Idera has given me the means to share my knowledge as a Lone DBA and help others in the same predicament make the most of it. Since October of last year, thanks to the generosity of the ACE program and the exposure it has given me, I have started my own blog, presented at 9 SQL Saturdays and 2 user groups, hosted 2 Idera #SQLChats on Twitter (links below), and participated in a SQL Hangout with Cathrine Wilhelmsen (B|T).
So far, I have given my Lone DBA session to more than 200 SQL professionals, participated in topic-specific Idera #SQLChats with a combined 600+ tweet interactions, and had 200+ views on a SQL Hangout video chat.
One of the biggest points I try to convey is the power of networking and gaining "virtual co-workers". Making connections with others in the community is vital when you are a Lone DBA. I speak on the importance of building relationships with those who can help you with their experience and expertise. Being an ACE has allowed me to vastly grow my network of "virtual co-workers" by letting me travel to so many SQL Saturdays. I've had the pleasure of meeting so many speakers and attendees, and I make it a point at each of these events to make new co-workers and offer any help I can give others.
The biggest reward for me comes after my session, when attendees do their homework. Yes, I assign homework. During the session, I ask each attendee to take advantage of what the SQL community has to offer by getting on Twitter and beginning to grow their own personal network. Usually within a few days, many of them have created a Twitter account and sent me a tweet. I then take the opportunity to introduce them to the #sqlfamily. I get a kick out of sitting back and watching each of them get involved in the community because of me. It makes me giggle every time.
Of course, all good things must come to an end. My year as an ACE is wrapping up in the next few months and I just wanted to take a minute and say thank you to Idera for a wonderful program. I encourage everyone to take full advantage of these types of programs and make the most of what they have to offer. I urge those that do, to not only take advantage for themselves but also to pay it forward. Give back to the community in any way you can. We can all benefit from each other with our shared experience and knowledge. The ACE program has really motivated me to get more involved and contribute to the #sqlfamily.