Are you using the SQL Server 2016 SSMS? Do you HATE the help experience when you press F1 and it opens the GOD DAMN webpage?
Did you know that you can change this horrible annoying behavior?
YES YOU CAN!!!!!
It’s a shockingly easy change. Open SSMS, click Help > Set Help Preference, then change the default from Launch in Browser to Launch in Help Viewer.
Now, when I did this I had to restart SSMS for the change to actually take effect. Half the time help hangs when I open it, but I’m on the fast ring for Windows 10, so it might be that.
Best blog post ever!
The post Are you using SQL 2016 SSMS? Would you like to use native help? Now you can. appeared first on SQL Server with Mr. Denny.
I’m still fighting with some challenges around inconsistent performance between a primary and secondary replica, so I’ve been waist deep in undocumented system views looking at temporary statistics. One of the things I thought about doing was taking advantage of the Force Plan option in the Query Store in SQL Server 2016. If you are not familiar with this feature, it allows you to force a “preferred” execution plan. In this scenario, our query was running in about 20-30 seconds on the primary, and 20-30 minutes on the secondary. The plans were reasonably close, but I wanted to see what would happen if I forced a plan on the primary.
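If you haven’t used plan forcing before, it’s done through the Query Store catalog views and a stored procedure. This is a sketch of the workflow; the query_id and plan_id values are placeholders you’d pull from your own instance:

```sql
-- Find the query's plans in the Query Store, along with their runtime stats,
-- so you can pick the "preferred" (faster) one.
SELECT qsq.query_id,
       qsp.plan_id,
       qsrs.avg_duration
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
    ON qsq.query_id = qsp.query_id
JOIN sys.query_store_runtime_stats AS qsrs
    ON qsp.plan_id = qsrs.plan_id
ORDER BY qsrs.avg_duration;

-- Force the preferred plan (substitute the IDs returned above).
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```

From that point on, the optimizer will attempt to use the forced plan whenever the query runs, at least on the primary.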
Primer about the Query Store and Availability Groups
Since readable secondary replicas are read-only, the query store on those secondary replicas is also read-only. This means runtime statistics for queries executed on those replicas are not recorded into the query store; all the stats there are from the primary replica. However, I wasn’t sure what would happen if I forced a plan on the primary: would the secondary replica honor that plan?
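You can confirm the read-only behavior for yourself by checking the Query Store state on each replica:

```sql
-- On each replica, check the Query Store's actual state.
-- On a readable secondary this reports READ_ONLY even though the
-- desired state configured on the primary is READ_WRITE.
SELECT actual_state_desc,
       desired_state_desc,
       readonly_reason
FROM sys.database_query_store_options;
```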
Let’s Find Out
The first thing I did was to query the query store catalog views to verify that the plan was forced.
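A query along these lines is enough to confirm forcing took hold; any plan with is_forced_plan = 1 has been forced through the Query Store:

```sql
-- Verify which plans are currently forced, and whether forcing
-- has failed for any of them.
SELECT p.plan_id,
       p.query_id,
       p.is_forced_plan,
       p.force_failure_count,
       p.last_force_failure_reason_desc
FROM sys.query_store_plan AS p
WHERE p.is_forced_plan = 1;
```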
I have two copies of the forced plan. If I generate an estimated query plan on the primary, I see that the plan is forced. You can see this by looking for UsePlan in the XML of the plan.
I did the same thing on the secondary (in the case of the secondary, we are looking at the actual plan, but it doesn’t matter).
You will note that there is no UsePlan. There are extended events and a catalog view that reflect plan forcing failure (Grant Fritchey wrote about this behavior here). While I wouldn’t expect the catalog view to get updated, I was hoping that the extended event might fire. It did not.
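For reference, this is roughly the session I used to watch for the failure; I believe the relevant event is query_store_plan_forcing_failed, though the session name here is just my own:

```sql
-- Watch for plan-forcing failures on the secondary replica.
CREATE EVENT SESSION [QS_Forcing_Failures] ON SERVER
ADD EVENT sqlserver.query_store_plan_forcing_failed
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [QS_Forcing_Failures] ON SERVER STATE = START;
```

Even with the session running while the query executed on the secondary, nothing was captured.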
The query store, as awesome as it is, doesn’t really do much for you on a readable secondary replica. It does not force plans, nor does it record any of your runtime data.
Thanks to Grant Fritchey and Erin Stellato for helping with this post!
If you’re hosted in Azure in the West US region, you may be getting a free hardware upgrade later this month or early next month. Part of the Azure infrastructure is being upgraded, and VMs that are running in the part which is being upgraded will be automatically moved to the new hardware.
Now, if you’re going to be impacted, your company’s service administrator will be getting (or has gotten) an email about this; the upgrade will include a reboot of all the impacted VMs. If your VMs are within an Availability Set, then you’re guaranteed to only have one VM in the Availability Set reboot at a time. This is why we have Availability Sets.
In fact, this is one of the reasons that Azure is a great platform. We’ve got systems that are getting moved to new, (hopefully) better hardware, and there’s basically no impact to the systems in question. Just a reboot of the impacted systems, that’s it. All in all, that’s a pretty minimal impact.
Now, the email does include instructions if you’d like to schedule the upgrade yourself. Instead of letting Azure move your services automatically, just power down all the VMs in the Availability Set, then power them back on. When they power back on, they’ll be automatically moved to the new hardware. Yes, this does mean that you have to take an outage, but you’ve got two weeks’ notice to schedule the outage and complete it before Microsoft will force the issue.
All in all, that’s not bad.
If you’re getting the upgrade, enjoy that new server smell.
The post You may be getting a free hardware upgrade in Azure this month appeared first on SQL Server with Mr. Denny.
I was working with a client this week and we encountered very long HTDELETE waits on “insert as select” queries that were part of their data delivery process. This wait type isn’t documented very well; SQL Skills has it documented here and mentions this:
“Typically these waits occur when queries involve columnstore indexes, but they can also occur without columnstore indexes being involved if a hash operator runs in batch mode.”
Nacho from Microsoft also has a blog post on it here. My theory was that a bad hash join was taking place and causing the wait.
Isolating the Plan
The thing that was very curious about the situation is that the waits were only occurring on the readable secondary replica. At first, I tried to examine the query store to understand whether there were multiple execution plans for a given query. The one problem with that is the readable secondary copy of the data is read-only, which means on that secondary replica you only see the query store data from the primary replica. If there was a plan that was specific to the secondary, I’d have to gather it from the plan cache on the secondary. (Thanks to Erin Stellato (b|t) for this idea.) There was one other problem: the code in question was executing as dynamic SQL from a stored procedure, which meant it was always getting a new execution plan.
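Pulling plans from the secondary’s plan cache looks something like the following; the LIKE filter is a placeholder for a fragment of the query text you’re hunting for:

```sql
-- Pull cached plans directly from the secondary's plan cache,
-- ranked by total elapsed time, filtered to the query in question.
SELECT TOP (10)
       st.text,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE N'%insert%'   -- narrow to the statement you care about
ORDER BY qs.total_elapsed_time DESC;
```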
Ultimately, I think any thought of the readable secondary having a vastly different plan was a red herring. Statistics are going to be the same on both instances, and if there were a missing statistic on the secondary, SQL Server would create it in TempDB. Besides, columnstore indexes don’t use statistics in the traditional sense.
Fortunately I was able to catch a query in the process of waiting on HTDELETE, so I no longer had to look for the needle in the haystack, and I could get to tuning the plans. I was able to grab the SELECT part of the query and generate an estimated plan on both the primary and secondary nodes. The plans were virtually the same on both nodes, with just a minor difference in memory grant between them.
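Catching a query mid-wait is straightforward with the execution DMVs; a query of this shape, run while the problem is occurring, will show every session currently stuck on HTDELETE:

```sql
-- Find sessions currently waiting on HTDELETE, with their statements.
SELECT r.session_id,
       r.wait_type,
       r.wait_time,
       st.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.wait_type = N'HTDELETE';
```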
The query is here.
FROM Database1.Schema1.Object5 Object4
INNER JOIN Database2.Schema1.Object6(?, ?) Object7 ON Object4.Column2 = Object7.Column3
INNER JOIN Database2.Schema1.Object8(?) Object9 ON Object4.Column4 = Object9.Column4
INNER JOIN Database1.Schema1.Object10 Object11 ON Object4.Column5 = Object11.Column6
INNER JOIN Database2.Schema1.Object12(?) Object13 ON Object11.Column7 = Object13.Column7
WHERE 1 = 1
AND Object4.Column8 >= '01-Jan-2017'
The pattern here was that we were taking all of the rows of an ID field in a columnstore index with about 350MM rows and joining them to a function that has 3,500 rows. My gut instinct was that this was a bad match for batch mode hashing. Additionally, SQL Server was recommending I create a b-tree index on the large columnstore table. There was a key lookup in the plan that I wanted to eliminate, but my hunch was that this join was causing the waits.
So before I created the index, the query was taking at least 2-4 minutes when it wasn’t getting hung on the HTDELETE wait. After I created the first index, we got down to about 15 seconds. SQL Server then recommended that I create another index on one of the join tables, which brought my query time down to sub-second. The plan looked a lot more traditional and had lots of my favorite operator, the index seek.
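Since the plan is anonymized, I can’t show the literal index, but the shape of the fix was a plain b-tree index on the join and filter columns of the big columnstore table, something like:

```sql
-- Illustrative only: the table and column names below are the
-- anonymized names from the plan, not the real schema.
CREATE NONCLUSTERED INDEX IX_Object5_Column2
ON Schema1.Object5 (Column2)
INCLUDE (Column8);
```

This gives the optimizer a seekable path into the 350MM-row table instead of hashing the whole thing against a 3,500-row input.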
The Moral of the Story
Sometimes you need non-clustered indexes on columnstore indexes. It stinks, because they do add space, but it’s hard to argue with a performance gain like this. I need to email some friends on the product team to ask, but I’m theorizing that the join was super expensive and causing the query to hang. Anyway, the real answer is to never stop tuning and to trust your instincts.
Thanks to Sentry One for making Plan Explorer free. I used it for the screenshots and anonymization in this post.