SQL Server with Mr. Denny
This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.
Hopefully you find these articles as useful as I did. Don’t forget to follow me on Twitter where my username is @mrdenny.
The post Recommended reading from mrdenny for December 2, 2016 appeared first on SQL Server with Mr. Denny.
Anything that uses a database needs to have the database settings configured correctly, and this includes all the VMware tools. I’ve had a couple of customers contact me recently about VMware’s AirWatch system because the transaction log on the database keeps growing. The reason in both cases so far is that the database is using the default settings and the transaction log was never backed up. I talked about this at Microsoft’s TechEd conference a couple of years ago during my talk titled “Introduction to SQL Server Management for the Non-DBA”. If you prefer a VMware-themed slide deck, I’ve given basically the same presentation at VMworld as well (registration required).
Fixing this is actually really simple. You need to do one of two things:
- Change the recovery model from FULL to SIMPLE.
- Start taking transaction log backups of the database.
Now, how do you choose between these two options? That’s pretty easy. Do you need point-in-time recoverability of the database? If the answer to that question is yes, then select option 2. If the answer to that question is no, then select option 1.
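If you go with option 2, the log backup itself is a single statement you’d put in a SQL Agent job on a schedule. A minimal sketch (the database name and backup path here are placeholders; swap in your own):

```sql
-- Back up the transaction log so SQL Server can reuse the log space.
-- Run this on a schedule (e.g. every 15-60 minutes) via a SQL Agent job.
BACKUP LOG [YourDatabaseName]
TO DISK = N'X:\Backups\YourDatabaseName_log.trn';
```

Once log backups are running regularly, the log space gets reused and the file stops growing, while still giving you point-in-time restore.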
With most AirWatch systems you don’t really care if the database is restored to what it looked like yesterday or 5 minutes ago, so you usually don’t need point-in-time recovery. Just change the database from FULL to SIMPLE and you’re good to go. Doing this is actually really easy. Connect to the SQL Server instance in SQL Server Management Studio, right-click on the database and select Properties, change the recovery model from FULL to SIMPLE, then click OK. That’s it; the log won’t grow any more.
If you need to reclaim the space because the disk is full, then you need to shrink the log file. To do this, click the “New Query” button at the top of management studio and run the following command.
DBCC SHRINKFILE (2, 1024)
That’ll shrink the file down to 1 GB and it “shouldn’t” ever grow any larger than that. There’s no outage involved in doing this, and no risk to the system. Because of the way SHRINKFILE works, you may need to run it a couple of times over a couple of days in order to actually get all the space back.
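The 2 in that command is the file_id of the log file. It’s 2 for most databases, but if you want to confirm it first (or the database has more than one log file), you can look it up:

```sql
-- List the files for the current database; the LOG row(s) show the
-- file_id to pass to DBCC SHRINKFILE. size is in 8 KB pages.
SELECT file_id, name, type_desc, size / 128 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';
```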
If you don’t have SQL Server Management Studio, or you can’t find the properties, here’s a script that’ll do everything I’ve described. Just run it in the AirWatch database (it could be named anything, so you have to change the database name in the USE statement from YourDatabaseName to whatever the name of your AirWatch database is).
USE [YourDatabaseName]
GO
declare @dbname sysname
set @dbname = db_name()
declare @sql nvarchar(1000)
set @sql = 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE'
exec sp_executesql @sql
DBCC SHRINKFILE (2, 1024)
The post My VMware AirWatch Transaction Log File Keeps Growing appeared first on SQL Server with Mr. Denny.
Nesting views in SQL Server by itself probably won’t introduce performance issues. However, what it will do is make it MUCH harder to find performance problems. When you have the top view calling 3 views, which each call 3 more views, you suddenly have a nightmare of views to unwind to resolve the performance issues.
There’s another dirty little secret of views. Even if you aren’t accessing columns from all the tables in the views being referenced, SQL Server still needs to query those tables. This increases IO for your queries, for basically no reason. For this reason alone it often makes sense not to use views, and especially not nested views.
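To make that concrete, here’s a minimal sketch (all the table and view names are made up) of how a nested view can drag in tables the outer query never asks for:

```sql
-- Inner view joins three tables.
CREATE VIEW dbo.vOrderDetails AS
SELECT o.OrderID, o.OrderDate, c.CustomerName, a.City
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
JOIN dbo.Addresses a ON a.AddressID = c.AddressID;
GO
-- Outer view nests the inner one.
CREATE VIEW dbo.vRecentOrders AS
SELECT OrderID, OrderDate, CustomerName, City
FROM dbo.vOrderDetails
WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());
GO
-- This query only needs columns from Orders, but the plan will still
-- join Customers and Addresses unless the optimizer can prove those
-- joins can't change the results (e.g. via trusted foreign keys).
SELECT OrderID, OrderDate
FROM dbo.vRecentOrders;
```

Querying dbo.Orders directly with the date filter would touch one table instead of three.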
Now there are plenty of reasons to use views in applications, however views shouldn’t be the default way of building applications because they do have these potential problems.
While working with a client the other week we had to unwind some massive nested views. Several of these views were nested 5 and 6 levels deep, with multiple views being referenced by each view. When queries would run they would take minutes to execute instead of the milliseconds that they should be running in. The problems that needed to be fixed were all index-based, but because of the massive number of views that needed to be reviewed, it took almost a day to tune a single query.
Eventually the query was tuned and ran in about 200ms, but the cost to the customer was quite high.
Use views when they make sense, but don’t use them every time. Like everything else in SQL Server, there is no one right answer for every problem. Views are one solution for a subset of problems, not the solution to everything.
The post Why Are Nested Views Horrible? appeared first on SQL Server with Mr. Denny.
Many companies use co-located data centers to store their hardware. In some cases (like the colo we at DCAC use) you pay for power, cooling, and a connection to the internet. There is no expectation of added services beyond those three things. In other cases, companies like Rackspace or Level 3 act as what are known as managed service providers (note: I’m not talking about Rackspace or Level 3 in this post, and I’m not going to name the guilty party, but if you want to know, you can reach me privately). Managed service providers offer solutions like shared storage, network management, and other value-added services beyond just a space for your servers.
Enter the Cloud
So Microsoft and Amazon are effectively playing in this space with their IaaS offerings. There’s a big difference, however, as the cloud providers have invested a great deal of money in automation. The same customer I’m talking about in today’s post has some Azure VMs that we are deploying. I built a VM and allocated 3 TB of SSD storage in about 10 minutes this morning. Pretty slick operation: I’m fairly certain that when I ran the PowerShell to deploy the VM, no person got up to do anything. When I added the storage, I’m pretty sure no SAN zoning took place, and if it did, it was a few lines of code. We had previously stayed away from Azure because it’s not the most cost-effective solution for very large workloads (colos tend to have slightly better pricing on big boxes, but you get nickel-and-dimed on other things).
When Your Colo Sucks
So I have two different work streams going with the colo right now. One of which is to configure a site-to-site VPN to Azure. This should be a simple operation; however, it took over a week to get in place, and only after I sent the colo the Cisco instructions on how to configure the VPN were they able to tell me that the Cisco device they had didn’t support the latest route-based VPN in Azure. So we finally get up and running, and then we discover that we can’t reach the Azure VMs from certain on-prem subnets. We ask them to make a change to add those subnets and they completely break our connection. Awesome.
The other workstream is a cluster upgrade. I wanted a new cluster node and storage so we didn’t have to do an in-place upgrade. We started this process about 3 weeks ago, hoping to do the migration on Black Friday. We had a call today to review the configuration. Turns out they had nothing in place, and aren’t even sure they can get a server deployed by NEXT FRIDAY (YES, 10 days to deploy a server; your job is to deploy servers). I heard lots of excuses like “we aren’t working Thurs/Fri” and “we have to connect to two different SANs, we might not have that fibre in stock.” It wasn’t my place to yell WHAT THE EVERLOVING $%^^ on the call, so I started live tweeting. Because that’s ridiculous. Managing and deploying infrastructure was what I did for a living, and I wouldn’t have had a job if it took me 10 days to deploy a server, and that wasn’t even my only job. That really is the colo’s only job. How the #$%^ do you not have fibre in stock? Seriously? My lab at Comcast had all the fibre I could possibly need.
Edited to add this:
This is after last month when they confused SAN snapshots with SAN clones (when it takes 4 hours to recover from a “snapshot” it’s a clone) and presented production cluster storage (that was in use) to a new node. Awesome!!!
Why the Cloud will Ultimately Win
Basically, when it comes to repetitive tasks like deploying OSes and setting up storage, software is way better than humans. Yeah, you need smart engineers and good design, but Azure and AWS are already 90% of the way there. Also, their service levels and response times are much better, because everything is standardized, which makes troubleshooting and automating much easier.