Azure Hybrid Automation–Performing Tasks in a VM

Published On: 2019-04-22

Recently, I learned something new about Azure Automation: you can execute tasks inside of a VM. I haven’t dealt with this situation before; typically, any tasks I want to execute inside of a VM, I execute using a scheduler like SQL Agent or the Windows scheduler. In this case, however, we were trying to reduce costs. The VM is used only for ETL processing (the database is in Azure SQL DB), but we still needed to take backups of the Master Data Services database.

My first thought was to have a SQL Server Agent job that executed either on startup or once a day; however, this was messy and could potentially leave several sets of unnecessary backups a day. I knew I could create an Azure Automation job that would check the status of the VM and start it if it was stopped. What I needed to figure out from there was:

  1. How to get a command to execute inside of the VM
  2. How to shut down the VM afterward if it had not been running previously

Enter Azure Hybrid Runbooks

[Image: HybridAutomation, the Hybrid Runbook Worker architecture diagram]

While the diagram shows a potential on-premises-to-Azure use case, in my case I was simply using this to connect to a VM within Azure. You will have to run some PowerShell on the target machine to register it as a hybrid worker, but after that step is completed, you can simply call Azure Automation cmdlets with the -RunOn parameter, which specifies the name of the hybrid worker group you created.
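For reference, the agent-based registration at the time was done with the HybridRegistration module that ships with the Microsoft Monitoring Agent. A minimal sketch, where the version folder, endpoint URL, and key are placeholders for your own Automation account values:

# Run on the VM after installing the Microsoft Monitoring Agent.
# The <version> folder, registration URL, and token are placeholders
# taken from your Automation account's Keys page.
cd "C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\<version>\HybridRegistration"
Import-Module .\HybridRegistration.psd1
Add-HybridRunbookWorker -GroupName 'Backups' -EndPoint '<registration-url>' -Token '<primary-key>'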

The other trick to this was calling a runbook from a runbook, which wasn’t well documented at the time.

# Ensures you do not inherit an AzureRmContext in your runbook
Disable-AzureRmContextAutosave -Scope Process

# $AzureContext, $RGName, and $vmName are set earlier in the runbook
# (for example, $AzureContext captured from Connect-AzureRmAccount).
# Get the VM's current power state, e.g. 'PowerState/running'
$vm = (Get-AzureRmVM -ResourceGroupName $RGName -Name $vmName -Status).Statuses.Code |
    Where-Object { $_ -like 'PowerState/*' }

if ($vm -ne 'PowerState/running')
{
    # Start the VM, give it time to boot, run the backup, then shut it back down
    Start-AzureRmVM -ResourceGroupName $RGName -Name $vmName
    Start-Sleep -Seconds 35
    Start-AzureRmAutomationRunbook -AutomationAccountName 'Automation' -Name 'BackupDB' `
        -ResourceGroupName $RGName -AzureRmContext $AzureContext -RunOn 'Backups' -Wait
    Stop-AzureRmVM -Name $vmName -ResourceGroupName $RGName -Force
}
else
{
    # The VM was already running, so just run the backup and leave it running
    Start-AzureRmAutomationRunbook -AutomationAccountName 'Automation' -Name 'BackupDB' `
        -ResourceGroupName $RGName -AzureRmContext $AzureContext -RunOn 'Backups' -Wait
}

In order to execute the inner runbook, you simply use Start-AzureRmAutomationRunbook, and since it’s hybrid, we use the aforementioned -RunOn parameter. My BackupDB runbook simply uses the dbatools Backup-DbaDatabase cmdlet to perform a full backup.
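For reference, a minimal sketch of what the body of such a BackupDB runbook might look like; the instance name, database name, and backup path here are hypothetical placeholders, not the actual environment:

# Runs on the hybrid worker inside the VM, so it can reach the local instance.
# 'localhost', 'MDS', and the backup path are hypothetical placeholders;
# parameter names follow recent dbatools releases.
Import-Module dbatools
Backup-DbaDatabase -SqlInstance 'localhost' -Database 'MDS' -Type Full -Path 'D:\Backups'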

It’s quick and easy to get this up and running; the hardest part is getting all of the modules you need into your Automation account.
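One wrinkle worth knowing: jobs that run on a hybrid worker use the modules installed on the worker machine itself, not the ones imported into the Automation account, so the VM needs dbatools installed locally as well. A minimal sketch, assuming PowerShell 5.1 with PowerShellGet on the VM:

# Run once on the hybrid worker VM so the BackupDB runbook can find dbatools
Install-Module -Name dbatools -Scope AllUsers -Force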

Why your company doesn’t need blockchain


Blockchain is the new hot thing in IT. Basically, every company out there is trying to figure out where blockchain fits into their environment. Here’s the big secret of blockchain: your company doesn’t need it.

Blockchain is simply a write-once technology: you can change records, but it keeps track of every change that was made. Most systems need some auditing to see when specific changes were made. For example, think about an order system that your company may have. You probably have auditing of some sort so that you can see when a new order comes in (it’s probably the create date field on the table), and there’s probably some sort of auditing recorded when the shipment is sent out. If the customer fixes their name, you probably aren’t keeping a record of that, because odds are you don’t care.

Think about what systems you have at your company. Do you need to keep a record of every single change that happens to the data, or do you care about what happens to only some of the tables?  Blockchain is a great technology for the systems that need that sort of data recording. But that’s going to be a small number of systems, and we shouldn’t be fooling ourselves into believing that every company needs a system like this.

I’m not going to argue that there are no systems that need this; there definitely are some systems that do. But those systems are going to be in the minority.

Executives are going to read about how blockchain is this great new thing, and they are going to want to implement it. But there’s one major thing that building a system on blockchain requires, and that’s lots of drive space. Even if you want to purge data from the system after 5-6 years, you’ll still need more drive space: deleting data from a blockchain database just means you need more space, as you aren’t actually deleting those records.

A friend of mine described blockchain as a database in full recovery mode where you can’t ever back up (and purge) the transaction log. That’s how the database is going to grow. Remember those lovely databases that were on the BlackBerry Enterprise Server back in the day? The database would be 100 megs and the transaction log would be 1 TB in size. That’s precisely what blockchain is going to look like, but it’s going to be a lot worse, because all your customers and/or employees are going to be using the application. If you have a database that’s 100 gigs in size after a few years (which is a reasonable size for an application), the blockchain log for it could easily be 15-20 TB in size, if not 100 TB. And you’ll have to keep this amount of space online and available to the system.

So if you like buying hard drives (and your storage salesperson likes the nice car they get from the commissions), then blockchain is going to be great. If you don’t want to spend a fortune on storage for no reason, then blockchain is probably something you want to skip.

Denny


What is Implicit Conversion?

Published On: 2019-04-17

Another quick post on simple changes you can make to your code to create more optimal execution plans. This one is on implicit conversions. An implicit conversion happens when SQL Server must automatically convert a value from one data type to another when comparing values, moving data, or combining values with other values. When these values are converted during query processing, the conversion adds overhead and impacts performance.

Here is a great chart from Microsoft that shows which data type combinations result in an implicit or explicit conversion. In this post I will not go into explicit conversions; just know that an explicit conversion is when you explicitly tell SQL Server to CAST or CONVERT a value.

Image credit: https://docs.microsoft.com/en-us/sql/t-sql/data-types/data-type-conversion-database-engine?view=sql-server-2017

Let’s look at a very simple but common implicit conversion scenario. Here we have a table, Employee, with a NationalIDNumber column defined with an NVARCHAR data type. In the query we will use a WHERE clause to search for a specific ID.

In the query below, we have requested the NationalIDNumber equal to the integer value 14417807. For SQL Server to compare these two data types, it must convert the NVARCHAR values to INT, which means every value in that column must go through a conversion process, and that causes a table scan.

USE AdventureWorks2016CTP3 
GO 
SET STATISTICS IO ON 
GO 
SELECT BusinessEntityID, NationalIDNumber, LoginID, HireDate, JobTitle
FROM HumanResources.Employee
WHERE NationalIDNumber = 14417807

In the execution plan you will see an exclamation point warning you that there is a potential issue with the query. Hovering over the SELECT operator, you will see that a CONVERT_IMPLICIT is happening, which may have prevented the optimizer from using a seek.

(1 row affected)

Table ‘Employee’. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Now the question is how to fix it. It’s really simple, but it does require a code change. Let’s look back at our query.

SELECT BusinessEntityID, NationalIDNumber, LoginID, HireDate, JobTitle
FROM HumanResources.Employee
WHERE NationalIDNumber = 14417807

Remember, we asked for an integer value. Just by adding single quotes around the value, we can eliminate our issue. It’s important to always know what data types your columns are when querying them. In this case, since the column is NVARCHAR, all I need to do is supply a character value, which is accomplished by adding single quotes around the value.

SELECT BusinessEntityID, NationalIDNumber, LoginID, HireDate, JobTitle
FROM HumanResources.Employee
WHERE NationalIDNumber = '14417807'

It’s simple to see the results. Note the Scan count 1, logical reads 9, physical reads 0 above. When we rerun the query, we get the following.

(1 row affected)

Table ‘Employee’. Scan count 0, logical reads 4, physical reads 2, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

You can also see in the graphical plan that the warning is now gone, and we have a SEEK instead of the SCAN, which is much more efficient.

The only thing left to clean up in this plan is the key lookup; to fix that, take a look at my blog from last week here. There are many ways you can end up with implicit or explicit conversion issues and the additional overhead they create; in most cases they are extremely easy to fix with a little code change, and this was just one example. To help you find these in your environment, Jonathan Kehayias has written a query to find column-side implicit conversions in your plan cache; be sure to check it out.

Servers running too slow, just add all the cores!

Published On: 2019-04-15

Recently Intel announced some major upgrades to their Xeon CPU line. The long and short of the announcement was that Intel is releasing its 56-core CPUs to the public. That’s a massive amount of CPU power in a very small package. A dual-socket server with two of these CPUs installed would have 112 cores of CPU power, or 224 with Hyper-Threading enabled. That’s a huge amount of CPU power. And if 112 cores aren’t enough for you, these CPUs can scale up to an eight-socket server if needed.

Each of these processors supports up to 4.5TB of RAM per socket, so a dual-socket server could have up to 9TB of RAM. (That’s 36TB of RAM for an eight-socket server, if you’re keeping track.)

For something like a Hyper-V or a VMware host, these are going to be massive machines.

My guess is that we won’t see many of these machines at companies. Based on the companies that Intel had on stage at the keynote (Amazon, Microsoft, and Google), we’ll be seeing these chips showing up in the cloud platforms reasonably soon. The reason I’m thinking this way is two-fold: first, the power behind these chips is massive, and it makes sense that they are meant for a cloud play; second, the people who were on stage at the Intel launch were executives from AWS, Azure, and GCP. By using these chips, the cloud providers will be able to make their cloud platforms probably twice as dense as they are now. That leads to a lot of square feet being saved and reused for other servers.

As to how Intel was able to get 56 cores on a single CPU: it’s through the same technique that they’ve used in the past. They took two dies, each with 28 cores, and made one socket out of them. In the olden days, we’d say that they glued two 28-core CPUs together to make one 56-core CPU. The work Intel had to do to make this happen was definitely more complicated than that, but the thought exercise works for those of us not in the CPU industry.

These new CPUs use a shockingly small amount of power. The chips can use as little as 27 watts, which is amazingly low, especially when you consider the number of CPU cores we’re talking about. Just a few years ago, these power numbers would have been unheard of.

Denny

