Azure B series VMs, cheap burstable CPU

The new B series VMs are now in preview. These are extremely cheap VMs that offer burstable CPU performance.

What exactly does that mean? Well, basically you cannot run these VMs at 100% CPU 24/7. The CPU runs at a predefined baseline, and as the VM runs it banks credits for every hour of run time. Once enough credits have been banked, the VM can burst up to 100%.

This is very similar to AWS T2 instances and is perfect for small web servers, Dev/Test servers and other workloads that don’t require constant high CPU usage. The VM runs at low CPU usage most of the time and, if required, can burst up to 100% for a short period, as long as you have the credits to do so.

The B series VMs are of course priced accordingly, with a 2 core, 4 GB VM priced at $20.09 and a larger 4 core, 16 GB VM priced at just $80.36. These are preview prices, and based on past experience we can expect them to double when general availability is reached. Even then, these are still very low prices.

The following table details the VM specs, the credits banked per hour of runtime and the maximum credits that can be banked.

| Size | vCPUs | Memory (GiB) | Local SSD (GiB) | Base CPU perf of VM | Max CPU perf of VM | Credits banked / hour | Max banked credits |
|---|---|---|---|---|---|---|---|
| Standard_B1s | 1 | 1 | 4 | 10% | 100% | 6 | 144 |
| Standard_B1ms | 1 | 2 | 4 | 20% | 100% | 12 | 288 |
| Standard_B2s | 2 | 4 | 8 | 40% | 200% | 24 | 576 |
| Standard_B2ms | 2 | 8 | 16 | 60% | 200% | 36 | 864 |
| Standard_B4ms | 4 | 16 | 32 | 90% | 400% | 54 | 1296 |
| Standard_B8ms | 8 | 32 | 64 | 135% | 800% | 81 | 1944 |

So as you can see, for example, the B2s only supplies 40% baseline performance (20% of each core). Its credit bank holds a maximum of 576 credits, and 24 credits are banked for each hour of runtime, meaning it takes 24 hours of running to fill the bank. This ratio is the same across the whole B series: after roughly 24 hours of running at or below the baseline, the VM has banked enough credits to burst to 100% of all cores for a limited period.
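
To make the credit arithmetic concrete, here is a minimal sketch using only the numbers from the table above. How fast credits are burned during a burst is not covered in this post, so the sketch only models how long it takes to fill the credit bank.

```python
# Rough sketch of the B series credit banking maths, using only the
# numbers from the table above.
B_SERIES = {
    # size:           (credits banked / hour, max banked credits)
    "Standard_B1s":  (6, 144),
    "Standard_B1ms": (12, 288),
    "Standard_B2s":  (24, 576),
    "Standard_B2ms": (36, 864),
    "Standard_B4ms": (54, 1296),
    "Standard_B8ms": (81, 1944),
}

for size, (per_hour, max_banked) in B_SERIES.items():
    hours_to_fill = max_banked / per_hour
    print(f"{size}: {hours_to_fill:.0f} hours of runtime to fill the credit bank")

# Every size prints 24 hours, which is where the "burst roughly once per
# 24 hours of running" rule of thumb comes from.
```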

You can also see the official post here

Microsoft & NetApp to collaborate to deliver NFS on Azure

NetApp announced that its data services technology will power the first Network File System (NFS) service in the cloud, the Microsoft Azure Enterprise NFS Service.

With Microsoft itself offering Azure Files, a CIFS/SMB based file sharing service, NFS has until now not been a native option on Azure.
This announcement means that NFS will now be offered as a service on Azure, allowing even simpler lift and shift scenarios for customers who are already using NFS based file shares.

The service itself is offered in collaboration with NetApp and will be available in early 2018.
You can sign up now for the preview here

Azure File Sync

Another preview service announced at Ignite was Azure File Sync.

In my opinion this has been a long time coming. Similar functionality was arguably available as part of the StorSimple solution, but this new feature sounds much easier to implement and maintain.

Basically, you install an agent/package on your on-prem file server and it syncs up with Azure File Storage.

The two great things about this service are:

  • Storage tiering, allowing you to offload files to the cloud and free up your on-prem server space (see the sketch after this list for the general idea).
  • Multi-master sync, allowing you to keep file servers in different geographic regions synced with each other. Finally we have a solution for syncing cross-premises file servers using Azure as the central storage point!
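
To illustrate what storage tiering means in practice, here is a purely conceptual sketch: cold files are copied to the cloud share and replaced locally by a stub. The paths, age threshold and upload helper are all hypothetical; the real Azure File Sync agent handles tiering, recall and sync for you.

```python
import time
from pathlib import Path

# Hypothetical tiering policy, only to illustrate the idea: files not
# accessed for a while are offloaded to the cloud share and the local
# copy is replaced by an empty stub. This is NOT how the Azure File
# Sync agent is actually implemented.
COLD_AFTER_DAYS = 60                 # assumed threshold
LOCAL_SHARE = Path("/srv/files")     # hypothetical on-prem file server path

def upload_to_azure_files(path: Path) -> None:
    """Placeholder for copying the file up to the Azure file share."""

def tier_cold_files() -> None:
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for path in LOCAL_SHARE.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            upload_to_azure_files(path)
            # Keep the namespace intact locally while the data lives in the cloud.
            path.write_bytes(b"")
```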

You can read the official announcement here

Back in Business

Just a quick note to all my followers,

As you’ve probably seen, my blog has been “sleepy” for the past 12-15 months.
This has been for multiple reasons, mainly our first child arriving and me changing jobs 18 months ago.
Well the little one is now not so little and I’m changing jobs again.
I figured it’s time to kick this blog back to life. And what better time than during Microsoft Ignite, when all the new announcements for Azure are flowing in!

So hopefully I’m back to blogging.

Azure Data Box

Azure Data Box is Microsoft’s answer to AWS Snowball.

Basically, this is a secure, hardened storage “box” for transferring large amounts of data to Azure.

The basics are simple. The box plugs directly into your network and supports standard SMB/CIFS protocols.

You copy your data to the box, which supports up to 100TB, and ship it back to Microsoft where it will be offloaded to your Azure storage.
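
Because the box presents itself as a standard SMB/CIFS share, loading it looks like any other bulk file copy. A minimal sketch, assuming the share is already mounted; the mount point and source path are hypothetical, and robocopy or rsync would do the same job.

```python
import shutil
from pathlib import Path

# Hypothetical paths: the Data Box share mounted locally over SMB, and
# the dataset you want to ship to Azure.
SOURCE = Path("/data/archive")
DATA_BOX_SHARE = Path("/mnt/databox/archive")

shutil.copytree(SOURCE, DATA_BOX_SHARE, dirs_exist_ok=True)
print("Copy complete - ready to ship the box back to Microsoft.")
```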

There is also integrated support for 3rd party products such as Commvault, Veeam, Veritas & more.

You can read the official statement here

Azure Service Healing

I often get asked what happens if an Azure service or resource crashes.
I’m also sometimes asked how Azure keeps Virtual Machines running 100% of the time.

Well, let’s start with the second question. They don’t! Azure is an extremely reliable platform but is still based on industry standard physical servers, power, networking and so on, and sometimes a failure may occur that causes a VM to reboot or go offline. Having said that, uptime is of course extremely high, with some services higher than others. You can find official SLA listings here.

Now, regarding what happens if a service does fail: Azure has an Auto-Recovery feature called service healing. Auto-Recovery is available across all Virtual Machine sizes in all regions.
Azure has multiple ways to perform health checks on your resources. Every VM deployed as a Web or Worker role has an agent injected into it that runs a health check every 15 seconds, and a web farm behind a load balancer will also have health checks performed by the load balancer itself. If a predefined number of consecutive health checks fail, or a signal from the load balancer marks a role as unhealthy, a recovery action is initiated: the role instance is restarted.
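
The consecutive-failure logic is the core of this pattern. Here is a minimal sketch of that idea; the probe and restart functions are placeholders and the failure threshold is an assumption, not the actual agent implementation.

```python
import time

# Illustrative values only: the 15 second probe interval comes from the
# text above, the failure threshold is an assumption.
PROBE_INTERVAL_SECONDS = 15
FAILURE_THRESHOLD = 3

def probe_instance() -> bool:
    """Placeholder for the agent's health check against the role instance."""
    return True

def restart_instance() -> None:
    """Placeholder for the recovery action (restarting the role instance)."""

def monitor() -> None:
    consecutive_failures = 0
    while True:
        if probe_instance():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURE_THRESHOLD:
                restart_instance()
                consecutive_failures = 0
        time.sleep(PROBE_INTERVAL_SECONDS)
```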

Another check performed is on the health of the virtual machine itself, within which the role instance is running. The virtual machine is hosted on a physical server inside an Azure datacenter, and that physical server runs another agent called the Host Agent. The Host Agent monitors the health of the virtual machine by pinging the guest agent every 15 seconds. It is quite plausible that a virtual machine is simply under stress from its workload, for example with its CPU at 100% utilization, so because the machine may just be under heavy load Azure will wait 10 minutes before performing a recovery action. The recovery action in this case is to recycle the virtual machine with a clean OS disk for a Web & Worker role, while for an Azure Virtual Machine a reboot is performed, keeping the disk state intact.

Apart from this, Azure takes as many measures as possible to predict failures in advance. This includes extensive monitoring of all hardware in the datacenter, including CPU, disk IO and so on.

Azure Cool Blob Storage

Azure’s new cool blob is now GA. But what is cool blob?

Well, cool blob is a new blob storage tier for data that is accessed infrequently. In other words, it’s good for backups, archives, scientific data and so on.

The price of cool blob storage is extremely low, between 1 and 1.6 cents per GB per month, depending on the region.
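
To put that in perspective, here is a quick back-of-the-envelope calculation. It covers storage cost only; transaction and retrieval charges are ignored, and the 10 TB figure is just an example.

```python
# Back-of-the-envelope storage cost for cool blobs, using the per-GB
# prices quoted above.
archive_size_gb = 10 * 1024           # hypothetical 10 TB of backups
price_low, price_high = 0.010, 0.016  # dollars per GB per month

print(f"~${archive_size_gb * price_low:,.0f} to "
      f"${archive_size_gb * price_high:,.0f} per month")
# -> roughly $102 to $164 per month for 10 TB
```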

Cool blobs come with a 99% SLA, compared with the 99.9% SLA offered on the hot tier. The cool blob API is 100% compatible with existing blob storage offerings.

The service is only available using the modern ARM deployment model, so if for some reason you need to use classic deployment you can’t take advantage of it. The service is also offered as block blob storage for unstructured data, so it can’t be used to store IaaS VHDs; this makes sense, as VHDs need random read and write operations.

You can read more on the new offering at the Azure Blog over here