Microsoft just announced General Purpose Storage v2 (GPv2).
Until now we had General Purpose storage accounts that supported everything: blobs (page & block), file shares, queue & table storage.
We also had Blob storage accounts that supported only, you guessed it, blobs.
So why not just use general purpose? Well, two reasons. General purpose didn't support cool or archive blobs (lower tiers for backups, archives etc.).
Also, blob storage via a general purpose account was slightly more expensive per GB, though write operations were cheaper. Basically, choosing between a general purpose and a blob storage account type was a mathematical nightmare.
The new GPv2 supports all storage types, like GPv1, and it also supports hot, cool and archive blobs. So basically all of the features of both of the previous storage account types are supported under the new GPv2. Pricing per GB for blobs is the same as with the blob storage account (cheaper than GPv1), however write operations are charged at the higher rates that were charged for GPv1.
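To get a feel for the trade-off the old account types forced on you, here's a rough cost sketch. The rates below are purely hypothetical placeholders, not real Azure prices:

```python
# Rough monthly cost comparison between the old account types.
# All rates here are hypothetical placeholders, NOT real Azure prices.

def monthly_cost(gb_stored, write_ops, price_per_gb, price_per_10k_writes):
    """Storage cost plus write-transaction cost for one month."""
    return gb_stored * price_per_gb + (write_ops / 10_000) * price_per_10k_writes

# Hypothetical rates: GPv1 costs more per GB but less per write;
# the Blob account type is the opposite.
gpv1 = monthly_cost(1000, 5_000_000, price_per_gb=0.045, price_per_10k_writes=0.0036)
blob = monthly_cost(1000, 5_000_000, price_per_gb=0.030, price_per_10k_writes=0.0050)

print(f"GPv1-style account: ${gpv1:.2f}")
print(f"Blob-style account: ${blob:.2f}")
```

Which account wins depends entirely on your storage/write mix, which is exactly the "mathematical nightmare" GPv2 removes by combining the blob account's per-GB price with GPv1's feature set.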
All newly created storage accounts now default to GPv2, and Microsoft recommends creating all new storage accounts as GPv2 and converting existing storage accounts to GPv2.
The conversion process is very simple: click on the existing storage account, click on Configuration and you will see a button labeled "Upgrade". You will be asked to confirm the storage account name and that's it.
I’ll explain in my next post the difference between Hot, Cool & Archive blobs and how to use them.
Microsoft has just announced the end of the Azure classic portal – https://manage.windowsazure.com
This doesn't mean you have to have a panic attack and migrate all your classic resources to ARM (at least not yet).
You can still access all classic resources through the new portal – https://portal.azure.com
Although I would recommend that anyone with classic resources migrate them over to the newer ARM-based deployment model.
The new portal has already been in production for over two years now and in preview before that, so Microsoft announcing the end of the classic portal is no surprise. All previous services such as Azure AD that were only available in the classic portal are now GA in the new portal, and as previously stated any classic resources can still be accessed via the new portal.
So goodbye classic portal, and thank you for your service.
A new version of Azure Backup has just been released, and it now includes the ability to back up system state.
Previously this would have required deploying the more advanced Azure Backup Server, but it can now be accomplished using the simpler Azure Backup agent.
Also new: you can now set policies and backup retention from the Azure portal, not just from the endpoint server/computer.
Backing up system state is useful for Active Directory servers, IIS and file servers (shares), as it allows for easier recovery of these systems in case of a failure.
Now I’m just waiting to see when full image backup and recovery will become part of the system 🙂
For anyone looking to upgrade an existing agent, you can either download it from the Azure portal or directly from here.
The new B series VM is now in preview. These are extremely cheap VMs that offer burstable CPU performance.
What exactly does that mean? Well, basically you cannot run these VMs at 100% CPU 24/7. The VM's CPU runs at a predefined baseline, and as you run the VM you acquire credits for every hour of runtime. Once enough credits have been banked, the VM can burst up to 100%.
This is very similar to AWS T2 instances and is perfect for small web servers, dev/test servers and so on that don't require high CPU usage. The VM runs most of the time at low CPU usage and, if required, can burst up to 100% for a short period, as long as you have the credits to do so.
The B series VMs are of course priced accordingly, with a 2-core, 4GB VM priced at $20.09 and a larger 4-core, 16GB VM priced at just $80.36. These are preview prices, and based on past experience we can expect them to double when general availability is reached. Even then, these are still very low prices.
The following table details the VM specs and time to acquire credits for a full burst.
[Table columns: Local SSD (GiB) | Base CPU perf of VM | Max CPU perf of VM | Credits banked / hour | Max banked credits]
So as you can see, for example, the B2s only supplies a 40% baseline performance (20% of each core). To burst it requires 864 credits, and 36 credits are banked for each hour of runtime, meaning it takes 864 / 36 = 24 hours of runtime to bank enough for a full burst. The pattern is similar across the B series: they can burst to 100% of all cores for about an hour after roughly a day of running.
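The credit arithmetic above can be sketched in a few lines. The B2s figures come straight from the text; treat this as an illustration of the mechanic, not official Azure accounting:

```python
# Sketch of the B-series credit mechanic, using the B2s numbers quoted
# above (864 credits for a full burst, 36 credits banked per hour).
# Illustration only; not official Azure accounting.

def hours_to_full_bank(credits_needed, credits_per_hour):
    """Hours of baseline runtime needed to bank a full burst."""
    return credits_needed / credits_per_hour

B2S_BURST_CREDITS = 864    # credits consumed by one full burst
B2S_CREDITS_PER_HOUR = 36  # credits banked per hour of runtime

print(hours_to_full_bank(B2S_BURST_CREDITS, B2S_CREDITS_PER_HOUR))  # -> 24.0
```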
You can also see the official post here
NetApp announced that it will be the data services technology powering the first Network File System (NFS) service in the cloud, the Microsoft Azure Enterprise NFS Service.
With Microsoft itself offering Azure files, a CIFS/SMB based file sharing service, NFS has until now not been a native option with Azure.
This announcement means that NFS will now be offered as a service via Azure, allowing even simpler lift-and-shift scenarios for customers who are already using NFS-based file shares.
The service itself is offered in collaboration with NetApp and will be available in early 2018.
You can sign up now for the preview here
Another new announced feature at Ignite was Virtual Network Service Endpoints.
Now, I actually saw this turn up in the portal about a week ago and wasn't quite sure what the feature was until now.
Basically this is a very simple and very useful feature. Up until now, services such as Azure Storage and Azure SQL have been public-facing: you would connect to them over a public IP address and secure access using either a firewall or a security token. Now, I've had quite a few customers who were not happy using a public-facing service. The new service endpoints allow you to connect your VNet address space to Azure services, and you can restrict access so that it comes from your VNet only.
The feature currently supports Azure Storage & Azure SQL, with more services coming in the future.
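Conceptually, the restriction boils down to only accepting traffic whose source sits inside your VNet's address space. Here is a toy illustration of that check; the CIDR and addresses are made up, and the real enforcement happens inside Azure's network fabric, not in your own code:

```python
# Toy illustration of the idea behind service endpoints: only allow
# traffic originating from the VNet's address space. The real check is
# enforced by Azure's network fabric; this just demonstrates the concept.
from ipaddress import ip_address, ip_network

VNET_RANGE = ip_network("10.1.0.0/16")  # made-up VNet address space

def allowed(source_ip: str) -> bool:
    """Would this source be allowed under a VNet-only restriction?"""
    return ip_address(source_ip) in VNET_RANGE

print(allowed("10.1.4.25"))    # inside the VNet  -> True
print(allowed("52.160.70.7"))  # random public IP -> False
```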
Another newly announced preview service at Ignite was Azure File Sync.
Now, in my opinion this has been a long time coming. Something similar was arguably available as part of the StorSimple solution, however this new feature sounds much easier to implement and maintain.
Basically, you install an agent/package on your on-prem file server and it syncs up with Azure File Storage.
The two great things about this service are:
- Storage tiering, allowing you to offload files to the cloud and free up your on-prem server space.
- The solution supports multi-master sync, allowing you to keep file servers in different geographic regions synced with each other. Finally we have a solution for syncing cross-premises file servers using Azure as our central storage point!
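To illustrate the tiering idea from the first bullet, here's a sketch of the kind of policy involved: files untouched for N days become offload candidates while recently used ones stay local. This is not how the actual File Sync agent works internally, just the concept:

```python
# Sketch of a cloud-tiering policy like the one described above: files
# that haven't been accessed recently are candidates for offloading to
# the cloud, freeing local disk. Illustration of the idea only; not the
# File Sync agent's actual implementation.
import time

DAY = 24 * 60 * 60  # seconds in a day

def tier_candidates(files, max_age_days, now=None):
    """Return names of files last accessed more than max_age_days ago.

    `files` maps file name -> last access time (epoch seconds).
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * DAY
    return [name for name, atime in files.items() if atime < cutoff]

now = 1_000_000_000
files = {
    "report.docx": now - 2 * DAY,     # recently used -> stays local
    "old_backup.vhd": now - 90 * DAY, # cold -> offload to Azure Files
}
print(tier_candidates(files, max_age_days=30, now=now))  # -> ['old_backup.vhd']
```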
You can read the official announcement here