Another newly announced preview service at Ignite was Azure File Sync.
Now in my opinion this has been a long time coming. Similar functionality was arguably available as part of the StorSimple solution, but this new feature sounds much easier to implement and maintain.
Basically, you install an agent/package on your on-prem file server and it syncs up with Azure File Storage.
The two great things about this service are:
- Storage tiering, allowing you to offload files to the cloud and free up your on-prem server space.
- The solution supports multi-master sync, allowing you to keep file servers in different geographic regions synced with each other. Finally, we have a solution for syncing cross-premises file servers using Azure as our central storage point!
You can read the official announcement here
Just a quick note to all my followers,
As you’ve probably seen my blog has been “sleepy” for the past 12-15 months.
This has been for multiple reasons, mainly the arrival of our first child and my changing jobs 18 months ago.
Well the little one is now not so little and I’m changing jobs again.
I figured it’s time to kick this blog back to life. And what better time than during Microsoft Ignite, when all the new announcements for Azure are flowing in!
So hopefully I’m back to blogging.
Azure Data Box is Microsoft’s answer to AWS Snowball.
Basically, this is a secure, hardened storage “box” for transferring large amounts of data to Azure.
The basics are simple. The box plugs directly into your network and supports standard SMB/CIFS protocols.
You copy your data to the box, which supports up to 100TB, and ship it back to Microsoft where it will be offloaded to your Azure storage.
There is also integrated support for 3rd party products such as Commvault, Veeam, Veritas & more.
You can read the official statement here
I often get asked what happens if an Azure service or resource crashes.
I’m also sometimes asked how Azure keeps Virtual Machines running 100% of the time.
Well, let’s start with the second question. They don’t! Azure is an extremely reliable platform, but it is still based on industry-standard physical servers, power, networking… and sometimes a failure may occur that causes a VM to reboot or go offline. Having said that, uptime is of course extremely high, with some services being higher than others. You can find official SLA listings here.
Now, regarding what happens if a service does fail: Azure has an auto-recovery feature called service healing. Auto-recovery is available across all Virtual Machine sizes in all regions.
Azure has multiple ways to perform health checks on your resources. Every VM deployed in the form of a Web or Worker role has an agent injected into it that runs a health check every 15 seconds, and a web farm behind a load balancer will also have health checks performed by the load balancer itself. If a predefined number of consecutive health checks fail, or a signal from the load balancer marks a role as unhealthy, then a recovery action is initiated, which is to restart the role instance.
Another check performed is on the health of the virtual machine itself, within which the role instance is running. The virtual machine is hosted on a physical server running inside an Azure datacenter. The physical server runs another agent called the Host Agent, which monitors the health of the virtual machine by pinging the guest agent every 15 seconds. It is quite plausible that a virtual machine is simply under stress from its workload, for example with its CPU at 100% utilization, so Azure will wait 10 minutes before performing a recovery action. The recovery action in this case is to recycle the virtual machine with a clean OS disk for a Web & Worker Role; for an Azure Virtual Machine, it is a reboot that preserves the disk state intact.
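The consecutive-failure logic described above can be sketched as a simple loop. To be clear, this is an illustrative model, not Azure’s actual implementation — the failure threshold and the recovery callback are assumptions based on the behaviour described in the text:

```python
# Illustrative model of the service-healing health checks described above.
# NOT Azure's real code: the threshold value and recover() callback are
# assumptions; only the "consecutive failures trigger recovery" pattern
# and the 15-second check interval come from the text.

FAILURE_THRESHOLD = 3  # consecutive failed checks before recovery (assumed)

def run_health_checks(results, recover):
    """Feed a sequence of health-check results (True = healthy) through
    the consecutive-failure logic; call recover() when the threshold hits."""
    failures = 0
    for healthy in results:          # in Azure, one result every ~15 seconds
        if healthy:
            failures = 0             # any healthy response resets the counter
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                recover()            # e.g. restart the role instance
                failures = 0

# Example: isolated failures do not trigger recovery,
# but three consecutive failures do.
recoveries = []
run_health_checks(
    [True, False, True, False, False, False, True],
    lambda: recoveries.append("restart"),
)
print(recoveries)  # ['restart']
```

The key design point is that the counter resets on every healthy response, so only sustained unhealthiness (not the odd dropped ping) triggers a restart.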
Apart from this, Azure takes as many measures as possible to predict failures in advance. This includes extensive monitoring of all hardware in the datacenter, including CPU, disk I/O, etc.
Azure’s new cool blob is now GA. But what is cool blob?
Well, cool blob is a new blob storage tier for data that is accessed infrequently. In other words, it’s good for backups, archives, scientific data, etc.
The price of cool blob storage is extremely low, between 1 and 1.6 cents per GB per month depending on region.
Cool blobs come with a 99% SLA, compared with the 99.9% SLA offered on the hot tier. The cool blob API is 100% compatible with existing blob storage offerings.
The service is only available using the new ARM deployment model, so if for some reason you need to use classic deployment then you can’t take advantage of it. Also, the service is offered as block blob storage for unstructured data, so it can’t be used to store IaaS VHDs; this makes sense, as VHDs need random read and write operations.
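To put those numbers in perspective, here is a quick back-of-the-envelope calculation. The price and SLA percentages come from the figures quoted above; the data size and the 720-hour month are just illustrative assumptions:

```python
# Rough figures for the cool blob numbers quoted above.
# Price ($0.01/GB/month low end) and SLAs (99% cool, 99.9% hot) are from
# the text; the 10 TB example and 30-day month are assumptions.

HOURS_PER_MONTH = 30 * 24  # 720

def monthly_cost(size_gb, price_per_gb):
    """Estimated monthly storage cost in dollars."""
    return size_gb * price_per_gb

def max_monthly_downtime_hours(sla_percent):
    """Hours per month a service can be down and still meet its SLA."""
    return HOURS_PER_MONTH * (1 - sla_percent / 100)

# 10 TB of backup archives (~10,240 GB) at the low-end cool price:
print(round(monthly_cost(10_240, 0.01), 2))        # 102.4 dollars/month
# Allowed downtime: cool tier (99%) vs hot tier (99.9%):
print(round(max_monthly_downtime_hours(99.0), 1))  # 7.2 hours
print(round(max_monthly_downtime_hours(99.9), 1))  # 0.7 hours
```

So the lower SLA buys you very cheap storage at the cost of roughly ten times the permissible downtime — a reasonable trade for backups and archives that are rarely read.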
You can read more on the new offering at the Azure Blog over here
As I mentioned last week, the new version of Azure AD Connect has been released and now includes a built-in scheduler. This means it no longer relies on the Windows Task Scheduler to run synchronization jobs. While this is definitely an improvement, it does mean that you can no longer use the Windows Task Scheduler to manually run a job. That is now all done through PowerShell, so after tinkering around a bit I decided to list some of the most useful commands for running jobs.
First of all, during the initial installation there is a check box to start the initial sync after installation. If you do not check this box, the sync will never run until the correct command is issued.
To check whether sync is enabled or not, run the following command: Get-ADSyncScheduler
In my case you can see that SyncCycleEnabled is set to true. However, if it is set to false then the client is not performing any syncs.
To enable the sync cycle, issue the following command: Set-ADSyncScheduler -SyncCycleEnabled $True
The sync will be run automatically once every 30 minutes.
To manually kick off a sync cycle we will need to issue one of the following commands.
Start-ADSyncSyncCycle -PolicyType Delta
A delta sync cycle will:
- Delta import on all connectors
- Delta sync on all connectors
- Export on all connectors
This is the command that you will usually use to run a manual sync.
You can also run a full cycle by issuing the following command:
Start-ADSyncSyncCycle -PolicyType Initial
An initial sync cycle will:
- Full import on all connectors
- Full sync on all connectors
- Export on all connectors
You mainly want to issue this command if you have made one of the following changes:
- Added more objects or attributes to be imported from a source directory
- Made changes to the Synchronization rules
- Changed filtering so a different number of objects should be included
If for some reason you need to stop a sync cycle that is currently running, you can issue the following command: Stop-ADSyncSyncCycle
So now that you know the commands you can go ahead and update to the latest version of Azure AD Connect.
The new version of Azure AD Connect has been released.
So what’s new?
- Automatic upgrade feature for Express settings customers.
- Support for the global admin using MFA and PIM in the installation wizard.
- The user’s sign-in method can be changed after the initial install.
- Domain and OU filtering can now be set in the installation wizard.
- A scheduler is now built into the sync engine.
Also, Device Writeback and Directory extensions are now generally available (previously these were preview only).
You can download the new version of Azure AD Connect here.