Nowadays, Azure Files supports identity-based authentication over SMB through two kinds of domain services. You can either use Azure Active Directory Domain Services (Azure AD DS) or the traditional On-Prem Active Directory Domain Services (AD DS) that most environments already have. The setup differs slightly depending on which authentication method you want to use. In this blog, we will go through how to implement On-Prem Active Directory authentication over a Site-to-Site VPN tunnel, instead of going over the public internet.
Using On-Prem AD DS authentication requires a hybrid identity, so you will need to synchronize your users to Azure AD using Azure AD Connect. If you don't have a hybrid identity yet, follow the guide here.
Like with anything new you plan to set up, there are some prerequisites to complete first. You need an Azure subscription, and in that subscription a virtual network, a VPN gateway, a local network gateway and, of course, the storage account that will host the file share.
In this blog, we will use PowerShell to create the needed resources, so make sure you have the Azure PowerShell modules installed. If not, you can install the modules with the following command and connect to Azure.
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Connect-AzAccount
Create the needed Resource Group to host your resources.
New-AzResourceGroup -Name RG1 -Location "northeurope"
Create a storage account in the newly created resource group with your preferred settings.
$resourceGroupName = "RG1"
$storageAccountName = "mystorageacct"
$region = "northeurope"
$storAcct = New-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -SkuName Standard_LRS `
    -Location $region `
    -Kind StorageV2
Create the file share in your storage account.
$shareName = "myshare"

New-AzRmStorageShare `
    -ResourceGroupName $resourceGroupName `
    -StorageAccountName $storageAccountName `
    -Name $shareName `
    -AccessTier TransactionOptimized `
    -QuotaGiB 1024 | `
    Out-Null
After the resource group and storage account have been created, we need to create a virtual network where our VPN gateway will reside.
$virtualNetwork = New-AzVirtualNetwork `
    -ResourceGroupName RG1 `
    -Location northeurope `
    -Name VNet1 `
    -AddressPrefix 10.1.0.0/16
Create a subnet configuration.
$subnetConfig = Add-AzVirtualNetworkSubnetConfig `
    -Name Subnet1 `
    -AddressPrefix 10.1.0.0/24 `
    -VirtualNetwork $virtualNetwork
Apply the subnet configuration to the virtual network.
$virtualNetwork | Set-AzVirtualNetwork
After the network and the subnet have been created, it's time to create the virtual network gateway and the gateway subnet.
First, store the virtual network in a variable.
$vnet = Get-AzVirtualNetwork -ResourceGroupName RG1 -Name VNet1
Create the Gateway Subnet.
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 -VirtualNetwork $vnet
Set the subnet configuration to the Virtual Network.
$vnet | Set-AzVirtualNetwork
After the Gateway Subnet has been created, we will need an IP for the Virtual Network Gateway.
Request a public IP.
$gwpip = New-AzPublicIpAddress -Name VNet1GWIP -ResourceGroupName RG1 -Location 'northeurope' -AllocationMethod Dynamic
Create a Gateway configuration.
$vnet = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName RG1
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
$gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id
The last thing to do is to create the VPN gateway. Provisioning a VPN gateway can take an hour or more.
New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName RG1 `
    -Location 'northeurope' -IpConfigurations $gwipconfig -GatewayType Vpn `
    -VpnType RouteBased -GatewaySku VpnGw1
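Provisioning runs in the background, so if you want to check whether the gateway is ready before moving on, you can poll its provisioning state. This is an optional check and not required for the setup:

```powershell
# Shows "Updating" while the gateway is still provisioning and "Succeeded" once it is ready
(Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName RG1).ProvisioningState
```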
After creating a Virtual Network Gateway, you will need to create a Local Network Gateway. Fill in your On-Prem details.
New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName RG1 `
    -Location 'northeurope' -GatewayIpAddress 'On-Prem Public IP here' `
    -AddressPrefix @('On-Prem Subnets here','On-Prem Subnets here')
Once you have created the Virtual Network Gateway and the Local Network Gateway, you can create a Connection between these two.
First set the variables.
$gateway1 = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName RG1
$local = Get-AzLocalNetworkGateway -Name Site1 -ResourceGroupName RG1
And create the Connection with your own details.
New-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName RG1 `
    -Location 'northeurope' -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local `
    -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'abc123'
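Once the connection has been created and your On-Prem VPN device is configured with the same shared key, you can verify that the tunnel comes up by checking the connection status. This is an optional check:

```powershell
# ConnectionStatus should change from "Unknown"/"Connecting" to "Connected" once the tunnel is established
(Get-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName RG1).ConnectionStatus
```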
Once you have everything you need, it's time to make the storage account accessible only through the Site-to-Site VPN tunnel. There are two ways to do this: you can create either a Private Endpoint or a Service Endpoint. In this blog, we will use the Service Endpoint.
By default, the storage account's connectivity method is a public endpoint that allows access from all networks over the internet, so we need to restrict access to selected virtual networks.
To restrict access to the storage account's public endpoint, we need to gather some information about the storage account and the virtual network into variables. Fill in your environment details.
$storageAccountResourceGroupName = "<storage-account-resource-group>"
$storageAccountName = "<storage-account-name>"
$restrictToVirtualNetworkResourceGroupName = "<vnet-resource-group-name>"
$restrictToVirtualNetworkName = "<vnet-name>"
$subnetName = "<subnet-name>"

$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName $storageAccountResourceGroupName `
    -Name $storageAccountName `
    -ErrorAction Stop

$virtualNetwork = Get-AzVirtualNetwork `
    -ResourceGroupName $restrictToVirtualNetworkResourceGroupName `
    -Name $restrictToVirtualNetworkName `
    -ErrorAction Stop

$subnet = $virtualNetwork | `
    Select-Object -ExpandProperty Subnets | `
    Where-Object { $_.Name -eq $subnetName }

if ($null -eq $subnet) {
    Write-Error `
        -Message "Subnet $subnetName not found in virtual network $restrictToVirtualNetworkName." `
        -ErrorAction Stop
}
We need to enable the Microsoft.Storage service endpoint on the subnet in order to allow traffic through the Azure network fabric towards the public endpoint of the storage account.
$serviceEndpoints = $subnet | `
    Select-Object -ExpandProperty ServiceEndpoints | `
    Select-Object -ExpandProperty Service

if ($serviceEndpoints -notcontains "Microsoft.Storage") {
    if ($null -eq $serviceEndpoints) {
        $serviceEndpoints = @("Microsoft.Storage")
    } elseif ($serviceEndpoints -is [string]) {
        $serviceEndpoints = @($serviceEndpoints, "Microsoft.Storage")
    } else {
        $serviceEndpoints += "Microsoft.Storage"
    }

    $virtualNetwork = $virtualNetwork | Set-AzVirtualNetworkSubnetConfig `
        -Name $subnetName `
        -AddressPrefix $subnet.AddressPrefix `
        -ServiceEndpoint $serviceEndpoints `
        -WarningAction SilentlyContinue `
        -ErrorAction Stop | `
    Set-AzVirtualNetwork `
        -ErrorAction Stop
}
The last step in restricting traffic to the storage account is to create a network rule and add it to the storage account's network rule set.
$networkRule = $storageAccount | Add-AzStorageAccountNetworkRule `
    -VirtualNetworkResourceId $subnet.Id `
    -ErrorAction Stop

$storageAccount | Update-AzStorageAccountNetworkRuleSet `
    -DefaultAction Deny `
    -Bypass AzureServices `
    -VirtualNetworkRule $networkRule `
    -WarningAction SilentlyContinue `
    -ErrorAction Stop | `
    Out-Null
Now that we have created all the needed resources in Azure, it's time to head over to your On-Prem AD DS server. There are a few ways to accomplish AD DS authentication, but in this blog we will use the AzFilesHybrid PowerShell module.
Download and unzip the AzFilesHybrid module from here.
Run PowerShell as an administrator and navigate to the folder where you unzipped the AzFilesHybrid module. Fill in your environment details.
# Change the execution policy to unblock importing the AzFilesHybrid.psm1 module
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

# Navigate to where AzFilesHybrid is unzipped and run this to copy the files into your path
.\CopyToPSPath.ps1

# Import the AzFilesHybrid module
Import-Module -Name AzFilesHybrid

# Log in with an Azure AD credential that has either a storage account Owner or Contributor Azure role assignment
Connect-AzAccount

# Define parameters
$SubscriptionId = "<your-subscription-id-here>"
$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"

# Select the target subscription for the current session
Select-AzSubscription -SubscriptionId $SubscriptionId

# Register the target storage account with your Active Directory environment under the target OU
# (for example, specify the OU with Name "UserAccounts" or DistinguishedName "OU=UserAccounts,DC=CONTOSO,DC=COM").
# You can use the Get-ADOrganizationalUnit cmdlet to find the Name and DistinguishedName of your target OU.
# If you are using the OU Name, specify it with -OrganizationalUnitName; if you are using the OU
# DistinguishedName, set it with -OrganizationalUnitDistinguishedName. Provide one of the two to specify the target OU.
# You can create the identity that represents the storage account as either a Service Logon Account or
# a Computer Account (the default), depending on the AD permissions you have and your preference.
# Run Get-Help Join-AzStorageAccountForAuth for more details on this cmdlet.
Join-AzStorageAccountForAuth `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName `
    -DomainAccountType "<ComputerAccount|ServiceLogonAccount>" <# Default is ComputerAccount #> `
    -OrganizationalUnitDistinguishedName "<ou-distinguishedname-here>" <# If you don't provide an OU, the AD identity that represents the storage account is created under the root directory. #> `
    -EncryptionType "<AES256|RC4|AES256,RC4>" <# The encryption algorithm used for Kerberos authentication. The default "'RC4','AES256'" supports both. #>

# Run the command below if you want to enable AES 256 authentication. If you plan to use RC4, you can skip this step.
Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName

# Run Debug-AzStorageAccountAuth to conduct a set of basic checks on your AD configuration with the
# logged-on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+. For more details on the checks
# performed, see the Azure Files Windows troubleshooting guide.
Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose
A good thing to remember with this setup is that the AD DS account created by the cmdlet represents the storage account. If the AD DS account is created under an organizational unit that enforces password expiration, you must update the password before the maximum password age is reached. You can update the password with the following command. Fill in your environment details.
# Update the password of the AD DS account registered for the storage account
# You may use either kerb1 or kerb2
Update-AzStorageAccountADObjectPassword `
    -RotateToKerbKey kerb2 `
    -ResourceGroupName "<your-resource-group-name-here>" `
    -StorageAccountName "<your-storage-account-name-here>"
After you have successfully created the ComputerAccount or the ServiceLogonAccount, you can verify the configuration. Fill in your environment details.
# Get the target storage account
$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName "<your-resource-group-name-here>" `
    -Name "<your-storage-account-name-here>"

# List the directory service of the selected storage account
$storageAccount.AzureFilesIdentityBasedAuth.DirectoryServiceOptions

# List the directory domain information if the storage account has enabled AD DS authentication for file shares
$storageAccount.AzureFilesIdentityBasedAuth.ActiveDirectoryProperties
If successful, the output should look similar to this.
DomainName:<yourDomainHere>
NetBiosDomainName:<yourNetBiosDomainNameHere>
ForestName:<yourForestNameHere>
DomainGuid:<yourGUIDHere>
DomainSid:<yourSIDHere>
AzureStorageID:<yourStorageSIDHere>
Now that you have enabled authentication through AD DS, we need to assign share-level permissions to the users. You can do this either through PowerShell or from the Azure portal. Fill in your environment details.
# Get the name of the custom role
# Use one of the built-in roles: Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor, Storage File Data SMB Share Elevated Contributor
$FileShareContributorRole = Get-AzRoleDefinition "<role-name>"

# Constrain the scope to the target file share
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"

# Assign the custom role to the target identity with the specified scope
New-AzRoleAssignment -SignInName <user-principal-name> -RoleDefinitionName $FileShareContributorRole.Name -Scope $scope
After the share-level permissions have been set, we need to set up the directory/file-level permissions.
First we need to mount the Azure file share on a domain-joined computer or server. Fill in your environment details.
$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
if ($connectTestResult.TcpTestSucceeded) {
    net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> /user:Azure\<storage-account-name> <storage-account-key>
} else {
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}
Note here: now that we are connecting to the storage account's public endpoint through a restricted network, you will need to route traffic towards *.file.core.windows.net through the Site-to-Site VPN tunnel in your On-Prem firewall. Otherwise the computer/server will try to reach the public endpoint over the internet, which this configuration does not allow. You can also handle the routing with some other method, as long as the traffic is forwarded into the VPN tunnel. If you are using a private endpoint, this configuration is not necessary.
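One quick way to sanity-check the routing from the On-Prem machine is to resolve the storage endpoint and look at which local route Windows selects for it. This is an optional check with placeholder values; the chosen route should point towards your VPN device rather than your default internet gateway:

```powershell
# Resolve the storage account's file endpoint to its public IP
$ip = (Resolve-DnsName "<storage-account-name>.file.core.windows.net" -Type A).IP4Address | Select-Object -First 1

# Show the local interface and next hop Windows would use to reach that IP
Find-NetRoute -RemoteIPAddress $ip
```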
You can assign the directory/file-level permissions with icacls, for example.
icacls <mounted-drive-letter>: /grant <user-email>:(f)
After you have set all the needed permissions, you can start copying your files from On-Prem file shares to the Azure file share.
If you have directories or files on on-premises file servers with Windows DACLs configured against the AD DS identities, you can copy them over to Azure Files with the ACLs preserved, using traditional file copy tools like Robocopy.
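As a rough sketch (the share path and drive letter are placeholders for your environment), a Robocopy run that mirrors a share and preserves the NTFS security information could look like this; note that copying owner and auditing info with /COPYALL requires backup privileges:

```powershell
# /MIR mirrors the source directory tree, /COPYALL (= /COPY:DATSOU) also copies
# the security (ACLs), owner and auditing info, /DCOPY:DAT preserves directory timestamps
robocopy \\<on-prem-server>\<share-name> <mounted-drive-letter>:\ /MIR /COPYALL /DCOPY:DAT
```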
The AzFilesHybrid module also provides tools for setting up permissions, for example the Move-OnPremSharePermissionsToAzureFileShare cmdlet, which helps migrate local share permissions to Azure RBAC's built-in roles for files.