Known issues in the Azure Local 2411.1 release

Applies to: Azure Local, version 23H2

This article identifies critical known issues and their workarounds in the Azure Local 2411.1 release.

These release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your Azure Local instance, carefully review the information contained here.

Important

For information about supported update paths for this release, see Release information.

For more information about new features in this release, see What's new in 23H2.

Known issues for version 2411.1

This software release maps to software version number 2411.1.10.

Important

New deployments of this software use the 2411.1.10 build. If you updated from 2408.2, you received either the 2411.0.22 or the 2411.0.24 build. Both builds can be updated to 2411.1.10.

Release notes for this version include the issues fixed in this release, known issues in this release, and release note issues carried over from previous versions.
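
To confirm which build your instance is currently running before you apply this update, you can query the update service from any node. This is a minimal sketch; the properties shown are based on the solution update cmdlets and may vary by build.

# Run from a PowerShell session on an Azure Local node.
# Shows the currently installed solution version (for example, 2411.0.22 or 2411.0.24).
Get-SolutionUpdateEnvironment | Select-Object CurrentVersion, State

# Lists the updates that have been discovered, including 2411.1.10 once it's available.
Get-SolutionUpdate | Select-Object DisplayName, Version, State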

Note

For detailed remediation for common known issues, see the Azure Local Supportability GitHub repository.

Fixed issues

The following issues are fixed in this release:

| Feature | Issue | Workaround/Comments |
|---|---|---|
| Arc VM management | Redeploying an Arc VM causes connection issues with that Arc VM, and the agent disconnects. | |
| Upgrade | Resolved a conflict with third-party PowerShell modules. | |
| Upgrade | Stopped indefinite logging of negligible error events. | |
| Upgrade | Added validation to check for free memory. | |
| Update | Added a check to ensure that solution extension content is copied correctly. | |
| Deployment, Upgrade | If the time zone isn't set to UTC before you deploy Azure Local, an ArcOperationTimeOut error occurs during validation. The following error message is displayed: OperationTimeOut, No updates received from device for operation. | |
| Security vulnerability | Microsoft identified a security vulnerability that could expose the local admin credentials used during the creation of Arc VMs on Azure Local to non-admin users on the VM and on the hosts. | Arc VMs running on releases prior to the Azure Local 2411 release are vulnerable. |

Known issues in this release

Microsoft isn't aware of any issues in this release.

Known issues from previous releases

The following table lists the known issues from previous releases:

Feature Issue Workaround
Update When updating from version 2408.2.7 to 2411.0.24, the update process could fail with the following error message: Type 'CauPreRequisites' of Role 'CAU' raised an exception: Could not finish cau prerequisites due to error 'Cannot remove item C:\UpdateDistribution\<any_file_name>: Access to the path is denied.' For detailed steps on how to mitigate this issue, see Azure Local Troubleshooting Guide for Update.
Update With the 2411 release, solution and Solution Builder Extension (SBE) updates aren't combined in a single update run. To apply a Solution Builder Extension package, you need a separate update run, as shown in the example that follows this entry.
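If you need to apply a Solution Builder Extension package on its own, the following is a minimal sketch of starting it as a separate run; the SbeVersion property and the DisplayName filter are illustrative and depend on how the SBE package is named in your environment.

# List discovered updates; the solution update and the SBE package appear as separate entries.
Get-SolutionUpdate | Format-Table DisplayName, Version, SbeVersion, State

# Start only the Solution Builder Extension update in its own run.
Get-SolutionUpdate | Where-Object DisplayName -like "*Solution Builder Extension*" | Start-SolutionUpdate
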
Update When applying a solution update in this release, the update can fail. This failure occurs only if the update was started before November 26. The issue that causes the failure can result in one of the following error messages:

Error 1 - The step "update ARB and extension" error "Clear-AzContext failed with 0 and Exception calling "Initialize" with "1" argument(s): "Object reference not set to an instance of an object." at "Clear-AzPowerShellCache".

Error 2 - The step "EvalTVMFlow" error "CloudEngine.Actions.InterfaceInvocationFailedException: Type 'EvalTVMFlow' of Role 'ArcIntegration' raised an exception: This module requires Az.Accounts version 3.0.5. An earlier version of Az.Accounts is imported in the current PowerShell session. Please open a new session before importing this module. This error could indicate that multiple incompatible versions of the Azure PowerShell cmdlets are installed on your system. Please see https://aka.ms/azps-version-error for troubleshooting information."

Depending on the version of PowerShell modules, the above error could be reported for both versions 3.0.4 and 3.0.5.
For detailed steps on how to mitigate this issue, go to: https://aka.ms/azloc-update-30221399.
Repair server After you repair a node and run the command Set-AzureStackLCMUserPassword, you may encounter the following error:

CloudEngine.Actions.InterfaceInvocationFailedException: Type 'ValidateCredentials' of Role 'SecretRotation' raised an exception: Cannot load encryption certificate. The certificate setting 'CN=DscEncryptionCert' does not represent a valid base-64 encoded certificate, nor does it represent a valid certificate by file, directory, thumbprint, or subject name. at Validate-Credentials
Follow these steps to mitigate the issue. Start by defining the following values (the sketch at the end of this workaround shows one way to capture them):

$NewPassword = <Provide new password as secure string>
$OldPassword = <Provide the old/current password as secure string>
$Identity = <LCM username>
$credential = New-Object -TypeName PSCredential -ArgumentList $Identity, $NewPassword

1. Import the necessary module:

Import-Module "C:\Program Files\WindowsPowerShell\Modules\Microsoft.AS.Infra.Security.SecretRotation\PasswordUtilities.psm1" -DisableNameChecking

2. Check the status of the ECE cluster group:

$eceClusterGroup = Get-ClusterGroup | Where-Object {$_.Name -eq "Azure Stack HCI Orchestrator Service Cluster Group"}

if ($eceClusterGroup.State -ne "Online") {Write-AzsSecurityError -Message "ECE cluster group is not in an Online state. Cannot continue with password rotation." -ErrRecord $_}

3. Update the ECE with the new password:

Write-AzsSecurityVerbose -Message "Updating password in ECE" -Verbose

$eceContainersToUpdate = @("DomainAdmin", "DeploymentDomainAdmin", "SecondaryDomainAdmin", "TemporaryDomainAdmin", "BareMetalAdmin", "FabricAdmin", "SecondaryFabric", "CloudAdmin")

foreach ($containerName in $eceContainersToUpdate) {Set-ECEServiceSecret -ContainerName $containerName -Credential $credential 3>$null 4>$null}

Write-AzsSecurityVerbose -Message "Finished updating credentials in ECE." -Verbose

4. Update the password in Active Directory:

Set-ADAccountPassword -Identity $Identity -OldPassword $OldPassword -NewPassword $NewPassword
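
For the placeholder values at the start of this workaround, the secure strings can be captured interactively. This is a minimal illustration; the user name shown is hypothetical, so use your own deployment (LCM) user account.

# Prompt for the passwords as secure strings (input isn't echoed to the screen).
$NewPassword = Read-Host -Prompt "Enter the new password" -AsSecureString
$OldPassword = Read-Host -Prompt "Enter the current password" -AsSecureString
$Identity = "AzureStackLCMUser"   # hypothetical LCM user name
$credential = New-Object -TypeName PSCredential -ArgumentList $Identity, $NewPassword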
Arc VM management Using an exported Azure VM OS disk as a VHD to create a gallery image for provisioning an Arc VM is unsupported. Run the command restart-service mochostagent to restart the mochostagent service.
Networking When a node is configured with a proxy server that has capital letters in its address, such as HTTPS://10.100.000.00:8080, Arc extensions fail to install or update on the node in existing builds, including version 2408.1. However, the node remains Arc connected. Follow these steps to mitigate the issue:

1. Set the environment values in lowercase. [System.Environment]::SetEnvironmentVariable("HTTPS_PROXY", "https://10.100.000.00:8080", "Machine").

2. Validate that the values were set. [System.Environment]::GetEnvironmentVariable("HTTPS_PROXY", "Machine").

3. Restart Arc services.

Restart-Service himds

Restart-Service ExtensionService

Restart-Service GCArcService

4. Update the Azure Connected Machine agent (azcmagent) with the lowercase proxy information.

& 'C:\Program Files\AzureConnectedMachineAgent\azcmagent.exe' config set proxy.url https://10.100.000.00:8080

& 'C:\Program Files\AzureConnectedMachineAgent\azcmagent.exe' config list
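
To confirm that the agent picked up the new proxy configuration, you can also dump the agent status; azcmagent show is a standard Azure Connected Machine agent command.

# Optionally verify the agent status and configuration after the proxy change.
& 'C:\Program Files\AzureConnectedMachineAgent\azcmagent.exe' show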
Networking When Arc machines go down, the "All Clusters" page in the new portal experience shows a "PartiallyConnected" or "Not Connected Recently" status. Even when the Arc machines become healthy again, they may not show a "Connected" status. There's no known workaround for this issue. To check the connectivity status, use the old experience to see if it shows as "Connected".
Security The SideChannelMitigation security feature may not show an enabled state even if it's enabled. There's no workaround in this release. If you encounter this issue, contact Microsoft Support to determine next steps.
Arc VM management The Mochostagent service might appear to be running but can get stuck without updating logs for over a month. You can identify this issue by checking the service logs in C:\programdata\mochostagent\logs to see if logs are being updated. Run the following command to restart the mochostagent service: restart-service mochostagent.
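A minimal sketch of checking whether the logs are stale before restarting the service; the log path is the one referenced in this entry.

# Check when the mochostagent logs were last written; a weeks-old timestamp indicates the stuck state.
Get-ChildItem "C:\ProgramData\mochostagent\logs" | Sort-Object LastWriteTime -Descending | Select-Object -First 5 Name, LastWriteTime

# Restart the service.
Restart-Service mochostagent
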
Upgrade When upgrading the stamp from 2311 or prior builds to 2408 or later, add node and repair node operations may fail. For example, you could see an error: Type 'AddAsZHostToDomain' of Role 'BareMetal' raised an exception. There's no workaround in this release. If you encounter this issue, contact Microsoft Support to determine next steps.
Update When viewing the readiness check results for an Azure Local instance via the Azure Update Manager, there might be multiple readiness checks with the same name. There's no known workaround in this release. Select View details to view specific information about the readiness check.
Update There's an intermittent issue in this release where the Azure portal incorrectly reports the update status as Failed to update or In progress even though the update is complete. Connect to your Azure Local instance via a remote PowerShell session. To confirm the update status, run the following PowerShell cmdlets:

$Update = Get-SolutionUpdate | Where-Object Version -eq "<version string>"

Replace the version string with the version you're running. For example, "10.2405.0.23".

$Update.state

If the update status is Installed, no further action is required on your part. The Azure portal refreshes the status correctly within 24 hours.
To refresh the status sooner, restart the Cloud Management cluster group on one of the nodes:
Stop-ClusterGroup "Cloud Management"
Start-ClusterGroup "Cloud Management"
Update During an initial MOC update, a failure occurs because the target MOC version isn't found in the catalog cache. Follow-up updates and retries report MOC at the target version even though the update didn't succeed, and as a result the Arc Resource Bridge update fails.

To validate this issue, collect the update logs using Troubleshoot solution updates for Azure Local, version 23H2. The log files should show a similar error message (current version might differ in the error message):

[ERROR: { "errorCode": "InvalidEntityError", "errorResponse": "{\n\"message\": \"the cloud fabric (MOC) is currently at version v0.13.1. A minimum version of 0.15.0 is required for compatibility\"\n}" }]
Follow these steps to mitigate the issue:

1. To find the MOC agent version, run the following command: & 'C:\Program Files\AksHci\wssdcloudagent.exe' version.

2. Use the output of the command to find the MOC version from the table below that matches the agent version, and set $initialMocVersion to that MOC version. Set the $targetMocVersion by finding the Azure Local build you're updating to and get the matching MOC version from the following table. Use these values in the mitigation script provided below:

| Build | MOC version | Agent version |
|---|---|---|
| 2311.2 | 1.0.24.10106 | v0.13.0-6-gf13a73f7, v0.11.0-alpha.38, 01/06/2024 |
| 2402 | 1.0.25.10203 | v0.14.0, v0.13.1, 02/02/2024 |
| 2402.1 | 1.0.25.10302 | v0.14.0, v0.13.1, 03/02/2024 |
| 2402.2 | 1.1.1.10314 | v0.16.0-1-g04bf0dec, v0.15.1, 03/14/2024 |
| 2405/2402.3 | 1.3.0.10418 | v0.17.1, v0.16.5, 04/18/2024 |


For example, if the agent version is v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024, then $initialMocVersion = "1.0.24.10106" and if you're updating to 2405.0.23, then $targetMocVersion = "1.3.0.10418".

3. Run the following PowerShell commands on the first node:

$initialMocVersion = "<initial version determined from step 2>"
$targetMocVersion = "<target version determined from step 2>"

# Import MOC module twice
import-module moc
import-module moc
$verbosePreference = "Continue"

# Clear the SFS catalog cache
Remove-Item (Get-MocConfig).manifestCache

# Set version to the current MOC version prior to update, and set state as update failed
Set-MocConfigValue -name "version" -value $initialMocVersion
Set-MocConfigValue -name "installState" -value ([InstallState]::UpdateFailed)

# Rerun the MOC update to desired version
Update-Moc -version $targetMocVersion

4. Resume the update.
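
To resume the update (step 4), the same cmdlets used elsewhere in these release notes apply; this is a minimal sketch.

# Resume the failed solution update run.
Get-SolutionUpdate | Start-SolutionUpdate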
Deployment In some instances, during the registration of Azure Local machines, this error might be seen in the debug logs: Encountered internal server error. One of the mandatory extensions for device deployment might not be installed. To mitigate the issue, install the mandatory extensions by running the following commands (see the sketch after the commands for the variables they assume):

$Settings = @{ "CloudName" = $Cloud; "RegionName" = $Region; "DeviceType" = "AzureEdge" }

New-AzConnectedMachineExtension -Name "AzureEdgeTelemetryAndDiagnostics" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -Settings $Settings -ExtensionType "TelemetryAndDiagnostics" -EnableAutomaticUpgrade

New-AzConnectedMachineExtension -Name "AzureEdgeDeviceManagement" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.Edge" -ExtensionType "DeviceManagementExtension"

New-AzConnectedMachineExtension -Name "AzureEdgeLifecycleManager" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Orchestration" -ExtensionType "LcmController"

New-AzConnectedMachineExtension -Name "AzureEdgeRemoteSupport" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -ExtensionType "EdgeRemoteSupport" -EnableAutomaticUpgrade
AKS on Azure Local AKS cluster creation fails with the error: Invalid AKS network resource id. This issue can occur when the associated logical network name has an underscore. Underscores aren't supported in logical network names. Make sure not to use underscores in the names of logical networks deployed on your Azure Local instance.
Repair server In rare instances, the Repair-Server operation fails with the HealthServiceWaitForDriveFW error. In these cases, the old drives from the repaired node aren't removed and new disks are stuck in the maintenance mode. To prevent this issue, make sure that you DO NOT drain the node either via the Windows Admin Center or using the Suspend-ClusterNode -Drain PowerShell cmdlet before you start Repair-Server.
If the issue occurs, contact Microsoft Support for next steps.
Repair server This issue is seen when a single-node Azure Local instance is updated from 2311 to 2402 and then Repair-Server is performed. The repair operation fails. Before you repair the single node, follow these steps:
1. Run version 2402 of the ADPrepTool. Follow the steps in Prepare Active Directory. This action is quick and adds the required permissions to the Organizational Unit (OU).
2. Move the computer object from the Computers container to the root OU. Run the following command:
Get-ADComputer <HOSTNAME> | Move-ADObject -TargetPath "<OU path>"
Deployment If you prepare Active Directory on your own (not using the script and procedure provided by Microsoft), your Active Directory validation could fail with a missing Generic All permission. This is due to an issue in the validation check, which checks for a dedicated permission entry for msFVE-RecoverInformation objects – General – Permissions Full control, which is required for BitLocker recovery. Use the Prepare AD script method, or if you use your own method, make sure to assign the specific permission msFVE-RecoverInformation objects – General – Permissions Full control.
Deployment There's a rare issue in this release where the DNS record is deleted during the Azure Local deployment. When that occurs, the following exception is seen:
Type 'PropagatePublicRootCertificate' of Role 'ASCA' raised an exception: The operation on computer 'ASB88RQ22U09' failed: WinRM cannot process the request. The following error occurred while using Kerberos authentication: Cannot find the computer ASB88RQ22U09.local. Verify that the computer exists on the network and that the name provided is spelled correctly at PropagatePublicRootCertificate, C:\NugetStore\Microsoft.AzureStack, at Orchestration.Roles.CertificateAuthority.10.2402.0.14\content\Classes\ASCA\ASCA.psm1: line 38, at C:\CloudDeployment\ECEngine\InvokeInterfaceInternal.psm1: line 127,at Invoke-EceInterfaceInternal, C:\CloudDeployment\ECEngine\InvokeInterfaceInternal.psm1: line 123.
Check the DNS server to see if any DNS records of the nodes are missing. Apply the following mitigation on any node whose DNS record is missing.

Restart the DNS Client service. Open an elevated PowerShell session and run the following command on the affected node:
Taskkill /f /fi "SERVICES eq dnscache"
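After the DNS Client service restarts, you can optionally force the node to re-register its DNS records. Register-DnsClient is a standard DnsClient cmdlet; using it here is an assumption rather than a documented part of this workaround.

# Optionally re-register the node's DNS records after the service restarts.
Register-DnsClient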
Deployment In this release, there's a remote task failure on a multi-node deployment that results in the following exception:
ECE RemoteTask orchestration failure with ASRR1N42R01U31 (node pingable - True): A WebException occurred while sending a RestRequest. WebException.Status: ConnectFailure on https://<URL>.
The mitigation is to restart the ECE agent on the affected node. On your machine, open a PowerShell session and run the following command:
Restart-Service ECEAgent.
Add server In this release and previous releases, when adding a machine to the system, it isn't possible to update the proxy bypass list string to include the new machine. Updating the proxy bypass list environment variables on the hosts won't update the proxy bypass list on Arc Resource Bridge or AKS. There's no workaround in this release. If you encounter this issue, contact Microsoft Support to determine next steps.
Add/Repair server In this release, when adding or repairing a machine, a failure is seen when the software load balancer or network controller VM certificates are being copied from the existing nodes. The failure is because these certificates weren't generated during the deployment/update. There's no workaround in this release. If you encounter this issue, contact Microsoft Support to determine next steps.
Deployment In this release, there's a transient issue resulting in the deployment failure with the following exception:
Type 'SyncDiagnosticLevel' of Role 'ObservabilityConfig' raised an exception: Syncing Diagnostic Level failed with error: The Diagnostic Level does not match. Portal was not set to Enhanced, instead is Basic.
As this is a transient issue, retrying the deployment should fix this. For more information, see how to Rerun the deployment.
Deployment In this release, there's an issue with the Secrets URI/location field. This required field is marked as Not mandatory, which results in Azure Resource Manager template deployment failures. Use the sample parameters file in Deploy Azure Local, version 23H2 via Azure Resource Manager template to ensure that all the inputs are provided in the required format, and then retry the deployment.
If there's a failed deployment, you must also clean up the following resources before you Rerun the deployment (a PowerShell sketch follows the list):
1. Delete C:\EceStore.
2. Delete C:\CloudDeployment.
3. Delete C:\nugetstore.
4. Remove-Item HKLM:\Software\Microsoft\LCMAzureStackStampInformation.
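A minimal sketch of the cleanup steps above as PowerShell, run from an elevated session on the machine where the deployment was attempted; the -ErrorAction setting simply skips items that are already gone.

# Remove deployment artifacts left over from the failed deployment.
Remove-Item -Path "C:\EceStore" -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item -Path "C:\CloudDeployment" -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item -Path "C:\nugetstore" -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item -Path "HKLM:\Software\Microsoft\LCMAzureStackStampInformation" -Recurse -Force -ErrorAction SilentlyContinue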
Security For new deployments, Secured-core capable devices won't have Dynamic Root of Trust for Measurement (DRTM) enabled by default. If you try to enable DRTM using the Enable-AzSSecurity cmdlet, you see an error that the DRTM setting isn't supported in the current release.
Microsoft recommends defense in depth, and UEFI Secure Boot still protects the components in the Static Root of Trust (SRT) boot chain by ensuring that they're loaded only when they're signed and verified.
DRTM isn't supported in this release.
Networking An environment check fails when a proxy server is used. By design, the bypass list is different for winhttp and wininet, which causes the validation check to fail. Follow these workaround steps (a sketch using netsh follows the steps):

1. Clear the proxy bypass list prior to the health check and before starting the deployment or the update.

2. After passing the check, wait for the deployment or update to fail.

3. Set your proxy bypass list again.
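
A minimal sketch of steps 1 and 3 for the WinHTTP side using netsh; the proxy address and bypass list shown are illustrative, and how your environment manages the WinINET settings may differ.

# View the current WinHTTP proxy configuration, including its bypass list.
netsh winhttp show proxy

# Step 1: set the proxy without a bypass list before the health check (values are illustrative).
netsh winhttp set proxy proxy-server="192.168.1.250:8080"

# Step 3: restore the proxy with your bypass list after the deployment or update completes.
netsh winhttp set proxy proxy-server="192.168.1.250:8080" bypass-list="localhost;127.0.0.1;*.contoso.com"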
Arc VM management Deployment or update of Arc Resource Bridge could fail when the temporary SPN secret that's automatically generated during this operation starts with a hyphen. Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.
Arc VM management Arc Extensions on Arc VMs stay in "Creating" state indefinitely. Sign in to the VM, open a command prompt, and type the following:
Windows:
notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json
Linux:
sudo vi /var/opt/azcmagent/agentconfig.json
Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.
Arc VM management When a new machine is added to an Azure Local instance, a storage path isn't created automatically for the newly created volume. You can manually create a storage path for any new volumes, as shown in the sketch that follows. For more information, see Create a storage path.
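A minimal sketch of creating a storage path with the Azure CLI; the resource group, custom location ID, name, and path are illustrative, and the exact parameter set may differ in your environment.

# Illustrative values; replace them with the values for your environment.
$rg = "myAzureLocalRG"
$customLocationId = "/subscriptions/<subscription ID>/resourceGroups/myAzureLocalRG/providers/Microsoft.ExtendedLocation/customLocations/myCustomLocation"

# Create a storage path on the new volume.
az stack-hci-vm storagepath create --resource-group $rg --custom-location $customLocationId --name "storagepath-new" --path "C:\ClusterStorage\UserStorage_2\sp-new"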
Arc VM management The restart operation for an Arc VM takes approximately 20 minutes to complete, although the VM itself restarts in about a minute. There's no known workaround in this release.
Arc VM management In some instances, the status of the logical network shows as Failed in Azure portal. This occurs when you try to delete the logical network without first deleting any resources such as network interfaces associated with that logical network.
You should still be able to create resources on this logical network. The status is misleading in this instance.
If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.
Arc VM management In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message:
Couldn't find a virtual hard disk with the name.
Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.
Update In rare instances, you may encounter this error while updating your Azure Local instance: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml]. If you see this issue, contact Microsoft Support to assist you with the next steps.
Networking There's an infrequent DNS client issue in this release that causes the deployment to fail on a two-node system with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created resulting in a DNS error. Restart the machine. This operation registers the DNS record, which prevents it from getting deleted.
Azure portal In some instances, the Azure portal might take a while to update and the view might not be current. You might need to wait for 30 minutes or more to see the updated view.
Arc VM management Deleting a network interface on an Arc VM from Azure portal doesn't work in this release. Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and see Delete the network interface.
Deployment Providing the OU name in an incorrect syntax isn't detected in the Azure portal. Incorrect syntax includes unsupported characters such as &, ", ', <, and >. The incorrect syntax is detected at a later step during system validation. Make sure that the OU path syntax is correct and doesn't include unsupported characters.
Deployment Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group even though the system is successfully created. To monitor the deployment in the Azure portal, go to the Azure Local instance resource and then go to the new Deployments entry.
Azure Site Recovery Azure Site Recovery can't be installed on an Azure Local instance in this release. There's no known workaround in this release.
Update When updating the Azure Local instance via the Azure Update Manager, the update progress and results may not be visible in the Azure portal. To work around this issue, on each node, add the following registry key (no value needed):

New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -force

Then on one of the nodes, restart the Cloud Management cluster group.

Stop-ClusterGroup "Cloud Management"

Start-ClusterGroup "Cloud Management"

This won't fully remediate the issue, as the progress details may still not be displayed for parts of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell.
Update In rare instances, if a failed update is stuck in an In progress state in Azure Update Manager, the Try again button is disabled. To resume the update, run the following PowerShell command:
Get-SolutionUpdate | Start-SolutionUpdate
Update In some cases, SolutionUpdate commands could fail if run after the Send-DiagnosticData command. Make sure to close the PowerShell session used for Send-DiagnosticData. Open a new PowerShell session and use it for SolutionUpdate commands.
Update In rare instances, when applying an update from 2311.0.24 to 2311.2.4, the system status reports In Progress instead of the expected Failed to update. Retry the update. If the issue persists, contact Microsoft Support.
Update Attempts to install solution updates can fail at the end of the CAU steps with:
There was a failure in a Common Information Model (CIM) operation, that is, an operation performed by software that Cluster-Aware Updating depends on.
This rare issue occurs if the Cluster Name or Cluster IP Address resources fail to start after a node reboot and is most typical in small deployments.
If you encounter this issue, contact Microsoft Support for next steps. They can work with you to manually restart the Azure Local resources and resume the update as needed.
Update When applying a system update to 10.2402.3.11, the Get-SolutionUpdate cmdlet may not respond and eventually fails with a RequestTimeoutException after approximately 10 minutes. This is likely to occur following an add or repair server scenario. Use the Start-ClusterGroup and Stop-ClusterGroup cmdlets to restart the update service.

Get-ClusterGroup -Name "Azure Stack HCI Update Service Cluster Group" | Stop-ClusterGroup

Get-ClusterGroup -Name "Azure Stack HCI Update Service Cluster Group" | Start-ClusterGroup

A successful run of these cmdlets should bring the update service online.
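You can confirm that the service came back online by checking the cluster group state; the group name is the one used in the commands above.

# Verify that the update service cluster group is back online.
Get-ClusterGroup -Name "Azure Stack HCI Update Service Cluster Group" | Select-Object Name, State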
Cluster aware updating The resume node operation fails to resume the node. This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Cluster aware updating The suspend node operation gets stuck for more than 90 minutes. This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.

Next steps