Deploying HP BIOS Updates – a real world example

Not so long ago HP published a customer advisory listing a number of their models that need to be on the latest BIOS release before they can be upgraded to Windows 10 2004. Since we were getting ready to roll out 20H2, we encountered some affected models in piloting, which prompted me to dig out that advisory and get the BIOS updated on the affected devices.

To be honest, until now we had never pushed out BIOS updates to anyone, but to get these devices onto 20H2 we had no choice. In this post I'm going to share how we did it. For us, good user experience is critical, but finding the balance between keeping devices secure and up to date without being too disruptive to the user can be a challenge!

The first task was to create a script that could update the BIOS on any supported HP workstation, without needing to package and distribute BIOS update content. I know other equally handsome community members have shared some great scripts and solutions for doing BIOS updates, but I decided to create my own in this instance to meet our particular requirements and afford a bit more control over the update process. I considered using the HP Client Management Script Library, which seems purpose-built for this kind of task and is a great resource, but I preferred not to have the dependency of an external PowerShell module and its requirements.

I published a version of this script on GitHub here. The script does the following:

  • Creates a working directory under ProgramData
  • Disables the IE first run wizard which causes a problem for the Invoke-WebRequest cmdlet when running in a context where IE hasn’t been initialised
  • Checks the HP Image Assistant (HPIA) web page for the latest version and downloads it
  • Extracts and runs the HPIA to identify the latest applicable BIOS update (if any)
  • If a BIOS update is available, downloads and extracts the softpaq
  • Checks whether a BIOS password has been set and, if so, creates an encrypted password file as required by the firmware update utility
  • Runs the firmware update utility to stage the update
  • Everything is logged to the working directory, and the logs are uploaded to Azure blob storage upon completion (or upon failure) so we can review them without needing remote access to the user's computer

It all runs silently without any user interaction and it can be run on any HP model that the HPIA supports.

In Production, however, we used a slightly modified version of this script. Since there was the possibility of unknown BIOS passwords in use out there, we decided not to try to flash the BIOS using an encrypted password file, but instead to remove the BIOS password altogether (temporarily!). When the BIOS update is staged, the utility simply copies the password file to the staging volume – it doesn't check whether the password is correct. If it isn't, the user would be asked for the correct password when the BIOS is flashed, and that is not cool! Removing the password meant the user could never be unexpectedly prompted for a password because the provided password file was incorrect. Of course, to remove the password you also have to know it, so we tried the ones we knew: if they worked, great; if they didn't, the script would simply exit as a failsafe.
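For illustration, the password-removal approach can be sketched roughly like this using HP's WMI interface. This is a sketch only: the password list is hypothetical, and it assumes HP's InstrumentedBIOS WMI provider is present on the device (the "<utf-16/>" prefix is HP's convention for password values).

```powershell
# Hypothetical list of known setup passwords to try
$KnownPasswords = @("OldPassword1","OldPassword2")
$Interface = Get-CimInstance -Namespace ROOT\HP\InstrumentedBIOS -ClassName HP_BIOSSettingInterface
foreach ($Password in $KnownPasswords)
{
    # Setting the value to an empty string removes the setup password
    $Result = $Interface | Invoke-CimMethod -MethodName SetBIOSSetting -Arguments @{
        Name     = "Setup Password"
        Value    = "<utf-16/>"
        Password = "<utf-16/>$Password"
    }
    If ($Result.Return -eq 0) { break } # 0 = success
}
If ($Result.Return -ne 0)
{
    # Failsafe: none of the known passwords worked, so exit without flashing
    Exit 1
}
```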

We have a compliance baseline deployed with MEMCM that sets the BIOS password on any managed workstation that does not have one set, so once the machine had rebooted, the BIOS had flashed and the machine had started up again, the baseline would soon run and set the password again.

Doing this also meant we needed to ensure the computer was restarted as soon as possible after the update was staged – and for another reason as well: BitLocker encryption is suspended until the update is applied and the machine restarted.

Because we didn’t want to force the update on users and force a restart on them, we decided to package the script as an application in MEMCM. This meant a couple of things:

  • We could put a nice corporate logo on the app and make it available for install in the Software Center
  • We could handle the return codes with custom actions. In this case, we are expecting the PowerShell script to exit successfully with code 0, and when it does we’ve set that code to be a soft reboot so that MEMCM restart notifications are then displayed to the user.

As a detection method, the following PowerShell code was used. It simply checks whether the last write time of the log file is within the last hour: if it is, the application is considered installed; longer than an hour and it becomes available for install again if needed.

$Log = Get-ChildItem C:\ProgramData\Contoso\HP_BIOS_Update -Recurse -Include HP_BIOS_Update.log -ErrorAction SilentlyContinue |
    Where-Object {([DateTime]::Now - $_.LastWriteTime).TotalHours -le 1}
If ($Log)
{
    Write-Host "Installed"
}

We then deployed the application with an available deployment. We communicated with the users directly to inform them a BIOS update needed to be installed on their device in order to remain secure and up-to-date, and directed them to the Software Center to install it themselves.

We also prepared collections in SCCM using query-based membership rules to identify the machines that were affected by the HP advisory, and an SQL query to find the same information and pull the full user name and email address from inventoried data.

The script does contain the BIOS password in clear text which, of course, may not meet your security requirements, although for us this password is not really that critical – it's just there to help prevent the user from making unauthorized changes in the BIOS. In our Production script, though, we simply converted the passwords to base64 before adding them to the script, to at least provide some masking. For greater security you could consider storing the password in Azure Key Vault and fetching it at run time with a web request, for example.
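The base64 conversion is just a couple of lines – bear in mind this is masking, not encryption; anyone with a copy of the script can decode it:

```powershell
# Encode once, offline, and paste the resulting string into the script
$Encoded = [Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes("MyPassword"))
# At run time, decode it back before use
$BIOSPassword = [System.Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($Encoded))
```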

If you wish to use the script in your own environment, you'll need to change the location of the working directory as desired. Additionally, if you wish to upload the log files to an Azure storage container, you'll need to have or create a container and add the URL and the SAS token string to the script, or else just comment out the Upload-LogFilesToAzure function where it's used. I'm a big fan of sending log files to Azure storage, especially in this season when many are working from home and may not be corporate connected. You can use Azure Storage Explorer to download the log files, which will open in CMTrace if that's your default log viewer.

Hope this is helpful to someone! The PS script is below.

#####################
## HP BIOS UPDATER ##
#####################
# Params
$HPIAWebUrl = "https://ftp.hp.com/pub/caps-softpaq/cmit/HPIA.html" # Static web page of the HP Image Assistant
$BIOSPassword = "MyPassword"
$script:ContainerURL = "https://mystorageaccount.blob.core.windows.net/mycontainer" # URL of your Azure blob storage container
$script:FolderPath = "HP_BIOS_Updates" # the subfolder to put logs into in the storage container
$script:SASToken = "mysastoken" # the SAS token string for the container (with write permission)
$ProgressPreference = 'SilentlyContinue' # to speed up web requests
################################
## Create Directory Structure ##
################################
$RootFolder = $env:ProgramData
$ParentFolderName = "Contoso"
$ChildFolderName = "HP_BIOS_Update"
$ChildFolderName2 = Get-Date -Format "yyyy-MMM-dd_HH.mm.ss"
$script:WorkingDirectory = "$RootFolder\$ParentFolderName\$ChildFolderName\$ChildFolderName2"
try
{
    [void][System.IO.Directory]::CreateDirectory($WorkingDirectory)
}
catch
{
    throw
}
# Function to write to a log file in CMTrace format
Function script:Write-Log {
    param (
        [Parameter(Mandatory = $true)]
        [string]$Message,
        [Parameter()]
        [ValidateSet(1, 2, 3)] # 1-Info, 2-Warning, 3-Error
        [int]$LogLevel = 1,
        [Parameter(Mandatory = $true)]
        [string]$Component,
        [Parameter(Mandatory = $false)]
        [object]$Exception
    )
    $LogFile = "$WorkingDirectory\HP_BIOS_Update.log"
    If ($Exception)
    {
        [String]$Message = "$Message" + "$Exception"
    }
    $TimeGenerated = "$(Get-Date -Format HH:mm:ss).$((Get-Date).Millisecond)+000"
    $Line = '<![LOG[{0}]LOG]!><time="{1}" date="{2}" component="{3}" context="" type="{4}" thread="" file="">'
    $LineFormat = $Message, $TimeGenerated, (Get-Date -Format MMddyyyy), $Component, $LogLevel
    $Line = $Line -f $LineFormat
    # Write to log
    Add-Content -Value $Line -Path $LogFile -ErrorAction SilentlyContinue
}
# Function to upload log file to Azure Blob storage
Function Upload-LogFilesToAzure {
    $Date = Get-Date -Format "yyyy-MM-dd_HH.mm.ss"
    $HpFirmwareUpdRecLog = Get-ChildItem -Path $WorkingDirectory -Include HpFirmwareUpdRec.log -Recurse -ErrorAction SilentlyContinue
    $HPBIOSUPDRECLog = Get-ChildItem -Path $WorkingDirectory -Include HPBIOSUPDREC64.log -Recurse -ErrorAction SilentlyContinue
    If ($HpFirmwareUpdRecLog)
    {
        $File = $HpFirmwareUpdRecLog
    }
    ElseIf ($HPBIOSUPDRECLog)
    {
        $File = $HPBIOSUPDRECLog
    }
    If ($File)
    {
        $Body = Get-Content $($File.FullName) -Raw -ErrorAction SilentlyContinue
        If ($Body)
        {
            $URI = "$ContainerURL/$FolderPath/$($Env:COMPUTERNAME)`_$Date`_$($File.Name)$SASToken"
            $Headers = @{
                'x-ms-content-length' = $($File.Length)
                'x-ms-blob-type' = 'BlockBlob'
            }
            Invoke-WebRequest -Uri $URI -Method PUT -Headers $Headers -Body $Body -ErrorAction SilentlyContinue
        }
    }
    $File2 = Get-Item $WorkingDirectory\HP_BIOS_Update.log -ErrorAction SilentlyContinue
    $Body2 = Get-Content $($File2.FullName) -Raw -ErrorAction SilentlyContinue
    If ($Body2)
    {
        $URI2 = "$ContainerURL/$FolderPath/$($Env:COMPUTERNAME)`_$Date`_$($File2.Name)$SASToken"
        $Headers2 = @{
            'x-ms-content-length' = $($File2.Length)
            'x-ms-blob-type' = 'BlockBlob'
        }
        Invoke-WebRequest -Uri $URI2 -Method PUT -Headers $Headers2 -Body $Body2 -ErrorAction SilentlyContinue
    }
}
Write-Log -Message "#######################" -Component "Preparation"
Write-Log -Message "## Starting BIOS update run ##" -Component "Preparation"
Write-Log -Message "#######################" -Component "Preparation"
#################################
## Disable IE First Run Wizard ##
#################################
# This prevents an error running Invoke-WebRequest when IE has not yet been run in the current context
Write-Log -Message "Disabling IE first run wizard" -Component "Preparation"
$null = New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft" -Name "Internet Explorer" -Force
$null = New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Internet Explorer" -Name "Main" -Force
$null = New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Internet Explorer\Main" -Name "DisableFirstRunCustomize" -PropertyType DWORD -Value 1 -Force
##########################
## Get latest HPIA Info ##
##########################
Write-Log -Message "Finding info for latest version of HP Image Assistant (HPIA)" -Component "DownloadHPIA"
try
{
    $HTML = Invoke-WebRequest -Uri $HPIAWebUrl -ErrorAction Stop
}
catch
{
    Write-Log -Message "Failed to download the HPIA web page. $($_.Exception.Message)" -Component "DownloadHPIA" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
$HPIASoftPaqNumber = ($HTML.Links | Where {$_.href -match "hp-hpia-"}).outerText
$HPIADownloadURL = ($HTML.Links | Where {$_.href -match "hp-hpia-"}).href
$HPIAFileName = $HPIADownloadURL.Split('/')[-1]
Write-Log -Message "SoftPaq number is $HPIASoftPaqNumber" -Component "DownloadHPIA"
Write-Log -Message "Download URL is $HPIADownloadURL" -Component "DownloadHPIA"
###################
## Download HPIA ##
###################
Write-Log -Message "Downloading the HPIA" -Component "DownloadHPIA"
try
{
    $ExistingBitsJob = Get-BitsTransfer -Name "$HPIAFileName" -AllUsers -ErrorAction SilentlyContinue
    If ($ExistingBitsJob)
    {
        Write-Log -Message "An existing BITS transfer was found. Cleaning it up." -Component "DownloadHPIA" -LogLevel 2
        Remove-BitsTransfer -BitsJob $ExistingBitsJob
    }
    $BitsJob = Start-BitsTransfer -Source $HPIADownloadURL -Destination $WorkingDirectory\$HPIAFileName -Asynchronous -DisplayName "$HPIAFileName" -Description "HPIA download" -RetryInterval 60 -ErrorAction Stop
    do {
        Start-Sleep -Seconds 5
        $Progress = [Math]::Round((100 * ($BitsJob.BytesTransferred / $BitsJob.BytesTotal)),2)
        Write-Log -Message "Downloaded $Progress`%" -Component "DownloadHPIA"
    } until ($BitsJob.JobState -in ("Transferred","Error"))
    If ($BitsJob.JobState -eq "Error")
    {
        Write-Log -Message "BITS transfer failed: $($BitsJob.ErrorDescription)" -Component "DownloadHPIA" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
    Write-Log -Message "Download is finished" -Component "DownloadHPIA"
    Complete-BitsTransfer -BitsJob $BitsJob
    Write-Log -Message "BITS transfer is complete" -Component "DownloadHPIA"
}
catch
{
    Write-Log -Message "Failed to start a BITS transfer for the HPIA: $($_.Exception.Message)" -Component "DownloadHPIA" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
##################
## Extract HPIA ##
##################
Write-Log -Message "Extracting the HPIA" -Component "Analyze"
try
{
    $Process = Start-Process -FilePath $WorkingDirectory\$HPIAFileName -WorkingDirectory $WorkingDirectory -ArgumentList "/s /f .\HPIA\ /e" -NoNewWindow -PassThru -Wait -ErrorAction Stop
    Start-Sleep -Seconds 5
    If (Test-Path $WorkingDirectory\HPIA\HPImageAssistant.exe)
    {
        Write-Log -Message "Extraction complete" -Component "Analyze"
    }
    Else
    {
        Write-Log -Message "HPImageAssistant not found!" -Component "Analyze" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
}
catch
{
    Write-Log -Message "Failed to extract the HPIA: $($_.Exception.Message)" -Component "Analyze" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
##############################################
## Analyze available BIOS updates with HPIA ##
##############################################
Write-Log -Message "Analyzing system for available BIOS updates" -Component "Analyze"
try
{
    $Process = Start-Process -FilePath $WorkingDirectory\HPIA\HPImageAssistant.exe -WorkingDirectory $WorkingDirectory -ArgumentList "/Operation:Analyze /Category:BIOS /Selection:All /Action:List /Silent /ReportFolder:$WorkingDirectory\Report" -NoNewWindow -PassThru -Wait -ErrorAction Stop
    If ($Process.ExitCode -eq 0)
    {
        Write-Log -Message "Analysis complete" -Component "Analyze"
    }
    elseif ($Process.ExitCode -eq 256)
    {
        Write-Log -Message "The analysis returned no recommendation. No BIOS update is available at this time" -Component "Analyze" -LogLevel 2
        Upload-LogFilesToAzure
        Exit 0
    }
    elseif ($Process.ExitCode -eq 4096)
    {
        Write-Log -Message "This platform is not supported!" -Component "Analyze" -LogLevel 2
        Upload-LogFilesToAzure
        throw
    }
    Else
    {
        Write-Log -Message "Process exited with code $($Process.ExitCode). Expecting 0." -Component "Analyze" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
}
catch
{
    Write-Log -Message "Failed to start the HPImageAssistant.exe: $($_.Exception.Message)" -Component "Analyze" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
# Read the XML report
Write-Log -Message "Reading xml report" -Component "Analyze"
try
{
    $XMLFile = Get-ChildItem -Path "$WorkingDirectory\Report" -Recurse -Include *.xml -ErrorAction Stop
    If ($XMLFile)
    {
        Write-Log -Message "Report located at $($XMLFile.FullName)" -Component "Analyze"
        try
        {
            [xml]$XML = Get-Content -Path $XMLFile.FullName -ErrorAction Stop
            $Recommendation = $xml.HPIA.Recommendations.BIOS.Recommendation
            If ($Recommendation)
            {
                $CurrentBIOSVersion = $Recommendation.TargetVersion
                $ReferenceBIOSVersion = $Recommendation.ReferenceVersion
                $DownloadURL = "https://" + $Recommendation.Solution.Softpaq.Url
                $SoftpaqFileName = $DownloadURL.Split('/')[-1]
                Write-Log -Message "Current BIOS version is $CurrentBIOSVersion" -Component "Analyze"
                Write-Log -Message "Recommended BIOS version is $ReferenceBIOSVersion" -Component "Analyze"
                Write-Log -Message "Softpaq download URL is $DownloadURL" -Component "Analyze"
            }
            Else
            {
                Write-Log -Message "Failed to find a BIOS recommendation in the XML report" -Component "Analyze" -LogLevel 3
                Upload-LogFilesToAzure
                throw
            }
        }
        catch
        {
            Write-Log -Message "Failed to parse the XML file: $($_.Exception.Message)" -Component "Analyze" -LogLevel 3
            Upload-LogFilesToAzure
            throw
        }
    }
    Else
    {
        Write-Log -Message "Failed to find an XML report." -Component "Analyze" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
}
catch
{
    Write-Log -Message "Failed to find an XML report: $($_.Exception.Message)" -Component "Analyze" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
###############################
## Download the BIOS softpaq ##
###############################
Write-Log -Message "Downloading the Softpaq" -Component "DownloadBIOSUpdate"
try
{
    $ExistingBitsJob = Get-BitsTransfer -Name "$SoftpaqFileName" -AllUsers -ErrorAction SilentlyContinue
    If ($ExistingBitsJob)
    {
        Write-Log -Message "An existing BITS transfer was found. Cleaning it up." -Component "DownloadBIOSUpdate" -LogLevel 2
        Remove-BitsTransfer -BitsJob $ExistingBitsJob
    }
    $BitsJob = Start-BitsTransfer -Source $DownloadURL -Destination $WorkingDirectory\$SoftpaqFileName -Asynchronous -DisplayName "$SoftpaqFileName" -Description "BIOS update download" -RetryInterval 60 -ErrorAction Stop
    do {
        Start-Sleep -Seconds 5
        $Progress = [Math]::Round((100 * ($BitsJob.BytesTransferred / $BitsJob.BytesTotal)),2)
        Write-Log -Message "Downloaded $Progress`%" -Component "DownloadBIOSUpdate"
    } until ($BitsJob.JobState -in ("Transferred","Error"))
    If ($BitsJob.JobState -eq "Error")
    {
        Write-Log -Message "BITS transfer failed: $($BitsJob.ErrorDescription)" -Component "DownloadBIOSUpdate" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
    Write-Log -Message "Download is finished" -Component "DownloadBIOSUpdate"
    Complete-BitsTransfer -BitsJob $BitsJob
    Write-Log -Message "BITS transfer is complete" -Component "DownloadBIOSUpdate"
}
catch
{
    Write-Log -Message "Failed to start a BITS transfer for the BIOS update: $($_.Exception.Message)" -Component "DownloadBIOSUpdate" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
#########################
## Extract BIOS Update ##
#########################
Write-Log -Message "Extracting the BIOS Update" -Component "ExtractBIOSUpdate"
$BIOSUpdateDirectoryName = $SoftpaqFileName.Split('.')[0]
try
{
    $Process = Start-Process -FilePath $WorkingDirectory\$SoftpaqFileName -WorkingDirectory $WorkingDirectory -ArgumentList "/s /f .\$BIOSUpdateDirectoryName\ /e" -NoNewWindow -PassThru -Wait -ErrorAction Stop
    Start-Sleep -Seconds 5
    $HpFirmwareUpdRec = Get-ChildItem -Path $WorkingDirectory -Include HpFirmwareUpdRec.exe -Recurse -ErrorAction SilentlyContinue
    $HPBIOSUPDREC = Get-ChildItem -Path $WorkingDirectory -Include HPBIOSUPDREC.exe -Recurse -ErrorAction SilentlyContinue
    If ($HpFirmwareUpdRec)
    {
        $BIOSExecutable = $HpFirmwareUpdRec
    }
    ElseIf ($HPBIOSUPDREC)
    {
        $BIOSExecutable = $HPBIOSUPDREC
    }
    Else
    {
        Write-Log -Message "BIOS update executable not found!" -Component "ExtractBIOSUpdate" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
    Write-Log -Message "Extraction complete" -Component "ExtractBIOSUpdate"
}
catch
{
    Write-Log -Message "Failed to extract the softpaq: $($_.Exception.Message)" -Component "ExtractBIOSUpdate" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
#############################
## Check for BIOS password ##
#############################
try
{
    $SetupPwd = (Get-CimInstance -Namespace ROOT\HP\InstrumentedBIOS -ClassName HP_BIOSPassword -Filter "Name='Setup Password'" -ErrorAction Stop).IsSet
    If ($SetupPwd -eq 1)
    {
        Write-Log -Message "The BIOS has a password set" -Component "BIOSPassword"
        $BIOSPasswordSet = $true
    }
    Else
    {
        Write-Log -Message "No password has been set on the BIOS" -Component "BIOSPassword"
    }
}
catch
{
    Write-Log -Message "Unable to determine if a BIOS password has been set: $($_.Exception.Message)" -Component "BIOSPassword" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
##########################
## Create password file ##
##########################
If ($BIOSPasswordSet)
{
    Write-Log -Message "Creating an encrypted password file" -Component "BIOSPassword"
    $HpqPswd = Get-ChildItem -Path $WorkingDirectory -Include HpqPswd.exe -Recurse -ErrorAction SilentlyContinue
    If ($HpqPswd)
    {
        try
        {
            $Process = Start-Process -FilePath $HpqPswd.FullName -WorkingDirectory $WorkingDirectory -ArgumentList "-p""$BIOSPassword"" -f.\password.bin -s" -NoNewWindow -PassThru -Wait -ErrorAction Stop
            Start-Sleep -Seconds 5
            If (Test-Path $WorkingDirectory\password.bin)
            {
                Write-Log -Message "File successfully created" -Component "BIOSPassword"
            }
            Else
            {
                Write-Log -Message "Encrypted password file could not be found!" -Component "BIOSPassword" -LogLevel 3
                Upload-LogFilesToAzure
                throw
            }
        }
        catch
        {
            Write-Log -Message "Failed to create an encrypted password file: $($_.Exception.Message)" -Component "BIOSPassword" -LogLevel 3
            Upload-LogFilesToAzure
            throw
        }
    }
    else
    {
        Write-Log -Message "Failed to locate HP password encryption utility!" -Component "BIOSPassword" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
}
###########################
## Stage the BIOS update ##
###########################
Write-Log -Message "Staging BIOS firmware update" -Component "BIOSFlash"
try
{
    If ($BIOSPasswordSet)
    {
        $Process = Start-Process -FilePath "$WorkingDirectory\$BIOSUpdateDirectoryName\$BIOSExecutable" -WorkingDirectory $WorkingDirectory -ArgumentList "-s -p.\password.bin -f.\$BIOSUpdateDirectoryName -r -b" -NoNewWindow -PassThru -Wait -ErrorAction Stop
    }
    Else
    {
        $Process = Start-Process -FilePath "$WorkingDirectory\$BIOSUpdateDirectoryName\$BIOSExecutable" -WorkingDirectory $WorkingDirectory -ArgumentList "-s -f.\$BIOSUpdateDirectoryName -r -b" -NoNewWindow -PassThru -Wait -ErrorAction Stop
    }
    If ($Process.ExitCode -eq 3010)
    {
        Write-Log -Message "The update has been staged. The BIOS will be updated on restart" -Component "BIOSFlash"
    }
    Else
    {
        Write-Log -Message "An unexpected exit code was returned: $($Process.ExitCode)" -Component "BIOSFlash" -LogLevel 3
        Upload-LogFilesToAzure
        throw
    }
}
catch
{
    Write-Log -Message "Failed to stage BIOS update: $($_.Exception.Message)" -Component "BIOSFlash" -LogLevel 3
    Upload-LogFilesToAzure
    throw
}
Write-Log -Message "This BIOS update run is complete. Have a nice day!" -Component "Completion"
Upload-LogFilesToAzure

Calculating the Offline Time for a Windows 10 Upgrade

For my Windows 10 feature update installation process, I like to gather lots of statistics around the upgrade itself as well as the devices it runs on, so we can report on these later. These stats can be useful for identifying areas of potential improvement in the upgrade process. One stat I gather is the offline time for the upgrade, i.e. the time between when the downlevel (online) phase completes and the computer restarts, and when the offline phases have completed and the OS is brought back to the logon screen. Knowing this value across the estate helps gauge the user experience and how much time is spent waiting for the offline phases to complete.

Calculating this value is actually straightforward – you can do it by searching the System event log for the last time the computer was restarted and comparing it with the OS installation time, which gets recorded in WMI after the offline phases have completed successfully. The only catch is that after the offline phase completes the event logs are refreshed and previous entries are removed, so you have to search the event log in the Windows.old folder instead. You must do this before the Windows.old folder is automatically removed (depending on your policy) and manual rollback is no longer possible.

The PowerShell code below searches for the most recent event ID 1074, compares its date with the OS install date value in WMI (use the CIM cmdlets to get an automatic conversion to [DateTime]) and outputs the difference as a TimeSpan, which you can log however you want.

The good news is that for a 20H2 upgrade from media – at least in my various tests – the offline time has been impressively low.

$Params = @{
    Path = "$env:SystemDrive\Windows.old\Windows\System32\winevt\Logs\System.evtx"
    Id = 1074
}
$LatestRestartEvent = (Get-WinEvent -FilterHashtable $Params -ErrorAction SilentlyContinue | Select -First 1)
$InstallFinishedDate = Get-CimInstance Win32_OperatingSystem | Select -ExpandProperty InstallDate
If ($LatestRestartEvent)
{
    $UpgradeOfflineTime = $InstallFinishedDate - $LatestRestartEvent.TimeCreated
}

Windows 10 Upgrades – Dealing with Safeguard ID 25178825 (Conexant ISST Driver)

I saw a tweet recently from Madhu Sanke where he had deployed updated Conexant ISST drivers to his environment to release devices from the Safeguard ID 25178825 which at the time of writing still prevents devices trying to upgrade to 2004 or 20H2.

You can read more about the issue here, but around the September/October 2020 timeframe newer drivers became available that are not affected by this Safeguard, and updating the drivers will release a device from the hold.

Madhu took the approach of downloading the drivers from the Microsoft Update Catalog and packaging them with a script wrapper for the install. In researching this myself, I found that more than one driver is available and different models take different drivers, so I decided to write a little C# program that updates a device using Windows Update directly instead.

The program simply connects to Windows Update online, checks if a newer driver version is available for the Conexant ISST audio driver (listed under ‘Conexant – MEDIA – ‘ in the MS update catalog), and downloads and installs it.
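I'm not reproducing the C# here, but the equivalent logic against the Windows Update Agent COM API looks roughly like this in PowerShell. The search filter and the title match are illustrative, not the program's exact criteria:

```powershell
# Search Windows Update online for applicable driver updates
$Session  = New-Object -ComObject Microsoft.Update.Session
$Searcher = $Session.CreateUpdateSearcher()
$Result   = $Searcher.Search("IsInstalled=0 and Type='Driver'")
# Narrow down to the Conexant audio driver (illustrative match)
$Updates  = $Result.Updates | Where-Object { $_.Title -match "Conexant" }
If ($Updates)
{
    $Collection = New-Object -ComObject Microsoft.Update.UpdateColl
    $Updates | ForEach-Object { [void]$Collection.Add($_) }
    # Download then install the selected updates
    $Downloader = $Session.CreateUpdateDownloader()
    $Downloader.Updates = $Collection
    [void]$Downloader.Download()
    $Installer = $Session.CreateUpdateInstaller()
    $Installer.Updates = $Collection
    $InstallResult = $Installer.Install()
}
```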

The executable can then be deployed as is using a product like Microsoft Endpoint Configuration Manager.

For environments where software updates are deployed with Configuration Manager / WSUS, the program will check if registry keys have been set preventing access to Windows Update online and temporarily open them. It will restore the previous settings after updating.
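The policy value involved is the UseWUServer setting, which directs the Windows Update Agent at the internal WSUS server when set. A simplified sketch of the check (the real program restores whatever state was there before):

```powershell
# When UseWUServer = 1, the WUA scans against WSUS instead of Windows Update online
$AUKey = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
$Current = (Get-ItemProperty -Path $AUKey -Name UseWUServer -ErrorAction SilentlyContinue).UseWUServer
If ($Current -eq 1)
{
    Set-ItemProperty -Path $AUKey -Name UseWUServer -Value 0
    Restart-Service wuauserv
    # ... run the driver update against Windows Update online here ...
    Set-ItemProperty -Path $AUKey -Name UseWUServer -Value 1  # restore previous setting
    Restart-Service wuauserv
}
```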

The program also logs to the Temp directory.

On my HP laptops, the driver in question has been updated and is no longer affected by the Safeguard.

You can download the C# executable from my GitHub repo, or you can clone the solution in Visual Studio and compile your own executable. Just remember to run the program in SYSTEM context or with administrative privilege on the client as it’s installing a driver.

A reboot is usually required after installation, and the driver can display a small one-time toast notification.

If your devices are managed by Windows Update for Business, you may see a notification from there as well, depending on your configuration of course.

After deploying the driver update to affected devices, expect to see them being released from the Safeguard after telemetry has run. You can use my PowerBI reports to help with reporting on Safeguards in your environment.

Thanks again to Madhu for the inspiration!

Getting Creative: a Bespoke Solution for Feature Update Deployments

This is the first blog post in what I hope will be a series of posts demonstrating several custom solutions I created for things such as feature update deployments, managing local admin password rotation, provisioning Windows 10 devices, managing drive mappings and more. My reasons for creating these solutions were to overcome some of the current limitations in existing products or processes, to make things more cloud-first and independent of existing on-prem infrastructure where possible, and to meet the requirements of the business more exactly.

Although I will try to provide a generalised version of the source code where possible, I am not providing complete solutions that you can go ahead and use as is. Rather my intention is to inspire your own creativity, to give working examples of what could be done if you have the time and resource, and to provide source code as a reference or starting point for your own solutions should you wish to create them!

Someone asked me recently how we deploy feature updates, and it was a difficult question to answer other than to say we use a custom-built process. Having used some of the existing methods available (ConfigMgr software updates, a ConfigMgr custom WaaS process, Windows Update for Business), we concluded there were shortcomings in each, and this provided the inspiration to create our own customized process giving us the control, reliability, user experience and reporting capability we desired. Don't get me wrong – I am not saying these methods aren't good – they just couldn't do things exactly the way we wanted.

So I set out to create a bespoke process – one that we could customize according to our needs, that was largely independent of our existing Configuration Manager infrastructure and that could run on any device with internet access. This required making use of cloud services in Azure as well as a lot of custom scripting! In this blog, I’ll try to cover what I did and how it works.

User Experience

First, let's look at the end user experience of the feature update installation process – this was key for us: improving the user experience, keeping it simple yet informative, and able to respond appropriately to any upgrade issues.

Once the update is available to a device, a toast notification is displayed notifying the user that an update is available. Initially, this displays once a day and automatically dismisses after 25 seconds. (I’ve blanked out our corporate branding in all these images)

We use a soft deadline – ie the update is never forced on the user. Enforcing compliance is handled by user communications and involvement from our local technicians. With one week left before the deadline, we increase the frequency of the notifications to twice per day.

If the deadline has passed, we take a more aggressive approach with the notifications, modifying the image and text and displaying it every 30 minutes; it doesn't leave the screen unless the user actions it or dismisses it.

The update can be installed via a shortcut on the desktop, or in the last notification it can be initiated from the notification itself.
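As a rough illustration of the kind of notification code involved (our production solution is more elaborate; the AppId and text here are placeholders), a toast can be raised from Windows PowerShell via the WinRT notification classes:

```powershell
# Load the WinRT types needed for toast notifications
[Windows.UI.Notifications.ToastNotificationManager, Windows.UI.Notifications, ContentType = WindowsRuntime] | Out-Null
[Windows.UI.Notifications.ToastNotification, Windows.UI.Notifications, ContentType = WindowsRuntime] | Out-Null
[Windows.Data.Xml.Dom.XmlDocument, Windows.Data.Xml.Dom, ContentType = WindowsRuntime] | Out-Null
# Placeholder toast content
$ToastContent = @"
<toast scenario="reminder">
  <visual>
    <binding template="ToastGeneric">
      <text>Windows 10 update available</text>
      <text>A feature update is ready to install.</text>
    </binding>
  </visual>
</toast>
"@
$Xml = New-Object Windows.Data.Xml.Dom.XmlDocument
$Xml.LoadXml($ToastContent)
$Toast = New-Object Windows.UI.Notifications.ToastNotification($Xml)
# The AppId must be one registered on the device - this one is an assumption
$AppId = "Microsoft.SoftwareCenter.DesktopToasts"
[Windows.UI.Notifications.ToastNotificationManager]::CreateToastNotifier($AppId).Show($Toast)
```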

Once triggered, a custom UI is displayed introducing the user to the update and what to expect.

When the user clicks Begin, we check that a power adapter is connected and no removable USB devices are attached – if any are, we prompt the user to remove them first.

The update runs in three phases or stages – these correspond to the PreDownload, Install and Finalize commands on the update (more on that later). The progress of each stage is polled from the registry, as is the Setup Phase and Setup SubPhase.
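As a sketch, the polling looks something like this. The key and value names are as observed during 2004/20H2 upgrades and may differ between builds, so treat them as an assumption:

```powershell
# Windows setup records feature update progress under this volatile key
$Key = "HKLM:\SYSTEM\Setup\MoSetup\Volatile"
$Props = Get-ItemProperty -Path $Key -ErrorAction SilentlyContinue
If ($Props)
{
    $Props.SetupProgress   # percent complete for the current stage
    $Props.SetupPhase      # e.g. the current setup phase
    $Props.SetupSubPhase   # finer-grained sub-phase
}
```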

Note that the user cannot cancel the update once it starts and this window will remain on the screen and on top of all other windows until the update is complete. The user can click the Hide me button, and this will shrink the window like so:

This little window also cannot be removed from the screen, but it can be moved around and is small enough to be unobtrusive. When the update has finished installing, or when the user clicks Restore, the main window will automatically display again and report the result of the update.

The colour scheme is based on Google’s material design, by the way.

If the update failed during the online phase, the user can still initiate the update from the desktop shortcut, but toast notifications will no longer display as reminders. The idea is that IT can attempt to remediate the device and run the update again afterwards.

If successful, the user can click Restart to restart the computer immediately. Then the offline phase of the upgrade runs, where you see the usual light blue screen and white Windows update text reporting that updates are being installed.

Once complete, the user will be brought back to the login screen, and we won’t bother them anymore.

If the update rolled back during the offline phase, we will detect this next time they log in and notify them one time:

Logging and Reporting

The entire update process is logged right from the start to a log file on the local machine. We also send ‘status messages’ at key points during the process and these find their way to an Azure SQL database which becomes the source for reporting on update progress across the estate (more on this later).

A Power BI report gives visual indicators of update progress as well as a good amount of detail from each machine, including update status, whether it passed or failed the readiness checks (and if failed, why), whether it passed the compatibility assessment, the error code if the assessment or the install failed, whether any hard blocks were found, SetupDiag results (2004 onward), how long the update took to install, and a bunch of other stuff we find useful.

Since 2004 though, we have started inventorying certain registry keys using ConfigMgr to give us visibility of devices that won’t upgrade because of a Safeguard hold or other reason, so we can target the upgrade only at devices that aren’t reporting any known compatibility issues.

If a device performs a rollback, we can get it to upload key logs and registry key dumps to an Azure storage account where an administrator can remotely diagnose the issue.

How does it work?

Now let’s dive into the process in more technical detail.

Deployment Script

The update starts life with a simple PowerShell script that does the following:

  • Creates a local directory to use to cache content, scripts and logs etc
  • Temporarily stores some domain credentials in the registry of the local SYSTEM account as encrypted secure strings for accessing content from a ConfigMgr distribution point if necessary (more on this later)
  • Downloads a manifest file that contains a list of all files and file versions that need to be downloaded to run the update. These include scripts, dlls (for the UI), xml definition files for scheduled tasks etc
  • Each file is then downloaded to the cache directory from an Azure CDN
  • 3 scheduled tasks are then registered on the client:
    • A ‘preparer’ task which runs prerequisite actions
    • A ‘file updater’ task which keeps local files up-to-date in case we wish to change something
    • A ‘content cleanup’ task which is responsible for cleaning up in the event the device gets upgraded through any means
  • A ‘status message’ is then sent as an http request, creating a new record for the device in the Azure SQL database

This script can be deployed through any method you wish, including Configuration Manager, Intune or just manually; however, it should be run in SYSTEM context.
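As a rough illustration of the credential-stamping step mentioned above, the sketch below stores domain credentials in the SYSTEM hive as a DPAPI-encrypted secure string. The registry path, value names and account are my own placeholders, not the solution's actual ones, and the inline password is purely illustrative (in practice it would arrive via the deployment parameters).

```powershell
# Assumed registry location for the solution's working data (hypothetical)
$RegPath = "HKLM:\SOFTWARE\MyOrg\FeatureUpdate"
If (-not (Test-Path $RegPath)) { New-Item -Path $RegPath -Force | Out-Null }

# ConvertFrom-SecureString without a key uses DPAPI, so the blob can only be
# decrypted by the same account (SYSTEM) on the same machine
$Secure = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force   # illustration only
Set-ItemProperty -Path $RegPath -Name 'DPUser' -Value 'CONTOSO\svc-content'
Set-ItemProperty -Path $RegPath -Name 'DPPass' -Value ($Secure | ConvertFrom-SecureString)

# Later, still running as SYSTEM, rehydrate the credential and clean up after use
$Pass = (Get-ItemProperty -Path $RegPath -Name 'DPPass').DPPass | ConvertTo-SecureString
$Cred = [PSCredential]::new((Get-ItemProperty -Path $RegPath -Name 'DPUser').DPUser, $Pass)
Remove-ItemProperty -Path $RegPath -Name 'DPUser','DPPass'
```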

Content

All content needed for the update process to run is put into a container in a storage account in Azure. This storage account is exposed via an Azure Content Delivery Network (CDN). This means that clients can get all the content they need directly from an internet location with minimal latency no matter where they are in the world.

Feature Update Files

The files for the feature update itself are the ESD file and WindowsUpdateBox.exe that Windows Update uses. You can get these files from Windows Update, WSUS, or as in our case, from Configuration Manager via WSUS. We simply download the feature updates to a deployment package in ConfigMgr and grab the content from there.

You could of course use an ISO image and run setup.exe, but the ESD files are somewhat smaller in size and are sufficient for purpose.

The ESD files are put into the Azure CDN so the client can download them from there, but we also allow the client the option to get the FU content from a local ConfigMgr distribution point if they are connected to the corporate network locally. Having this option allows considerably quicker content download. Since IIS on the distribution points is not open to anonymous authentication, we use the domain credentials stamped to the registry to access the DP and download the content directly from IIS (credentials are cleaned from the registry after use).

Status Messages

Similar to how ConfigMgr sends status messages to a management point, this solution also sends status messages at key points during the process. This works by using Azure Event Grid to receive the message sent from the client as an http request. The Event Grid sends the message to an Azure Function, and the Azure Function is responsible for updating the Azure SQL database created for this purpose with the current upgrade status of the device. The reason for doing it this way is that sending an http request to Event Grid is very quick and doesn’t hold up the process. Event Grid forwards the message to the Azure Function and can retry the message in the case it can’t get through immediately (although I’ve never experienced any failures or dead-lettering in practice). The Azure Function uses a Managed Identity to access the SQL database, which means the SQL database never needs to be exposed outside of its sandbox in Azure, and no credentials are needed to update the database.
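A status message post to an Event Grid custom topic can be sketched roughly as below. The topic endpoint, key and event schema fields shown are placeholders following the Event Grid custom-topic event format; the solution's actual payload is not documented here.

```powershell
# Placeholders: substitute your own Event Grid custom topic endpoint and access key
$TopicEndpoint = "https://mytopic.westeurope-1.eventgrid.azure.net/api/events"
$TopicKey = "<access key>"

# Event Grid expects an array of events with these schema fields
$Event = @(
    @{
        id          = [guid]::NewGuid().ToString()
        eventType   = "FeatureUpdate.StatusMessage"
        subject     = $env:COMPUTERNAME
        eventTime   = (Get-Date).ToUniversalTime().ToString("o")
        dataVersion = "1.0"
        data        = @{
            ComputerName = $env:COMPUTERNAME
            UpdateStatus = "PreDownloadComplete"   # in practice, read from the local registry keys
        }
    }
)

# Custom-topic posts authenticate with the 'aeg-sas-key' header
Invoke-RestMethod -Uri $TopicEndpoint -Method Post `
    -Headers @{ "aeg-sas-key" = $TopicKey } `
    -Body (ConvertTo-Json -InputObject $Event -Depth 5) `
    -ContentType "application/json"
```

Because the request returns as soon as Event Grid accepts the event, the client never waits on the Function or the database.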

We then use PowerBI to report on the data in the database to give visibility of where in the process every device is, if there are any issues that need addressing and all the stats that are useful for understanding whether devices get content from Azure or a local DP, what their approximate bandwidth is, how long downloads took, whether they were wired or wireless, make and model, upgrade time etc.

Preparation Script

After the initial deployment script has run, the entire upgrade process is driven by scheduled tasks on the client. The first task to run is the Preparation script and this attempts to run every hour until successful completion. This script does the following things:

  • Create the registry keys for the upgrade. These keys are stamped with the update progress and the results of the various actions such as pre-req checks, downloads etc. When we send a ‘status message’ we simply read these keys and send them on. Having progress stamped in the local registry is useful if we need to troubleshoot on the device directly.
  • Run readiness checks, such as
    • Checking for client OS
    • Checking disk space
  • Check for internet connectivity
  • Determine the approximate bandwidth to the Azure CDN and measure latency. This is done by downloading a 100MB file from the CDN and timing the download, and by using ‘psping.exe’ to measure latency. From this, we can calculate an approximate download time for the main ESD file.
  • Determine if the device is connected by wire or wireless
  • Determine if the device is connected to the corporate network
  • If the device is on the corporate network, we check latency to all the ConfigMgr distribution points to determine which one will be the best DP to get content from
  • Determine whether OS is business or consumer and which language. This helps us figure out which ESD file to use.
  • Download WindowsUpdateBox.exe and verify the hash
  • Download the feature update ESD file and verify the hash
    • Downloads of FU content are done using BITS transfers, as this proved the most reliable method. Code is included to handle BITS transfer errors for added resilience.
  • Assuming all the above is done successfully, the Preparation task will be disabled and the PreDownload task created.
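The bandwidth check described above can be sketched as follows. The CDN URL, test file and ESD size are assumptions for illustration; the timing approach (download a known-size file, derive Mbps, project the ESD download) follows the post.

```powershell
# Hypothetical CDN test file and local cache path
$TestUrl    = "https://mycdn.azureedge.net/content/100MB.bin"
$TestFile   = "$env:ProgramData\FeatureUpdate\100MB.bin"
$TestSizeMB = 100

# Time the download; Start-BitsTransfer runs synchronously by default
$Elapsed = Measure-Command {
    Start-BitsTransfer -Source $TestUrl -Destination $TestFile
}
$Mbps = [math]::Round(($TestSizeMB * 8) / $Elapsed.TotalSeconds, 2)

# Project the download time for an ESD of ~3.5 GB (size is an assumption)
$EsdSizeMB = 3500
$EstimatedMinutes = [math]::Round(($EsdSizeMB * 8 / $Mbps) / 60, 1)
Write-Output "Approx. bandwidth: $Mbps Mbps; estimated ESD download: $EstimatedMinutes minutes"
```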

PreDownload Script

The purpose of this script is to run the equivalent of a compatibility assessment. When using the ESD file, this is done with the /PreDownload switch on WindowsUpdateBox.exe. Should the PreDownload fail, the error code will be logged to the registry. Since 2004, we also read the SetupDiag results and stamp these to the registry. We also check the Compat*.xml files to look for any hard blocks and if found, we log the details to the registry.

If the PreDownload failed, we change the schedule of the task to run twice a week. This allows for remediation to be performed on the device before attempting the PreDownload assessment again.

If the PreDownload succeeds, we disable the PreDownload task and create two new ones – a Notification task and an Upgrade task.

We also create a desktop shortcut that the user can use to initiate the upgrade.

Notification Script

The Notification script runs in the user context and displays toast notifications to notify the user that the upgrade is available, what the deadline is and how to upgrade, as already mentioned.

Upgrade Script

When the user clicks the desktop shortcut or the ‘Install now’ button on the toast notification, the upgrade is initiated. Because the upgrade needs to run with administrative privilege, the only thing the desktop shortcut and the toast notification button do is create an entry in the Application event log. The upgrade scheduled task is triggered when this event is created, and the task runs in SYSTEM context. The UI is displayed in the user session with the help of the handy ServiceUI.exe from the MDT toolkit.
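The elevation handoff can be sketched like this, with assumed event source and ID: the user-context code writes an Application event, and the SYSTEM scheduled task is configured with an event trigger on that source/ID to start the actual upgrade.

```powershell
# Hypothetical event source; registering a source requires admin rights, so
# this would be done once by the SYSTEM deployment script, not by the user
$Source = "FeatureUpdateTrigger"
If (-not [System.Diagnostics.EventLog]::SourceExists($Source)) {
    New-EventLog -LogName Application -Source $Source
}

# The user-context shortcut/toast action only needs to write the event;
# the scheduled task's event trigger (matching source + EventId) does the rest
Write-EventLog -LogName Application -Source $Source -EventId 9000 -EntryType Information `
    -Message "User requested feature update installation"
```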

Upgrade UI

The user interface part of the upgrade is essentially a WPF application coded in PowerShell. The UI displays some basic upgrade information for the user and once they click ‘Begin’ we run the upgrade in 3 stages:

  1. PreDownload. Even though we ran this already, we run it again just before installing to make sure nothing has changed since, and it doesn’t take long to run.
  2. Install. This uses the /Install switch on WindowsUpdateBox.exe and runs the main part of the online phase of the upgrade.
  3. Finalize. This uses the /Finalize switch and finalizes the update in preparation for a computer restart.

The progress of each of these phases is tracked in the registry and displayed in the UI using progress bars. If there is an issue, we notify the user and IT can get involved to remediate.
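The three stages can be sketched as a simple loop over WindowsUpdateBox.exe switches. The /PreDownload, /Install and /Finalize switches are those named in this post; the full argument set Windows Update passes internally is not publicly documented, so treat these invocations and the registry path as illustrative only.

```powershell
# Assumed cache location for the FU content and a hypothetical progress key
$Box = "$env:ProgramData\FeatureUpdate\WindowsUpdateBox.exe"
$Stages = "/PreDownload","/Install","/Finalize"

foreach ($Stage in $Stages) {
    $Process = Start-Process -FilePath $Box -ArgumentList $Stage -PassThru -Wait -WindowStyle Hidden
    # Stamp the result to the registry so the UI progress bars (and the status
    # messages) can track where the upgrade got to
    Set-ItemProperty -Path "HKLM:\SOFTWARE\MyOrg\FeatureUpdate" `
        -Name "Stage_$($Stage.TrimStart('/'))" -Value $Process.ExitCode
    If ($Process.ExitCode -ne 0) {
        Write-Output ("Stage {0} failed with code 0x{1:X}" -f $Stage, $Process.ExitCode)
        break
    }
}
```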

If successful, the user can restart the computer immediately or at a later point (though we discourage delaying!). We don’t stop the user from working while the upgrade is running in the online phase, and we allow them to partially hide the upgrade window so the upgrade does not hinder user productivity (similar to how WUfB installs an update in the background).

After the user restarts the computer, the usual Windows Update screens take over until the update has installed and the user is brought to the login screen again.

Drivers and Stuff

We had considered upgrading drivers and even apps with this process, as we did for the 1903 upgrade, however user experience was important for us and we didn’t want the upgrade to take any longer than necessary, so we decided not to chain anything onto the upgrade process itself but handle other things separately. That being said, because this is a custom solution it is perfectly possible to incorporate additional activities into it if desired.

Rollback

In the event that the OS was rolled back during the offline phase, a scheduled task will run that will detect this and raise a toast notification to inform the user. We have a script that will gather logs and data from the device and upload it to a storage account in Azure where an administrator can remotely diagnose the issue. I plan to incorporate that as an automatic part of the process in a future version.

Updater Script

The solution creates an Updater scheduled task which runs once per day. The purpose of this task is to keep the solution up to date. If we want to change something in the process, add some code to a file or whatever is necessary, the Updater will take care of this.

It works by downloading a manifest file from the Azure CDN. This file contains all the files used by the solution with their current versions. If we update something, we upload the new files to the Azure storage account, purge them from the CDN and update the manifest file.

The Updater script will download the current manifest, detect that something has changed and download the required files to the device.
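The Updater's compare-and-download logic can be sketched as below, assuming a JSON manifest of FileName/Version pairs on the CDN (the manifest format and URLs are my assumptions, not the solution's actual schema).

```powershell
# Hypothetical CDN base URL and local cache directory
$CDNBase = "https://mycdn.azureedge.net/featureupdate"
$Cache   = "$env:ProgramData\FeatureUpdate"

# Invoke-RestMethod parses the JSON manifest into objects
$NewManifest     = Invoke-RestMethod -Uri "$CDNBase/manifest.json"
$OldManifestPath = Join-Path $Cache "manifest.json"
$OldManifest     = If (Test-Path $OldManifestPath) { Get-Content $OldManifestPath -Raw | ConvertFrom-Json }

foreach ($Entry in $NewManifest) {
    $Current = $OldManifest | Where-Object FileName -eq $Entry.FileName
    # Download anything that is new, or whose manifest version has increased
    If (-not $Current -or [version]$Current.Version -lt [version]$Entry.Version) {
        Start-BitsTransfer -Source "$CDNBase/$($Entry.FileName)" -Destination (Join-Path $Cache $Entry.FileName)
    }
}

# Persist the new manifest as the local baseline for the next daily run
$NewManifest | ConvertTo-Json | Set-Content $OldManifestPath
```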

Cleanup Script

A Cleanup task is also created. When this task detects that the OS has been upgraded to the required version, it will remove all the scheduled tasks and cached content to leave no footprint on the device other than the log file and the registry keys.
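A minimal sketch of that cleanup, with assumed task names, cache path and target build: once the OS build reaches the target version, remove the scheduled tasks and cached content, keeping the log file and registry keys as described.

```powershell
# 20H2 is build 19042; the target is whatever version you are deploying
$TargetBuild  = 19042
$CurrentBuild = [int](Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion").CurrentBuildNumber

If ($CurrentBuild -ge $TargetBuild) {
    # Task names are hypothetical examples
    "FeatureUpdate - Preparer","FeatureUpdate - File Updater","FeatureUpdate - Notification",
    "FeatureUpdate - Upgrade","FeatureUpdate - Cleanup" | ForEach-Object {
        Unregister-ScheduledTask -TaskName $_ -Confirm:$false -ErrorAction SilentlyContinue
    }
    # Remove cached content but leave the log file behind
    Get-ChildItem "$env:ProgramData\FeatureUpdate" -Exclude "*.log" -ErrorAction SilentlyContinue |
        Remove-Item -Recurse -Force
}
```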

Source Files

You can find a generalised version of the code used in this solution in my Github repo as a reference. As mentioned before though, there are many working parts to the solution including the Azure services and I haven’t documented their configuration here.

Final Comments

The main benefit of this solution for us is that it is completely customised to our needs. Although it is relatively complex to create, it is relatively easy to maintain and to adapt for new W10 versions. We still take advantage of existing products: ConfigMgr allows devices to get content from a local DP when they are corporate connected; ConfigMgr, Update Compliance and Desktop Analytics help us determine device compatibility; and ConfigMgr or Intune actually gets the deployment script to the device. We also make good use of Azure services for the status messages and the cloud database, as well as PowerBI for reporting. So the solution still utilizes existing Microsoft products while giving us the control and customisations that we need to provide a better upgrade experience for our users.

Windows 10 Feature Update Readiness PowerBI Report (MEMCM version)

Following on from my previous post where I shared a PowerBI report that provides information on Windows 10 feature update blocks using Update Compliance and Desktop Analytics, in this post I will share another report that exposes similar data, but this time built from custom hardware inventory data in MEMCM.

Outside of the Windows setup process, feature update compatibility is assessed on a Windows 10 device by the ‘Microsoft Compatibility Appraiser’ – a scheduled task that runs daily, the results of which form part of the telemetry data that gets uploaded to Microsoft from the device. The specifics of this process are still somewhat shrouded in mystery, but thanks to the dedication of people like Adam Gross we can lift the lid a little bit on understanding compatibility assessment results. I highly recommend reading his blog if you’re interested in a more in-depth understanding.

Some compatibility data is stored in the registry under HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags, and in particular the subkey TargetVersionUpgradeExperienceIndicators contains useful compatibility information for different W10 releases such as the GatedBlockIds, otherwise known as the Safeguard Hold Ids, some of which are published by Microsoft on their Windows 10 Known Issues pages. Since the release of Windows 10 2004, many devices have been prevented from receiving a W10 feature update by these Safeguard holds, so reporting on them can be useful to identify which devices are affected by which blocks, and which devices are not affected and are candidates for upgrade.
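Before relying on inventory, you can inspect these values locally with a quick sketch like the one below. The key path follows the post; the exact value names per release subkey can vary (the Id value under GatedBlock is my reading of the layout), so enumerate what's actually present on your devices.

```powershell
# Enumerate the per-release compatibility indicator subkeys (e.g. 19H1, 20H1, UNV)
$Base = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\TargetVersionUpgradeExperienceIndicators"
Get-ChildItem $Base -ErrorAction SilentlyContinue | ForEach-Object {
    $Props = Get-ItemProperty $_.PSPath
    [pscustomobject]@{
        Release      = $_.PSChildName
        GatedBlockId = $Props.GatedBlockId   # Safeguard hold Id(s), where present
        UpgEx        = $Props.UpgEx          # 'traffic light' upgrade experience rating
        RedReason    = $Props.RedReason      # block reason for 'red' devices
    }
}
```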

By inventorying these registry keys with MEMCM I built the PowerBI report shown above where you can view data such as:

  • Which Safeguard holds are affecting which devices
  • Rank of model count
  • Rank of OS version count
  • The upgrade experience indicators (UE)
  • UE red block reasons

Note that this data alone doesn’t replace a solution like Desktop Analytics which can help identify devices with potential app or driver compatibility issues, but it’s certainly helpful with the Safeguard holds.

You can also use this data to build collections in MEMCM containing devices that are affected by a Safeguard hold. Because this is based on inventory data, when a Safeguard hold is released by Microsoft those devices will naturally move out of those collections.

Understanding the data

Because of the lack of any public documentation around the compatibility appraiser process, we have to take (hopefully!) intelligent guesses as to what the data means.

Under the TargetVersionUpgradeExperienceIndicators registry key, for example, you may find subkeys for 19H1, 20H1, 21H1 or even older Windows 10 versions. I haven’t found any keys for *H2 releases though, and I can only assume that’s because the Safeguard holds for an H1 release are the same for the corresponding H2 release. From the Windows 10 Known Issues documentation this seems to be the case.

There is also a UNV subkey – I assume that means ‘Universal’ and that it contains data that applies across any feature update.

Under the *H1 keys (I suppose I should call it a branch, really) we can try to understand some of the main keys such as:

  • FailedPrereqs – I haven’t seen any devices yet that actually failed the appraiser’s prerequisites, but I assume the details would be logged here if they were.
  • AppraiserVersion, SdbVer, Version, DateVer* – I assume these indicate the version of the compatibility appraiser database used for the assessment
  • DataExp*, DataRel* – these seem to indicate the release and expiry dates for the Appraiser database so my assumption is a new one will be downloaded at or before expiry
  • GatedBlock* – the Id key in particular gives the Safeguard Hold Id/s that are blocking the device from upgrade
  • Perf – this appears to be a general assessment of the performance of the device. A low performing device will likely take longer to upgrade
  • UpgEx* – these seem to be a traffic-light rating for the ‘upgrade experience’. The UpgExU seems to stand for Upgrade Experience Usage – I don’t know what the difference between the two is. Green is good, right, so a green device is going to be a good upgrade experience, yellow or orange not so great, red is a blocker. I don’t know exactly what defines each colour other than that…
  • RedReason – if you’ve got a red device, it’s blocked from upgrade by something – but this isn’t related to Safeguard holds as far as I can tell. It seems to be related to the keys under HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\CompatMarkers\*H1, such as BlockedByBios, BlockedByCpu for example. The only one I’ve seen in practice is the SystemDriveTooFull block.

Configure Custom Hardware Inventory

Alright, so first we need to configure hardware inventory in MEMCM to include the registry keys we want to use. You can use the RegKey2Mof utility, or you can download the files below to update your Configuration.mof file and your Client Settings / hardware inventory classes. I’ll assume you are familiar with that process.

Configuration.mof.additions

20H1_AppCompat.mof

21H1_AppCompat.mof

If you choose to use different names for the created classes, you’ll need to update the PowerBI report as it uses those names.

Download the PowerBI Report

Download the PBI template from here:

Windows 10 Feature Updates Readiness

On opening, you’ll need to add the SQL server and database name for your MEMCM database:

You won’t see any data until devices have started sending hardware inventory that includes the custom classes.

Note that I have included pages for both 20H1 and 21H1, but the latter is just a placeholder for now as no actual compatibility data will be available until that version is released, or close to.

MEMCM collections

You can also build collections like those shown above by adding a query rule and using the names created by the custom classes – in this case TwentyHOne and TwentyOneHOne. Use the value option to find which GatedBlockIds are present in your environment.

Hope it helps!

Prevent Users from Disabling Toast Notifications – Can it be Done?

Another toast notifications post – this time to deal with an issue where users have turned off toast notifications. In my deployment of Windows 10 feature updates for example, I use toast notifications to inform users an update is available. Once we hit the installation deadline, the notifications become more aggressive and display more frequently and do not leave the screen unless the user actions or dismisses them. But we found that some users turn off toast notifications altogether – perhaps they just don’t like any notifications, or perhaps they don’t like being reminded to install the feature update.

In any case, since toast notifications are a key communications channel with our users, it’s important for us that they stay enabled.

Users can disable toast notifications in Settings > System > Notifications & actions – simply turn off the setting Get notifications from apps and other senders.

There is also a group policy setting that can disable toast notifications and lock the setting so the user can’t turn it back on.

However, I was surprised to find no setting to do the opposite – turn notifications on and lock the setting, preventing the user from turning them off.

What I did find is a registry key that enables or disables toast notifications in the user context, but it doesn’t take effect without restarting a service called Windows Push Notifications User Service.

Here’s the registry key. Setting it to 1 enables notifications and 0 disables.

Because this is not being done by group policy, you can’t lock the setting unfortunately. But what you can do is use a Configuration Manager compliance baseline, or even Proactive remediations in MEM, to detect and remediate and turn notifications back on if a user has turned them off. It needs to run with sufficient frequency to be effective.

Here is a detection script for MEMCM that will check the registry key and if it exists and is set to zero, will flag non-compliance.

$ToastEnabled = Get-ItemProperty -Path "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\PushNotifications" -Name ToastEnabled -ErrorAction SilentlyContinue | Select -ExpandProperty ToastEnabled
If ($ToastEnabled -eq 0)
{
    Write-host "Not compliant"
}
Else
{
    Write-host "Compliant"
}

And here’s a remediation script that will set the registry key to the ‘enabled’ value, and restart the push notifications service.

Set-ItemProperty -Path "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\PushNotifications" -Name ToastEnabled -Value 1 -Force
Get-Service -Name WpnUserService* | Restart-Service -Force

Remember to run these in the user context and allow remediation.

With this active, we can’t completely prevent users from turning off notifications altogether, but if they do, we’ll turn them back on. If they want to fight with the remediation, that’s on them 🙂

Real world notes: In-place OS upgrade on Server 2012 R2 ConfigMgr distribution points

In my MEMCM primary site I had several distribution points that were still running Windows Server 2012 R2, so I decided to run an in-place OS upgrade on them to bring them up to Server 2019. After reading the MS Docs, I concluded this is a supported scenario and things would go smoothly. I was quite wrong, however!

The OS upgrade itself went very well. I scripted the process to automate it and deployed it via MEMCM and the servers upgraded in around 1-1.5 hours. However, after the upgrade I found two big issues on each server:

  • The ConfigMgr client was broken – specifically WMI. The SMS Agent host (ccmexec) service was not running and would not start. Digging in the logs, I could see that ccmrepair.exe was running and attempting to fix the client, however it was repeatedly failing with the following error:

MSI: Setup was unable to compile the file DiscoveryStatus.mof. The error code is 80041002

  • The root\SCCMDP namespace was missing from WMI. This essentially breaks the distribution point role as no packages can be updated or distributed to it and content validation will fail

It’s quite possible something on the servers contributed to these issues, but they were actually fairly clean boxes – physical HP servers running only the ConfigMgr client, some HP software, antivirus and a few agents such as MMA and Azure agents. When I contacted Microsoft support they suggested performing a site reset, but when you have many servers to upgrade that isn’t viable. They also wouldn’t update the Docs, as they couldn’t reproduce the issues internally even though I’m not the only one to report them.

Anyway, I’m documenting here what I did to fix it and the scripts I used.

To fix the broken ConfigMgr client I did the following:

  1. Compile the mof file %ProgramFiles%\Microsoft Policy Platform\ExtendedStatus.mof
  2. Stop any ccmrepair.exe or ccmsetup.exe process so they don’t hinder step 3
  3. Run the ‘Configuration Manager Health Evaluation’ scheduled task (ccmeval.exe) a couple of times. This will self-remediate the WMI issues.

To fix the broken DP role:

  1. Compile the mof file ..\SMS_DP$\sms\bin\smsdpprov.mof (this restores the missing WMI namespace)
  2. Query the ConfigMgr database to find the list of packages distributed to the distribution point
  3. Run some PowerShell code to restore the packages as instances in the root\SCCMDP:SMS_PackagesInContLib WMI class
  4. Run the Content Validation scheduled task to revalidate the content and remove any errors in the console

For the latter, I don’t take any credit in my script below as I simply used and expanded something I found here. Note both scripts must be run as administrator and the second script requires read access to the ConfigMgr database.

Repair ConfigMgr client script

# Compile mof file
Write-host "Compiling ExtendedStatus.mof file"
mofcomp "C:\Program Files\Microsoft Policy Platform\ExtendedStatus.mof"

# Stop processes that might hinder ccmeval
If (Get-Process -Name ccmrepair -ErrorAction SilentlyContinue)
{
    Write-host "Found ccmrepair process. Stopping it..."
    Stop-Process -Name ccmrepair -Force
    Start-Sleep -Seconds 5
}
If (Get-Process -Name ccmsetup -ErrorAction SilentlyContinue)
{
    Write-host "Found ccmsetup process. Stopping it..."
    Stop-Process -Name ccmsetup -Force
    Start-Sleep -Seconds 5
}

# Run ccmeval to self-remediate the broken WMI
Write-host "Starting Configuration Manager Health Evaluation to repair the ConfigMgr client"
Start-ScheduledTask -TaskName "Configuration Manager Health Evaluation" -TaskPath "\Microsoft\Configuration Manager"
$P = Get-Process -Name ccmeval
$P.WaitForExit()
Start-Sleep -Seconds 5
Write-host "Starting Configuration Manager Health Evaluation one more time"
Start-ScheduledTask -TaskName "Configuration Manager Health Evaluation" -TaskPath "\Microsoft\Configuration Manager"
$P = Get-Process -Name ccmeval
$P.WaitForExit()

# Open the logs to verify it was successful
Start-Process -FilePath C:\Windows\CCM\CMTrace.exe -ArgumentList "C:\Windows\ccmsetup\Logs\ccmsetup-ccmeval.log"
Start-Process -FilePath C:\Windows\CCM\CMTrace.exe -ArgumentList "C:\Windows\CCM\Logs\CcmEval.log"

Repair DP role script

# MEMCM database params
$script:dataSource = 'MyConfigMgrDatabaseServer' 
$script:database = 'MyConfigMgrDatabase'

# Function to query SQL server...must have db_datareader role for current user context
function Get-SQLData {
    [CmdletBinding()]
    param($Query)
    $connectionString = "Server=$dataSource;Database=$database;Integrated Security=SSPI;"
    $connection = New-Object -TypeName System.Data.SqlClient.SqlConnection
    $connection.ConnectionString = $connectionString
    $connection.Open()
    
    $command = $connection.CreateCommand()
    $command.CommandText = $Query
    $reader = $command.ExecuteReader()
    $table = New-Object -TypeName 'System.Data.DataTable'
    $table.Load($reader)
    
    # Close the connection
    $connection.Close()
    
    return $Table
}

Try
{
    Get-CimClass -Namespace root\SCCMDP -ErrorAction Stop
    Write-host "WMI namespace root\SCCMDP is present"
    Return
}
Catch
{
   If ($_.Exception.NativeErrorCode -eq "InvalidNamespace")
   {
        Write-host "WMI namespace root\SCCMDP is missing" -ForegroundColor Red
        Write-host "Performing remediations..."
   }
   else 
   {
        $_
        Return    
   }
}

# Compile DP mof file
Write-host "Compiling smsdpprov.mof file"
mofcomp "$(Get-SmbShare | where {$_.Name -match "SMS_DP"} | Select -ExpandProperty Path)\sms\bin\smsdpprov.mof"

# Query database for DP package info
$Server = Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\SMS\DP" -Name Server | Select -ExpandProperty Server
$Query = @"
declare @ServerName varchar (50) = '$Server'
select
  dp.PackageID
, case
 when p.ShareName <> '' 
  then '\\' + @ServerName + '\' + p.ShareName
 else ''
  end ShareName
, 'Set-WmiInstance -Path ''ROOT\SCCMDP:SMS_PackagesInContLib'' -Arguments @{PackageID="'
  + dp.PackageID + '";PackageShareLocation="' +
 case
  when p.ShareName <> '' 
   then '\\' + @ServerName + '\' + p.ShareName
  else ''
   end + '"}' PowershellCommand
from v_DistributionPoint dp
inner join v_Package p on p.PackageID = dp.PackageID
where dp.ServerNALPath like '[[]"Display=\\' + @ServerName + '%'
"@
Write-host "Querying database for DP package info"
Try
{
    $Results = Get-SQLData -Query $Query  -ErrorAction Stop
    Write-host "Found $($results.rows.count) packages"
}
Catch
{
    $_
    Return
}

# Run the POSH code to add the package back into WMI
Write-host "Restoring WMI instances to ROOT\SCCMDP:SMS_PackagesInContLib"
If ($Results)
{
    Foreach ($result in $results)
    {
        $Scriptblock = [scriptblock]::Create($Result.PowershellCommand)
        Try
        {
            $null = Invoke-Command -ScriptBlock $Scriptblock -ErrorAction Stop
        }
        Catch
        {
            $_
        }
    }
}
Else
{
    Throw "No package info found for this DP"
    Return
}

# Start the content validation task...may take some time
Write-host "Starting Content validation. Check smsdpmon.log for progress"
Start-ScheduledTask -TaskPath "\Microsoft\Configuration Manager" -TaskName "Content Validation"

Get a daily admin Audit Report for MEM / Intune

In an environment where you have multiple admin users it’s useful to audit admin activities so everyone can be aware of changes that others have made. I do this for Endpoint Configuration Manager with a daily email report built from admin status messages, so I decided to create something similar for Intune / MEM.

Admin actions are already audited for you in MEM (Tenant Administration > Audit logs) so it’s simply a case of getting that data into an email report. You can do this with Graph (which gives you more data actually) but I decided to use Log Analytics for this instead.

You need a Log Analytics workspace, and you need to configure Diagnostics settings in the MEM portal to send AuditLogs to the workspace.

Then, in order to automate sending a daily report create a service principal in Azure AD with just the permissions necessary to read data from the Log Analytics workspace. You can do this easily from the Azure portal using CloudShell. In the example below, I’m creating a new service principal with the role “Log Analytics Reader” scoped just to the Log Analytics workspace where the AuditLogs are sent to.

$DisplayName = "MEM-Reporting"
$Role = "Log Analytics Reader"
$Scope = "/subscriptions/<subscriptionId>/resourcegroups/<resourcegroupname>/providers/microsoft.operationalinsights/workspaces/<workspacename>"

$sp = New-AzADServicePrincipal -DisplayName $DisplayName -Role $Role -Scope $Scope

With the service principal created, you’ll need to make a note of the ApplicationId:

$sp.ApplicationId

And the secret:

$SP.Secret | ConvertFrom-SecureString -AsPlainText

Of course, if you prefer you can use certificate authentication instead of using the secret key.

Below is a PowerShell script that uses the Az PowerShell module to connect to the log analytics workspace as the service principal, query the IntuneAuditLogs for entries in the last 24 hours, then send them in an HTML email report. Run it with your favourite automation tool.

You’ll need the app Id and secret from the service principal, your tenant Id, your log analytics workspace Id, and don’t forget to update the email parameters.

Sample email report
# Script to send a daily audit report for admin activities in MEM/Intune
# Requirements:
# – Log Analytics Workspace
# – Intune Audit Logs saved to workspace
# – Service Principal with 'Log Analytics reader' role in workspace
# – Azure Az PowerShell modules
# Azure resource info
$ApplicationId = "abc73938-0000-0000-0000-9b01316a9123" # Service Principal Application Id
$Secret = "489j49r-0000-0000-0000-e2dc6451123" # Service Principal Secret
$TenantID = "abc894e7-00000-0000-0000-320d0334b123" # Tenant ID
$LAWorkspaceID = "abcc1e47-0000-0000-0000-b7ce2b2bb123" # Log Analytics Workspace ID
$Timespan = (New-TimeSpan -Hours 24)
# Email params
$EmailParams = @{
    To = 'trevor.jones@smsagent.blog'
    From = 'MEMReporting@smsagent.blog'
    Smtpserver = 'smsagent.mail.protection.outlook.com'
    Port = 25
    Subject = "MEM Audit Report | $(Get-Date -Format ddMMMyyyy)"
}
# Html CSS style
$Style = @"
<style>
table {
    border-collapse: collapse;
    font-family: sans-serif;
    font-size: 12px;
}
td, th {
    border: 1px solid #ddd;
    padding: 6px;
}
th {
    padding-top: 8px;
    padding-bottom: 8px;
    text-align: left;
    background-color: #3700B3;
    color: #03DAC6;
}
</style>
"@
# Connect to Azure with Service Principal
$Creds = [PSCredential]::new($ApplicationId,(ConvertTo-SecureString $Secret -AsPlainText -Force))
Connect-AzAccount -ServicePrincipal -Credential $Creds -Tenant $TenantID
# Run the Log Analytics Query
$Query = "IntuneAuditLogs | sort by TimeGenerated desc"
$Results = Invoke-AzOperationalInsightsQuery -WorkspaceId $LAWorkspaceID -Query $Query -Timespan $Timespan
$ResultsArray = [System.Linq.Enumerable]::ToArray($Results.Results)
# Converts the results to a datatable
$DataTable = New-Object System.Data.DataTable
$Columns = @("Date","Initiated by (actor)","Application Name","Activity","Operation Status","Target Name","Target ObjectID")
foreach ($Column in $Columns)
{
    [void]$DataTable.Columns.Add($Column)
}
foreach ($Result in $ResultsArray)
{
    $Properties = $Result.Properties | ConvertFrom-Json
    [void]$DataTable.Rows.Add(
        $Properties.ActivityDate,
        $Result.Identity,
        $Properties.Actor.ApplicationName,
        $Result.OperationName,
        $Result.ResultType,
        $Properties.TargetDisplayNames[0],
        $Properties.TargetObjectIDs[0]
    )
}
# Send an email
If ($DataTable.Rows.Count -ge 1)
{
    $HTML = $DataTable |
        ConvertTo-Html -Property "Date","Initiated by (actor)","Application Name","Activity","Operation Status","Target Name","Target ObjectID" -Head $Style -Body "<h2>MEM Admin Activities in the last 24 hours</h2>" |
        Out-String
    Send-MailMessage @EmailParams -Body $HTML -BodyAsHtml
}

Forcing a Full Hardware Inventory Report to be Sent Immediately on a ConfigMgr Client

Sometimes you might want to force a ConfigMgr client to send a full hardware inventory report immediately, for whatever reason. Typically you would simply clean out the WMI instance for the InventoryAction and then trigger the schedule. But sometimes there may already be a scheduled action in the queue – for example, the hardware inventory cycle has been triggered on its normal schedule, but because it runs with randomization it doesn't execute immediately. In that case, you get a message in InventoryAgent.log that looks like this:

Inventory: Message [Type=InventoryAction, ActionID={00000000-0000-0000-0000-000000000001}, Report=Delta] already in queue. Message ignored.

It ignores your request if there’s already a request queued.

You can still force it to run immediately though by clearing the queue. To do this you can simply delete the InventoryAgent queue folder, but you can't do this while the SMS Agent Host service is running – you have to stop the service first.

Below is a script that will attempt to trigger a full HWI report and check InventoryAgent.log to see whether the request was ignored – if so, it clears the queue and tries again.

# Invoke a full (resync) HWI report
$Instance = Get-CimInstance -NameSpace ROOT\ccm\InvAgt -Query "SELECT * FROM InventoryActionStatus WHERE InventoryActionID='{00000000-0000-0000-0000-000000000001}'"
$Instance | Remove-CimInstance
Invoke-CimMethod -Namespace ROOT\ccm -ClassName SMS_Client -MethodName TriggerSchedule -Arguments @{ sScheduleID = "{00000000-0000-0000-0000-000000000001}"}
Start-Sleep -Seconds 5

# Check InventoryAgent log for ignored message
$Log = "$env:SystemRoot\CCM\Logs\InventoryAgent.Log"
$LogEntries = Select-String -Path $Log -SimpleMatch "{00000000-0000-0000-0000-000000000001}" | Select-Object -Last 1
If ($LogEntries -match "already in queue. Message ignored.")
{
    # Clear the message queue
    # WARNING: This restarts the SMS Agent host service
    Stop-Service -Name CcmExec -Force
    Remove-Item -Path C:\Windows\CCM\ServiceData\Messaging\EndpointQueues\InventoryAgent -Recurse -Force -Confirm:$false
    Start-Service -Name CcmExec

    # Invoke a full (resync) HWI report
    Start-Sleep -Seconds 5
    $Instance = Get-CimInstance -NameSpace ROOT\ccm\InvAgt -Query "SELECT * FROM InventoryActionStatus WHERE InventoryActionID='{00000000-0000-0000-0000-000000000001}'"
    $Instance | Remove-CimInstance
    Invoke-CimMethod -Namespace ROOT\ccm -ClassName SMS_Client -MethodName TriggerSchedule -Arguments @{ sScheduleID = "{00000000-0000-0000-0000-000000000001}"}
}

Collecting ConfigMgr Client Logs to Azure Storage

In the 2002 release of Endpoint Configuration Manager, Microsoft added a nice capability to collect log files from a client to the site server. Whilst this is a cool capability, you might not be on 2002 yet or you might prefer to send logs to a storage account in Azure rather than to the site server. You can do that quite easily using the Run Script feature. This works whether the client is connected on the corporate network or through a Cloud Management Gateway.

To do this you need a storage account in Azure, a container in the account, and a Shared access signature.

I’ll assume you have the first two in place, so let’s create a Shared access signature. In the Storage account in the Azure Portal, click on Shared access signature under Settings.

  • Under Allowed services, check Blob.
  • Under Allowed resource types, check Object.
  • Under Allowed permissions, check Create.

Set an expiry date then click Generate SAS and connection string. Copy the SAS token and keep it safe somewhere.
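If you prefer to script this part, the Az.Storage module can generate an equivalent create-only container SAS token. A sketch, assuming the account name, key and container name shown are placeholders (depending on your module version, the returned token may or may not include the leading "?"):

```powershell
# Generate a create-only SAS token for the container
# (placeholder account name, key and container name)
$Context = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"
$SASToken = New-AzStorageContainerSASToken -Context $Context -Name "mycontainer" `
    -Permission c -ExpiryTime (Get-Date).AddYears(10)
```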

Below is a PowerShell script that will upload client log files to Azure storage.

## Uploads client logs files to Azure storage
$Logs = Get-ChildItem "$env:SystemRoot\CCM\Logs"
$Date = Get-date -Format "yyyy-MM-dd-HH-mm-ss"
$ContainerURL = "https://mystorageaccount.blob.core.windows.net/mycontainer"
$FolderPath = "ClientLogFiles/$($env:COMPUTERNAME)/$Date"
$SASToken = "?sv=2019-10-10&ss=b&srt=o&sp=c&se=2030-05-01T06:31:59Z&st=2020-04-30T22:31:59Z&spr=https&sig=123456789abcdefg"
$Responses = New-Object System.Collections.ArrayList
$Stopwatch = New-object System.Diagnostics.Stopwatch
$Stopwatch.Start()
foreach ($Log in $Logs)
{
    $Body = Get-Content $Log.FullName -Raw
    $URI = "$ContainerURL/$FolderPath/$($Log.Name)$SASToken"
    $Headers = @{
        'x-ms-content-length' = $Log.Length
        'x-ms-blob-type' = 'BlockBlob'
    }
    $Response = Invoke-WebRequest -Uri $URI -Method PUT -Headers $Headers -Body $Body
    [void]$Responses.Add($Response)
}
$Stopwatch.Stop()
Write-Host "$(($Responses | Where {$_.StatusCode -eq 201}).Count) log files uploaded in $([Math]::Round($Stopwatch.Elapsed.TotalSeconds,2)) seconds."

Update the following parameters in your script:

  • ContainerURL. This is the URL to the container in your storage account. You can find it by clicking on the container, then Properties > URL.
  • SASToken. This is the SAS token string you created earlier.

Create and approve a new Script in ConfigMgr with this code. You can then run it against any online machine, or collection. When it’s complete, it will output how many log files were uploaded and how long the upload took.

To view the log files, you can either browse the container directly in the storage account in the Azure portal, or use Storage Explorer. My preferred method is the standalone Microsoft Azure Storage Explorer app, where you can simply double-click a log file to open it, or easily download the folder containing the log files to your local machine.
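If you'd rather pull the logs down with PowerShell instead, something like this sketch using the Az.Storage module would list and download a device's uploaded logs. The account name, key, container and computer name are placeholders, and note this needs a key or a SAS with read and list permissions – not the create-only token used for uploading:

```powershell
# List and download a device's uploaded logs (placeholder names)
$Context = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"
Get-AzStorageBlob -Container "mycontainer" -Context $Context -Prefix "ClientLogFiles/PC001/" |
    Get-AzStorageBlobContent -Destination "C:\Temp\" -Force
```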