Collecting ConfigMgr Client Logs to Azure Storage

In the 2002 release of Endpoint Configuration Manager, Microsoft added a nice capability to collect log files from a client to the site server. Whilst this is a cool capability, you might not be on 2002 yet or you might prefer to send logs to a storage account in Azure rather than to the site server. You can do that quite easily using the Run Script feature. This works whether the client is connected on the corporate network or through a Cloud Management Gateway.

To do this you need a storage account in Azure, a container in the account, and a Shared access signature.

I’ll assume you have the first two in place, so let’s create a Shared access signature. In the Storage account in the Azure Portal, click on Shared access signature under Settings.

  • Under Allowed services, check Blob.
  • Under Allowed resource types, check Object.
  • Under Allowed permissions, check Create.

Set an expiry date then click Generate SAS and connection string. Copy the SAS token and keep it safe somewhere.

Below is a PowerShell script that will upload client log files to Azure storage.

Update the following parameters in your script:

  • ContainerURL. This is the URL to the container in your storage account. You can find it by clicking on the container, then Properties > URL.
  • SASToken. This is the SAS token string you created earlier.
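
Here’s a minimal sketch of what that upload script can look like, assuming the default CCM log directory and a SAS token copied from the portal (which includes the leading question mark). It uses the ContainerURL and SASToken values described above and uploads the logs into a folder named after the computer.

$ContainerURL = "https://mystorageaccount.blob.core.windows.net/clientlogs"   # placeholder
$SASToken = "?sv=...&ss=b&srt=o&sp=c&sig=..."                                 # placeholder, as copied from the portal

$LogPath = "$env:SystemRoot\CCM\Logs"
$LogFiles = Get-ChildItem -Path $LogPath -Filter *.log -File
$Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
$Uploaded = 0
foreach ($File in $LogFiles)
{
    # Put Blob REST call; the SAS token in the URL authorizes the upload
    $BlobURL = "$ContainerURL/$($env:COMPUTERNAME)/$($File.Name)$SASToken"
    $Headers = @{ 'x-ms-blob-type' = 'BlockBlob' }
    try
    {
        Invoke-WebRequest -Uri $BlobURL -Method Put -Headers $Headers -InFile $File.FullName -UseBasicParsing | Out-Null
        $Uploaded++
    }
    catch
    {
        Write-Warning "Failed to upload $($File.Name): $_"
    }
}
$Stopwatch.Stop()
Write-Output "Uploaded $Uploaded log files in $([math]::Round($Stopwatch.Elapsed.TotalSeconds,1)) seconds"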

Create and approve a new Script in ConfigMgr with this code. You can then run it against any online machine or collection. When it’s complete, it will output how many log files were uploaded and how long the upload took.

To view the log files, you can either browse the container directly in the storage account in the Azure portal, or use Storage Explorer. My preferred method is the standalone Microsoft Azure Storage Explorer app, where you can simply double-click a log file to open it, or easily download the folder containing the log files to your local machine.

Installing and Configuring Additional Languages during Windows Autopilot

I was experimenting with different ways to get additional languages installed and configured during Windows Autopilot and it proved to be an interesting challenge. The following is what I settled on in the end and what produced the results that I wanted.

Here were my particular requirements, but you can customize these to your own needs:

  • The primary language should be English (United Kingdom)
  • An additional secondary language of English (United States)
  • Display language should be English (United Kingdom)
  • Default input override should be English (United Kingdom)
  • System locale should be English (United Kingdom)
  • The administrative defaults for the Welcome screen and New user accounts must have a display language, input language, format and location matching the primary language (UK / UK English)
  • All optional features for the primary language should be installed (handwriting, optical character recognition, etc.)

To achieve this, I basically created three elements:

  1. Installed the Local Experience Pack for English (United Kingdom)
  2. Deployed a PowerShell script running in administrative context that sets the administrative language defaults and system locale
  3. Deployed a PowerShell script running in user context that sets the correct order in the user preferred languages list

This was deployed during Autopilot to a Windows 10 1909 (United States) base image.

Local Experience Packs

Local Experience Packs (LXPs) are the modern way to go for installing additional languages since Windows 10 1803. These are published to the Microsoft Store and are automatically updated. They also install more quickly than the traditional cab language packs that you would install with DISM.

LXPs are available in the Microsoft Store for Business, so they can be synced with Intune and deployed as apps. However, the problem with using LXPs as apps during Autopilot is the order of things: the LXP needs to be installed before the PowerShell script that configures the language defaults runs, but because PowerShell scripts are not currently tracked in the ESP and apps are the last thing to install in the device setup phase, the scripts will very likely run before the app is installed.

To get around that, I decided to get the LXP from the Volume Licensing Service Center instead. Then I uploaded it to a storage account in Azure, where it gets downloaded and installed by the PowerShell script. This way I can control the order and be sure the LXP is installed before making configuration changes.

When downloading from the VLSC, be sure to select the Multilanguage option.

Then download the Local Experience Pack ISO; the 1903 LXPs also work for 1909.

Get the applicable appx file and the license file from the ISO, zip them, and upload the zip file into an Azure Storage account.

When uploading the zip file, be sure to choose the Account Key authentication type:

Once uploaded, click on the blob and go to the Generate SAS page. Choose Read permissions, set an appropriate expiry date, then copy the Blob SAS URL. You will need this to download the file with PowerShell.

Administrative PowerShell Script

Now let’s create a PowerShell script that will:

  • Download and install the Local Experience Pack
  • Install any optional features for the language
  • Configure language and regional settings and defaults

Here’s the script I’m using for that.
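
Here’s a condensed sketch of the key parts of that script. The blob URL, zip and appx file names are placeholders for your own, and the XML reflects the requirements above (en-GB primary, en-US secondary, keyboard layouts 0809:00000809 and 0409:00000409, GeoID 242 for the UK).

# Locale IDs, keyboard layouts and home location (en-GB primary, en-US secondary)
$PrimaryLanguage = "en-GB"
$SecondaryLanguage = "en-US"
$PrimaryInputCode = "0809:00000809"
$SecondaryInputCode = "0409:00000409"
$PrimaryGeoID = 242  # United Kingdom

# Allow sideloading of appx/msix packages (needed on older Windows 10 builds)
$RegKey = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\AppModelUnlock"
If (-not (Test-Path $RegKey)) { New-Item -Path $RegKey -Force | Out-Null }
New-ItemProperty -Path $RegKey -Name AllowAllTrustedApps -Value 1 -PropertyType DWord -Force | Out-Null

# Download and extract the LXP zip from the Azure blob (URL and file names are placeholders)
$BlobSASURL = "https://mystorageaccount.blob.core.windows.net/lxp/en-GB.zip?sv=...&sig=..."
$ZipPath = "$env:TEMP\en-GB.zip"
Invoke-WebRequest -Uri $BlobSASURL -OutFile $ZipPath -UseBasicParsing
Expand-Archive -Path $ZipPath -DestinationPath "$env:TEMP\LXP" -Force

# Install the Local Experience Pack together with its license file
Add-AppxProvisionedPackage -Online -PackagePath "$env:TEMP\LXP\LanguageExperiencePack.en-GB.Neutral.appx" -LicensePath "$env:TEMP\LXP\License.xml"

# Install any optional features for the primary language that aren't already installed
Get-WindowsCapability -Online |
    Where-Object { $_.Name -match "^Language\..*~$PrimaryLanguage~" -and $_.State -ne "Installed" } |
    Add-WindowsCapability -Online

# Language and locale defaults, including the Welcome screen and new user accounts
$XML = @"
<gs:GlobalizationServices xmlns:gs="urn:longhornGlobalizationUnattend">
    <gs:UserList>
        <gs:User UserID="Current" CopySettingsToDefaultUserAcct="true" CopySettingsToSystemAcct="true"/>
    </gs:UserList>
    <gs:UserLocale>
        <gs:Locale Name="$PrimaryLanguage" SetAsCurrent="true" ResetAllSettings="true"/>
    </gs:UserLocale>
    <gs:InputPreferences>
        <gs:InputLanguageID Action="add" ID="$PrimaryInputCode" Default="true"/>
        <gs:InputLanguageID Action="add" ID="$SecondaryInputCode"/>
    </gs:InputPreferences>
    <gs:SystemLocale Name="$PrimaryLanguage"/>
    <gs:MUILanguagePreferences>
        <gs:MUILanguage Value="$PrimaryLanguage"/>
        <gs:MUIFallback Value="$SecondaryLanguage"/>
    </gs:MUILanguagePreferences>
    <gs:LocationPreferences>
        <gs:GeoID Value="$PrimaryGeoID"/>
    </gs:LocationPreferences>
</gs:GlobalizationServices>
"@

# Save the XML and apply it with the international settings control panel
$XmlPath = "$env:TEMP\LanguageDefaults.xml"
$XML | Out-File -FilePath $XmlPath -Force
& "$env:SystemRoot\System32\control.exe" "intl.cpl,,/f:`"$XmlPath`""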

A quick walkthrough:

First, I’ve entered the locale IDs for the primary and secondary languages, as well as the keyboard layout hex codes, and finally the Geo location ID for the primary language as variables.

Then we set a registry key to allow sideloading (required on older Windows 10 versions to install appx/msix packages).

Next we download and install the LXP. You’ll need to enter the URL you copied earlier for the Azure blob, and update the zip filename as required, as well as the LXP filename.

Then we install any optional features for the primary language that aren’t already installed.

Then we define the content of an XML file that will be used to set the language and locale preferences. Obviously customize that per your requirement.

Then we save that content to a file and apply it.

Create the PowerShell script in Intune, make sure you don’t run it using the logged-on credentials, and deploy it to your Autopilot AAD group.

User PowerShell Script

Now we need to create a very simple script that will run in the user context. This script simply makes sure that the list of preferred languages is in the correct order, since by default the language of the base image (en-US) will still be listed first.
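
A minimal sketch of such a user-context script, assuming en-GB should come first and en-US second:

# Reorder the preferred languages so the primary language (en-GB) is first
$LanguageList = New-WinUserLanguageList -Language "en-GB"
$LanguageList.Add("en-US")
Set-WinUserLanguageList -LanguageList $LanguageList -Force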

This script will run for each user that logs in. It won’t run immediately, so the order may be wrong when you first log in, but it doesn’t take long before it runs. Create the script in Intune, remember to run it using the logged-on credentials, and deploy it to your Autopilot AAD group.

The Result

After running the Autopilot deployment and logging in, everything checks out 🙂

Managing Intune PowerShell Scripts with Microsoft Graph

In this blog I’ll cover how to list, get, create, update, delete and assign PowerShell scripts in Intune using Microsoft Graph and PowerShell.

Although you can use the Invoke-WebRequest or Invoke-RestMethod cmdlets when working with MS Graph, I prefer to use the Microsoft.Graph.Intune module, aka the Intune PowerShell SDK, as it handles getting an auth token more gracefully and we don’t have to create any headers ourselves, so get that module installed.
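
If you don’t have it yet, it can be installed from the PowerShell Gallery, for example:

Install-Module -Name Microsoft.Graph.Intune -Scope CurrentUser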

In the Graph API, PowerShell scripts live under the deviceManagementScript resource type and these are still only available in the beta schema so they are subject to change.

Connect to MS Graph

First off, let’s connect to MS Graph and set the schema to beta:

If ((Get-MSGraphEnvironment).SchemaVersion -ne "beta")
{
    $null = Update-MSGraphEnvironment -SchemaVersion beta
}
$Graph = Connect-MSGraph

List PowerShell Scripts

Now we can list the PowerShell scripts we have in Intune:

$URI = "deviceManagement/deviceManagementScripts"
$IntuneScripts = Invoke-MSGraphRequest -HttpMethod GET -Url $URI
If ($IntuneScripts.value)
{
    $IntuneScripts = $IntuneScripts.value
}

If we take a look at the results, we’ll see that the script content is not included when we list scripts. It is included when we get a single script, as we’ll see next.

Get a PowerShell Script

To get a specific script, we need to know its Id. To get that, first let’s create a simple function where we can pass a script name and use the Get method to retrieve the script details.

Function Get-IntunePowerShellScript {
    Param($ScriptName)
    $URI = "deviceManagement/deviceManagementScripts" 
    $IntuneScripts = Invoke-MSGraphRequest -HttpMethod GET -Url $URI
    If ($IntuneScripts.value)
    {
        $IntuneScripts = $IntuneScripts.value
    }
    $IntuneScript = $IntuneScripts | Where {$_.displayName -eq "$ScriptName"}
    Return $IntuneScript
}

Now we can use this function to get the script Id and then call Get again adding the script Id to the URL:

$ScriptName = "Escrow Bitlocker Recovery Keys to AAD"
$Script = Get-IntunePowerShellScript -ScriptName $ScriptName
$URI = "deviceManagement/deviceManagementScripts/$($Script.id)"
$IntuneScript = Invoke-MSGraphRequest -HttpMethod GET -Url $URI

If we look at the result, we can see that the script content is now returned, albeit Base64-encoded:

View Script Content

To view the script, we simply need to convert it:

$Bytes = [Convert]::FromBase64String($IntuneScript.scriptContent)
[System.Text.Encoding]::UTF8.GetString($Bytes)

Create a Script

Now let’s create a new script. To create a script, we read in a script file and convert it to Base64, then add this together with the other required parameters into some JSON before posting the request.

When reading and converting the script content, use UTF-8. Other character sets may not decode properly at run time on the client side and can result in script execution failure.

$ScriptPath = "C:\temp"
$ScriptName = "Escrow-BitlockerRecoveryKeys.ps1"
$Params = @{
    ScriptName = $ScriptName
    ScriptContent = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes((Get-Content -Path "$ScriptPath\$ScriptName" -Raw -Encoding UTF8)))
    DisplayName = "Escrow Bitlocker Recovery Keys"
    Description = "Backup Bitlocker Recovery key for OS volume to AAD"
    RunAsAccount = "system" # or user
    EnforceSignatureCheck = "false"
    RunAs32Bit = "false"
}
$Json = @"
{
    "@odata.type": "#microsoft.graph.deviceManagementScript",
    "displayName": "$($params.DisplayName)",
    "description": "$($Params.Description)",
    "scriptContent": "$($Params.ScriptContent)",
    "runAsAccount": "$($Params.RunAsAccount)",
    "enforceSignatureCheck": $($Params.EnforceSignatureCheck),
    "fileName": "$($Params.ScriptName)",
    "runAs32Bit": $($Params.RunAs32Bit)
}
"@
$URI = "deviceManagement/deviceManagementScripts"
$Response = Invoke-MSGraphRequest -HttpMethod POST -Url $URI -Content $Json

We can now see our script in the portal:

Update a Script

To update an existing script, we follow a similar process to creating a new one: we create some JSON containing the updated parameters, then call the PATCH method to apply it. But first we need to get the Id of the script we want to update, using our previously created function:

$ScriptName = "Escrow Bitlocker Recovery Keys"
$IntuneScript = Get-IntunePowerShellScript -ScriptName $ScriptName

In this example I have updated the content in the source script file, so I need to read it in again, as well as updating the description of the script:

$ScriptPath = "C:\temp"
$ScriptName = "Escrow-BitlockerRecoveryKeys.ps1"
$Params = @{
    ScriptName = $ScriptName
    ScriptContent = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes((Get-Content -Path "$ScriptPath\$ScriptName" -Raw -Encoding UTF8)))
    DisplayName = "Escrow Bitlocker Recovery Keys"
    Description = "Backup Bitlocker Recovery key for OS volume to AAD (Updated 2020-03-19)"
    RunAsAccount = "system"
    EnforceSignatureCheck = "false"
    RunAs32Bit = "false"
}
$Json = @"
{
    "@odata.type": "#microsoft.graph.deviceManagementScript",
    "displayName": "$($params.DisplayName)",
    "description": "$($Params.Description)",
    "scriptContent": "$($Params.ScriptContent)",
    "runAsAccount": "$($Params.RunAsAccount)",
    "enforceSignatureCheck": $($Params.EnforceSignatureCheck),
    "fileName": "$($Params.ScriptName)",
    "runAs32Bit": $($Params.RunAs32Bit)
}
"@
$URI = "deviceManagement/deviceManagementScripts/$($IntuneScript.id)"
$Response = Invoke-MSGraphRequest -HttpMethod PATCH -Url $URI -Content $Json

We can call Get on the script again and check the lastModifiedDateTime entry to verify that the script was updated, or check in the portal.

Add an Assignment

Before the script will execute anywhere, it needs to be assigned to a group. To do that, we need the objectId of the AAD group we want to assign it to. To work with AAD groups I prefer to use the AzureAD module, so install that before continuing.
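
If needed, that module can also be installed from the PowerShell Gallery:

Install-Module -Name AzureAD -Scope CurrentUser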

We need to again get the script that we want to assign:

$ScriptName = "Escrow Bitlocker Recovery Keys"
$IntuneScript = Get-IntunePowerShellScript -ScriptName $ScriptName

Then get the Azure AD group:

$AzureAD = Connect-AzureAD -AccountId $Graph.UPN
$GroupName = "Intune - [Test] Bitlocker Key Escrow"
$Group = Get-AzureADGroup -SearchString $GroupName

Then we prepare the necessary JSON and post the assignment:

$Json = @"
{
    "deviceManagementScriptGroupAssignments": [
        {
          "@odata.type": "#microsoft.graph.deviceManagementScriptGroupAssignment",
          "id": "$($IntuneScript.Id)",
          "targetGroupId": "$($Group.ObjectId)"
        }
      ]
}
"@
$URI = "deviceManagement/deviceManagementScripts/$($IntuneScript.Id)/assign"
Invoke-MSGraphRequest -HttpMethod POST -Url $URI -Content $Json

To replace the current assignment with a new assignment, simply change the group name and run the same code again. To add an additional assignment or multiple assignments, you’ll need to post all the assignments at the same time, for example:

$GroupNameA = "Intune - [Test] Bitlocker Key Escrow"
$GroupNameB = "Intune - [Test] Autopilot SelfDeploying Provisioning"
$GroupA = Get-AzureADGroup -SearchString $GroupNameA
$GroupB = Get-AzureADGroup -SearchString $GroupNameB

$Json = @"
{
    "deviceManagementScriptGroupAssignments": [
        {
          "@odata.type": "#microsoft.graph.deviceManagementScriptGroupAssignment",
          "id": "$($IntuneScript.Id)",
          "targetGroupId": "$($GroupA.ObjectId)"
        },
        {
          "@odata.type": "#microsoft.graph.deviceManagementScriptGroupAssignment",
          "id": "$($IntuneScript.Id)",
          "targetGroupId": "$($GroupB.ObjectId)"
        }
      ]
}
"@
$URI = "deviceManagement/deviceManagementScripts/$($IntuneScript.Id)/assign"
Invoke-MSGraphRequest -HttpMethod POST -Url $URI -Content $Json

Delete an Assignment

I haven’t yet figured out how to delete an assignment – the current documentation appears to be incorrect. If you can figure this out please let me know!

Delete a Script

To delete a script, we simply get the script Id and call the Delete method on it:

$ScriptName = "Escrow Bitlocker Recovery Keys"
$IntuneScript = Get-IntunePowerShellScript -ScriptName $ScriptName
$URI = "deviceManagement/deviceManagementScripts/$($IntuneScript.Id)"
Invoke-MSGraphRequest -HttpMethod DELETE -Url $URI 

Delete Device Records in AD / AAD / Intune / Autopilot / ConfigMgr with PowerShell

I’ve done a lot of testing with Windows Autopilot in recent times. Most of my tests are done in virtual machines, which are ideal as I can simply dispose of them afterwards. But you also need to clean up the device records that were created in Azure Active Directory, Intune, the Autopilot registration service, Microsoft Endpoint Configuration Manager (if you’re using it) and Active Directory in the case of Hybrid-joined devices.

To make this a bit easier, I wrote the following PowerShell script. You simply enter the device name and it’ll go and search for that device in any of the above locations that you specify and delete the device records.

The script assumes you have the appropriate permissions, and requires the Microsoft.Graph.Intune and AzureAD PowerShell modules, as well as the Configuration Manager module if you want to delete from there.

You can delete from all of the above locations with the -All switch, or you can specify any combination, for example -AAD -Intune -ConfigMgr, or -AD -Intune etc.

In the case of the Autopilot device registration, the device must also exist in Intune before you attempt to delete it as the Intune record is used to determine the serial number of the device.
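
To give an idea of the calls involved, here’s a heavily simplified sketch of the per-location deletions for a single device. It assumes the ActiveDirectory, AzureAD, Microsoft.Graph.Intune and ConfigurationManager modules are loaded and connected; the AD and ConfigMgr cmdlets shown are just one way of doing it, and the full script adds the parameter switches, error handling and output.

$ComputerName = "PC01"

# Active Directory (hybrid-joined devices)
Get-ADComputer -Identity $ComputerName | Remove-ADObject -Recursive -Confirm:$false

# Azure Active Directory
$AADDevice = Get-AzureADDevice -SearchString $ComputerName
If ($AADDevice) { Remove-AzureADDevice -ObjectId $AADDevice.ObjectId }

# Intune, then Autopilot (the serial number comes from the Intune record)
$URI = "deviceManagement/managedDevices?`$filter=deviceName eq '$ComputerName'"
$IntuneDevice = (Invoke-MSGraphRequest -HttpMethod GET -Url $URI).value
If ($IntuneDevice)
{
    $URI = "deviceManagement/windowsAutopilotDeviceIdentities?`$filter=contains(serialNumber,'$($IntuneDevice.serialNumber)')"
    $APDevice = (Invoke-MSGraphRequest -HttpMethod GET -Url $URI).value
    If ($APDevice)
    {
        Invoke-MSGraphRequest -HttpMethod DELETE -Url "deviceManagement/windowsAutopilotDeviceIdentities/$($APDevice.id)"
    }
    Invoke-MSGraphRequest -HttpMethod DELETE -Url "deviceManagement/managedDevices/$($IntuneDevice.id)"
}

# ConfigMgr (run from a ConfigMgr PS drive, e.g. ABC:\)
Get-CMDevice -Name $ComputerName | Remove-CMDevice -Force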

Please test thoroughly before using on any production device!

Examples

Delete-AutopilotedDeviceRecords -ComputerName PC01 -All
@(
    'PC01'
    'PC02'
    'PC03'
) | foreach {
    Delete-AutopilotedDeviceRecords -ComputerName $_ -AAD -Intune
}

Get Program Execution History from a ConfigMgr Client with PowerShell

Have you ever been in the situation where something unexpected happens on a user’s computer and people start pointing their fingers at the ConfigMgr admin, asking “has anyone deployed something with SCCM?” Well, I decided to write a PowerShell script to retrieve the execution history for ConfigMgr programs on a local or remote client. This gives clear visibility of when and which deployments such as applications/programs/task sequences have run on the client, and will hopefully acquit you (or prove you guilty!)

Program execution history can be found in the registry but it doesn’t contain the name of the associated package, so I joined that data with software distribution data from WMI to give a better view.
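
In outline, the core of the approach looks something like this (local machine only; the full script adds remote computer support and tidier output):

# Execution history is stored in the registry per package, but without the package name
# (on some clients this key may sit under SOFTWARE\Wow6432Node instead)
$HistoryKey = "HKLM:\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\System"

# Advertised packages from the machine policy in WMI give us the friendly names
$Packages = Get-WmiObject -Namespace ROOT\ccm\Policy\Machine\ActualConfig -Class CCM_SoftwareDistribution

Get-ChildItem -Path $HistoryKey | ForEach-Object {
    $PackageID = $_.PSChildName
    $PackageName = ($Packages | Where-Object { $_.PKG_PackageID -eq $PackageID } | Select-Object -First 1).PKG_Name
    Get-ChildItem -Path $_.PSPath | ForEach-Object {
        $Execution = Get-ItemProperty -Path $_.PSPath
        [pscustomobject]@{
            PackageID    = $PackageID
            PackageName  = $PackageName
            ProgramID    = $Execution._ProgramID
            RunStartTime = $Execution._RunStartTime
            State        = $Execution._State
            ExitCode     = $Execution.SuccessOrFailureCode
        }
    }
}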

You can run the script against the local machine, or a remote machine if you have PS remoting enabled. You can also run it against multiple machines at the same time and combine the data if desired. I recommend piping the results to a grid view.

Get-CMClientExecutionHistory -Computername PC001,PC002 | Out-GridView

Get Previous and Scheduled Evaluation Times for ConfigMgr Compliance Baselines with PowerShell

I was testing a compliance baseline recently and wanted to verify if the schedule defined in the baseline deployment is actually honored on the client. I set the schedule to run every hour, but it was clear that it did not run every hour and that some randomization was being used.

To review the most recent evaluation times and the next scheduled evaluation time, I had to read the scheduler.log in the CCM\Logs directory, because I could only find a single last evaluation time recorded in WMI.

The following PowerShell script reads which baselines are currently deployed to the local machine, displays a window for you to choose one, then basically reads the Scheduler log to find when the most recent evaluations were and when the next one is scheduled.
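
In outline, it looks something like this; the schedule ID used to search the log is a placeholder here, and working out which Scheduler.log entries belong to the selected baseline’s deployment is the bulk of the full script.

# List the baselines deployed to this machine and pick one to inspect
$Baselines = Get-WmiObject -Namespace ROOT\ccm\dcm -Class SMS_DesiredConfiguration
$Baseline = $Baselines | Select-Object DisplayName, Version, LastComplianceStatus, LastEvalTime |
    Out-GridView -Title "Select a baseline" -OutputMode Single

# Scheduler.log records each activation of the evaluation schedule
$ScheduleID = "<schedule ID for the selected baseline's deployment>"   # placeholder
Select-String -Path "$env:SystemRoot\CCM\Logs\Scheduler.log" -Pattern $ScheduleID -SimpleMatch |
    Select-Object -ExpandProperty Line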

Screenshots: selecting a baseline, and the baseline evaluation times.

[Unsupported] Getting / triggering ConfigMgr Client Programs using Software Center Code

An odd title perhaps, but I recently had a requirement to retrieve the deadline for a deployed task sequence on the client side in the user context using PowerShell. You can find this info in WMI, using the CCM_Program class of the ROOT\ccm\ClientSDK namespace. Problem is, standard users do not have access to that.

I tried deploying a script in SYSTEM context to get the deadline from WMI and stamp it to a registry location where it could be read in the user context; however, curiously, the CCM_Program class is not accessible in SYSTEM context. A quick Google search assured me I was not alone scratching my head over that one.

I found a way to do it using a Software Center dll, which I’m sure is not supported, but it works at least. Run the following PowerShell code as the logged-on user to find the deadline for a deployed program (could be a classic package/program or task sequence).

$PackageID = "ABC0012B"
Add-Type -Path $env:windir\CCM\SCClient.data.dll
$Connector = [Microsoft.SoftwareCenter.Client.Data.ClientConnectionFactory]::CreateDataConnector()
$Package = $Connector.AllProgramApplications | where {$_.PackageId -eq $PackageID}
$Connector.Dispose()
If ($Package)
{
    $Deadline = Get-Date $Package.DeadlineDisplayValue
}

You can do some other nice things with that Software Center data connector class, for example, trigger a task sequence to run. But you didn’t hear that from me 😉

$PackageID = "ABC0012B"
Add-Type -Path $env:windir\CCM\SCClient.data.dll
$Connector = [Microsoft.SoftwareCenter.Client.Data.ClientConnectionFactory]::CreateDataConnector()
$Package = $Connector.AllProgramApplications | where {$_.PackageId -eq $PackageID}
$Connector.InstallApplication($Package,$false,$false)
$Connector.Dispose()

Setting the Computer Description During Windows Autopilot

I’ve been getting to grips with Windows Autopilot recently and, having a long history working with SCCM, I’ve found it hard not to compare it with the power of traditional OSD using a task sequence. In fact, one of my goals was to basically try to reproduce what I’m doing in OSD with Autopilot in order to end up with the same result – and it’s been a challenge.

I like the general concept of Autopilot and don’t get me wrong – it’s getting better all the time – but it still has its shortcomings that require a bit of creativity to work around. One of the things I do during OSD is to set the computer description in AD. That’s fairly easy to do in a task sequence; you can just script it and run the step using credentials that have the permission to make that change.

In Autopilot however (hybrid AAD join scenario), although you can run PowerShell scripts too, they will only run in SYSTEM context during the Autopilot process. That means you either need to give computer accounts the permission to change their own properties in AD, or you have to find a way to run that code using alternate credentials. You can run scripts in the context of the logged-on user, but I don’t want to do that – in fact I disable the user ESP – I want to use a specific account that has those permissions.

You could use SCCM to do it post-deployment if you are co-managing the device, but ideally I want everything to be native to Autopilot where possible, and move away from the hybrid mentality of do what you can with Intune, and use SCCM for the rest.

It is possible to execute code in another user context from SYSTEM context, but when making changes in AD the DirectoryEntry operation kept erroring with “An operations error occurred”. After researching, I realized it is due to AD not accepting the authentication token as it’s being passed a second time and not directly. I tried creating a separate PowerShell process, a background job, a runspace with specific credentials – nothing would play ball. Anyway, I found a way to get around that by using the AccountManagement .NET class, which allows you to create a context using specific credentials.

In this example, I’m setting the computer description based on the model and serial number of the device. You need to provide the username and password for the account you will perform the AD operation with. I’ve put the password in clear text in this example, but in the real world we store the credentials in an Azure Key Vault and load them in dynamically at runtime with some POSH code to avoid storing them in the script. I hope in the future we will be able to run PowerShell scripts with Intune in a specific user context, as you can with steps in an SCCM task sequence.

# Set credentials
$ADAccount = "mydomain\myADaccount"
$ADPassword = 'Pa$$w0rd' # use single quotes so that '$$' is not expanded as a variable

# Set initial description
$Model = Get-WMIObject -Class Win32_ComputerSystem -Property Model -ErrorAction Stop | Select -ExpandProperty Model
$SerialNumber = Get-WMIObject -Class Win32_BIOS -Property SerialNumber -ErrorAction Stop | Select -ExpandProperty SerialNumber
$Description = "$Model - $SerialNumber"

# Set some type accelerators
Add-Type -AssemblyName System.DirectoryServices.AccountManagement -ErrorAction Stop
$Accelerators = [PowerShell].Assembly.GetType("System.Management.Automation.TypeAccelerators")
$Accelerators::Add("PrincipalContext",[System.DirectoryServices.AccountManagement.PrincipalContext])
$Accelerators::Add("ContextType",[System.DirectoryServices.AccountManagement.ContextType])
$Accelerators::Add("Principal",[System.DirectoryServices.AccountManagement.ComputerPrincipal])
$Accelerators::Add("IdentityType",[System.DirectoryServices.AccountManagement.IdentityType])

# Connect to AD and set the computer description
$Domain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
$PrincipalContext = [PrincipalContext]::new([ContextType]::Domain,$Domain,$ADAccount,$ADPassword)
$Account = [Principal]::FindByIdentity($PrincipalContext,[IdentityType]::Name,$env:COMPUTERNAME)
$LDAPObject = $Account.GetUnderlyingObject()
If ($LDAPObject.Properties["description"][0])
{
    $LDAPObject.Properties["description"][0] = $Description
}
Else
{
    [void]$LDAPObject.Properties["description"].Add($Description)
}
$LDAPObject.CommitChanges()
$Account.Dispose()

Windows 10 Upgrade Splash Screen – Take 2

Recently I tweeted a picture of the custom Windows 10-style splash screen I’m using in an implementation of Windows as a Service with SCCM (aka in-place upgrade), and a couple of people asked for the code, so here it is!

A while ago I blogged about a custom splash screen I created to use during the Windows 10 upgrade process. Since then, I’ve seen some modifications of it out there, including that of Gary Blok, who added the Windows Setup percent complete, which I quite liked. So I made a few changes to the original code as follows:

  • Added a progress bar and percentage for the Windows Setup percent complete
  • Added a timer so the user knows how long the upgrade has been running
  • Prevented the monitors from going to sleep while the splash screen is displayed
  • Added a simple way to close the splash screen in a failure scenario by setting a task sequence variable
  • Re-wrote the WPF part into XAML code

Another change is that I call the script with ServiceUI.exe from the MDT toolkit instead of via Invoke-PSScriptasUser.ps1, as this version needs to read task sequence variables so must run in the same context as the task sequence.

I haven’t added things like looping the text, or adding TS step names as I prefer not to do that, but check out Gary’s blog if you want to know how.

To use this version, download the files from my Github repo. Make sure you download the v2 edition. Grab ServiceUI.exe from an MDT installation and add it at the top level (use the x64 version of ServiceUI.exe if you are deploying a 64-bit OS). Package these files into a package in SCCM – no program needed.

To call the splash screen, add a Run Command Line step to your upgrade task sequence and call the main script via Service UI, referencing the package:

ServiceUI.exe -process:Explorer.exe %SYSTEMROOT%\System32\WindowsPowershell\v1.0\powershell.exe -NoProfile -WindowStyle Hidden -ExecutionPolicy Bypass -File "Show-OSUpgradeBackground.ps1"

To close the screen in a failure scenario, I add 3 steps as follows:

The first step kills the splash screen simply by setting the task sequence variable QuitSplashing to True. The splash screen code will check for this variable and initiate closure of the window when set to True.
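
For reference, setting that variable from PowerShell looks like this (a built-in Set Task Sequence Variable step does the same thing):

# Set the task sequence variable that the splash screen script watches for
$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$TSEnv.Value("QuitSplashing") = "True"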

The second step just runs a PowerShell script to wait 5 seconds for the splash screen to close.
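
That one is as simple as:

Start-Sleep -Seconds 5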

The last step restores the taskbar to the screen.

For that step, run the following PowerShell code:

# Thanks to https://stackoverflow.com/questions/25499393/make-my-wpf-application-full-screen-cover-taskbar-and-title-bar-of-window
$Source = @"
using System;
using System.Runtime.InteropServices;

public class Taskbar
{
    [DllImport("user32.dll")]
    private static extern int FindWindow(string className, string windowText);
    [DllImport("user32.dll")]
    private static extern int ShowWindow(int hwnd, int command);

    private const int SW_HIDE = 0;
    private const int SW_SHOW = 1;

    protected static int Handle
    {
        get
        {
            return FindWindow("Shell_TrayWnd", "");
        }
    }

    private Taskbar()
    {
        // hide ctor
    }

    public static void Show()
    {
        ShowWindow(Handle, SW_SHOW);
    }

    public static void Hide()
    {
        ShowWindow(Handle, SW_HIDE);
    }
}
"@
Add-Type -ReferencedAssemblies 'System', 'System.Runtime.InteropServices' -TypeDefinition $Source -Language CSharp

# Restore the taskbar
[Taskbar]::Show()

HTML Report for SCCM Site Component Warnings and Errors

Just a quick one 🙂

If you’re like me, you are too busy (or too lazy) to regularly check the component status of an SCCM Site Server for any issues, so why not get PowerShell to do it for you?

The code below will email an HTML-formatted report of any site components that are currently in an error or warning status, together with the last few error or warning status messages for each component. Run it as a scheduled task or with your favorite automation tool to keep an eye on any current issues. Whether you get annoyed because you’ve now created more work for yourself, or get happy because you can stay on top of issues in your SCCM environment, I leave to you!

The report will display the components that are marked as either critical or warning with the current number of messages:

It will then display the last x status messages for each component for a quick view of what the current issues are:

Run the script either on the site server or somewhere the SCCM console is installed, and set the required parameters in the script.
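
Here’s a rough sketch of the idea, with placeholder site code, server names and mail settings; the full script also includes the recent status messages for each component and builds a richer report.

# Site-specific values (placeholders)
$SiteCode = "ABC"
$ProviderServer = "sccmserver.contoso.com"
$Namespace = "ROOT\SMS\Site_$SiteCode"

# Components not currently in an OK state (Status 0)
$Components = Get-WmiObject -ComputerName $ProviderServer -Namespace $Namespace -Class SMS_ComponentSummarizer -Filter "Status > 0" |
    Select-Object MachineName, ComponentName, Status, Errors, Warnings, Infos -Unique

# Build a simple HTML report and mail it
$Style = "<style>table,th,td {border: 1px solid; border-collapse: collapse; padding: 4px;}</style>"
$Body = $Components | ConvertTo-Html -Head $Style -PreContent "<h2>Site component warnings and errors</h2>" | Out-String
Send-MailMessage -SmtpServer "smtp.contoso.com" -From "sccm-reports@contoso.com" -To "admins@contoso.com" -Subject "SCCM site component status" -Body $Body -BodyAsHtml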