This guide uses the Chocolatey automation scripts as practical, real-world examples to teach PowerShell scripting concepts. Rather than learning from contrived examples, you will study code that solves actual problems — automating package management, detecting system state, handling errors gracefully, and building interactive tools for Windows. Each section builds on the previous one, taking you from fundamental concepts through to professional-level patterns.

How to use this guide: Read each section and study the code examples carefully. The examples are taken directly from the Chocolatey Scripts source code, so you can see how these patterns work in a production context. Try the exercises at the end of each section to reinforce what you have learned.

1. Basic Script Structure

Every well-written PowerShell script begins with a consistent structure that establishes the execution environment, documents the script's purpose, and sets up the foundations for reliable operation. Getting this foundation right is essential because it determines how reliably your script behaves across different systems and under different conditions.

The Requires Statement

The very first line of a professional PowerShell script is a #Requires statement. This special directive tells PowerShell to check prerequisites before running any of the script's code. If the requirements are not met, the script fails immediately with a clear error message rather than crashing partway through with a confusing error.

#Requires -Version 5.1

This single line ensures the script only runs on PowerShell 5.1 or later. Without it, a user running an older version of PowerShell would encounter cryptic errors when the script tries to use features that do not exist in their version. The #Requires statement catches this problem at the very start, before any code executes, and tells the user exactly what they need to fix.

You can also require administrator privileges, specific modules, or other PowerShell features:

#Requires -Version 5.1
#Requires -RunAsAdministrator

The -RunAsAdministrator requirement is particularly useful for Chocolatey scripts, which almost always need elevated privileges to install or update system-wide packages. Adding this requirement prevents the frustrating experience of running through half an installation only to have it fail because the script was not started with the right permissions.

Script Metadata (Comment-Based Help)

After the requires statement, professional scripts include a metadata block that describes the script's purpose, behavior, and requirements. PowerShell has a built-in system for this called Comment-Based Help, which lets users run Get-Help .\your-script.ps1 to see documentation without opening the script file.

<#
.SYNOPSIS
    Installs Chocolatey package manager for Windows.

.DESCRIPTION
    This script installs Chocolatey with proper configuration and runs
    health checks. Safe to run multiple times - skips installation if
    Chocolatey is already present on the system.

.NOTES
    Requires Administrator privileges.
    Logs to: $env:USERPROFILE\Logs\ChocolateyInstall.log
#>

This metadata is not just decoration. When someone encounters your script months or years later — possibly yourself — the .SYNOPSIS immediately communicates what the script does, the .DESCRIPTION provides behavioral details and safety information, and the .NOTES lists prerequisites and side effects. It takes seconds to write and saves significant time when maintaining or debugging scripts. Always include at least .SYNOPSIS, .DESCRIPTION, and .NOTES in every script you create.

Administrator Check

One of the most important patterns in Windows automation scripts is the administrator privilege check. Chocolatey installs system-wide software, which requires elevated permissions. Running without admin rights would fail partway through, potentially leaving things in a broken state.

function Test-Administrator {
    $currentUser = [Security.Principal.WindowsIdentity]::GetCurrent()
    $principal = New-Object Security.Principal.WindowsPrincipal($currentUser)
    return $principal.IsInRole(
        [Security.Principal.WindowsBuiltInRole]::Administrator
    )
}

if (-not (Test-Administrator)) {
    Write-Host "This script requires Administrator privileges." -ForegroundColor Red
    Write-Host "Right-click PowerShell and select 'Run as Administrator'." -ForegroundColor Yellow
    exit 1
}

This pattern uses the .NET security classes built into PowerShell to check whether the current session has administrator privileges. The function creates a WindowsPrincipal object from the current user's identity and then checks if that principal is in the Administrator role. This is the most reliable way to perform this check in PowerShell. If the check fails, the script exits with a clear, helpful error message rather than failing later with a confusing access denied error.

Exercise: Try running one of the Chocolatey Scripts without administrator privileges and observe the error messages. Then run it as Administrator and compare the behavior. Understanding the difference will help you debug permission-related issues in your own scripts.

2. Variables and Configuration

Variables are the foundation of any configurable script. PowerShell provides several categories of variables, each with different scopes and purposes. Understanding these categories and using the right type for each situation makes your scripts more robust and easier to maintain.

Automatic Variables

PowerShell provides many built-in variables that give you information about the current environment. These are called automatic variables because PowerShell sets and maintains them for you. You do not need to define them — they are always available.

# Environment variables — system and user settings
$env:USERPROFILE     # C:\Users\YourName
$env:COMPUTERNAME    # Your PC's hostname
$env:TEMP            # Temporary file directory

# Script-specific automatic variables
$PSScriptRoot        # Folder where the current script lives
$PSVersionTable      # PowerShell version information
$LASTEXITCODE        # Exit code from the last external command

The $PSScriptRoot variable is particularly important for scripts that need to reference other files relative to their own location. It always points to the directory containing the currently executing script, regardless of what the current working directory happens to be. This is the correct way to build file paths in PowerShell — never assume that the working directory is the same as the script's directory.
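For instance (settings.json here is just an illustrative file name), compare a path that depends on the caller's working directory with one anchored to the script:

```powershell
# Fragile: resolves relative to wherever the user happened to
# launch the script from, not the script's own folder
$settings = Get-Content ".\settings.json" -Raw

# Robust: always resolves relative to the script file itself
$settingsPath = Join-Path $PSScriptRoot "settings.json"
$settings = Get-Content $settingsPath -Raw
```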

Script Variables

Script variables are the values you define for your script's own use. They store configuration values, paths, counters, and other data that your script needs during execution.

# Configuration values
$WifiNetwork = "YOUR_WIFI_NAME"
$MaxRetries = 3
$RetryDelay = 300  # 5 minutes in seconds

# Paths — prefer Join-Path over string concatenation
$LogPath = Join-Path $env:USERPROFILE "Logs"
$LogFile = Join-Path $LogPath "AutoUpdateChocolatey.log"
$ConfigPath = Join-Path $PSScriptRoot "config.json"

Notice the use of Join-Path for constructing file paths. While simple string concatenation works most of the time, Join-Path correctly handles edge cases like trailing backslashes and is the idiomatic PowerShell way to build paths. It is especially important when combining paths from different sources, such as environment variables and relative paths.
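A quick illustration of one of those edge cases — a trailing backslash on the left-hand side:

```powershell
# Plain concatenation can leave a doubled separator
"$env:USERPROFILE\" + "\Logs"         # e.g. C:\Users\Name\\Logs

# Join-Path normalizes it
Join-Path "$env:USERPROFILE\" "Logs"  # e.g. C:\Users\Name\Logs
```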

Reading from a Config File

Hard-coding configuration values directly in scripts creates a maintenance burden — every time you need to change a setting, you have to edit the script itself. A better approach is to store settings in an external JSON file that the script reads at startup.

# Load settings from an external JSON file
$configPath = Join-Path $PSScriptRoot "config.json"
if (Test-Path $configPath) {
    $config = Get-Content $configPath -Raw | ConvertFrom-Json
    $WifiNetwork = $config.wifiNetwork
    $MaxRetries = $config.maxRetries
    Write-Host "[INFO] Loaded configuration from $configPath" -ForegroundColor Blue
} else {
    Write-Warning "No config.json found. Using defaults."
    Write-Warning "Copy config.example.json to config.json to customize."
}

This pattern follows a best practice from software engineering: keep configuration separate from code. The project ships with a config.example.json that contains safe placeholder values. Users copy it to config.json and customize it for their environment. The real config.json is listed in .gitignore so personal settings like WiFi names and email addresses are never accidentally committed to version control.

Exercise: Create a simple JSON config file with three settings (a string, a number, and a boolean). Write a PowerShell script that loads the file with Get-Content and ConvertFrom-Json, and prints each setting to the console.

3. Functions and Modularity

Functions are the building blocks of maintainable PowerShell scripts. By encapsulating logic into discrete, named units, you create code that is easier to read, test, debug, and reuse. In the Chocolatey Scripts project, helper functions are defined at the top of each script and handle recurring tasks like status output and logging.

Function Design Principles

Every function should follow the Single Responsibility Principle: it should do one thing and do it well. The function's name should clearly communicate its purpose, and its inputs should be well-defined using param() blocks.

# Good: Clear name, param block, single responsibility
function Write-Status {
    param([string]$Message)
    Write-Host "[INFO] $Message" -ForegroundColor Blue
    Add-Content -Path $LogFile -Value "[INFO] $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') $Message"
}

function Write-Success {
    param([string]$Message)
    Write-Host "[SUCCESS] $Message" -ForegroundColor Green
    Add-Content -Path $LogFile -Value "[SUCCESS] $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') $Message"
}

function Write-Error {
    param([string]$Message)
    Write-Host "[ERROR] $Message" -ForegroundColor Red
    Add-Content -Path $LogFile -Value "[ERROR] $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') $Message"
}

Notice several important patterns in these functions. Each function has a single, focused purpose: output a message at a specific severity level. They all use param() blocks with type declarations for their inputs, making it clear what data they expect. And they all follow the same dual-output pattern: writing to the console with color coding for the user, and writing to a log file with timestamps for later review. This consistency makes the code predictable and easy to understand. One caution: functions named Write-Error and Write-Warning shadow PowerShell's built-in cmdlets of the same names within the script. That is workable here because all output is meant to flow through the logging system, but keep it in mind — calls to these names inside the script no longer reach the built-in cmdlets or respect preference variables like $ErrorActionPreference.

Functions with Multiple Parameters

More complex functions accept multiple parameters and may include error handling. Here is a real-world example from the Chocolatey Scripts notification system:

function Send-EmailNotification {
    param(
        [string]$Subject,
        [string]$Body
    )

    try {
        $smtp = New-Object System.Net.Mail.SmtpClient($smtpServer, $smtpPort)
        $smtp.EnableSsl = $true
        $smtp.Credentials = New-Object System.Net.NetworkCredential(
            $emailAddress, $smtpPassword
        )

        $mail = New-Object System.Net.Mail.MailMessage
        $mail.From = $emailAddress
        $mail.To.Add($emailAddress)
        $mail.Subject = $Subject
        $mail.Body = $Body

        $smtp.Send($mail)
        Write-Success "Email notification sent: $Subject"
    }
    catch {
        Write-Error "Failed to send email: $($_.Exception.Message)"
    }
}

This function demonstrates several key concepts. The param() block defines named parameters with types, making the function self-documenting. The try/catch block wraps the risky network operation so that a failure to send email does not crash the entire script. And the error message includes $_.Exception.Message, which extracts the specific error from the caught exception — giving the user actionable information about what went wrong.

Key Design Concepts

  • Single Responsibility — Each function should do exactly one thing. If you find yourself adding "and" to a function's description, it probably needs to be split into two functions.
  • Named Parameters — Use param() blocks with descriptive names and type declarations. This makes your functions self-documenting and prevents type-related bugs.
  • Error Handling — Wrap risky operations (network calls, file I/O, external commands) in try/catch blocks. Never let an unhandled exception crash your script when a graceful recovery or error message is possible.
  • Consistent Output — Use helper functions like Write-Status and Write-Success for uniform messaging throughout your script. This makes the output predictable and easy to scan.

Exercise: Write a function called Test-FileNotEmpty that takes a file path as a parameter and returns $true if the file exists and is not empty, or $false otherwise. Use Test-Path and Get-Item in your implementation.

4. User Input and Interaction

Interactive scripts need to communicate clearly with the user and handle a wide variety of inputs gracefully. Users will type unexpected things, press Enter without typing anything, and make mistakes. A well-designed interactive function anticipates all of these scenarios and handles them without crashing or producing confusing results.

Colored Console Output

Color makes terminal output dramatically easier to read. The Chocolatey Scripts project uses PowerShell's -ForegroundColor parameter on Write-Host consistently throughout to signal different types of information.

# Color-coded messages help users scan output quickly
Write-Host "[INFO] Checking system requirements..." -ForegroundColor Blue
Write-Host "[SUCCESS] All checks passed" -ForegroundColor Green
Write-Host "[WARNING] Package is outdated" -ForegroundColor Yellow
Write-Host "[ERROR] Installation failed" -ForegroundColor Red

Unlike Bash, where you need to remember escape sequences for colors, PowerShell provides named colors as a parameter. The available colors include Blue, Green, Yellow, Red, Cyan, White, and DarkGray among others. Using consistent color coding across your scripts means users can quickly scan output and focus on the messages that matter most to them.
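To see the full palette your console supports, you can enumerate the System.ConsoleColor type that backs the -ForegroundColor parameter:

```powershell
# Print each named console color in its own color
foreach ($color in [Enum]::GetValues([System.ConsoleColor])) {
    Write-Host "This is $color" -ForegroundColor $color
}
```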

Interactive Prompts

PowerShell provides Read-Host for getting input from the user. Combined with simple validation logic, you can build interactive menus that guide users through multi-step processes.

# Simple yes/no prompt
$response = Read-Host "Do you want to continue? (Y/N)"
if ($response -eq 'Y') {
    Write-Status "Continuing with installation..."
} else {
    Write-Status "Installation cancelled by user."
    exit 0
}

# Menu-driven selection from the setup wizard
Write-Host ""
Write-Host "Select an option:" -ForegroundColor Cyan
Write-Host "  1. Install Chocolatey"
Write-Host "  2. Update all packages"
Write-Host "  3. Run health check"
Write-Host "  4. Set up scheduled tasks"
Write-Host ""
$choice = Read-Host "Enter your choice (1-4)"

switch ($choice) {
    '1' { .\install-chocolatey.ps1 }
    '2' { .\auto-update-chocolatey.ps1 }
    '3' { .\health-check.ps1 }
    '4' { .\setup-scheduled-tasks.ps1 }
    default { Write-Warning "Invalid choice. Please enter a number between 1 and 4." }
}

The switch statement is PowerShell's equivalent of a case statement, and it is cleaner than a chain of if/elseif blocks when handling multiple options. The default case catches any input that does not match a valid option, providing helpful feedback instead of silently doing nothing.

Learning Points

  • Color coding makes output scannable at a glance. Establish a consistent color scheme and use it throughout all your scripts.
  • Read-Host pauses the script and waits for user input. Always provide clear instructions about what input is expected.
  • Menus guide users through complex multi-step processes and reduce the chance of errors by constraining the available options.
  • Input validation catches mistakes early. Always include a default or fallback case that handles unexpected input gracefully.

Exercise: Create a menu function that presents 4 options to the user (Install, Update, Cleanup, Quit), validates that their selection is a number between 1 and 4, and loops until a valid selection is made. Use Write-Host with colors and a while loop.

5. Error Handling and Validation

Robust error handling separates amateur scripts from professional-grade automation. In a production environment, network connections drop, disks fill up, permissions change, and commands that worked yesterday fail today. Your script needs to anticipate these situations and respond intelligently rather than simply crashing.

Defensive Programming

Defensive programming means checking prerequisites before attempting operations. The Test-Prerequisites function examines the system environment at startup, identifies all issues, and reports them together rather than failing on the first problem found. This saves the user from a frustrating cycle of fixing one issue, rerunning the script, finding the next issue, and repeating.

function Test-Prerequisites {
    $errors = 0

    # Check PowerShell version
    if ($PSVersionTable.PSVersion -lt [version]'5.1') {
        Write-Error "PowerShell 5.1 or later is required"
        $errors++
    }

    # Check internet connectivity
    if (-not (Test-Connection -ComputerName "chocolatey.org" -Count 1 -Quiet)) {
        Write-Error "No internet connection detected"
        $errors++
    }

    # Check disk space (need at least 1 GB free)
    $drive = Get-PSDrive C
    $freeGB = [math]::Round($drive.Free / 1GB, 2)
    if ($freeGB -lt 1) {
        Write-Error "Only $freeGB GB free. Need at least 1 GB."
        $errors++
    }

    # Check admin privileges
    if (-not (Test-Administrator)) {
        Write-Error "Administrator privileges are required"
        $errors++
    }

    return $errors -eq 0
}

The error counter pattern is a key technique. Rather than stopping at the first failure, the function accumulates all errors and reports them at once. If a user has three problems — wrong PowerShell version, no internet, and low disk space — they see all three issues in one run and can fix them all before trying again. The function returns $true only when the error count is zero, making it easy to use in an if statement.
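Because the function returns a single boolean, the call site stays simple. A typical usage sketch:

```powershell
# Abort early if any prerequisite check failed
if (-not (Test-Prerequisites)) {
    Write-Host "Fix the issues above and run the script again." -ForegroundColor Yellow
    exit 1
}
```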

Retry Logic with Backoff

Network operations are inherently unreliable. A server might be temporarily overloaded, a DNS lookup might time out, or a connection might be briefly interrupted. Rather than failing immediately, professional scripts retry the operation, pausing between attempts to give transient problems time to clear.

function Test-WifiNetwork {
    $retryCount = 0

    while ($retryCount -lt $MaxRetries) {
        try {
            # Check if connected to the right network
            $wifi = netsh wlan show interfaces |
                    Select-String "^\s*SSID\s*:" |
                    ForEach-Object { ($_ -split ":\s*", 2)[1].Trim() }

            if ($wifi -eq $WifiNetwork) {
                Write-Success "Connected to $WifiNetwork"
                return $true
            }
            Write-Warning "Not connected to $WifiNetwork (currently on: $wifi)"
        }
        catch {
            Write-Warning "WiFi check failed: $($_.Exception.Message)"
        }

        $retryCount++
        if ($retryCount -lt $MaxRetries) {
            Write-Status "Retrying in $RetryDelay seconds (attempt $($retryCount + 1) of $MaxRetries)..."
            Start-Sleep -Seconds $RetryDelay
        }
    }

    Write-Error "Not connected to $WifiNetwork after $MaxRetries attempts"
    return $false
}

This function demonstrates several important patterns. The while loop with a counter limits the number of attempts, preventing infinite retries. The try/catch block inside the loop catches transient errors without breaking out of the retry logic. And the delay between retries gives temporary issues time to resolve — a WiFi network that dropped for a moment may be back up after the wait period.
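The function above waits a fixed $RetryDelay between attempts. A common refinement — not used in the original script, shown here only as a sketch — is exponential backoff, where the wait doubles after each failed attempt. Invoke-Check below is a hypothetical placeholder for whatever operation is being retried:

```powershell
# Sketch: exponential backoff around any boolean check.
# Invoke-Check is a placeholder, not a real cmdlet.
$delay = 30   # initial wait in seconds (illustrative)
$succeeded = $false
for ($attempt = 1; $attempt -le $MaxRetries; $attempt++) {
    if (Invoke-Check) { $succeeded = $true; break }
    if ($attempt -lt $MaxRetries) {
        Write-Status "Retry $($attempt + 1) of $MaxRetries in $delay seconds..."
        Start-Sleep -Seconds $delay
        $delay *= 2   # 30s, 60s, 120s, ...
    }
}
```

The doubling keeps early retries responsive while spacing out later ones, so a briefly unavailable resource is retried quickly but a longer outage does not hammer the network.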

Key Patterns

  • Check early, fail fast — Validate the environment before starting real work. It is much better to discover a problem before you have made any changes to the system.
  • Accumulate errors — Report all problems at once instead of stopping at the first one. This respects the user's time by letting them fix everything in one pass.
  • Retry transient failures — Network issues are often temporary. A short wait and retry often succeeds where an immediate failure would have been reported.
  • Use exit codes — Return $true/$false from validation functions and use exit 1 for script-level failures. This makes your scripts work well when called from other scripts or automation tools.

6. Logging and Debugging

A good logging system is your best friend when something goes wrong. It provides a complete record of what happened, when it happened, and in what order. The Chocolatey Scripts logging system writes to both the console (with colors for readability) and a file (with timestamps for debugging), ensuring you always have the information you need for troubleshooting.

Setting Up the Log Directory

Before writing any log entries, the script needs to ensure the log directory exists. This is a common bootstrapping step that should happen early in the script's execution.

# Create log directory if it doesn't exist
$LogPath = "$env:USERPROFILE\Logs"
if (-not (Test-Path $LogPath)) {
    New-Item -ItemType Directory -Path $LogPath -Force | Out-Null
}

$LogFile = "$LogPath\ChocolateyInstall.log"
Add-Content -Path $LogFile -Value "--- Session started at $(Get-Date) ---"

The | Out-Null suppresses the output of New-Item, which normally displays the created directory object. In a script, this output would clutter the console with information the user does not need. The -Force flag creates parent directories as needed and does nothing if the directory already exists, making this code safe to run multiple times.

Dual Output (Console + File)

The core of the logging system is the dual-output pattern: every message is written to both the console and a log file. The console version uses colors for readability, while the file version adds timestamps and strips formatting for clean, parseable log files.

function Write-Status {
    param([string]$Message)
    # Show in console with color for immediate feedback
    Write-Host "[INFO] $Message" -ForegroundColor Blue
    # Write to log file with timestamp for later review
    Add-Content -Path $LogFile -Value "[INFO] $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') $Message"
}

function Write-Success {
    param([string]$Message)
    Write-Host "[SUCCESS] $Message" -ForegroundColor Green
    Add-Content -Path $LogFile -Value "[SUCCESS] $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') $Message"
}

function Write-Warning {
    param([string]$Message)
    Write-Host "[WARNING] $Message" -ForegroundColor Yellow
    Add-Content -Path $LogFile -Value "[WARNING] $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') $Message"
}

function Write-Error {
    param([string]$Message)
    Write-Host "[ERROR] $Message" -ForegroundColor Red
    Add-Content -Path $LogFile -Value "[ERROR] $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') $Message"
}

The timestamp format yyyy-MM-dd HH:mm:ss follows ISO 8601 field ordering (most significant field first), which means log entries sort chronologically when sorted alphabetically and are unambiguous across locales. This is essential for debugging timing issues where you need to understand the exact sequence and duration of operations.

Learning Points

  • Always log to a file so you can review what happened after the fact. Console output scrolls away and is lost when the window closes.
  • Timestamps are essential for debugging timing issues, understanding operation duration, and correlating events across multiple log files.
  • Dual output keeps the user informed during execution while preserving a permanent, detailed record for later analysis.
  • Consistent formatting with severity prefixes like [INFO], [ERROR], and [SUCCESS] makes it easy to grep or Select-String for specific types of entries.
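As an example of that last point, pulling only the errors out of a log file is a one-liner with Select-String:

```powershell
# Show only ERROR entries, with one line of context around each
Select-String -Path "$env:USERPROFILE\Logs\ChocolateyInstall.log" `
    -Pattern '\[ERROR\]' -Context 1
```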

7. System Detection and Adaptation

Windows automation scripts need to be aware of their environment. A laptop on battery power should not start downloading large updates. A desktop without a WiFi adapter does not need WiFi checks. Scripts that adapt to system conditions are more reliable and user-friendly than scripts that assume a fixed environment.

Power Status Detection

One of the smartest features of the auto-update script is its ability to check whether the computer is plugged into AC power before starting potentially long download operations. This prevents the script from draining a laptop's battery during a lengthy update process.

function Test-ACPower {
    try {
        $battery = Get-CimInstance -ClassName Win32_Battery -ErrorAction Stop
        if ($battery) {
            # BatteryStatus 1 = discharging, 2 = on AC power;
            # most higher values indicate charging states, so
            # >= 2 serves as a practical "on AC" heuristic
            return $battery.BatteryStatus -ge 2
        }
        # No battery found = desktop computer = always on power
        return $true
    }
    catch {
        # Fallback for systems where CIM fails
        $battery = Get-WmiObject Win32_Battery -ErrorAction SilentlyContinue
        if ($battery) {
            return $battery.BatteryStatus -ge 2
        }
        # If we can't detect battery at all, assume AC power
        return $true
    }
}

This function demonstrates several important concepts. First, it uses Get-CimInstance, which is the modern replacement for Get-WmiObject in PowerShell. CIM is faster, more reliable, and is the recommended approach for new scripts. Second, it includes a fallback to Get-WmiObject for older systems that may not support CIM queries. Third, it handles the case where no battery is detected — desktop computers do not have batteries, so no battery means the machine is always on AC power. This kind of thorough thinking about edge cases is what makes production scripts reliable.

Network Detection

The WiFi detection function uses Windows' built-in netsh command to determine the currently connected wireless network. This allows the auto-update script to only run when the computer is connected to a trusted network.

function Test-WifiNetwork {
    # Get the current SSID from the wireless adapter
    $wifi = netsh wlan show interfaces |
            Select-String "^\s*SSID\s*:" |
            ForEach-Object { ($_ -split ":\s*", 2)[1].Trim() }

    if ($wifi -eq $WifiNetwork) {
        Write-Success "Connected to $WifiNetwork"
        return $true
    } else {
        Write-Warning "Not on expected network. Connected to: $wifi"
        return $false
    }
}

The parsing of netsh output uses Select-String (PowerShell's equivalent of grep) to find the line containing the SSID, then splits on the colon delimiter to extract just the network name. The Trim() call removes any leading or trailing whitespace. This approach is more reliable than trying to match the entire line, because the output format of netsh can vary slightly between Windows versions.

Concepts Demonstrated

  • CIM over WMI — Get-CimInstance is the modern replacement for Get-WmiObject. Use CIM for new scripts and WMI only as a fallback.
  • Graceful fallbacks — Try the modern approach first, fall back to legacy if needed. This maximizes compatibility across different Windows versions.
  • Environment awareness — Check power and network status before running long operations. Scripts that respect system conditions are more reliable and user-friendly.
  • Edge case handling — Desktop computers have no batteries. Wired connections have no SSIDs. Account for these variations in your detection logic.

8. Configuration Management

Configuration management is the practice of externalizing settings so they can be changed without modifying code. The Chocolatey Scripts project uses JSON configuration files to separate user preferences from script logic, making the scripts both customizable and maintainable.

JSON Configuration Files

JSON is the configuration format of choice for the Chocolatey Scripts project. It is human-readable, widely supported, and PowerShell has built-in cmdlets for parsing it. Here is the structure of the config file:

{
  "wifiNetwork": "YOUR_WIFI_NAME",
  "maxRetries": 3,
  "retryDelaySeconds": 300,
  "emailAddress": "[email protected]",
  "smtpServer": "smtp.gmail.com",
  "smtpPort": 587,
  "notifications": {
    "enableEmailNotifications": false,
    "enableToastNotifications": true,
    "notifyOnSuccess": true,
    "notifyOnError": true,
    "notifyOnWarning": false
  },
  "backupSettings": {
    "backupPath": "%USERPROFILE%\\Documents\\ChocolateyBackups",
    "autoBackupBeforeUpdate": true,
    "keepBackups": 5
  }
}

Loading and Using Configuration

The pattern for loading configuration is straightforward: check if the file exists, read it, parse the JSON, and apply the values. Always provide sensible defaults in case the config file is missing or a particular setting is not defined.

$configPath = Join-Path $PSScriptRoot "config.json"

if (Test-Path $configPath) {
    $config = Get-Content $configPath -Raw | ConvertFrom-Json
    Write-Status "Loaded configuration from $configPath"

    # Apply settings with fallback defaults
    $WifiNetwork = if ($config.wifiNetwork) { $config.wifiNetwork } else { "" }
    $MaxRetries = if ($config.maxRetries) { $config.maxRetries } else { 3 }
} else {
    Write-Warning "No config.json found. Using defaults."
    Write-Warning "Copy config.example.json to config.json to customize."
}

The -Raw parameter on Get-Content is important. Without it, PowerShell returns an array of lines rather than a single string, and ConvertFrom-Json needs a single string to parse correctly. This is a common source of confusing errors for PowerShell beginners — always use -Raw when reading JSON files.
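The same access pattern extends to nested settings. One extra subtlety, sketched below: JSON strings such as the backupPath in the example config contain %USERPROFILE%, and ConvertFrom-Json does not expand environment-variable tokens — you have to do that yourself:

```powershell
$config = Get-Content $configPath -Raw | ConvertFrom-Json

# Nested JSON objects become nested properties
$notifyOnError = $config.notifications.notifyOnError

# Expand %USERPROFILE% and similar tokens manually
$backupPath = [Environment]::ExpandEnvironmentVariables(
    $config.backupSettings.backupPath
)
```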

Best Practice

Ship a config.example.json with safe placeholder values. Users copy it to config.json and customize. The real config.json is git-ignored so personal settings stay private. This pattern is standard across professional software projects and prevents accidental exposure of sensitive information like email addresses and WiFi network names.

9. Process Management and Automation

One of the most powerful features of the Chocolatey Scripts project is its ability to schedule tasks for unattended execution. Windows Task Scheduler provides a robust framework for running scripts automatically, and PowerShell gives you full control over creating, managing, and monitoring these scheduled tasks.

Windows Scheduled Tasks

Creating a scheduled task in PowerShell involves defining three components: the action (what to run), the trigger (when to run it), and the settings (under what conditions). Here is how the auto-update task is configured:

# Define the action — what command to run
$action = New-ScheduledTaskAction `
    -Execute "powershell.exe" `
    -Argument "-File `"$PSScriptRoot\auto-update-chocolatey.ps1`""

# Define the trigger — when to run
$trigger = New-ScheduledTaskTrigger -Daily -At "3:00AM"

# Define the settings — under what conditions
$settings = New-ScheduledTaskSettingsSet `
    -RunOnlyIfNetworkAvailable `
    -StartWhenAvailable

# Register the task with Windows Task Scheduler
Register-ScheduledTask `
    -TaskName "Chocolatey Auto Update" `
    -Action $action `
    -Trigger $trigger `
    -Settings $settings `
    -RunLevel Highest

The -RunLevel Highest parameter ensures the task runs with administrator privileges, which Chocolatey requires for installing and updating packages. The -RunOnlyIfNetworkAvailable setting prevents the task from running when the computer is offline, and -StartWhenAvailable ensures the task runs at the next opportunity if the computer was off during the scheduled time.
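One practical note, shown as a sketch rather than the project's exact code: Register-ScheduledTask fails if a task with the same name already exists, so rerunnable setup scripts typically remove any earlier copy first:

```powershell
$taskName = "Chocolatey Auto Update"

# Remove an existing task of the same name so re-registration succeeds
if (Get-ScheduledTask -TaskName $taskName -ErrorAction SilentlyContinue) {
    Unregister-ScheduledTask -TaskName $taskName -Confirm:$false
}
```

Register-ScheduledTask also accepts a -Force switch to overwrite an existing task in place; either approach makes the registration idempotent.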

Idempotent Operations

A critical concept in automation is idempotency: the ability to run an operation multiple times without causing problems. Every script in this project checks the current state before making changes, ensuring it is safe to run repeatedly.

# Safe to run multiple times — checks before acting
if (Get-Command choco -ErrorAction SilentlyContinue) {
    Write-Success "Chocolatey is already installed (version $(choco --version))"
} else {
    Write-Status "Installing Chocolatey..."
    # installation logic here
}

The -ErrorAction SilentlyContinue parameter prevents Get-Command from throwing an error if choco is not found. Instead, it returns $null, which evaluates to $false in the if statement. This pattern is cleaner and more reliable than using try/catch for simple existence checks.

Idempotent scripts have several practical benefits: they can be safely resumed after interruptions, they do not duplicate installations, they detect existing state before making changes, and they can be run as part of automated pipelines without special handling for "already done" cases.
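Because this existence check appears in nearly every script, it is worth wrapping in a small helper. A minimal sketch (the name Test-CommandExists is an assumption for illustration, not a function from the project):

```powershell
function Test-CommandExists {
    param([string]$Name)

    # Get-Command returns nothing (no error) when the command is missing,
    # and [bool] turns that into $false; a CommandInfo object becomes $true.
    return [bool](Get-Command $Name -ErrorAction SilentlyContinue)
}
```

Usage: `if (Test-CommandExists 'choco') { ... }` reads more clearly at the call site than repeating the -ErrorAction incantation everywhere.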

10. Testing and Safety Checks

Testing is essential for maintaining confidence in your scripts as they evolve. The Chocolatey Scripts project includes a validation script that checks the entire environment before any changes are made, ensuring that prerequisites are met and the configuration is valid.

Validation Scripts

The validate-setup.ps1 script performs a comprehensive check of your environment. It verifies that all required components are in place, the configuration is valid, and the system meets minimum requirements.

function Test-Configuration {
    $configPath = Join-Path $PSScriptRoot "config.json"

    # Check that the config file exists
    if (-not (Test-Path $configPath)) {
        Write-Fail "config.json is missing"
        Write-Host "  Run: Copy-Item config.example.json config.json" -ForegroundColor DarkGray
        return $false
    }

    # Check that the config file contains valid JSON
    try {
        $config = Get-Content $configPath -Raw | ConvertFrom-Json
        Write-Pass "config.json has valid JSON syntax"
    }
    catch {
        Write-Fail "config.json has invalid JSON: $($_.Exception.Message)"
        return $false
    }

    # Check for placeholder values that need customizing
    if ($config.wifiNetwork -eq "YOUR_WIFI_NAME") {
        Write-Warn "WiFi network is still set to the placeholder value"
        Write-Host "  Update 'wifiNetwork' in config.json to your actual WiFi name" -ForegroundColor DarkGray
    }

    # Verify notification settings are consistent
    if ($config.notifications.enableEmailNotifications -and
        (-not $config.emailAddress -or $config.emailAddress -eq "[email protected]")) {
        Write-Warn "Email notifications are enabled but email address is not configured"
    }

    return $true
}

Notice how the validation function not only reports problems but also suggests solutions. When the config file is missing, it shows the exact command to create one. When a placeholder value is detected, it tells the user which specific setting needs to be updated. This kind of helpful error reporting transforms a frustrating debugging experience into a guided setup process.
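The validation function leans on Write-Pass, Write-Fail, and Write-Warn helpers for its color-coded output. Minimal sketches of what such helpers look like (the real project's prefixes and colors may differ):

```powershell
function Write-Pass {
    param([string]$Message)
    Write-Host "[PASS] $Message" -ForegroundColor Green
}

function Write-Fail {
    param([string]$Message)
    Write-Host "[FAIL] $Message" -ForegroundColor Red
}

function Write-Warn {
    param([string]$Message)
    Write-Host "[WARN] $Message" -ForegroundColor Yellow
}
```

Defining these once at the top of a script keeps every check's output consistent and makes a long validation run easy to scan.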

Exercise: Write a validation function that checks whether Chocolatey is installed and reports its version. Use Get-Command with -ErrorAction SilentlyContinue to check for the choco command, and choco --version to retrieve the version number.

11. Security and Safety Patterns

Security is a fundamental concern when writing automation scripts that run with administrator privileges. PowerShell provides several built-in mechanisms for managing script security, and following best practices helps protect both your system and your users.

Execution Policy Awareness

PowerShell's execution policy controls which scripts are allowed to run. A well-written script checks the current policy at startup and provides guidance if the policy is too restrictive.

# Check if scripts are allowed to run
$policy = Get-ExecutionPolicy
if ($policy -eq "Restricted") {
    Write-Warning "PowerShell execution policy is Restricted."
    Write-Warning "Scripts cannot run under this policy."
    Write-Host ""
    Write-Host "To fix this, run the following command as Administrator:" -ForegroundColor Cyan
    Write-Host "  Set-ExecutionPolicy RemoteSigned -Scope CurrentUser" -ForegroundColor White
    exit 1
}

Write-Status "Execution policy: $policy"

Checking the execution policy before doing anything else prevents the confusing scenario where a user tries to run a script and gets a generic "cannot be loaded" error. By detecting the issue and providing the exact command to fix it, you save the user from having to search the internet for a solution.
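A gentler suggestion than a persistent CurrentUser change is a process-scoped override, which affects only the current session and requires no administrator rights. A one-line sketch:

```powershell
# Relax the policy for this session only -- nothing persists after the
# window closes, and no administrator rights are needed.
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process -Force
```

Which suggestion fits depends on the audience: the Process scope is ideal for one-off runs, while RemoteSigned at CurrentUser scope suits users who will run the scripts regularly.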

Safe Package Installation

When installing packages through Chocolatey, wrapping each installation in error handling ensures that a single failed package does not prevent the rest from being installed.

function Install-ChocolateyPackage {
    param([string]$PackageName)

    try {
        Write-Status "Installing $PackageName..."
        choco install $PackageName -y --no-progress

        if ($LASTEXITCODE -eq 0) {
            Write-Success "$PackageName installed successfully"
        } else {
            Write-Error "$PackageName installation returned exit code $LASTEXITCODE"
        }
    }
    catch {
        Write-Error "Failed to install $PackageName : $($_.Exception.Message)"
    }
}

The $LASTEXITCODE variable captures the exit code from the most recent external command (in this case, choco). An exit code of 0 means success, while any non-zero value indicates a problem. This two-layer error handling — try/catch for PowerShell exceptions and $LASTEXITCODE for external command failures — covers both ways an installation can fail. Note also the space before the colon in the catch message: without it, PowerShell would parse $PackageName: as a scope-qualified variable rather than the variable followed by a literal colon.
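As written, the function logs outcomes but does not report them to its caller. A hypothetical variant that returns a Boolean lets a driver loop summarize the whole run (the package names below are illustrative, not the project's configured list):

```powershell
function Install-PackageChecked {
    param([string]$PackageName)
    try {
        choco install $PackageName -y --no-progress
        return ($LASTEXITCODE -eq 0)   # external command success check
    }
    catch {
        return $false                  # PowerShell-level failure
    }
}

$failed = @()
foreach ($package in @("git", "7zip", "notepadplusplus")) {
    if (-not (Install-PackageChecked -PackageName $package)) {
        $failed += $package
    }
}

if ($failed.Count -gt 0) {
    Write-Warning "Failed packages: $($failed -join ', ')"
}
```

Collecting failures instead of stopping at the first one preserves the "one bad package does not sink the batch" behavior while still surfacing every problem at the end.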

Security Tips

  • Review scripts before running — Never blindly execute code from the internet. Read through the script or use Get-Content .\script.ps1 | more to understand what it does.
  • Use -WhatIf where supported — Many PowerShell cmdlets support the -WhatIf parameter, which shows what would happen without actually making changes.
  • Run as admin only when necessary — Do not stay elevated for tasks that do not need it. Close the admin PowerShell window when you are done.
  • Keep Chocolatey updated — Run choco upgrade chocolatey regularly to get security patches and bug fixes for the package manager itself.
  • Verify package sources — Use choco info package-name to review package details before installing unfamiliar packages.

12. Documentation and Maintenance

Good documentation makes your scripts maintainable for the long term. PowerShell has a built-in help system that you should use extensively, along with inline comments that explain the reasoning behind your code decisions.

Comment-Based Help

PowerShell's Comment-Based Help system allows users to run Get-Help on your scripts and see structured documentation. This is the professional standard for PowerShell scripts and should be included in every script you create.

<#
.SYNOPSIS
    One-line description of what the script does.

.DESCRIPTION
    Detailed explanation of behavior, prerequisites, and side effects.
    Include information about what the script modifies on the system
    and any conditions that must be true before running.

.PARAMETER Name
    Description of each parameter the script accepts, including
    valid values, defaults, and any constraints.

.EXAMPLE
    .\install-chocolatey.ps1
    Installs Chocolatey with default settings.

.EXAMPLE
    .\install-chocolatey.ps1 -Verbose
    Installs Chocolatey with detailed progress output.

.NOTES
    Author:  CodeCraftedApps
    Version: 1.0.0
    Requires: PowerShell 5.1+, Administrator privileges
    Logs to: $env:USERPROFILE\Logs\ChocolateyInstall.log
#>

The .EXAMPLE sections are particularly valuable because they show users how to actually run the script. Include multiple examples that demonstrate different use cases and parameter combinations. Users can access these examples with Get-Help .\script.ps1 -Examples.

Inline Comments

Inline comments should explain why the code does something, not what it does. The code itself shows what is happening; the comment should explain the reasoning or context that is not obvious from the code alone.

# Good: Explain WHY, not WHAT
# BatteryStatus >= 2 means AC power (1 = discharging, 2+ = charging/plugged in)
return $battery.BatteryStatus -ge 2

# Bad: Just restating the code
# Check if status is greater than or equal to 2
return $battery.BatteryStatus -ge 2

The first comment adds information that you cannot derive from the code alone: the meaning of the battery status codes. The second comment simply restates what the code does, which is obvious to anyone who can read PowerShell. Good comments answer the question "why is this necessary?" or "what does this magic number mean?"

Maintenance Best Practices

  • Keep scripts focused — Each script should have one clear purpose. This makes them easier to understand, test, and debug.
  • Use consistent naming — Follow PowerShell's Verb-Noun naming convention for functions (e.g., Test-Administrator, Write-Status, Install-ChocolateyPackage).
  • Version your scripts — Include version numbers in your script metadata and update them when you make changes.
  • Use PSScriptAnalyzer — Run Invoke-ScriptAnalyzer -Path .\your-script.ps1 to check for best practice violations and common pitfalls.

Practical Exercises

The best way to learn PowerShell scripting is to write PowerShell scripts. These exercises are designed to reinforce the concepts covered in this guide. Each exercise builds on the patterns you have seen in the Chocolatey Scripts source code. Try to complete them on your own before looking at the hints.

Exercise 1: Directory Setup

Write a function called Ensure-Directory that takes a directory path as an argument. If the directory exists, it should log a success message and return. If it does not exist, it should create it (including any parent directories) and log what it did. The function should handle errors such as permission denied and return appropriate values.

function Ensure-Directory {
    param([string]$Path)

    # Your implementation here:
    # 1. Validate that $Path is not empty
    # 2. Check if the directory already exists (Test-Path)
    # 3. If not, create it with New-Item -ItemType Directory -Force
    # 4. Handle errors with try/catch
    # 5. Log the result using Write-Status or Write-Success
}

Exercise 2: Config Validation

Create a function called Test-ConfigFile that loads a JSON config file and checks for required keys. It should verify the file exists, contains valid JSON, and includes specific required settings.

function Test-ConfigFile {
    param([string]$Path)

    # Your implementation here:
    # 1. Check if the file exists (Test-Path)
    # 2. Read and parse JSON (Get-Content -Raw | ConvertFrom-Json)
    # 3. Verify required keys are present
    # 4. Check for placeholder values that need customizing
    # 5. Return $true if valid, $false otherwise
}

Exercise 3: Package Checker

Build a function called Get-PackageStatus that checks if a Chocolatey package is installed and reports its version. This exercise combines external command execution with output parsing.

function Get-PackageStatus {
    param([string]$PackageName)

    # Your implementation here:
    # 1. Check if Chocolatey is installed (Get-Command choco)
    # 2. Run 'choco list' ('choco list --local-only' on Chocolatey 1.x) to get installed packages
    # 3. Parse the output to find the specified package
    # 4. Report the package name and version if found
    # 5. Report "not installed" if not found
}

Testing your solutions: You can check your PowerShell syntax without executing a script by handing it to the language parser, [System.Management.Automation.Language.Parser]::ParseFile, which reports any syntax errors it finds without running a single statement. For deeper analysis, install PSScriptAnalyzer with Install-Module PSScriptAnalyzer and run Invoke-ScriptAnalyzer -Path .\your-script.ps1.
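One reliable way to check syntax without running a script is the parser that PowerShell itself uses, available since PowerShell 3.0. A minimal sketch wrapping it in a helper (the name Test-ScriptSyntax is an assumption for illustration):

```powershell
function Test-ScriptSyntax {
    param([string]$Path)

    $tokens = $null
    $errors = $null

    # ParseFile builds the AST without executing anything; syntax
    # problems come back in $errors rather than being thrown.
    [System.Management.Automation.Language.Parser]::ParseFile(
        $Path, [ref]$tokens, [ref]$errors) | Out-Null

    foreach ($e in $errors) {
        Write-Host "Line $($e.Extent.StartLineNumber): $($e.Message)" -ForegroundColor Red
    }

    return ($errors.Count -eq 0)
}
```

`Test-ScriptSyntax .\your-script.ps1` returns $true when the file parses cleanly and prints each syntax error with its line number when it does not.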

Next Steps and Resources

This guide has covered the core patterns used in professional PowerShell scripting: script structure and metadata, variables and configuration management, modular functions, interactive user input, defensive error handling, structured logging, system detection and adaptation, process automation with scheduled tasks, testing and validation, security best practices, and documentation standards. These are the same patterns used in production environments at organizations of all sizes.

To continue your learning journey, here are the recommended next steps:

  1. Study the actual scripts in the Chocolatey Scripts repository. Now that you understand the patterns, reading the full source code will reinforce your knowledge and show you how the pieces fit together in a complete application.
  2. Copy config.example.json to config.json and customize it for your environment. Experimenting with the configuration is a safe, low-risk way to learn how configuration-driven scripts work.
  3. Run validate-setup.ps1 to check your environment and see the validation patterns in action.
  4. Try modifying a script and test your changes. Start with small additions like adding a new package to the install list or changing a notification setting.
  5. Create your own scripts using the patterns from this guide. The best way to internalize these techniques is to use them to solve your own problems.
  6. Share improvements with the community by opening issues or pull requests on the GitHub repository.

Recommended Resources

These external resources complement this guide and provide deeper dives into specific topics:

  • PowerShell Documentation — Microsoft's official PowerShell documentation. Comprehensive and authoritative, this is the definitive reference for every PowerShell feature and cmdlet.
  • Chocolatey Documentation — The official Chocolatey docs for everything related to package management on Windows.
  • PSScriptAnalyzer — A static analysis tool for PowerShell scripts. It identifies bugs, portability issues, and style problems with detailed explanations and fix suggestions.
  • PowerShell Practice and Style Guide — Community-driven guidelines for writing clean, consistent, readable PowerShell code. A great reference for naming conventions, formatting, and design patterns.
  • Chocolatey Package Repository — Browse thousands of available packages to find software you can add to your automation scripts.

Remember: PowerShell scripting is a skill that improves with practice. Do not try to memorize everything at once. Instead, bookmark this guide and the resources above, write scripts to solve real problems, and refer back when you need a refresher on a specific technique. Every professional PowerShell scripter started where you are now.