Quality in: Laying the standards for PowerShell module development

Oh, are you back for more PowerShell module development already? That’s nice! In the first part, you created a scaffolding project called PoweringUp, installed some amazing extensions for VS Code, and had a peek at the PSScriptAnalyzer rules.

In this part, you will lay a foundation of standards for your PowerShell module, including:

  • Code formatting and layout standards
  • Looking at your own verb-noun pair
  • Building quality in with Pester tests

Get ready, as you’re going to dive deep right into the different formatting and layout styles.

Pssst… did you miss the first part? Find it here

Code formatting and layout standards

In every codebase, you might have rules about indentation, maximum line length and capitalization conventions. Experience has taught that it’s much easier to read and understand code when these standards have been set within the codebase, as you are not distracted by the details. While VS Code cannot enforce all of the rules, it can make life easier for you and your fellow contributors. Let’s take a look at where VS Code can help you out.

Set the brace style for PowerShell code

Each programming language uses a variety of brace and indent styles, but in the PowerShell community there are essentially three that you will see in popular codebases:

  • BSD/Allman style
  • K&R/OTBS
  • Stroustrup

The Allman style, named after Eric Allman, puts braces on their own lines, indented to the same level as the control statement. Let’s see what that looks like:

Function Get-FakeProcess
{
    if ($ProcessId)
    {
        Get-Process -Id $ProcessId
    }
}
In the early days, K&R (or its One True Brace Style variant) gained many adherents, as it was simply not possible to put most braces on a new line when typing them into the console, since it was expecting a single line of PowerShell code. So what does the OTBS variant look like? Let’s grab the same code from above with the OTBS variant applied:

Function Get-FakeProcess {
    param (
        $ProcessId
    )

    process {
        if ($ProcessId) {
            Get-Process -Id $ProcessId
        } else {
            Get-Process
        }
    }
}
Lastly, there is the Stroustrup style, named after Bjarne Stroustrup, who created it while writing Programming: Principles and Practice Using C++ and The C++ Programming Language.

Function Get-FakeProcess {

    process {
        if ($ProcessId) {
            Get-Process -Id $ProcessId
        }
        else {
            Get-Process
        }
    }
}

Stroustrup looks somewhat like a cross between Allman and OTBS.

Now, how can VS Code help you when you’re writing your PowerShell code? It is actually quite easy to set the formatting, and even to format the file when it is saved. You can add a settings.json file under the .vscode folder with the following lines to configure OTBS and auto format on save:

{
    "powershell.codeFormatting.preset": "OTBS",
    "editor.formatOnSave": true
}

You have now set up your workspace to follow the OTBS formatting style. You can always choose one of the other styles, but OTBS is the most commonly used. When you or your contributors introduce a new PowerShell script, it automatically gets formatted to this style when it’s saved. Let’s take a look at how you can use nouns to make your cmdlets more discoverable.
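If your project prefers one of the other styles instead, the same setting accepts those preset names too. For example, to use the Stroustrup style, the settings.json would look like this:

```json
{
    "powershell.codeFormatting.preset": "Stroustrup",
    "editor.formatOnSave": true
}
```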

Choosing your noun to pair with the approved verbs

In the first part of the series, you saw that PowerShell uses a list of approved verbs. PowerShell uses these verb-noun pairs for the cmdlets or functions you create. Using commonly known verbs is useful when you’re working with PowerShell, as you can retrieve the list of available commands with the Get-Command cmdlet. In the top PowerShell modules in the community, like the Az module or dbatools, you’ve probably noticed a noun prefix that normalizes the cmdlet names, so that all cmdlets use this prefix. Some examples would be:

  • Connect-AzAccount
  • Get-AzTag
  • Get-DbaBuild
  • Set-DbaAgentJobStep

Well, you get the point. Designing your own noun prefix helps in discovering your cmdlets once you’ve published them. The module that you’ve created is called PoweringUp, which already has a Public function available. Instead of calling that function Get-PendingReboot, you can make it Get-PUPendingReboot. Once you have installed the module, you can easily use Get-Command -Noun *PU* to list all the available cmdlets that contain PU in the name. It’s even more fun to make a boilerplate function code snippet to help out anyone contributing to the repository.
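Assuming the PoweringUp module is installed and imported, discovering its commands is then a one-liner, by noun prefix or by module name:

```powershell
# List every command whose noun starts with the PU prefix
Get-Command -Noun PU*

# Or list everything the module exports
Get-Command -Module PoweringUp
```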

  1. Create a new file called pu.code-snippets inside the .vscode folder
  2. Copy the following JSON into the file and save it

{
    "Get function for PU": {
        "scope": "powershell",
        "prefix": "getpu",
        "body": [
            "function Get-PU$0 {",
            "\tparam (",
            "\t)",
            "\tprocess {",
            "\t}",
            "}"
        ],
        "description": "Template for building a Get function with the PU prefix"
    }
}

  3. Create a test.ps1 file in the Public folder

When you start typing getpu in the test.ps1 file, you’ll notice that the code snippets are suggested:

Figure 1: Code snippet suggestion.

When you press enter, the snippet is added to the file, giving you a boilerplate function. That’s just cool!

Quality in, Quality out

You have already built a great set of standards and created an awesome developer experience. But what about the quality of the script that gets introduced to the repository? If you’ve already explored the module that was created by Sampler, you may have noticed that there was a QA folder inside the tests folder. If you ran the build script without any parameters, Pester tests were executed to validate your module, check whether your changelog was up-to-date and even test the new functions that were introduced. But what are these Pester tests actually?

Pester is the unit-testing and mocking framework for PowerShell, and it is used underneath the build script. You can create all kinds of tests: unit, integration, quality and even end-to-end tests. This post won’t cover them all, but you do have to fix up the module tests, since at the time of writing they were written for Pester v4, and there have been significant breaking changes from v4 to v5 of Pester. As you will see when fixing up the module tests, you will also need to write some tests for the functions that you’ve created. For now, remove module.tests.ps1 and re-create it empty. Let’s add the relevant tests back one by one.
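To give you a flavour of those breaking changes: Pester v5 removed the dash-less Should syntax and requires setup code to live in BeforeAll blocks. A minimal sketch (the test itself is illustrative):

```powershell
# Pester v4 still accepted the dash-less assertion syntax:
#   It 'adds numbers' { 1 + 1 | Should Be 2 }

# Pester v5 requires dashes on every Should operator:
Describe 'Addition' {
    It 'adds numbers' {
        1 + 1 | Should -Be 2
    }
}
```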

  1. In the module.tests.ps1 file, add the following content to check whether the changelog has been updated and is still compliant with the keepachangelog format

$here = Split-Path -Parent $MyInvocation.MyCommand.Path

# Convert-Path required for PS7 or Join-Path fails
$ProjectPath = "$here\..\.." | Convert-Path
$ProjectName = (Get-ChildItem $ProjectPath\*\*.psd1 | Where-Object {
        ($_.Directory.Name -match 'source|src' -or $_.Directory.Name -eq $_.BaseName) -and
        $(try { Test-ModuleManifest $_.FullName -ErrorAction Stop } catch { $false })
    }).BaseName

$SourcePath = (Get-ChildItem $ProjectPath\*\*.psd1 | Where-Object {
        ($_.Directory.Name -match 'source|src' -or $_.Directory.Name -eq $_.BaseName) -and
        $(try { Test-ModuleManifest $_.FullName -ErrorAction Stop } catch { $false })
    }).Directory.FullName

$mut = Import-Module -Name $ProjectName -ErrorAction Stop -PassThru -Force
$allModuleFunctions = &$mut { Get-Command -Module $args[0] -CommandType Function } $ProjectName # | Where-Object name -in $mut.ExportedCommands.Values.name

Describe 'Changelog Management' -Tag 'Changelog' {
    It 'Changelog has been updated' -Skip:(
        !([bool](Get-Command git -EA SilentlyContinue) -and
            [bool](&(Get-Process -Id $PID).Path -NoProfile -Command 'git rev-parse --is-inside-work-tree 2>$null'))
    ) {
        # Get the list of changed files compared with master
        $HeadCommit = &git rev-parse HEAD
        $MasterCommit = &git rev-parse origin/master
        $filesChanged = &git @('diff', "$MasterCommit...$HeadCommit", '--name-only')

        if ($HeadCommit -ne $MasterCommit) {
            # if we're not testing same commit (i.e. master..master)
            $filesChanged.Where{ (Split-Path $_ -Leaf) -match '^changelog' } | Should -Not -BeNullOrEmpty
        }
    }

    It 'Changelog format compliant with keepachangelog format' -Skip:(![bool](Get-Command git -EA SilentlyContinue)) {
        { Get-ChangelogData (Join-Path $ProjectPath 'CHANGELOG.md') -ErrorAction Stop } | Should -Not -Throw
    }
}

  2. In your terminal, run .\build.ps1 -Task test to execute the tests.
Figure 2: Executing test to check changelog.

If you see a warning that the changelog is not up-to-date, you might not be working in a repository that is saved in Git, or you might be on the master branch.

  3. Underneath the Changelog Management tests, add two tests to see if your module can be loaded and removed from the session

Describe 'General module control' -Tags 'FunctionalQuality' {

    It 'Imports without errors' {
        { Import-Module -Name $ProjectName -Force -ErrorAction Stop } | Should -Not -Throw
        Get-Module $ProjectName | Should -Not -BeNullOrEmpty
    }

    It 'Removes without error' {
        { Remove-Module -Name $ProjectName -ErrorAction Stop } | Should -Not -Throw
        Get-Module $ProjectName | Should -BeNullOrEmpty
    }
}
  4. Run PSScriptAnalyzer for errors only, and check that both of your functions have a unit test of their own.

Describe "Quality for <_>" -Tags 'TestQuality' -ForEach $allModuleFunctions {
    BeforeAll {
        if (Get-Command Invoke-ScriptAnalyzer -ErrorAction SilentlyContinue) {
            $scriptAnalyzerRules = Get-ScriptAnalyzerRule
        } else {
            if ($ErrorActionPreference -ne 'Stop') {
                Write-Warning "ScriptAnalyzer not found!"
            } else {
                Throw "ScriptAnalyzer not found!"
            }
        }
    }

    Context "<_>" {
        It "<_> has a unit test" {
            Get-ChildItem "tests\" -Recurse -Include "$_.Tests.ps1" | Should -Not -BeNullOrEmpty
        }

        if ($scriptAnalyzerRules) {
            It "Script Analyzer for $_" {
                $PSSAResult = (Invoke-ScriptAnalyzer -Path (Get-ChildItem -Path $SourcePath -Recurse -Include "$_.ps1").FullName -Severity Error)
                $Report = $PSSAResult | Format-Table -AutoSize | Out-String -Width 110
                $PSSAResult | Should -BeNullOrEmpty -Because "but got. $Report"
            }
        }
    }
}
  5. Lastly, the functions always need some documentation, so let’s include some tests for that as well.

Describe "Help for <_>" -Tags 'helpQuality' -ForEach $allModuleFunctions {
    BeforeAll {
        $functionFile = Get-ChildItem -Path $SourcePath -Recurse -Include "$_.ps1"
        $AbstractSyntaxTree = [System.Management.Automation.Language.Parser]::ParseInput((Get-Content -Raw $functionFile.FullName), [ref]$null, [ref]$null)
        $AstSearchDelegate = { $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] }
        $ParsedFunction = $AbstractSyntaxTree.FindAll($AstSearchDelegate, $true) | Where-Object Name -eq $_

        $FunctionHelp = $ParsedFunction.GetHelpContent()
    }

    It 'Has a SYNOPSIS' {
        $FunctionHelp.Synopsis | Should -Not -BeNullOrEmpty
    }

    It 'Has a Description, with length > 40' {
        $FunctionHelp.Description.Length | Should -BeGreaterThan 40
    }

    It 'Has at least 1 example' {
        $FunctionHelp.Examples.Count | Should -BeGreaterThan 0
        $FunctionHelp.Examples[0] | Should -Match ([regex]::Escape($_))
        $FunctionHelp.Examples[0].Length | Should -BeGreaterThan ($_.Length + 10)
    }

    $parameters = $ParsedFunction.Body.ParamBlock.Parameters.Name.VariablePath.ForEach{ $_.ToString() }
    foreach ($parameter in $parameters) {
        It "Has help for Parameter: $parameter" {
            $FunctionHelp.Parameters.($parameter.ToUpper()) | Should -Not -BeNullOrEmpty
            $FunctionHelp.Parameters.($parameter.ToUpper()).Length | Should -BeGreaterThan 25
        }
    }
}

Run the tests by executing .\build.ps1 -Task test once more to see the results.

Figure 3: Quality test, script analyzer test and help test results.

Yikes! Red is not good most of the time, but this was as expected. You have not introduced any help, nor any tests for the functions you’ve created.

Did you call for help?

Before starting to fix the tests, let’s talk about help. You should know by now that you can add comment-based help to your scripts; it is typically added just beneath the function name. VS Code helps you quite well when you start typing ## just underneath the function name.

Figure 4: Auto generated docs when ## is typed under the function name.

It’s nice to have the documentation in the script itself, but it also means that you’ll have to update it every time, which can lead to breaking changes or unnecessarily bumping up the version number. Therefore, the lovely folks at Microsoft have created an open source project called PlatyPS on GitHub. This project aims to make it easier to generate external help files. You still need to write the documentation once; it doesn’t magically appear on its own. Get started by adding comment-based help to both the Get-PUPendingReboot and LogWrite functions.
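As a sketch of what that comment-based help could look like for Get-PUPendingReboot (the wording and the ComputerName parameter are illustrative, not the module’s actual help):

```powershell
function Get-PUPendingReboot {
    <#
    .SYNOPSIS
        Checks whether a computer has a reboot pending.
    .DESCRIPTION
        Determines if the computer requires a reboot, for example
        after Windows updates have been installed.
    .PARAMETER ComputerName
        The name of the computer to check. Defaults to the local machine.
    .EXAMPLE
        Get-PUPendingReboot
        Returns the pending reboot state of the local machine.
    #>
    param (
        [string]$ComputerName = $env:COMPUTERNAME
    )
    # ... function body ...
}
```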

When you are finished writing the comment-based help, the Sampler module has two tasks that will create a docs folder and add the external help file in the output directory. Run the following in sequence:

  1. In your terminal, execute .\build.ps1 -Tasks Update_markdown_help_source, which will either create new markdown files or update existing ones
  2. Execute .\build.ps1 -Tasks Generate_MAML_from_markdown_help_source to create the external help file
Figure 5: Markdown and external help files generated by New-ExternalHelp and New-MarkdownHelp.

It’s now possible for you to modify the documentation outside the script itself, and optionally use the HelpURI argument in the CmdletBinding attribute when you’ve uploaded it to a repository.

Now that you’ve fixed the help, you can create a new folder inside the tests folder called ‘unit’ to separate both QA and Unit tests. Inside the folder, create the files shown below:

Figure 6: Unit test folder with Get-PUPendingReboot.Tests.ps1 and LogWrite.Tests.ps1.
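A minimal Get-PUPendingReboot.Tests.ps1 that satisfies the “has a unit test” check could look like this (a sketch; the dot-sourced path assumes the Sampler source layout, and the assertion is illustrative):

```powershell
BeforeAll {
    # Dot-source the function under test (path assumed from the Sampler layout)
    . $PSScriptRoot\..\..\source\Public\Get-PUPendingReboot.ps1
}

Describe 'Get-PUPendingReboot' {
    It 'Returns a result without throwing' {
        { Get-PUPendingReboot } | Should -Not -Throw
    }
}
```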

All lights should be green now (except the changelog one, for the sharp-eyed) when executing the build script.

Figure 7: All tests passed.

Awesome! You’ve got a great set of standards included in your repository now, which runs every time you build the module.


You’re nearing the point where you can publish your module internally, but as you can see, there is still a lot that needs to be added first. With this set of standards, you know for sure that you and your contributors know what to create and expect. By adding Pester tests to the repository, following brace-style standards, and creating a boilerplate function, you can make sure everyone adheres to these standards before publishing your precious module. Can you think of more boilerplate functions to help out your contributors? Bet you will! Looking forward to seeing you in the next and final part of this series.


About the author

Gijs Reijn
Cloud Engineer

Gijs Reijn is a DevOps Engineer at Rabobank’s ALM IT department. He primarily focuses on Azure DevOps and Azure, and loves to automate processes, including the standardization around them. Outside working hours, he can be found in the gym nearly every morning, writing his own blog to share knowledge with the community and reading up on new ideas.
