Writing documentation as a champ in engineering teams

Documentation… the task that is always delayed until the end because we are always busy. Even the Agile Manifesto states “Working software over comprehensive documentation”. But no documentation at all is also a no-go. In this blog post, we’ll look at the philosophy of Documentation as Code (also known as Docs as Code).

There are a lot of benefits of documenting your software or tools:

  • It traces design decisions and details
  • It encourages knowledge sharing within your team
  • It cuts down on duplicated work
  • It makes onboarding much easier
  • It creates a single source of truth

In this blog, I’ll dive into the Markdown language, Visual Studio Code and Azure DevOps services, but won’t go into other documentation tools such as Word or Google Docs. If you want to follow along locally, you are also going to need Node.js and Docker installed.

The philosophy of Documentation as Code

The Documentation as Code approach hasn’t been around for very long, and plenty of talks in recent years have given the subject more attention. One of the co-founders of Write the Docs, Eric Holscher, gave a very interesting talk on the approach. But what does such an approach actually look like? Have a look at the process flow below.

Figure 1 showing the Documentation as Code approach, where the following flows are visible: 1. write the documentation, 2. commit the changes, 3. build docs and 4. deploy version
Figure 1: Documentation as Code approach.

For most DevOps engineers, this process should look quite familiar. It’s nearly identical to how you build and deploy your software, and that’s exactly what the philosophy of Docs as Code is all about. You can keep using the same tooling, the same issue tracking system, the same version control and the same code reviews for your documentation. This gives you an approach where you can:

  • Support agile practices
  • Create better documentation by writing the initial draft alongside the code
  • Block changes if there is no documentation with your software releases

Before you can treat your documentation as code, you need your docs in a format that can be easily versioned and transformed. As I already mentioned, I use Markdown: it’s a lightweight markup language that has been used within open-source communities for quite some time, and it stays appealing to human readers even in its source form. It’s also commonly used with static site generators, as it’s easily converted to HTML or other formats.
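To make that readability point concrete, here’s a small, self-invented sample (the link target is just a placeholder); even without rendering it, the structure is obvious:

```markdown
# Getting started

Markdown stays readable as plain text. It supports **bold**, *italics*,
[links](https://example.com) and lists:

- clone the repository
- install the dependencies
- run the build
```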

Take an example from the open-source community

In popular open-source codebases, you’ll find files like the README.md and CHANGELOG.md that have been written in Markdown. Some contributors even centralize their documentation in a docs folder, where they separate it by subject or topic together with the software or tool they’ve created. There are also codebases that are more decentralized from the software itself. One good example is the MicrosoftDocs, where Microsoft open-sourced their documentation. Take a look at the azure-devops-docs GitHub page.

Image showing documentation folder from Azure DevOps docs
Figure 2: Documentation folder from Azure DevOps docs.

But what makes good documentation? What factors do you have to take into consideration when writing the precious documentation that ends up in production? An obvious one is of course spell checking. But there’s a lot more to it. Once a piece of documentation grows large enough, you have to start thinking about quality and consistency. Common errors creep in, like:

  • Not using the same formatting
  • Forgetting to use subheadings
  • Introducing dead links

That’s why it’s a great idea to introduce a linter for your documentation. Let’s take a look at what options are available.
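Before reaching for a dedicated tool, it helps to see what a linter fundamentally does. The sketch below (the file and folder names are made up for the example) uses plain grep as a poor man’s lint to flag trailing whitespace, one of the formatting slips that creeps into Markdown files:

```shell
# Set up a sample docs folder with one Markdown file containing a violation
mkdir -p docs
printf '# Title\n\nSome text with trailing spaces   \n' > docs/example.md

# Report every Markdown line that ends in whitespace, with file and line number
grep -rnE ' +$' docs --include='*.md'
```

A real linter does exactly this kind of pattern matching, just with a curated rule set instead of a single regular expression.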

Can you give me a lint, please? Using a static analysis tool to scan your documentation

No, not the little red ribbon (lintje) that you get from the King or Queen. A linter specific to Markdown is called Markdownlint. Clever naming, right? Markdownlint is a static analysis tool with a library of rules to enforce standards and consistency in Markdown files. It’s available as a command-line interface, but there is also an extension for VS Code. When you have VS Code installed, you can search through the extensions and install it.

Image showing Markdownlint extension installed in VS Code
Figure 3: Markdownlint extension installed in VS Code.

When the extension is enabled, you’ll quickly notice the rules that it’s checking for.

Image showing violation of first line heading
Figure 4: Violation of first line heading.

This shows you at a glance, while you’re writing your Markdown files, which rules you might be violating. Of course, you can ignore specific rules in your Markdown files when you have your own standard. In the example above, you can simply add the snippet below above the heading to ignore rule MD041.

<!-- markdownlint-disable MD041 -->
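An inline comment works for a one-off exception, but if a rule clashes with your team’s standard everywhere, Markdownlint can also be configured repository-wide through a .markdownlint.json file at the project root. A minimal sketch that keeps all default rules but switches off MD041 might look like this:

```json
{
  "default": true,
  "MD041": false
}
```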

Applying spelling checker with cSpell

Did you just cast a cSpell with Harry Potter’s magic wand? Don’t worry, that’s not the case. cSpell is one of the most popular spelling checkers in the VS Code marketplace. When you search for the extension in your editor, you can see that it offers many dictionaries.

Screenshot showing cSpell extension in VS Code
Figure 5: cSpell extension in VS Code.

Writing your content in Markdown can get a bit tricky once you introduce code snippets: the spelling checker is applied to these blocks as well, as you can see in the image below.

Image showing how cSpell was triggered by the word 'Idonotexist', which is not a word
Figure 6: cSpell triggered as Idonotexist is not a word.
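If you only want to silence the checker for a single line, cSpell also understands in-document directives written as comments; for example (the sentence itself is just an illustration):

```markdown
<!-- cspell:disable-next-line -->
The word Idonotexist no longer gets flagged here.
```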

But of course, you can also add your own custom settings by introducing a file called .cspell.json in your repository. Include the following content to always ignore that word:

{
  "version": "0.2",
  "language": "en",
  "words": ["Idonotexist"]
}

Can you have it all?

Imagine that you’ve built up your docs folder with a bunch of Markdown files. Hell, let’s say hundreds of them! You don’t want to open them one by one to see which rules have been violated. Is there a way to dump out a report of all the violations? Glad you asked! MegaLinter is an open-source tool that analyzes the consistency of your code, in this case your Markdown files, and it also supports CI/CD workflows. Out of the box, it supports more than 51 languages, 23 formats and 21 tooling formats. Let’s see it in action! If you want to follow along, you need Node.js and Docker installed.

  1. Open a command-line within the root of your project
  2. Run npx mega-linter-runner --install
  3. Either answer the questions or introduce your own .mega-linter.yml file at the root

That’s all it takes to set up MegaLinter in your project repository. Depending on how you answered the questions during installation, you should end up with something like this:

# Configuration file for Mega-Linter
# See all available variables at https://nvuillam.github.io/mega-linter/configuration/ and in linters documentation

APPLY_FIXES: all # all, none, or list of linter keys
DEFAULT_BRANCH: main # Usually master or main
# ENABLE: # If you use ENABLE variable, all other languages/formats/tooling-formats will be disabled by default
# ENABLE_LINTERS: # If you use ENABLE_LINTERS variable, all other linters will be disabled by default
DISABLE:
  - COPYPASTE # Comment to enable checks of abusive copy-pastes
  # - SPELL # Uncomment to disable checks of spelling mistakes

This enables the Markdown and spelling linters for your docs folder. Now, when you run mega-linter-runner in your terminal, it pulls a Docker image from Docker Hub, starts the analysis of your Markdown files and dumps out a report.

Screenshot showing how MegaLinter is executing locally
Figure 7: Executing MegaLinter locally.

In your local repository, a megalinter-reports folder is dumped out where you can see all the violations, in this case the sub-heading error. As you can see, it’s quite easy to set up. And since it can also be run from the command line, implementing it in a common CI/CD workflow becomes quite easy as well.

Set up Documentation as Code process through Azure DevOps

Let’s look back at the Docs as Code process. You’ve been writing your documentation and it’s now time to get it reviewed and signed off by your co-workers. You’ve made sure that your main or master branch is protected and can only receive changes through a pull request. It’s also required that a build succeeds, so the linting report shows up inside the pull request. Let’s see how you can set this up with Azure DevOps. In this example, docs-as-code-example is the repository containing the documentation.

  1. Create a folder called cicd in the root of your repository
  2. Create azure-pipelines.yml inside the cicd folder
  3. Add the following content below to pull the MegaLinter image and run the linting checks
# Trigger none as build policy will run on PRs
trigger: none

# Run MegaLinter to detect linting and security issues
jobs:
  - job: MegaLinter
    pool:
      vmImage: ubuntu-latest
    steps:
      # Pull MegaLinter docker image
      - script: docker pull oxsecurity/megalinter-documentation:v6
        displayName: Pull MegaLinter image from Docker Hub

      # Run MegaLinter against the sources mounted in /tmp/lint
      - script: |
          docker run -v $(System.DefaultWorkingDirectory):/tmp/lint \
            -e GIT_AUTHORIZATION_BEARER=$(System.AccessToken) \
            -e CI=true \
            -e TF_BUILD=true \
            -e SYSTEM_ACCESSTOKEN=$(System.AccessToken) \
            -e SYSTEM_COLLECTIONURI=$(System.CollectionUri) \
            -e SYSTEM_PULLREQUEST_PULLREQUESTID=$(System.PullRequest.PullRequestId) \
            -e SYSTEM_TEAMPROJECT=$(System.TeamProject) \
            -e BUILD_BUILD_ID=$(Build.BuildId) \
            -e BUILD_REPOSITORY_ID=$(Build.Repository.ID) \
            oxsecurity/megalinter-documentation:v6
        displayName: Run Linters

      # Upload MegaLinter reports
      - task: PublishPipelineArtifact@1
        condition: succeededOrFailed()
        displayName: Upload MegaLinter reports
        inputs:
          targetPath: "$(System.DefaultWorkingDirectory)/megalinter-reports/"
          artifactName: MegaLinterReport
  4. Save the changes and commit them to the main or master branch
  5. Open Project Settings -> Repositories and select docs-as-code-example
  6. Click Policies -> Branch Policies on main or master
  7. Tick “Require a minimum number of reviewers” and set the minimum number of reviewers to at least 1
  8. Save the changes

Alright, your repository is now protected and only changes that flow through a pull request can update your main or master branch. Before adding the pipeline, make sure that the Project Collection Build Service account has the Contribute permission, as the $(System.AccessToken) is used.

If you want to know more about this special variable, you can take a look at the developer community.

Screenshot showing the field where you can allow permissions to contribute for the build service account
Figure 8: Permissions to contribute for the build service account.

When the permissions are set, you can import the pipeline file in the Azure Pipelines section. It’s now possible to include a Build Validation policy to validate pull request changes.

Figure 9: Build validate policy on master or main branch.

You can now make changes in a separate branch, commit them and start a pull request, which triggers the build. Below you can see that MegaLinter posts the results back inside the pull request, giving you a direct link to the build.

Figure 10: MegaLinter results from the build service account.

Isn’t it cool that you can apply the same principles on your documentation, like you would with your software code? Bet it is!


You’ve now had a quick glance at the Docs as Code philosophy, building a process flow that validates your documentation against rules and standards. You can still use the same tooling and principles as you do for your code. Can you think of more rules that might apply to your documentation? Either way, you can start writing high-quality documentation now!


About the author

Gijs Reijn
Cloud Engineer

Gijs Reijn is a DevOps Engineer at Rabobank’s ALM IT department. He primarily focuses on Azure DevOps and Azure, and loves to automate processes, including the standardization around them. Outside working hours, he can be found working out in the gym nearly every morning, writing his own blog to share knowledge with the community, and reading up on new ideas.
