Code coverage for Swift Package Manager based apps

Learn how to process test results and improve code quality by getting code coverage reports. Guest post by Tibor Bödecs.

How to test using SPM?

The Swift Package Manager allows you to create standalone Swift applications on both Linux and macOS. You can build and run these apps, and you can write unit tests for your codebase. Xcode ships with the XCTest framework, but you may not know that it is an open-source library, available on every platform where you can install Swift. This also means that you can use the exact same assertion methods you're used to from iOS to unit test your SPM package. 📦

Let me show you how to make a brand new project using the Swift Package Manager:


mkdir "myProject" && cd $_

# this command creates a library package
swift package init

# alternatively you can create an executable
swift package init --type=executable

Both the library and the executable templates contain a sample test file with a dummy test case. You can run tests in many ways: there is built-in support for parallel execution (you can even specify the number of workers), and you can filter what to run by test target or test case, or evaluate just a single test. ✅


# run all the tests
swift test

# list available tests
swift test -l   #or `swift test --list-tests`

# run all the tests in parallel
swift test --parallel

# specify the number of parallel test workers
swift test --parallel --num-workers 2

# run test cases matching regular expression
# format: <test-target>.<test-case> or <test-target>.<test-case>/<test>
swift test --filter myProjectTests.myProjectTests

The test result is going to look somewhat like this:


Test Suite 'All tests' started at 2020-01-16 16:58:23.584
Test Suite 'myProjectPackageTests.xctest' started at 2020-01-16 16:58:23.584
Test Suite 'myProjectTests' started at 2020-01-16 16:58:23.584
Test Case '-[myProjectTests.myProjectTests testExample]' started.
Test Case '-[myProjectTests.myProjectTests testExample]' passed (0.070 seconds).
Test Suite 'myProjectTests' passed at 2020-01-16 16:58:23.654.
     Executed 1 test, with 0 failures (0 unexpected) in 0.070 (0.070) seconds
Test Suite 'myProjectPackageTests.xctest' passed at 2020-01-16 16:58:23.655.
     Executed 1 test, with 0 failures (0 unexpected) in 0.070 (0.071) seconds
Test Suite 'All tests' passed at 2020-01-16 16:58:23.655.
     Executed 1 test, with 0 failures (0 unexpected) in 0.070 (0.071) seconds

Processing test results

Processing the outcome of a test run can be quite challenging. I've created a small tool that converts your test results into a JSON file. It's called Testify, and you can grab it from GitHub. Let me show you how it works:


swift test 2>&1 | testify
swift test --filter myProjectTests.myProjectTests 2>&1 | testify

Unfortunately, you can't use the --parallel flag in this case, because then you'll only get a progress indicator instead of the final test result output. Fortunately, you can still filter tests, so you don't have to wait for everything.

The swift test command prints the test results to standard error instead of standard output. That's why you have to redirect stderr into stdout using the 2>&1 redirection.

If everything went well you'll see a nice JSON output, just like this one:


{
  "endDate" : 602416925.25200009,
  "children" : [
    {
      "endDate" : 602416925.25200009,
      "children" : [
        {
          "endDate" : 602416925.25200009,
          "children" : [

          ],
          "startDate" : 602416925.19000006,
          "cases" : [
            {
              "outcome" : "success",
              "className" : "myProjectTests",
              "moduleName" : "myProjectTests",
              "testName" : "testExample",
              "duration" : 0.062
            }
          ],
          "unexpected" : 0,
          "outcome" : "success",
          "name" : "myProjectTests"
        }
      ],
      "startDate" : 602416925.19000006,
      "cases" : [

      ],
      "unexpected" : 0,
      "outcome" : "success",
      "name" : "myProjectPackageTests.xctest"
    }
  ],
  "startDate" : 602416925.19000006,
  "cases" : [

  ],
  "unexpected" : 0,
  "outcome" : "success",
  "name" : "Selected tests"
}
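
Since the output is plain JSON, you can post-process it with any tool you like. Here is a rough sketch using jq (assuming you have it installed; the report.json file name and the queries are mine, based on the structure above) that prints the overall outcome and lists any failed test cases:

# save the report, then query it with jq
swift test 2>&1 | testify > report.json

# overall outcome of the whole test run
jq '.outcome' report.json

# fully qualified names of the failed test cases, if any
jq '[.. | .cases? // empty | .[] | select(.outcome != "success") | "\(.className).\(.testName)"]' report.json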

Enabling code coverage data

Code coverage is a measurement of how many lines/blocks/arcs of your code are executed while the automated tests are running.

I believe that coverage reports are extremely useful for the entire developer team. Project managers can refer to the coverage percentage when it comes to software quality. The QA team can examine coverage reports to test the remaining uncovered parts or to suggest new test ideas to the developers. Programmers can eliminate most of the bugs by writing proper unit / UI tests for the application, and a coverage report helps them analyse what still needs to be done.

Xcode has a built-in coverage report page, but you have to enable reports first. You can achieve the exact same thing without Xcode, by simply passing an extra flag to the test command:

 
swift test --enable-code-coverage

Ok, that's fine, but where is my report? 🤔
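
The raw data is actually there, hiding inside the build folder (the exact path can vary by platform, but it should look roughly like this):

# the merged profile data ends up next to the build products;
# .build/debug is a symlink to the platform specific build folder
ls .build/debug/codecov/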

How to display coverage data?

So far so good: you have generated the code coverage data, but it is stored in LLVM's binary profiling format. You need one more tool in order to display it properly.


# on Linux
sudo apt-get install llvm

# on macOS
brew install llvm

# add the Homebrew llvm binaries to your PATH
# (on Apple Silicon, Homebrew lives under /opt/homebrew instead of /usr/local)
echo 'export PATH="/usr/local/opt/llvm/bin:$PATH"' >> ~/.zshrc
# or, before Catalina, where bash is the default shell
echo 'export PATH="/usr/local/opt/llvm/bin:$PATH"' >> ~/.bashrc
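
Once the PATH is updated (open a new terminal or source the rc file first), it's worth double checking that the freshly installed tool is the one being picked up:

# verify that llvm-cov is available on the PATH
which llvm-cov
llvm-cov --version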

Now you are ready to use llvm-cov, which is part of the LLVM infrastructure. You can read more about it by running man llvm-cov, but let me show you how to display a basic coverage report for the sample project.


llvm-cov report \
    .build/x86_64-apple-macosx/debug/myProjectPackageTests.xctest/Contents/MacOS/myProjectPackageTests \
    -instr-profile=.build/x86_64-apple-macosx/debug/codecov/default.profdata \
    -ignore-filename-regex=".build|Tests" \
    -use-color

This command generates the coverage report for your tests, but only if you provided the --enable-code-coverage flag during testing. Note that these llvm-cov input paths may vary based on your current system. If you are using Linux, you should simply pass the xctest bundle path as the parameter (e.g. .build/x86_64-unknown-linux/debug/myProjectPackageTests.xctest in this case); the instrumentation profile is located under the same directory, so it's not a big difference, but still be careful with the platform name. Usually you don't want to include files from your .build & Tests directories, but you can specify your own regex-based filter as well. 🔍
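
If you want to feed the data to an external coverage service instead of reading it in the terminal, llvm-cov can also export it. Here is a rough sketch that writes an lcov file (same macOS paths as above; coverage.lcov is just a file name I picked):

llvm-cov export \
    .build/x86_64-apple-macosx/debug/myProjectPackageTests.xctest/Contents/MacOS/myProjectPackageTests \
    -instr-profile=.build/x86_64-apple-macosx/debug/codecov/default.profdata \
    -ignore-filename-regex=".build|Tests" \
    -format=lcov > coverage.lcov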

Putting everything together

You don't want to mess around with these parameters, right? Neither do I. That's why I made a handy shell script that figures out everything based on the current project. Save yourself a few hours; here is the final snippet:


#!/usr/bin/env bash
# note: the script relies on bash features ([[ ]], $OSTYPE), so don't run it with plain sh

# locate the build folder and the compiled xctest bundle
BIN_PATH="$(swift build --show-bin-path)"
XCTEST_PATH="$(find "${BIN_PATH}" -name '*.xctest')"

# on macOS the coverage binary lives inside the bundle, on Linux the bundle itself is the binary
COV_BIN="${XCTEST_PATH}"
if [[ "$OSTYPE" == "darwin"* ]]; then
    f="$(basename "${XCTEST_PATH}" .xctest)"
    COV_BIN="${COV_BIN}/Contents/MacOS/$f"
fi

llvm-cov report \
    "${COV_BIN}" \
    -instr-profile=.build/debug/codecov/default.profdata \
    -ignore-filename-regex=".build|Tests" \
    -use-color

You should save it as cov.sh or something similar. Make it executable with chmod +x cov.sh and you are ready to run it by simply entering ./cov.sh. Your coverage report will look something like this:


Filename            Regions    Missed Regions     Cover   Functions  Missed Functions  Executed       Lines      Missed Lines     Cover
---------------------------------------------------------------------------------------------------------------------------------------
myProject.swift           3                 0   100.00%           3                 0   100.00%           8                 0   100.00%
---------------------------------------------------------------------------------------------------------------------------------------
TOTAL                     3                 0   100.00%           3                 0   100.00%           8                 0   100.00%
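
Keep in mind that the script only reports on whatever coverage data is already sitting in the build folder, so you'll usually want to pair it with a fresh test run:

# run the tests with coverage enabled, then print the report
swift test --enable-code-coverage && ./cov.sh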

Of course, if you run this script on a project that has more source files & unit tests, it'll produce a much more detailed report. 😜
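
If the summary table is not enough, llvm-cov can also print the annotated source line by line. A quick sketch, reusing the COV_BIN variable from the script above (or pass the full binary path if you run it standalone):

# print each source line with its execution count instead of the summary table
llvm-cov show \
    "${COV_BIN}" \
    -instr-profile=.build/debug/codecov/default.profdata \
    -ignore-filename-regex=".build|Tests"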

Conclusion

Working with test results and coverage data is a nice way to share reports with other members of your team. By running these commands on a continuous integration server (like Bitrise), you can automate your entire workflow.
