Tests in R
Nikita Gusarov
Following the previous post about package creation in R, we are going to dive into some details about how to work with packages. The first thing of interest is the possibility to run tests on a package’s contents in order to verify its behaviour. This greatly facilitates the workflow when creating a package and helps ensure its functionality.
What is testthat?
Looking at the contents of the devtools meta-package, we quickly encounter a great number of tools for testing packages.
One of the key elements here is the testthat package, which provides a number of functions to perform tests.
Among the advantages listed on the project’s official webpage, we discover that testthat:
- Provides functions that make it easy to describe what you expect a function to do, including catching errors, warnings, and messages.
- Easily integrates in your existing workflow, whether it’s informal testing on the command line, building test suites, or using R CMD check.
- Displays test progress visually, showing a pass, fail, or error for every expectation. If you’re using the terminal or a recent version of RStudio, it’ll even colour the output.
I assume that you probably already have testthat installed on your machine as part of the devtools package.
So, to use it, we may simply load the entire devtools suite:
# Load packages
library(devtools)
## Loading required package: usethis
Nearly all of the functions are built around an expect_ prefix, followed by the expected property to test for.
For example, to test whether an object is of type "double" we can execute:
# Test if the object's storage type is "double"
expect_type(object_to_test, "double")
If the condition is not satisfied, the function will throw an error message. The number of different conditions and wrappers for things to be tested is extremely large. This allows developers to avoid writing all testing conditions by hand and instead use simple wrapper functions to test their code.
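To illustrate, here is a short sketch of a few commonly used expectations from testthat (this assumes testthat is installed; each call passes silently and only errors if its condition fails):

```r
# Assumes the testthat package is installed
library(testthat)

# Each expectation is silent on success and errors on failure
expect_equal(1 + 1, 2)                # values are equal
expect_type(3.14, "double")           # storage type is "double"
expect_error(log("a"))                # the call throws an error
expect_warning(as.numeric("abc"))     # the call emits a warning
expect_match("Hello world!", "world") # string matches a regexp
```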
But these functions are merely a start: one should use them to automate the testing procedure. You may ask what the difference is between automated testing and simply testing the code in the terminal. In fact, the difference is tremendous, because having a hard-coded and documented testing workflow, in addition to a good code structure and package documentation, helps to keep track of your work. The advantages are fairly well summarised in the R Packages manual:
While you are testing your code in this workflow, you’re only doing it informally. The problem with this approach is that when you come back to this code in 3 months time to add a new feature, you’ve probably forgotten some of the informal tests you ran the first time around. This makes it very easy to break code that used to work.
But probably it’s eathier to understand after a short demonstration.
During this demonstration we will be using one another package alongside the testthat - the usethis package.
The last one offers a toolset for repetitive tasks automation for project setup and development.
Automated tests
Before proceeding with the automated tests demonstration, we should create a dummy package for this purpose. As an example, we will take the code written for the previous post. We create a simple function and generate a package structure around it:
# Create function
my_function = function() {
  cat("Hello world!")
}

# Create package
package.skeleton(
  name = "mypackage",
  list = "my_function",
  path = ".",
  encoding = "UTF-8"
)
Afterwards, it remains to follow the instructions specified in the Read-and-delete-me file1, generated at the root of the package’s directory:
- Edit the help file skeletons in man, possibly combining help files for multiple functions.
- Edit the exports in NAMESPACE, and add necessary imports.
- Put any C/C++/Fortran code in src.
- If you have compiled code, add a useDynLib() directive to NAMESPACE.
- Run R CMD build to build the package tarball.
- Run R CMD check to check the package tarball.
For now we are going to ignore most of these steps.
There is only one function, which does not require any supplementary documentation for testing purposes.
The NAMESPACE does not require any modification in terms of imports, and we have no compiled C or C++ code.
We may therefore remove this file with the command:
# Remove Read-and-delete-me
file.remove("./mypackage/Read-and-delete-me")
Nevertheless, before building the package we would like to set up an automated testing procedure.
For example, we may decide to verify the output of our function.
In order to do so, we create a template for our tests with usethis (there is no need to attach it separately, because usethis is loaded as part of the devtools meta-package):
# Move to package location
setwd("mypackage")
# Using testthat
usethis::use_testthat()
## ✔ Setting active project to '/home/nikita/Documents/Personal/website_source/content/en/post/2022-02-10-tests-in-r/mypackage'
## ✔ Adding 'testthat' to Suggests field in DESCRIPTION
## ✔ Setting Config/testthat/edition field in DESCRIPTION to '3'
## ✔ Creating 'tests/testthat/'
## ✔ Writing 'tests/testthat.R'
## • Call `use_test()` to initialize a basic test file and open it for editing.
This function will create a new tests folder in our package’s directory.
The folder contains a testthat subfolder and a testthat.R script.
The .R script will execute the following commands:
library(testthat)
library(mypackage)
test_check("mypackage")
This means that the testing script will load the required dependencies and the package to test, and then execute the series of tests. Once the main testing framework is generated, it remains to populate it with unit tests, which can be done with the command:
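Each unit test is declared with test_that(), which groups one or more expectations under a descriptive label. A minimal sketch, independent of our package (the label and expectations are arbitrary examples):

```r
# Assumes testthat is installed; the label is an arbitrary description
library(testthat)

test_that("basic arithmetic works", {
  expect_equal(2 + 2, 4)  # passes: values are equal
  expect_true(3 > 1)      # passes: condition is TRUE
})
```

When run via test_check() or devtools::test(), each such block is reported as one context line with its pass/fail counts.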
# Create test file for my_function
use_test("my_function", open = FALSE)
## ✔ Writing 'tests/testthat/test-my_function.R'
## • Edit 'tests/testthat/test-my_function.R'
By convention, this function will create a test-my_function.R file to contain all the tests related to the my_function() object.
The file will be stored under ./tests/testthat directory.
Afterwards, it remains only to edit the contents of the testing file to verify the desired behaviour.
For example, we may decide to test whether the function’s output matches "Hello world!" using the expect_output() function.
To do so, we write following lines to the file:
# Write contents
cat(
"test_that('output_hello_world', {",
" expect_output(",
" my_function(),",
" regexp = 'Hello world!'",
" )",
"})",
file = "tests/testthat/test-my_function.R",
sep = "\n",
append = FALSE
)
Once the tests are configured, we may run the testing job with a command from the devtools namespace.
But before that we are going to clear our workspace; don’t forget that we still have our function attached to the running session:
rm(list = c("my_function"))
Now, when everything is ready, execute:
# Run tests
test()
## ℹ Loading mypackage
## ℹ Testing mypackage
## ✔ | F W S OK | Context
##
## ✔ | 1 | my_function
##
## ══ Results ═════════════════════════════════════
## [ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]
##
## 🐝 Your tests are the bees knees 🐝
As we can see, the tests pass without errors.
This means that the output of my_function() matches the desired "Hello world!" statement.
But were we to modify the test to check whether the output is "Big white cat":
# Write contents
cat(
"test_that('output_hello_world', {",
" expect_output(",
" my_function(),",
" regexp = 'Big white cat'",
" )",
"})",
file = "tests/testthat/test-my_function.R",
sep = "\n",
append = FALSE
)
And run the testing procedure:
# Run tests
test()
## ℹ Loading mypackage
## ℹ Testing mypackage
## ✔ | F W S OK | Context
##
## ✖ | 1 0 | my_function
## ────────────────────────────────────────────────
## Failure (test-my_function.R:2:4): output_hello_world
## `my_function\(\)` does not match "Big white cat".
## Actual value: "Hello world!"
## Backtrace:
## 1. testthat::expect_output(my_function(), regexp = "Big white cat")
## at test-my_function.R:2:3
## 2. testthat::expect_match(...)
## 3. testthat:::expect_match_(...)
## ────────────────────────────────────────────────
##
## ══ Results ═════════════════════════════════════
## [ FAIL 1 | WARN 0 | SKIP 0 | PASS 0 ]
##
## Keep trying!
One failed test would be detected, because the values do not match.
Once you become familiar with setting up test procedures, you may proceed to more complex tasks. For example, it may be interesting to configure a Continuous Integration (CI) workflow to run all the tests externally on a dedicated server.
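As a sketch of how such a CI setup could start, usethis can generate a GitHub Actions workflow from one of its standard templates (this assumes the package is tracked in a GitHub-hosted repository; "check-standard" is one of the template names shipped with usethis):

```r
# Sketch: generate a GitHub Actions workflow that runs R CMD check
# (including the tests) on every push; requires a GitHub repository
usethis::use_github_action("check-standard")
```

The generated workflow file lands under .github/workflows/ and runs the package checks on the CI server, so failing tests are caught before merging.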
This is an autogenerated file, created by the
package.skeleton() function.↩︎