BHoM Testing

BHoM allows you to create tests of several types. We mainly distinguish between Unit/Functional Tests and Data-Driven Tests. This section explains in detail how to write Unit/Functional Tests for BHoM in Visual Studio. For Data-Driven Tests, please refer to their dedicated page, which also includes a section comparing the two approaches.

Test Solution setup

BHoM keeps tests separate from functionality and object models. This is achieved by placing the tests in a different solution from the main repository solution.

On this page, we will work through an example of creating tests for the Robot_Toolkit.

Create a new unit-tests directory

To add a new test solution, please create a new unit-tests folder in the Toolkit's .ci directory, e.g.:

image

If a .ci folder does not exist in your Toolkit yet, create that first.

Create a new test solution

You can create a new Test solution in Visual Studio from the File menu as shown below.

image

Search for NUnit in the search bar and select it:

image

Make sure that you have "create new solution" and "place solution and project in the same directory" toggled on.

Please name the new test solution with the same name as the main toolkit plus the suffix _Tests. For example, for Robot_Toolkit, the new test solution will be called Robot_Toolkit_Tests.

image

This will create a new solution with a dummy NUnit test project in it. For example, if we are setting up the Robot_Toolkit_Tests for the first time, we will end up with this:

image

Add the existing Toolkit projects to the Test solution

In order to reference the main Toolkit projects, you can add them to the test solution as "Existing projects". This will allow you to debug the Toolkit code while running the unit tests.

Right-click the solution name in the Solution Explorer and select "Add existing project":

image

Navigate to the Toolkit's repository and select the Toolkit's oM project, if it exists:

image

This will add the Toolkit's oM project to the Test solution.

Repeat for all the Toolkit's projects, e.g. the Engine and Adapter ones, if they exist. In the example for the Robot_Toolkit, you will end up with this:

image

Add a Solution Configuration for more efficient testing

After adding the Toolkit's existing projects to the Test solution, you can add a new "Test" Solution Configuration that can be used when running tests.

Doing this avoids time-consuming situations, such as having to close software that locks the centralised assemblies (e.g. Rhino, Grasshopper, Excel) whenever you want to compile or run Unit Tests. This happens because BHoM relies on post-build events to copy assemblies into the ProgramData/BHoM folder, and if a piece of software locks them, the project cannot build successfully.

Open the Configuration Manager as shown below:

image

Then select "New":

image

And do the following:

image

This will create a new Solution Configuration called "Test". Make sure it's always selected when running tests from the Test solution:

image

In order to benefit from this, we need to edit the post-build events of every non-test project in the Toolkit (in our Robot_Toolkit example, there are only 3: Robot_oM, Robot_Engine, and Robot_Adapter). Let's take Robot_oM as an example. The post-build events can be accessed by right-clicking the project, selecting Properties, then looking for "Post-build Events".

image

The post-build events should look something like this:

xcopy "$(TargetDir)$(TargetFileName)" "C:\ProgramData\BHoM\Assemblies" /Y

This instructs the MSBuild process to copy the compiled assembly to the BHoM central folder, from where it can be loaded by UIs such as Grasshopper. We do not want this copy to happen when we are only testing via NUnit. Therefore, we can modify the post-build event by replacing it with:

if not "$(ConfigurationName)" == "Test" (xcopy "$(TargetDir)$(TargetFileName)" "C:\ProgramData\BHoM\Assemblies" /Y)

This means that the post-build event is going to be triggered only when the Solution Configuration is not set to "Test".

Solution Configuration

Make sure that the Solution Configuration is set to "Test" whenever you are in the Test solution (e.g. GitHub/Robot_Toolkit/.ci/unit-tests/Robot_Toolkit_Tests.sln), and to another configuration (e.g. "Debug") when you are in the normal toolkit solution (e.g. GitHub/Robot_Toolkit/Robot_Toolkit.sln).

If you have followed the guide so far, this will work fine.

The only thing this changes is that the DLLs are not copied to the BHoM central location when the "Test" configuration is selected: if you are developing some new functionality and you want the change to appear in e.g. a UI like Grasshopper, you need to make sure to compile the solution with the "Debug" configuration!

Create a new test project

At this point, you should have a Test solution .sln file in your Toolkit's .ci folder, e.g. something like GitHub/Robot_Toolkit/.ci/unit-tests/Robot_Toolkit_Tests.sln.
You will now want to create a Test project where you can write tests.

Decide what the Test project should target

In order to create a new test project, first decide what kind of functionality you will want to test in it. Because BHoM functionality only resides in Engine and Adapter projects (not oM projects), we can have one test project corresponding to each Engine/Adapter project.

For example, say you want to write tests to verify the functionality contained in some Robot_Engine method, such as Robot.Query.GetStringFromEnum(). Because this method resides in the Robot_Engine, the tests will need to go into a Test project dedicated to testing Robot_Engine functionality.

We can create a new test project for this. Right-click the Solution in the Solution Explorer and select "Add", then "New Project":

image

Search for NUnit in the search bar and select it:

image

Because this test project will target functionality in the Robot_Engine, let's name it appropriately as Robot_Engine_Tests:

image

Click next and accept .NET 6.0 as the target framework, then click "Create".

image

We will end up with this new test project:

image

We can also delete the dummy test project at this point. Right-click the Robot_Toolkit_Tests project and select "Remove":

image

We end up with this situation:

image

Configure the default namespace for the test project

We want to set the default namespace for tests included in this project. To do so, right-click the test project and go into Properties:

image

Type "default namespace" in the search bar at the top, then replace the text in the text box with an appropriate namespace. The convention is: start with BH.Tests., then append Engine. or Adapter. depending on what the test project will target; then end with the name of the software/toolkit that the project targets, for example Robot. For our example so far, this gives BH.Tests.Engine.Robot.

image

Adding references to a Test Project

Add existing project references

Because the tests will verify functionality placed in another project, namely the Robot_Engine, we need to add a reference to it. Right-click the project's Dependencies and select "Add project reference":

image

Then add the target project and any upstream dependencies of the target project. For example, if adding an Engine project, make sure you also add the related oM project; if adding an Adapter project, add both the related Engine and oM projects.

image

Add other BHoM assemblies dependencies

Most likely you will also need to reference other assemblies in order to write unit tests. Again, right-click the project's Dependencies and select "Add project reference", then click "Browse" and "Browse" again:

image

This will open a popup. Navigate to the central BHoM installation folder, typically C:\ProgramData\BHoM\Assemblies. Add any assembly that you may need. These will appear under the "Assemblies" section of the project's Dependencies.

Typically, a structural engineering Toolkit will need the following assembly references, although they vary case by case:

image

Once you have added the assemblies, select all of them as in the image above (click the top one, then shift+click the bottom one), right-click one of them, select "Properties", and under "Copy Local" make sure that "True" or "Yes" is selected:

image

This is required to make sure that NUnit can correctly load the assemblies.

Adding extra NuGet packages

We can leverage some other NuGet packages to make tests simpler and nicer.

If you want your unit tests to be automatically invocable by CI/CD mechanisms, check with the DevOps lead whether the NuGet packages you want to use are already supported or can be added to the CI/CD pipeline. The following packages are already supported.

Add FluentAssertions

We use the FluentAssertions NuGet package for easier testing and logging. Add it by right-clicking the project's Packages and selecting "Manage NuGet packages":

image

Click "Browse", then type "FluentAssertions" in the search bar. Select the first result and then click "Install":

image

We will provide some examples of how to use this library below. Please refer to the FluentAssertions documentation to see all of its powerful features.
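As a taster, here is a small sketch of a few commonly used FluentAssertions calls. The values and variable names are purely illustrative; all the assertion methods shown are part of the FluentAssertions API.

```csharp
using System;
using System.Collections.Generic;
using FluentAssertions;

public static class FluentAssertionsSampler
{
    public static void Examples()
    {
        // Equality, with an optional "because" message shown on failure.
        "BS5950".Should().Be("BS5950", "the enum value should map to its code name");

        // Collection assertions can be chained.
        var numbers = new List<int> { 1, 2, 3 };
        numbers.Should().HaveCount(3).And.Contain(2);

        // Exception assertions wrap the call in an Action.
        Action act = () => throw new ArgumentNullException();
        act.Should().Throw<ArgumentNullException>();
    }
}
```

The "because" messages are optional, but they are reported when an assertion fails, which makes broken tests much quicker to diagnose.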

Writing tests

Let's imagine we want to write some tests for the Robot Query method called Robot.Query.GetStringFromEnum(). Because this method resides in the Robot_Engine, the tests will need to go into the Robot_Engine_Tests project (created as explained above).

Because the method we want to test is a Query method, let's create a folder called Query:

image

Right-click the newly created Query folder and select "Add", then "New Item":

image

Let's name the new item after the method we want to test, e.g. GetStringFromEnum:

image

Let's edit the content of the generated file, so it looks like the following.

using NUnit;
using NUnit.Framework;
using FluentAssertions;

namespace BH.Tests.Engine.Robot.Query
{
    public class GetStringFromEnumTests
    {
        [Test]
        public void GetStringFromEnum()
        {

        }
    }
}

In particular, note that:

  • we added using NUnit;, using NUnit.Framework; and using FluentAssertions; at the top;
  • we edited the name of the class, appending Tests;
  • we added an empty test method named after the Engine method we want to verify (GetStringFromEnum). The test method is decorated with the [Test] attribute.

Test sections: Arrange, Act, Assert

Every good test should be composed of these 3 clearly identifiable main sections (please refer to Microsoft's Unit testing best practices for more info and examples):

  • Arrange: any statement that defines the inputs and configurations required to do the verification;
  • Act: execute the functionality that we want to verify, given the Arrange setup;
  • Assert: statements that make sure that the result of the Act is as it should be.

The test structure should always be clear and follow this pattern. Each test should only verify one specific piece of functionality. You can have multiple assertion statements if they all concur to test the same functionality, but more than two or three can be a red flag: it often means that you should split (or parameterise) the test.
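One common way to parameterise a test in NUnit is the [TestCase] attribute, which runs the same test body once per set of arguments. Below is a minimal sketch following the Robot example used on this page; the BS5950 pair is taken from that example, and any further [TestCase] lines would be added in the same way. Note that this approach still hard-codes expected values, so the caveats discussed later on this page about hard-coded outputs apply here too.

```csharp
// NUnit runs this test once per [TestCase] line, passing the
// attribute arguments to the method parameters.
[TestCase(oM.Adapters.Robot.DesignCode_Steel.BS5950, "BS5950")]
public void GetStringFromEnum(oM.Adapters.Robot.DesignCode_Steel input, string expected)
{
    // Act
    var result = BH.Engine.Adapters.Robot.Query.GetStringFromEnum(input);

    // Assert
    result.Should().Be(expected);
}
```

Each test case shows up as a separate entry in the Test Explorer, so a failure immediately tells you which input/output pair broke.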

Your first test: a simplistic example

Following the example so far, we could write this code for the GetStringFromEnum() test method:

[Test]
[Description("Verify that the GetStringFromEnum() method returns the correct string for a specific DesignCode_Steel enum value.")]
public void GetStringFromEnum()
{
    // Arrange
    // Set up any input or configuration for this test method.
    var input = oM.Adapters.Robot.DesignCode_Steel.BS5950;

    // Act
    // Call the target method that we want to verify with the given input.
    var result = BH.Engine.Adapters.Robot.Query.GetStringFromEnum(input);

    // Assert
    // Make sure that the result of the Act is how it should be.
    result.Should().Be("BS5950");
}

Note that we use FluentAssertions' Should().Be() method to verify that the value of the result is equal to the string BS5950, as it is supposed to be when calling the GetStringFromEnum engine method with the input DesignCode_Steel.BS5950.

Also note that it is good practice to add a test [Description] too! This is very helpful in case the test fails, as you get an explanation of what kind of functionality verification failed and how it was supposed to work.

Why this is a bad example of unit test

This example is simplistic and shown for illustrative purposes. It's not a good unit test for several reasons:

  • it does not test every possible combination of inputs to the GetStringFromEnum() engine method and the related outputs.
  • it hard-codes the value BS5950. We took that value by copying it from the body of the GetStringFromEnum() method and pasting it into the Assert statement. This effectively duplicates the value in two places: if the string in the engine method were modified, you would need to modify the test method too. You should avoid this kind of situation and limit yourself to verifying values against variables defined as part of the "Arrange" step. If you need to verify multiple output value possibilities, you should use a Data-Driven approach.

See below for better examples of unit tests.

Better examples of tests

To illustrate good unit tests, let's look at another repository, the base BHoM_Engine, and in particular at the tests in the IsNumericIntegralTypeTests class, which look like this (edited and with additional comments for illustrative purposes):

namespace BH.Tests.Engine.Base.Query
{
    public class IsNumericIntegralTypeTests
    {
        [Test]
        public void AreEnumsIntegral()
        {
            // Arrange. Set up the test data
            var input = typeof(DOFType);

            // Act. Invoke the target engine method.
            var result = BH.Engine.Base.Query.IsNumericIntegralType(input);

            // Assert. Verify that the output of the Act is how it should be.
            // If it fails the message in the string will be returned.
            result.ShouldBe(true, "By default, IsNumericIntegralType() considers enums as a numeric integral type.");
        }

        [Test]
        public void AreIntsIntegral()
        {
            // Arrange. Set up the test data
            var input = 10.GetType();

            // Act. Invoke the target engine method.
            var result = BH.Engine.Base.Query.IsNumericIntegralType(input);

            // Assert. Verify that the output of the Act is how it should be.
            // If it fails the message in the string will be returned.
            result.ShouldBe(true, "Integers should be recognised as Numeric integral types.");
        }
    }
}

As you can see, this class contains 2 tests: AreEnumsIntegral() and AreIntsIntegral(). A single test class should test the same "topic", in this case the BH.Engine.Base.Query.IsNumericIntegralType() method, but it can (and should) do so with as many tests as needed.
The first test checks that C# enums are recognised as integral types by the method IsNumericIntegralType() (they should be, by design). The second test checks that the same method also recognises C# integers as integral types.

Why are these tests better examples of good unit tests than the one in the previous section?

  • Tests should be "atomic" like this, because if something goes wrong, there is going to be a specific test telling you exactly what went wrong.
  • The possible outcomes are limited to True/False; it can be acceptable to "hard-code" True/False in the unit test itself. Writing result.ShouldBe(true) makes sense, as opposed to result.ShouldBe(someVerySpecificString) or result.ShouldBe(someHugeDataset).

A good idea would be to add a test that verifies that a non-integral numerical value is recognised as not an integer, for example a double like 0.15. Another test could be verifying that a non-numerical type is also recognised as not an integer, for example a string.
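Following the pattern of the class above, those two extra tests might look like this sketch (assuming the same IsNumericIntegralType() behaviour and the same assertion style as in the class shown above):

```csharp
[Test]
public void AreDoublesNotIntegral()
{
    // Arrange. A double is a numeric, but not an integral, type.
    var input = 0.15.GetType();

    // Act. Invoke the target engine method.
    var result = BH.Engine.Base.Query.IsNumericIntegralType(input);

    // Assert.
    result.ShouldBe(false, "Doubles are numeric but not integral, so they should not be recognised as numeric integral types.");
}

[Test]
public void AreStringsNotIntegral()
{
    // Arrange. A string is not a numerical type at all.
    var input = typeof(string);

    // Act. Invoke the target engine method.
    var result = BH.Engine.Base.Query.IsNumericIntegralType(input);

    // Assert.
    result.ShouldBe(false, "Strings are not numerical types, so they should not be recognised as numeric integral types.");
}
```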

If the possible outcomes of the output data were not limited to True/False, the target method would have been better suited to be verified with a Data-driven test. However, in certain situations, like when doing Test Driven Development, it can be acceptable to write tests that verify complex output data, although it's likely that a full test coverage will only be reached with Data-driven tests.

For more examples of good tests, keep reading.

Unit tests VS Data-Driven VS Functional tests

Unit tests verify that a particular piece of code, generally a function, works as expected. The perspective of a unit test is often that of the developer who authored the target function and wants to make sure it works properly.
The power of unit tests comes by creating many of them that verify the smallest possible functionality with many different input combinations. You should always strive to write small, simple unit tests. Please refer to Microsoft's Unit testing best practices for more information and examples.

In some cases, as mentioned in the section above, the verification in a unit test may need to target a complex set of data. For example, you may want to test your method against a "realistic" set of objects: many different input objects that cannot be generated easily from the code itself, but that can be generated easily in e.g. Grasshopper. In these cases, you should rely on Data-driven testing. Data-driven testing provides more robustness against changes, because it verifies that the target function always performs in the same way. If the tested function needs to change, you will also have to regenerate the expected output, and this procedure increases robustness.
However, in certain situations, like when doing Test Driven Development (TDD), it can be acceptable and even extremely helpful to write tests that verify against complex data. For example, Functional tests may well rely on complex set of data, and it's common to write them when doing TDD. In this scenario, it's still likely that a full test coverage will only be obtainable by also doing some Data-driven testing.

Tests that verify larger functionality are also possible, in which case we talk about Functional tests. Functional tests often take the perspective of a user using a piece of software that does many things in the background, like Pushing or Pulling objects via a BHoM_Adapter (you can see an example of this in the next section).
Functional tests can be slow to execute and, when they fail, they do not always give a good understanding of the possible causes of the failure, because they encompass many things. However, Functional tests can be very helpful to verify that large, complex pieces of functionality work as expected under precise conditions. They are also amazingly helpful when developing new pieces of functionality using the TDD approach.

In many cases, the best practice is to have a good balance of Unit, Functional and Data-driven tests. This comes with experience, just start with something and you'll get there!

unit test as an umbrella term

Sometimes, people use the term "unit tests" as an umbrella term for all kinds of tests. This is incorrect, as the only truly generic umbrella term should be "test". However, it's a common misconception and it's often done in development.
In BHoM we mistakenly perpetuate it in a couple of places:

  • in the setup of the Test Solution parent folder (the .ci/unit-tests folder; we should have .ci/tests)
  • in the name of the Data-Driven test component (which is called "unit test", but could be called "data driven test"). BHoM's data-driven tests are simply a type of unit test (equality assertion on the stored output data of a single method).

A Functional test example

Examples of Functional tests can be seen in the Robot_Adapter_Tests project. Adapter Test projects will likely contain lots of functional tests, as we care about testing complex behaviours like Push and Pull.

For example, see below the test PushBarsWithTagTwice() (slightly edited and with additional comments for illustrative purposes). We test the behaviour of the Push and Pull functionality, which in the backend is composed of a very large set of function calls. The test pushes a first set of 3 bars, then a second set of 3 bars, all with the same Tag; it then verifies that the second set of bars has overridden the first set.

[Test]
[Description("Tests that pushing a new set of Bars with the same push tag correctly replaces previous pushed bars and nodes with the same tag.")]
public void PushBarsWithTagTwice()
{
    // Arrange. Create two sets of 3 bars.
    int count = 3;
    List<Bar> bars1 = new List<Bar>();
    List<Bar> bars2 = new List<Bar>();
    for (int i = 0; i < count; i++)
    {
        bars1.Add(Engine.Base.Create.RandomObject(typeof(Bar), i) as Bar);
    }

    for (int i = 0; i < count; i++)
    {
        bars2.Add(Engine.Base.Create.RandomObject(typeof(Bar), i + count) as Bar);
    }

    // Act. Push both the sets of bars. Note that the second set of bars is pushed with the same tag as the first set of bars.
    m_Adapter.Push(bars1, "TestTag");
    m_Adapter.Push(bars2, "TestTag");

    // Act. Pull the bars and the nodes.
    List<Bar> pulledBars = m_Adapter.Pull(new FilterRequest { Type = typeof(Bar) }).Cast<Bar>().ToList();
    List<Node> pulledNodes = m_Adapter.Pull(new FilterRequest { Type = typeof(Node) }).Cast<Node>().ToList();

    // Assert. Verify that the count of the pulled bars is only 3, meaning that the second set of bars has overridden the first set of bars.
    pulledBars.Count.ShouldBe(bars2.Count, "Bars storing the tag have not been correctly replaced.");

    // Assert. Verify that the count of the pulled nodes is only 6, meaning that the second set of bars has overridden the first set of bars.
    pulledNodes.Count.ShouldBe(bars2.Count * 2, "Nodes storing the tag have not been correctly replaced.");
}

Leveraging the NUnit test framework: setup and teardown

When writing unit tests, you should leverage the NUnit test framework and other libraries in order to write clear, simple and understandable tests.

You may want to define NUnit "setup" methods, marked with [OneTimeSetUp] or [SetUp], in order to execute some functionality when tests start, for example starting up an adapter connection to a software. Similarly, you can define "teardown" methods ([OneTimeTearDown], [TearDown]) for functionality that must be executed when tests finish, for example closing the adapter connection.

Please refer to the NUnit guide to learn how to define startup and teardown methods.

For example, such methods are defined in the Robot_Adapter_Tests project. Let's look at the one-time setup done in Robot_Adapter_Tests:

namespace BH.Tests.Adapter.Robot
{
    public class PushTests
    {
        RobotAdapter m_Adapter;

        [OneTimeSetUp]
        public void OneTimeSetup()
        {
            m_Adapter = new RobotAdapter("", null, true);
            //... more code ...
        }

        //... more code ...
    }
}

Here, we use the [OneTimeSetUp] method to define behaviour that should be executed only once, before any of the tests contained in the PushTests class are run. This behaviour is the initialisation of the RobotAdapter, which is stored in a field of the class. All tests then reuse the same RobotAdapter instance, avoiding things like having to restart Robot for each and every test, which would be time-consuming.

Check the Robot_Adapter_Tests test project for more examples of Setup and Teardown methods, and refer to the NUnit guide for more examples and info.
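For symmetry with the setup above, a matching teardown might look like the sketch below. Note that the Close() call is hypothetical, for illustration only; use whatever mechanism your adapter or target software actually exposes to release the connection.

```csharp
[OneTimeTearDown]
public void OneTimeTearDown()
{
    // Runs once, after all tests in the class have finished.
    // Release the software connection held by the shared adapter instance.
    // Close() is a hypothetical method; the real RobotAdapter may expose
    // a different way to end the session.
    m_Adapter?.Close();
}
```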

Running tests

All tests existing in a Test solution can be found in the Test Explorer. If you can't find the Test Explorer, use the search bar at the top and type "Test Explorer":

image

You can run a single test by right-clicking it and selecting Run or Debug. If you choose "Debug", you will be able to hit breakpoints placed anywhere in the code.

By running tests often, you will be able to quickly develop new functionality while making sure you are not breaking any existing functionality.
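Tests can also be run outside Visual Studio, e.g. from a terminal using the .NET SDK's dotnet test command. The path below is the example path used on this page; adjust it to your setup.

```shell
# Run all tests in the test solution, using the "Test" configuration
# so the post-build xcopy step is skipped (see the Solution Configuration section).
dotnet test "GitHub/Robot_Toolkit/.ci/unit-tests/Robot_Toolkit_Tests.sln" --configuration Test
```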

Test Driven Development (TDD)

A good practice is Test Driven Development (TDD), which consists of writing tests first and implementing the functionality in the "Act" step later. You can create a stub of the implementation that does nothing, write the tests that should verify it works correctly, and then develop the functionality by adding code to the body of the stub. In other words:

  1. Write one or, better, many tests that verify a piece of functionality. They should have Arrange, Act and Assert phases.
  2. In the "Act" phase, just write a function call to the new function you want to define. Get inspired by the Arrange step to define the signature of the function call. Don't be bothered by the compiler complaining that the function doesn't exist!

    [Test]
    public void DoSomething()
    {
        // Arrange
        var input = someData; // data that I know I will want to use.

        // Act
        // DoSomething() doesn't exist yet!
        var output = BH.Engine.Something.Compute.DoSomething(input);

        // Assert
        output.Should().Be(expectedValue);
    }
    
  3. Write a stub for the target function:

    public static partial class Compute
    {
        public static object DoSomething(object input)
        {
            // You will implement this later. Don't do anything yet.
            return null;
        }
    }
    

  4. Run the tests. Make sure they all fail! Add as many tests as you can think of: they should describe well the functionality you want to develop.

  5. Write the target function until all the tests pass!

Doing this allows you to focus on the "what" first, and on the "how" later.
It helps you focus on the requirements and the target result that you want to achieve with the new function. In many cases, the implementation will then almost "write itself", and you will also end up with a nice collection of unit tests that can be re-run later to verify that everything keeps working (regression testing).