Testing applications with real-world data

When working with IoT devices you’re often limited to a few options for testing data ingestion and features:

  • Generate a dataset that covers your use cases (Test fixtures)
  • Test with an actual device
  • Generate realistic, random device data (Simulator)

Each has some notable tradeoffs and advantages. In the examples below we’ll be using SenML (RFC 8428), a formally structured JSON format for sensor measurements.

Additionally, the tests we’re looking to do would be relatively low cost.

Test Fixtures

A test fixture might be as simple as /fixtures/telemetry.json in your project root.
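For illustration, a minimal SenML fixture might look like the following. The device name, timestamps, and values here are invented, and the base unit is liters per second from the RFC 8428 units registry:

```json
[
  {"bn": "urn:dev:ow:10e2073a01080063:", "bt": 1320067464, "bu": "l/s", "n": "flow", "v": 0.0012},
  {"n": "flow", "t": 1, "v": 0.0},
  {"n": "flow", "t": 2, "v": 0.0018}
]
```

Records after the first inherit the base name, base time, and base unit, which keeps fixtures compact.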



Pros

  • Tests are easily repeatable
  • Initially low overhead
  • States are easily known and thus simple to design for


Cons

  • Missed edge cases result in false negatives; detecting them relies on developer ingenuity
  • A change in code requires adjusting, and most likely re-testing, all affected fixtures
  • Maintaining a growing set of fixtures can get hairy

The problem with the test fixture above is that while it may satisfy some test cases, I would have to create a new fixture to test error detection or certain patterns. Before long, I might have many fixtures to maintain with no end in sight.

Some astute programmers may point out that you could modify this data in flight during certain tests, minimizing duplication; however, test cases modified during setup are considered unsafe. A scenario that comes to mind: once you change the function’s behavior down the road, you must hunt down every custom manipulation you added in your setups to avoid duplication.

Device Testing

I have found that while device-based testing typically sits at the expensive end of the testing spectrum (in both time and effort), it is very worthwhile. The question is really when such testing is appropriate.

Without diving too deep: tests utilizing real-world devices belong later in the SDLC, while the kind of tests we’re looking to do should be lower cost, though not quite as cheap as unit tests.


Pros

  • Data is as realistic as it can be in a controlled environment
  • Edge cases surface on their own rather than relying on programmer ingenuity

Cons

  • Increased time
  • Data authenticity depends on the lab
  • Labs are expensive
  • Slightly increased effort for automation
  • Probably partially manual
  • Higher long-term overhead

Simulated Data

In the graph below we’ll be using a skewed distribution to simulate data based on numbers we know to be normal.

    {
        "type": "line",
        "data": {
            "labels": [0, 1, 2, 3, 4, 5],
            "datasets": [{
                "label": "Probability of mL per second",
                "data": [60, 20, 15, 5, 0, 0]
            }]
        }
    }

In plain ol’ English

  • The y-axis is a probability whose total must equal 100%
  • The x-axis is a category
  • Category 0 represents literally 0.0000 - 0.0000 mL
  • Category 1 represents 0.0001 mL - 2.0000 mL
  • and so on

The probabilities can then be used to pick a category with a corresponding range, and a random number generator can produce a number within that range. The graph above is a distribution (more precisely, a skewed distribution) that accurately summarizes the data we receive from devices. You could also generate a distribution from real-world data and assign the categories afterwards.

Now, all we need is a library that can manufacture such numbers, a CLI that acts as our HCI, and a little shameless self-promotion.

Fortunately, I developed a package in Go that takes care of generating data based on probabilities and ranges. From here it would be easy to develop special functions that can inject faults through the use of flags, which is precisely what we ended up doing.

One of our initial tests was actually load-testing our new service. We generated a year’s worth of data (absolutely unrealistic, but it’s testing, so why not?) and fired it off at the service. Our new service consumed and normalized the SenML request in under two seconds.

Additionally, we were able to test our data storage mechanisms from the client. It became a rather inexpensive way for developers to run what would usually qualify as an integration test at the cost of a unit test.


Pros

  • Can be versioned along with the app
  • Can be generic
  • Fast
  • Possible to easily test the tester
  • Lower long-term overhead
  • Could be used to generate test fixture data


Cons

  • Another piece of software to maintain
  • More up-front overhead
  • Must be maintained in lock-step with device use cases