A major challenge in testing IoT applications is recreating IoT device behaviour in terms of data, load, and network scenarios. Many engineering teams find it tempting to build their own simulators. After all, they only need to send device payloads over the right protocol. However, as testing demands coverage of more scenarios at greater scale, homegrown simulators become just as complex to build and maintain as the connected product itself. Beyond the continued investment of time and effort, there are blind spots to watch out for before committing to building a simulator.

Blind Spot 1: Neglecting the Edge

As IoT applications become increasingly distributed across cloud and edge, simulators often prioritize cloud-centric testing and disregard the unique challenges and intricacies of the edge environment. As a result, systems are poorly validated for edge scenarios, raising the risk of failures, compatibility issues, and suboptimal use of edge resources. Comprehensive testing should cover both cloud and edge components to ensure optimal performance across the entire IoT ecosystem.

Blind Spot 2: Ignoring Network Condition Variability

IoT systems operate in diverse network conditions, including varying levels of latency, packet loss, and bandwidth constraints. However, these network conditions are often not adequately considered during testing, leaving gaps in assessing the application’s resilience and adaptability. Neglecting network condition variability may result in unexpected failures or degraded performance when the application is deployed in real-world scenarios.
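As one illustration, a simulator can inject these conditions at the transport boundary rather than assuming a perfect network. The sketch below is plain Python with made-up latency and loss figures (the numbers are assumptions for demonstration, not measurements from any real network): it wraps a device's send function with configurable latency, jitter, and packet loss.

```python
import random
import time

class FlakyLink:
    """Simulates an unreliable network link for a device simulator.

    Latency, jitter, and loss figures are illustrative assumptions.
    """

    def __init__(self, base_latency_ms=50, jitter_ms=200, loss_rate=0.05, seed=None):
        self.base_latency_ms = base_latency_ms
        self.jitter_ms = jitter_ms
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)  # seeded for reproducible test runs

    def send(self, payload, transport):
        """Deliver payload via transport(), or drop/delay it first."""
        if self.rng.random() < self.loss_rate:
            return False  # packet lost in transit
        delay = (self.base_latency_ms + self.rng.random() * self.jitter_ms) / 1000
        time.sleep(delay)  # simulated network latency + jitter
        transport(payload)
        return True

# Usage: wrap the simulator's real send path. Loss rate is set
# unrealistically high here to make drops visible in a short run.
delivered = []
link = FlakyLink(base_latency_ms=1, jitter_ms=1, loss_rate=0.3, seed=42)
results = [link.send({"temp": 21.5}, delivered.append) for _ in range(20)]
```

Driving the same test suite through several such link profiles (good, lossy, congested) exposes resilience gaps long before field deployment.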

Blind Spot 3: Unrealistic Load Testing

Many teams end up maintaining a separate version of the simulator for load testing, either developed from the ground up by internal teams or based on tools like JMeter. In both cases, the design tends toward thread-based simulation, which is poorly suited to generating realistic data flows. The problem with thread-based designs is that they treat devices like users in the web world, but devices are not users. Unlike users, IoT devices are always connected, maintain state, stream data continuously, and use different protocols. This is why applications often fail at much lower loads in production, and the deviations become more pronounced as the complexity and scale of the application grow.
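One way to avoid the thread-per-device trap is an event-driven design, where thousands of stateful, always-on device objects share a handful of threads instead of one thread each. A minimal sketch using Python's asyncio (the device model and message shape are illustrative assumptions, not a complete simulator):

```python
import asyncio

class SimDevice:
    """A stateful simulated device: stays 'connected' and streams readings.

    A real simulator would speak an actual protocol (MQTT, CoAP, etc.);
    this sketch only shows the concurrency and state-keeping pattern.
    """

    def __init__(self, device_id):
        self.device_id = device_id
        self.reading = 20.0  # state carried across messages, unlike a stateless web "user"

    async def run(self, sink, n_messages):
        for seq in range(n_messages):
            self.reading += 0.1  # next reading depends on current state
            sink.append((self.device_id, seq, round(self.reading, 1)))
            await asyncio.sleep(0)  # yield: thousands of devices share one thread

async def simulate(n_devices, n_messages):
    sink = []
    devices = [SimDevice(f"dev-{i}") for i in range(n_devices)]
    # All devices stream concurrently on a single event loop.
    await asyncio.gather(*(d.run(sink, n_messages) for d in devices))
    return sink

messages = asyncio.run(simulate(n_devices=1000, n_messages=5))
```

A thread-based design would need 1,000 OS threads for the same run; the event-driven version scales to tens of thousands of persistent, stateful connections on modest hardware, which is much closer to how real device fleets load a backend.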

Blind Spot 4: Cloud-to-Device Scenario Testing

Homegrown simulators are primarily designed for device-to-cloud communication, with only rudimentary handling of cloud-to-device flows. In today’s IoT landscape, where bidirectional communication is prevalent, the absence of comprehensive testing in this area creates a significant blind spot and exposes enterprises to real risk. Cloud-to-device scenario combinations must be tested comprehensively; neglecting to do so leaves untested pathways that can result in failures or suboptimal performance when devices rely on cloud commands or data transfers.
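A simulated device therefore needs a command path, not just a telemetry path: it must receive cloud-originated messages, update its state, and acknowledge them. The sketch below shows one possible shape for this. The command names and payload formats are invented for illustration and do not correspond to any specific IoT platform's API.

```python
import json

class SimulatedDevice:
    """Sketch of a simulated device handling cloud-to-device commands.

    Command vocabulary ("set_interval", "firmware_update") is a made-up
    example, not any particular platform's command set.
    """

    def __init__(self):
        self.firmware = "1.0.0"
        self.reporting_interval_s = 60

    def on_cloud_message(self, raw):
        """Dispatch a cloud-originated command and return a device ack."""
        cmd = json.loads(raw)
        if cmd["type"] == "set_interval":
            self.reporting_interval_s = cmd["seconds"]
            return {"status": "ok", "interval": self.reporting_interval_s}
        if cmd["type"] == "firmware_update":
            self.firmware = cmd["version"]
            return {"status": "ok", "firmware": self.firmware}
        # Unknown commands should be surfaced, not silently dropped.
        return {"status": "error", "reason": f"unknown command {cmd['type']}"}

device = SimulatedDevice()
ack = device.on_cloud_message(json.dumps({"type": "set_interval", "seconds": 10}))
```

With this structure, test scenarios can cover command ordering, commands arriving while a device is offline, and malformed payloads, which are precisely the bidirectional paths homegrown simulators tend to skip.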

Blind Spot 5: Custom Simulators Don’t Support Automation

There is no dearth of automation and CI/CD tools for the user-centric flows of IoT applications involving web, mobile, and APIs. However, when it comes to device-data and communication-centric workflows that involve devices, data ingestion, and processing layers (real-time and batch pipelines), these tools are ineffective. This inconsistency in the automation toolchain impedes enterprises’ ability to move quickly. It’s like driving a sports car at the front end with a horse cart at the back: eventually the horse cart dictates the speed of your development cycle.
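One way to close this gap is to make device scenarios scriptable and declarative, so they run as ordinary test cases in the same CI pipeline as web and API tests. A minimal sketch of that idea follows; `toy_pipeline` is a hypothetical stand-in for the real ingestion layer, and the scenario format is an assumption for illustration.

```python
def run_scenario(scenario, pipeline):
    """Run a declarative device scenario; return the list of failing steps.

    `pipeline` is a callable standing in for the system under test: it
    takes a device payload and returns the processed record.
    """
    failures = []
    for step in scenario["steps"]:
        actual = pipeline(step["send"])
        if actual != step["expect"]:
            failures.append({"step": step, "actual": actual})
    return failures

# Toy pipeline standing in for the real ingestion/processing layer.
def toy_pipeline(payload):
    return {"device": payload["id"], "temp_f": payload["temp_c"] * 9 / 5 + 32}

scenario = {
    "name": "temperature conversion",
    "steps": [
        {"send": {"id": "dev-1", "temp_c": 0},
         "expect": {"device": "dev-1", "temp_f": 32.0}},
        {"send": {"id": "dev-1", "temp_c": 100},
         "expect": {"device": "dev-1", "temp_f": 212.0}},
    ],
}
failures = run_scenario(scenario, toy_pipeline)
```

Because a scenario is just data plus assertions, it can be versioned alongside the application code and executed by any CI runner, putting the device side of the toolchain on the same footing as the web and API side.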


Thorough testing of IoT applications is essential for ensuring their reliability, performance, and compatibility within IoT systems. However, blind spots in scenario coverage with simulators hinder comprehensive testing efforts. This post has highlighted five significant blind spots: inadequate consideration of the edge environment, the omission of network condition variability, load testing that does not match realistic device load characteristics, the lack of cloud-to-device scenario testing, and missing support for automation. By addressing these blind spots and conducting comprehensive scenario testing, organizations can enhance the quality and effectiveness of their IoT applications, minimizing failures and optimizing user experience in the dynamic IoT landscape.


Gaurav Johri, Co-Founder & CEO, Doppelio