Nected's Load Testing and Performance Benchmarking

Read the in-depth analysis of Nected's load testing and performance benchmarking. Discover how the platform ensures stability, high performance, and scalability under various operational loads.

Prabhat Gupta


Nected is an all-in-one platform that provides building blocks such as a rule engine, workflow automation, and A/B testing to build backend logic flows quickly, experiment more, and iterate faster. Since it is designed for customer-facing and mission-critical flows, it must deliver high stability, performance, and scalability.

This blog aims to provide a clear and factual insight into how Nected performs under significant operational loads. We will explore the detailed process and results of our load testing and performance benchmarking, focusing on the technical aspects of these exercises. The goal is to demonstrate, through empirical data and analysis, the stability and scalability of the Nected platform.

We will cover the architecture of Nected, the setup for our benchmarking tests, and the findings from these tests. This approach ensures a comprehensive understanding of Nected’s capabilities in handling high demand and scaling effectively.

Nected Architecture Overview

Nected's architecture is fundamentally designed to optimize task execution in a no-code/low-code environment. This section breaks down the key components of the platform, offering a clear view of how each contributes to overall performance and scalability.

Core Architecture Overview:

The architecture of Nected is structured to efficiently manage and execute backend logic flows. It comprises several critical services, each playing a unique role in the system:

  • Router Service: This service acts as the initial point of contact for all incoming tasks. It routes tasks to the appropriate services for execution, ensuring efficient distribution of workload.
  • Task Manager: The Task Manager is crucial for overseeing the lifecycle of each task. It monitors task status, manages task queues, and coordinates between different services to ensure smooth execution.
  • Executor Service: Responsible for the actual execution of tasks, the Executor Service processes the backend logic as defined in the tasks. It's designed for high efficiency and low latency in task processing.

This diagram provides a visual representation of the flow of tasks through the system and the interaction between different services.
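The interaction between these three services can be sketched as a minimal queue-based pipeline. This is an illustrative sketch only: the class and method names below are hypothetical and do not reflect Nected's actual internals.

```python
from queue import Queue

class TaskManager:
    """Tracks task status and queues tasks for execution."""
    def __init__(self):
        self.queue = Queue()
        self.status = {}

    def submit(self, task_id, payload):
        self.status[task_id] = "queued"
        self.queue.put((task_id, payload))

class ExecutorService:
    """Pulls tasks off the queue and runs the backend logic."""
    def __init__(self, manager):
        self.manager = manager

    def run_next(self):
        task_id, payload = self.manager.queue.get()
        self.manager.status[task_id] = "running"
        result = payload["logic"](payload["input"])  # execute the flow
        self.manager.status[task_id] = "done"
        return result

class RouterService:
    """Entry point: routes each incoming request to the task manager."""
    def __init__(self, manager):
        self.manager = manager

    def handle(self, task_id, logic, input_value):
        self.manager.submit(task_id, {"logic": logic, "input": input_value})

manager = TaskManager()
router = RouterService(manager)
executor = ExecutorService(manager)

router.handle("t1", lambda x: x * 2, 21)
print(executor.run_next())   # 42
print(manager.status["t1"])  # done
```

The key point the sketch captures is the separation of concerns: routing, lifecycle tracking, and execution are independent services, which is what lets each one scale on its own.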

Scalability and Performance:

Each component in Nected’s architecture is designed with scalability in mind. The platform can dynamically adjust resource allocation based on the incoming workload. This flexibility is key to maintaining performance under varying load conditions.

  • Scalable Router Service: It can scale up to handle high volumes of incoming requests, ensuring that task routing does not become a bottleneck.
  • Efficient Task Management: The Task Manager's ability to efficiently queue and monitor tasks plays a vital role in maintaining system performance even as the number of tasks increases.
  • High-Performance Executor Service: Optimized for rapid task execution, this service ensures that the processing time remains minimal, a critical factor in overall system responsiveness.
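Since the services run on Kubernetes (see the benchmark setup below), the dynamic scaling described above follows the standard Horizontal Pod Autoscaler rule: desired replicas scale with the ratio of observed to target utilization. The utilization figures in this sketch are invented for illustration.

```python
import math

def desired_replicas(current_replicas, current_util, target_util):
    """Kubernetes HPA rule:
    desiredReplicas = ceil(currentReplicas * currentUtil / targetUtil)
    """
    return math.ceil(current_replicas * current_util / target_util)

print(desired_replicas(4, 90, 60))  # 6: scale out under load
print(desired_replicas(6, 30, 60))  # 3: scale back in when load drops
```

In practice the autoscaler also respects the min/max pod limits configured per service, which is why the benchmark tables below list those bounds.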

In summary, Nected’s architecture is a balanced ecosystem of services, each designed to contribute to the platform's overall efficiency and scalability. This structure is pivotal in ensuring that Nected can handle increasing loads without compromising on performance.

Benchmark Setup

In this section, we delve into the detailed setup used for benchmarking the Nected platform, highlighting the configurations and parameters involved.

  1. Kubernetes Cluster on AWS: We established a Kubernetes cluster on Amazon Web Services (AWS) for the benchmarking process. This cluster utilized Cassandra as a persistent layer for task management. The primary objective was to benchmark a cluster capable of achieving approximately 300 requests per second (rps) throughput with the given Cassandra configurations.
  2. Service Configurations:
| Service | Min Pods | Max Pods | CPU | Memory | Additional Details |
|---|---|---|---|---|---|
| Router Service | 2 | 8 | 200m | 512MB | |
| Executor Service | 4 | 16 | 200m | 512MB | |
| Frontend Service (Temporal) | 4 | 10 | 400m | 512MB | |
| History Service (Temporal) | 8 | 8 | 1 core | 4GB | |
| Matching Service (Temporal) | 4 | 10 | 400m | 512MB | |
| Cassandra Cluster (Temporal) | N/A | N/A | N/A | 8GB | 3 c6g.2xlarge nodes, CMS garbage collector, replication factor 3, LOCAL_QUORUM consistency |
| Redis Cluster | N/A | N/A | N/A | N/A | Single t4g.medium node |
  3. Backend Logic and Test Parameters: The backend logic applied in the benchmarking tests was specifically designed to mirror realistic operational scenarios that Nected would encounter in a production environment. This approach was vital to ensure that the test results accurately reflected the platform's capabilities under actual usage conditions.
  • Rule Configuration: The primary component of the backend logic was a decision table consisting of six rows. This setup was chosen to simulate a fundamental yet essential type of logic flow that Nected handles.
  • Dataset and Database Integration: The rule was connected to a dataset and utilized PostgreSQL as the underlying database connector. This integration represented a common use-case scenario for Nected, involving data retrieval and processing tasks.
  • Caching Mechanism: Caching was enabled in the backend logic. This feature is crucial for performance, as it reduces the time taken to access frequently used data. It reflects a typical optimization strategy in backend systems to improve response times and reduce database load.
  • Objective of Backend Logic: The design of the backend logic for testing aimed to challenge Nected’s processing efficiency, data handling capabilities, and response to database interactions. By incorporating elements like rule processing, database connectivity, and caching, the test provided a comprehensive assessment of how the platform manages core backend functions.

In summary, the benchmarking setup for Nected was meticulously designed to test the platform under realistic and demanding conditions. By configuring various services and employing rigorous testing methodologies, the setup aimed to push the platform to its limits and evaluate its performance under different load scenarios.

Benchmarking Results and Analysis

In this section, we analyze the results from the load tests performed on the Nected platform. Each test was designed to incrementally increase the load, allowing us to observe how the system responds under different levels of demand.

Test 1: 200rps

  • Methodology: This test was conducted using JMeter with 200 threads, limiting request throughput to 12,500 per minute. A total of 100,000 requests were executed.
  • Resource Utilization:
| Service | # Pods | CPU | Memory |
|---|---|---|---|
| Router Service | 6 | 33% | 3% |
| Executor Service | 6 | 43% | 6% |
| Frontend Service | 7 | 39% | 18% |
| Matching Service | 5 | 40% | 12% |
  • Results: The average response time was 197ms, with a 95th percentile (P95) of 212ms and 0% error rate. The CPU usage of Cassandra’s cluster nodes remained below 100%.
  • Analysis: The test ran smoothly, indicating that the cluster could handle higher throughput. All services scaled up immediately without errors.
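For readers unfamiliar with the headline metrics, the average and P95 figures reported in each test are computed from the per-request latency samples. The samples below are made up for illustration; the real numbers came from JMeter's aggregate report.

```python
import math

# Ten fabricated latency samples (milliseconds).
samples_ms = [180, 190, 195, 196, 197, 200, 205, 210, 212, 215]

avg = sum(samples_ms) / len(samples_ms)
rank = math.ceil(0.95 * len(samples_ms))  # nearest-rank percentile method
p95 = sorted(samples_ms)[rank - 1]
print(avg, p95)  # 200.0 215
```

P95 is reported alongside the average because it exposes tail latency: a healthy average can hide a long tail, as Test 2 below illustrates.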

Test 2: 250rps

  • Methodology: Executed with JMeter using 250 threads, restricting throughput to 15,500 per minute. A total of 125,000 requests were processed.
  • Resource Utilization:
| Service | # Pods | CPU | Memory |
|---|---|---|---|
| Router Service | 7 | 53% | 3% |
| Executor Service | 8 | 58% | 5% |
| Frontend Service | 8 | 59% | 19% |
| Matching Service | 6 | 53% | 12% |
  • Results: The test yielded an average response time of 270ms and a P95 of 608ms, with no errors. Cassandra's CPU usage was again below 100%.
  • Analysis: The test proceeded smoothly, showing that the cluster was capable of scaling up further. The average response time increased, but the system remained healthy.

Test 3: 275rps

  • Methodology: Conducted with JMeter using 275 threads, limiting throughput to 17,000 per minute. A total of 100,000 requests were executed.
  • Resource Utilization:
| Service | # Pods | CPU | Memory |
|---|---|---|---|
| Router Service | 7 | 57% | 3% |
| Executor Service | 9 | 55% | 6% |
| Frontend Service | 8 | 63% | 20% |
| Matching Service | 6 | 60% | 12% |
  • Results: The average response time was 406ms and P95 was 531ms, with 0% error. The CPU usage of Cassandra’s nodes reached 100%.
  • Analysis: This test executed smoothly, with only the Executor Service scaling up. The fact that no other service scaled further while average response time rose significantly indicated that the cluster was nearing its limit.

Test 4: 300rps

  • Methodology: Performed with JMeter using 300 threads, restricting throughput to 18,500 per minute. A total of 105,000 requests were processed.
  • Resource Utilization:
| Service | # Pods | CPU | Memory |
|---|---|---|---|
| Router Service | 7 | 61% | 3% |
| Executor Service | 9 | 63% | 6% |
| Frontend Service | 8 | 64% | 23% |
| Matching Service | 6 | 61% | 15% |
  • Results: The test yielded an average response time of 718ms and P95 of 969ms, with no errors. Cassandra’s CPU usage was at 100% with a CPU queue length of approximately 4 per core.
  • Analysis: The cluster reached its limit during this test, as the increasing load did not prompt further scaling of services, and the Cassandra cluster experienced significant latency.

The benchmarking tests demonstrated that the Nected platform, with the given configurations, can successfully handle a load of approximately 300rps. The cluster of scalable services and the preconfigured Cassandra cluster showed robust performance, although the Cassandra cluster peaked at higher loads. The platform had auto-scaling enabled, with maximum limits on all services except the Cassandra and Redis clusters. The Cassandra cluster can also be configured to scale according to expected load; however, this exercise was intended only to benchmark a given cluster configuration under varying load.
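Putting the four runs side by side makes the saturation pattern explicit. The figures below are taken directly from the results reported above.

```python
# (target rps, avg response ms, p95 ms) from the four test runs above.
tests = [
    (200, 197, 212),
    (250, 270, 608),
    (275, 406, 531),
    (300, 718, 969),
]

for rps, avg, p95 in tests:
    print(f"{rps} rps: avg {avg} ms, p95 {p95} ms")

# Average latency grew ~3.6x between 200rps and 300rps while offered
# load grew only 1.5x: the classic signature of a saturating backend.
latency_growth = round(718 / 197, 1)
load_growth = round(300 / 200, 1)
print(latency_growth, load_growth)  # 3.6 1.5
```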


Conclusion

The load testing and performance benchmarking of the Nected platform with the given configuration have been instrumental in demonstrating its stability, performance, and scalability. These tests, conducted under steadily increasing operational loads, provide clear evidence of the platform's robustness and efficiency.

  1. Stability: Throughout the series of tests, from 200rps to 300rps, Nected exhibited remarkable stability. Even as the load increased, the platform maintained a 0% error rate across all tests. This consistent performance under escalating demand underscores the robustness of Nected’s architecture and its ability to handle high-traffic scenarios without compromising on reliability.
  2. Performance: The performance metrics observed during the tests were equally impressive. Starting with an average response time of 197ms at 200rps and gradually increasing to 718ms at 300rps, Nected demonstrated its capacity to process a high volume of requests efficiently. While there was an increase in response time as the load intensified, the platform continued to operate effectively, indicating a well-optimized system designed for high throughput.
  3. Scalability: One of the critical aspects evaluated in the benchmarking was the scalability of Nected. The platform’s ability to dynamically scale its services in response to increasing load was evident. As the tests progressed, services like the Executor and Router Service scaled up seamlessly to accommodate the higher demand.

Overall, the benchmarking exercise has demonstrated that Nected is a stable, high-performing, and scalable platform. It is well-equipped to handle significant operational loads, making it a reliable choice for users who require a robust no-code / low-code backend solution. The insights gained from these tests are invaluable, not only in affirming the current capabilities of Nected but also in guiding further enhancements to ensure that the platform continues to meet the evolving demands of its users.
