Introduction to Reactive Programming in Java with Project Reactor

In this talk, we will go through the following:

1. Introduction to Reactive Programming

  • What is reactive programming?
  • Key principles of reactive systems:
    • Responsive: Systems should respond in a timely manner.
    • Resilient: Systems should be fault-tolerant.
    • Elastic: Systems should scale as needed.
    • Message-driven: Systems should use asynchronous messaging.
  • Comparison between imperative programming and reactive programming.

2. Understanding Reactive Streams

  • Publisher, Subscriber, Subscription, and Processor interfaces.
  • The four key signals: onNext(), onComplete(), onError(), and onSubscribe().
  • Backpressure handling in reactive systems.

3. Introduction to Project Reactor

  • What is Project Reactor?
  • Key classes: Mono and Flux.
    • Mono: Represents 0 or 1 item.
    • Flux: Represents 0 to N items.
  • Non-blocking nature and how it helps in building scalable systems.

4. Building a Reactive Application with Project Reactor

  • Demonstrating how to use Mono and Flux.
    • Simple examples of creating and subscribing to reactive streams.
    • Combining streams, error handling, and transformations.
    • Operators: map(), flatMap(), filter(), zip(), etc.

5. Integrating Project Reactor with Spring WebFlux

  • Introduction to Spring WebFlux.
  • How WebFlux supports non-blocking, reactive applications.
  • Example: Building a reactive REST API using Spring WebFlux and Project Reactor.
  • Using reactive databases (e.g., MongoDB) with Spring Data Reactive.

6. Benefits and Challenges of Reactive Programming

  • Benefits: Better resource utilization, scalability, responsiveness.
  • Common pitfalls and challenges: Complexity, debugging, error handling, steep learning curve.

7. Conclusion and Q&A

  • When to use reactive programming and when not to.
  • Best practices for adopting reactive programming in real-world applications.

Demo (Optional):

  • A live demo of a simple reactive service using Mono and Flux, showcasing how non-blocking calls work in practice.


Here's a more detailed explanation with code samples for the tech talk on "Introduction to Reactive Programming in Java with Project Reactor":

1. What is Reactive Programming?

Definition: Reactive programming is a programming paradigm oriented around data streams and the propagation of change. In Java, reactive programming is realized with libraries like Project Reactor and RxJava.


2. Reactive Streams Example

Reactive Streams are the core building blocks of reactive programming. In Project Reactor, the main abstractions are:

  • Publisher: Emits items.
  • Subscriber: Consumes items.
  • Subscription: Manages the flow of data (including backpressure).
  • Processor: Acts as both a Publisher and Subscriber.
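
Example: A Custom Subscriber with Backpressure

As a minimal sketch of how the four interfaces and the onSubscribe(), onNext(), onComplete(), and onError() signals fit together (using Reactor's BaseSubscriber helper class and assuming the usual org.reactivestreams and reactor.core.publisher imports; the printed messages are purely illustrative):

java
// A Flux is the Publisher; BaseSubscriber is a convenient base class for a custom Subscriber
Flux<Integer> numbers = Flux.range(1, 5);

numbers.subscribe(new BaseSubscriber<Integer>() {
    @Override
    protected void hookOnSubscribe(Subscription subscription) {
        // onSubscribe(): the Subscription is where backpressure is expressed
        System.out.println("Subscribed, requesting 1 element");
        request(1);
    }

    @Override
    protected void hookOnNext(Integer value) {
        // onNext(): one element arrives; ask for the next only when ready
        System.out.println("Received: " + value);
        request(1);
    }

    @Override
    protected void hookOnComplete() {
        // onComplete(): the Publisher has finished emitting
        System.out.println("Done");
    }

    @Override
    protected void hookOnError(Throwable error) {
        // onError(): terminal error signal
        System.err.println("Error: " + error.getMessage());
    }
});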

3. Mono and Flux Overview

  • Mono: Represents a stream of 0 or 1 element.
  • Flux: Represents a stream of 0 to N elements.

Example: Creating a Mono and Flux

java
// Creating a Mono that emits a single element
Mono<String> monoExample = Mono.just("Hello, Mono!");

// Subscribing to the Mono
monoExample.subscribe(System.out::println);

// Creating a Flux that emits multiple elements
Flux<String> fluxExample = Flux.just("Spring", "Reactor", "Flux");

// Subscribing to the Flux
fluxExample.subscribe(System.out::println);

Output:

Hello, Mono!
Spring
Reactor
Flux

4. Operators in Project Reactor

Operators transform, filter, or combine data in a stream. They are similar to functional programming methods like map, flatMap, and filter.

Example: Using map and flatMap

java
// Transforming data using `map`
Flux<Integer> numbers = Flux.just(1, 2, 3, 4);
Flux<Integer> squaredNumbers = numbers.map(num -> num * num);
squaredNumbers.subscribe(System.out::println);

// Asynchronous flatMap example
Mono<String> nameMono = Mono.just("John");
Mono<String> greetingMono = nameMono.flatMap(name -> Mono.just("Hello, " + name));
greetingMono.subscribe(System.out::println);

Output:

1
4
9
16
Hello, John

Example: Error Handling with onErrorReturn

java
// Handling errors gracefully using `onErrorReturn`
Flux<String> errorExample = Flux.just("A", "B", "C")
        .concatWith(Flux.error(new RuntimeException("An error occurred")))
        .onErrorReturn("Error handled!");
errorExample.subscribe(System.out::println);

Output:

A
B
C
Error handled!
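
Example: Combining Streams with zip

The talk outline also lists zip() among the operators; here is a small sketch with made-up data showing how two streams are combined pairwise:

java
// Combining two streams element-by-element using `zip`
Flux<String> names = Flux.just("Alice", "Bob", "Carol");
Flux<Integer> scores = Flux.just(90, 85, 72);

Flux<String> combined = Flux.zip(names, scores,
        (name, score) -> name + " scored " + score);

combined.subscribe(System.out::println);

Output:

Alice scored 90
Bob scored 85
Carol scored 72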

5. Spring WebFlux Integration

Spring WebFlux is a reactive, non-blocking web framework that is part of the Spring ecosystem.

Example: Building a Reactive REST API

Here’s how you can build a simple reactive API using Spring WebFlux and Project Reactor.

  1. Controller:
java
@RestController
public class GreetingController {

    @GetMapping("/greet")
    public Mono<String> greet() {
        return Mono.just("Hello, Reactive World!");
    }

    @GetMapping("/numbers")
    public Flux<Integer> numbers() {
        return Flux.range(1, 10);
    }
}
  2. Application Class:
java
@SpringBootApplication
public class ReactiveApplication {
    public static void main(String[] args) {
        SpringApplication.run(ReactiveApplication.class, args);
    }
}

Request:

bash
GET /greet

Response:

json
"Hello, Reactive World!"

Request:

bash
GET /numbers

Response:

json
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

6. Benefits and Challenges of Reactive Programming

Benefits:

  • Non-blocking I/O: Allows more efficient use of system resources.
  • Backpressure: Subscribers can signal how much data they are ready to consume, so a fast producer cannot overwhelm a slow consumer (a small sketch follows this list).
  • Elasticity: Systems built with reactive programming scale easily with increased demand.
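
A minimal backpressure sketch (assuming the usual reactor.core.publisher imports; the numbers are arbitrary): limitRate() caps how many elements are requested from upstream at a time, and log() makes the bounded request(n) signals visible.

java
// limitRate caps upstream demand, so a fast producer cannot flood a slow consumer
Flux.range(1, 100)
    .log()          // logs the request(n) signals, showing bounded demand instead of request(unbounded)
    .limitRate(10)  // request elements from upstream in batches of at most 10
    .subscribe(System.out::println);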

Challenges:

  • Complexity: Reactive code can be harder to understand and debug, especially when managing backpressure and errors.
  • Steep Learning Curve: Requires developers to unlearn synchronous coding patterns.

7. Demo Idea

For a live demo, you can showcase:

  • A simple Spring Boot application that uses Mono and Flux.
  • Create two REST endpoints, one returning a Mono<String> and another returning a Flux<Integer>.
  • Use a delay operator like delayElements to simulate non-blocking, asynchronous behavior, which you can observe in logs.
java
@GetMapping("/delayed")
public Flux<String> delayedResponse() {
    return Flux.just("A", "B", "C", "D")
            .delayElements(Duration.ofSeconds(1));
}

The logs will show that the delay happens asynchronously without blocking the main thread.


Reactive programming is especially useful in scenarios where asynchronous, non-blocking behavior is required to handle large volumes of data or high levels of concurrency efficiently. Here are some real-world use cases where Project Reactor and reactive programming shine:

1. Real-Time Streaming Applications

  • Use Case: Financial market data streams, stock price tracking, or real-time bidding systems.
  • Why Reactive: These systems require handling large volumes of data with minimal latency. Reactive programming helps manage high-throughput data streams while keeping the system responsive by using non-blocking I/O.
  • Example: A stock trading platform that updates prices in real time to thousands of connected clients. Flux streams can push new price data to the clients without blocking threads.
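
As a rough sketch of such a price feed (the controller, endpoint, and pricing logic below are hypothetical, not taken from any real platform), a WebFlux endpoint can publish an infinite Flux over Server-Sent Events:

java
@RestController
public class PriceController {

    // Emits a new (randomly generated) price every second as Server-Sent Events.
    // Each connected client gets its own non-blocking subscription to the Flux.
    @GetMapping(value = "/prices", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> streamPrices() {
        return Flux.interval(Duration.ofSeconds(1))
                .map(tick -> "ACME: " + (100 + Math.random() * 10));
    }
}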

2. Highly Scalable REST APIs

  • Use Case: Social media platforms, online retail websites, or any high-traffic web service where thousands of users send requests simultaneously.
  • Why Reactive: Traditional blocking APIs can suffer performance issues under heavy load. Reactive APIs with Spring WebFlux can handle more concurrent users with the same hardware resources by not blocking threads while waiting for I/O operations (like database queries or API calls).
  • Example: An e-commerce platform where users browse and purchase items. Each API call (like retrieving product details) is handled asynchronously without blocking threads, leading to better scalability under high traffic.

3. Real-Time Notification Systems

  • Use Case: Notification services in messaging platforms, alerting systems, or collaboration tools.
  • Why Reactive: Reactive programming allows for push-based communication, where notifications are pushed to users as they occur, without polling the server.
  • Example: A real-time chat application where messages and notifications are pushed instantly to users via reactive streams (using WebSockets and Flux), ensuring low-latency communication.
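
One way to model this kind of push-based delivery in Reactor is a Sinks.Many that the application emits into while each connected client subscribes to the resulting Flux; the names in this sketch are illustrative:

java
// A sink that broadcasts each emitted message to all current subscribers
Sinks.Many<String> chatSink = Sinks.many().multicast().onBackpressureBuffer();
Flux<String> messages = chatSink.asFlux();

// Each connected user subscribes to the shared stream (e.g., via WebSocket or SSE)
messages.subscribe(msg -> System.out.println("User A received: " + msg));
messages.subscribe(msg -> System.out.println("User B received: " + msg));

// When a message is sent, it is pushed to every subscriber without any polling
chatSink.tryEmitNext("Hello from User C!");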

4. IoT (Internet of Things) Systems

  • Use Case: Smart home devices, industrial sensor networks, or connected cars.
  • Why Reactive: IoT systems often involve a large number of devices sending small packets of data asynchronously. Reactive programming can handle data streams from multiple devices efficiently without blocking.
  • Example: A smart city system where thousands of sensors send data (like traffic patterns, weather, and air quality) to a central server for real-time processing. Reactive streams allow the system to handle massive input rates with minimal latency and resource usage.
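
A rough sketch of folding many device streams into one pipeline (the sensors and their intervals are simulated):

java
// Two simulated sensors emitting readings at different rates
Flux<String> temperature = Flux.interval(Duration.ofMillis(500))
        .map(tick -> "temperature reading " + tick);
Flux<String> airQuality = Flux.interval(Duration.ofMillis(800))
        .map(tick -> "air quality reading " + tick);

// merge interleaves readings as they arrive, without blocking on either source
Flux.merge(temperature, airQuality)
        .subscribe(reading -> System.out.println("Processing " + reading));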

5. Real-Time Data Processing

  • Use Case: Data pipelines, log aggregators, or analytics platforms.
  • Why Reactive: In scenarios where large volumes of data need to be processed in real-time, reactive programming can help by processing events asynchronously as they arrive.
  • Example: A log aggregation system that collects logs from different services and processes them in real time for monitoring and alerting (e.g., detecting security threats or system failures). A Flux stream can ingest the logs and process them without blocking, ensuring the system can keep up with high event rates.
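
As a sketch, incoming log events can be grouped into small batches for downstream processing with bufferTimeout (the source and batch sizes here are illustrative):

java
// Simulated stream of log entries
Flux<String> logEvents = Flux.interval(Duration.ofMillis(10))
        .map(i -> "log-entry-" + i);

// Emit batches of up to 100 entries, or whatever has arrived after 1 second
logEvents
        .bufferTimeout(100, Duration.ofSeconds(1))
        .subscribe(batch -> System.out.println("Processing batch of " + batch.size() + " logs"));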

6. Reactive Database Access

  • Use Case: Applications requiring real-time interaction with databases, like recommendation systems, user dashboards, or analytics.
  • Why Reactive: Reactive database drivers (such as R2DBC for SQL databases or Spring Data Reactive MongoDB) allow applications to interact with databases without blocking, making it possible to handle many database requests concurrently.
  • Example: A personalized recommendation engine where multiple API calls are made to retrieve user preferences and product details. With reactive programming, these database calls are non-blocking, allowing the system to remain responsive while fetching and processing data.
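
A sketch of combining two independent, non-blocking repository calls (the repositories, identifiers, and Recommendation type are hypothetical):

java
// Both queries run without blocking; zip combines the results once both complete
Mono<UserPreferences> preferences = preferencesRepository.findByUserId(userId);
Mono<Product> product = productRepository.findById(productId);

Mono<Recommendation> recommendation = Mono.zip(preferences, product)
        .map(tuple -> new Recommendation(tuple.getT1(), tuple.getT2()));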

7. Live Dashboards and Monitoring Systems

  • Use Case: Real-time monitoring and alerting for DevOps, infrastructure, or business metrics.
  • Why Reactive: Dashboards that need to display live data from various sources (like server logs, application metrics, or business KPIs) benefit from the ability to push updates in real time without polling.
  • Example: A DevOps monitoring dashboard that shows real-time server health metrics. As servers report their status, the dashboard is updated via reactive streams, ensuring that users always see the latest data.

8. Reactive Microservices

  • Use Case: Systems built on microservices architecture where services need to communicate asynchronously.
  • Why Reactive: In microservices architectures, services often need to communicate over the network. Using reactive programming with non-blocking I/O allows for efficient inter-service communication, even under high load.
  • Example: A banking system where microservices handle user accounts, transaction processing, and notifications. These microservices interact via reactive streams, ensuring that requests between services do not block threads and can handle concurrent requests efficiently.
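
For non-blocking calls between services, Spring's reactive WebClient is the usual tool; here is a sketch (the service URL and payload type are hypothetical):

java
// A non-blocking HTTP call from one microservice to another
WebClient webClient = WebClient.create("http://notification-service");

Mono<Void> notify = webClient.post()
        .uri("/notifications")
        .bodyValue(new TransactionEvent(accountId, amount)) // hypothetical payload type
        .retrieve()
        .bodyToMono(Void.class); // completes when the remote call finishes, without blocking a thread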

9. Server-Sent Events (SSE) or WebSockets

  • Use Case: Real-time data feeds, multiplayer games, or live score updates.
  • Why Reactive: SSE or WebSockets require long-lived connections where updates are pushed to clients as soon as they are available. Reactive streams handle these long-lived connections efficiently by only consuming resources when needed.
  • Example: A live sports score service that updates users as soon as new scores or events happen. The scores are streamed to the clients using WebSockets or SSE via a Flux, ensuring real-time updates.

Summary of Benefits in Real-World Use Cases

  • Improved Resource Utilization: By avoiding blocking operations, reactive programming helps systems handle a large number of concurrent requests with fewer resources.
  • Scalability: Reactive systems can scale more easily by using non-blocking I/O and processing streams of data asynchronously.
  • Responsiveness: Reactive systems remain highly responsive even under load, making them ideal for real-time applications.
  • Fault Tolerance: Reactive systems are often built with resilience and backpressure in mind, improving their ability to handle failures gracefully.

Reactive programming is particularly useful in modern architectures, where the ability to handle concurrent, distributed, and high-volume workloads is essential.

Reactive programming is a powerful tool, but it's not the right solution for every problem. There are several scenarios where using reactive programming may introduce unnecessary complexity or lead to performance drawbacks. Here are the cases where you should avoid it:

1. Simple, Synchronous Workloads

  • When to Avoid: If your application primarily performs synchronous, blocking operations (like simple CRUD operations on a local database or file I/O), reactive programming is unnecessary.
  • Why: Reactive programming introduces complexity with its asynchronous nature. For applications where blocking threads is acceptable and won't cause scalability issues (e.g., small applications with low traffic), traditional imperative programming is simpler and easier to maintain.
  • Example: A simple internal HR application where a few employees perform basic CRUD operations on a relational database. The blocking nature of JDBC in a low-traffic environment is sufficient and easy to manage.

2. CPU-Bound Operations

  • When to Avoid: If the bulk of your work involves CPU-bound tasks, such as complex calculations, data transformation, or image processing, reactive programming might not provide a significant performance boost.
  • Why: Reactive programming excels at managing I/O-bound tasks where you wait for external systems (like databases or remote services). For CPU-bound tasks, where computation is the bottleneck, you won't benefit much from non-blocking behavior. Thread pooling or parallel processing techniques (like Java's Fork/Join framework) are better suited for CPU-bound work.
  • Example: An application that performs intensive image processing or encryption. Here, the CPU is the bottleneck, so reactive programming will add complexity without improving performance.

3. Legacy Systems and Blocking APIs

  • When to Avoid: If your system heavily relies on legacy libraries or blocking APIs (e.g., traditional JDBC, old third-party libraries), using reactive programming can be counterproductive.
  • Why: Blocking APIs don't fit well into reactive programming. Wrapping them in reactive abstractions (like Mono and Flux) doesn't change their blocking nature, so you'll end up with reactive code that doesn't provide the benefits of non-blocking behavior.
  • Example: An application that interacts with a legacy relational database using traditional JDBC. Since JDBC is inherently blocking, trying to make it reactive through a wrapper (e.g., using reactive schedulers) adds complexity without real performance gains.
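
If a blocking call truly cannot be avoided, the usual Reactor workaround is to isolate it on a scheduler intended for blocking work; a sketch (blockingJdbcLookup is a hypothetical method), keeping in mind that this only contains the blocking call rather than making it non-blocking:

java
// Wraps a blocking JDBC lookup so it runs on a scheduler dedicated to blocking tasks.
// The event loop stays free, but the call still occupies one worker thread while it blocks.
Mono<Product> product = Mono.fromCallable(() -> blockingJdbcLookup(productId))
        .subscribeOn(Schedulers.boundedElastic());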

4. Small or Simple Applications

  • When to Avoid: For small, straightforward applications with low concurrency needs, the overhead of setting up and maintaining a reactive architecture may not be justified.
  • Why: The complexity of reactive programming may outweigh its benefits in small-scale applications. Writing reactive code can be harder to reason about, debug, and maintain. For simple applications with low performance demands, traditional approaches are more than sufficient.
  • Example: A personal blog website or a small company’s internal dashboard. These systems typically don't have high performance demands, so the extra complexity of reactive programming is unnecessary.

5. Applications with No High Concurrency Needs

  • When to Avoid: In applications where high concurrency or responsiveness is not a primary requirement, reactive programming adds unnecessary complexity.
  • Why: Reactive programming is built to handle high concurrency with non-blocking I/O. If your application serves few users or runs tasks that don’t involve heavy concurrency, using imperative programming is simpler and less error-prone.
  • Example: A payroll system that processes payments once a month. Since it only processes a limited amount of data at a scheduled time, there’s no need for a reactive, non-blocking approach.

6. Complex Debugging and Error Handling

  • When to Avoid: If your team is not experienced with asynchronous, reactive code and your application has complicated error handling or business logic.
  • Why: Debugging and maintaining reactive code can be significantly more difficult than with traditional, imperative code. Errors may propagate differently in reactive streams, making it harder to trace problems, especially for teams new to this paradigm.
  • Example: A financial services application with highly complex business logic and intricate error-handling requirements. The added complexity of reactive programming could make it difficult to ensure correctness, auditability, and compliance.

7. Real-Time Latency-Critical Systems (Low-Latency)

  • When to Avoid: If you’re building systems where real-time responses with minimal latency are critical (e.g., high-frequency trading).
  • Why: The abstraction of reactive frameworks like Project Reactor may introduce slight overhead due to context switching, deferred execution, and scheduling. In ultra-low-latency environments, where even microsecond delays matter, reactive programming might not be fast enough.
  • Example: A high-frequency trading platform that executes trades in microseconds. In such scenarios, a more direct approach using low-level constructs and optimizing for the fewest possible layers of abstraction might be preferable.

8. Steep Learning Curve for Teams

  • When to Avoid: If your development team is unfamiliar with the reactive paradigm and there is limited time for training or upskilling.
  • Why: Reactive programming requires a shift in thinking compared to traditional imperative programming. If the team isn’t familiar with it, the learning curve can slow down development, lead to confusion, and increase the likelihood of bugs. Training a team on reactive programming might take time and resources.
  • Example: A development team tasked with quickly delivering a new feature or system. If the team has no prior experience with reactive programming, it may be better to stick with familiar, imperative patterns to meet deadlines.

Conclusion: When Not to Use Reactive Programming

You should avoid reactive programming if:

  • Your workload is mostly synchronous or CPU-bound.
  • You rely on blocking legacy systems and APIs.
  • Your application is small, simple, or has no high concurrency requirements.
  • Your team lacks experience in reactive programming and cannot afford the overhead of learning and debugging.
  • You're building ultra-low-latency systems where every microsecond counts.

In these cases, sticking with traditional, imperative programming is often simpler, easier to debug, and provides enough performance for the scale of the application.


Question:
So basically, we use reactive programming when we do not need the user to trigger an event or input?

Not exactly. Reactive programming is not about avoiding user-triggered events or inputs, but rather about how events, data, and inputs are handled asynchronously and without blocking.

In reactive programming, you can still have user-triggered events like HTTP requests, button clicks, or form submissions. The key difference lies in how these events are processed and how the system reacts to them.

Key Points to Clarify:

  1. Reactive Programming Handles Both User and System Events Asynchronously:

    • Reactive programming shines when there are events or inputs that need to be processed without blocking resources, regardless of whether these events are triggered by users or systems.
    • Example: A user clicks a button to submit a form, which triggers a database query. Instead of waiting for the database to respond (blocking), the system handles this request reactively, allowing other tasks to continue in parallel.
  2. Reactive Programming is Useful When I/O Operations are Asynchronous:

    • Reactive programming is best for scenarios where the system needs to handle many concurrent tasks (like database calls, API requests, file I/O) efficiently without blocking threads.
    • Example: A user uploads a large file to a server. While the server processes the file asynchronously, it can still handle other requests.
  3. Reactive Programming is Event-Driven:

    • Reactive systems react to events (which can be user-generated, like clicks, or system-generated, like incoming data from a database or API) and respond in a non-blocking, asynchronous way.
    • Example: A live chat application where users send messages (user-triggered events). The system reacts to these events by pushing the messages to other users in real-time via a reactive stream.
  4. Real-Time and Asynchronous User Interactions:

    • Reactive programming is actually a good fit for user-triggered interactions that require real-time feedback, such as live dashboards, notifications, or chat applications. These systems push updates to users without requiring constant polling or waiting for user actions.
    • Example: A social media feed that automatically updates with new posts in real-time. The system reacts to incoming data and pushes it to the user without needing the user to manually refresh the page.

When Reactive Programming is Helpful (Even for User-Triggered Events):

  1. Web Applications Handling High Concurrency:

    • Example: A web API that serves thousands of users concurrently. Each request from users (like viewing a product page) can be handled asynchronously, allowing the server to scale and remain responsive under load.
  2. Real-Time User Interfaces:

    • Example: A live dashboard that shows real-time stock market data or game scores. The system continuously updates the interface without user intervention, but it's still responding to user-triggered actions like requests to subscribe to different data streams.
  3. Non-Blocking User Requests:

    • Example: A user submits a form on an e-commerce site. Instead of blocking the thread while waiting for the payment system or database to respond, the application can process the request asynchronously, improving scalability and responsiveness.

When Reactive Programming is Less Useful:

  • Simple User Interfaces: If the application is small, with minimal interactions and few users, or if the workload is not heavily asynchronous (e.g., CRUD operations), you might not need reactive programming.

To Summarize:

Reactive programming is not about avoiding user-triggered events but about how those events are handled—using non-blocking, asynchronous techniques. It’s most useful when the system needs to handle many concurrent operations efficiently, whether triggered by users or other systems, and ensure responsiveness at all times.

Let's walk through a detailed example of how a user-triggered event is handled reactively in a Spring Boot application using Spring WebFlux.

Scenario:

A user submits a request to retrieve the details of a product from a database. The application processes this request reactively, fetching the product details asynchronously from the database, and then returning the response without blocking any threads.


Step-by-Step Example:

1. Setting Up the Reactive Controller

When a user sends a GET request to the /product/{id} endpoint, the server retrieves the product details reactively.

java
@RestController
@RequestMapping("/products")
public class ProductController {

    private final ProductService productService;

    public ProductController(ProductService productService) {
        this.productService = productService;
    }

    @GetMapping("/{id}")
    public Mono<ResponseEntity<Product>> getProductById(@PathVariable String id) {
        return productService.getProductById(id)
                .map(product -> ResponseEntity.ok(product))
                .defaultIfEmpty(ResponseEntity.notFound().build());
    }
}
  • Explanation:
    • The method getProductById() is mapped to a GET request.
    • The service call productService.getProductById(id) returns a Mono<Product>, which is a reactive stream that represents the asynchronous response.
    • .map(product -> ResponseEntity.ok(product)): If the product is found, we return the Product wrapped in an HTTP 200 OK response.
    • .defaultIfEmpty(ResponseEntity.notFound().build()): If the product is not found, we return a 404 Not Found response.

2. Creating the Reactive Service

The service handles fetching the product data from a repository (in this case, a reactive database using R2DBC or Spring Data Reactive MongoDB).

java
@Service
public class ProductService {

    private final ProductRepository productRepository;

    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    public Mono<Product> getProductById(String id) {
        return productRepository.findById(id);
    }
}
  • Explanation:
    • The getProductById() method returns a Mono<Product>, meaning it will either emit a single Product or complete with no value if the product isn’t found.
    • The method findById(id) is non-blocking and provided by a reactive database (like MongoDB or PostgreSQL using R2DBC).

3. Reactive Repository (R2DBC for Non-blocking DB Calls)

The repository layer uses R2DBC (Reactive Relational Database Connectivity) to perform non-blocking database operations.

java
@Repository
public interface ProductRepository extends ReactiveCrudRepository<Product, String> {
    // Spring Data automatically generates reactive CRUD methods
}
  • Explanation:
    • ReactiveCrudRepository is a Spring Data interface that allows CRUD operations in a non-blocking, reactive way.
    • This ensures that the database query to find the product is asynchronous and does not block the thread handling the HTTP request.

Reactive Flow Breakdown:

  1. User Request:

    • The user sends a request to /products/{id} to retrieve the details of a product.
    • This triggers the getProductById() method in the ProductController.
  2. Non-Blocking Service Call:

    • The controller calls the productService.getProductById(id), which in turn makes a non-blocking call to the database using the ProductRepository.
  3. Database Query:

    • The ProductRepository.findById(id) fetches the product reactively using R2DBC or another reactive driver, without blocking the thread.
    • While the query is being processed, the server can handle other requests or perform other tasks, ensuring scalability.
  4. Handling the Response:

    • Once the database responds, the Mono<Product> is completed with the product data.
    • The response is then sent back to the client without blocking any other operations.

Reactive Non-Blocking Database Call:

Unlike traditional blocking calls (like with JDBC), which would block a thread while waiting for the database to respond, R2DBC allows you to fetch data reactively:

  • Traditional Blocking (JDBC): The thread that handles the request waits for the database to respond, wasting resources during this waiting time.
  • Reactive Non-Blocking (R2DBC): The thread handling the request is freed up while waiting for the response, and the response is processed once the data arrives.

Adding Asynchronous Behavior with Delay (Simulating Long Processing)

To demonstrate the non-blocking nature more clearly, let’s simulate a delayed response (as if the database took a long time to respond):

java
@GetMapping("/{id}")
public Mono<ResponseEntity<Product>> getProductByIdWithDelay(@PathVariable String id) {
    return productService.getProductById(id)
            .delayElement(Duration.ofSeconds(3)) // Simulating a 3-second delay
            .map(product -> ResponseEntity.ok(product))
            .defaultIfEmpty(ResponseEntity.notFound().build());
}
  • Explanation:
    • Here, we’ve added a delay of 3 seconds using .delayElement(Duration.ofSeconds(3)).
    • During this time, the thread handling the HTTP request is not blocked, and the server can continue processing other requests.

Benefits of Reactive Handling for User-Triggered Events:

  • Improved Resource Utilization: While waiting for the database, the server can handle other requests, making better use of resources.
  • High Concurrency: Many user requests can be handled concurrently without the need for creating multiple threads, thanks to non-blocking I/O.
  • Better Scalability: Reactive systems scale more easily to handle a large number of user-triggered events (like API requests) under high load.

Conclusion:

Even though the user triggered the event (HTTP GET request), the system handles it reactively—processing the request asynchronously without blocking resources. This approach is highly beneficial for scenarios involving high concurrency and I/O-bound operations, where responsiveness and scalability are important.

Question: But the user's browser is waiting for the HTTP API call, so it is synchronous, right?

Great question! While the user’s browser is indeed waiting for the HTTP response, that doesn't mean the entire operation is synchronous on the server side. The synchronous or asynchronous nature refers to how the server processes the request, not how the client waits for the response.

Key Difference:

  • Client-Side (Browser): The user’s browser always waits for the response, whether it's synchronous or asynchronous on the server. From the browser's perspective, it sends a request and waits for the server's response. This waiting is indeed synchronous on the client side.

  • Server-Side (Backend): The server-side can handle the request asynchronously or synchronously:

    • Synchronous: In a traditional synchronous setup (e.g., using blocking I/O like Servlet API or JDBC), the server blocks a thread while waiting for database queries or I/O operations to complete. During this waiting period, the thread cannot do any other work.
    • Asynchronous (Reactive): In a reactive setup (like with WebFlux and R2DBC), the server does not block while waiting for operations like database access or other I/O. Instead, the request processing is non-blocking, freeing up resources to handle more requests concurrently.

Reactive vs Synchronous from the Server's Perspective:

  1. Synchronous (Blocking):

    • Traditional flow: User triggers an event -> Server allocates a thread -> Server waits (blocking) for the database or I/O -> Server sends response -> Thread is released.
    • The server thread is blocked during I/O operations like database calls, reducing scalability.
  2. Asynchronous (Non-Blocking, Reactive):

    • Reactive flow: User triggers an event -> Server initiates a non-blocking operation (e.g., DB call) -> Server continues handling other requests -> When the data is ready, the server sends a response to the user without having blocked any threads.
    • The server does not block threads while waiting for I/O operations, making it more scalable. The user still waits, but the server can handle many more requests simultaneously.

Why Does It Matter?

Even though the user’s browser is waiting for the response (synchronously from their perspective), the server-side asynchrony is crucial for:

  • Scalability: The server can handle thousands of concurrent requests without requiring a 1:1 mapping of requests to threads.
  • Efficiency: Non-blocking I/O allows the server to process other requests while waiting for a response from the database or other external services.
  • Resource Utilization: Instead of keeping server threads blocked, resources can be allocated to process more user requests, which is especially useful in high-load systems (like APIs, e-commerce platforms, etc.).

Example Breakdown:

Let’s break down the earlier reactive example with a non-blocking database call to make this clearer:

  1. User's Perspective:

    • The user makes a request to the API: GET /products/1234.
    • The browser waits for the response. The waiting is synchronous in the sense that the browser won’t do anything until the server responds.
  2. Server's Perspective (Reactive):

    • The server receives the request.
    • Instead of blocking a thread for a database call, the server uses a reactive, non-blocking method to fetch the product data.
    • The server continues handling other requests while it waits for the database to respond.
    • Once the database responds (asynchronously), the server processes the result and sends the response to the user’s browser.

How the User Perceives It:

  • From the user's point of view: They experience a normal HTTP request-response cycle. They don’t see or care whether the server used synchronous or asynchronous methods internally—they just get the response when it's ready.

  • From the server’s point of view: By using asynchronous, non-blocking operations, the server is free to handle many requests simultaneously, leading to better performance, especially under high traffic.


Summary:

  • User Perspective: The user/browser waits for the HTTP response. From their point of view, it seems synchronous.
  • Server Perspective: The server can handle the user request asynchronously (non-blocking), using reactive programming to make efficient use of resources, handle more requests, and improve scalability.

So, while the user's browser is indeed "waiting," the important distinction is that the server doesn’t have to wait synchronously and block resources during that time.


Question: So the HTTP request did not block a thread on the server side either? Then how does the server remember which HTTP request it has to respond to?

Exactly! Even though the HTTP request is handled asynchronously and without blocking on the server side, the server still remembers which HTTP request needs a response. This is managed by a combination of reactive programming techniques and an event-driven model. Here's how the server achieves this without blocking a thread:



Key Concepts:

  1. Event Loop & Non-Blocking I/O:

    • The server (e.g., using Netty with Spring WebFlux) utilizes an event loop architecture, which is capable of handling a large number of requests asynchronously.
    • When an HTTP request comes in, the server does not block a thread while waiting for long-running operations (like I/O, database calls, etc.). Instead, it registers a callback or continuation that gets triggered once the response is ready (e.g., when the database query completes).
  2. Reactive Streams (Mono/Flux):

    • In a reactive system, responses are wrapped in reactive types like Mono (for a single value) or Flux (for multiple values). These reactive types represent an eventual result, meaning the server knows that a response is pending and will be sent back once the asynchronous task completes.
    • The Mono or Flux holds the reference to the original request, ensuring that when the result is ready, the server sends the response to the correct client.
  3. Deferred Execution (Callback Mechanism):

    • While the server waits for the database call (or any I/O-bound task), it does not hold on to a thread. Instead, it registers a callback or continuation for when the result is ready.
    • Once the non-blocking operation completes (e.g., a database query), the event loop resumes the processing of the original HTTP request, using the result of the asynchronous operation to generate the response.
  4. Context Propagation:

    • The server maintains context information for each HTTP request. This context includes the original request details (like the request ID, user session, etc.), which are passed along with the reactive streams.
    • Even though no thread is blocked, the system remembers the request by keeping track of its context, which is reactivated once the response is ready.
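
Reactor also provides an explicit, subscription-scoped Context that travels with the stream rather than with a thread; here is a small sketch (the key name is illustrative, and it assumes the reactor.util.context.Context import):

java
// The Context is attached to the subscription, not to a thread,
// so request-scoped data survives asynchronous boundaries
Mono<String> response = Mono.deferContextual(ctx ->
                Mono.just("Handling request " + ctx.get("requestId")))
        .contextWrite(Context.of("requestId", "1234"));

response.subscribe(System.out::println); // prints: Handling request 1234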

Step-by-Step Example of Non-Blocking Request Handling:

Let’s walk through how a reactive server remembers and responds to an HTTP request in a non-blocking way:

1. HTTP Request Received:

  • A user sends a GET request to /products/1234.
  • The server receives this request. Normally, a thread would be assigned to handle it, but in a reactive system, the request is registered with the event loop.

2. Non-Blocking Database Call:

  • The server starts querying the database to get the product details.
  • Instead of blocking the thread and waiting for the response, it registers a callback (e.g., using Mono or Flux) that will be executed when the database responds.

3. Context and Continuation:

  • The context of the request (e.g., request ID, user session, etc.) is stored in memory, along with the reactive stream (e.g., Mono<Product>).
  • The server knows that the request is "in-flight" and will be resumed once the database returns the result.

4. Handling Other Requests:

  • While the server waits for the database, the event loop continues processing other requests without being blocked. It can handle many such requests concurrently.

5. Database Response (Callback Triggered):

  • When the database query completes, the event loop is notified, and the callback or continuation that was registered with the Mono<Product> is triggered.
  • The original context (i.e., which HTTP request this corresponds to) is restored, and the response is processed.

6. Send Response to User:

  • The server sends the HTTP response back to the user who made the request, using the context to match the response to the correct request.

How the Server "Remembers" the Request:

  • Non-blocking I/O: The request is never forgotten; it’s deferred in a non-blocking way using the event loop mechanism. The server uses callbacks or promises (Mono/Flux) to store the fact that a response is pending, and it resumes processing once the operation (like a database query) completes.

  • Context Preservation: Each request has a context that includes metadata like request IDs, headers, or user sessions. Even when no thread is actively working on that request, the context is stored, allowing the server to send the right response when the time comes.

  • Reactive Types (Mono/Flux): These reactive types represent streams of data. They store the result of an asynchronous computation and know how to complete the request-response cycle once the data becomes available.


Illustration Using Project Reactor:

Let’s look at how Project Reactor helps manage this:

java
@GetMapping("/product/{id}")
public Mono<ResponseEntity<Product>> getProductById(@PathVariable String id) {
    // A non-blocking DB call using Mono
    return productService.getProductById(id)
            .map(product -> ResponseEntity.ok(product)) // Once the data is available
            .defaultIfEmpty(ResponseEntity.notFound().build());
}
  1. Mono<Product>:
    • When the Mono<Product> is returned, the server knows that it is waiting for a result. It doesn’t block a thread but simply waits for the Mono to be completed.
  2. Non-blocking Context:
    • The original HTTP request context is saved while the server continues handling other tasks.
  3. Response Handling:
    • When the Mono<Product> is completed (e.g., the database query finishes), the context is reloaded, and the correct response is sent back to the client.

Conclusion:

Even though the server doesn't block a thread while waiting for I/O, it uses reactive streams (like Mono/Flux), callbacks, and an event-driven model to keep track of which HTTP requests are awaiting responses. This is how it "remembers" which HTTP request corresponds to which operation, ensuring that each response is sent back to the correct client without blocking any threads.
