Reactive programming is steadily gaining ground. From RxJS to Akka Streams, reactive libraries have emerged in many programming languages to promote the reactive paradigm’s principles, as laid out in the Reactive Manifesto.
Reactive programming is often explained in overly theoretical terms. According to the Reactive Manifesto, reactive systems are Responsive, Resilient, Elastic and Message-Driven. Responsiveness concerns response time and throughput. Resiliency concerns fault tolerance and availability. Elasticity is the ability to support different loads at different times, which may impact performance, availability and operational expenses. And being Message-Driven means the system behaves according to messages passed from one component to another, setting clear boundaries between components while keeping the interactions among them transparent.
A note on streams: streams are an abstraction over unbounded data, and they are often contrasted with batches. Imagine we have two integers. Applying a common function to them, such as the sum operator (+), sounds feasible. Now, what happens if the operands won’t stop coming because, for example, they originate from the input of a user who is interacting with an application in real time? Then the sum becomes a challenge. Streams offer a different approach to this problem. We will dive deeper into composing streams later on.
Event streams are particularly powerful in programming languages that support functional programming to a certain degree. Multi-paradigm languages, like C# or Scala, have built-in abstractions to represent data streams. In this article, we will explore one particular implementation of reactive streams using modern Java and Project Reactor. It will serve as the service layer of a web application. In future articles we will discuss how this service layer can interoperate with a WebFlux web access layer. Follow along!
To follow this guide, make sure you have:
After completing this guide you should be able to:
In this guide we will use Java, Gradle, Spring Boot and IntelliJ CE.
We will create a proof of concept (PoC) of a user-interaction processing microservice. We will focus on the service layer and stub out both the web access layer (which we will implement with WebFlux in another article) and the data access layer.
To get started, create a basic Spring application with Gradle. You can do it using Spring Initializr, by setting it up manually, or by cloning our starter project on GitHub. Maven works as well, but we prefer Gradle’s succinct syntax over XML. We will fetch our dependencies from Mvn Repository. Let us briefly present the principal dependencies used in our project:
Our build.gradle dependencies block looks like this:
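As a rough sketch (the exact artifacts and versions depend on your setup and on Spring Boot’s dependency management), the block might resemble the following:

```groovy
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    implementation 'io.projectreactor:reactor-core'

    // Lombok is only needed at compile time, along with its annotation processor.
    compileOnly 'org.projectlombok:lombok'
    annotationProcessor 'org.projectlombok:lombok'

    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'io.projectreactor:reactor-test'
}
```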
First, let’s grasp some of Lombok’s features. Two of our favorites are the built-in Builder pattern and the @Data classes. With these, we can write a simple model class in a file called DomElement.java inside the models package.
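A minimal sketch of such a class might look like the following; the exact fields (id, type, label) are illustrative, and the important part is the pair of Lombok annotations:

```java
package models; // adjust to your base package

import lombok.Builder;
import lombok.Data;

// A DOM element a user can interact with. Field names are illustrative.
@Data
@Builder(setterPrefix = "with")
public class DomElement {
    private String id;
    private String type;  // e.g. "button", "input", "link"
    private String label;
}
```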
The @Data Lombok annotation spares us from writing constructors, accessors, mutators, equality comparators and toString methods. Lombok’s @Data feature behaves similarly to Java records, although they are not exactly the same. @Data classes can be fine-tuned; the customization options can be found in Lombok’s documentation. For example, fields marked as transient won’t be taken into account for equality comparisons or hash codes.
The @Builder annotation enables us to avoid creating numerous overloaded constructors or cluttering our classes with excessive setters. Instead, it allows us to use the Builder pattern and chain-call our setters with some syntactic sugar on top, ending up with an elegant Fluent Interface. Baeldung has an article worth a look on combining the Builder pattern with Fluent Interfaces.
Because of the scope we gave Lombok in our Gradle dependencies, the methods generated by the Builder are available at development time. Take a look at the setterPrefix option we added to our builder.
Instead of calling traditional void setters, we can call the corresponding with[Attribute] method and chain the calls. Builder setters return the builder itself, so they are composable and can be chain-called. To obtain the object being built, the method chain must end with .build().
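For example, assuming the illustrative fields above, constructing an element reads like a small fluent sentence:

```java
DomElement submitButton = DomElement.builder()
        .withId("submit-btn")
        .withType("button")
        .withLabel("Submit")
        .build();
```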
Also add a UserInteraction model. The purpose of this class is to represent some user interaction with a DOM element. We will model this as a wrapper class for a DomElement field and a userId field, which is a string. Create a UserInteraction.java file in the models package and make it look like this:
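A sketch consistent with that description; the timestamp field is an assumption on our part, added because we will want to filter interactions by time later on:

```java
package models; // adjust to your base package

import java.time.Instant;

import lombok.Builder;
import lombok.Data;

// A single user interaction with a DOM element.
@Data
@Builder(setterPrefix = "with")
public class UserInteraction {
    private DomElement domElement;
    private String userId;
    private Instant timestamp; // assumed field, used for time-based filtering
}
```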
Now we can get started with the service itself. We will stub out the source of user interactions (in this context, the controller that feeds them to the service layer) by hard-coding some Java Beans. We will hold the user interactions in some convenient data structure. Later on, we will wire that structure into our application so we can easily replace it with an API controller listening to some front-end component.
It would be interesting to filter user interactions: for example, listing only the interactions that occurred between two timestamps, or filtering them by DomElement type. Let’s design this behavior by contract in our service.
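A first, non-reactive version of that contract could look like this (the method names and the use of Instant for timestamps are illustrative):

```java
package services; // adjust to your base package

import java.time.Instant;
import java.util.List;

import models.UserInteraction;

// A blocking, collection-based contract: every query returns a fully
// materialized list of interactions.
public interface UserInteractionService {
    List<UserInteraction> getAllUserInteractions();
    List<UserInteraction> getUserInteractionsBetween(Instant from, Instant to);
    List<UserInteraction> getUserInteractionsByElementType(String elementType);
}
```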
This is almost the way to go, but we are missing the reactive aspect of things. In order to make this contract reactive, we are going to make use of one of the key concepts in Project Reactor: the Flux.
In the Project Reactor Flux documentation we can find the following illustration, clearly explaining how Flux works.
Flux and Mono are the main Reactive Streams abstractions used by Reactor. Mono is similar to Flux, but it emits at most one element. Most Flux and Mono operators take a Flux or Mono as input and return a new one, which makes them composable. Look at the following example:
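A minimal snippet along those lines:

```java
import reactor.core.publisher.Flux;

String[] names = {"ada", "grace", "linus"};

// fromArray emits each element of the array; map transforms every emitted
// element, producing a new Flux with the uppercased strings.
Flux<String> upperCased = Flux.fromArray(names)
        .map(String::toUpperCase);

upperCased.subscribe(System.out::println); // prints ADA, GRACE, LINUS
```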
This code snippet creates a Flux from an array and then transforms every element initially present in the string array to uppercase, producing a new Flux. Fluxes can be created from Reactor-native sources and from most Java iterables. Note that most Flux operations can be chained together in a fluent fashion, just like the Builder we defined earlier on.
The reactive contract for our UserInteractionService class will look like this:
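Under the same illustrative method names as before, the reactive version simply swaps List for Flux:

```java
package services; // adjust to your base package

import java.time.Instant;

import models.UserInteraction;
import reactor.core.publisher.Flux;

// The same contract as before, but every query now returns a Flux: an
// asynchronous, possibly unbounded stream of interactions.
public interface UserInteractionService {
    Flux<UserInteraction> getAllUserInteractions();
    Flux<UserInteraction> getUserInteractionsBetween(Instant from, Instant to);
    Flux<UserInteraction> getUserInteractionsByElementType(String elementType);
}
```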
Let’s build an implementation for this interface. We will start with a simple implementation backed by stub data, and refactor it as we go to keep it tidy, leverage Spring capabilities and foster maintainability. Start by adding some test data. We will help you with that!
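For instance (the concrete values here are made up and only need to exercise the filters):

```java
// Inside the service implementation; imports: java.time.Instant, java.util.List.
List<UserInteraction> interactions = List.of(
        UserInteraction.builder()
                .withUserId("user-1")
                .withTimestamp(Instant.parse("2021-06-01T10:15:30Z"))
                .withDomElement(DomElement.builder()
                        .withId("submit-btn").withType("button").withLabel("Submit")
                        .build())
                .build(),
        UserInteraction.builder()
                .withUserId("user-2")
                .withTimestamp(Instant.parse("2021-06-01T10:16:00Z"))
                .withDomElement(DomElement.builder()
                        .withId("email-input").withType("input").withLabel("Email")
                        .build())
                .build());
```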
These are the stub user interactions our system will work with in this PoC. Refactor that by wrapping all that hard-coded data in a private method, so our public methods aren’t flooded with stub data initialization.
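One way that refactored implementation could look, keeping the stub data inside a private supplyData method (the class and method names are our own guesses):

```java
package services; // adjust to your base package

import java.time.Instant;
import java.util.List;

import models.DomElement;
import models.UserInteraction;
import reactor.core.publisher.Flux;

public class UserInteractionServiceImpl implements UserInteractionService {

    @Override
    public Flux<UserInteraction> getAllUserInteractions() {
        return Flux.fromIterable(supplyData());
    }

    @Override
    public Flux<UserInteraction> getUserInteractionsBetween(Instant from, Instant to) {
        return getAllUserInteractions()
                .filter(interaction -> !interaction.getTimestamp().isBefore(from))
                .filter(interaction -> !interaction.getTimestamp().isAfter(to));
    }

    @Override
    public Flux<UserInteraction> getUserInteractionsByElementType(String elementType) {
        return getAllUserInteractions()
                .filter(interaction -> elementType.equals(interaction.getDomElement().getType()));
    }

    // All the hard-coded stub data lives here, out of the public methods' way.
    private List<UserInteraction> supplyData() {
        return List.of(
                UserInteraction.builder()
                        .withUserId("user-1")
                        .withTimestamp(Instant.parse("2021-06-01T10:15:30Z"))
                        .withDomElement(DomElement.builder()
                                .withId("submit-btn").withType("button").build())
                        .build()
                // ... plus the rest of the stub interactions
        );
    }
}
```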
There is a lot going on there. Let’s break it down a bit.
The Flux operations pipeline is said to be lazy. Fluxes and Monos are lazy: no computation is triggered until we subscribe. The client of the pipeline is responsible for triggering the computation, and is free to choose when to do so by subscribing. It’s really useful to think in two stages, assembly and execution. We can chain together many complex (and remember: asynchronous and non-blocking!) operations during assembly; the whole pipeline then triggers at once when we subscribe.
There are many advantages to lazy structures. For example, if our filter operations were conditional, that is to say there are situations in which they won’t be executed, our code wastes no resources merely by assembling the Flux. Also, we can materialize our query whenever we want.
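A tiny sketch of the two stages; nothing inside the map runs until the final subscribe call:

```java
import reactor.core.publisher.Flux;

// Assembly: this only describes the pipeline, nothing runs yet.
Flux<String> pipeline = Flux.just("a", "b", "c")
        .map(s -> {
            System.out.println("mapping " + s);
            return s.toUpperCase();
        });

System.out.println("assembled, nothing mapped yet");

// Execution: only now do the map calls (and the prints inside them) happen.
pipeline.subscribe(System.out::println);
```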
Before refactoring our application, let’s test it out. Add the following code to your main method. Don’t worry about the SpringApplication line for now.
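A hedged sketch of such a test drive, reusing the stub implementation above (DemoApplication stands for whatever your @SpringBootApplication class is called):

```java
public static void main(String[] args) {
    SpringApplication.run(DemoApplication.class, args);

    // Instantiate the stub-backed service by hand for now.
    UserInteractionService service = new UserInteractionServiceImpl();

    // Nothing happens until we subscribe; subscribing prints each match.
    service.getUserInteractionsByElementType("button")
            .subscribe(System.out::println);
}
```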
And run it.
Our PoC works, but it’s far from being acceptable. Bearing in mind that we plan to scale this service PoC up to an actual reactive microservice on top of WebFlux, we can tidy things up a bit. We can also leverage Spring Framework capabilities such as automated dependency injection.
We want to autowire dependencies so that when we add WebFlux controllers they can easily locate the service they will communicate with. We can also take advantage of the interface/implementation split to enforce compliance with the Dependency Inversion Principle. Note that, for the sake of simplicity, our interfaces and implementations will reside within the same physical component; they could be split into two JARs to foster maintainability and modularity.
Furthermore, we want to move the hard-coded data initialization out of our service. We will autowire the data initialization from a data access package, so that it mocks database access behavior.
There is one detail we often overlook when designing software: good package/module design. We tend to think in terms of classes and objects and forget about modules. We will break our packages up so that we comply with the Stable Dependencies Principle (dependencies point in the direction of stability) and the Stable Abstractions Principle (a package should be as abstract as it is stable). In our case, we have completely abstract and completely volatile packages, and our volatile packages depend on the abstract ones. Also, since we have defined interfaces for our business rules, it makes sense to treat those as our stable packages.
Our packages will look like the following:
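One illustrative layout (the package names are our own; what matters is that the volatile implementation packages depend on the stable, abstract ones):

```
├── models         // DomElement, UserInteraction (shared value objects)
├── services       // UserInteractionService (stable abstraction)
│   └── impl       // UserInteractionServiceImpl (volatile implementation)
├── dataaccess     // UserInteractionRepository + stub implementation
└── config         // ServiceConfiguration (@Configuration wiring)
```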
And we will define three beans in a @Configuration class named ServiceConfiguration:
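A sketch of what those three beans might be: the stub data (our former supplyData method), a repository that serves it, and the service built on top. The repository interface, the StubUserInteractionRepository class and the service constructor taking a repository are all assumptions of this sketch:

```java
package config; // adjust to your base package

import java.time.Instant;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import dataaccess.StubUserInteractionRepository; // assumed stub implementation
import dataaccess.UserInteractionRepository;     // assumed repository interface
import models.DomElement;
import models.UserInteraction;
import services.UserInteractionService;
import services.impl.UserInteractionServiceImpl;

@Configuration
public class ServiceConfiguration {

    // The former supplyData method, promoted to a bean of stub interactions.
    @Bean
    public List<UserInteraction> stubInteractions() {
        return List.of(
                UserInteraction.builder()
                        .withUserId("user-1")
                        .withTimestamp(Instant.parse("2021-06-01T10:15:30Z"))
                        .withDomElement(DomElement.builder()
                                .withId("submit-btn").withType("button").build())
                        .build()
                // ... plus the rest of the stub interactions
        );
    }

    // A repository that just serves the stub data, mimicking real data access.
    @Bean
    public UserInteractionRepository userInteractionRepository() {
        return new StubUserInteractionRepository(stubInteractions());
    }

    // The service no longer creates its data; it receives a repository instead.
    @Bean
    public UserInteractionService userInteractionService() {
        return new UserInteractionServiceImpl(userInteractionRepository());
    }
}
```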
That’s our supplyData method turned into a Bean, plus the instantiation of our Service and Repository interfaces. Now our Service class looks clean and is no longer responsible for creating stub data. Our last step is to set up our main method. This wouldn’t be necessary if we already had our WebFlux controllers, but for the time being, make it look like the following:
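A sketch, again assuming DemoApplication is the @SpringBootApplication class; this time we fetch the autowired service from the application context instead of instantiating it by hand:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;

import services.UserInteractionService;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        ApplicationContext context = SpringApplication.run(DemoApplication.class, args);

        // Look up the service bean wired by ServiceConfiguration.
        UserInteractionService service = context.getBean(UserInteractionService.class);

        // Subscribe to trigger the pipeline and print every button interaction.
        service.getUserInteractionsByElementType("button")
                .subscribe(System.out::println);
    }
}
```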
And that is how we can make our service get all the user interactions for buttons.
Check our full example here.
We have developed a service layer for a reactive microservice. The most interesting next step is adding WebFlux so our application can be exposed to the web. Some high-throughput, non-blocking persistence mechanism would be welcome too!