I am working with Akka and Spring.
I have an actor system that operates on a Kafka stream setup (using akka-stream-kafka_2.12); the actors hold some data in memory and persist their state using akka-persistence.
What I want to know is whether I can create a REST endpoint that interacts with my actor system to provide some data or send messages to my actors.
My question is: how can this be achieved?
As mentioned in the comments, I have created a sample working application on GitHub to demonstrate the usage of Spring with Akka.
Please note that:
I have used Spring Boot for quick setup and configuration.
Don't expect any kind of good/best practices in this demo project, as I had to create it in 30 minutes. It just explains one (simple) way to use Akka within Spring.
This sample cannot be used in a microservice architecture because there is no remoting or clustering involved here; the API controllers talk to the actors directly.
In the controllers, I used GetMapping everywhere instead of PostMapping for simplicity.
I will update the repository with another sample explaining the usage with clustering, where the way the API controllers communicate with the actor system changes.
Here is the link to the repo. I hope this gets you started.
You can either build the application yourself or run the api-akka-integration-0.0.1-SNAPSHOT.jar file from the command prompt. It runs on the default port 8080.
This sample includes two kinds of APIs, /calc/{operation}/{operand1}/{operand2} and /chat/{message}; a rough sketch of how a controller can talk to an actor follows the endpoint list below.
/chat/hello
/calc/add/1/2
/calc/mul/1/2
/calc/div/1/2
/calc/sub/1/2
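For illustration, here is a minimal sketch (not taken from the repo) of one way a Spring REST controller can talk to an actor via the ask pattern. It assumes the ActorSystem is exposed as a Spring bean and a recent Akka version where Patterns.ask accepts a java.time.Duration; the actor and endpoint names are only illustrative.

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.pattern.Patterns;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.time.Duration;
import java.util.concurrent.CompletionStage;

@RestController
@RequestMapping("/chat")
public class ChatController {

    // Trivial actor that echoes a greeting back to the sender.
    static class ChatActor extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(String.class, msg -> getSender().tell("You said: " + msg, getSelf()))
                    .build();
        }
    }

    private final ActorRef chatActor;

    public ChatController(ActorSystem system) {
        this.chatActor = system.actorOf(Props.create(ChatActor.class), "chatActor");
    }

    @GetMapping("/{message}")
    public CompletionStage<Object> chat(@PathVariable String message) {
        // Ask the actor and let Spring MVC complete the HTTP response asynchronously.
        return Patterns.ask(chatActor, message, Duration.ofSeconds(2));
    }
}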
Edit 2:
Updated the repo with Akka Cluster usage in the API:
API-Akka-Cluster
I'm coming from the PHP/Python/JS environment, where it's standard to run multiple instances of a web application as separate processes and asynchronous tasks like queue processing as separate scripts.
E.g., in a k8s environment, there would be:
N instances of the web server only, each running in a separate pod
For each queue, a dynamic number of consumers, each in a separate pod
Cron scheduling using the k8s CronJob functionality, leaving the scheduling process to k8s
Such an approach fits the cloud model well, where the workload can be scheduled across either a small number of powerful machines or many less powerful ones, and it allows very fine-grained control of autoscaling (based on the number of messages in a specific queue, for example).
Also, there is a clear separation between developer and DevOps responsibilities.
Recently, I tried to replicate the same setup with a Java Spring Boot application and failed miserably.
Even though Java frameworks claim to be "cloud native", it seems like all the documentation is still built around a monolithic application that handles all consumers and cron scheduling in separate threads.
The obvious answer to this problem is microservices, but that's way out of scope.
What I need is to deploy separate parts of application (like 1 queue listener only) per pod in the cloud yet keep the monolith code architecture.
So, the question is:
How do I design my Spring Boot application so that:
I can run the webserver separately without queue listeners and scheduled jobs
I can run one queue listener per pod in the k8s
I can use k8s CronJob scheduling instead of the app-level Spring scheduler?
I found several ways to achieve something like this, but I expect there must be some more or less standard way.
Alternative solutions that came to my mind:
Having a separate module with a separate Application definition so that each "command" is built separately
Using Spring profiles to instantiate only specific services, according to some environment variables
Implementing a custom command-line runner that parses the command/queue name and dynamically creates the appropriate services (this seems to be the approach most similar to how it's done in "scripting languages")
What I mainly want to achieve with such a setup is:
To be able to run the application on a lot of weak hardware instead of having one machine with 32 CPU cores
Easier scaling per workload
Removing one layer from an already complex monitoring infrastructure (k8s already allows very fine-grained resource monitoring; application-level task scheduling and parallelism make this much harder)
Am I missing something, or is it just not standard to write Java server apps this way?
Thank you!
What I need is to deploy separate parts of application (like 1 queue listener only) per pod in the cloud yet keep the monolith code architecture.
I agree with @jacky-neo's answer in terms of the appropriate architecture/best practice, but that may require you to break up your monolithic application.
To solve this without breaking up your monolithic application, deploy multiple instances of your monolith to Kubernetes each as a separate Deployment. Each deployment can have its own configuration. Then you can utilize feature flags and define the environment variables for each deployment based on the functionality you would like to enable.
In application.properties:
myapp.queue.listener.enabled=${QUEUE_LISTENER_ENABLED:false}
In your Deployment for the queue listener, enable the feature flag:
env:
  - name: 'QUEUE_LISTENER_ENABLED'
    value: 'true'
You would then just need to configure your monolithic application to use this myapp.queue.listener.enabled property and only enable the queue listener when the property is set to true.
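As a minimal sketch (assuming Spring Boot; QueueListener is a stand-in for whatever listener component you already have), the bean can be created only when the flag is true, so the web-only Deployment never starts consuming:

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QueueListenerConfig {

    // Placeholder for the real listener component.
    public static class QueueListener {
    }

    @Bean
    @ConditionalOnProperty(name = "myapp.queue.listener.enabled", havingValue = "true")
    public QueueListener queueListener() {
        return new QueueListener();
    }
}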
Similarly, you could also apply this logic to the Spring profile to only run certain features in your app based on the profile defined in your ConfigMap.
This Baeldung article explains the process I'm presenting here in detail.
For the scheduled task, just set up a CronJob using a curl container that invokes the endpoint on your service that performs the work.
Edit
Another option based on your comments below -- split the shared logic out into a shared module (using Gradle or Maven), and have two other runnable modules like web and listener that depend on the shared module. This will allow you to keep your shared logic in the same repository, and keep you from having to build/maintain an extra library which you would like to avoid.
This would be a good step in the right direction, and it would lend well to breaking the app into smaller pieces later down the road.
Here's some additional info about multi-module Spring Boot projects using Maven or Gradle.
Based on my experience, I would resolve these issues as below. I hope this is what you want.
I can run the webserver separately without queue listeners and
scheduled jobs
Develop a Spring Boot app for this and deploy it as service-A in Kubernetes. In this app, use spring-mvc to define the controllers or REST controllers that receive requests. Then use a Kubernetes NodePort service or define an ingress gateway to make the service accessible from outside the Kubernetes cluster. If you use sessions, you should store them in Redis or a similar shared store so that multiple instances of the service (pods) can share the same session state.
I can run one queue listener per pod in the k8s
Develop a new Spring Boot app for this and deploy it as service-B in Kubernetes. This service only processes queue messages from RabbitMQ or a similar broker, which can be sent from service-A or another source. In most cases it should not be accessible from outside the Kubernetes cluster.
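A minimal sketch of what service-B could look like, assuming RabbitMQ with spring-boot-starter-amqp on the classpath (the queue name and class names are only illustrative):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class ServiceBApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServiceBApplication.class, args);
    }

    @Component
    static class OrderMessageConsumer {

        // One pod runs one listener; scale the Deployment to add more consumers.
        @RabbitListener(queues = "orders")
        public void handle(String message) {
            // ... process the message ...
        }
    }
}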
I can use k8s cron scheduling instead of App level Spring scheduler?
In my opinion, I would define a new Spring Boot app with spring-scheduler, called service-C, in Kubernetes. It will have only one instance and will not be scaled. At the scheduled time it will invoke a service-A endpoint. It will not be accessible from outside the Kubernetes cluster. But if you prefer a Kubernetes CronJob, you can just write a shell script that uses service-A's DNS name in Kubernetes to access its REST endpoint.
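A minimal sketch of the service-C idea, assuming Spring Boot with spring-boot-starter-web on the classpath; the URL, path, and cron expression are assumptions:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableScheduling
public class ServiceCApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServiceCApplication.class, args);
    }

    @Component
    static class NightlyJobTrigger {

        private final RestTemplate rest = new RestTemplate();

        // Runs at 02:00 every day and triggers the actual work in service-A
        // via its in-cluster DNS name.
        @Scheduled(cron = "0 0 2 * * *")
        public void trigger() {
            rest.postForLocation("http://service-a/internal/jobs/nightly", null);
        }
    }
}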
The above three services can each be configured with different resources such as CPU and memory usage.
I do not get the essence of your post.
You want to have an application with "monolithic code architecture".
And then deploy it to several pods, but only parts of the application are actually running.
Why don't you split out the parts you want to treat specially into applications in their own right?
Perhaps this is because I come from a Java background and haven't deployed monolithic scripting apps.
I am designing integration tests for a somewhat legacy app, and I am facing the problem that I have services I'd like to use only for one run of the integration tests.
The app contains multiple modules and 4 Spring (non-Boot) applications, which use these services:
PostgreSQL database
RabbitMQ instance
ElasticSearch instance
The whole stack is currently dockerized via docker-compose (so with docker-compose up the whole app starts, database schemas are created, etc.).
I would like to achieve this via Testcontainers: start a PostgreSQL container where I run Flyway scripts to create the schema and populate the database with the data required to run (other data will be added in separate tests), then start RabbitMQ and ElasticSearch instances.
All of this should happen automatically every time the integration tests run.
Is this even possible using "legacy" Spring (non Boot)?
And is it possible to automate the process so that it can run many times on one server (so there won't be any port collisions)? The goal is to run this on some Git repository after a merge request is submitted, to check that all integration tests pass.
Thank you for any advice.
Testcontainers has been completely independent of Spring from the beginning; as far as I know, some kind of integration with Spring Boot has only been added recently.
There are a few ways to achieve this; the simplest would be to create a few containers as test class fields, as described here [1].
Yes, it is possible to achieve this without collisions, read here. In short, Testcontainers exposes a container's port (e.g. 5432 for Postgres) on a random host port in order to avoid collisions; you can get the actual port as described in the article. For JDBC containers it can be even easier.
I haven't personally worked with RabbitMQ and ElasticSearch but there are modules for that, you can read about that in docs.
P.S. It's also possible to use the Docker Compose support for this, but I can't see any reason to; just FYI, the approach above is simpler.
[1] The @Testcontainers annotation will start them for you, but you can also manage the container lifecycle manually.
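For reference, a minimal sketch of the field-based approach with JUnit 5 (image names and the property keys handed to the legacy Spring context are assumptions; there is also an elasticsearch module that works the same way):

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.containers.RabbitMQContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class LegacyAppIntegrationTest {

    // Each container's port is mapped to a random free host port, so parallel
    // runs on the same server do not collide.
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:13");

    @Container
    static RabbitMQContainer rabbit = new RabbitMQContainer("rabbitmq:3-management");

    @Test
    void bootstrapsAgainstContainers() {
        // Hand the dynamically mapped coordinates to the (non-Boot) Spring context,
        // e.g. via system properties resolved by a property placeholder.
        System.setProperty("db.url", postgres.getJdbcUrl());
        System.setProperty("rabbit.port", String.valueOf(rabbit.getAmqpPort()));
        // ... bootstrap the legacy Spring context here and run assertions ...
    }
}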
We have a number of related Java Spring applications running on our servers. Let's call them App1, App2 & App3. As is standard, all of these use the common code in our-common-utils.jar.
I want these applications (App1, App2 & App3) to broadcast their state to one or more remote listeners. For example:
App1: I failed to read file abc.
App2: I am using more than 90% of my heap space etc.
The listener/s of these events will take specific actions such as send emails to support and/or clients based on the notifications received.
The best solution I can think of is to have a JMX-enabled NotificationSender bean (extending NotificationBroadcasterSupport) in our-common-utils.jar. It will have a thread consuming from a queue of Notifications and firing off sendNotification() to the listeners for each Notification. Each of the apps in our ecosystem will do this, but using the common code from common-utils.
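A minimal sketch of that NotificationSender idea, using only the standard javax.management API (the class, interface, and method names are just placeholders):

import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Marker MBean interface so the bean can be registered as a standard MBean.
interface NotificationSenderMBean {
}

public class NotificationSender extends NotificationBroadcasterSupport
        implements NotificationSenderMBean {

    private final BlockingQueue<Notification> queue = new LinkedBlockingQueue<>();
    private final AtomicLong sequence = new AtomicLong();

    public NotificationSender() {
        // Single daemon thread that drains the queue and notifies JMX listeners.
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    sendNotification(queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "notification-sender");
        consumer.setDaemon(true);
        consumer.start();
    }

    // Called by application code, e.g. "I failed to read file abc".
    public void publish(String type, Object source, String message) {
        queue.offer(new Notification(type, source, sequence.incrementAndGet(), message));
    }
}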
Do you see any flaws in this design? Any more efficient ways/frameworks of doing it?
Many Thanks :)
An alternative solution is to use a distributed coordination service, ZooKeeper for example. I used one in my very first microservice project. As I can see, you are using Spring. Spring Cloud provides the necessary solutions, which you can use in a declarative way. I would draw your attention to @FeignClient. It is very simple to use and flexible in the Spring world.
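For example, a sketch of a declarative Feign client, assuming spring-cloud-openfeign is on the classpath and @EnableFeignClients is set on a configuration class (the service name and endpoint are made up):

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Resolves "app1" via service discovery, or via a configured url attribute.
@FeignClient(name = "app1")
public interface App1StatusClient {

    @GetMapping("/status/{component}")
    String status(@PathVariable("component") String component);
}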
If I were working on this issue now, I would use a Spring Hystrix based solution. To simplify integration between your Java services, I would recommend checking out service-registration-and-discovery.
Ignore my opinion if Spring is not a core part of your projects (maybe you need other vendors' solutions; there are a lot of alternatives). I concentrate on Spring solutions because Spring is not restricted in my projects and I can use anything I wish if it's reasonable.
We have configured a Storm cluster with one Nimbus server and three supervisors, and published three topologies which do different calculations, as follows:
Topology1: Reads raw data from MongoDB, does some calculations and stores back the result
Topology2: Reads the result of Topology1, does some calculations and publishes the results to a queue
Topology3: Consumes the output of Topology2 from the queue, calls a REST service, gets the reply from the REST service, updates the result in a MongoDB collection, and finally sends an email.
As a newbie to Storm, I'm looking for expert advice on the following questions:
Is there a way to externalize all configuration, for example a config.json, that can be referenced by all topologies?
Currently the configuration to connect to MongoDB, MySQL, MQ, and the REST URLs is hard-coded in Java files. It is not good practice to customize source files for each customer.
I want to log at each stage [spouts and bolts]. Where should I post/store the log4j.xml so it can be used by the cluster?
Is it right to execute a blocking call, like a REST call, from a bolt?
Any help would be much appreciated.
Since each topology is just a Java program, simply pass the configuration to the Java jar as arguments, or pass a path to a file. The topology can read the file at startup and pass any configuration to components as it instantiates them.
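A minimal sketch of that approach, assuming the org.apache.storm API and a config.json parsed with Jackson (file layout and keys are illustrative):

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

import java.io.File;
import java.util.Map;

public class Topology1Main {

    public static void main(String[] args) throws Exception {
        // The path to config.json is passed as a program argument, not hard-coded.
        Map<String, Object> external = new ObjectMapper()
                .readValue(new File(args[0]), new TypeReference<Map<String, Object>>() {});

        Config conf = new Config();
        // Storm ships the topology config to every worker, so spouts and bolts can
        // read these values in open()/prepare() instead of hard-coding them.
        conf.putAll(external);

        TopologyBuilder builder = new TopologyBuilder();
        // ... set spouts and bolts here, e.g. builder.setBolt("rest-bolt", new RestBolt(), 4) ...

        StormSubmitter.submitTopology("topology1", conf, builder.createTopology());
    }
}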
Storm uses slf4j out of the box, and it should be easy to use within your topology as such. If you use the default configuration, you should be able to see logs either through the UI, or dumped to disk. If you can't find them, there are a number of guides to help, e.g. http://www.saurabhsaxena.net/how-to-find-storm-worker-log-directory/.
With Storm, you have the flexibility to push concurrency out to the component level and get multiple executors by running multiple instances of a bolt. This is likely the simplest approach, and I'd advise you to start there, and only later introduce the complexity of an executor inside your topology for making HTTP calls asynchronously.
See http://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html for the canonical overview of parallelism in storm. Start simple, and then tune as necessary, as with anything.
I was wondering if someone could point me to a good tutorial or blog post on writing a Spring application that can all run in a single process for local integration testing, but when deployed will deploy different subsystems into different processes/dynos on Heroku.
For example, I have services for user management, job processing, etc., all in my web application. I want to run it as just a web application locally. But when I deploy to Heroku, I want to deploy just the stateless web front end to two dynos and then have worker dynos on which I can select different services to run. I may decide to group two of these services into one process or decide that each should run in its own process. Obviously, when the services run in their own processes they will need to transparently add some kind of transport like REST, RabbitMQ, Akka, or some such.
Any pointers on where to start looking to learn how to do this? Or am I thinking about this incorrectly, and you'd like to suggest a different approach? I need to figure out how to set up the application and also how to configure Maven and IntelliJ to achieve this.
Thanks.
I can't point you to a prefabricated article or post, but I can share the direction I started down to solve a similar problem. Essentially, the proposed approach was similar to yours: put specific services with potentially long-running logic in worker dynos and pass messages via Jesque (a Java port of Resque) on a RedisToGo instance (a Heroku add-on). I never got the separate web vs. worker Spring contexts fully ironed out (I moved on to other priorities), but the gist of it was 1) the web-tier app context would be configured to post messages and 2) the worker app context would be configured to consume them.
That said, I used foreman locally to simulate the Heroku environment to debug scaling (foreman start --formation="web=2" plus Apache mod_proxy_http). A big Spring gotcha when you scale to 2+ dynos: make sure you are using Redis or Memcache for session storage when using webapp-runner. Spring uses HttpSession by default to store the security context, and there is no session affinity or native Tomcat session replication.
Final caveat - in our case, none of our worker processing needed to be reflected to the end user. That said, we were using Pusher for other features (also a Heroku add-on). If you need to update the user when an async task completes, I recommend looking at it.