Scheduled Jobs in Spring using Akka - java

I am trying to determine the best way to handle long-running batch jobs in Spring MVC. In my searching I came across Akka, a non-blocking framework for asynchronous processing, which is preferred because I don't want the batch processing to eat up all the threads from the thread pool.
Essentially, I will have a job that needs to run on some set schedule and that will go out, call various web services, process the data, and persist it.
I have seen some code examples using Akka with Spring, but I've never seen it used with a cron-type scheduler; it always seems to use a fixed time period.
I'm not sure if this is even the best approach to handling large-scale batch processing within Spring. Any suggestions or links to good Akka/Spring resources are welcome.

I would suggest you look into the Spring Integration and Spring Batch projects. The first allows you to configure chains of services using EIP (Enterprise Integration Patterns). We used it in our project to fetch files from FTP, deserialize and process them, import them into the DB, send emails if required, etc., all on a schedule. The second is more straightforward and basically provides a framework for working on rows of data. Both are configurable with Quartz and integrate nicely into a Spring MVC project.
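Quartz gives you full cron expressions, but if pulling it in feels heavy, plain java.util.concurrent can approximate a simple clock-aligned schedule (for example, top of every hour) by computing the initial delay by hand. A minimal sketch, not tied to any of the frameworks above:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HourlyScheduler {

    // Seconds from 'now' until the next top of the hour.
    static long secondsUntilNextHour(LocalDateTime now) {
        LocalDateTime nextHour = now.truncatedTo(ChronoUnit.HOURS).plusHours(1);
        return Duration.between(now, nextHour).getSeconds();
    }

    public static void main(String[] args) {
        // Daemon thread so this demo does not keep the JVM alive.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
        // First run at the next top of the hour, then every hour after that,
        // roughly equivalent to a cron expression of "0 0 * * * *".
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("call web services, process, persist"),
                secondsUntilNextHour(LocalDateTime.now()), 3600, TimeUnit.SECONDS);
    }
}
```

Note that this only approximates cron: it does not handle daylight-saving shifts or missed runs the way Quartz cron triggers do.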

Related

Spring Boot microservices and synchronized schedulers

I have a service deployed as a microservice and a MongoDB with documents that have a few states, for example: READY, RUNNING, COMPLETED. I need to pick the documents with state READY and then process them. But with multiple instances running, there is a high possibility of processing duplicates. I have seen the thread below, but it is only concerned with a single instance picking up tasks.
Spring boot Webservice / Microservices and scheduling
The above talks about a solution using Hazelcast and MongoDB. But what I am looking for is that all instances wait for the lock, get their own (non-duplicate) documents, and process them. I have checked various documents and unfortunately have not been able to find a solution.
One option I thought of is to introduce Kafka, where we can assign specific tasks to specific consumers. But before opting for that, I would like to see if there are any solutions that can be implemented using simple methods such as database locks. Any pointers towards this are highly appreciated.
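For what it's worth, before introducing Kafka the duplicate problem is commonly solved with an atomic per-document state transition: each instance tries to flip a document from READY to RUNNING, and only the instance whose update succeeds processes it (in MongoDB this would be findOneAndUpdate). A sketch of the idea in plain Java, with a ConcurrentHashMap standing in for the collection and the compare-and-set replace() standing in for the atomic update; all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ClaimingWorkers {

    // Document id -> state; stands in for the MongoDB collection.
    static final ConcurrentHashMap<String, String> states = new ConcurrentHashMap<>();

    // Each instance calls this: the compare-and-set replace() guarantees
    // that only one caller wins the READY -> RUNNING transition per document.
    static List<String> claimReady() {
        List<String> claimed = new ArrayList<>();
        for (String id : states.keySet()) {
            if (states.replace(id, "READY", "RUNNING")) {
                claimed.add(id);
            }
        }
        return claimed;
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) states.put("doc" + i, "READY");

        // Four "instances" claiming concurrently.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<List<String>>> results = new ArrayList<>();
        for (int w = 0; w < 4; w++) results.add(pool.submit(ClaimingWorkers::claimReady));

        Set<String> union = new HashSet<>();
        int total = 0;
        for (Future<List<String>> f : results) {
            List<String> claimed = f.get();
            total += claimed.size();
            union.addAll(claimed);
        }
        pool.shutdown();
        // Every document claimed exactly once, no duplicates.
        System.out.println(total == 100 && union.size() == 100); // prints: true
    }
}
```

With a relational database, the same effect comes from an UPDATE guarded by the row's current state, or SELECT ... FOR UPDATE SKIP LOCKED on databases that support it.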

Should I use Spring Data flow server for my new Spring Batch jobs?

I have a requirement to create around 10 Spring Batch jobs, each consisting of a reader and a writer. All readers read data from one Oracle DB and write into a different Oracle DB (source and destination servers are different). The jobs are implemented using Spring Boot, and all 10+ jobs are packaged into a single JAR file. So far, fine.
Now the client also wants a UI to monitor job status and act as a job organizer. I went through the Spring Data Flow Server documentation for the UI requirement, but I'm not sure whether it will serve the purpose, or whether there is an alternative option for monitoring job status and stopping and starting jobs from the UI whenever required.
Also, how could I separate the 10+ jobs inside a single JAR in the Spring Data Flow Server, if it's the only option for a UI?
Thanks in advance.
I don't have the reputation to add a comment, so I am posting an answer here, although I know this is not the ideal way to share a reference link.
This might help you:
spring-batch-job-monitoring-with-angular-front-end-real-time-progress-bar
Observability of Spring Batch jobs comes from the metadata the framework persists in a relational database: job instances, executions, timestamps, read counts, write counts, and so on.
You have different ways to exploit this data: a SQL client, JMX, the Spring Batch API (JobExplorer, JobOperator), or Spring Batch Admin (deprecated in favor of the Cloud Data Flow server).
Data Flow is an orchestrator that lets you execute data pipelines with streams and tasks (finite, short-lived, monitored services). For your jobs, you could wrap each job in a task and create a multi-task pipeline; Data Flow then gives you the status of each execution.
You can also expose your monitoring data by pushing it as metrics into InfluxDB, for instance.

Polling Java EE background task status - how to?

I'm currently working on a web application which needs to import data and do some processing. This can take some time (probably in the "several minutes" range, once the data sets grow), so we're running it in the background - and now the time has come to show status in the frontend, instead of tailing log files :)
The frontend is Angular, hooked up to REST endpoints (JAX-RS) that call services in EJBs doing persistence via JPA, running on JBoss EAP 6.4 / AS 7.5 (EE6). Standard stuff, but this is the first time I'm dealing with Java EE.
With regards to querying status, polling a REST endpoint periodically is fine; we don't need fancy stuff like websockets. We do need to support multiple background jobs, though, with information consisting of run state (running/done/error), progress, and a list of errors.
So, I currently have two questions:
1) Is there a more suitable way of launching a background task than calling an @Asynchronous EJB method?
2) Which options do I have for keeping track of the background tasks, and which is most suitable?
My first idea was to keep a HashMap, but that quickly ended up looking like too much manual (and fragile-looking) code, with concurrency and lifetime concerns, and I prefer not to reinvent the wheel. The safe choice seems to be persisting it via JPA, but that seems somewhat clumsy for volatile status information.
I'm obviously not the first person facing these issues, but my google-fu seems to be lacking at the moment :)
The tasks could be launched using @Asynchronous or via a JMS @MessageDriven bean.
From Java EE 7 onwards, a ManagedExecutorService is also an option.
The tasks would then update their own state, stored in a ConcurrentHashMap inside a @Singleton EJB.
If you are in a clustered environment, task state is better stored using JPA, as a @Singleton is not shared across the whole cluster.
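A compileable sketch of that singleton status holder, with the EJB annotations (@Singleton and any locking configuration) left out so it runs without a container; the class and field names are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In a container this class would carry @Singleton, so every background
// task and every REST poll sees the same map instance.
public class TaskStatusRegistry {

    public enum RunState { RUNNING, DONE, ERROR }

    public static final class Status {
        public final RunState state;
        public final int progressPercent;

        public Status(RunState state, int progressPercent) {
            this.state = state;
            this.progressPercent = progressPercent;
        }
    }

    private final Map<String, Status> statuses = new ConcurrentHashMap<>();

    // Called by the background tasks as they make progress.
    public void update(String taskId, RunState state, int progressPercent) {
        statuses.put(taskId, new Status(state, progressPercent));
    }

    // Called by the JAX-RS endpoint on each poll; returns null for unknown ids.
    public Status get(String taskId) {
        return statuses.get(taskId);
    }

    public static void main(String[] args) {
        TaskStatusRegistry registry = new TaskStatusRegistry();
        registry.update("import-1", RunState.RUNNING, 40);
        registry.update("import-1", RunState.DONE, 100);
        System.out.println(registry.get("import-1").state); // prints: DONE
    }
}
```

Status objects are immutable here, so readers never observe a half-updated entry.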

Job queuing library/software for Java

The premise is this: For asynchronous job processing I have a homemade framework that:
Stores jobs in database
Has a simple java api to create more jobs and processors for them
Processing can be embedded in a web application or run by itself on different machines for scaling out
Web UI for monitoring the queue and canceling queue items
I would like to replace this with some ready-made library, because I would expect more robustness from one and I don't want to maintain this myself. I've been researching the issue and figured JMS could be used for something similar. But I would still have to build a simple Java API, figure out a runtime to host the processing when I want to scale out, and build a monitoring UI. I feel like the only thing I would gain from JMS is not having to do the database stuff.
Is there something similar to this that is ready-made?
UPDATE
Basically this is the setup I would want to do:
Web application runs in a servlet container or application server
Web application uses a client API to create jobs
X machines process those jobs
Monitor and manage jobs from a UI
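The setup above can be sketched in-memory with a BlockingQueue as the job store and a worker pool as the processing machines. This is only a sketch of the shape: a real version would back the queue with the database (or a broker) so jobs survive restarts, and the names here are made up:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class JobQueueSketch {

    private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>();
    private final ExecutorService workers;

    public JobQueueSketch(int workerCount) {
        workers = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++) {
            workers.submit(() -> {
                try {
                    while (true) jobs.take().run(); // block until a job arrives
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // shutdown requested
                }
            });
        }
    }

    // The "client API" the web application would call.
    public void submit(Runnable job) {
        jobs.add(job);
    }

    public void shutdown() {
        workers.shutdownNow();
    }

    public static void main(String[] args) throws Exception {
        JobQueueSketch queue = new JobQueueSketch(3);
        CountDownLatch done = new CountDownLatch(5);
        for (int i = 0; i < 5; i++) queue.submit(done::countDown);
        System.out.println(done.await(5, TimeUnit.SECONDS)); // prints: true
        queue.shutdown();
    }
}
```

Monitoring and cancellation are exactly the parts this sketch lacks, which is why a ready-made product is attractive here.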
You can use Quartz:
http://www.quartz-scheduler.org/
Check out Spring Batch.
Link to the Spring Batch website: http://projects.spring.io/spring-batch/

Asynchronous processing in Java from a servlet

I currently have a Tomcat container with a servlet running on it, listening for requests. I need the result of an HTTP request to be a submission to a job queue, which will then be processed asynchronously. I want each "job" to be persisted as a row in a DB for tracking and for recovery in case of failure. I've been doing a lot of reading; here are my options (note I have to use open-source stuff for everything).
1) JMS: use ActiveMQ (but who is the consumer of the job in this case, another servlet?)
2) Have my request create a row in the DB, and have a separate servlet inside my Tomcat container that always runs; it uses the Quartz Scheduler or the utilities provided in java.util.concurrent to continuously process the rows as jobs (using thread pooling).
I am leaning towards the latter because looking at the JMS documentation gives me a headache, and while I know it's a more robust solution, I need to implement this relatively quickly. I'm not anticipating huge amounts of load in the early days of deploying this server in any case.
A lot of people say Spring might be good for either 1 or 2. However, I've never used Spring and wouldn't even know how to start using it to solve this problem. Any pointers on how to dive in without having to rewrite my entire project would be useful.
Otherwise if you could weigh in on option 1 or 2 that would also be useful.
Clarification: the asynchronous process would screen-scrape a third-party web site and send a message notification to the original requester. The third-party site is a bit flaky and slow, and that's why it will be handled asynchronously (with several retry attempts built in). I will also be pulling files from that site and storing them in S3.
Your Quartz job doesn't need to be a servlet! You can persist incoming jobs in the DB and have Quartz start when your main servlet starts up. The Quartz job can be a simple POJO that checks the DB for new jobs periodically.
However, I would suggest taking a look at Spring. It's not hard to learn and is easy to set up within Tomcat. You can find a lot of good information in the Spring reference documentation. It has Quartz integration, which is much easier than doing it manually.
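A sketch of option 2 without the extra servlet, using a ScheduledExecutorService where Quartz would go and an in-memory queue where the DB table would go; the names are illustrative:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RowPoller {

    // Stands in for the jobs table; real code would SELECT pending rows.
    static final Queue<String> pendingRows = new ConcurrentLinkedQueue<>();
    static final CountDownLatch processed = new CountDownLatch(3);

    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(2);
        ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();

        // Every 100 ms, drain pending rows and hand them to the worker pool;
        // this polling step is what Quartz would drive in the answer above.
        poller.scheduleWithFixedDelay(() -> {
            String row;
            while ((row = pendingRows.poll()) != null) {
                workers.submit(processed::countDown); // "process" the row
            }
        }, 0, 100, TimeUnit.MILLISECONDS);

        pendingRows.add("job-1");
        pendingRows.add("job-2");
        pendingRows.add("job-3");
        System.out.println(processed.await(5, TimeUnit.SECONDS)); // prints: true

        poller.shutdownNow();
        workers.shutdownNow();
    }
}
```

With real rows, the poller should also mark each row as claimed before submitting it, so a crash doesn't lose or double-process work.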
A suitable solution that will not require a lot of design and programming is to create the object you will need later in the servlet and serialize it to a byte array. Then put that in a BLOB field in the database and be done with it.
Your processing thread can then just read the contents, deserialize them, and work with the resurrected object.
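The round trip looks like this in plain Java. ScrapeJob is a made-up payload class for illustration, and the byte array is what would be written to and read from the BLOB column:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class BlobRoundTrip {

    // The job payload must implement Serializable to go into the BLOB.
    static class ScrapeJob implements Serializable {
        private static final long serialVersionUID = 1L;
        final String url;
        final int retries;

        ScrapeJob(String url, int retries) {
            this.url = url;
            this.retries = retries;
        }
    }

    // What the servlet does before the INSERT.
    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray(); // store this in the BLOB column
    }

    // What the processing thread does after the SELECT.
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] blob = toBytes(new ScrapeJob("http://example.com", 3));
        ScrapeJob job = (ScrapeJob) fromBytes(blob);
        System.out.println(job.url + " " + job.retries); // prints: http://example.com 3
    }
}
```

One caveat: Java serialization ties the stored rows to the class's shape, so changing ScrapeJob later can break deserialization of old rows unless serialVersionUID and field compatibility are managed carefully.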
But, you may get better answers by describing what you need your system to actually DO :)
