Scaling Scheduler Web Service - java

We are developing an application which periodically syncs the LDAP servers of different clients with our database. This application needs to be accessed via a web portal. A web user will create, modify or delete scheduled tasks on this application. So, we have developed this application as a web service.
Now, we have to scale this application and also ensure high availability.
The application is an Axis2-based web service running on Tomcat. We are considering an httpd + mod_jk + Tomcat combination for load balancing. The problem is that if a modification/deletion request arrives, it should land on the same Tomcat server on which the task was originally created. But since requests can come from different web users accessing the web portal from different IP addresses, we cannot rely on a shared session ID (sticky sessions).
Any solutions? Different architecture? Anything.
We have also thought of using the Quartz scheduler API. Its site says it supports load balancing and clustering. Does anyone have experience with Quartz in such a scenario?

If you are using Quartz for your scheduling, it can be backed by a database (see JDBCJobStore). Then you could access any Tomcat server and the scheduling would be centralized. I would recommend returning a database key back through the Axis service, so that the user can reference the same data between calls.
Alternatively, it is not difficult to use the database itself as the job scheduler: have your tasks run on any Tomcat instance and put the results into the database. If the results of a job (such as its status) are small, this works fine.
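For illustration, a clustered JDBC-backed Quartz setup is typically configured via a quartz.properties file along these lines (a sketch; the data-source name, hosts, and credentials are placeholders):

```properties
# Centralized, clustered job store backed by the shared database
org.quartz.scheduler.instanceName = LdapSyncScheduler
org.quartz.scheduler.instanceId = AUTO

org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = quartzDS
org.quartz.jobStore.isClustered = true

# Placeholder connection details for the shared DB
org.quartz.dataSource.quartzDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDS.URL = jdbc:mysql://dbhost:3306/quartz
org.quartz.dataSource.quartzDS.user = quartz
org.quartz.dataSource.quartzDS.password = secret
```

With `isClustered = true` and the same database behind every node, any Tomcat instance can create, modify, or delete a job, so requests no longer need to land on the server where the task was created.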

Related

Is there a way to deploy two containers in one heroku app?

Is there any way to deploy two containers in just one app on Heroku?
I ask because I need to upload two Java applications to Heroku, and I have a database configured in one app. These two applications, an API and a database-update process, need to access the same database.
If there is no way to upload two containers on Heroku for this case, how would you do it: a remote database, a process that updates that database via a cron job and shell script, and an API that accesses the updated database?
Would a single image with both applications and jobs be an option?
Only one port can be exposed on a Web Dyno, so it is not possible to deploy two applications together if both require HTTP connectivity.
Option Web and Worker Dynos
The API processes the incoming HTTP traffic while the backend (DB) app runs in the background as a worker; they communicate via a queue, for example RabbitMQ, or you can use Redis (same concept: one app produces, the other consumes).
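As a sketch of this option, the two process types can be declared in a single Procfile (the jar names here are hypothetical):

```
web: java -jar target/api.jar
worker: java -jar target/db-updater.jar
```

Heroku runs the `web` process with HTTP routing and the `worker` process in the background; both can read the same database config from the app's environment variables.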
Option 2 Web Dynos
Deploy the 2 apps independently on 2 different Web Dynos, then have them communicate over HTTPS using a secure token.
Option both apps in one Docker image
Although it is technically possible, you won't find much help (or approval) for it, as it violates the Docker one-process-per-container principle.
If you really want to try it, you can start both apps (on different ports) and expose only the API for incoming traffic.

spring boot application in cluster

I am developing a Spring Boot application.
Spring Boot packages the application as a .jar file.
I want to cluster this particular application across different servers. Let's say I build the jar file and run the project: it should run in cluster mode across a number of defined servers and be able to serve end users' needs.
My jar will reside on only one server, but it will be clustered across a number of servers. When an end user calls a web service from my Spring Boot app, they should never know which server it is being served from.
The reason behind clustering: if any of the servers goes down in the future, the end user will still be able to access the web services from another server. But I don't know how to make it clustered.
Can anyone please give me insight on this?
If you want to have it clustered, you just run your Spring Boot application on multiple servers (of course, the JAR must be present on each of those servers, otherwise you can't run it). You then place a load balancer in front of the application servers to distribute the load.
If all the services you are going to expose are stateless, you only need a load balancer in front of your nodes, e.g. Apache or nginx. If your services are stateful (they store state in sessions or in a DB), you have to use a distributed cache or in-memory data grid:
for sessions, you can use the spring-session project, which can use Redis to store sessions;
for data stored in the DB, you need to cluster the DB itself, and you can use a distributed cache such as Hazelcast above your DB layer.
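As a sketch of the load-balancer part, an nginx configuration for two Spring Boot nodes could look like this (hostnames and ports are hypothetical):

```
upstream spring_boot_cluster {
    # Each entry is one server running the same Spring Boot jar
    server app1.example.com:8080;
    server app2.example.com:8080;
}

server {
    listen 80;
    location / {
        # Requests are distributed across the upstream nodes (round-robin by default)
        proxy_pass http://spring_boot_cluster;
    }
}
```

If a node goes down, nginx stops routing to it, which gives you the failover behavior the question asks about; session state must still be externalized (e.g. via spring-session and Redis) for this to be transparent to users.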
Look into Spring Cloud; it combines Netflix open-source components with Amazon's services to build 12-factor apps for microservices.
Ideally you would need a load balancer and a service registry to run multiple instances of a Spring Boot application. I believe you have to add a dependency for Eureka.
Check the Spring Cloud documentation.
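For reference, registering an instance with Eureka is typically a matter of adding the client starter to the build (a sketch; check the Spring Cloud release train for the matching version):

```xml
<!-- Hypothetical pom.xml fragment: Eureka client starter from Spring Cloud Netflix -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
```

With the starter on the classpath and a Eureka server configured, each running instance registers itself, and the load balancer can discover instances from the registry instead of a static host list.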
You can deploy it on Cloud Foundry and use the autoscale function to increase the number of application instances.

How to connect a WorkManager to a specific restful web service?

I have a WLS server with several web applications deployed. One of these web applications contains a RESTful web service that can take a long time to execute. Therefore I want it to have a custom WorkManager that can handle threads which would otherwise be considered stuck. As I understand it, you can set a work manager for a specific EJB by using XPath to point at its dispatch policy, like this:
'/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="anEJB"]/dispatch-policy'
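That XPath corresponds to a descriptor fragment roughly like the following (a sketch; the work manager name "RestWorkManager" is an assumption for illustration):

```xml
<!-- Hypothetical weblogic-ejb-jar.xml fragment -->
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>anEJB</ejb-name>
    <!-- Requests dispatched to this bean run under the named work manager -->
    <dispatch-policy>RestWorkManager</dispatch-policy>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```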
Is there a way to do this for RESTful web services, i.e. set the WorkManager for a specific web service (or web application) rather than for all applications deployed on the same WLS? The examples I have found only do this globally.
The WLS version I am using is 10.3.6

How to deploy an app on GlassFish without crashing the service?

I have an app that takes a long time to deploy/redeploy because it uses EJB3, JPA2, JSF, and ICEfaces.
The app is deployed on GlassFish 3 on EC2 in Amazon Web Services. Each time I redeploy the app, the service is unavailable while the redeploy is in progress.
How can I redeploy an existing application and keep the service available until the redeploy finishes?
thanks in advance
Depending on your architecture, you will always lose the service for a few seconds while you redeploy.
The proper way to architect this would be to have a software load balancer sitting in front of two or more GlassFish server instances configured as a cluster. The load balancer will automatically route all requests to the server still holding the older, available service. Once the new service is up and running, it will route requests there again. mod_jk inside Apache works well as the load balancer.
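As a sketch of the mod_jk approach, the balancer and its member nodes are declared in workers.properties (hostnames and ports here are hypothetical):

```properties
# Two backend instances reached over AJP
worker.node1.type=ajp13
worker.node1.host=app1.example.com
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=app2.example.com
worker.node2.port=8009

# The load-balancer worker that Apache forwards requests to
worker.lb.type=lb
worker.lb.balance_workers=node1,node2
worker.list=lb
```

While one node is redeploying, mod_jk marks it in error state and sends traffic to the other node, which is what keeps the service reachable during the redeploy.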

Clustering Apache Tomcat6

I've got a Java enterprise web application which uses Tomcat 6 + Struts + Hibernate + MySQL. At the moment it's publicly up and running on a single server. Due to performance issues we should move the application to a clustered environment. I want to use Tomcat 6 clustering as below:
A Load Balancing machine including a web server (Apache+mod_proxy) as front-end
Some application server machines, each one running a tomcat6 instance
A session management back-end
And finally a db server
The load balancer machine receives all the requests and, depending on the balancing algorithm, redirects them to the respective Tomcat 6 machine. After the business logic runs, the response is returned to the web server and finally to the user. In this scenario the front-end machine processes all the requests and responses, so it would be a bottleneck in the application.
In Apache Tomcat clustering, is there a way to separate the load-balancing mechanism from the web servers? I mean putting a load balancer at the front end and leaving the request/response processing to multiple web servers.
Tomcat has no support for clustering built in. What happens is that the load balancer distributes requests, so the various Tomcat instances don't need to know what is happening.
What you have to do is to make sure your application can handle it. For example, you must be aware that caches can be stale.
Say instance 1 has object X in its cache and X is modified by a request processed on instance 2. The cache in instance 2 will be correct, but the cache in instance 1 is now stale.
The solution is to use a cache which supports clustering, or to disable caching for data which can be modified. But that doesn't matter to Tomcat.
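The stale-cache scenario above can be sketched in plain Java: two maps stand in for the per-instance in-process caches (the class and field names are illustrative, not part of any framework):

```java
import java.util.HashMap;
import java.util.Map;

public class StaleCacheDemo {
    // Each Tomcat instance holds its own in-process cache of object X.
    static Map<String, String> cacheInstance1 = new HashMap<>();
    static Map<String, String> cacheInstance2 = new HashMap<>();

    public static void main(String[] args) {
        // Both instances initially cache the same value for X.
        cacheInstance1.put("X", "v1");
        cacheInstance2.put("X", "v1");

        // A request routed to instance 2 modifies X and refreshes only its own cache.
        cacheInstance2.put("X", "v2");

        // Instance 1 never sees the update: its cached copy is now stale.
        System.out.println("instance1 sees: " + cacheInstance1.get("X"));
        System.out.println("instance2 sees: " + cacheInstance2.get("X"));
    }
}
```

A clustered cache (or an invalidation message between nodes) would replace the two independent maps with a shared view, which is exactly the fix suggested above.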
