We have a Java Spring application running on dedicated Bluemix with Tomcat and Cloud Foundry. We want to increase the number of running instances, so we need to replicate our session variables across instances.
From our perspective the natural path would be using Redis and Spring Sessions.
However, there is a big red tag telling us that Bluemix Redis support is experimental and should not be used in production environments.
If we can't use Redis in production, what is the alternative for cluster-aware sessions on dedicated Bluemix?
The two services available in the Bluemix Public catalog that you could use are "Session Cache" and "Compose for Redis".
Session Cache: Improve application resiliency by storing session state information across many HTTP requests. Enable persistent HTTP sessions for your application and seamless session recovery in event of an application failure.
You can use Compose Enterprise in Bluemix Dedicated, which includes a production-ready Redis: https://enterprise.compose.com/
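For completeness, once a supported Redis is available, wiring Spring Session to it is a small amount of configuration. A minimal sketch for a classic (non-Boot) Spring app, assuming the spring-session-data-redis dependency is on the classpath; the host and port are placeholders for the platform-provided credentials:

```java
// Sketch: HTTP sessions stored in Redis via Spring Session.
// Requires spring-session-data-redis; host/port are placeholders.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public LettuceConnectionFactory connectionFactory() {
        // Point this at the Redis service credentials provided by the platform.
        return new LettuceConnectionFactory("redis-host.example", 6379);
    }
}
```

In a WAR deployed to Tomcat you would also extend `AbstractHttpSessionApplicationInitializer` so the session-repository filter gets registered.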
I have built a Spring MVC Hibernate application with RabbitMQ as a messaging server and a MySQL DB. I have also used Hazelcast, an in-memory distributed cache, to centralize the state of the application, moving the local Tomcat session to a centralized session and implementing distributed locks.
The app is currently hosted on a single Tomcat server on my local system.
I want to test my application in a multi-node environment, i.e. the app running on multiple Tomcat servers.
What would be the best approach to test the app?
A few things that come to mind:
A. Install and configure a load balancer and set up a Tomcat cluster on my local system. This, I believe, is a tedious task and requires much effort.
B. Host the application on a PaaS like OpenShift or Cloud Foundry, but I am not sure if I will be able to test my application on several nodes.
C. Any other way to simulate a clustered environment on my local Windows system?
I would suggest first understanding your application's requirements: for the real production/live environment, are you going to use Infrastructure as a Service (IaaS) or a PaaS?
If the requirement is IaaS:
I would suggest creating a local cluster environment and using the sticky-session support of Tomcat and your Spring application. Persist sessions in Hazelcast or a Redis server installed on a different node, and configure a load balancer in front of the Tomcat nodes. 2-3 VMs would be suitable for testing purposes.
If the requirement is PaaS:
Don't bother with a local environment. Test directly on OpenShift or an AWS free account; trust me, you will be able to test on the PaaS if the setup is correct.
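Regarding option C, a cluster can also be simulated on a single Windows machine: run two copies of Tomcat on different HTTP ports and put a reverse proxy in front of them. A minimal nginx sketch (the ports are assumptions; HAProxy or Apache mod_proxy work the same way):

```nginx
# Two local Tomcat instances behind one round-robin proxy.
upstream local_cluster {
    server 127.0.0.1:8080;  # tomcat instance 1
    server 127.0.0.1:8081;  # tomcat instance 2
}

server {
    listen 80;
    location / {
        proxy_pass http://local_cluster;
    }
}
```

Killing one Tomcat while requests keep succeeding through the proxy is a quick failover check for the Hazelcast-backed session setup.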
I am looking for a way to integrate external services (like Active Directory, DB access, and web service calls) in a non (or incomplete) Java EE environment such as Tomcat. In a full Java EE server I would (probably) implement a resource adapter (JCA); however, in the current project everything runs in Tomcat (v7.x). This means that many aspects like concurrency, transactions, state, etc. are not handled by a Java EE container.
It is a stateless setting, and I am told there will be some kind of transaction handling (details are pending; JTA?). There will be several Tomcats in a clustered environment. The number of users is in the 3-4 digit range.
My questions are:
How and when should I initialize access to external services? (Use a singleton, a factory, etc.?)
Should I use a third-party connection pool for database access (like HikariCP or the Tomcat-internal one)?
How should I deal with concurrency? (Synchronized blocks? volatile fields?)
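On the first question, a common answer in a plain-Tomcat setting is the initialization-on-demand holder idiom: the JVM's class-initialization guarantees give you lazy, thread-safe, lock-free creation of a shared resource such as a connection pool. A sketch in plain Java; `DataSourceStub` is a placeholder standing in for a real pool like HikariDataSource:

```java
// Lazy, thread-safe singleton without synchronized blocks or volatile:
// the nested holder class is loaded (and INSTANCE created) only on first
// access, and class initialization is serialized by the JVM.
public class ServiceLocator {

    // Placeholder for a real pooled DataSource (e.g. HikariDataSource).
    static class DataSourceStub {
    }

    private static class Holder {
        static final DataSourceStub INSTANCE = new DataSourceStub();
    }

    public static DataSourceStub dataSource() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // All callers observe the same instance, with no explicit locking.
        System.out.println(ServiceLocator.dataSource() == ServiceLocator.dataSource());
    }
}
```

The same idiom works for any expensive, shared client (LDAP connection pool, web service stub), which also answers the concurrency question for initialization: no synchronized blocks are needed for it.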
I am developing a Spring Boot application, which is packaged as a .jar file.
I want to cluster this particular application across different servers. Let's say I build the jar file and run the project; it should then run in cluster mode across a number of defined servers and be able to serve end users' needs.
My jar will reside on only one server, but it will be clustered across a number of servers. When an end user calls a web service from my Spring Boot app, he will never know which server it is served from.
The reason behind clustering is that if any of the servers goes down in the future, the end user will still be able to access the web services from another server. But I don't know how to make it clustered.
Can anyone please give me some insight on this?
If you want to have it clustered, you just run your Spring Boot application on multiple servers (of course, the JAR must be present on those servers; otherwise you can't run it). You would then place a load balancer in front of the application servers to distribute the load.
If all the services you are going to expose are stateless, you only need a load balancer (e.g. Apache or nginx) in front of your nodes. If your services are stateful (they store any state: sessions, or data in a DB), you have to use a distributed cache or an in-memory data grid:
for sessions, you can use the spring-session project, which can use Redis to store sessions.
for data stored in a DB, you need to cluster the DB itself, and you can use a distributed cache such as Hazelcast above your DB layer.
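In a Spring Boot app specifically, the spring-session approach often reduces to one dependency plus a couple of properties. A sketch assuming spring-session-data-redis is on the classpath; the host name is a placeholder:

```properties
# application.properties -- all instances share one session store
spring.session.store-type=redis
spring.redis.host=redis.internal.example
spring.redis.port=6379
```

With this in place, any instance behind the load balancer can serve any request, because the session no longer lives in a single JVM.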
Look into Spring Cloud; it combines Netflix OSS components with Amazon's offerings to build 12-factor apps for microservices.
Ideally you would need a load balancer and a service registry to run multiple instances of a Spring Boot app. I believe you have to add a dependency called Eureka.
Check the link below:
Spring cloud
You can deploy it on Cloud Foundry and use the autoscaling function to increase the number of application instances.
We are developing an application which periodically syncs the LDAP servers of different clients with our database. This application needs to be accessed via a web portal. A web user will create, modify or delete scheduled tasks on this application. So, we have developed this application as a web service.
Now, we have to scale this application and also ensure high availability.
The application is an Axis2-based web service running on Tomcat. We have thought of an httpd + mod_jk + Tomcat combination for load balancing. The problem is that if a request for modification or deletion comes in, it should land on the same Tomcat server on which the task was originally created. But since requests can come from different web users accessing the web portal from different IP addresses, we cannot rely on sticky sessions (the same session id).
Any solutions? Different architecture? Anything.
We have also thought of using the Quartz scheduler API. The site says it supports load balancing and clustering. Does anyone have experience with such a scenario using Quartz?
If you are using Quartz for your scheduling, it can be backed by a database (see JDBCJobStore). Then you could access any Tomcat server and the scheduling would be centralized. I would recommend using a key in the database that you return to the Axis service, so that the user can reference the same data between calls.
Alternatively it is not difficult to use the database as a job scheduler, then have your tasks run on Tomcat (any location), and put the results into the database. If the results of the job (such as its status) are small, this would work fine.
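For reference, a clustered JDBCJobStore is mostly a matter of quartz.properties. A sketch; the data-source name is a placeholder, and the delegate class should be adapted to your database:

```properties
# quartz.properties -- clustered, database-backed scheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = quartzDS
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```

With `isClustered = true`, every Tomcat node runs a scheduler instance against the same tables, and Quartz uses row locks to ensure each trigger fires on only one node, which removes the need for sticky routing.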
I've got a Java enterprise web application which uses Tomcat 6 + Struts + Hibernate + MySQL. At the moment it is publicly up and running on a single server. Due to performance issues, we should move the application to a clustered environment. I want to use Tomcat 6 clustering as follows:
A Load Balancing machine including a web server (Apache+mod_proxy) as front-end
Some application server machines, each one running a tomcat6 instance
A session management back-end
And finally a db server
The load balancer machine receives all the requests and, depending on the balancing algorithm, redirects them to the respective Tomcat 6 machine. After the business part is done, the response is returned to the web server and finally to the user. In this scenario the front-end machine processes all requests and responses, so it could become a bottleneck for the application.
In Apache Tomcat clustering, is there a way to separate the load-balancing mechanism from the web servers? I mean putting a load balancer at the front end and leaving the request/response processing to multiple web servers.
Tomcat itself does not handle load balancing: the load balancer distributes the requests, so the various Tomcat instances don't need to know about each other.
What you have to do is to make sure your application can handle it. For example, you must be aware that caches can be stale.
Say instance 1 has object X in its cache, and X is modified by a request processed on instance 2. The cache in instance 2 will be correct, but the cache in instance 1 is now stale.
The solution is to use a cache which supports clustering, or to disable caching for entities that can be modified. But that is not Tomcat's concern.
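The staleness scenario described above can be sketched with plain maps standing in for the two instance-local caches and the shared database (a toy illustration, not a real cache API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the scenario above: two per-instance caches, one shared DB.
public class StaleCacheDemo {

    // Returns {what instance 1 serves, what instance 2 serves} after
    // a request on instance 2 updates X.
    static String[] simulate() {
        Map<String, String> db = new HashMap<>();
        Map<String, String> cache1 = new HashMap<>();
        Map<String, String> cache2 = new HashMap<>();

        db.put("X", "v1");
        cache1.put("X", db.get("X")); // instance 1 caches X
        cache2.put("X", db.get("X")); // instance 2 caches X

        // Instance 2 processes an update: the DB and its own cache change,
        // but nothing notifies instance 1.
        db.put("X", "v2");
        cache2.put("X", "v2");

        return new String[] { cache1.get("X"), cache2.get("X") };
    }

    public static void main(String[] args) {
        String[] views = simulate();
        System.out.println(views[0]); // instance 1 still serves the stale "v1"
        System.out.println(views[1]); // instance 2 serves the current "v2"
    }
}
```

A clustered cache (or an invalidation message between instances) is precisely what closes the gap between the two views.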