I have a Spring Boot app deployed on 4 ECS instances on AWS Fargate. (I'm new to it.)
In my app, we have a pure Java in-memory cache.
Assume I put data using /putdata and get data using /getdata.
When I hit /getdata, it sometimes returns results and sometimes it doesn't.
Is it possible that my /putdata request went to one of the 4 instances, so only that instance's in-memory cache has the data and the other 3 instances don't?
Or are my Spring Boot objects' states somehow kept in sync across all 4 instances?
In summary, do REST requests land on different ECS containers, and may a request behave differently if it lands on another ECS instance next time?
Yes, each request can land on any of the 4 instances, and each instance has its own copy of the in-memory cache, so /putdata only populates the cache on the instance that happened to receive it. To get consistent behavior, you need a centralized cache server and must point all your ECS instances/Spring Boot applications at it.
You could either go with AWS's managed cache service (ElastiCache), which is fully managed by AWS, or spin up some EC2 instances and install a distributed cache server on them. A few you can try: Hazelcast, Redis, Apache Ignite, etc.
I would suggest going with AWS ElastiCache (Redis) so you don't have to manage anything. Best of luck.
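For illustration, a minimal sketch of pointing a Spring Boot app at a shared Redis via Spring's cache abstraction (the endpoint, cache name, and service class are all made up for this example):

```java
// Assumed dependencies: spring-boot-starter-web, spring-boot-starter-cache,
// spring-boot-starter-data-redis. Point Boot at the ElastiCache endpoint in
// application.properties (hostname is a placeholder; on newer Boot versions
// the property is spring.data.redis.host):
//   spring.redis.host=my-cache.abc123.use1.cache.amazonaws.com
//   spring.redis.port=6379

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableCaching // Boot auto-configures a Redis-backed CacheManager
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

@Service
class DataService {

    // Writes through to the shared Redis, so the value is visible
    // no matter which of the 4 instances served /putdata.
    @CachePut(cacheNames = "data", key = "#key")
    public String putData(String key, String value) {
        return value;
    }

    // Reads from the same shared Redis, whichever instance serves /getdata.
    @Cacheable(cacheNames = "data", key = "#key")
    public String getData(String key) {
        return null; // cache miss; real code would load from the backing store
    }
}
```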
Related
I have a Spring Boot application which uses embedded Tomcat. The app is hosted on multiple EC2 instances, which auto-scale if required, and some of which may be killed/restarted. So effectively there are 3 instances of the app running, and requests are routed from the load balancer to any of these instances.
I am trying to track user sessions in my app. I started by implementing container-level session management using Tomcat's HttpSession, but it is not able to track sessions across instances. On researching a bit, I got to know that I need something like session replication.
My app is not running a Tomcat cluster; it has 3 independent instances of the API which do not talk to each other in any way. I am not planning to change that, and I'm not sure it is even possible on AWS, which does not support multicast communication for this purpose.
Also, I do not want to set up and manage a separate DB (like Redis with Spring Session) just for this purpose, because I only need session IDs for logging, and I need to do that in a lightweight manner.
Is there any other way to manage sessions across instances? Or, for my purpose, would it be better to just implement some custom code which checks a session ID/token passed back and forth between the frontend and backend?
The goal is to externalize the sessions from your application server so that you can autoscale, restart, load-balance, etc. without worrying about breaking a user's session.
Honestly, on AWS with the Spring stack I would recommend Spring Session + Redis. I've used it countless times and it is very easy to implement. You can leverage AWS ElastiCache, which manages the Redis cluster for you (like RDS does for relational DBs).
You could write your own custom implementation of Spring Session with a backing store of S3, Dynamo, etc. But is that really any better than the Redis implementation? I'd recommend the path of least resistance.
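For reference, a minimal sketch of wiring Spring Session to a Redis/ElastiCache endpoint (the hostname below is a placeholder; assumes the spring-session-data-redis dependency is on the classpath):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

@Configuration
@EnableRedisHttpSession // swaps the container HttpSession for a Redis-backed one
public class SessionConfig {

    // Point at the ElastiCache primary endpoint (hostname is a placeholder).
    @Bean
    public LettuceConnectionFactory connectionFactory() {
        return new LettuceConnectionFactory(
                "my-sessions.abc123.use1.cache.amazonaws.com", 6379);
    }
}
```

With Spring Boot, adding the spring-session-data-redis dependency plus the Redis connection properties is often all it takes; the session ID still travels in the usual cookie, so the load balancer needs no sticky sessions.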
I'm trying to build a microservice architecture using Spring Cloud; for that I use Config Server, Eureka, etc., and I use Docker to deploy my services. I run them on several machines. For redundancy and load balancing, I'm going to deploy one copy of each service on each machine, but I face a problem: some of these services must run as exactly one copy at a time (e.g., monitoring executed on a cron expression). That is to say, I don't want several monitoring components running at the same time; instead, the machines should take turns running the single active one (e.g., as here: http://www.quartz-scheduler.org/documentation/quartz-..).
How could I do that the best way? What should I use for it?
Thanks.
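The Quartz clustering the link above describes is one standard approach: every node shares a JDBC job store, and Quartz's row locks guarantee each trigger fires on exactly one node per run. A rough sketch of that setup (datasource values are placeholders, and the QRTZ_ tables must first be created with Quartz's DDL scripts):

```java
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerFactory {

    // All nodes run this with the SAME instanceName and the SAME database;
    // Quartz's row locking then ensures each trigger fires on exactly one
    // node, and another node takes over if one dies.
    public static Scheduler create() throws Exception {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "ClusteredScheduler");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.setProperty("org.quartz.threadPool.threadCount", "3");
        props.setProperty("org.quartz.jobStore.class",
                "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass",
                "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        props.setProperty("org.quartz.jobStore.dataSource", "quartzDS");
        // Shared datasource; all values below are placeholders.
        props.setProperty("org.quartz.dataSource.quartzDS.driver", "org.postgresql.Driver");
        props.setProperty("org.quartz.dataSource.quartzDS.URL", "jdbc:postgresql://db-host/quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.password", "secret");

        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start();
        return scheduler;
    }
}
```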
Currently I am developing a module to display the list of online users in my application. I am using Comet streaming. When users log in, I put their data in a map and then send the data to a message queue. The message queue is stored in the servlet context.
The problem I am facing is that this works in my local environment but not in production, because in production I have set up a Tomcat cluster, so data set in the servlet context of Tomcat 1 is not accessible from Tomcat 2.
I have already developed the module but haven't found a way to solve the issue above. I googled and found that Tomcat doesn't support servlet context replication.
I also have one doubt: how many JVM instances will be created for a clustered web application, e.g., if I have a cluster of two Tomcats?
I would not use the servlet context to store data for a cluster. The common pattern is to use a database for data that must be shared across different servers.
For your use case there is no need to persist the values between runs, so a database is not necessarily a good solution, even if it is easy to set up. IMHO what you need is just a shared data cache, or better, an in-memory data grid. Hazelcast should be easy to use for your requirements. If I understand them correctly, what you need is a distributed map, with a concatenation of node_id and session_id as the key (or maybe simply session_id), and a user object as the value.
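A minimal sketch of that idea with Hazelcast's distributed map (names are illustrative, and a String user name stands in for your user object):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap; // Hazelcast 3.x; in 4.x+ it's com.hazelcast.map.IMap

public class OnlineUsers {

    // Each Tomcat node starts one member; the members discover each other
    // and the map is shared across the cluster.
    private final HazelcastInstance hz = Hazelcast.newHazelcastInstance();
    private final IMap<String, String> online = hz.getMap("online-users");

    public void loggedIn(String sessionId, String userName) {
        online.put(sessionId, userName); // immediately visible on every node
    }

    public void loggedOut(String sessionId) {
        online.remove(sessionId);
    }

    public java.util.Collection<String> currentUsers() {
        return online.values(); // cluster-wide view, not just this node's
    }
}
```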
In Tomcat 7 this requires writing a custom valve to force replication; the same is true in Tomcat 6. Refer to "Is there a useDirtyFlag option for Tomcat 6 cluster configuration?" to see how to do this.
I was wondering if someone could point me to a good tutorial or blog post on writing a Spring application that can run entirely in a single process for local integration testing, but when deployed will split its subsystems into different processes/dynos on Heroku.
For example, I have services for user management, job processing, etc., all in my web application. I want to run it as just a web application locally. But when I deploy to Heroku, I want to deploy just the stateless web front end to TWO dynos and then have worker dynos on which I can choose which services to run. I may decide to group 2 of these services into one process, or decide that each should run in its own process. Obviously, when the services run in their own processes, they will need to transparently add some kind of transport, like REST or RabbitMQ or Akka or some such.
Any pointers on where to start looking to learn how to do this? Or am I thinking about this incorrectly, and would you suggest a different approach? I also need to figure out how to set up the application and how to configure Maven and IntelliJ to achieve this.
Thanks.
I can't point you to a prefabricated article or post, but I can share the direction I started down to solve a similar problem. Essentially, the proposed approach was similar to yours: put specific services with potentially long-running logic in worker dynos and pass messages via Jesque (a Java port of Resque) on a RedisToGo instance (a Heroku add-on). I never got the separate web vs. worker Spring contexts fully ironed out (I moved on to other priorities), but the gist of it was 1) the web-tier app context would be configured to post messages and 2) the worker app context would be configured to consume them.
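Sketched with Spring profiles (class and bean names are invented, and this is only one way to wire the split), the idea would look roughly like:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Placeholder types standing in for the real messaging wrappers.
class JobPublisher { /* would wrap the Jesque client to enqueue jobs */ }
class JobConsumer  { /* would run a Jesque worker that pops jobs off Redis */ }

@Configuration
public class TierConfig {

    // Only on web dynos: enqueues work, never executes it.
    @Bean
    @Profile("web")
    public JobPublisher jobPublisher() {
        return new JobPublisher();
    }

    // Only on worker dynos: consumes and executes queued jobs.
    @Bean
    @Profile("worker")
    public JobConsumer jobConsumer() {
        return new JobConsumer();
    }
}

// Locally:  -Dspring.profiles.active=web,worker   -> everything in one process
// Procfile: web:    java -Dspring.profiles.active=web    -jar app.jar
//           worker: java -Dspring.profiles.active=worker -jar app.jar
```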
That said, I used foreman locally to simulate the Heroku environment to debug scaling (foreman start --formation="web=2" plus Apache mod_proxy_http). Big Spring gotcha when you scale to 2+ dynos: make sure you are using Redis or Memcached for session storage when using webapp-runner. Spring uses HttpSession by default to store the security context... there is no session affinity or native Tomcat session replication.
Final caveat - in our case, none of our worker processing needed to be reflected to the end user. That said, we were using Pusher for other features (also a Heroku add-on). If you need to update the user when an async task completes, I recommend looking at it.
I'm new to both J2EE and WebLogic. I'm trying to determine the best way to implement a non-distributed cache (one cache per application instance) in a Java web services application running on WebLogic 10.3. I need to cache several different POJOs.
There will be multiple WebLogic instances running on each server in a cluster. When reading about ServletContext and InitialContext, I was a bit confused. I believe ServletContext is instance-specific, but I can only access it from a servlet, correct? I will need to access the cache from separate threads, so I'm not sure that is possible outside of a servlet.
I was reading a bit about JNDI, but it seems to work at the server or cluster level, not per WebLogic/application instance.
Can anyone provide me with a suggestion, and a code example to initialize, access, and destroy a cache of Java POJOs?
Thanks!
Leon
Here is an example of how to implement a method cache with Spring and EHCache:
http://opensource.atlassian.com/confluence/spring/display/DISC/Caching+the+result+of+methods+using+Spring+and+EHCache
The cache will be local if configured as in the example.
I am using this approach in a web service client library to cache the results of a frequently used service whose data almost never changes.
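The linked example wires a method-cache interceptor in XML; with Spring's later cache abstraction the same local method cache looks roughly like this (assumes Spring 4/5 with Ehcache 2.x and an ehcache.xml on the classpath; names are illustrative):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CacheConfig {

    // EhCache lives inside this JVM, which matches the
    // "one cache per application instance" requirement.
    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactory() {
        return new EhCacheManagerFactoryBean(); // reads ehcache.xml from the classpath
    }

    @Bean
    public EhCacheCacheManager cacheManager(EhCacheManagerFactoryBean factory) {
        return new EhCacheCacheManager(factory.getObject());
    }
}

@Service
class LookupService {

    // First call per key hits the remote service; later calls are served
    // from the local cache until the entry expires (per ehcache.xml).
    @Cacheable("lookups")
    public String lookup(String key) {
        return callRemoteService(key);
    }

    private String callRemoteService(String key) {
        return "result-for-" + key; // placeholder for the real web service call
    }
}
```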