spring boot database connection for master and slave databases [duplicate] - java

I am new to AWS.
I have a MySQL RDS instance and I just created 2 read replicas. My application is written in Java, and until now I have used JDBC to connect to the single AWS instance. Now, how do I distribute the work across the 3 servers?

You can set up an internal Elastic Load Balancer to round-robin requests to the slaves. Then configure two connections in your code: one that points directly to the master for writes, and one that points to the ELB endpoint for reads.
Or, if you're adventurous, you could set up your own internal load balancer using Nginx, HAProxy, or something similar. In either case, your load balancer will listen on port 3306.
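The two-connection idea above can be sketched in plain Java like this. The endpoint hostnames are placeholders, not real instances: writes are sent to the master's JDBC URL, reads to the URL of the internal ELB fronting the replicas.

```java
// Sketch of the two-connection setup: keep two JDBC URLs and route
// writes to the master, reads to the internal ELB fronting the replicas.
// The hostnames below are placeholders, not real endpoints.
public class ReadWriteRouter {
    private final String masterUrl; // direct RDS master endpoint (writes)
    private final String readUrl;   // internal ELB endpoint (reads)

    public ReadWriteRouter(String masterUrl, String readUrl) {
        this.masterUrl = masterUrl;
        this.readUrl = readUrl;
    }

    // Pick the JDBC URL for a statement.
    public String urlFor(boolean isWrite) {
        return isWrite ? masterUrl : readUrl;
    }

    public static void main(String[] args) {
        ReadWriteRouter router = new ReadWriteRouter(
                "jdbc:mysql://master.example.internal:3306/app",
                "jdbc:mysql://read-elb.example.internal:3306/app");
        System.out.println(router.urlFor(true));  // prints the master URL
        System.out.println(router.urlFor(false)); // prints the ELB URL
        // In real code, pass the chosen URL to DriverManager.getConnection(url, user, pass).
    }
}
```

Spring users can get the same effect with `AbstractRoutingDataSource`, switching the target DataSource based on whether the current transaction is read-only.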

AWS suggests setting up Route 53. Here is the official article on the subject: https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/

If you have the option to use Spring Boot and spring-cloud-aws-jdbc, you can take a look at the working example and explanation in this post.

Related

Spring boot rest endpoint result inconsistent because of in memory caching?

I have a Spring Boot app deployed on 4 instances of ECS on AWS Fargate. (I'm new to it.)
In my app, we have a pure-Java in-memory cache.
Assume I put data using /putdata and get data using /getdata.
When I hit /getdata, it sometimes returns results and sometimes it doesn't.
Is it possible that my /putdata went to one of the 4 instances, so only that instance's in-memory cache has the data and the other 3 instances don't?
Or are my Spring Boot object states kept in sync across all 4 instances?
In summary: can REST requests land on different ECS containers, and behave differently next time if they land on another ECS instance?
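The behavior in the question can be reproduced in miniature: each ECS task runs its own JVM, so a plain in-memory map is private to that instance. In this simplified sketch (not the actual app code), two separate maps stand in for two independent instances:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration: each ECS task has its own JVM-local cache.
// Two separate maps stand in for two independent instances.
public class LocalCacheDemo {
    public static void main(String[] args) {
        Map<String, String> instanceA = new HashMap<>(); // cache inside instance A
        Map<String, String> instanceB = new HashMap<>(); // cache inside instance B

        // /putdata happens to land on instance A
        instanceA.put("key", "value");

        // /getdata landing on instance A finds the entry...
        System.out.println(instanceA.containsKey("key")); // true
        // ...but landing on instance B does not: the caches are never synchronized.
        System.out.println(instanceB.containsKey("key")); // false
    }
}
```

This is exactly why /getdata only sometimes returns results: whether it works depends on which instance the load balancer picks.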
To achieve this, you need a centralized cache server, and you should point all your ECS instances/Spring Boot applications at that cache server.
You could either go with AWS's managed cache service (ElastiCache), which is fully managed by AWS, or spin up some EC2 instances and install a distributed cache server on them. A few you can try: Hazelcast, Redis, Apache Ignite, etc.
I would suggest going with AWS ElastiCache (Redis), so you don't have to manage anything. Best of luck.
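As a sketch of what pointing a Spring Boot app at a shared Redis looks like (assuming spring-boot-starter-data-redis is on the classpath; the hostname is a placeholder for your ElastiCache primary endpoint):

```properties
# application.properties on all 4 ECS instances, pointing at the same Redis.
# Replace the host with your ElastiCache primary endpoint.
spring.redis.host=my-cache.example.cache.amazonaws.com
spring.redis.port=6379
spring.cache.type=redis
```

With this in place, methods annotated with @Cacheable read and write the shared store instead of a per-JVM map, so every instance sees the same data.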

Communicate between microservices on the same machine without exposing a public API

I am relatively new to Camel and Spring, and I am making a service to predict stock prices using a neural network to practise using Camel, Spring and also DL4J.
My service is divided into 5 microservices (Gateway, H2 SQL Database, Admin Console, Data Fetcher, DL4J Handler) which will each run in their own Java application. Each one has a REST API.
How can I prevent an external computer from connecting to 4 of the services, while leaving the gateway open and connectable?
To clarify:
All 5 services have a REST endpoint, and they are all visible to each other because they are all running on the same machine and can connect with localhost:port. I'd like to know how I can prevent an external computer from connecting to 4 of the services, whilst leaving 1 (the gateway) still connectable.
There's nothing unique about Spring or Camel here.
Each one has a REST API, meaning there's an HTTP endpoint, meaning each service has bound a server port, and so the services can reach each other via http://localhost:<port>, assuming nothing is running in a VM or Docker container.
The gateway is reachable on localhost in the same way. The difference you want is which network interface each service binds to: a service bound only to the loopback interface cannot be reached from an external machine, while a service bound to all interfaces can.
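With Spring Boot services, one way to do this is at the binding level: have the four internal services bind only to the loopback interface, and let the gateway bind to all interfaces. A sketch (server.address and server.port are standard Spring Boot properties; the port is an example):

```properties
# application.properties for each internal service (database, admin console,
# data fetcher, DL4J handler): bind to loopback only, so only processes on
# the same machine can connect.
server.address=127.0.0.1
server.port=8081

# application.properties for the gateway: bind to all interfaces so external
# clients can reach it (this is also the default if server.address is omitted).
# server.address=0.0.0.0
# server.port=8080
```

A host firewall rule (e.g. allowing inbound traffic only on the gateway's port) is an alternative, and works even for services you can't reconfigure.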

spring boot Enterprise application with eureka and zuul

I am in charge of designing a new enterprise application that should handle tons of clients and should be completely fault-free.
To do that, I'm thinking about implementing different microservices that will be replicated, so the Eureka server/client solution is perfect for this.
Then, since the Eureka server could be a single point of failure, I found that it is possible to replicate it across multiple instances, which is perfect.
In order not to expose every service, I'm going to put Zuul in front as a gateway; it will use the Eureka server to find the right instance of the backend service to handle each request.
Since Zuul is now the single point of failure, I found that it is possible to replicate this component too, so if one instance fails I still have the others.
At this point I need a way to load balance between the client applications (Android and iOS apps) and the Zuul stack, but a server-side load balancer would itself be a single point of failure, so it is useless.
I would like to ask if there is a way to make our tons of clients connect to a healthy instance of the Zuul application without having any single point of failure. Maybe by implementing Ribbon in the mobile application, so it chooses a healthy Zuul instance?
Unfortunately, everything will be deployed on a "private" cluster, so I cannot use Amazon Elastic Load Balancer or any other proprietary solution.
Thanks

Use of Task Definition in AWS | Configure Hazelcast to run on AWS

I'm a Java developer, familiar with AWS and comfortable with Hazelcast independently.
I have 2 AWS EC2 instances running and would like to run Hazelcast as an in-memory cluster between the nodes. I followed the link to make the required changes, except for the configuration of taskdef.json in the Task Definition.
I read some documentation but couldn't understand what exactly a task definition is, and why it is needed.
How do I know if one has already been created? And if I create one now, would my production environment be disrupted?
The whole reason for EC2 discovery is to work around non-static IP addresses. The EC2 plugin performs a DescribeInstances call and pulls the IP addresses from the JSON response.
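For reference, the EC2 discovery mentioned above is enabled in hazelcast.xml roughly like this (a sketch for Hazelcast 3.x; the region, security group, and tag values are placeholders you'd replace with your own):

```xml
<hazelcast>
  <network>
    <join>
      <!-- multicast does not work on EC2, so disable it -->
      <multicast enabled="false"/>
      <!-- let members find each other via a DescribeInstances call,
           filtered by security group and/or tags -->
      <aws enabled="true">
        <region>us-east-1</region>
        <security-group-name>my-hazelcast-sg</security-group-name>
        <tag-key>cluster</tag-key>
        <tag-value>hazelcast</tag-value>
      </aws>
    </join>
  </network>
</hazelcast>
```

Credentials can come from an IAM role attached to the instances, which avoids putting access keys in the file.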

AWS Elasticache SDK doesn't work for Redis ElastiCache

I want to dynamically configure my API servers depending on the name of the "cluster".
So I'm using AmazonElastiCacheClient to discover the cluster names, and I need to extract the endpoint of the one that has a specific name.
The problem is that I can find it but there doesn't seem to be a way to get an endpoint.
foundCluster.getCacheNodes() returns an empty list, even if there is 1 Redis instance appearing in the AWS console, in-sync and running.
foundCluster.getConfigurationEndpoint() returns null.
Any idea?
Try adding the following to your request before calling describeCacheClusters:
describeCacheClustersRequest.setShowCacheNodeInfo(true);
I am making a guess:
AWS ElastiCache with Redis currently supports only single-node clusters (so no auto discovery, etc.). I am not sure, but the issue may be due to this. Memcached-based clusters are different.
"At this time, ElastiCache supports single-node Redis cache clusters." http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheNode.Redis.html
