I have 8 Spring Boot microservices which internally call each other. The DNS names of the other microservices that each service calls are defined in its application.properties file.
Suppose microservice A is represented by a.mydns.com, B by b.mydns.com, and so on.
So basically each microservice consists of an ELB, two HAProxies (distributed
across two zones), and 4 app servers (distributed across two zones).
Currently I create the new green servers (app servers only) and switch the live traffic at the HAProxy level. With this approach, while the new version of a microservice is being tested, it is exposed to live customers as well.
Ideally, the approach should be to create the entire server structure, including ELBs and HAProxies, for each microservice, right?
But then I face the challenge of testing it with a test DNS name. I can map the ELB to a test DNS entry, but what about the DNS names of the other microservices, which are hard-coded inside each service's application.properties file?
What approach should I take in such a scenario?
I would suggest dockerizing your microservices (easy with Spring Boot), and then using ECS (Elastic Container Service) and ELB (Elastic Load Balancer) with Application Load Balancers (which can be internal or internet-facing).
ECS and the ELB then use your microservices' /health endpoints when you deploy new versions.
You could then implement a more sophisticated HealthIndicator in Spring Boot to determine whether or not the application is healthy (and therefore ready to receive incoming requests). Only when the new application is healthy is it put into service, and the old one(s) are put to sleep.
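For example, a minimal sketch of such a HealthIndicator, where "healthy" (hypothetically) means a downstream dependency is reachable; the host and port below are placeholders:

import java.net.InetSocketAddress;
import java.net.Socket;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Picked up automatically by Spring Boot Actuator and merged into /health.
@Component
public class DownstreamHealthIndicator implements HealthIndicator {

    // Placeholder downstream dependency; in this question's setup it
    // could be b.mydns.com, the DNS name of microservice B.
    private static final String HOST = "b.mydns.com";
    private static final int PORT = 443;

    @Override
    public Health health() {
        // Report DOWN until the downstream service is reachable, so traffic
        // is only routed to instances that are genuinely ready.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(HOST, PORT), 2000);
            return Health.up().withDetail("downstream", HOST).build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}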
Then test all your business logic in a test environment. Because of Docker you're running the exact same image in every environment, so you shouldn't need to run (any) tests when deploying to production. (It has already been tested, and if it boots up, you're good to go.)
Ideally, the approach should be to create the entire server structure, including ELBs and HAProxies, for each microservice, right?
This is not necessarily true. The deployment (blue-green or canary, whatever your deployment strategy is) should be transparent to its consumers (in your case, the other 7 microservices). That means the DNS name (or IP) through which other services interact with your service should stay the same. IMHO, in the event of a microservice deployment, you shouldn't have to think about the other services in the ecosystem as long as you are keeping your part of the contract; after all, that's the whole point of "micro"services. As another SO user pointed out, if you can't deploy one microservice without making changes to other services, that is not a microservice architecture, it's just a monolith talking over HTTP.
I would suggest you read this article:
https://www.thoughtworks.com/insights/blog/implementing-blue-green-deployments-aws
I am quoting the relevant parts here.
Multiple EC2 instances behind an ELB
If you are serving content through a load balancer, then the same
technique would not work because you cannot associate Elastic IPs to
ELBs. In this scenario, the current blue environment is a pool of EC2
instances and the load balancer will route requests to any healthy
instance in the pool. To perform the blue-green switch behind the same
load balancer you need to replace the entire pool with a new set of
EC2 instances containing the new version of the software. There are
two ways to do this -- automating a series of API calls or using
AutoScaling groups.
There are other creative ways too, like this:
DNS redirection using Route53
Instead of exposing Elastic IP addresses or long ELB hostnames to your
users, you can have a domain name for all your public-facing URLs.
Outside of AWS, you could perform the blue-green switch by changing
CNAME records in DNS. In AWS, you can use Route53 to achieve the same
result. With Route53, you create a hosted zone and define resource
record sets to tell the Domain Name System how traffic is routed for
that domain.
To answer the other question:
But then what about the DNS names of the other microservices, which are
hard-coded inside the application.properties file?
If you are doing this, I would suggest you read about the 12-factor app, especially the config part. You should also take a look at service discovery options, if you haven't already done so.
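For example, a minimal sketch of the 12-factor config idea in Spring Boot, where the downstream URL is resolved from the environment instead of being hard-coded; the property name service.b.url and the /status path are hypothetical:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class ServiceBClient {

    // Resolved from the environment at runtime, e.g.
    //   SERVICE_B_URL=https://test.b.mydns.com   for a green/test deployment
    //   SERVICE_B_URL=https://b.mydns.com        for the live deployment
    // (service.b.url is a hypothetical property name.)
    @Value("${service.b.url:https://b.mydns.com}")
    private String serviceBUrl;

    private final RestTemplate rest = new RestTemplate();

    public String fetchStatus() {
        return rest.getForObject(serviceBUrl + "/status", String.class);
    }
}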
I have a feeling that what you have here is a spaghetti of not-so-micro-services. If it is a greenfield project and your timeline and budget allow, I would suggest you look into containerizing your application along with its infrastructure (in a single word: Dockerizing) and using a container orchestration technology like Kubernetes, Docker Swarm, or AWS ECS (the easiest of all, provided you are already in AWS-land). I know this is out of the scope of this question; it's just a suggestion.
Typically for B/G testing you wouldn't use a different DNS name for new functions, but define rules, such as every 100th user gets sent to the new function, or only IPs from a certain region or office have access to the new functionality, etc.
Assuming you're using AWS, you should be able to create an ALB in front of the ELBs for context-based routing, in which you should be able to define rules for routing to either blue or green. In this case you have two separate environments functioning independently (possibly sharing the same DB, though).
For more complicated rules, you can use tools such as Leanplum or Omniture inside your Spring Boot application. With this approach you have one single environment hosting the old and new functionality, and later you'd remove the code that is outdated.
I personally would go down a simpler route, using a test DNS entry for the green deployment which is then swapped out for the live DNS entry once you have fully verified your green deployment is good.
So what do I mean by this:
You state that your live deployments have the following DNS entries:
a.mydns.com
b.mydns.com
I would suggest you create a pattern where each micro-service deployment also gets a test DNS entry:
test.a.mydns.com
test.b.mydns.com
When deploying the "green" version of your micro-service, you deploy everything (including the ELB) and map the CNAME of the ELB to the test DNS entry in Route 53. This means you have the green version ready to go, but not being used by your live application. The green version has its own DNS entry, so you can run your full test suite against the test.a.mydns.com domain.
If (and only if) the test suite passes, you swap the CNAME entry for a.mydns.com to be the ELB that was created as part of your green deployment. This means that your existing micro-services simply start talking to your green deployment once DNS propagates. If there is an issue, simply reverse the DNS update to the old CNAME entry and you have fully rolled-back.
It requires a little bit of coordination, but you should be able to automate the whole thing with something like Jenkins and the AWS CLI.
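For the DNS swap itself, the same Route 53 UPSERT the AWS CLI would perform can be scripted from Java with the AWS SDK. This is only a sketch; the hosted zone ID and ELB hostnames below are placeholders:

import com.amazonaws.services.route53.AmazonRoute53;
import com.amazonaws.services.route53.AmazonRoute53ClientBuilder;
import com.amazonaws.services.route53.model.Change;
import com.amazonaws.services.route53.model.ChangeAction;
import com.amazonaws.services.route53.model.ChangeBatch;
import com.amazonaws.services.route53.model.ChangeResourceRecordSetsRequest;
import com.amazonaws.services.route53.model.RRType;
import com.amazonaws.services.route53.model.ResourceRecord;
import com.amazonaws.services.route53.model.ResourceRecordSet;

public class BlueGreenDnsSwap {

    public static void main(String[] args) {
        AmazonRoute53 route53 = AmazonRoute53ClientBuilder.defaultClient();

        // Point a.mydns.com at the ELB of the verified green deployment.
        ResourceRecordSet record = new ResourceRecordSet()
                .withName("a.mydns.com.")
                .withType(RRType.CNAME)
                .withTTL(60L)
                .withResourceRecords(
                        new ResourceRecord("green-elb-1234.us-east-1.elb.amazonaws.com"));

        ChangeResourceRecordSetsRequest request = new ChangeResourceRecordSetsRequest()
                .withHostedZoneId("Z_EXAMPLE_ZONE_ID")
                .withChangeBatch(new ChangeBatch()
                        .withChanges(new Change(ChangeAction.UPSERT, record)));

        route53.changeResourceRecordSets(request);
        // Rolling back is the same UPSERT with the old (blue) ELB hostname.
    }
}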
Related
I'm coming from the PHP/Python/JS environment, where it's standard to run multiple instances of a web application as separate processes, and asynchronous tasks like queue processing as separate scripts.
E.g., in the k8s environment, there would be:
N instances of web server only, each running in separate pod
For each queue, dynamic number of consumers, each in separate pod
Cron scheduling using the k8s CronJob functionality, leaving the scheduling process to k8s
Such an approach matches the cloud nature well, where the workload can be scheduled across either a small number of powerful machines or a lot of less powerful ones, and it allows very fine-grained control of autoscaling (based on the number of messages in a specific queue, for example).
Also, there is a clear separation between the developer and DevOps responsibility.
Recently, I tried to replicate the same setup with Java Spring Boot application and failed miserably.
Even though Java frameworks say that they are "cloud native", it seems like all the documentation is still built around a monolithic application that handles all consumers and cron scheduling in separate threads.
The clear answer to this problem is microservices, but that's way out of scope.
What I need is to deploy separate parts of the application (like one queue listener only) per pod in the cloud, yet keep the monolithic code architecture.
So, the question is:
How do I design my Spring Boot application so that:
I can run the webserver separately without queue listeners and scheduled jobs
I can run one queue listener per pod in the k8s
I can use k8s cron scheduling instead of App level Spring scheduler?
I found several ways to achieve something like this but I expect there must be some "more or less standard way".
Alternative solutions that came to my mind:
Having separate modules, each with its own Application definition, so that each "command" is built separately
Using Spring Profiles to instantiate only specific services, according to some environment variables (see the sketch after this list)
Implementing a custom command-line runner which parses the command/queue name and dynamically creates the appropriate services (this seems to be the approach most similar to how it's done in "scripting languages")
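For illustration, a minimal sketch of the Spring Profiles option; the profile names web and worker are hypothetical, and each pod would set SPRING_PROFILES_ACTIVE accordingly:

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

// Only instantiated when the "worker" profile is active, e.g. in a pod
// whose Deployment sets SPRING_PROFILES_ACTIVE=worker.
@Profile("worker")
@Component
public class OrderQueueWorker {

    public void process(String message) {
        // queue-consuming logic lives here
    }
}

// Only instantiated in web pods (SPRING_PROFILES_ACTIVE=web).
@Profile("web")
@Component
class WebOnlyStartupTasks {
    // web-specific wiring lives here
}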
What I mainly want to achieve with such a setup is:
To be able to run the application on a lot of weak hardware instead of having one machine with 32 CPU cores
Easier scaling per workload
Removing one layer from an already complex monitoring infrastructure (k8s already allows very fine-grained resource monitoring; application-level task scheduling and parallelism make this much more difficult)
Am I missing something, or is it just not standard to write Java server apps this way?
Thank you!
What I need is to deploy separate parts of the application (like one queue listener only) per pod in the cloud, yet keep the monolithic code architecture.
I agree with @jacky-neo's answer in terms of the appropriate architecture/best practice, but that may require you to break up your monolithic application.
To solve this without breaking up your monolithic application, deploy multiple instances of your monolith to Kubernetes each as a separate Deployment. Each deployment can have its own configuration. Then you can utilize feature flags and define the environment variables for each deployment based on the functionality you would like to enable.
In application.properties:
myapp.queue.listener.enabled=${QUEUE_LISTENER_ENABLED:false}
In your Deployment for the queue listener, enable the feature flag:
env:
  - name: 'QUEUE_LISTENER_ENABLED'
    value: 'true'
You would then just need to configure your monolithic application to use this myapp.queue.listener.enabled property and only enable the queue listener when the property is set to true.
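For example, a minimal sketch of that gating; the bean below stands in for whatever actually subscribes to your queue:

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;

// Only created when myapp.queue.listener.enabled=true, which the
// queue-listener Deployment switches on via QUEUE_LISTENER_ENABLED.
@Component
@ConditionalOnProperty(name = "myapp.queue.listener.enabled", havingValue = "true")
public class QueueListenerBootstrap implements ApplicationRunner {

    @Override
    public void run(ApplicationArguments args) {
        // start consuming from the queue here; the broker wiring
        // (RabbitMQ, SQS, ...) is whatever your monolith already uses
    }
}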
Similarly, you could also apply this logic to the Spring profile to only run certain features in your app based on the profile defined in your ConfigMap.
This Baeldung article explains the process I'm presenting here in detail.
For the scheduled task, just set up a CronJob using a curl container which can invoke the endpoint of the service that performs the work.
Edit
Another option based on your comments below -- split the shared logic out into a shared module (using Gradle or Maven), and have two other runnable modules like web and listener that depend on the shared module. This will allow you to keep your shared logic in the same repository, and keep you from having to build/maintain an extra library which you would like to avoid.
This would be a good step in the right direction, and it would lend well to breaking the app into smaller pieces later down the road.
Here's some additional info about multi-module Spring Boot projects using Maven or Gradle.
Based on my experience, I would resolve these issues as below. I hope this is what you want.
I can run the webserver separately without queue listeners and
scheduled jobs
Develop a Spring Boot app to do it and deploy it as service-A in Kubernetes. In this app, you use spring-mvc to define the controller or REST controller that receives requests. Then use a Kubernetes NodePort service, or define an Ingress gateway, to make the service accessible from outside the Kubernetes cluster. If you use sessions, you should save them in Redis or a similar shared store so that multiple instances of the service (pods) can share the same session values.
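A minimal sketch of service-A's web layer (the path and response are hypothetical examples):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// service-A: web endpoints only; no queue listeners, no scheduled jobs.
@RestController
public class OrderController {

    // Hypothetical endpoint, reachable from outside the cluster
    // via a NodePort service or an Ingress.
    @GetMapping("/orders/{id}")
    public String getOrder(@PathVariable String id) {
        return "order " + id;
    }
}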
I can run one queue listener per pod in the k8s
Develop a new Spring Boot app to do it and deploy it as service-B in Kubernetes. This service only processes queue messages, from RabbitMQ or another broker, which can be sent from service-A or another source. In most cases it should not be accessible from outside the Kubernetes cluster.
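A minimal sketch of service-B, assuming RabbitMQ via spring-boot-starter-amqp; the queue name is a hypothetical example:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

// service-B: consumes queue messages only and exposes no public endpoints,
// so it stays unreachable from outside the cluster.
@Component
public class WorkQueueConsumer {

    // Messages may be sent from service-A or any other producer.
    @RabbitListener(queues = "work-queue")
    public void handle(String message) {
        // process the message here
    }
}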
I can use k8s cron scheduling instead of App level Spring scheduler?
Personally, I would define a new Spring Boot app with spring-scheduler, called service-C, in Kubernetes. It would have only one instance and would not be scaled. It would invoke a service-A method at the scheduled time, and it would not be accessible from outside the Kubernetes cluster. But if you prefer a Kubernetes CronJob, you can just write a shell script that uses service-A's DNS name within Kubernetes to access its REST endpoint.
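A minimal sketch of service-C; the cron expression, the endpoint path, and the in-cluster DNS name service-a are hypothetical examples:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableScheduling
public class ServiceCApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServiceCApplication.class, args);
    }
}

// Invokes service-A at 02:00 every night via its in-cluster DNS name.
@Component
class NightlyTrigger {

    private final RestTemplate rest = new RestTemplate();

    @Scheduled(cron = "0 0 2 * * *")
    public void triggerNightlyJob() {
        rest.postForObject("http://service-a/jobs/nightly", null, Void.class);
    }
}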
The above three services can each be configured with different resources such as CPU and memory usage.
I do not get the essence of your post.
You want to have an application with "monolithic code architecture".
And then deploy it to several pods, but with only parts of the application actually running in each.
Why don't you split out the parts you want to be special into applications in their own right?
Perhaps this is because I come from a Java background and haven't deployed monolithic scripting apps.
In Cloud Foundry you can access other microservices by registering them in a discovery service and querying them by name. But you can also set up a route ("subdomain") through which you can call the service, which seems to be easier to handle. In both cases clustering, circuit breakers, and the like can be used.
In which case should one want to use the first or the second approach?
A registry approach would be preferable when you are concerned about your software's maintainability and resilience.
A registry name can be meaningful to your software's problem domain, and it can be reused across all deployments of your software (dev, qa, prod, etc.)
A route name introduces dependencies on your network infrastructure. It must be globally unique, you need to configure and manage a different one for each deployment of your software, and it can break due to external concerns (example: your subdomain changes due to a company name change).
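To make the difference concrete, a sketch using Spring Cloud's DiscoveryClient; the service name "payments" and the route URL are placeholders:

import java.util.List;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Service;

@Service
public class PaymentsClient {

    private final DiscoveryClient discoveryClient;

    public PaymentsClient(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Registry approach: resolve by logical name, which can stay the same
    // across dev, qa, and prod.
    public String resolveViaRegistry() {
        List<ServiceInstance> instances = discoveryClient.getInstances("payments");
        return instances.get(0).getUri().toString();
    }

    // Route approach: a fixed subdomain baked into configuration; it must be
    // globally unique and differs per deployment.
    public String resolveViaRoute() {
        return "https://payments-prod.example.com";
    }
}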
I have a cloud-native application, which is implemented using Spring Cloud Netflix.
So, in my application, I'm using Eureka service discovery to manage all instances of different services of the application. When each service instance wants to talk to another one, it uses Eureka to fetch the required information about the target service (IP and port for example).
The service orchestration can also be achieved using tools like Docker Swarm and Kubernetes, and it looks like there are some overlaps between what Eureka does and what Docker Swarm and Kubernetes can do.
For example, imagine I create a service in Docker Swarm with 5 instances. Swarm ensures that those 5 instances are always up and running. Additionally, each service of the application sends a periodic heartbeat to Eureka internally to show that it's still alive. So it seems we have two layers of health checking here, one for Docker and another inside Spring Cloud itself.
Or, for example, you can expose a port for a service across the entire swarm, which eliminates some of the need for service discovery (the ports are always known). Another example is the load balancing performed by the routing mesh inside Docker versus the load balancing done internally by the Ribbon component or Eureka itself. In that case, having a hardware load balancer as well leads to three layers of load-balancing functionality.
So, I want to know: is it rational to use these tools together? It seems that combining these technologies increases the complexity of the application considerably and may be redundant.
Thank you for reading!
If you already have the application working, then there's presumably more effort and risk in removing the Netflix components than in keeping them. There's an argument that if you could remove e.g. Eureka, then you wouldn't need to maintain it and it would be one less thing to upgrade. But that might not justify the effort, and it also depends on whether you are using it for anything that might not be fulfilled by the orchestration tool.
For example, if you're connecting to services that are not set up as load-balanced ('headless services') then you might want ribbon within your services. (You could do this using tools in the spring cloud kubernetes incubator project or its fabric8 equivalent.) Another situation to be mindful of is when you're connecting to external services (i.e. services outside the kubernetes cluster) - then you might want to add load-balancing or rate limiting and ribbon/hystrix would be an option. It will depend on how nuanced your requirements for load-balancing or rate-limiting are.
You've asked specifically about Netflix, but it's worth stating clearly that Spring Cloud includes other components, not just the Netflix ones. And there are other areas of overlap where you would need to make choices.
I've focused on Kubernetes rather than Docker Swarm, partly because that's what I know best and partly because that's what I believe to be the current direction of travel for the industry; on this you should note that Kubernetes is available within Docker EE. I guess you've read many comparison articles, but https://hackernoon.com/a-kubernetes-guide-for-docker-swarm-users-c14c8aa266cc might be particularly interesting to you.
You are correct in that it does seem redundant. From personal observation, I think that each layer of that architecture should handle load balancing in its own specific way. It ends up giving you a lot more flexibility for not much more cost. If you want to take advantage of client-side load balancing and failover features, it makes sense to have Eureka. The major benefit is that if you don't want to take advantage of all the features, you don't have to.
The container orchestration level load balancing has a place for any applications or services that do not conform to your service discovery piece that resides at the application level (Eureka).
The hardware load balancer provides another level that allows for load balancing outside of your container orchestrator.
The specific use case that I ran into was on AWS for a Kubernetes cluster with Traefik and Eureka with Spring Cloud.
Yes, you are correct. We have a similar Spring Cloud Netflix application deployed on the Oracle cloud platform and Predix Cloud Foundry. If you use multiple Kubernetes clusters, then you have to use Ribbon load balancing because you have multiple instances of each service.
I cannot tell you which is better Kubernetes or Docker Swarm. We use Kubernetes for service orchestration as it provides more flexibility.
I'm trying to build a microservice architecture using Spring Cloud; for that I use the config server, Eureka, etc., and I also use Docker to deploy my services. I use several machines for this. For redundancy and load balancing I'm going to deploy one copy of each service onto each machine, but I face a problem: some of these services must run as a single copy at any given time (e.g. monitoring of something executed by a cron expression). That is to say, I don't want several monitoring components running at the same time; instead they have to take turns across the machines (e.g. as here http://www.quartz-scheduler.org/documentation/quartz-..)
What would be the best way to do that? What should I use for it?
Thanks
Background
My Environment - Java, Play2, MySql
I've written 3 stateless RESTful microservices on Play2: /S1, /S2, /S3
S1 consumes data from S2 and S3. So when a user hits /S1, that service asynchronously calls /S2 and /S3, merges the data, and returns the final JSON output. Side note: the services will eventually be shipped as Docker images.
For testing in the developer environment, I run /s1, /s2, and /s3 on ports 9000, 9001, and 9002 respectively. I pick up the port numbers from a config file, etc. I hit the services and everything works fine. But there must be a better way to set up the test environment on my developer box, correct? Example - what if I want to run 20 services, etc.
So with that said, in production they will be called just like mydomain.com/s1, mydomain.com/s2, mydomain.com/s3, etc. I want to accomplish the same on my developer environment box... I guess there's some reverse proxying involved.
Question
So the question is, how do I call /S2 and /S3 from within S1 without specifying or using the port number on developer environment. How are people testing microservices on their local machine?
Extra Bonus
Knowing that I'll be shipping my services as Docker images, how do I accomplish the same thing with Docker containers (each container running one service)?
The easiest way (IMO) is to set up your development environment to mirror your production environment as closely as possible. If you want your production application to work with 20 microservices, each running in a separate container, then do the same on your development machine. That way, when you deploy to production, you don't have to change from using ports to using hostnames.
The easiest way to set up a large set of microservices in a bunch of different containers is probably with Fig or with Docker's upcoming integrated orchestration tools. Since we don't have all the details on what's coming, I'll use Fig. This is a fig.yml file for a production server:
application:
  image: application-image
  links:
    - service1:service1
    - service2:service2
    - service3:service3
  ...
service1:
  image: service1-image
  ...
service2:
  image: service2-image
  ...
service3:
  image: service3-image
  ...
This abbreviated fig.yml file will set up links between the application and all the services so that in your code, you can refer to them via hostname service1, service2, etc.
For development purposes, there's lots more that needs to go in here: for each of the services you'll probably want to mount a directory in which to edit the code, you may want to expose some ports so you can test services directly, etc. But at its core, the development environment is the same as the production environment.
It does sound like a lot, but a tool like Fig makes it really easy to configure and run your application. If you don't want to use Fig, you can do the same with plain Docker commands; the key is the links between containers. I'd probably create a script to set things up for both the production and development environments.
Example - What if I want to run 20 services etc
Docker will create entries in the /etc/hosts file for each of the linked containers using their aliases. So if you link a lot of containers, you can just address them by alias. Do not map the port to a public port using -p 9000:9000; that way each of your services can listen on port 9000 inside its own container and be reached via the hostname looked up from /etc/hosts.
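To make this concrete, here is a sketch of how S1 might call a linked container by its alias (JDK 11+ HttpClient; the alias service2 matches the links above, and the /data path is hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class S1Client {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // "service2" resolves through the /etc/hosts entry Docker writes
        // for the linked container, so no host port mapping is needed.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://service2:9000/data"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}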
"how do I accomplish the same thing..."
This is an open question; here is some good reading on the topic. SkyDock and SkyDNS get you most of the way to service discovery, and Weave gives you easy-to-use networking between remote Docker containers. I have not seen a better end-to-end solution yet, although there may be some out there.