My question is simple: is it perfectly safe to have two independent Jetty Server instances in one JVM process, listening on different ports with independent URL mappings and SSL/TLS setup? I'm not seeing odd behaviour, but before deploying to live I'd like some assurance that what I'm doing is sound. If not, would it be better to achieve the same set-up with a single Server instance and somehow separate URL namespaces and SSL/TLS configuration?
Yes, absolutely. We do this in many unit tests throughout Jetty.
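To make the pattern concrete, here is a minimal sketch assuming the Jetty 9.x embedded API (ports, context paths, and the class name are illustrative). Each Server instance owns its own connectors and handler tree, so SSL/TLS would likewise be configured per instance, e.g. via its own SslContextFactory:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class TwoJettyServers {
    public static void main(String[] args) throws Exception {
        // First server: its own port, connectors, and handler tree
        Server plain = new Server(8080);
        ServletContextHandler plainCtx = new ServletContextHandler();
        plainCtx.setContextPath("/app-a");
        plain.setHandler(plainCtx);

        // Second server: completely independent lifecycle and configuration;
        // an SSL/TLS setup would attach an SslContextFactory to this one only
        Server secure = new Server(8443);
        ServletContextHandler secureCtx = new ServletContextHandler();
        secureCtx.setContextPath("/app-b");
        secure.setHandler(secureCtx);

        plain.start();
        secure.start();
        plain.join(); // block the main thread while both servers run
    }
}
```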
I have two different APIs. They each have their own .war file and are both running on the same tomcat instance.
Strangely, I am able to reach one API with requests like this: https://(ip-address):443/(path1)
but the other responds only to this: http://(ip-address):8090/(path2)
Complicating things further, when I deploy the second war to a certain other Tomcat instance on another server, it does respond to HTTPS requests on 443.
Any idea how this is possible?
This is strange because at different times either the war or Tomcat works as intended (by using HTTPS), so it is unclear whether to blame the war or Tomcat.
Applications can declare that they require confidential connections (HTTPS). Look at the WEB-INF/web.xml inside each war.
So one application might respond on both protocols because it defines no such constraint, while the other responds only to HTTPS because the container is responsible for ensuring secure communication. I'd be more surprised to hear that one of the applications responded to HTTP only.
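For reference, the constraint in question is a standard web.xml fragment like the following (the resource name and URL pattern are illustrative); with it present, the container redirects or refuses plain-HTTP requests:

```xml
<!-- WEB-INF/web.xml: force HTTPS for every URL in this webapp -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>everything</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```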
From https://tomcat.apache.org/tomcat-9.0-doc/config/http.html#Introduction:
One or more such Connectors can be configured as part of a single Service, each forwarding to the associated Engine to perform request processing and create the response.
Check in your server.xml whether you have several Services with HTTP and HTTPS connectors that map to different Engines, and whether the applications are deployed across these different Engines. That could explain one application responding to HTTP only while the other responds to HTTPS only.
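A rough sketch of such a split server.xml, with two Services each owning its own connector and Engine (all names, ports, and appBase directories here are invented, and a real HTTPS connector additionally needs certificate configuration):

```xml
<Server port="8005" shutdown="SHUTDOWN">
  <!-- Service 1: plain HTTP only, with its own Engine and webapps directory -->
  <Service name="PlainService">
    <Connector port="8090" protocol="HTTP/1.1"/>
    <Engine name="PlainEngine" defaultHost="localhost">
      <Host name="localhost" appBase="webapps-plain"/>
    </Engine>
  </Service>
  <!-- Service 2: HTTPS only; certificate settings omitted for brevity -->
  <Service name="SecureService">
    <Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
               scheme="https" secure="true"/>
    <Engine name="SecureEngine" defaultHost="localhost">
      <Host name="localhost" appBase="webapps-secure"/>
    </Engine>
  </Service>
</Server>
```

With a layout like this, an application deployed only under webapps-plain would answer on 8090 only, and one under webapps-secure on 443 only.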
I have a Spring application with two controllers. I want to run one controller on localhost:8080 and the second controller on localhost:8081.
Can I configure Tomcat to serve two ports simultaneously, i.e. 8080 and 8081? Is that possible, and how?
Please note that it is not a Spring Boot application.
It sounds like two completely different applications.
You certainly could configure your Tomcat's server.xml file to have multiple HTTP connectors running on different ports. But you'll find it much easier and more hassle-free to deal with two different Tomcat instances.
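For completeness, the multi-connector variant is just two Connector elements inside one Service (ports illustrative). Note that both connectors forward to the same Engine, so every deployed application answers on both ports, which is part of why separate instances are the simpler way to truly split the controllers:

```xml
<Service name="Catalina">
  <Connector port="8080" protocol="HTTP/1.1"/>
  <Connector port="8081" protocol="HTTP/1.1"/>
  <Engine name="Catalina" defaultHost="localhost">
    <Host name="localhost" appBase="webapps"/>
  </Engine>
</Service>
```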
An app server (Tomcat, JBoss, GlassFish) runs on and watches one port. For this reason you can run multiple app servers on a single node (computer) with different port numbers. They can be the same product (Tomcat + Tomcat) or different ones (Tomcat + GlassFish).
But in this case you need to split the controllers into two different applications and deploy them on the separate app server instances.
This is the microservices architectural design style, where you run a separate app server for every service. Microservices most often use REST over HTTP to communicate with each other.
But in Tomcat's case (maybe not with all products) it is possible: Running Tomcat server on two different ports
No. Spring runs on a specific port, and that will be the port for both REST controllers. You can give them different URLs, though.
It's not possible.
Spring MVC, as many other web frameworks, is designed around the front controller pattern where a central Servlet, the DispatcherServlet, provides a shared algorithm for request processing, while actual work is performed by configurable delegate components.
https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html
Spring itself does not run on any port; it is just a technology for creating APIs. The port is bound by the server (Tomcat, JBoss, etc.). So if you want to use different ports for different controllers, you need to deploy multiple applications across multiple servers and make those servers listen on different ports.
On the application that should be on 8081, add the following line to the application.properties file:
server.port=8081
Then just run both of them.
Otherwise, set the port to 8081 in the TomcatConfiguration, and again run both of them.
You can find a perfect example at the link below. It uses different ports for different resources, with port binding on embedded Tomcat in Spring Boot. Hope this helps.
https://tech.asimio.net/2016/12/15/Configuring-Tomcat-to-Listen-on-Multiple-ports-using-Spring-Boot.html
Yes, you can, but they will behave like two separate applications that are independent of each other. However, they can share common resources like databases, password directories, etc.
For a use case such as this, however, I would recommend looking into microservices.
Read more about microservices here
One approach is to create an additional org.apache.catalina.connector.Connector and route its requests with a custom org.springframework.web.servlet.mvc.condition.RequestCondition: https://stackoverflow.com/a/69397870/6166627
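A sketch of the first half of that approach, assuming Spring Boot 2.x with embedded Tomcat (class and bean names are invented); the linked answer then pairs this with a custom RequestCondition that inspects the request's local port:

```java
import org.apache.catalina.connector.Connector;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SecondPortConfig {

    // Opens an additional listener on 8081 next to the default server.port
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> secondConnector() {
        return factory -> {
            Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
            connector.setPort(8081);
            factory.addAdditionalTomcatConnectors(connector);
        };
    }
}
```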
I have 8 Spring Boot microservices which call each other internally. The DNS names used to call the other microservices are defined in each service's application.properties file.
Suppose microservice A is represented by a.mydns.com, B by b.mydns.com, etc.
So basically each microservice consists of an ELB, two HAProxies (distributed across two zones), and four app servers (distributed across two zones).
Currently I create the new green servers (app servers only) and switch the live traffic at the HAProxy level. In this case, while the new version of a microservice is being tested, it is also exposed to live customers.
Ideally, the approach should be to create the entire server stack, including ELBs and HAProxies, for each microservice, right?
But then I face the challenge of testing it with a test DNS name. I can map the ELB to a test DNS entry, but what about the external microservice DNS names that are hard-coded inside the application.properties file?
What approach should I take in such a scenario?
I would suggest dockerizing your microservices (easy with Spring Boot), and then using ECS (Elastic Container Service) and ELB (Elastic Load Balancer) with Application Load Balancers (which can be internal or internet-facing).
ECS and ELB then use your microservices' /health endpoints when you deploy new versions.
You could then implement a more sophisticated HealthIndicator in Spring Boot to determine whether or not the application is healthy (and therefore ready to receive incoming requests). Only when the new application is healthy is it put into service, and the old one(s) are put to sleep.
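A minimal sketch of such an indicator, assuming Spring Boot Actuator is on the classpath (the dependency check itself is a placeholder):

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class ReadinessIndicator implements HealthIndicator {

    @Override
    public Health health() {
        // Report DOWN until the service's own dependencies are reachable,
        // so the load balancer never routes traffic to a half-started instance
        if (dependenciesReachable()) {
            return Health.up().build();
        }
        return Health.down().withDetail("reason", "dependencies not ready").build();
    }

    private boolean dependenciesReachable() {
        return true; // placeholder: ping DB, caches, peer /health endpoints here
    }
}
```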
Then test all your business logic in a test environment; because of Docker you're running the exact same image in every environment, so you shouldn't need to run (any) tests when deploying to production. (Because it has already been tested, and if it boots up, you're good to go.)
Ideally, the approach should be to create the entire server stack, including ELBs and HAProxies, for each microservice, right?
This is not necessarily true. The deployment (blue-green or canary, no matter what your deployment strategy is) should be transparent to its consumers (in your case, the other 7 microservices). That means your service's DNS name (or IP) with which other services interact should stay the same. IMHO, in the event of a microservice deployment, you shouldn't have to think about the other services in the ecosystem as long as you are keeping your part of the contract; after all, that's the whole point of "micro"services. As another SOer pointed out, if you can't deploy one microservice without making changes to other services, that is not a microservice, it's just a monolith talking over HTTP.
I would suggest you read this article:
https://www.thoughtworks.com/insights/blog/implementing-blue-green-deployments-aws
I am quoting relevant parts here
Multiple EC2 instances behind an ELB
If you are serving content through a load balancer, then the same technique would not work because you cannot associate Elastic IPs to ELBs. In this scenario, the current blue environment is a pool of EC2 instances and the load balancer will route requests to any healthy instance in the pool. To perform the blue-green switch behind the same load balancer you need to replace the entire pool with a new set of EC2 instances containing the new version of the software. There are two ways to do this -- automating a series of API calls or using AutoScaling groups.
There are other creative ways too, like this:
DNS redirection using Route53
Instead of exposing Elastic IP addresses or long ELB hostnames to your users, you can have a domain name for all your public-facing URLs. Outside of AWS, you could perform the blue-green switch by changing CNAME records in DNS. In AWS, you can use Route53 to achieve the same result. With Route53, you create a hosted zone and define resource record sets to tell the Domain Name System how traffic is routed for that domain.
To answer the other question:
But what about the external microservice DNS names that are hard-coded inside the application.properties file?
If you are doing this, I would suggest you read about the twelve-factor app, especially the config part. You should take a look at service discovery options too, if you haven't already done so.
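As a small illustration of the config factor in Spring Boot terms (the property name service.b.url is invented): keep the peer URL out of the jar and override it per environment, e.g. by exporting SERVICE_B_URL=https://test.b.mydns.com for a test stack, which Spring Boot's relaxed binding maps onto the property:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class PeerServiceConfig {

    // Default comes from application.properties; an environment variable
    // named SERVICE_B_URL overrides it without rebuilding the artifact
    @Value("${service.b.url}")
    private String serviceBUrl;

    public String serviceBUrl() {
        return serviceBUrl;
    }
}
```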
I have a feeling that what you have here is a spaghetti of not-so-micro-services. If it is a greenfield project and your timeline and budget allow, I would suggest you look into containerizing your application along with its infrastructure (in a single word: Dockerizing) and use a container orchestration technology like Kubernetes, Docker Swarm, or AWS ECS (the easiest of all, provided you are already in AWS land). I know this is out of scope of this question, just a suggestion.
Typically for B/G testing you wouldn't use a different DNS name for new functionality, but would define rules, such as every 100th user getting sent to the new functionality, or only IPs from a certain region or office having access to it, etc.
Assuming you're using AWS, you should be able to create an ALB in front of the ELBs for context-based routing, in which you can define rules for routing to either blue or green. In this case you have two separate environments functioning independently (possibly using the same DB, though).
For more complicated rules, you can use tools such as Leanplum or Omniture inside your Spring Boot application. With this approach you have one single environment hosting the old and new functionality, and later you'd remove the code that is outdated.
I personally would go down a simpler route using a test DNS entry for the green deployment which is then swapped out for the live DNS entry when you have fully verified your green deployment is good.
So what do I mean by this:
You state that your live deployments have the following DNS entries:
a.mydns.com
b.mydns.com
I would suggest that you create a pattern where each micro-service deployment also gets a test dns entry:
test.a.mydns.com
test.b.mydns.com
When deploying the "green" version of your micro-service, you deploy everything (including the ELB) and map the CNAME of the ELB to the test DNS entry in Route 53. This means you have the green version ready to go, but not being used by your live application. The green version has its own DNS entry, so you can run your full test suite against the test.a.mydns.com domain.
If (and only if) the test suite passes, you swap the CNAME entry for a.mydns.com to be the ELB that was created as part of your green deployment. This means that your existing micro-services simply start talking to your green deployment once DNS propagates. If there is an issue, simply reverse the DNS update to the old CNAME entry and you have fully rolled back.
It requires a little bit of coordination, but you should be able to automate the whole thing with something like Jenkins and the AWS CLI.
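For illustration, the CNAME swap itself can be a single AWS CLI call (the hosted-zone id, TTL, and ELB hostname below are made up):

```sh
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "a.mydns.com",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "green-elb-1234.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'
```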
We're trying to design a new addition to our application. Basically we need to submit very basic queries to various remote databases accessed over the internet and not owned or controlled by us.
Our proposal is to install a small client app on each of the foreign systems, tiered in two basic layers: one tailored to the particular database it's talking to, which handles the actual query in SQL or whatever; the other a communication tier that handles incoming requests and sends back responses. This communication interface would be the same across all of the foreign systems, i.e. all requests and responses have the same structure.
In terms of java remoting I guess this small client app would be the 'server' and our webapp (normally referred to as the server) is the 'client'.
I've looked at various Java remoting solutions (Hessian, Burlap, RMI, SOAP/REST web services). However, am I correct in thinking that with all of these the 'server' must run in a container, i.e. in a Tomcat/Jetty etc. instance?
I was really hoping to avoid having to battle all the IT departments controlling the foreign systems to get them to install very much. The whole idea is that it's thin/small/easy to install/pain-free. Are there any solutions that do not require running in a container/webserver?
The communication really is the smallest part of this design: no more than 10 string input params (that have no meaning other than to the db) and one true/false output. There are no complex object models required. The only complexity would come from security/encryption etc.
I warmly suggest something based on Jetty, the embedded HTTP server. You package a simple runnable JAR with dependency JARs into a ZIP file, add a startup script, and you have your product. See for example here.
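As a sketch of how small that 'server' can be (Jetty 9.x and the javax servlet API assumed; port, path, and response are placeholders), the entire agent boils down to one main method:

```java
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class QueryAgent {
    public static void main(String[] args) throws Exception {
        Server server = new Server(9090);
        ServletContextHandler ctx = new ServletContextHandler();
        ctx.setContextPath("/");
        ctx.addServlet(new ServletHolder(new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws java.io.IOException {
                // Placeholder: run the local DB query here, answer true/false
                resp.getWriter().print("true");
            }
        }), "/query");
        server.setHandler(ctx);
        server.start();
        server.join();
    }
}
```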
I often use Spring Remoting in my projects, and here you can find a description of how to use it without a container. The author starts Jetty from within his application:
http://forum.springsource.org/showthread.php?12852-HttpInvoker-without-web-container
http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html
Yes, most of them run in a standard servlet container. But containers like Jetty have a very low footprint, and you can configure and run Jetty entirely from your own code while staying within servlet standards.
Don't underestimate how initial minimal requirements may grow as the project is enhanced over time. In that case, having a standard container makes things much easier.
As you have tagged this question with [rmi], RMI does not require any form of container. All you need is the appropriate TCP ports to be open.
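A bare-bones sketch to back that up (interface and names invented, matching the question's shape of opaque string params in, boolean out); this runs in a plain JVM with no container at all:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

interface QueryService extends Remote {
    boolean query(String... params) throws RemoteException;
}

public class RmiQueryAgent implements QueryService {
    public boolean query(String... params) {
        return true; // placeholder: translate params into the local DB's SQL
    }

    public static void main(String[] args) throws Exception {
        QueryService stub =
                (QueryService) UnicastRemoteObject.exportObject(new RmiQueryAgent(), 0);
        Registry registry = LocateRegistry.createRegistry(1099); // in-process registry
        registry.rebind("QueryService", stub);
        System.out.println("RMI query agent listening on 1099");
    }
}
```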
I have a problem. I need to host many (tens, hundreds) of small identical Java web applications that have different loads at any one time. I want to use GlassFish v3. Do I need to use a load balancer and clusters, or something else? Please advise where I can find information about similar problems and their solutions.
I need to host many (tens, hundreds) of small identical Java web applications that have different loads at any one time.
For hundreds of webapps, you will very likely need more than one app server instance. But this sounds odd to be honest.
I want to use GlassFish v3. Do I need to use a load balancer and clusters or something else?
Right now, GlassFish v3 offers only basic clustering support using mod_jk (i.e. no load balancer plugin, no centralized admin, no high availability). If you are interested, have a look at this note that describes the configuration steps for GFv3 and mod_jk.
For centralized admin and clustering, you'll have to wait for GlassFish 3.1 (see the GlassFish Roadmap Community Update slides).
You could check out GigaSpaces. I have seen it used in conjunction with Mule for a somewhat similar project. ESBs tend to be overkill in my opinion, but it sounds like you have quite the task to conquer.
Based on your requirements, you cannot do load balancing since the load is predetermined by which client the request is for. Each request has to go to the app handling that client, so it cannot be distributed outside the set of apps dedicated to that client.
You could use multi-threading: set up the configuration so that different threads handle different clients. However, it might be better to simply have one server that can handle requests from all clients. Based on the client identifier sent with the request, it would dispatch to a different database, etc.
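A hedged sketch of that dispatch idea (names invented): a single deployment keeps a map from client identifier to that client's DataSource, and each request is routed through it:

```java
import java.util.Map;
import javax.sql.DataSource;

public class ClientRouter {
    // One entry per client; populated at startup from configuration
    private final Map<String, DataSource> dataSources;

    public ClientRouter(Map<String, DataSource> dataSources) {
        this.dataSources = dataSources;
    }

    // Called per request, using whatever identifies the client
    // (header, token, path segment, ...)
    public DataSource forClient(String clientId) {
        DataSource ds = dataSources.get(clientId);
        if (ds == null) {
            throw new IllegalArgumentException("unknown client: " + clientId);
        }
        return ds;
    }
}
```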