How to configure subdomains with Payara and virtual servers? - java

I'm struggling with something that may be easy to do, but I haven't found a correct solution, so I hope you can help me.
Background
We are developing an application that consists of 4 different Java web projects:
AppA
AppB
AppC
WebService
All of these applications have to be accessed from 4 different subdomains of mydomain.com:
a.mydomain.com
b.mydomain.com
c.mydomain.com
api.mydomain.com
Technology
Application server: Payara Server 4 (which is almost the same as GlassFish 4).
Payara Server is running inside a Docker container, which in turn is running on an Amazon EC2 instance.
I've used Amazon Route 53 in the following scenario:
What I have already done successfully
This was done for another project where there was only 1 app, accessed from a subdomain of otherdomainiown.com.
This works perfectly, because the DNS records at the domain provider (iPage) just point to the Amazon Route 53 records of the hosted zone I configured. This hosted zone has an A record that points to the fixed IP of my Amazon EC2 instance. Then, Docker exposes Payara Server on port 80, which is mapped to port 8080, the port Payara uses by default to serve its applications.
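For reference, that port mapping is just Docker's standard publish flag; assuming a container and image named as below (placeholders for whatever is actually used), it looks roughly like this:

# publish Payara's default HTTP port 8080 on host port 80 of the EC2 instance
docker run -d --name payara -p 80:8080 payara/server-full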
Problem
Now I'm facing a similar scenario. The difference is that I have 4 different apps that need to be accessed through 4 different subdomains.
I've tried Virtual Servers (virtual hosts) with no luck. I'm not familiar with them, but I think they could be a possible solution.
I considered using Amazon S3 buckets to redirect, but I don't think that's what I need.
This image shows what the final scenario should be, although I only drew 2 subdomains for simplicity:
Should I use Docker mappings to resolve this?
Should I use Virtual Servers?
Should I buy 4 different machines? (this would solve everything in a few seconds, but buying more instances is not an option)
Should I use a Docker container for each application?
As you can see, I'm a little lost, so it would be great if you could point me in the right direction.
Thanks in advance.

What are you using Route 53 for? What benefit do you get from it in this scenario?
There is a blog post on the Payara website which gives an overview of using Virtual Servers in Payara Server, but it's a bit too in-depth to quote in an answer here.
The key point is that you still need to configure incoming traffic to arrive at different subdomains. If all your traffic is coming in on the same IP address, which is what Route 53 appears to be doing here, then it will be very tricky to differentiate which traffic should go to which endpoint.
The usual way to do this would be to have a load balancer or proxy where you have Route53 in your diagram. An Amazon ELB would be able to perform the redirects you need. A cheaper option (though it would involve more management) would be to use something like Apache httpd or Nginx to forward requests to the Payara Server.
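As a rough, untested sketch of the Nginx option: a single server block that forwards everything to Payara while preserving the Host header (so the backend can still tell the subdomains apart), assuming Payara is listening locally on 8080:

server {
    listen 80;
    server_name a.mydomain.com b.mydomain.com c.mydomain.com api.mydomain.com;
    location / {
        # keep the original Host header so Payara sees which subdomain was requested
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;
    }
}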

You just need to create a virtual server for each subdomain and set the subdomain in the "Hosts" field. Then you need to deploy all 4 applications and select the proper virtual server in the "Virtual Servers" field. The blog linked by @Mike will guide you: https://blog.payara.fish/virtual-servers-in-payara-server
All of the virtual servers will listen on the same IP address, but Payara Server will read the domain from incoming HTTP requests and route each request to the correct virtual server.
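A rough asadmin sketch of the same steps, assuming the default http-listener-1 and placeholder virtual-server and WAR names:

# one virtual server per subdomain, attached to the default HTTP listener
asadmin create-virtual-server --hosts a.mydomain.com --networklisteners http-listener-1 vs-app-a
asadmin create-virtual-server --hosts api.mydomain.com --networklisteners http-listener-1 vs-api
# deploy each application only to its virtual server
asadmin deploy --virtualservers vs-app-a AppA.war
asadmin deploy --virtualservers vs-api WebService.war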
However, this is recommended only for very small applications. Bigger applications should be deployed separately on different Payara Server instances running on different ports or on different machines. If you use Docker, you can run 4 instances in Docker and map them to different ports. Then you would need a proxy server (Apache httpd, Nginx, etc.) to route requests to the correct Payara instance (port) according to the domain name in the requests.
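If you go that route, the Docker side could look something like this (the image name and container names are placeholders); the proxy in front then routes each domain to the matching host port:

# one Payara container per application, each published on its own host port
docker run -d --name app-a -p 8081:8080 payara/server-full
docker run -d --name app-b -p 8082:8080 payara/server-full
docker run -d --name app-c -p 8083:8080 payara/server-full
docker run -d --name api   -p 8084:8080 payara/server-full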

Related

Pivotal gemfire cluster configuration

I am trying to set up a Pivotal GemFire cluster with two nodes/hosts, i.e. two different Unix servers. The idea is to create 1 locator and 1 cache server on each host, where the locators take care of load balancing among the cache servers. A replicated region will be created in both cache servers. When a client creates/updates a region on one cache server using gfsh or the Java API, it should be replicated to the other one.
Using gfsh, I am able to start a locator (locator 1) and a cache server (server 1) on host_A, and likewise on host_B. I have created a region (RegionA) on both servers.
Is that all I have to do? Pivotal tutorials talk about having a locator and multiple cache servers on the same machine. I could not find any appropriate resource that talks about a multi-host configuration.
After starting the locators on both hosts, I am starting the servers on each host like this:
start server --name=server1 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
start server --name=server2 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
When I run "list members" in gfsh, host_B shows (locator 2, server 1 [from host_A], server 2), but host_A shows locator 1 only. Ideally I am expecting 2 locators and 2 servers as members on both machines. Is that not right?
The steps look just fine; are you having any issues, or is something not working while using the started cluster? You can go through Pivotal GemFire in 15 Minutes or Less to get to know how to start locators and servers, and how to interact with them as well. The only extra item I can think of (not mentioned within the previous link, as all members are started locally within the same gfsh session) is that you need to correctly configure the --locators parameter when starting your members; more information about how this works can be found in How Member Discovery Works and Configuring Peer-to-Peer Discovery.
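To illustrate that last point with the host names and port from the question, the locators themselves should also be started with --locators pointing to both of them, so they join the same distributed system; something like:

On host_A:
start locator --name=locator1 --port=10334 --locators=host_A[10334],host_B[10334]
On host_B:
start locator --name=locator2 --port=10334 --locators=host_A[10334],host_B[10334]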
Just for your reference, you can have as many members as you want per host, there's no implicit limit about this other than the actual physical resources on the host itself (memory, disk, ports, network throughput, etc.). Keep in mind, however, that it is always better to have only one member per host to achieve the highest reliability and availability for both your data and locator services.
Hope this helps, cheers.

How to use Docker to run various web servers on default ports on one server?

My goal is to have web servers that work on the default port so users don't have to type in a port number. Easy to do with a LAMP stack, where A is Apache... and no other web server exists. However, if I purchase general-purpose hosting with CentOS and I want to run:
1) Gunicorn/NGINX for Python/Django -> accessed from example.com from outside (no port required to be entered in the web browser).
2) Spring framework in a Java EE container - Java EE defaults to port 8080 and other ports in that range, but people just enter a domain name and expect it to work. -> So reachable from example2.com
3) Node.js - Reachable from example3.com
4) PHP apps such as WordPress, Drupal on LAMP - example3.com
Recommendations are appreciated.
The closest thing in my experience that seems to do this would be AWS with a load balancer allowing access from the public web, with app servers accessible only from the load balancer.
Thanks,
Bruce
You can use nearly any HTTP server in front to do this kind of job.
Bind everything (Tomcat, Node.js, Gunicorn, uWSGI, etc.) to local HTTP or Unix file sockets and use the proxy feature of your favorite server to bundle them all on this host. In nginx terms: use different locations on one server and/or different server blocks with proper server names set to build your custom host.
A few servers:
nginx with proxy feature
apache2 supports setups like that too, if you use mod_proxy.
haproxy is another alternative
In the end it depends on your specific needs (and experience) which setup to pick.
Edit: I slightly missed the Docker part - but the same thing works for containers, except that you do not use file sockets; instead you wire everything up with (HTTP) sockets on private or public networks.
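For example, the nginx side of such a setup might look roughly like this (the domain names, ports, and socket path are placeholders for whatever your apps actually bind to):

# Django behind Gunicorn on a local Unix socket
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}

# Spring app in a Java EE container on port 8080
server {
    listen 80;
    server_name example2.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;
    }
}

# Node.js app on port 3000
server {
    listen 80;
    server_name example3.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3000;
    }
}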

How configure aws-ec2 Instance to run playframework 1.2.7 application

I have deployed our Play Framework 1.2.7 web application to an AWS EC2 Ubuntu instance. I started the application on port 8081, since on 80 or 8080 it complains about not being able to bind to those ports. How can I configure the Ubuntu instance, either through the AWS security group or on Ubuntu itself, so that I wouldn't have to add the port 8081 to the end of the public URL or the public IP provided by AWS?
i.e. I don't want to do this:
example.com:8081 / ip4:8081
But I just want to use:
example.com / ip4
to access the application.
Please I need help on this.
The problem is that on Ubuntu, ports below 1024 are privileged. This means that normal users cannot bind to them. To start Play on port 80 you could simply start it as the root user. However, it's not best practice to run a web server as root, due to possible security issues.
I'd suggest starting it on whatever non-privileged port you want, as a normal user, and making use of an Elastic Load Balancer (ELB) to redirect all inbound traffic on port 80 (or 443, for instance) to your Play port. You can accomplish this simply using the AWS web interface when creating an ELB.
So users will reach your Play instance by calling the ELB on port 80, using the Amazon auto-assigned DNS name.
Example flow:
User browser --> http://your-elb-dns-name.com --> your_play_server_ip:8081
Just make sure that the security group associated with your Play server instance accepts inbound traffic on 8081 from your ELB (you can identify your ELB using the Amazon ID assigned during its creation).
Another great advantage of this ELB approach is that you can use it as a reverse proxy to hide your EC2 instance(s) IP(s) from the internet. In fact, if you use an ELB you could also avoid assigning a public IP to your EC2 instance during creation. The ELB doesn't need to know a public IP because it will have access to the Virtual Private Cloud (VPC) in which your EC2 instance was started.
Another possible approach, if you don't want to use an ELB, is to install NGINX or Apache on your EC2 instance to act as a reverse proxy, but I think you should make use of Amazon Web Services to accomplish this. You may want to use an internal NGINX or Apache reverse proxy if you need to hide a particular resource of your Play server from the internet.
https://aws.amazon.com/it/elasticloadbalancing/
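For reference, a rough AWS CLI equivalent of the listener setup described above, for a classic ELB (the load balancer name, availability zone, and instance ID are placeholders):

# create a classic ELB that forwards port 80 to the Play port 8081
aws elb create-load-balancer --load-balancer-name play-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8081" \
    --availability-zones us-east-1a
# register the EC2 instance running Play behind it
aws elb register-instances-with-load-balancer --load-balancer-name play-lb \
    --instances i-0123456789abcdef0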

Control the routing of load balancer from a Tomcat

I have a load balancer problem. All load balancer configuration examples I have read inspect the client data and base all load-balancing routing decisions on that. I have a different problem: I need to let the application server tell the load balancer that it serves a specific URL right now.
Background:
I have around 10000 hardware devices which connect to Tomcat servers (via a binary TCP protocol). The Tomcat servers also serve HTTP to clients who would like to communicate with these devices.
I don't know when a hardware device connects (and I can't identify them on the connection), but after a device has connected I want all HTTP requests from clients directed towards that device to go to that Tomcat server. The hardware devices are load balanced by round-robin DNS.
Question:
Are there any good HTTP load balancers to which the Tomcat server can say "hey, device with id xxx just connected, please redirect all traffic towards this device to me"? The HTTP requests are easy to identify: they have the id of the device in the request URL.
Any suggestions on load balancers or google queries to make would be appreciated.
Interesting problem you've got there. I've had the same problem as you, but I was using JBoss AS 7 instead of Tomcat. However, the principles are more or less the same.
We solved this issue by using Apache with mod_cluster, which allows the Tomcat or JBoss server to register with the load balancer which contexts it has available. The load balancer will determine which application server has the context and route the traffic to it.
There are lots of tutorials for how to do this online, here is one good example.
http://www.devx.com/Java/Article/48086
For the original question, I think you are not looking for a load balancer, but just a plain reverse proxy with the twist that it has to be dynamic.
Check out Apache httpd mod_proxy with mod_rewrite. For the dynamic part, maybe your Tomcats can register their connected "refrigerators" in a SQL database; in that case, use RewriteMap with dbd.
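A very rough, untested sketch of that idea, assuming Apache 2.4 with mod_rewrite, mod_proxy, mod_proxy_http and mod_dbd loaded, a hypothetical device_routes table that the Tomcats keep up to date with the backend host:port per device, and a URL pattern like /device/<id>/... (all of these names are placeholders, not from the original question):

# mod_dbd connection to the table the Tomcats write their registrations to
DBDriver mysql
DBDParams "host=localhost,dbname=routing,user=apache,pass=secret"

RewriteEngine On
# look up which Tomcat currently serves a given device id
RewriteMap tomcatfor "fastdbd:SELECT backend FROM device_routes WHERE device_id = %s"
# proxy /device/<id>/... to the backend registered for that id
RewriteRule ^/device/([^/]+)/(.*)$ http://${tomcatfor:$1}/device/$1/$2 [P,L]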

How to configure Apache to redirect subdomains to Tomcat applications

I have a few applications hosted on Tomcat, running on a machine called test-websites through port 8080. So they are accessible like this:
http://test-websites:8080/app1/
http://test-websites:8080/app2/
...
http://test-websites:8080/appN/
What I need to do is make these applications accessible on my local network by:
http://app1.test-websites/
http://app2.test-websites/
...
http://appN.test-websites/
As I add new applications to Tomcat's webapps folder, I want them to be automatically available using the same subdomain pattern.
So I thought using Apache in front of Tomcat to do the URL rewriting would be a good idea, but so far I have not been able to configure the virtual host on Apache to make this redirect. I installed apache2 on port 80 and I see the default "It works!" Apache page when I access http://test-websites/, but I couldn't find out how to redirect to the apps in Tomcat following the format above.
I have searched for over 4 hours and didn't get an answer for this use case. Any help is much appreciated!
Thank you!
Eduardo
First you need to add a DNS entry for app1.test-websites, app2.test-websites, ... such that it points to test-websites. Generally a CNAME entry works best in this case. If you only need the URLs to resolve on your local machine (for testing purposes), you can just update your /etc/hosts or C:\windows\system32\drivers\etc\hosts file. Otherwise you need to figure out how your company's network is set up and change the DNS entry (if it's a Windows domain network, normally there's a DNS service on the domain controller; on some smaller networks you have to configure it on the router).
Next, the quickest way to achieve this is to not use apache2 in front of it, but simply have Tomcat listen on port 80. You can set up virtual hosts on Tomcat such that it serves a different web app depending on the host name requested, as sketched below.
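A rough sketch of what that looks like in Tomcat's conf/server.xml (the host names match the question; the appBase directories are placeholders, and the app you want at the root of each subdomain is deployed into that directory as ROOT.war):

<!-- inside the <Engine name="Catalina" ...> element -->
<Host name="app1.test-websites" appBase="webapps-app1" unpackWARs="true" autoDeploy="true"/>
<Host name="app2.test-websites" appBase="webapps-app2" unpackWARs="true" autoDeploy="true"/>

To listen on port 80 directly, change the HTTP <Connector port="8080" .../> in the same file to port="80"; on Linux that requires running Tomcat as root or granting it the privileged-port capability (e.g. via authbind or setcap).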
