AWS EC2 Tomcat Java Webapp - How can I manage bot HTTP sessions - java

I have a Tomcat 8 Java webapp deployed in an AWS EC2 Ubuntu instance.
It seems that a lot of bots are trying to access my app, because in my JavaMelody monitoring I can see one-request bot sessions cached by Spring Security, like:
DefaultSavedRequest[http://52.27.73.101/phpmy-admin/]
DefaultSavedRequest[http://52.27.73.101/wp-login.php]
DefaultSavedRequest[http://52.27.73.101/admin/phpmyadmin/]
Is there a way to prevent those bot requests? Maybe a Spring Security config that does not save them in the cache, or a Tomcat config that does something similar?
Lots of them don't even have an IP, country, or user agent.
Apart from the security concern, my JavaMelody HTTP sessions info is not trustworthy because there are so many of these.
The EC2 instance is behind an AWS load balancer, so maybe that can help here too.

If you have an EC2 instance you have full control of your OS. You could use any tool, for example iptables, to control the network traffic. Maybe with something like this:
iptables -I INPUT -s <bot IP source> -j DROP
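If the scans come from whole ranges rather than single addresses, you can drop a CIDR block instead, and persist the rules across reboots (a minimal sketch, assuming Ubuntu's iptables-persistent package; the address range is a placeholder):
iptables -I INPUT -s 198.51.100.0/24 -j DROP
apt-get install iptables-persistent
netfilter-persistent save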

Related

How to configure subdomains with Payara and virtual servers?

I'm struggling here with something that may be easy to do, but I haven't found a correct solution, so I hope you can help me.
Background
We are developing an application that consists of 4 different Java web projects:
AppA
AppB
AppC
WebService
All of these applications have to be accessed from 4 different subdomains of mydomain.com:
a.mydomain.com
b.mydomain.com
c.mydomain.com
api.mydomain.com
Technology
Application server: Payara Server 4 (which is almost the same as GlassFish 4).
Payara server is running inside a Docker container which in turn is running inside an Amazon EC2 instance.
I've used Amazon Route 53 in the following scenario:
What I have already done successfully
This was done for another project where there was only 1 app, accessed from a subdomain of otherdomainiown.com.
This works perfectly, because the DNS records at the domain provider (iPage) simply point to my Amazon Route 53 records for the hosted zone I configured. This hosted zone has an A record that points to the fixed IP of my Amazon EC2 instance. Then, Docker exposes Payara Server through port 80, which is mapped to port 8080, the port Payara uses by default to serve its applications.
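The mapping itself is just the usual Docker publish flag, something like this (the image name is a placeholder for whatever Payara image is actually used):
docker run -d -p 80:8080 my-payara-image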
Problem
Now I'm facing a similar scenario. The difference is that I have 4 different apps that need to be accessed through 4 different subdomains.
I've tried Virtual Servers (virtual hosts) with no luck. I'm not familiar with them, but I think they could be a possible solution.
I considered using Amazon S3 buckets to redirect but I don't think that's what I need.
This should be the final scenario (I only drew 2 subdomains in the image for simplicity):
Should I use Docker mappings to resolve this?
Should I use Virtual Servers?
Should I buy 4 different machines? (This would solve all of this in a few seconds, but buying more instances is not an option.)
Should I use a Docker container for each application?
As you can see, I'm a little lost, so it would be great if you could point me in the right direction.
Thanks in advance.
What are you using Route 53 for? What benefit do you get from it in this scenario?
There is a blog post on the Payara website which gives an overview of using Virtual Servers in Payara Server, but it's a bit in-depth to quote for an answer here.
The key point is that you still need to configure incoming traffic to arrive at different subdomains. If all your traffic is coming in on the same IP address, as it appears to be with Route 53, then it will be very tricky to differentiate which traffic should go to which endpoint.
The usual way to do this would be to have a load balancer or proxy where you have Route53 in your diagram. An Amazon ELB would be able to perform the redirects you need. A cheaper option (though it would involve more management) would be to use something like Apache httpd or Nginx to forward requests to the Payara Server.
You just need to create a virtual server for each subdomain and set the subdomain in the "Hosts" field. Then you need to deploy all 4 applications and select the proper virtual server in the "Virtual Servers" field. The blog linked by @Mike will guide you: https://blog.payara.fish/virtual-servers-in-payara-server
All of the virtual servers will be listening on the same IP address but Payara Server will read the domain from incoming HTTP requests and will route the request to the correct virtual server.
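On the command line, the setup would look roughly like this for each app (a sketch, assuming the default http-listener-1 network listener; the virtual server names are illustrative):
asadmin create-virtual-server --hosts a.mydomain.com --networklisteners http-listener-1 virtual-server-a
asadmin deploy --virtualservers virtual-server-a AppA.war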
However, this is recommended only for very small applications. Bigger applications should be deployed separately on different Payara Server instances running on different ports or on different machines. If you use Docker, you can run 4 instances in Docker and map them to different ports, as sketched below. Then you would need a proxy server (Apache httpd, Nginx, etc.) to route requests to the correct Payara instance (port) according to the domain name in the requests.
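For example, the Docker side could look roughly like this (a sketch; image and container names are placeholders):
docker run -d --name payara-a -p 8081:8080 my-payara-image
docker run -d --name payara-b -p 8082:8080 my-payara-image
docker run -d --name payara-c -p 8083:8080 my-payara-image
docker run -d --name payara-api -p 8084:8080 my-payara-image
The proxy in front would then forward a.mydomain.com to port 8081, b.mydomain.com to 8082, and so on.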

Kubernetes: Exposed service to deployment unreachable

I deployed a container on Google Container Engine and it runs fine. Now, I want to expose it.
This application is a service that listens on 2 ports. Using kubectl expose deployment, I created 2 load balancers, one for each port.
I made 2 load balancers because the kubectl expose command doesn't seem to allow more than one port. While I defined them as type=LoadBalancer in kubectl, once these got created on GKE they were defined as Forwarding rules associated with 2 Target pools, also created by kubectl. kubectl also automatically made firewall rules for each balancer.
The first one I made exposes the application as it should. I am able to communicate with the application and get a response.
The 2nd one does not connect at all. I keep getting either connection refused or connection timeout. In order to troubleshoot this, I stripped my firewall rules down to be as permissive as possible. Since ICMP is allowed by default, pinging the IP for this balancer results in replies.
Does Kubernetes only allow one load balancer to work, even if more than one can be configured? If it matters at all, the working balancer's external IP is in the pattern 35.xxx.xxx.xxx and the IP of the balancer that's not working is 107.xxx.xxx.xxx.
As a side question, is there a way to expose more than one port using kubectl expose --port, without defining a range? I.e., I just need 2 ports.
Lastly, I tried using the Google console, but I couldn't get the load balancer or forwarding rules to work with what's on Kubernetes the way doing it through kubectl does.
Here is the command I used, modifying the port and service name on the 2nd use:
kubectl expose deployment myapp --name=my-app-balancer --type=LoadBalancer --port 62697 --selector="app=my-app"
My firewall rule is basically set to allow all incoming TCP connections from 0.0.0.0/0.
Edit:
External IP had nothing to do with it. I kept deleting & recreating the balancers until I was given an IP of xxx.xxx.xxx.xxx for the working balancer, and the balancer still worked fine.
I've also tried deleting the working balancer and re-creating the one that wasn't working, to see if it's a conflict between balancers. The 2nd balancer still didn't work, even if it was the only one running.
I'm currently investigating the code for the 2nd service of my app, though it's practically the same as the 1st service, a simple ServerSocket implementation that listens on a defined port.
After more thorough investigation (opening a console in the running pod, installing tcpdump, iptables, etc.), I found that the service (i.e. load balancer) was, in fact, reachable. What happened in this situation was that although traffic reached the container's virtual network interface (eth0), the data wasn't routed to the listening services, even when these were IP aliases for the interface (eth0:1, eth0:2).
The last step to getting this to work was to create the required routes through
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport <listener-port> -j DNAT --to-destination <listener-ip>
Note that there are other ways to accomplish this, but this is the one I chose. I wish the Docker/Kubernetes documentation mentioned it.
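As for the side question in the original post: kubectl expose only takes a single --port, but a Service written out as a manifest can declare several ports at once. A minimal sketch (the second port number and the port names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-app-balancer
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: first
    port: 62697
  - name: second
    port: 62698
Apply it with kubectl apply -f service.yaml instead of running kubectl expose twice.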

How to use Docker to run various web servers on default ports on one server?

My goal is to have web servers that work on the default port so users don't have to type in a port number. That's easy to do with a LAMP stack, where A is Apache... and no other web server exists. However, if I purchase general-purpose hosting with CentOS, I want to run:
1) Gunicorn/nginx for Python/Django -> accessed from example.com from outside (no port required to be entered in the web browser).
2) Spring framework in a Java EE container - Java EE defaults to port 8080 and other ports in that range, but people just enter a domain name and expect it to work. -> So reachable from example2.com
3) Node.js -> reachable from example3.com
4) PHP apps such as WordPress or Drupal on LAMP -> example4.com
Recommendations are appreciated.
The closest thing I've seen to this is AWS with a load balancer allowing access from the public web, with the app servers accessible only from the load balancer.
Thanks,
Bruce
You can use nearly any HTTP server in front to do this kind of job.
Bind everything (Tomcat, Node.js, gunicorn, uWSGI, etc.) to local HTTP or file sockets and use the proxy feature of your favorite server to bundle them all on this host. In nginx terms: use different locations on one server and/or different server blocks with proper server names set to build your custom host.
A few servers:
nginx with proxy feature
apache2 supports setups like that too, if you use mod_proxy
haproxy is another alternative
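With nginx, for example, the server blocks might look roughly like this (a sketch; the domain names and backend ports are placeholders):
server {
    listen 80;
    server_name example.com;
    location / { proxy_pass http://127.0.0.1:8000; }
}
server {
    listen 80;
    server_name example2.com;
    location / { proxy_pass http://127.0.0.1:8080; }
}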
In the end, it depends on your specific needs (and experience) which setup you pick.
Edit: I missed Docker a little, but the same thing works for containers, except that you don't use file sockets; instead you wire everything up with (HTTP) sockets on private or public networks.

How to configure an AWS EC2 instance to run a Play Framework 1.2.7 application

I have deployed our Play Framework 1.2.7 web application to an AWS EC2 Ubuntu instance. I started the application on port 8081, since on 80 or 8080 it complains about not being able to bind to those ports. How can I configure the Ubuntu instance, either through the AWS security group or on Ubuntu itself, so that I don't have to add port 8081 to the end of the public URL or the public IP provided by AWS?
I.e., I don't want to do this:
example.com:8081 / ip4:8081
But I just want to use:
example.com / ip4
to access the application.
Please, I need help with this.
The problem is that on Ubuntu ports < 1024 are privileged. This means that normal users can do nothing with them. To start Play on port 80 you could simply run it as the root user. However, it's not best practice to run a web server as root, due to possible security issues.
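One workaround on the instance itself is to keep the application on a high port and redirect port 80 to it with iptables (a sketch, assuming Play listens on 8081):
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8081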
That said, I'd suggest starting it on whatever non-privileged port you want, as a normal user, and making use of an Elastic Load Balancer (ELB) to redirect all inbound traffic on port 80 (or 443, for instance) to your Play port. You can accomplish this simply using the AWS web interface when creating an ELB.
Users will then reach your Play instance by calling the ELB on port 80, using the Amazon auto-assigned DNS name.
Example flow:
User browser --> http://your-elb-dns-name.com --> your_play_server_ip:8081
Just make sure that the security group associated with your Play server instance accepts inbound traffic on 8081 from your ELB (you can identify your ELB by the Amazon ID assigned during its creation), for example with the rule sketched below.
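With the AWS CLI, such a rule might look like this (a sketch; both group IDs are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-play-instance --protocol tcp --port 8081 --source-group sg-elb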
Another great advantage of this ELB approach is that you can use it as a reverse proxy to hide your EC2 instance(s) IP(s) from the internet. In fact, if you use an ELB you could also avoid assigning a public IP to your EC2 instance during creation. The ELB doesn't need a public IP because it has access to the Virtual Private Cloud (VPC) in which your EC2 instance was started.
Another possible approach, if you don't want to use an ELB, is to install NGINX or Apache on your EC2 instance to act as a reverse proxy, but I think you should make use of Amazon web services to accomplish this. You may want an internal NGINX or Apache reverse proxy if you need to hide a particular resource of your Play server from the internet.
https://aws.amazon.com/it/elasticloadbalancing/

Best practices for building a simple, scalable cluster on Amazon EC2 for a Java web app

I want to build a Java web app and deploy it on EC2. It will be written in Java and will use MySQL. I was hoping to get some pointers on the actual deployment process and configuration. In particular I'm interested in the following topics:
machine images (DIY vs ready made)
mysql replication and backup to S3
ways of deploying and redeploying the app to EC2 without interruptions
firewalls?
load balancing and auto scaling
cloudtools (or alternative tools)
I can only speak to a few of your discussion points from experience. I've had to strip out hyperlinks to the various Amazon products because I'm new to Stack Overflow and don't have enough rep to post more than one link.
Machine Images: While you can certainly start with your own machine image and convert it to an AMI with the EC2 AMI Tools, I prefer starting with one of Amazon's ready made images and customizing it to suit my needs. The advantage here is that you already know that the base image will deploy, you're more likely to get help on the forum or from the EC2 staff, and you don't have to go through the trouble of setting up a physical machine or your own VM in order to bundle the image and upload it. If you're using the EC2 API Tools, you can get a list of the available base images with ec2-describe-images -o amazon.
MySQL Replication and Backup: Check out the new(ish) Amazon Relational Database Service. It's designed to work with MySQL, can perform automatic backups, and scales easily.
Firewalls: Handling the firewalls for your instances is easy with the API tools. For example, you can create a group,
ec2-add-group condor -d "Condor Workers"
set up firewall rules for that group (a bad example: it opens all UDP and TCP ports for a CIDR range),
ec2-authorize condor -P tcp -p 0-65535 -s 129.127.0.0/16
ec2-authorize condor -P udp -p 0-65535 -s 129.127.0.0/16
and then launch your instances as part of the group, so that they inherit the firewall rules.
ec2-run-instances ami-12345678 -g condor -k mykeypair
The tricky part is going the other direction: allowing your EC2 instances to communicate with your company/school/personal network. Since you don't know what IP your instances will have before they start (Amazon Elastic IP can alleviate this to some extent), you're generally forced to allow some subnet of the EC2 cloud.
You can also set up iptables or additional firewalls on your instances.
Load Balancing: Consider Amazon Elastic Load Balancing. If that doesn't suit your needs, you can create your own "virtual cluster" and use whatever framework you like.
