Huge latency on https websockets (wss) - java

I'm working on an MMO browser game called mope.io (the game is live at https://mope.io). We recently added HTTPS support, but have noticed a HUGE amount of latency over wss websockets. On many of our game servers with HTTPS (the issue seems to hit some servers at random), there is a delay of several seconds on wss that was never seen on ws.
Quick info: our game server sends 10 update packets per second, giving info on what changed in-game.
We use Cloudflare for our site (SSL setting Full (Strict)), with our own wildcard certificate (*.mope.io). All of our game servers have matching DNS records which fall under this certificate, so that the websockets can work over HTTPS: we connect to e.g. wss://server1.mope.io:7020 instead of ws://1.2.3.4:7020.
The game servers are written in Java, using the following library: https://github.com/TooTallNate/Java-WebSocket
Any ideas why websockets could perform so terribly under wss/TLS? This happens even when I'm the only one connected to the server. Any help/guidance is greatly appreciated :)
Extra info: I've noticed an 11s time-to-first-byte on the first HTTPS request when connecting to the site, before Cloudflare cached it. What could cause this!?

My friend had a similar problem in his Node.js wss game server, and he suspected a DoS attack. He managed to reproduce the "freezing" on his server with just a single computer by rapidly opening WSS connections. I am not sure whether this affects only secure websocket servers or both insecure and secure ones.
The problem occurred with both the ws and uws libraries.
With the websockets/ws library, the lag happened even when the connections were rejected in verifyClient(), before they could ever reach the application's own connection-handling logic (which would otherwise have been a reasonable explanation for the lag). So I wonder whether the bottleneck is somewhere in Node's underlying socket handling for secure connections.
Our solution to prevent the freezing was to set up iptables with a rate limit. Replace 1234 with your server port.
sudo iptables -I INPUT -p tcp --dport 1234 -m state --state NEW -m recent --set
sudo iptables -I INPUT -p tcp --dport 1234 -m state --state NEW -m recent --update --seconds 1 --hitcount 2 -j DROP
That will allow 2 new connections per second per IP address. Also save your iptables rules if necessary, since they reset on system restart.
https://serverfault.com/questions/296838/traffic-filtering-for-websockets
https://debian-administration.org/article/187/Using_iptables_to_rate-limit_incoming_connections
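If you cannot (or would rather not) touch iptables, roughly the same limit can be enforced at the application layer. Below is a sketch against the TooTallNate Java-WebSocket API used by the question; RateLimitedServer and the 2-connections-per-second window are my own illustrative choices, and note that onOpen() fires after the TLS handshake, so unlike the iptables rule this does not save the handshake CPU cost, it only shields your game logic.

import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.java_websocket.WebSocket;
import org.java_websocket.framing.CloseFrame;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

// Sketch: close connections from IPs that connect more than twice per
// second, mirroring the iptables rule above. Entries are never evicted
// here; a real server would also clean up the map.
public class RateLimitedServer extends WebSocketServer {
    private static final int MAX_PER_WINDOW = 2;
    private static final long WINDOW_MS = 1000;
    private final Map<String, long[]> hits = new ConcurrentHashMap<>(); // ip -> {windowStart, count}

    public RateLimitedServer(int port) {
        super(new InetSocketAddress(port));
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) {
        String ip = conn.getRemoteSocketAddress().getAddress().getHostAddress();
        long now = System.currentTimeMillis();
        long[] entry = hits.computeIfAbsent(ip, k -> new long[] { now, 0 });
        synchronized (entry) {
            if (now - entry[0] > WINDOW_MS) { entry[0] = now; entry[1] = 0; }
            if (++entry[1] > MAX_PER_WINDOW) {
                conn.close(CloseFrame.POLICY_VALIDATION); // 1008: rate limit exceeded
                return;
            }
        }
        // ... normal connection handling ...
    }

    @Override public void onClose(WebSocket conn, int code, String reason, boolean remote) {}
    @Override public void onMessage(WebSocket conn, String message) {}
    @Override public void onError(WebSocket conn, Exception ex) {}
    @Override public void onStart() {} // present in recent library versions
}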

Related

Kubernetes: Exposed service to deployment unreachable

I deployed a container on Google Container Engine and it runs fine. Now, I want to expose it.
This application is a service that listens on 2 ports. Using kubectl expose deployment, I created 2 load balancers, one for each port.
I made 2 load balancers because the kubectl expose command doesn't seem to allow more than one port. While I defined them as type=LoadBalancer in kubectl, once they got created on GKE they showed up as forwarding rules associated with 2 target pools, also created by kubectl. kubectl also automatically made firewall rules for each balancer.
The first one I made exposes the application as it should. I am able to communicate with the application and get a response.
The 2nd one does not connect at all. I keep getting either connection refused or connection timeout. To troubleshoot, I stripped my firewall rules down to be as permissive as possible. Since ICMP is allowed by default, pinging the IP of this balancer results in replies.
Does Kubernetes only allow one load balancer to work, even if more than one can be configured? If it matters, the working balancer's external IP follows the pattern 35.xxx.xxx.xxx and the IP of the balancer that's not working is 107.xxx.xxx.xxx.
As a side question, is there a way to expose more than one port using kubectl expose --port, without defining a range? I just need 2 ports.
Lastly, I tried using the Google console, but I couldn't get a load balancer or forwarding rules created there to work with what's on Kubernetes the way the kubectl-created ones do.
Here is the command I used, modifying the port and service name on the 2nd use:
kubectl expose deployment myapp --name=my-app-balancer --type=LoadBalancer --port 62697 --selector="app=my-app"
My firewall rule is basically set to allow all incoming TCP connections from 0.0.0.0/0.
Edit:
The external IP had nothing to do with it. I kept deleting and recreating the balancers until I was given an IP of xxx.xxx.xxx.xxx for the working balancer, and it still worked fine.
I also tried deleting the working balancer and re-creating the one that wasn't working, to see whether there was a conflict between balancers. The 2nd balancer still didn't work, even when it was the only one running.
I'm currently investigating the code for the 2nd service of my app, though it's practically the same as the 1st service: a simple ServerSocket implementation that listens on a defined port.
After more thorough investigation (opening a console in the running pod, installing tcpdump, iptables, etc.), I found that the service (i.e. the load balancer) was in fact reachable. What happened was that although traffic reached the container's virtual network interface (eth0), the data wasn't routed to the listening services, even though these were bound to IP aliases of the interface (eth0:1, eth0:2).
The last step to getting this to work was to create the required routes through
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport <listener-port> -j DNAT --to-destination <listener-ip>
Note, there are other ways to accomplish this, but this was the one I chose. I wish the Docker/Kubernetes documentation mentioned this.

How to keep sending messages in websockets even when the network is down?

I am using the Java implementation of websockets (org.java_websocket). I am using "ifconfig lo down" to simulate a network failure. I would like the server to keep sending messages even when the network is down and have them resent (through the TCP retransmission mechanism) once the network is up again. However, the Java implementation has the following check in the send function:
private void send( Collection<Framedata> frames ) {
    if( !isOpen() )
        throw new WebsocketNotConnectedException();
which leads to
an error occured
close a stream
Exception in thread "main" org.java_websocket.exceptions.WebsocketNotConnectedException
as soon as I simulate connectivity loss between the server and client.
Since I am not properly calling the close() function on the websocket, I feel the TCP mechanism should keep working for some time before timing out, which would then lead the websocket layer to close the connection. Am I expecting something unreasonable? Is there an implementation that can help me?
It is not a good idea to simulate network connectivity problems with ifconfig down, because the operating system knows the interface status. The OS may (and does) handle a connectivity error and an interface error differently, so a network application gets a different error indication.
One option for simulating connectivity errors on a Linux box is iptables. Let's say your application uses port 80. You can drop all communication to/from port 80 via
iptables -A INPUT -p tcp --destination-port 80 -j DROP
iptables -A OUTPUT -p tcp --source-port 80 -j DROP
Traffic dropped by iptables is handled like a network outage.
I think your test setup does not really do what you want. I guess that as soon as you run ifconfig, the operating system is smart enough to look up which sockets are bound to this interface and close them. Thereby Java and the websocket application get notified about the closed socket, and the websocket connection gets cleaned up.
You probably need another way to simulate an "unplugged" connection, in the sense that you want to see high packet loss/latency but without an actual connection-close notification.
I am very late answering this question, but I think my answer can help others:
You should use
connectBlocking()
instead of
connect()
Note that connectBlocking() blocks your thread!
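That said, neither call makes the library buffer messages across an outage; if the goal is "keep sending and deliver later", the queue has to live in your own code. A rough sketch against the org.java_websocket.client.WebSocketClient API: BufferingClient, sendOrQueue(), and the 1-second retry delay are my own illustrative choices, and reconnectBlocking() assumes a library version (1.3.5+) that provides it.

import java.net.URI;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.java_websocket.client.WebSocketClient;
import org.java_websocket.handshake.ServerHandshake;

// Sketch: queue outbound messages while disconnected and flush them
// once the connection is (re)established.
public class BufferingClient extends WebSocketClient {
    private final Queue<String> pending = new ConcurrentLinkedQueue<>();

    public BufferingClient(URI serverUri) {
        super(serverUri);
    }

    // Use this instead of send(): it never throws WebsocketNotConnectedException.
    public void sendOrQueue(String message) {
        pending.add(message);
        flush();
    }

    private synchronized void flush() {
        String msg;
        while (isOpen() && (msg = pending.peek()) != null) {
            try {
                send(msg);
                pending.poll(); // dequeue only after a successful send
            } catch (Exception e) {
                break; // connection dropped mid-flush; message stays queued
            }
        }
    }

    @Override public void onOpen(ServerHandshake handshake) { flush(); }
    @Override public void onMessage(String message) {}
    @Override public void onError(Exception ex) {}

    @Override
    public void onClose(int code, String reason, boolean remote) {
        // Reconnect from a fresh thread (reconnecting directly inside
        // onClose would run on the socket's own, now-dying thread).
        new Thread(() -> {
            try {
                Thread.sleep(1000);
                reconnectBlocking(); // the next onClose schedules the next retry
            } catch (Exception ignored) {
            }
        }).start();
    }
}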

Sending messages from various IP addresses to a single server using Java

My issue is a protocol that identifies terminals by their sending IP. I want to manage the connections of several terminals to this server using some kind of proxy that implements that protocol.
So I have Terminal A, which is identified by the server by the IP 1.2.3.4, and Terminal B, which is identified by the server using the IP 5.6.7.8. Now the proxy will be in a local network with Terminals A and B.
When Terminal A wants to reach the server, it will query the proxy, and the proxy needs to send the request on behalf of Terminal A using IP 1.2.3.4 to the server.
When Terminal B wants to reach the server, it will query the proxy, and the proxy needs to send the request on behalf of Terminal B using IP 5.6.7.8 to the server.
Is it even possible to solve this in Java, or do I have to do network voodoo on the router to achieve it?
Edit: to make things clear: I know what a network proxy is and what a router does. I also know how to solve my problem on the network level using advanced network voodoo if required. What I want to know is whether my guess that the problem can't be solved in Java is correct. So the bottom-line question is: can I use Java to send traffic from a specific network interface to which a specific IP has been assigned, or do I have to rely on what the operating system does to route my traffic (in which case the advanced network voodoo would be required)?
Edit 2: if routing of network traffic can be done in Java, I'd just like a quick pointer where to look. My own googling didn't return any useful results.
1) There are already some implementations of TCP tunnelling in Java. Below are some examples:
http://jtcpfwd.sourceforge.net/
http://sourceforge.net/projects/jttt/
2) Even with these existing implementations, you can still roll your own by forwarding packets arriving at the proxy using java.net.Socket.
3) I still think a better option would be a specific implementation using java.lang.Runtime.exec() and the socat Linux command. socat is just like netcat, but with security and chrooting support; it works over various protocols and through files, pipes, devices, TCP sockets, Unix sockets, a SOCKS4 client, proxy CONNECT, SSL, etc. To redirect all port 80 connections to IP 202.54.1.5:
$ socat TCP-LISTEN:80,fork TCP:202.54.1.5:80
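On the bottom-line question itself: yes, that part is possible in plain Java. java.net.Socket lets you choose the source address by binding before connecting, provided the OS already has that IP configured on an interface of the proxy machine (that part remains network setup, not Java). A minimal sketch; the server host and port are made up for illustration:

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SourceBoundConnection {
    public static void main(String[] args) throws IOException {
        // 1.2.3.4 must already be assigned to one of this host's
        // interfaces (e.g. as an alias); Java cannot conjure addresses.
        InetAddress terminalA = InetAddress.getByName("1.2.3.4");
        try (Socket socket = new Socket()) {
            socket.bind(new InetSocketAddress(terminalA, 0)); // port 0 = any free local port
            socket.connect(new InetSocketAddress("server.example.com", 9000)); // hypothetical server
            // ... forward Terminal A's request over this socket ...
        }
    }
}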

Apache multiple requests with JMeter

I am using JMeter to test multiple requests to my web application.
I set the Number of Threads in JMeter to 50.
My process is as follows:
1) Login page.
2) Login with userID and password.
3) Show menu page.
4) Click search page.
5) Go to search page.
6) Click search button.
7) Click the searched result link to go to update page.
8) Update data and click update button.
9) Show the updated result page.
10) Go back to search page.
11) Log out button click.
In the above process, I used a Loop Controller around steps 5 to 10, with a loop count of 5.
In that situation, if I use more than 25 threads to run the JMeter test, an "address already in use" socket binding exception occurs.
I would like to know how to solve that problem.
Looks like your client ran out of ephemeral ports, or there's some problem with your client environment.
Are you using Windows?
You can possibly do at least the following:
Windows: look into this article for a solution for a Windows system as the JMeter host.
Or use a Linux system instead as the host to run your JMeter load scenarios.
You may also find this article useful for your testing activities (I've seen JBoss in the tags).
UPDATE:
Once more from the linked article above:
When an HTTP request is made, an ephemeral port is allocated for the TCP/IP connection. The ephemeral port range is 32768-61000. After the client closes the connection, the connection is placed in the TIME_WAIT state for 60 seconds.
If JMeter (HttpClient) is sending thousands of HTTP requests per second and creating new TCP/IP connections, the system will run out of available ephemeral ports for allocation.
. . .
Otherwise, the following messages may appear in the JMeter JTL files:
Non HTTP response code: java.net.BindException
Non HTTP response message: Address already in use
The solution is to enable fast recycling of TIME_WAIT sockets.
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
Other options include TCP_FIN_TIMEOUT, to reduce how long a connection is placed in the TIME_WAIT state, and TCP_TW_REUSE, to allow the system to reuse connections placed in the TIME_WAIT state.
On the server's side:
This enables fast recycling of TIME_WAIT sockets
/sbin/sysctl -w net.ipv4.tcp_tw_recycle=1
This allows reusing sockets in TIME_WAIT state for new connections - a safer alternative to tcp_tw_recycle
/sbin/sysctl -w net.ipv4.tcp_tw_reuse=1
The tcp_tw_reuse setting is particularly useful in environments where numerous short connections are opened and left in TIME_WAIT state, such as web servers. Reusing the sockets can be very effective in reducing server load.
Maximum number of timewait sockets held by system simultaneously
/sbin/sysctl -w net.ipv4.tcp_max_tw_buckets=30000
Or, to do the same in a way that survives reboot, add the lines below to the /etc/sysctl.conf file (and apply them with sysctl -p):
net.ipv4.tcp_max_tw_buckets = 30000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
Also look at the result of ulimit -n on the server's side.
The default value of 1024 for the max-open-files limit can explain the appearance of BindExceptions at around 1000 connections.
You can also monitor the number of connections between the server and JMeter during the test run, e.g. with
netstat -an | grep SERVER_PORT | wc -l
to find the connection limit, if any.
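The mechanism the article describes is easy to reproduce in a few lines of plain Java, which makes the diagnosis concrete: every short-lived client socket consumes an ephemeral local port that lingers in TIME_WAIT after close. This is an illustration only; the target host and port are placeholders, and the exact exception you hit (BindException vs. a connect failure) depends on the platform.

import java.io.IOException;
import java.net.Socket;

// Each iteration binds a fresh ephemeral local port; after close() the
// port sits in TIME_WAIT for ~60s. With the default range (32768-61000,
// about 28k ports), a tight loop can exhaust them and start failing.
public class EphemeralPortDrain {
    public static void main(String[] args) {
        for (int i = 0; i < 40_000; i++) {
            try (Socket s = new Socket("localhost", 8080)) { // placeholder target
                // connection closed immediately; local port enters TIME_WAIT
            } catch (IOException e) {
                System.out.println("Failed at connection " + i + ": " + e);
                break;
            }
        }
    }
}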

Opening port 80 with Java application on Ubuntu

What I need to do is run a Java application which is a RESTful service server side, written with Restlet. This service will be called by another app running on Google App Engine.
Because of the restrictions of GAE, every HTTP call is limited to ports 80 and 443 (HTTP and HTTPS) with the HttpUrlConnection class. As a result, I have to deploy my server-side application on port 80 or 443.
However, because the app is running on Ubuntu, where ports under 1024 cannot be bound by a non-root user, an Access Denied exception is thrown when I run my app.
The solutions that have come into my mind includes:
Changing the security policy of the JRE (the lib/security/java.policy file) to grant the permission java.net.SocketPermission "*:80", "listen, connect, accept, resolve". However, whether I include this file via the command line or override the content of the JRE's java.policy file, the same exception keeps coming out.
Trying to log in as the root user; however, because of my unfamiliarity with Unix, I don't know how to do that.
Another solution I haven't tried is to map all calls to port 80 to a higher port like 1234. Then I can deploy my app on 1234 without problems, and GAE can keep sending requests to port 80. But how to bridge that gap is still a problem.
Currently I am using a "hacking" method: packaging the application into a jar file and running the jar with sudo, i.e. with root privileges. It works for now, but is definitely not appropriate for a real deployment environment.
So if anyone has any idea about a solution, thanks very much!
You can use iptables to redirect using something like this:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport http -j REDIRECT --to-ports 8080
Note that iptables rules are lost on reboot; to make the change permanent, dump the ruleset with iptables-save and arrange for it to be restored at boot (e.g. via the iptables-persistent package):
iptables-save > /etc/iptables/rules.v4
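With the redirect in place, the Java process itself stays unprivileged; it simply binds the high port. A bare-bones stand-in for the real Restlet service, purely for illustration:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Binds an unprivileged port (8080) as a normal user; the iptables rule
// above redirects incoming port-80 traffic to it.
public class HighPortServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = server.accept()) {
                    String body = "ok";
                    String response = "HTTP/1.1 200 OK\r\n"
                            + "Content-Length: " + body.length() + "\r\n"
                            + "Connection: close\r\n\r\n"
                            + body;
                    client.getOutputStream().write(response.getBytes(StandardCharsets.US_ASCII));
                }
            }
        }
    }
}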
Solution 1: it won't change anything; this is not a Java limitation, it's the OS that is preventing you from using privileged port numbers (ports lower than 1024).
Solution 2: not a good idea IMO; there are good reasons not to run a process as root.
Solution 3: Use setcap or iptables. See this previous question.
A much easier solution is to set up a reverse proxy in Apache httpd, which Ubuntu will run for you on port 80 from /etc/init.d.
There are also ways of getting there with iptables, but I don't have recent personal experience. I've got such a proxy running right now.
