Fast TCP exchange rate in Java

I am making a multiplayer game in Java, using a TCP connection for client/server communication. After googling the issue, I tried to optimize the transmission by sending packets only when something changes in the game, by flushing the object stream and the socket after each transmission, and by setting setTcpNoDelay to true. But I still see lag, and sometimes the incoming packets stall for a few seconds (sometimes forever, or at least for longer than I was willing to wait) until another packet is sent, which sounds like buffering to me.
I haven't tested it over a LAN, just on localhost.
I know that Minecraft is written in Java and works very well with minimal lag, so there must be a way to fix this...
Are there any other tips I can implement to speed up the TCP client/server communication?
Edit: The player instance includes the player's speed, so between two incoming packets the game extrapolates the other players from their last known position using their last known speed.
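For what it's worth, a frequent cause of exactly this symptom with ObjectOutputStream is its back-reference cache: if the same (mutated) object is written twice, the second write transmits only a reference to the first copy, so the receiver keeps seeing stale state until reset() is called. A minimal sketch, assuming a Serializable PlayerState class and a placeholder host/port:

    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.net.Socket;

    class UpdateSender {
        private final ObjectOutputStream out;

        UpdateSender(String host, int port) throws IOException {
            Socket socket = new Socket(host, port);
            socket.setTcpNoDelay(true);   // disable Nagle's algorithm, as the question already does
            out = new ObjectOutputStream(socket.getOutputStream());
        }

        void sendUpdate(PlayerState state) throws IOException {
            out.reset();            // clear the stream's back-reference cache...
            out.writeObject(state); // ...otherwise a re-sent, mutated object arrives stale
            out.flush();            // push the bytes to the socket immediately
        }
    }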

Related

Java server wait for all clients to be ready

I'm currently making a little multiplayer game with Java. Two clients connect to the server. Once connected, in each round they send a number to the server; they can do it simultaneously. I'm using threads to handle each socket's incoming data. Once both clients have sent their number, the server sends data back to the clients.
Currently, I'm doing this with a single while loop, something like this in pseudocode:
while (!readyForEvaluate) {
    readyForEvaluate = allClientSentData(); // busy-waits, burning a CPU core until both clients have sent
}
Where allClientSentData() returns a boolean, computed from flags indicating whether players 1 and 2 have sent their numbers.
Basically, this solution works for me, but my question is: is there a more efficient way of solving this kind of problem, or what is the general solution?
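One common replacement for that busy-wait is a java.util.concurrent.CountDownLatch; the surrounding server code here is assumed, so this is only a sketch:

    import java.util.concurrent.CountDownLatch;

    // One latch per round, sized to the number of players. Each client-handler
    // thread counts down once its number has arrived; the server thread blocks
    // in await() instead of spinning.
    CountDownLatch numbersReceived = new CountDownLatch(2);

    // in each client-handler thread, after reading the client's number:
    numbersReceived.countDown();

    // in the server's round logic, replacing the while loop:
    numbersReceived.await();   // returns only once both clients have sent their number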

Java multicast listening and IGMP

I have an issue that is driving me crazy! Both design-wise and tech-wise.
I have a need to listen to a LOT of multicast addresses. They are divided into 3 groups per item that I am monitoring/collecting. I have gone down the road of having one process spin up 100 threads. Each thread uses 2 ports and three addresses/groups (2 of the groups are on the same port). I am using a MulticastChannel for each port, and select() to monitor for data. (I have used datagram but found NIO MulticastChannel much better.)
Anyway, I am seeing issues where I can subscribe to about a thousand of these, and data hums along nicely. The problem is, after a while some of them stop receiving data. I have confirmed with the system (CentOS) that I am still subscribed to these addresses, but the data just stops. I have monitors in my threads that watch for data drops and out-of-order packets via the RTP headers. When I detect that a thread has stopped getting data, I do a DROP/JOIN, and data then resumes.
I am thinking that a router in my path is dropping my subscription.
I am at my wits end writing code to stabilize this process.
Has anyone ever sent IGMP joins out on the network to keep the data flowing? Is this possible, or even reasonable?
BTW: The computer is an HP DL380 Gen-9 with a 10G fiber connection to a 6509 switch.
Any pointers on where to look would really help.
Please do not ask for any code examples.
The joinGroup() operation already sends out IGMP requests on the network. It shouldn't be necessary to send them out yourself, and it isn't possible in pure Java anyway.
You could economize on sockets and threads. A socket can join up to about 20 groups on most operating systems, and if you're using NIO and selectors there's no need for more than one thread anyway.
I have used datagram but found NIO MulticastChannel much better.
I don't know what this means. If you're referring to DatagramSocket, you can't use it for receiving multicasts, so the sentence is pointless. If you aren't, the sentence is meaningless.
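As a rough illustration of the suggestion above: one DatagramChannel can join several groups on one port and be serviced by a single Selector thread. In this sketch the interface name, group addresses and port are placeholders:

    import java.io.IOException;
    import java.net.*;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;

    NetworkInterface nif = NetworkInterface.getByName("eth0");
    DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET)
            .setOption(StandardSocketOptions.SO_REUSEADDR, true)
            .bind(new InetSocketAddress(5000))
            .setOption(StandardSocketOptions.IP_MULTICAST_IF, nif);
    channel.configureBlocking(false);
    channel.join(InetAddress.getByName("239.1.1.1"), nif);   // one MembershipKey per group
    channel.join(InetAddress.getByName("239.1.1.2"), nif);

    Selector selector = Selector.open();
    channel.register(selector, SelectionKey.OP_READ);

    ByteBuffer buf = ByteBuffer.allocate(1500);
    while (selector.select() > 0) {
        for (SelectionKey key : selector.selectedKeys()) {
            buf.clear();
            ((DatagramChannel) key.channel()).receive(buf);
            // ... process the datagram ...
        }
        selector.selectedKeys().clear();
    }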

TCP or UDP with Java

Sorry for such the long post. I have done lots and lots of research on this topic and keep second guessing myself on which path I should take. Thank you ahead of time for your thoughts and input.
Scenario:
Create a program in Java that sends packets of data every 0.2-0.5 seconds containing simple "at that moment" or live data to approximately 5-7 clients (only about 10 bytes of data max per packet). The server can be connected via Wi-Fi or ethernet. The clients, however, are restricted to using Wi-Fi only. The client(s) will not be sending any packets to the server, as they will only display the data retrieved from the server.
My Thoughts:
I originally started out creating the server program using TCP as the transport layer. This would use the ServerSocket class within Java. I also made it multithreaded to accept multiple clients.
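Roughly, that multithreaded ServerSocket setup looks like this (a sketch only; the port and the handleClient method are placeholders):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    try (ServerSocket server = new ServerSocket(4000)) {
        while (true) {
            Socket client = server.accept();                 // blocks until a client connects
            new Thread(() -> handleClient(client)).start();  // one handler thread per client
        }
    }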
TCP sounded like a great idea for various reasons.
With TCP, the message will always get sent unless the connection fails.
TCP restores the original order of packets on the receiving side, so data is delivered in the order it was sent.
TCP has flow control and requires more time to set up when started.
So, perfect! The flow control is crucial for this scenario (since the client wants the most current information), the setup time isn't that big of a deal, and I am pretty much guaranteed that the message will be received. So, it's official: I have made my decision to go with TCP....
Except, what happens when TCP gets behind due to packet loss? "The biggest problem with TCP in this scenario is its congestion control algorithm, which treats packet loss as a sign of bandwidth limitations and automatically throttles the sending of packets. On 3G or Wi-Fi networks, this can cause a significant latency."
After seeing the prominent phrase "significant latency", I realized that it would not be good for this scenario since the client needs to see the live data at that moment, not continue receiving data from .8 seconds ago. So, I went back to the drawing board.
After doing more research, I discovered another procedure that involved UDP with Multicasting. This uses a DatagramSocket on the server side and a MulticastSocket on the client side.
UDP with Multicasting also has its advantages:
UDP is connectionless, meaning that the packets are broadcast to anyone who is listening.
UDP is fast and is great for audio/video (although mine is simple data like a string).
UDP doesn't guarantee delivery - this can be good or bad. It will not get behind and create latency, but there is no guarantee that all client(s) might receive the data at that moment.
The server sends one packet, and the network passes it along to all subscribers.
Since the rate of sending the packets will be rather quick (0.2-0.5 sec), I am not too concerned about losing packets (unless it is consistently dropping packets and looks to be unresponsive). The main concern I have with UDP above anything else is that it doesn't preserve the order in which the packets were sent. So, for example, let's say I wanted to send live data of the current time. The server sends packets which contain "11:59:03", "11:59:06", "11:59:08", etc. I would not want the data to be presented to the client as "11:59:08", "11:59:03", "11:59:06", etc.
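For concreteness, the DatagramSocket/MulticastSocket pairing mentioned above looks roughly like this; the group address, port and payload are placeholders:

    import java.net.*;
    import java.nio.charset.StandardCharsets;

    // Server side: a plain DatagramSocket sends to the multicast group.
    InetAddress group = InetAddress.getByName("230.0.0.1");
    DatagramSocket sender = new DatagramSocket();
    byte[] data = "11:59:03".getBytes(StandardCharsets.US_ASCII);
    sender.send(new DatagramPacket(data, data.length, group, 4446));

    // Client side: a MulticastSocket joins the group and receives.
    MulticastSocket receiver = new MulticastSocket(4446);
    receiver.joinGroup(group);
    byte[] buf = new byte[64];
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    receiver.receive(packet);   // blocks until a datagram arrives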
After being presented with all of the information above, here are my questions:
Does TCP "catch up" with itself when having latency issues or does it always stay behind once the latency occurs when receiving packets? If so, is it rather quick to retrieving "live" data again?
How often do the packets of data get out of order with UDP?
And above all:
In your opinion, which one do you think would work best with this scenario?
Thanks for all of the help!
Does TCP "catch up" with itself when having latency issues or does it always stay behind once the latency occurs when receiving packets?
TCP backs off its transmission speed when it encounters congestion, and then attempts to recover by increasing it again.
If so, is it rather quick to retrieve "live" data again?
I don't know what that sentence means, but normally the full speed is eventually restored.
How often do the packets of data get out of order with UDP?
It depends entirely on the intervening network. There is no single answer to that.
NB That's a pretty ordinary link you're citing.
UDP stands for 'User Datagram Protocol', not 'Universal Datagram Protocol'.
UDP packets do have an inherent send order, but it isn't guaranteed on receive.
'UDP is faster because there is no error-checking for packets' is meaningless. 'Error-checking for packets' implies retransmission, but that only comes into play when there has been packet loss. Comparing lossy UDP speed to lossless TCP speed is meaningless.
'UDP does error checking' is inconsistent with 'UDP is faster because there is no error-checking for packets'.
'TCP requires three packets to set up a socket connection, before any user data can be sent' and 'Once the connection is established data transfer can begin' are incorrect. The client can transmit data along with the third handshake message.
The list of TCP header fields is incomplete and the order is incorrect.
For TCP, 'message is transmitted to segment boundaries' is meaningless, as there are no messages.
'The connection is terminated by closing of all established virtual circuits' is incorrect. There are no virtual circuits. It's a packet-switching network.
Under 'handshake', and several other places, he fails to mention the TCP four-way close protocol.
The sentences 'Unlike TCP, UDP is compatible with packet broadcasts (sending to all on local network) and multicasting (send to all subscribers)' and 'UDP is compatible with packet broadcast' are meaningless (and incidentally lifted from an earlier version of a Wikipedia article). The correct term here is not 'compatible' but 'supports'.
I'm not fond of this practice of citing arbitrary Internet rubbish here and then asking questions based on it.
UDP packets may get out of order if there are a lot of hops between the server and the client, but more likely than getting out of order is that some will get dropped (again, the more hops, the greater the chance of that). If your main concern with UDP is ordering, and if you have control over the client, then you can simply have the client discard any message with an earlier timestamp than the last one received.
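A minimal sketch of that discard-stale idea, assuming the server prefixes every payload with an 8-byte send timestamp (or sequence number); the framing and the display() handler are hypothetical:

    import java.nio.ByteBuffer;

    private long lastSeen = Long.MIN_VALUE;

    void onDatagram(ByteBuffer packet) {
        long stamp = packet.getLong();   // timestamp the server wrote first
        if (stamp <= lastSeen) {
            return;                      // older than what was already shown: drop it
        }
        lastSeen = stamp;
        display(packet);                 // hypothetical handler for the remaining payload
    }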

UDP Datagrams in gaming, what happens when they get lost?

Let's say you want to make a real-time game, perhaps a 2D topdown game.
Things you would do in the game to keep it simple are:
Connect to the server
Move the player with the keys
Possibly press space bar to attack
Send a chat message
But what would happen if a datagram from any of these situations got lost?
What would you do?
1) Connecting to the server
If you send a UDP datagram to the server, the server would take your IP and port and then create a player based on an ID it gives you; this is how it identifies you every time it receives a datagram from you.
But what if upon "connection" (not actually a connection, just a udp datagram that says 'make me a part of your server'), this initial datagram gets lost. Is it right to say that you would just resend it if after a certain period of time you did not receive a reply back from the server?
2) Moving the player/attacking
If at any time we move the player or press a movement/attack key, we would send a keystroke to the server telling it which key we pressed/released.
The server would then relay this to all the other clients.
But what if this keystroke datagram gets lost? Either upon client->server or server->clients.
Because keystrokes are happening all the time, is it efficient to just resend them all the time?
Or would we just ignore the fact that it got lost along the way because there's a chance that if you press a key again it will most likely make it the next time?
3) Sending a chat message
This would be something that MUST reach the server.
If it got lost along the way, after a certain period of time if we did not receive a reply from the other side/receiving end, do we re-send it?
TL;DR
Is it okay to just keep resending datagrams if we know after a certain period of time that they did not arrive?
And also, what about confirming if a datagram got successfully sent?
If client sends a chat message to the server, and the server receives it, should the server send a reply back to the client to confirm that it received it? What would happen if that reply got lost, what then?
Typically, games using UDP have an application-level protocol on top so they can send some messages reliably and others not. If they wanted to send everything reliably, they would be better off using TCP. However, fast-paced games cannot afford the delay TCP introduces, and if a packet does get lost it's often too late to re-send it anyway!
One way to implement reliable messages is, as you suppose, to send an ACKnowledgment reply that this particular packet was received. But what if that gets dropped, you ask? The sender of the reliable packet typically re-sends a certain number of times (e.g. 3 times), and if no ACK is received after that, the sender assumes the peer has dropped.
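A minimal sketch of that ACK-with-retry scheme; the retry count, timeout, and message framing (a 4-byte sequence number before the payload) are assumptions:

    import java.io.IOException;
    import java.net.*;
    import java.nio.ByteBuffer;

    static final int MAX_RETRIES = 3;
    static final int ACK_TIMEOUT_MS = 200;

    boolean sendReliably(DatagramSocket socket, byte[] payload,
                         InetAddress peer, int port, int seq) throws IOException {
        byte[] framed = ByteBuffer.allocate(4 + payload.length)
                .putInt(seq).put(payload).array();
        DatagramPacket out = new DatagramPacket(framed, framed.length, peer, port);
        byte[] ackBuf = new byte[4];
        DatagramPacket ack = new DatagramPacket(ackBuf, ackBuf.length);
        socket.setSoTimeout(ACK_TIMEOUT_MS);
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            socket.send(out);
            try {
                socket.receive(ack);                           // wait for the peer's ACK
                if (ByteBuffer.wrap(ackBuf).getInt() == seq) {
                    return true;                               // acknowledged
                }
            } catch (SocketTimeoutException timedOut) {
                // no ACK within the timeout: loop around and re-send
            }
        }
        return false;                                          // give up: assume the peer dropped
    }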
An excellent place for articles on game networking, including how to minimise the data sent, is Shawn Hargreaves' blog. (It is focused on C# XNA, but the same logic applies irrespective of language/framework/platform.)
Lastly:
Game network programming for fast-paced games is hard. If your game is slow and/or turn-based, consider TCP and a lock-step approach, which makes things considerably easier.
Don't re-invent the wheel; there are libraries that do things such as implementing reliability on top of UDP for when you need it, e.g. FalconUDP. Though this is a .NET one - I would like to see a Java port :)
If a datagram gets lost there is nothing the receiver can do, because he doesn't know about it. It's up to the sender to re-send, and it's up to you to design an ACK-based or NACK-based protocol so that the receiver knows when to do that.
Whether it is okay to just keep resending UDP datagrams depends on how the server treats the data in the datagrams. For example, if you resend a chat message, and the server receives both the original message and the resent message, does the server display the message twice? Note that you might end up resending even if the original message was received, because the reply to the original message might have gotten lost or delayed.
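One way a server can tolerate such resends, sketched here, is to track the last sequence number applied per client and ignore retransmissions at or below it, while always re-ACKing. The ClientId type and the broadcast/sendAck helpers are placeholders:

    import java.util.HashMap;
    import java.util.Map;

    Map<ClientId, Integer> lastApplied = new HashMap<>();

    void onChatMessage(ClientId from, int seq, String text) {
        int prev = lastApplied.getOrDefault(from, -1);
        if (seq > prev) {
            lastApplied.put(from, seq);
            broadcast(text);        // only the first copy is displayed
        }
        sendAck(from, seq);         // always re-ACK, in case the previous ACK was lost
    }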
The best bet is to use TCP instead of UDP, since TCP has already solved all these problems for you. World of Warcraft, for example, started with TCP for chat and UDP for gameplay, and eventually converted to TCP for everything.

Is this a good multi-threaded server design?

I have a server for a client-server game (ideally the basis for a small MMO) and I am trying to determine the best way to organize everything. Here is an overview of what I have:
[server start]
    load/create game state
    start game loop on new thread
    start listening for UDP packets on new thread
    while not closing
        listen for new TCP connection
        create new TCP client
        start client's TCP listener on new thread
    save game state
    exit
[game loop]
    sleep n milliseconds // Should I sleep here or not?
    update game state
    send relevant UDP packet updates to clients
    every second
        remove timed-out clients
[listen for UDP]
    on receive, send to correct TCP client to process
[listen for TCP] (1 for each client)
    manage TCP packets
Is this a fair design for managing the game state, tcp connections, and send/receive udp packets for state updates? Any comments or problems?
I am most interested in the best way to do the game loop. I know I will have issues with a large number of clients, because I am spawning a new thread for each new client.
That looks like a reasonable start to the design. Scaling the number of threads beyond 10 clients (per your comment) wouldn't be a problem: as long as the threads are mostly waiting and not actually processing something, you can easily have thousands of them before things start to break down. I recall hitting some limits with a similar design over 5 years ago, at around 7000 threads.
Looks like a good design. If I were you, though, I would use an existing NIO framework like Netty.
Just google for Java NIO frameworks and you'll find several, with examples and documentation.
Update
Thanks, but I prefer to do it all on my own. This is only a side project of mine, and much of its purpose is learning.
IMHO you'll learn more by using an existing framework and focusing on the game design than by doing everything yourself from the start.
Next time you'll know how to design the game, and that makes it easier to design the IO framework too.
I'd say that, especially in Java, threads are more for convenience than for performance (there are only oh-so-many cores, and I guess you'd like to keep some control over which threads have priority).
You haven't said anything about the volume of message exchange or the number of clients, but you might want to consider what is more natural from that perspective: having a single thread handle the network IO, dispatching incoming requests in a tight loop, and handing the ACTUAL work (i.e. requests that you can't handle in the tight loop) to a thread pool (see the sketch below). IIRC, in such a setup the limiting factor will be the maximum number of TCP connections you can simultaneously wait on for input in a single thread.
For increased fun, compare this with a solution in a language which has more lightweight threads, like Erlang.
Of course, as @WhiteFang34 said, you won't need such a solution until you do some serious simulation/use of your code. Related techniques (and frameworks like Netty, where you could find inspiration) might also be found in this question here.
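A sketch of that single-IO-thread plus worker-pool arrangement; the port, pool size and handleRequest method are placeholders:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;
    import java.util.concurrent.*;

    ExecutorService workers = Executors.newFixedThreadPool(8);
    Selector selector = Selector.open();
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress(4000));
    server.configureBlocking(false);
    server.register(selector, SelectionKey.OP_ACCEPT);

    while (true) {
        selector.select();                                // one thread waits on every connection
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isAcceptable()) {
                SocketChannel client = server.accept();
                client.configureBlocking(false);
                client.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(256);
                ((SocketChannel) key.channel()).read(buf);
                buf.flip();
                workers.submit(() -> handleRequest(buf)); // real work leaves the IO thread
            }
        }
    }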
There is a certain cost to thread creation, mostly the per-thread stack.
