Currently my game server is small (one area and ~50 AI), and each time it sends out a state update (UDP) it sends the complete state to each client, which creates a packet size of ~1100 bytes. Pretty much all it sends is the following information for each entity:
int uid
int avatarImage
float xPos
float yPos
int direction
int attackState
24 bytes
Edit: More efficient structure
int uid
byte avatarImage
float xPos
float yPos
byte direction & attackState
14 bytes
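For reference, here is a minimal sketch of how that 14-byte record could be written with java.nio.ByteBuffer. The packed byte here puts direction in the low 4 bits and attackState in the high 4 bits, which is just one assumed split:

import java.nio.ByteBuffer;

// Writes one entity in the 14-byte layout above.
void writeEntity(ByteBuffer buf, int uid, byte avatarImage,
                 float xPos, float yPos, int direction, int attackState) {
    buf.putInt(uid);                                            // 4 bytes
    buf.put(avatarImage);                                       // 1 byte
    buf.putFloat(xPos);                                         // 4 bytes
    buf.putFloat(yPos);                                         // 4 bytes
    buf.put((byte) ((attackState << 4) | (direction & 0x0F)));  // 1 byte, packed
}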
But I am eventually going to need to send more information for the entities. For instance, I am adding:
float targetXPos
float targetYPos
float speed
As more data needs to be sent for each entity, I am fast approaching, and have most likely already passed, the maximum packet size. So I am trying to think of a few possible ways to fix my problem:
1) Just build up the status update packet until I run out of room and then leave out the rest. This gives clients a very bad view of the world. Not really an option.
2) Only send the data for the N closest entities to each client. This requires that on each state update I calculate the N closest entities for each client, which could be very time consuming.
3) Somehow design the packets so that I can send multiple packets for the same update. Currently, the client assumes each packet has the following structure:
int currentMessageIndex
int numberOfPCs
N * PC Entity data
int numberOfNPCs
N * NPC Entity data
The client then takes this new data and completely overwrites its copy of the state. Since the packets are completely self-contained, even if the client misses a packet it will be OK. I am not sure how I would implement the idea of multiple packets for the same update, because if I miss one of them, what then? I can't overwrite the complete, outdated state with a partial update.
4) Only send the actual variables that change. For instance, for each entity I add one int that is a bitmask with a bit for each field. Things such as speed, target, direction, and avatarImage won't need to be sent every update. I still come back to the issue of what happens if the client misses a packet that did actually need to update one of these values; I am not sure how critical that would be. This also requires a little more computation on both the client and server side for creating/reading the packet, but not too much.
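To illustrate option 4, a rough sketch of the per-entity bitmask encoding (continuing with java.nio.ByteBuffer as above; the field constants and the Entity fields are invented for the example). The client reads the mask and then reads exactly the fields it names, in the same order:

static final int F_POS       = 1 << 0;
static final int F_TARGET    = 1 << 1;
static final int F_SPEED     = 1 << 2;
static final int F_DIRECTION = 1 << 3;
static final int F_AVATAR    = 1 << 4;

void writeEntityDelta(ByteBuffer buf, Entity e, int dirtyMask) {
    buf.putInt(e.uid);
    buf.putInt(dirtyMask);  // tells the client which fields follow
    if ((dirtyMask & F_POS) != 0)       { buf.putFloat(e.xPos); buf.putFloat(e.yPos); }
    if ((dirtyMask & F_TARGET) != 0)    { buf.putFloat(e.targetXPos); buf.putFloat(e.targetYPos); }
    if ((dirtyMask & F_SPEED) != 0)     { buf.putFloat(e.speed); }
    if ((dirtyMask & F_DIRECTION) != 0) { buf.put(e.direction); }
    if ((dirtyMask & F_AVATAR) != 0)    { buf.put(e.avatarImage); }
}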
Any better ideas out there?
I would go with number 4 and number 2.
As you have realized, it is usually better to only send updates instead of a complete game state. But make sure you always send absolute values and not deltas, so that no information is lost should a packet be dropped. You can use dead reckoning on the client side to make animations as smooth as possible under crappy networking conditions.
You have to design carefully for this so that it is not critical if a packet is lost.
As for number 2, it does not have to be time consuming if you design for it. For example, you can divide your game area into a grid of squares, where each entity is always in exactly one particular square, and let the game world keep track of this. In that case, finding the entities in the 9 surrounding squares is an O(1) operation.
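A minimal sketch of that grid (Entity is the game's own entity type; the cell size and the packing of cell coordinates into one map key are illustrative choices):

import java.util.*;

class Grid {
    final int cellSize;  // e.g. roughly the radius a client can see
    final Map<Long, Set<Entity>> cells = new HashMap<>();

    Grid(int cellSize) { this.cellSize = cellSize; }

    long key(float x, float y) {
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        return (cx << 32) | (cy & 0xFFFFFFFFL);  // pack both cell coordinates into one key
    }

    // Call when an entity moves; only does work when a cell boundary is crossed.
    void move(Entity e, float oldX, float oldY, float newX, float newY) {
        long from = key(oldX, oldY), to = key(newX, newY);
        if (from == to) return;
        Set<Entity> fromCell = cells.get(from);
        if (fromCell != null) fromCell.remove(e);
        cells.computeIfAbsent(to, k -> new HashSet<>()).add(e);
    }

    // Entities in the 9 cells around (x, y): a constant number of map lookups,
    // independent of the total entity count.
    List<Entity> nearby(float x, float y) {
        List<Entity> result = new ArrayList<>();
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        for (long dx = -1; dx <= 1; dx++)
            for (long dy = -1; dy <= 1; dy++) {
                Set<Entity> cell = cells.get(((cx + dx) << 32) | ((cy + dy) & 0xFFFFFFFFL));
                if (cell != null) result.addAll(cell);
            }
        return result;
    }
}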
This type of problem is commonly solved using a dead reckoning or predictive contract algorithm. You can't depend on all clients getting updates at the same time, so you need to predict positions based on previously known values and then validate these predictions against server-generated results.
One problem I ran into sending delta updates (basically your 4th option) is that the data can be out of order or just old, depending on how concurrent your server is; basically, race conditions from multiple clients updating the server at the same time.
My solution was to send an update notification to all the clients, with a bitmask setting a bit for each item that has been updated.
Then the client requests the current value of the specific data based on the bitmask. This also allows the client to request only the data it is interested in.
The advantage of this is it avoids race conditions and the client always gets the latest value.
The disadvantage is it requires a roundtrip to get the actual value.
UPDATE to demonstrate the point I am trying to make:
Presume 4 clients: A, B, C, D.
A and B send simultaneous updates, Xa and Xb, to a mutable state X on the server. As B arrives somewhat later than A, the final state of X on the server is X = Xb.
The server sends out the updated status to all clients as it receives each update, so C and D both get the updated status of X. As the order of delivery is indeterminate, C gets Xa then Xb, while D gets Xb then Xa. At this point clients C and D have different ideas of the state of X: one reflects what the server has, the other doesn't; it holds stale (outdated) data.
On the other hand, if the server just sends out a notification that X has changed to all the clients, C and D will each get two change notifications for X. They both make requests for the current state of X, and they both end up with the final state of X on the server, which is Xb.
As the order of the notifications is irrelevant (there is no data in them), and the clients issue a request for the updated state on each notification, they both end up with consistent data.
I hope that makes the point I was trying to make clearer.
Yes, it does increase latency, but the designer has to decide which is more important: the latency, or having all clients reflect the same state of mutable data. That will depend on the data and the game.
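In code, the notify-then-fetch pattern might look something like this sketch (all the message classes and helper methods here are invented for illustration):

// Server: on any change, broadcast only WHICH fields changed, never their values.
void onStateChanged(int entityId, int changedFieldsMask) {
    broadcastToClients(new ChangeNotification(entityId, changedFieldsMask));
}

// Client: on a notification, request the current value. The order in which
// notifications arrive no longer matters, because the reply always carries
// whatever the server holds right now (Xb in the example above).
void onNotification(ChangeNotification n) {
    if (isInterestedIn(n.entityId)) {
        sendToServer(new StateRequest(n.entityId, n.changedFieldsMask));
    }
}

// Client: apply the authoritative reply.
void onStateReply(StateReply r) {
    localState.apply(r);
}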
I have a class which is responsible for sending data to a client, and all other classes use it when they need to send data. Let's call it DataSender.class.
Now the client is asking us to limit the throughput to a maximum of 50 calls per second.
I need to create an algorithm in this class (if possible) that keeps track of the number of calls in the current second and, if it reaches the maximum of 50, holds the process with a sleep or something, then continues without losing data.
Maybe I have to implement a queue, or something better than a simple sleep. I need suggestions or a direction to follow.
For the sake of simplicity, just imagine that everyone is using something like this and I cannot change how they call me now. post()'s return is synchronous, but maybe that is something I can change (not sure yet):
DataSender ds = new DataSender();
ds.setdata(mydata);
if (ds.post()) {
// data sent successfully
}
If I am not mistaken, what you are looking for is throttling or rate limiting.
As Andrew S pointed out, you will need a queue to hold the extra requests, and a sender algorithm.
The main point is that because you are not sending the data right away, the callers need to be aware that the data has not necessarily been sent by the time their call returns. Senders will not be happy if their call returns, they assume the data was sent, and the data is then lost; and there are many reasons why data can be lost in this scenario. As Andrew S pointed out, making senders aware that this is an asynchronous send queue, maybe with confirmations upon successful send, is the safer and more proper approach.
You will need to decide on the size of the queue: you have to limit it, or you can run out of memory. You also need to decide what happens to a request when the queue is full, and what happens when the endpoint is not accessible (server down, network issues, solar flares): keep accepting data into the queue, or reject it / throw an exception?
Hint: if you have a 50-requests-per-second limit, don't blast 50 requests and then sleep for 1 second. Figure out the interval between sends, send one request, then make a short interval sleep.
Pro hint: if newly sent data invalidates data that was previously requested to be sent but not yet sent, you can optimize by removing the invalidated data from the queue. This is called conflation. The usual example is stock market prices. Say you got a price of 100 for ACME ten seconds ago, and for whatever reason that record was not sent. If you get a new price of 101 for ACME now, it is usually not useful to send the 100 price record; just send the 101.
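A sketch of that combination: a bounded queue, one sender thread, and an even pacing interval instead of burst-then-sleep. MyData and doPost() are stand-ins for the existing data type and send logic:

import java.util.concurrent.*;

class ThrottledSender {
    private static final int MAX_PER_SECOND = 50;
    private static final long INTERVAL_NANOS = TimeUnit.SECONDS.toNanos(1) / MAX_PER_SECOND;
    // Bounded, so a flood of callers cannot exhaust memory.
    private final BlockingQueue<MyData> queue = new ArrayBlockingQueue<>(10_000);

    ThrottledSender() {
        Thread sender = new Thread(() -> {
            try {
                while (true) {
                    MyData data = queue.take();                  // wait for work
                    doPost(data);                                // the real network send
                    TimeUnit.NANOSECONDS.sleep(INTERVAL_NANOS);  // one short sleep per send
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "data-sender");
        sender.setDaemon(true);
        sender.start();
    }

    // Asynchronous: returns once the data is queued, NOT once it is sent.
    // Blocks the caller if the queue is full -- one possible full-queue policy.
    void post(MyData data) throws InterruptedException {
        queue.put(data);
    }

    private void doPost(MyData data) { /* existing send logic */ }
}

Conflation would go in post(): before queueing new data, remove any queued entry it invalidates.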
I have a cluster of web applications (Java + Tomcat), and the apps generate events. The volume is not that high, but it's somewhere under 10 million events per day (unevenly distributed, with peaks and valleys).
We need to display calculated aggregates of events on the user interface. Currently, this is done by running DB queries against a large table with many indexes on each page display.
Is there a good architectural approach to keeping a flow of events and also calculating (on the fly) and keeping aggregate numbers, like Average, Mean, Min, Max, etc?
Real time is not important, but near-real time is a must. For instance, a latency of under 1 minute is acceptable.
You can go with a push model or a pull model. (Or proactive/reactive if you like those terms better.) In both cases you've got a centralized records-keeper that must aggregate the data you want. In the push model your decentralized services/servers/applications will periodically push updates to your records keeper. In the pull model your records keeper will periodically query your decentralized services and request updates.
In a push scenario, each independent service/server/application keeps a log of its own event counter. Once the event counter ticks over a certain threshold, it notifies the records keeper of the new status; for example, they could push an update every 100 or 1,000 events (call this threshold delta). Thus (assuming there are no undetectable failures) the records keeper always knows how many events have occurred in the system, plus or minus your delta. This gives great performance, since whenever someone wants to access the event records, all of the data is already aggregated. One downside is that there's a low but persistent overhead imposed on the system. Another is that you never know whether a service has failed or whether it just hasn't had a lot of events recently (plus/minus delta).
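A sketch of the push side (RecordsKeeperClient and its push method are hypothetical stand-ins for however the records keeper is reached):

import java.util.concurrent.atomic.AtomicLong;

class EventCounter {
    private static final long PUSH_THRESHOLD = 1000;  // the "delta" above
    private final AtomicLong total = new AtomicLong();
    private final AtomicLong sinceLastPush = new AtomicLong();
    private final RecordsKeeperClient recordsKeeper;  // hypothetical transport
    private final String serviceId;

    EventCounter(RecordsKeeperClient recordsKeeper, String serviceId) {
        this.recordsKeeper = recordsKeeper;
        this.serviceId = serviceId;
    }

    void onEvent() {
        total.incrementAndGet();
        if (sinceLastPush.incrementAndGet() >= PUSH_THRESHOLD) {
            sinceLastPush.addAndGet(-PUSH_THRESHOLD);
            // Push the absolute running total rather than a delta, so one
            // lost update is corrected by the next one.
            recordsKeeper.push(serviceId, total.get());
        }
    }
}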
In the pull scenario your decentralized services still keep logs, but they don't do anything until the records keeper requests an update. When you want to know the state of the system the records keeper must query everyone in the system, get their responses, and assemble the results. This is probably the easiest thing to implement, and one positive aspect is that there is zero system overhead until you actually request an update. The downside is that update requests can cause a big drag on the system when they occur (since everyone drops everything and you generate traffic throughout the entire system). For this same reason it'll take a while to generate updates when the request comes in.
Now, both of these approaches are independent of implementation methodology. Either one might be implemented with a completely flat topology, where every service communicates directly with your records keeper. Alternately, you might form a hierarchy of services, so that each parent in the hierarchy is responsible for aggregating the data of its children. What you want to do in this respect really depends on exactly how fast and efficient the system needs to be.
So I'm making an MMO. I was progressing a lot; six months programming this thing.
The problem is that I had only been testing my game offline. Today I had the brilliant idea to port forward my server and take it online. I knew it was going to be slightly slower, but it's awful! Too much lag! The game is unplayable.
I'm managing my packets like so:
The player wants to move up: the client sends a movePacket to the server, the server receives it, moves the player on the server, and sends the new position to all clients.
Each time a monster moves, the server sends the new position to all clients.
I thought I was over-sending packets, but I tested it with just the player's movement... there seems to be a significant delay between receiving the packet and sending the new position to the clients.
Am I doing this whole thing wrong?
Lag is always a problem with online games. While your current method is the standard way of doing things, as you're finding out, the lag can become unbearable (a common problem in 1990s and early 2000s games). The best approach is the one that almost all modern games take: do as much as you can client-side, and only use your authoritative server to resolve differences between the predictions that clients make. Here are some helpful ways of reducing perceived lag:
Client-side prediction
For an MMO this may be all you need. The basic idea of client-side prediction is to figure out locally what to do. In your game, when Player wants to move up, he sends a packet that says [request:1 content:moveup]; then, BEFORE receiving a response from the server, the client displays Player moving up one (unless you can already tell that such a move is invalid, i.e. moving up would mean running into a wall). If your server is really slow, Player may also move right before receiving a response, so your next packet may look like [request:2 content:moveright], at which point you show your player to the right. Keep in mind that at this point Player has already moved up and right before the server has even confirmed that moving up is a valid move. Now, if the server responds that the new player position after packet 1 should be up and the position after packet 2 should be right, then all is well. However, if, let's say, another player happens to be standing above Player, then the server may respond with Player in a different location. At this point Player will 'teleport' to wherever the server tells him he's supposed to be. This doesn't happen often, but when it does it can be extremely noticeable (you've probably seen it in commercial FPS games).
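A sketch of that flow on the client (Move, Position, and the helper methods are invented for the example):

import java.util.ArrayDeque;
import java.util.Deque;

int nextRequestId = 1;
final Deque<Move> pendingMoves = new ArrayDeque<>();  // sent but not yet confirmed

void onPlayerInput(int direction) {
    Move move = new Move(nextRequestId++, direction);
    pendingMoves.add(move);
    sendToServer(move);   // e.g. [request:1 content:moveup]
    applyLocally(move);   // predict: move immediately, don't wait for the server
}

void onServerAck(int requestId, Position authoritative) {
    // Discard every move the server has now confirmed...
    while (!pendingMoves.isEmpty() && pendingMoves.peek().id <= requestId)
        pendingMoves.poll();
    // ...snap to the server's answer (the occasional visible 'teleport')...
    player.position = authoritative;
    // ...and replay the still-unconfirmed moves on top of it.
    for (Move m : pendingMoves)
        applyLocally(m);
}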
Interpolation
At this point you're probably fine for an MMO, but in case you aren't (or for future reference), interpolation is your next step. Here the idea is to send more data about the rates at which values change, to help make the movement of other players smoother. This is the same concept as using a Taylor series in mathematics to predict values of a function. In this case you may send position as well as velocity, and maybe even acceleration data, for all the entities in the game. The new position can then be calculated as x = x + v*t + 0.5*a*t*t, where t is the time elapsed since the last update. Again, you show the player's predicted position before the server actually confirms that it is the correct position. When the next packet from the server comes you'll inevitably be wrong most of the time, but the more rate data you send, the less you'll be off by, and thus the smaller the teleportation of other entities.
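As a tiny example, dead-reckoning one axis of a remote entity between updates (vx and ax come from the last packet received; dt is the time elapsed since that packet arrived):

// x(t) = x0 + v*t + 0.5*a*t^2, applied per axis
float predictX(float x, float vx, float ax, float dt) {
    return x + vx * dt + 0.5f * ax * dt * dt;
}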
If you want a more detailed outline of how to deal with lag in online games read the mini bible on multiplayer games: http://www.gabrielgambetta.com/fast_paced_multiplayer.html
I'm using Opengl and Jbox2d to write a real-time 2d game in Java.
I want to start coding the networking components.
Although it uses box2d, my game is very small and I want to create a bare-bones architecture using the Kryonet library.
The program itself is a 'match game' like chess. The most logical system I can think of would be to have a dedicated server that stores all player data.
PlayerA and PlayerB will connect with the dedicated server which will facilitate a TCP link between their computers.
When the match is complete, both players will communicate the resulting data back to the dedicated server, which will authenticate and then save over their respective player data.
For those familiar, Diablo2 implemented a similar setup.
I want this TCP connection to simply send the shape-coordinate vector data from the host (let's say PlayerA) to the client (PlayerB), which the client will then render on its own.
Then I want the client to send mouse/keyboard data back to the host. All of the processing will be run on the host's computer.
My first question: Are there any flaws in this network logic?
My second question: How does one implement barebones server/client packet transferring (as described) using Kryonet?
Note: I have done this exact type of packet transferring in C++ using a different library. The documentation/tutorials I've found for Kryonet are terrible. Suggesting another library with good support is an acceptable answer.
I know this is an old question, and I'm sure OP has gotten their answer one way or another, but for fun I thought I'd respond anyway. This exact question has been on my mind since I've been playing around with game development using Kryonet very recently.
Some early network games, such as Bungie's Marathon (1994), seemed to do exactly this: each player's events would be sent using UDP to the other players. Thus, if a player moved or fired a shot, the player's movement or the shot's direction, velocity, etc. would be sent to the other players. There are some problems with this approach. If one of a player's actions was temporarily lost over the network, a player or players would appear to be out of sync with everyone else. There was no "truth" or "reconciliation" of game state in such situations.
Another approach is to have the players compute their movements and actions client-side and send the updated positions to the dedicated server. With a server receiving all player state updates, there is an opportunity to reconcile them. They also do not become out of sync permanently if some data is lost on the network.
To compare with the previous example, this would be the equivalent of each player sending their position to the server, and then having the server send each player's position to all the other players. If one of these updates gets lost for some reason, a subsequent update will correct for it. However, if only key presses are sent, a single lost keypress throws the game out of sync, because all clients are computing the other clients' positions separately.
For action games you can use a hybrid approach to minimize apparent lag. I've been using Kryonet successfully in this manner for an action game. Each player sends their state to the server on every render tick (though this is probably excessive and should be optimized). The state includes position, number of shots left, health, etc. The player also sends the shots they take (starting velocity and position).
The server simply echoes these things back to the clients. Whenever a shot is received by a client, it is computed client-side, including whether or not the shot hits the receiving player. Since the receiving player only computes their own state, everything appears to stay in sync from their own point of view. When they are hit, they feel the hit. When they hit another player, they believe they've hit the other player. It is up to the player "receiving" a shot to update their health and send that info back to the server.
This does mean that a shot could theoretically lag or "get lost" and that a player may think their shot has hit another player, while on the other player's screen no hit occurred. But in practice, I've found this approach works well.
Here's an example (pseudocode, don't expect it to compile):
class Client {
    final Array<Shot> shots;
    final HashMap<String, PlayerState> players; // map of player name to state
    final String playerName;

    void render() {
        // handle player input
        // compute shot movement:
        //   for each shot in shots, shot.position = shot.position + shot.velocity * delta_t
        // if one of these shots hits another player, make it appear as though they've
        // been hit, but wait for an update in their state before we know what really happened
        // if an update from another player says they died, then render their death
        // if one of these shots overlaps _me_, and only if it overlaps me, deduct health
        // from my state (other players are doing their own hit detection)

        // only send _my own_ game state to the server
        server.sendTCP(players.get(playerName)); // 'server' is this client's connection to the server
    }

    void listener(Object o) {
        if (o instanceof PlayerState) {
            // update everyone else's state for me,
            // but ignore my own state update (since I computed it)
            PlayerState p = (PlayerState) o;
            if (!p.name.equals(playerName)) {
                players.put(p.name, p);
            }
        } else if (o instanceof Shot) {
            // update everyone else's shots for me,
            // but ignore my own shot updates (since I computed them)
            Shot s = (Shot) o;
            if (!s.firedBy.equals(playerName)) {
                shots.add(s);
            }
        }
    }
}

class Server {
    final HashMap<String, PlayerState> players; // map of player name to state

    void listener(Object o) {
        // compute whether anybody won based on the most recent player state
        // send any updates to all players
        for (Connection otherPlayerCon : server.getConnections()) {
            otherPlayerCon.sendTCP(o);
        }
    }
}
I'm sure there are pitfalls with this approach as well, and that this can be improved upon in various ways. (It would, for example, easily allow a "hacked" client to dominate, since they could always send updates that didn't factor in any damage etc. But I consider this problem out-of-scope of the question.)
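For the Kryonet wiring itself, a minimal setup might look like the following, based on Kryonet's documented Server/Client/Listener pattern (port numbers and the connect timeout are arbitrary; every class sent over the wire must be registered with Kryo, in the same order, on both sides):

import com.esotericsoftware.kryonet.*;

// Server side
Server server = new Server();
server.getKryo().register(PlayerState.class);
server.getKryo().register(Shot.class);
server.start();
server.bind(54555, 54777);  // TCP port, UDP port
server.addListener(new Listener() {
    public void received(Connection connection, Object o) {
        // echo to all clients; they ignore their own updates by player name
        for (Connection c : server.getConnections())
            c.sendTCP(o);
    }
});

// Client side
Client client = new Client();
client.getKryo().register(PlayerState.class);
client.getKryo().register(Shot.class);
client.start();
client.connect(5000, "127.0.0.1", 54555, 54777);  // timeout ms, host, TCP, UDP
client.addListener(new Listener() {
    public void received(Connection connection, Object o) {
        // dispatch to the listener() logic sketched above
    }
});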
I've built a server application in Java where clients can connect. I've implemented a heartbeat system where the client sends a small message every x seconds.
On the server side I save in a HashMap the time each client last sent a message, and I use a TimerTask per client to check every x seconds whether I have received any message from that client.
Everything works OK for a small number of clients, but once the number of clients increases (2k+), the memory usage gets very big; plus the Timer has to deal with a lot of TimerTasks, and the program starts to eat a lot of CPU.
Is there a better way to implement this? I thought about using a database and selecting the clients that haven't sent any update within a certain amount of time.
Do you think that would work better, or is there a better way of doing this?
A few random suggestions:
Instead of one timer per client, have only one global timer that examines the map of received heartbeats quite often (say 10 times per second). Iterate over that map and find dead clients (see the sketch after this list). Remember the thread-safety of the shared data structure!
If you want to use a database, use a lightweight in-memory DB like H2. But it still sounds like overkill.
Use a cache or some other expiring map and be notified every time something is evicted. This way you basically put an entry in the map when a client sends a heartbeat, and if nothing happens with that entry within the given amount of time, the map implementation removes it, calling some sort of listener.
Use an actor-based system like Akka (it has a Java API). You can have one actor on the server side handling each client. It's much more efficient than one thread/timer per client.
Use a different data structure, e.g. a queue. Every time you receive a heartbeat, you remove the client from the queue and put it back at the end. Now periodically check only the head of the queue, which always contains the client with the oldest heartbeat.
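A sketch of the first suggestion, since it is usually the simplest fix (the timeout and scan period are illustrative):

import java.util.Map;
import java.util.concurrent.*;

class HeartbeatMonitor {
    private static final long TIMEOUT_MS = 30_000;  // silence longer than this = dead
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();  // thread-safe!
    private final ScheduledExecutorService scanner = Executors.newSingleThreadScheduledExecutor();

    HeartbeatMonitor() {
        // ONE periodic task for all clients, instead of one TimerTask per client.
        scanner.scheduleAtFixedRate(this::scan, 100, 100, TimeUnit.MILLISECONDS);
    }

    // Called from the network layer whenever a heartbeat arrives.
    void onHeartbeat(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    private void scan() {
        long cutoff = System.currentTimeMillis() - TIMEOUT_MS;
        lastSeen.entrySet().removeIf(entry -> {
            if (entry.getValue() < cutoff) {
                handleDeadClient(entry.getKey());  // disconnect, free resources, etc.
                return true;
            }
            return false;
        });
    }

    private void handleDeadClient(String clientId) { /* ... */ }
}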