I am responsible for the networking part of a multiplayer game. I hope some of you have experience with this.
My questions are:
Should I create one Object that contains all the information (coordinates, stats, chat), or is it better to send a separate Object for each of them?
And how can I prevent the Object(s) from being cached on the client, so that I can update an Object and send it again? (I tried ObjectInputStream.reset(), but the client still received the same object.)
Sending all the data every time is not a good solution; sending just the diff against the previous values is better. Occasionally (e.g., once every 10 or maybe 100 updates) send all the values as a full sync.
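A minimal sketch of that idea; GameState and its diffAgainst() method are invented placeholders, not an existing API:

import java.io.IOException;
import java.io.ObjectOutputStream;

// Send a delta most of the time, a full snapshot every N updates.
// GameState and diffAgainst() are hypothetical placeholders.
class UpdateSender {
    private static final int FULL_SYNC_INTERVAL = 100;  // arbitrary choice
    private int updateCount = 0;
    private GameState lastSentState;

    void sendUpdate(ObjectOutputStream out, GameState current) throws IOException {
        if (lastSentState == null || updateCount % FULL_SYNC_INTERVAL == 0) {
            out.writeObject(current);                             // full sync
        } else {
            out.writeObject(current.diffAgainst(lastSentState));  // only the changes
        }
        // reset() goes on the *output* stream: it stops the stream from replaying
        // cached references on the receiving end
        out.reset();
        lastSentState = current;
        updateCount++;
    }
}

Note that the reset happens on the sender's ObjectOutputStream, not on the client's ObjectInputStream, which is why resetting the input stream in the question had no effect.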
1. In the logic layer you can split the objects, and in the transmission layer you send only what you need; of course, you can also combine them and send them together.
2. You can maintain a version number for each user on the server, and the client holds its version number too. When things change, update the corresponding version on the server and then send the updates to all the clients, which then update their version. This is essentially a publish/subscribe model, as sketched below.
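A rough sketch of the version idea; ClientSession, Publisher, and Update are invented names for illustration:

import java.util.ArrayList;
import java.util.List;

// Hypothetical version-based sync: each client acknowledges the version it has,
// and the server sends only the updates it is missing.
class Update { /* placeholder for a change record */ }

class ClientSession {
    int ackedVersion;                    // last version the client confirmed
}

class Publisher {
    private int serverVersion = 0;
    private final List<Update> log = new ArrayList<>();

    void onStateChange(Update u) {
        serverVersion++;
        log.add(u);                      // in practice, trim the log periodically
    }

    List<Update> updatesFor(ClientSession c) {
        // everything the client hasn't seen yet
        return log.subList(c.ackedVersion, serverVersion);
    }
}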
Related
A client and a server application need to be implemented in Java. The scenario requires reading a large number of small objects from a database on the server side and sending them to the client.
This is not about transferring large files; rather, it requires streaming a large number of small objects to the client.
The number of objects to be sent from the server to the client in a single request could be one or one million (let's assume the number of clients is limited for the sake of discussion - ignore throttling).
The total size of the objects will in most cases be too big to hold in memory, so a way is needed to defer the read-and-send operation on the server side until the client requests each object.
Based on my previous experience, the .NET WCF framework supports the scenario above with:
- a transferMode of StreamedResponse
- the ability to return an IEnumerable of objects
- deferred serialization with the help of yield
Is there a Java framework that can stream objects as they are requested while keeping the connection open with the client?
NOTE: This may sound like a very general question, but I have tried to give specific details that will hopefully lead to a clear answer benefiting me and possibly others.
A standard approach is to use a form of pagination and fetch the results in chunks that can be held temporarily in memory. How to do that specifically depends on the database, but a basic JDBC approach would be to first execute a statement to find out the number of records and then fetch them in chunks. For example, Oracle has a ROWNUM pseudo-column that you can use to manage the ranges of records to return; other databases have similar options.
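A minimal JDBC sketch of that chunked approach, assuming an Oracle database and a hypothetical ITEMS table (with an ORDER BY to make the chunks deterministic):

import java.sql.*;

// Fetch rows in fixed-size chunks using Oracle's ROWNUM pseudo-column.
class ChunkedReader {
    void streamInChunks(Connection db, int chunkSize) throws SQLException {
        String sql = "SELECT * FROM ("
                   + "  SELECT t.*, ROWNUM rn FROM "
                   + "    (SELECT * FROM items ORDER BY id) t "
                   + "  WHERE ROWNUM <= ?"
                   + ") WHERE rn > ?";
        int offset = 0;
        while (true) {
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setInt(1, offset + chunkSize);
                ps.setInt(2, offset);
                int rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows++;
                        // ... serialize this row to the client ...
                    }
                }
                if (rows < chunkSize) return;   // last chunk reached
            }
            offset += chunkSize;
        }
    }
}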
You could use ObjectOutputStream / ObjectInputStream to do this.
The key to making this work would be to periodically call reset() on the output stream. If you don't do that, the sending and receiving ends will build a massive map that contains references to all objects sent / received over the stream.
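A short sketch of that pattern; RESET_EVERY is an arbitrary choice to tune:

import java.io.*;

// Write objects to the socket, clearing the stream's back-reference table
// every so often so neither side accumulates the whole history in memory.
class ObjectSender {
    void sendAll(OutputStream socketOut, Iterable<?> objects) throws IOException {
        final int RESET_EVERY = 1000;   // arbitrary; tune for your object sizes
        ObjectOutputStream out =
                new ObjectOutputStream(new BufferedOutputStream(socketOut));
        int n = 0;
        for (Object o : objects) {
            out.writeObject(o);
            if (++n % RESET_EVERY == 0) {
                out.reset();            // tells the receiver to drop its map too
            }
        }
        out.flush();
    }
}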
However, there may be issues with keeping a single request / response (or database cursor) open for a long time. And resuming a stream that failed could be problematic. So your solution should probably combine the above with some kind of pagination.
The other thing to note is that a scalable solution needs to keep network latency from becoming the bottleneck. It may be worth implementing a receiver thread that eagerly pulls objects from the stream and buffers them in a (bounded) queue.
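For example, a sketch of such a receiver, using a bounded ArrayBlockingQueue (the capacity and the end-of-stream sentinel are arbitrary choices):

import java.io.*;
import java.util.concurrent.*;

// A reader thread pulls objects off the socket ahead of the consumer,
// so network latency overlaps with processing.
class Prefetcher {
    static final Object EOS = new Object();   // sentinel marking end of stream
    final BlockingQueue<Object> buffer = new ArrayBlockingQueue<>(1024);

    void start(ObjectInputStream in) {
        new Thread(() -> {
            try {
                while (true) {
                    buffer.put(in.readObject());  // blocks if consumer is slow
                }
            } catch (EOFException end) {
                putQuietly(EOS);
            } catch (Exception e) {
                putQuietly(EOS);                  // real code should report the error
            }
        }).start();
    }

    private void putQuietly(Object o) {
        try { buffer.put(o); } catch (InterruptedException ignored) { }
    }
}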
I have an enhanced for loop that opens a ch.ethz.ssh2.Connection to obtain over 200 values. On every pass through the loop a new server is authenticated, and only one value is retrieved from that server. On each iteration the data is saved into an ArrayList to be displayed in HTML tables using Thymeleaf. But this approach takes forever to run through all 200 values one at a time, and it has to start over when I open localhost:8080 to load the page with all the tables and data. The page takes over 5 minutes to load. What can I do to speed things up?
Problem in code
List<DartModel> data = new ArrayList<DartModel>();
for (String server : serverArray) {
    try {
        conn = new ch.ethz.ssh2.Connection(server);
        conn.connect();
        boolean isAuthenticated = conn.authenticateWithPassword(
                username_array[j], password_array[j]);
        if (!isAuthenticated) {
            throw new IOException("Authentication failed.");
        }
        // ... run the remote command, read one value, add it to data ...
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I need to somehow rework the code above so that I can obtain all the data quickly.
Output
Loop1: Server1
Loop2: DifferentServer2
Loop3: AllDifferentSever3
...and so on
Alternative
I was thinking of letting the Java program run periodically and saving the data into Redis with an expiration time set; the page would then auto-refresh and read from Redis. But I was unable to get that data into the Thymeleaf HTML tables. Would this work? If so, how can I display it in Thymeleaf?
You can query multiple servers at once (in parallel).
If your framework for remote connections is blocking (the methods you call actually wait until the response is received), you'd have to start a handful of threads to do that in parallel (one thread per server in the edge case), which doesn't scale very well.
When you can use some Future/Promise based tool, you can do it without much overhead (convert 200 futures into one future of 200 values/responses).
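For example, a sketch with CompletableFuture; fetchValue(server) stands in for one blocking SSH query:

import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

// Query all servers in parallel and collect one list of results.
class ParallelFetch {
    List<String> fetchAll(List<String> servers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(20);  // arbitrary pool size
        try {
            List<CompletableFuture<String>> futures = servers.stream()
                    .map(s -> CompletableFuture.supplyAsync(() -> fetchValue(s), pool))
                    .collect(Collectors.toList());
            // "convert 200 futures into one future of 200 values"
            return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                    .thenApply(v -> futures.stream()
                            .map(CompletableFuture::join)
                            .collect(Collectors.toList()))
                    .get();
        } finally {
            pool.shutdown();
        }
    }

    String fetchValue(String server) {
        // placeholder for the blocking SSH call that returns one value
        return "";
    }
}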
Note: if you were querying a single server for 200 responses, it would not be a good idea to do it this way, because you would flood it with too many requests at once. In that case you should implement some way to get all the data with one request.
Short answer:
Create a message protocol that sends all values in one response.
More Info:
Define a simple response message protocol.
One simple example might be this:
count,value,...
count: contains the number of values returned.
value: one of the values.
Concrete simple example:
5,123,234,345,456,567
You can go bigger and define the response using JSON or XML.
Use whatever seems best for your implementation.
Edit: My bad, this will not work if you are polling multiple servers. This solution assumes that you are retrieving 200 values from one server, not one value from 200 servers.
At face value, it's hard to tell without looking at the rest of your code (I recommend sharing a gist or your code repo).
I assume you are using a library. In general, a single SSH2 operation will make several attempts to authenticate a client, iterating over several "methods". If you use ssh on a command line, you can see these with the flag -vv: if one method fails, it tries the next. The Java library implementation that I found appears to do the same.
In the loop you posted (assuming you loop 200 times), you'll attempt authentication 200 x (number of authentication methods) times. I suspect the majority of your execution time is burned in SSH handshakes. This can be avoided by authenticating only once and getting as much as you can out of the (already authenticated) open socket.
Consider moving your connection outside the loop. If you absolutely must do SSH and the data you are pulling is large, parallelism may help some, but that will involve more coordination.
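If the values do live on a single host, a sketch of reusing one authenticated ch.ethz.ssh2 (Ganymed) connection for many commands could look like this; the command list and the one-line parsing are placeholders:

import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.Session;
import ch.ethz.ssh2.StreamGobbler;
import java.io.*;
import java.util.*;

// One handshake, many commands: authenticate once, then one session per command.
class OneConnectionFetch {
    List<String> fetchValues(String host, String user, String password,
                             List<String> commands) throws IOException {
        Connection conn = new Connection(host);
        conn.connect();
        if (!conn.authenticateWithPassword(user, password)) {
            throw new IOException("Authentication failed.");
        }
        List<String> values = new ArrayList<>();
        try {
            for (String cmd : commands) {          // placeholder remote commands
                Session sess = conn.openSession();
                sess.execCommand(cmd);
                BufferedReader r = new BufferedReader(
                        new InputStreamReader(new StreamGobbler(sess.getStdout())));
                values.add(r.readLine());          // placeholder: first line only
                sess.close();
            }
        } finally {
            conn.close();
        }
        return values;
    }
}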
I have a little GAE application, a backend for my Android app.
I have a servlet in the app that pulls data from the datastore and sends it to the user.
I don't want anyone else to be able to use this servlet, so I store a private key in the app, and with every request I send a token - a hash of the private key and the current milliseconds - together with the milliseconds I used in the hash.
The server takes the milliseconds and the private key and compares the result with the token. If it matches, the server stores the milliseconds in a HashSet so it knows not to accept them again. (Someone could sniff the device's traffic and send the same milliseconds and token over and over.)
At first I held a static field in the servlet class, which I later discovered was a mistake, because this field is not persisted and all the data is lost when the instance gets destroyed.
I've read about Memcache, but it's not an optimal solution because, from what I understand, the data in Memcache can get erased if the app is low on memory, or even if there are server failures.
I don't want to use the datastore because it would make the requests much slower.
I guess I'm not the first to face this problem.
How can I solve it?
I used a reverse approach in one of my apps:
Whenever a new client connects, I generate a set of three random "challenges" on the server (like your milliseconds), which I store in memcache with an expiration time of a minute or so. Then I send these challenges to the client. For each request the client makes, it needs to use one of these 3 challenges (hashed with a private key). The server then deletes the used challenge, creates a new one, and sends it to the client. That way, each challenge is single-use and I don't have to worry about replay attacks.
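A rough server-side sketch using App Engine's MemcacheService; the key naming and the one-minute expiry are arbitrary choices:

import com.google.appengine.api.memcache.*;
import java.math.BigInteger;
import java.security.SecureRandom;

class ChallengeStore {
    private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
    private final SecureRandom random = new SecureRandom();

    // Issue a fresh single-use challenge and remember it for 60 seconds.
    String newChallenge(String clientId) {
        String challenge = new BigInteger(130, random).toString(32);
        cache.put("challenge:" + clientId + ":" + challenge, Boolean.TRUE,
                  Expiration.byDeltaSeconds(60));
        return challenge;
    }

    // Accept a request only if its challenge is known, then burn it.
    boolean consumeChallenge(String clientId, String challenge) {
        // delete() returns true only if the entry existed, so a replayed
        // challenge fails here even if two requests race
        return cache.delete("challenge:" + clientId + ":" + challenge);
    }
}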
A couple of notes on this approach:
The reason I generate 3 challenges is to allow for multiple requests in flight in parallel.
The longer you make the challenge, the less likely it is to be randomly reused (which would otherwise allow a playback attack).
If memcache forgets the challenges I stored, the app's request will fail. In the failure response, I include a "forget all other challenges and use these 3 new ones: ..." command.
You can tie the challenges to the client's IP address or some other sort of session info to make it even less likely that someone can "hack" you.
In general, it's probably always best to have the server generate the challenge or salt for an authentication than giving that flexibility to the client.
Another approach, if you would like to stick with using a timestamp, is to use the first request interchange to determine the time offset between your server instance and the client device, and then only accept requests with a "current" timestamp. For this, you would need to determine the uncertainty with which you can measure that time offset and use it as the cutoff for a timestamp no longer being current. To prevent replay attacks within that cutoff period, you would need to save and disallow the last couple of timestamps used. You can probably do this inside your instance, since AppEngine, AFAIK, routes requests from the same client preferentially to the same instance. Then, as long as it takes longer to shut down an instance and start a new one (i.e., to clear your disallow cache) than your cutoff period, you shouldn't have too many issues with replay attacks.
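A compact sketch of that freshness check; the 30-second window and the in-memory set are assumptions, subject to the per-instance caveats described above:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class TimestampGuard {
    static final long MAX_SKEW_MS = 30_000;   // assumed uncertainty of the offset
    // timestamps accepted within the window; per-instance only, as discussed above,
    // and real code should evict entries older than the window
    static final Set<Long> recentlySeen = ConcurrentHashMap.newKeySet();

    static boolean acceptTimestamp(long clientMillis, long knownOffsetMs) {
        long adjusted = clientMillis + knownOffsetMs;
        long now = System.currentTimeMillis();
        if (Math.abs(now - adjusted) > MAX_SKEW_MS) return false;   // not "current"
        // add() returns false if this exact timestamp was already used: a replay
        return recentlySeen.add(clientMillis);
    }
}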
I'm building an application that is a kind of registry. Think of a dictionary: you look up a word, and it returns something if the word is found. Now, this registry is going to store valuable information about companies, and some people could be tempted to harvest the complete listing. My application uses EJB 3.0 and replies to web service calls.
So I was thinking about permitting a maximum of 10 queries per IP address per day, storing the IP address and a counter in a table that would be emptied by a script every night.
Is it a good idea/practice to do so? If yes, how can I get the IP address on the EJB side?
Is there a better way to prevent someone from harvesting all the data from my database?
I've also thought about CAPTCHAs, but I think they're a pain for the user, and sometimes they are difficult to read even for real humans.
Thanks,
Alain
I'd say a limit of 10 queries per day per IP is not very good. Take into account that many people may share the same public IP.
Although it's not 100% accurate, you could check whether an unusual number of requests is coming from the same IP in a short period of time. When that alarm goes off, you show a CAPTCHA.
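On the asker's "how do I get the IP on the EJB side" question: assuming the bean is exposed as a JAX-WS endpoint over HTTP, a sketch might look like this (the in-memory counter is a naive stand-in for the nightly-emptied table):

import javax.annotation.Resource;
import javax.jws.WebService;
import javax.servlet.http.HttpServletRequest;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.handler.MessageContext;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

@WebService
public class RegistryService {

    @Resource
    private WebServiceContext wsContext;

    // naive in-memory counter; the question's nightly script would reset a table instead
    private static final ConcurrentMap<String, AtomicInteger> hits = new ConcurrentHashMap<>();

    public String lookup(String word) {
        HttpServletRequest req = (HttpServletRequest)
                wsContext.getMessageContext().get(MessageContext.SERVLET_REQUEST);
        String ip = req.getRemoteAddr();
        if (hits.computeIfAbsent(ip, k -> new AtomicInteger()).incrementAndGet() > 10) {
            throw new RuntimeException("Daily query limit reached");
        }
        return doLookup(word);   // placeholder for the real registry lookup
    }

    private String doLookup(String word) { return "..."; }
}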
An alternative is to put a unique request-based token in a hidden field of the form, store it in the session scope, and compare the two on submit of the form. That would filter out bots that don't maintain the session, which is already quite a lot of them.
To go a step further, you could add a timestamp to the request-based token and then check whether the form was submitted within a reasonable time, e.g., no sooner than 5 seconds (roughly the fastest a normal human can fill in and submit the form). That would filter out the bots that fill in and submit the form instantly, in under a second. Another advantage is that a very smart bot is then forced to slow down its stream of subsequent requests.
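A minimal servlet sketch of the token-plus-timestamp idea; the names are made up for illustration:

import java.io.IOException;
import java.util.UUID;
import javax.servlet.http.*;

public class GuardedFormServlet extends HttpServlet {

    @Override  // render the form with a fresh token in a hidden field
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String token = UUID.randomUUID().toString();
        req.getSession().setAttribute("formToken", token);
        req.getSession().setAttribute("formTokenTime", System.currentTimeMillis());
        resp.getWriter().printf(
            "<form method='post'><input type='hidden' name='token' value='%s'>...</form>", token);
    }

    @Override  // accept the submit only if the token matches and enough time has passed
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        HttpSession session = req.getSession(false);
        String expected = session == null ? null : (String) session.getAttribute("formToken");
        Long issuedAt = session == null ? null : (Long) session.getAttribute("formTokenTime");
        boolean humanSpeed = issuedAt != null
                && System.currentTimeMillis() - issuedAt >= 5000;   // 5 seconds, as above
        if (expected == null || !expected.equals(req.getParameter("token")) || !humanSpeed) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);       // likely a bot
            return;
        }
        // ... process the legitimate submission ...
    }
}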
I would at least not rely on the IP address alone. It comes with too many external disturbing factors.
I would like to design a PvP game using Flash on the client and a Java socket server, but I need the server to validate trajectories and whether bullets hit their targets, to prevent cheating.
Is there any tutorial or paper on how to do this?
To do this you need server-side logic.
Mainly, you will use the clients just to display the game states sent by the server (if you want, you can also let each client show whatever it thinks is right until a new game state is received, and then sync to it) and to send the server just the actions performed (clicks or key presses), while the server takes care of everything else.
Clients should mainly be front ends for the world representation.
The general idea for an uncheatable multiplayer game is:
You should only send the keys the user is pressing; the server stores them and, at set intervals, processes the information and sends a snapshot of the current position of every object in the game.
If you don't want to waste too much network traffic:
You could keep every object's position for the last 2 seconds and record the last user input (with the input, the client can also send the id of the last snapshot it has), then send only what differs between the positions now and what the user already has.
Since you asked for patterns, I am assuming you understand the kind of logic you want to write on the server side, but are not sure how to organize your code.
You should take a look at the strategy pattern (http://en.wikipedia.org/wiki/Strategy_pattern). Since in this problem the server needs to validate the data differently depending on various locations on the screen, the strategy pattern is a good fit.
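A tiny illustration of that pattern applied to shot validation; the Shot interface and the concrete checks are invented for the example:

// Strategy interface: one validation rule per game situation.
interface ShotValidator {
    boolean isValid(Shot shot);
}

// Minimal stand-in for whatever data the server keeps about a shot.
interface Shot {
    double distance();
    double weaponRange();
    boolean hasLineOfSight();
}

class OpenFieldValidator implements ShotValidator {
    public boolean isValid(Shot shot) {
        return shot.distance() <= shot.weaponRange();   // range check only
    }
}

class CoverValidator implements ShotValidator {
    public boolean isValid(Shot shot) {
        // also require a clear line of sight through obstacles
        return shot.distance() <= shot.weaponRange() && shot.hasLineOfSight();
    }
}

class ShotHandler {
    private ShotValidator validator = new OpenFieldValidator();

    void setValidator(ShotValidator v) { validator = v; }   // swap per situation

    boolean handleShot(Shot shot) {
        return validator.isValid(shot);
    }
}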
@Jack: +1, and you should not actually run the full physics simulation on the server; the server just has to check the start point, end point, range, time, etc., to see whether they are reasonable.