I have a server for a client-server game (ideally the basis for a small MMO) and I am trying to determine the best way to organize everything. Here is an overview of what I have:
[server start]
    load/create game state
    start game loop on new thread
    start listening for udp packets on new thread
    while not closing
        listen for new tcp connection
        create new tcp client
        start client's tcp listener on new thread
    save game state
    exit

[game loop]
    sleep n milliseconds // Should I sleep here or not?
    update game state
    send relevant udp packet updates to clients
    every second
        remove timed out clients

[listen for udp]
    on receive, send to correct tcp client to process

[listen for tcp] (1 for each client)
    manage tcp packets
Is this a fair design for managing the game state, the tcp connections, and sending/receiving udp packets for state updates? Any comments or problems?
I am most interested in the best way to do the game loop. I know I will have issues if I have a large number of clients, because I am spawning a new thread for each new client.
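To make the game loop step concrete, here is roughly the shape I have in mind (just a sketch; TICK_MS, running and the helper method names are placeholders, not code from my actual project):

    // Sketch of the game loop outlined above; names are illustrative.
    private static final long TICK_MS = 50; // "sleep n milliseconds"

    public void gameLoop() throws InterruptedException {
        long lastTimeoutCheck = System.currentTimeMillis();
        while (running) {
            long start = System.currentTimeMillis();

            updateGameState();                       // update game state
            sendUdpUpdatesToClients();               // send relevant udp packet updates

            if (start - lastTimeoutCheck >= 1000) {  // every second
                removeTimedOutClients();
                lastTimeoutCheck = start;
            }

            // Should I sleep here or not? Sleeping for whatever is left of the
            // tick keeps the update rate fixed without burning a whole core.
            long elapsed = System.currentTimeMillis() - start;
            if (TICK_MS - elapsed > 0) {
                Thread.sleep(TICK_MS - elapsed);
            }
        }
    }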
That looks like a reasonable start to the design. Going beyond 10 clients (per your comment) wouldn't be bad in terms of scaling the number of threads. As long as the threads are mostly waiting and not actually processing something, you can easily have thousands of them before things start to break down. I recall hitting some limits with a similar design over 5 years ago; it was around 7,000 threads.
Looks like a good design. If I were you I would use an existing NIO framework like Netty.
Just google Java NIO frameworks and you'll find several frameworks with examples and documentation.
Update
Thanks, but I prefer to do it all on my own. This is only a side project of mine, and much of its purpose is the learning experience.
IMHO you'll learn more by using an existing framework and focusing on the game design than by doing everything yourself from the start.
Next time you'll know how to design the game, and that makes it easier to design the IO framework too.
I'd say that (especially) in Java, threads are more for convenience than for performance (there are only oh-so-many cores, and I guess you'd like to keep some control on which threads have priority).
You haven't said anything about the volume of message exchange or the number of clients, but you might want to consider what is more natural from this perspective: have a single thread handle the network IO and dispatch incoming requests in a tight loop, mapping the ACTUAL work (i.e. requests that you can't handle in the tight loop) onto a thread pool. IIRC, in such a setup the limiting factor will be the maximum number of TCP connections you can wait for input on simultaneously in a single thread.
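A rough sketch of that shape, assuming a java.nio Selector for the IO thread and an ExecutorService for the workers (the port, pool size and handle() method are made up for illustration):

    // Sketch only: one thread blocks on a Selector and hands non-trivial
    // work to a thread pool. Error handling and write interest are omitted.
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class IoDispatcher {
        public static void main(String[] args) throws IOException {
            ExecutorService workers = Executors.newFixedThreadPool(4);
            Selector selector = Selector.open();

            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(5000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();                              // wait for readiness events
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        int read = client.read(buf);            // quick, non-blocking read
                        if (read > 0) {
                            buf.flip();
                            workers.execute(() -> handle(buf)); // heavy work goes to the pool
                        } else if (read == -1) {
                            key.cancel();
                            client.close();
                        }
                    }
                }
            }
        }

        private static void handle(ByteBuffer request) {
            // placeholder for the ACTUAL work
        }
    }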
For increased fun, compare this with a solution in a language which has more lightweight threads, like Erlang.
Of course, as #WhiteFang34 said, you won't need such a solution until you do some serious simulation/use of your code. Related techniques (and frameworks like Netty where you could find inspiration) might also be found in this question here.
There is a certain cost to thread creation, mostly the per-thread stack.
Related
I have a Java program with multiple sockets that occasionally have data that need to be read and processed, but there is an indeterminate amount of time during which there is no data to be read. I need a good way to constantly check if there is data in the sockets and process it. Assigning one thread per socket is not a good idea, since there could be too many sockets and it would use too much memory.
Currently, I have a couple threads, each one assigned to service its own list of sockets. If there was nothing to read in any of the sockets, then sleep one second, then loop. If there was something to read in any of the sockets, just loop without waiting and iterate through the sockets again.
The reason I do this is because I don't want to use up too much resources if there is nothing to read, and the one second delay is not a problem. The only down side is that there is no flexibility for sockets to jump threads, so the worst case scenario is that a single thread is overloaded with work, while the other threads are doing nothing.
Another idea I've had: create a thread pool, and queue up all the sockets to be serviced, and re-add them when they are serviced, but there is no good way to know if none of the sockets need servicing and the threads can take a break to free up CPU cycles.
Is there a good way to assign threads tasks, but not overload computer resources if there is nothing to do?
Ideally an event is triggered each time there is data available in a socket, but as far as I know, there is no way to do this, and I must poll the sockets.
To reiterate, I do not want a one to one relationship between socket and thread.
there could be too many sockets and use too much memory.
You can achieve 1,000 to 10,000 sockets this way. Memory is much cheaper than it was when NIO was introduced 12 years ago, and threads are more efficient and scalable than they used to be.
I have a couple threads, each one assigned to service its own list of sockets. If there was nothing to read in any of the sockets, then sleep one second, then loop.
I use a pause which busy-waits for a short period, then yields, and finally sleeps for an escalating period of time.
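A minimal sketch of that kind of escalating pause (the thresholds and the 10 ms cap are arbitrary choices, not from the answer):

    // Hypothetical escalating pause: busy-wait briefly, then yield,
    // then sleep for increasingly long intervals until work arrives.
    public final class Pause {
        private int count = 0;

        public void pause() throws InterruptedException {
            count++;
            if (count < 1_000) {
                Thread.onSpinWait();          // busy-wait hint (Java 9+)
            } else if (count < 2_000) {
                Thread.yield();               // give other threads a chance
            } else {
                // escalate the sleep, capped here at 10 ms
                long millis = Math.min(10, (count - 2_000) / 100 + 1);
                Thread.sleep(millis);
            }
        }

        public void reset() {                 // call this whenever data was read
            count = 0;
        }
    }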
You can use Selectors, but these are not simple to use correctly. In this situation I would use a library like Netty, or at the very least read the code it uses.
The only down side is that there is no flexibility for sockets to jump threads, so the worst case scenario is that a single thread is overloaded with work, while the other threads are doing nothing.
This is where using a thread per socket is better.
I must poll the sockets.
You can use Selectors, but these are single-threaded, and switching sockets between selectors is not simple.
I would reconsider using more threads for simplicity.
I'm currently developing a simple P2P network as an exercise. Each node in the network sends heartbeats to a subset of the other nodes to be able to detect nodes that have left the network. Besides the heartbeat packets, I send packets when new nodes join/leave the network, when they want to locate a resource (small text files), etc. All packets are UDP packets.
Whenever I receive a packet I start a new thread that handles that specific packet. I am, however, concerned with the number of threads I start during one application's lifetime, which adds up to quite a lot (especially because of the heartbeats). (There is also the risk of deadlocks and the like, which I would like to avoid.)
I thought about having a queue or something where I put all incoming packets and have a single thread handling all packets one at a time from that queue (something like the producer-consumer pattern). I would like the packets to be handled rapidly so the sender doesn't think the packet is lost.
What is the best way to handle a lot of different incoming packets without having to start a new thread for each of them? Should I go with what I have, the producer-consumer approach, or something different?
How long does it take your application to process one packet?
For the ping packets it is probably faster to just process them as they are received; the others you can put in a shared data structure such as a blocking queue, so that when the queue is empty the worker threads wait for new jobs, and when a new job is added, a thread is woken up and does the job.
Starting one thread per packet probably makes you spend more time starting and stopping threads than actually doing the work.
If the work done in response to a packet isn't very time consuming for any of the packet types, the extra time spent on queue locks and thread scheduling might make your program slower rather than faster.
In any case, use a thread pool and start the workers at the beginning. If you want, you could increase or reduce the number of worker threads dynamically depending on the load of the past few minutes.
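A minimal sketch of that idea with a blocking queue and a fixed set of workers started up front (class and method names are illustrative):

    // Sketch: UDP packets are handed to a BlockingQueue and a fixed pool
    // of workers processes them; take() blocks while the queue is empty.
    import java.net.DatagramPacket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class PacketWorkers {
        private final BlockingQueue<DatagramPacket> queue = new LinkedBlockingQueue<>();

        public PacketWorkers(int workers) {
            for (int i = 0; i < workers; i++) {
                Thread t = new Thread(() -> {
                    try {
                        while (true) {
                            DatagramPacket packet = queue.take(); // wait for a job
                            handle(packet);                       // placeholder handler
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                t.setDaemon(true);
                t.start();
            }
        }

        public void submit(DatagramPacket packet) {
            queue.offer(packet); // called by the receive loop
        }

        private void handle(DatagramPacket packet) {
            // process the packet...
        }
    }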
I would use an event-driven architecture. Creating a new thread for every packet is not scalable, so this will work up to a certain amount of workload, but there is a point where it won't work anymore. You could compare that to e.g. a chat program like the Facebook chat, where messages are the packets.
An event-driven architecture would be scalable and IMHO exactly what you're looking for. Just do some googling; there are libraries for many programming languages, so just pick the right one for you (I like to do that in Erlang, Scala, C or Python).
edit: ok, didn't see the java tag. But the language does not matter.
Take a look at this link for example:
http://www.nightmare.com/medusa/async_sockets.html
I find it quite a good one for getting the idea of event-driven programming.
I am connecting 10 devices to a LAN, all of them have a udp server that goes like:
while (true) {
    serverSocket.receive(receivePacket);
    dostuff(receivePacket);
}
serverSocket.close();
Now let's assume 9 of the devices try to initiate a connection to the 10th device simultaneously. How can I accept all 9, instead of just the first one, which will then block the socket until the server completes its computation? Should I start a thread which will take care of dostuff()? Will this let me handle all of the simultaneous requests I got?
A basic design would have one thread responsible for handling incoming requests (with your desired limit) and then handing them off to worker/request handler threads. When each of these worker threads is finished, you'd want to update a shared/global counter to let the main thread know that it can establish a new connection. This will require a degree of synchronization, but it can be pretty fun.
Here's the idea:
serverThread:
    while true:
        serverLock.acquire()
        if numberOfRequests < MAX_REQUESTS:
            packet = socket.receive()
            numberOfRequests++
            requestThread(packet).start()
        else:
            serverMonitor.wait(serverLock)
        serverLock.release()

requestThread:
    // handle packet
    serverLock.acquire()
    if numberOfRequests == MAX_REQUESTS:   // server thread may be waiting for a free slot
        serverMonitor.pulse()
    numberOfRequests--
    serverLock.release()
You'll want to make sure the synchronization is all correct; this is just to give you an idea of what you can start out with. But when you get the hang of it, you'll be able to make optimizations and enhancements. One particular enhancement, which also lends itself to a limited number of requests, is something called a ThreadPool.
Regardless, the basic structure is very much the same for most servers: a main thread responsible for handing off requests to worker threads. It's a neat and simple abstraction.
You can use threads in order to solve that problem. Since Java already has an API that handles threads, you can just create instances of runnable executors; take a look at the Executor interface. Here is another useful link that could potentially help: blocking queue
Use a relatively large thread pool, since UDP doesn't require a response.
The main method will run as a listener and a thread pool will do the rest of the heavy lifting.
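A hedged sketch of that listener-plus-pool shape (the port, pool size and handlePacket() are made up for illustration):

    // Sketch: the main thread blocks on receive() and hands each packet
    // to a worker from a fixed thread pool.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class UdpServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(9);
            try (DatagramSocket socket = new DatagramSocket(4445)) {
                while (true) {
                    byte[] buf = new byte[1024];
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);                   // blocks until a packet arrives
                    pool.execute(() -> handlePacket(packet)); // worker does the heavy lifting
                }
            }
        }

        private static void handlePacket(DatagramPacket packet) {
            // the dostuff(receivePacket) equivalent goes here
        }
    }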
From what I read about Java NIO and non-blocking [Server]SocketChannels, it should be possible to write a TCP server that sustains several connections using only one thread - I'd make a Selector that waits for all relevant channels in the server's loop.
Is that right, or am I missing some important detail? What problems can I encounter?
(Background: The TCP communication would be for a small multiplayer game, so max. 10-20 simultaneous connections. Messages will be sent about every few seconds.)
Yes, you are right. The problem you can encounter is the duration of processing being too long. In that case you'd have to wrap the processing inside another thread, so that it does not interfere with the networking thread and cause noticeable delay.
Another detail: channels are all about "moving" data. If the data you wish to send is ready, then you can move it to a network channel. The copying/buffering/etc. is all done by the NIO implementation.
Your single-threaded "networking thread" is only steering the connection, but not throttling it (read: weird analogy with a car).
The basic multithreaded approach is easier to design and implement than single-threaded NIO. The performance gain isn't noticeable in a small multiplayer game server/client, especially if a message is only sent every few seconds.
Brian Agnew said:
This all works well when the server-side processing for each client is negligible. However a multi-threaded approach will scale much better.
I beg to disagree. A one-client-one-thread approach will exhaust memory much faster than if you handle multiple clients per thread as you won't need a full stack per client. See the C10K paper for more on the topic: http://www.kegel.com/c10k.html
Anyway, if there won't be more than 20 clients, just use whatever is easiest to code and debug.
Yes you can. See this example for an illustration on how to do this.
The important section is this:
for (;;) { // Loop forever, processing client connections
    // Wait for a client to connect
    SocketChannel client = server.accept();
    // Build response string, wrap, and encode to bytes (elided)
    client.write(response);
    client.close();
}
This all works well when the server-side processing for each client is negligible. However a multi-threaded approach will scale much better.
I would like to design a simple application (without J2EE and JMS) that can process a massive amount of messages (like in trading systems).
I have created a service that can receive messages and place them in a queue so that the system won't get stuck when overloaded.
Then I created a service (QueueService) that wraps the queue and has a pop method that pops a message from the queue, or returns null if there are no messages; this method is marked as "synchronized" for the next step.
I have created a class that knows how to process the message (MessageHandler) and another class that can "listen" for messages in a new thread (MessageListener). The thread has a "while(true)" loop and constantly tries to pop a message.
If a message was returned, the thread calls the MessageHandler class, and when it's done, it asks for another message.
Now, I have configured the application to start 10 MessageListeners to allow multi-message processing.
I now have 10 threads that are in a loop all the time.
Is that a good design?
Can anyone point me to some books or sites on how to handle such a scenario?
Thanks,
Ronny
It seems from your description that you are on the right path, with one little exception: you implemented a busy wait on the retrieval of messages from the queue.
A better way is to block your threads in the synchronized popMessage() method, doing a wait() on the queue resource when no more messages can be popped. When adding a message (or messages) to the queue, the waiting threads are woken up via notifyAll(); one or more threads will get a message and the rest re-enter the wait() state.
This way the distribution of CPU resources will be smoother.
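For illustration, a sketch of that wait()/notifyAll() pattern (in practice a java.util.concurrent BlockingQueue already gives you this behaviour):

    // Listener threads block in popMessage() until a message is pushed.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class MessageQueue<T> {
        private final Deque<T> messages = new ArrayDeque<>();

        public synchronized void push(T message) {
            messages.addLast(message);
            notifyAll();                      // wake up waiting listener threads
        }

        public synchronized T popMessage() throws InterruptedException {
            while (messages.isEmpty()) {
                wait();                       // block until a message is pushed
            }
            return messages.removeFirst();
        }
    }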
I understand that queuing providers like WebSphere and Sonic cost money, but there's always JBoss Messaging, FUSE with ActiveMQ, and others. Don't try to make a better JMS than JMS. Most JMS providers have persistence capabilities that provide fault tolerance if the queue or app server dies. Don't reinvent the wheel.
Reading between the lines a little, it sounds like you're not using a JMS provider such as MQ. Your solution sounds for the most part OK; however, I would question your reasons for not using JMS.
You mention something about trading. I can confirm that a lot of trading systems use JMS, with and without J2EE. If you really want high performance, reliability and peace of mind, don't reinvent the wheel by writing your own queuing system; take a look at some of the JMS providers and their client APIs.
karl
Event loop
How about using an event loop/message pump instead? I actually learned this technique from watching the excellent node.js video presentation from Ryan, which I think you should really watch if you haven't already.
You push at most 10 messages from thread A to thread B (blocking if full). Thread A has an unbounded LinkedBlockingQueue. Thread B has a bounded ArrayBlockingQueue of size 10 (new ArrayBlockingQueue(10)). Both thread A and thread B have an endless "while loop". Thread B will process messages available from the ArrayBlockingQueue. This way you will only have 2 endless "while loops". As a side note, reading the specification, it might even be better to use 2 ArrayBlockingQueues because of the following sentence:
Linked queues typically have higher throughput than array-based queues but less predictable performance in most concurrent applications.
Of course the array-backed queue has the disadvantage that it uses more memory, because you have to set its size up front (too small is bad, as it will block when full; too big could also be a problem if memory is low).
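A rough sketch of the handoff described above, assuming String messages (the thread names and sample message are illustrative):

    // Thread A buffers incoming messages in an unbounded queue and pushes
    // them into the bounded queue of size 10 that thread B drains.
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class HandoffExample {
        public static void main(String[] args) {
            BlockingQueue<String> incoming = new LinkedBlockingQueue<>();   // thread A's queue
            BlockingQueue<String> toProcess = new ArrayBlockingQueue<>(10); // thread B's queue

            Thread a = new Thread(() -> {
                try {
                    while (true) {
                        String msg = incoming.take();  // wait for an incoming message
                        toProcess.put(msg);            // blocks if thread B is 10 messages behind
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread b = new Thread(() -> {
                try {
                    while (true) {
                        String msg = toProcess.take(); // wait for work
                        System.out.println("processing " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            a.start();
            b.start();
            incoming.offer("hello");                   // producer side feeds thread A
        }
    }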
Accepted solution:
In my opinion you should prefer my solution over the accepted solution. The reason is that, if at all possible, you should only use the java.util.concurrent package. Writing proper threaded code is hard; when you make a mistake you will end up with deadlocks, starvation, etc.
Redis:
Like others already mentioned, you should use a JMS for this. My suggestion is something along those lines, but in my opinion simpler to use/install. First of all, I assume your server is running Linux. I would advise you to install Redis. Redis is really awesome/fast and you should also use it as your datastore. It has blocking list operations which you can use. Redis will store your results to disk, but in a very efficient manner.
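If you go that route, a worker blocking on a Redis list used as a queue might look roughly like this, assuming the Jedis client (the key name and handler are made up; check the exact API of your Jedis version):

    // Illustrative only: blocks on a Redis list until a message is pushed.
    // Assumes Redis runs locally and Jedis is on the classpath.
    import redis.clients.jedis.Jedis;
    import java.util.List;

    public class RedisWorker {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                while (true) {
                    // BLPOP blocks until an element is pushed to "messages"
                    // (timeout 0 = wait forever); the result is [key, value].
                    List<String> result = jedis.blpop(0, "messages");
                    handle(result.get(1));
                }
            }
        }

        private static void handle(String message) {
            // process the message...
        }
    }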
Good luck!
While it is now showing its age, Practical .NET for Financial Markets demonstrates some of the universal concepts you should consider when developing a financial trading system. Although it is geared toward .NET, you should be able to translate the general concepts to Java.
The separation of listening for the message and its processing seems sensible to me. Having a scalable number of processing threads is also good; you can tune the number as you find out how well parallel processing works on your platform.
The bit I'm less happy about is the way the threads poll for message arrival. Here you're doing busy work, and if you add sleeps to reduce that, then you don't react immediately to message arrival. The JMS APIs and MDBs take a more event-driven approach. I would take a look at how that's implemented in an open source JMS so that you can see the alternatives. [I also endorse the opinion that re-inventing JMS for yourself is probably a bad idea.] The thing to bear in mind is that as your systems get more complex, you add more queues and more processing, and the busy work has a greater impact.
The other concern that I have is that you will hit the limitations of a single machine. First, you could allow greater scalability by allowing listeners to be on many machines. Second, you have a single point of failure. Clearly, solving this sort of stuff is where the messaging vendors make their money. This is another reason why Buy rather than Build tends to be a win for complex middleware.
You need a very light, super fast, scalable queuing system. Try the Hazelcast distributed queue!
It is a distributed implementation of java.util.concurrent.BlockingQueue. Check out the documentation for details.
Hazelcast is actually a little more than a distributed queue; it is a transactional, distributed implementation of queue, topic, map, multimap, lock and executor service for Java.
It is released under the Apache license.
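A minimal sketch of using it (package names can differ between Hazelcast versions, and the queue name is made up):

    // The distributed queue implements java.util.concurrent.BlockingQueue,
    // so producers and consumers on different nodes can put()/take() on it.
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.concurrent.BlockingQueue;

    public class HazelcastQueueExample {
        public static void main(String[] args) throws InterruptedException {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            BlockingQueue<String> queue = hz.getQueue("messages"); // shared across the cluster

            queue.put("hello");            // producer side
            String msg = queue.take();     // consumer side, blocks until available
            System.out.println(msg);

            hz.shutdown();
        }
    }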