I have a Java program (call it Jack) and an Objective-C program (call it Oscar), which I run on the same Mac OS X computer. Oscar sends a string message via a socket to Jack, once per second.
For reliability and performance would it be better to maintain an open socket between Jack and Oscar? Or would it be better to repeatedly open a socket, send the message, then close the socket again?
Keep it open. You are going to need it a lot (once per second), and there's some overhead involved in opening new sockets. Plus, you will be chewing up the heap with new objects until the garbage collector comes by.
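For example, on Jack's (Java) end you might keep something like this running -- the port number and the line-per-message framing are just assumptions about your setup:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch: Jack accepts Oscar's connection once and keeps reading from it,
// so no socket is opened or closed between messages.
public class Jack {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(9000);      // port is an assumption
             Socket oscar = listener.accept();                    // Oscar connects once
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(oscar.getInputStream()))) {
            String message;
            while ((message = in.readLine()) != null) {           // one line per message, once a second
                System.out.println("Got: " + message);
            }
        }
    }
}
```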
Keep it open, Jack, keep it open. It already costs me CPU cycles to open and close the connection, only to do it all over again the next second.
Would it be easier to make a Java Native Interface call? Personally, I think that messing with sockets locally might be a little overkill, but then again, I do not know the whole story about what you are trying to accomplish.
If you can stand to drop a packet or two, you might want to use UDP instead.
Every once in a while, long-term TCP connections get a little funky and hang when the connection goes bad. Usually they recover, but not always--and in the meantime they can get slow.
UDP works better in cases where you resend all the data every time and don't need every single packet to arrive, because you don't care about the history...
Keeping an open connection may work for you and is theoretically fine... I just haven't always had the best luck.
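If you go that route, the sending side is only a few lines -- the port and payload here are made up:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Rough sketch: fire-and-forget UDP send, once per second.
// A lost datagram is simply replaced by the next second's message.
public class UdpSender {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress localhost = InetAddress.getLoopbackAddress();
            while (true) {
                byte[] payload = "status update".getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(payload, payload.length, localhost, 9001));
                Thread.sleep(1000);                                // one message per second
            }
        }
    }
}
```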
Sorry, I read the question rather quickly. Yes, I would keep the socket open if it is on the local machine. It makes no sense to keep opening and closing it; memory needs to be allocated every time, and grinding back and forth won't help in that case.
Just so I understand correctly, you are writing a Cocoa server app that listens for a connection just so it can pass some data to a Java app that doesn't have access to the information returned from the Cocoa API?
Are you sure you couldn't just get the results from a terminal command in Java? I'm totally guessing, but if that is the case you could improve on what you plan to do.
I'm looking for a very basic IPC mechanism between Java programs. I prefer not to use sockets, because my 'agent' is spawning new JVMs and setting up sockets in such an environment is a bit more complicated.
I was thinking about having two files per spawned JVM: 'in' and 'out'. On the 'in' file, the agent sends commands to the worker, and on the 'out' file, the worker sends responses back to the agent.
The big problem is that so far I haven't managed to get the communication up and running. Just creating an ObjectOutputStream/ObjectInputStream doesn't work out of the box, because the readObject method isn't blocking: it throws an EOFException when there is no content instead of blocking. Luckily that was easy to work around by adding a delay and trying again a bit later.
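To illustrate that workaround (the file name and delay are made up, and this only shows reading the first command; tracking the read position and locking is exactly where it gets hairy):

```java
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Sketch of the delay-and-retry idea: if the 'in' file holds no (complete) object yet,
// reading the stream header or the object throws EOFException, so back off and retry.
public class WorkerInput {
    static Object readFirstCommand(File inFile)
            throws IOException, ClassNotFoundException, InterruptedException {
        while (true) {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(inFile))) {
                return in.readObject();
            } catch (EOFException nothingWrittenYet) {
                Thread.sleep(100);   // illustrative delay before trying again
            }
        }
    }
}
```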
So I got my POC up and running, but eventually I ran into a stream corruption issue. Apparently, even in append-only mode, you can still run into corruption. So I started to look at FileLock, but now I'm running into a '"main" java.lang.Error: java.io.IOException: Bad file descriptor'.
So far the 'let's do the simple file thing' has been quite an undertaking, and I'm not sure if I'm on the right path at all. I don't want to introduce a heavyweight solution like JMS, or even a less heavyweight solution like sockets. Does anyone know something extremely simple that solves this particular problem? My preference is still for a file-based approach.
I have coded a server in Java that will have several clients connected to it. I want to be able to see how much data is sent to each client, to be able to make decisions like allowing more clients or cutting back, or increasing/decreasing the frequency at which the data is sent.
How can I do that?
I'm currently using Java's Socket API, but if any other library gives me this easily, then a change can be done. The server will run on a Linux flavor, likely Ubuntu, so an OS-specific answer is welcome too.
When you write data to the socket, you need to remember how much you sent. There really isn't a smarter way to do this.
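For example, you could wrap each client's output stream in a small counter -- the class here is hand-rolled for illustration (Guava ships a similar CountingOutputStream if you'd rather use a library):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicLong;

// Counts bytes as they are written to a client's socket stream,
// so the server can compare clients and throttle or refuse new ones.
public class CountingOutputStream extends FilterOutputStream {
    private final AtomicLong bytesSent = new AtomicLong();

    public CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        bytesSent.incrementAndGet();
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);          // write the whole chunk at once
        bytesSent.addAndGet(len);
    }

    public long bytesSent() {
        return bytesSent.get();
    }
}
```

Wrap socket.getOutputStream() in this for each client and poll bytesSent() periodically; that gives you the per-client totals to base those decisions on.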
Generally speaking, you would allow the server to have a limited number of connections. Trying to tune the system based on bandwidth restrictions is very hard to get right.
I'm using Java and RMI in order to execute 100k Montecarlo Simulations on a cluster of hundreds of cores.
The approach I'm using is to have a client app that invokes RMI processes and divides the simulations among the available (RMI) processes on the grid.
Once the simulations have been run, I have to re-aggregate the results.
The only limit I have is that all this has to happen in less than 500ms.
The process is actually in place, BUT randomly, from time to time, one of the RMI calls takes 200 ms longer to execute.
I've added loads of logs and timings all over the place, and I've already ruled out the following as possible causes:
1) Simulations taking extra time
2) Data transfer (it works consistently; the slowdown only shows up sometimes, and only on a subset of RMI calls)
3) Transferring results back (I can clearly time how long it takes from the last RMI call's return to the end of the process)
The only thing I cannot measure is WHETHER any of the RMI calls takes extra time to be initialized (and honestly that is the only cause I can think of). The reason is that, unfortunately, the clocks are not synchronized :(
Is it possible that the RMI remote process gets passivated/detached/collected even though I keep a (Remote) reference to it from the client?
Hope the question is clear enough (I'm pretty much sure it isn't).
Thanks a mil, and do not hesitate to ask more questions if it is not clear enough.
Regards,
Giovanni
Is it possible that the RMI remote process gets passivated/detached/collected even though I keep a (Remote) reference to it from the client?
Unlikely, but possible. The RMI remote process should not be collected (as the RMI FAQ indicates for VM exit conditions). It could, however, be paged to disk if the OS desired.
Do you have a way to rule out GC calls (other than writing a monitor with JVM TI)?
Also, is your code structured in such a way that you send off all calls from your aggregator asynchronously, have the replies appended to a list, and aggregate the results when your critical time is up, even if some processors haven't returned results? I'm assuming that each processor is an independent, random event and that it's safe to ignore some results. If not, disregard.
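Something along these lines, assuming each remote call can be wrapped in a Callable around its RMI stub and that dropping stragglers is acceptable (the 500 ms matches the budget in the question):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hedged sketch of "aggregate whatever has arrived by the deadline".
public class DeadlineAggregator {
    public static List<double[]> run(List<Callable<double[]>> rmiCalls) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(rmiCalls.size());
        CompletionService<double[]> done = new ExecutorCompletionService<>(pool);
        rmiCalls.forEach(done::submit);                      // fire all remote calls asynchronously

        List<double[]> results = new ArrayList<>();
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(500);
        for (int i = 0; i < rmiCalls.size(); i++) {
            long remaining = deadline - System.nanoTime();
            if (remaining <= 0) break;                       // time is up: aggregate what we have
            Future<double[]> f = done.poll(remaining, TimeUnit.NANOSECONDS);
            if (f == null) break;                            // deadline hit while waiting
            try {
                results.add(f.get());
            } catch (ExecutionException failedCall) {
                // a failed simulation is simply ignored, per the assumption above
            }
        }
        pool.shutdownNow();                                  // abandon any stragglers
        return results;
    }
}
```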
I finally figured out the issue. Basically, after ensuring that the stub wasn't getting deallocated and that the GC wasn't being triggered behind the scenes, I used Wireshark to see whether there was any network issue.
What I found was that, randomly, one of the packets got lost, and on our network TCP needed 120 ms (41 retransmissions) to correctly re-transfer the data.
After switching to JDK 7, SDP and InfiniBand, we didn't experience the issue anymore.
So basically the answer to my question was... PACKET LOSS!
Thanks to those who replied to the post; it helped me focus on the right path!
Gio
Please stop me before I make a big mistake :) - I'm trying to write a simple multi-player quiz game for Android phones to get some experience writing server code.
I have never written server code before.
I have experience in Java, and using Sockets seems like the easiest option for me. A browser game would mean platform independence, but I don't know how to get around the lack of push from the server to the browser over HTTP.
This is how the game would play out; it should give some idea of what I require:
A user starts the App and it connects using a Socket to my server.
The server waits for 4 players, groups them into a game and then broadcasts the first question for the quiz.
After all the players have submitted their answers (or 5 seconds have elapsed), the server distributes the correct answer along with the next question.
That's the basics, you can probably fill in the finer details, it's just a toy project really.
MY QUESTION IS:
What are the pitfalls of using a simple JAR on the server to handle client requests? The server code registers a ServerSocket when it is first run and creates a thread pool for dealing with incoming client connections. Is there an option that is inherently better for connecting to multiple clients in real time with two-way communication?
A simple example is in the Sun tutorials; at the bottom you can see the source for a multithreaded server. My server is largely the same, except that I create a pool of threads up front to reduce overhead.
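For concreteness, the structure described above boils down to something like this sketch (the port, pool size and the trivial protocol are placeholders):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One ServerSocket, a fixed thread pool, one handler task per connected player.
public class QuizServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(32);
        try (ServerSocket listener = new ServerSocket(5000)) {
            while (true) {
                Socket player = listener.accept();          // blocks until a client connects
                pool.execute(() -> handle(player));
            }
        }
    }

    private static void handle(Socket player) {
        try (Socket p = player;
             BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
             PrintWriter out = new PrintWriter(p.getOutputStream(), true)) {
            out.println("WELCOME");                          // the real game protocol goes here
            String answer;
            while ((answer = in.readLine()) != null) {
                // hand the answer to whatever object tracks this player's game
            }
        } catch (Exception e) {
            // a dropped player should not bring down the server
        }
    }
}
```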
How many clients do you expect this system to be able to handle? If we have a new thread for each client, I can see that being a limit, as well as the number of free sockets for concurrent players. Threads seem to top out at around 6,500, with the number of available sockets nearly ten times that.
To be honest, if my game could handle 20 concurrent players that would be fine, but I'm trying to learn whether this approach is inherently stupid. Any articles on setting up a simple chess server or something similar would be amazing; I just can't find any.
Thanks in advance oh knowledgeable ones,
Gav
You can handle 20 concurrent players fine with a Java server. The biggest thing to make sure you do is avoid any kind of blocking I/O like it was the devil itself.
As a bonus, if you stick with non-blocking I/O you can probably do the whole thing single-threaded.
Scaling much past 100 users or so may mean getting into multiple processes/servers, depending on how much load each user places on your server.
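A sketch of the single-threaded, non-blocking variant using java.nio (the port is a placeholder):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// One Selector services every player socket, so no per-client thread is needed.
public class NioQuizServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(5000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                               // blocks until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        key.cancel();
                        client.close();                      // player disconnected
                    } else {
                        // parse the player's answer from the buffer and update game state here
                    }
                }
            }
        }
    }
}
```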
It should be able to do it without an issue as long as you code it properly.
Project Darkstar
You can get around the "push from server to client over HTTP" problem by using the Long Poll method.
However, using TCP sockets for this will be fine too. Plenty of games have been written this way.
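For the long-poll route, here is a rough sketch using the JDK's built-in com.sun.net.httpserver; the endpoint, timeout and single shared queue are simplifications, and a real game would keep a queue per player:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.*;

// The client issues a GET to /poll and the server holds the request open
// until a game event arrives or the timeout elapses, then the client re-polls.
public class LongPollServer {
    static final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.setExecutor(Executors.newFixedThreadPool(16));  // one blocked thread per waiting client
        server.createContext("/poll", LongPollServer::handlePoll);
        server.start();
        // Elsewhere, the game loop pushes questions/answers: events.offer("QUESTION ...");
    }

    static void handlePoll(HttpExchange exchange) throws IOException {
        String event = null;
        try {
            event = events.poll(30, TimeUnit.SECONDS);          // hold the request open up to 30 s
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (event == null) {
            exchange.sendResponseHeaders(204, -1);              // nothing happened; client polls again
        } else {
            byte[] body = event.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        }
        exchange.close();
    }
}
```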
I'm currently translating an API from C# to Java which has a network component.
The C# version seems to keep the input and output streams and the socket open for as long as its classes are being used.
Is this correct?
Bearing in mind that the application is sending commands and receiving events based on user input, is it more sensible to open a new socket stream for each "message"?
I'm maintaining a ServerSocket for listening to the server throwing events but I'm not so sure that maintaining a Socket and output stream for outbound comms is such a good idea.
I'm not really used to Socket programming. As with many developers I usually work at the application layer when I need to do networking and not at the socket layer, and it's been 5 or 6 years since I did this stuff at university.
Cheers for the help. I guess this is more asking for advice than for a definitive answer.
There is a trade-off between the cost of keeping connections open and the cost of creating those connections.
Creating connections costs time and bandwidth. You have to do the 3-way TCP handshake, launch a new server thread, ...
Keeping connections open costs mainly memory and connections. Network connections are a resource limited by the OS. If you have too many clients connected, you might run out of available connections. It will cost memory as you will have one thread open for each connection, with its associated state.
The right balance will differ based on the usage you expect. If you have a lot of clients connecting for short periods of time, it's probably more efficient to close the connections. If you have a few clients connecting for long periods of time, you should probably keep the connections open...
If you've only got a single socket on the client and the server, you should keep it open for as long as possible.
If your application and the server it talks to are close, network-wise, it MAY be sensible to close the connection, but if they're distant, network-wise, you are probably better off letting the socket live for the duration.
Guillaume mentioned the 3-way handshake, which basically means that opening a socket takes a minimum of 3 times the shortest packet transit time. That transit time can be approximated by half the ping round-trip and can easily reach 60-100 ms over long distances. If you end up with an additional 300 ms wait for each command, will that impact the user experience?
Personally, I would leave the socket open, it's easier and doesn't cost time for every instance of "need to send something", the relative cost is small (one file descriptor, a bit of memory for the data structures in user-space and some extra storage in the kernel).
It depends on how frequently you expect the user to type in commands. If it happens quite infrequently, you could perhaps close the sockets. If it happens frequently, creating sockets repeatedly can be an expensive operation.
Now having said that, how expensive, in terms of machine resources, is it to have a socket connection open for infrequent data? Why exactly do you think that "maintaining a Socket and output stream for outbound comms is not such a good idea" (even though it seems the right thing to do)? On the other hand, this is different for file streams if you expect that other processes might want to use the same file. Closing the file stream quickly in this case would be the way to go.
How likely is it that you are going to run out of the many TCP connections you can create, which other processes making outbound connections might want to use? Or do you expect to have a large number of clients connecting to your server at a time?
You can also look at DatagramSocket and DatagramPacket. The advantage is lower overhead; the disadvantage is that you give up the guarantees that a regular Socket (TCP) buys with that overhead.
I suggest you look at using an existing messaging solution like ActiveMQ or Netty. These will handle a lot of the issues you may find with messaging.
I am coming in a bit late, but I didn't see anyone suggest this.
I think it would be wise to consider pooling your connections (whether plain Sockets or TCP); being able to keep a couple of connections open and quickly reuse them in your code base would be optimal for performance.
In fact, the Roslyn compiler uses this technique extensively in a lot of places.
https://github.com/dotnet/roslyn/search?l=C%23&q=pooled&type=&utf8=%E2%9C%93
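In Java terms, a tiny pool can be as simple as this sketch (host, port and pool size are made up, and a production pool would also validate and replace broken connections):

```java
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A few connections are opened up front and handed out and returned,
// so callers reuse them instead of paying the connect cost each time.
public class SocketPool {
    private final BlockingQueue<Socket> idle;

    public SocketPool(String host, int port, int size) throws IOException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new Socket(host, port));
        }
    }

    public Socket borrow() throws InterruptedException {
        return idle.take();                    // blocks if every connection is in use
    }

    public void release(Socket socket) {
        if (!socket.isClosed()) {
            idle.offer(socket);                // put a healthy connection back for reuse
        }
    }
}
```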