I have a server that needs to stream data to multiple clients as quickly as possible in the Observer pattern.
At least 200 messages need to be sent to each client per second until the client disconnects from the server, and each message consists of 8 values of several primitive types. Because each message needs to be sent as soon as it is created, messages cannot be batched into one large message. Both the server and the clients reside on the same LAN.
Which technology is more suitable for implementing streaming in this situation, RMI or sockets?
The overhead of RMI is significant so it would not be suitable. It would be better to create a simple protocol and use sockets to send the data.
Depending on the latency you can accept, you should tune the socket buffer sizes and disable the Nagle algorithm (TCP_NODELAY).
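For illustration, here is a minimal sketch of those socket options together with a simple hand-rolled message format (the port, buffer size and values are placeholders, not recommendations):

```java
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class StreamingServer {
    public static void main(String[] args) throws Exception {
        // Hypothetical port and buffer size, for illustration only.
        try (ServerSocket serverSocket = new ServerSocket(9000)) {
            Socket client = serverSocket.accept();
            client.setTcpNoDelay(true);          // disable the Nagle algorithm
            client.setSendBufferSize(8 * 1024);  // tune to the latency you can accept

            DataOutputStream out = new DataOutputStream(client.getOutputStream());
            // One message: 8 primitive values in a simple hand-rolled format.
            out.writeLong(System.nanoTime());
            out.writeInt(42);
            out.writeDouble(3.14);
            // ... the remaining five values ...
            out.flush();                         // push the message out immediately
        }
    }
}
```

In a real server you would of course keep one such socket per connected client and write each message as it is produced.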
I would not use RMI for this; RMI is there for Remote Method Invocation, i.e. when the client wants to execute a method (some business logic) on the server side.
Sockets are OK for this, but you might want to consider JMS (Java Message Service) for this specific scenario. JMS supports something called a Topic, which is essentially a broadcast to all listeners interested in that topic. It is also generally optimised to be very fast.
You can use something like Apache ActiveMQ to achieve what you want. You also get lots of options such as persistence (if the broker goes down, messages remain in the queue), message expiry (in case you want messages to become outdated if a client does not pick them up), etc.
You can obviously implement all of this using plain sockets and take care of everything yourself, but JMS provides it for you. You can send text or binary data, or even serialized objects (I don't personally recommend the latter).
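For example, a minimal sketch of publishing one of your messages to a JMS Topic via ActiveMQ might look like this (the broker URL and topic name are made up for illustration):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicPublisherExample {
    public static void main(String[] args) throws JMSException {
        // Hypothetical broker URL and topic name, for illustration only.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("sensor.updates");
        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // real-time data, no need to persist

        // Binary payload: every subscriber to the topic receives its own copy.
        BytesMessage message = session.createBytesMessage();
        message.writeBytes(new byte[] {1, 2, 3});
        producer.send(message);

        connection.close();
    }
}
```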
RMI is a request/response protocol, not a streaming protocol. Use TCP.
I'm trying to create a small software component in Java which has the following workflow:
As you can see, it receives messages via a single TCP connection (acting as the client and listening to a stream). Each received message gets processed (meaning it is converted into another format and some information is added). Afterwards, the newly created message should be distributed among several receivers. The receivers can be grouped: some only understand plain TCP or UDP, some of them use an HTTP-REST interface.
Do you know of a pattern or best practice for realizing this scenario?
You're essentially describing an Enterprise Service Bus. There are plenty of them available, from commercial software to lightweight open-source ones.
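If a full ESB is more than you need, the fan-out step can also be hand-rolled as a simple observer-style dispatcher. Here is a minimal sketch (the interface and class names are made up for illustration; each receiver kind would get its own implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical abstraction over the different receiver kinds (plain TCP, UDP, HTTP-REST).
interface MessageSender {
    void send(byte[] message);
}

class Dispatcher {
    private final List<MessageSender> senders = new ArrayList<>();

    void register(MessageSender sender) {
        senders.add(sender);
    }

    // Called after the incoming message has been converted/enriched.
    void dispatch(byte[] processedMessage) {
        for (MessageSender sender : senders) {
            sender.send(processedMessage);   // each receiver gets its own copy
        }
    }
}
```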
I'd like to implement a real-time messaging feature, such as the chat in Facebook, but several questions confuse me:
1. To reduce the overhead on the server and make it truly 'realtime', I should use a full-duplex form of communication, like sockets, instead of Ajax. Is that right?
2. If I use socket, which protocol should I choose, TCP or UDP?
3. Assuming that I am using TCP, will the server keep trying to resend lost packets, and would that add much overhead?
4. What if the network fails during communication between the server and a client? Will the socket close itself, or should I handle the various network failure conditions myself?
Can anyone help?
You can use WebSockets. XMLHttpRequest is effectively obsolete for anything real-time (because it isn't real-time), though you could fall back to it for users whose browsers don't support WebSockets.
Use UDP if the information you are sending is only valid at the moment it is sent; in games, for example, that would be the positions of the players (you don't care about the position they were in 5 seconds ago). Note that you can't use UDP with WebSockets anyway.
For anything other than that, use TCP (unless you do hole punching to achieve p2p), because data loss is probably unacceptable for you, and TCP handles it.
With UDP you would have to detect and resend lost data manually anyway, unless an occasional communication failure is acceptable to you.
You will get an IOException. If the connection was closed improperly, the exception is thrown after a period of unresponsiveness whose timeout you can adjust to your needs. This assumes TCP; with UDP you have to decide for yourself when you consider clients connected or disconnected, based on the responses/data you do (or do not) receive.
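For illustration, a minimal sketch of detecting a broken or idle TCP connection on the Java side (host, port and timeout are arbitrary placeholders):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ConnectionWatcher {
    public static void main(String[] args) {
        try (Socket socket = new Socket("chat-server", 5222)) {   // hypothetical endpoint
            socket.setSoTimeout(30_000);                          // give up on reads after 30 s
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[1024];
            while (true) {
                int read = in.read(buffer);
                if (read == -1) {
                    System.out.println("Server closed the connection cleanly");
                    break;
                }
                // ... handle the received chat data ...
            }
        } catch (SocketTimeoutException e) {
            System.out.println("No data for 30 s -- treat the peer as disconnected");
        } catch (IOException e) {
            System.out.println("Connection broke: " + e.getMessage());
        }
    }
}
```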
I have two Java netty servers that need to pass lots of messages between themselves fairly frequently and I want it to happen fairly promptly.
I need a TCP socket between the two servers that I can send these messages over.
These messages are already-packed byte[] arrays and are self-contained.
The servers are currently both running HTTP interfaces.
What is the best way to do this?
For example, WebSockets might be a good fit, yet I am unable to find any WebSocket client examples in Netty.
I'm a Netty newbie, so I would need some solid, simple examples. It surely can't be that hard?!
Since you mentioned HTTP, you could look at the HttpStaticFileServer in the examples.
When established, a TCP connection is a Channel. To send your messages, you need to write them to a ChannelBuffer and call channel.write.
Of course, this does not cover message boundaries. The Telnet example shows a case where the messages are delimited by the newline character.
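Since your messages are already-packed byte[] arrays, a length prefix is probably a more natural delimiter than a newline. Here is a rough Netty 3.x-style sketch of a client that frames each byte[] with a 4-byte length field (the port, frame size limit and handler are placeholders, not part of your setup):

```java
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.*;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.LengthFieldPrepender;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class FramedClient {
    public static void main(String[] args) {
        ClientBootstrap bootstrap = new ClientBootstrap(
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                ChannelPipeline p = Channels.pipeline();
                // Prepend a 4-byte length to each outbound frame...
                p.addLast("frameEncoder", new LengthFieldPrepender(4));
                // ...and strip it again on the inbound side.
                p.addLast("frameDecoder",
                        new LengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4));
                p.addLast("handler", new SimpleChannelUpstreamHandler() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                        // e.getMessage() is one complete frame (a ChannelBuffer)
                    }
                });
                return p;
            }
        });

        ChannelFuture f = bootstrap.connect(new InetSocketAddress("localhost", 9000));
        Channel channel = f.awaitUninterruptibly().getChannel();

        byte[] message = {1, 2, 3};                       // an already-packed message
        channel.write(ChannelBuffers.wrappedBuffer(message));
    }
}
```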
I have written a socket program in Java. Both server and client can send/receive data to each other. But I found that if the client sends data to the server using TCP, then TCP internally sends an acknowledgement back to the client once the data is received by the server. I want to detect or handle that acknowledgement. How can I read or write data in TCP so that I can handle the TCP acknowledgement? Thanks.
This is simply not possible, even if you were programming in C directly against the native OS sockets API. One of the points of the sockets API is that it abstracts this away for you.
The sending and receiving of data at the TCP layer doesn't necessarily correlate with your Java calls to send or receive data. The data you send in one Java call may be broken into several pieces which may be buffered, sent and received independently, or even received out of order.
See here for more discussion about this.
Any data sent over a TCP socket is acknowledged in both directions. Data sent from client to server is the same as data sent from server to client as far as TCP wire communications and application signaling. As #Eric mentions, there is no way to get at that signaling.
It may be that you are talking about timing out while waiting for the response from the server. That you'd like to detect if a response is taking too long. Is it possible that the client's message is larger than the server's response so the buffering is getting in the way of the response but not the initial request? Have you tried to use non-blocking sockets?
You might want to take a look at the NIO code if you have not already done so. It has a number of classes that give you more fine grained control over socket communications.
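For example, a minimal non-blocking read loop with a Selector might look like this (host, port, timeout and buffer size are made-up values):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NonBlockingClient {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);
        channel.connect(new InetSocketAddress("server-host", 9000)); // hypothetical endpoint
        channel.register(selector, SelectionKey.OP_CONNECT | SelectionKey.OP_READ);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(5_000);                        // wake up at least every 5 s
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isConnectable()) {
                    channel.finishConnect();               // complete the non-blocking connect
                    key.interestOps(SelectionKey.OP_READ); // from now on, only care about reads
                } else if (key.isReadable()) {
                    buffer.clear();
                    int read = channel.read(buffer);
                    if (read == -1) {
                        return;                            // peer closed the connection
                    }
                    // ... flip() the buffer and process the received bytes ...
                }
            }
        }
    }
}
```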
This is not possible in pure Java, since Java's networking API works only at the socket level and hides all the TCP details.
You would need access to IP-layer data in order to see the TCP headers. DLPI is the most popular API for doing this:
http://www.opengroup.org/onlinepubs/9638599/chap1.htm
Unfortunately, there is no Java implementation of this API, so you would have to use native code through JNI.
I want to detect or handle that acknowledgement.
There is no API for receiving or detecting the ACKs at any level above the protocol stack.
Rethink your requirement. Knowing that the data has got to the server isn't any use to an application. What you want to know is that the peer application has received it, in which case you have to get the peer application to acknowledge at the application protocol level.
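A rough sketch of such an application-level acknowledgement over a plain socket (the "ACK" reply convention and endpoint are invented for illustration; the server would send the reply only after it has actually processed the payload):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class AckingClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server-host", 7000)) {   // hypothetical endpoint
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));

            out.println("some payload");          // send the request
            String reply = in.readLine();         // block until the peer application answers
            if ("ACK".equals(reply)) {
                // Now you know the *application* processed the data,
                // which is stronger than any TCP-level acknowledgement.
            }
        }
    }
}
```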
I have a serial hardware device that I'd like to share with multiple applications, that may reside on different machines within or spanning multiple networks. A key requirement is that the system must support bi-directional communication, such that clients/serial device can exist behind firewalls and/or on different networks and still talk to each other (send and receive) through a central server. Another requirement of the system is that the clients must be able to determine if the gateway/serial device is offline/online.
This serial device is capable of receiving and sending packets to a wireless network. The software that communicates with the serial device is written in Java, and I'd like to keep it a 100% Java solution, if possible.
I am currently looking at XMPP, using Jive software's Openfire server and the Smack API. With this solution, packets coming off the serial device are delivered to clients via XMPP. Similarly, any client application may send packets to the serial device via the Smack API. Packets are just byte arrays, limited in size to around 100 bytes, so they can be converted to hex strings and sent as text in the body of a message. The system should be tolerant of the clients/serial device going offline, meaning they will automatically reconnect when they are available again, but packets will be discarded if the client is offline. The packets must be sent and received in near real-time, so offline delivery is not desired. Security should be provided by the messaging system and its client API.
I'd like to hear of other possible solutions. I thought of using JMS but it seems a bit too heavyweight and I'm not sure it will support the requirement of knowing if clients and/or the gateway/serial device is offline.
Jini might fit the job. It works really well in distributed environments where multicast is available, but it also works with unicast, and is quite fast. Not only does it provide remote services, but also remote events and distributed transactions if you need them. A downside is that it only works with Java.
Where I work, Jini is used in an infrastructure with more than 1000 machines, with each machine providing remote services used to access the devices connected to its serial ports.
You really need to provide a bit more detail... do the clients need guaranteed delivery? What about offline delivery? Is this part of a larger system? Do you need encryption? Security?
If you want the smallest footprint possible, then you should transmit data using ServerSocket, Socket, and serialization. But then you lose all of the advantages of the 3rd-party solutions you mentioned, which typically include reliability, delivery guarantees, security, management, etc.
I would personally use JMS, but that's because I'm familiar with it. There are a number of stand-alone servers that can be deployed out-of-the-box with virtually no configuration. They all provide for guaranteed delivery, some security, encryption, and a number of other easy-to-use features. Coding a JMS publisher or subscriber is pretty easy.
Update:
If you want the most ease in coding, then I would look at the third-party solutions. Looking at Smack/XMPP, the API seems to be a little easier than JMS for the functionality you asked for. You still have to set up and configure a server, etc.
The Smack API also has a lot of extra baggage that you don't need, but its concepts are a little more intuitive since they are all chat/IM concepts.
I would still look at OpenJMS or ActiveMQ. I think knowing JMS will be more valuable in the future as compared to knowing XMPP. Take a look at their Getting Started documentation or the Sun Tutorial to see how much coding is involved. In JMS parlance, you will want an administered "Topic" and a "Queue": the Serial Port App will send messages to the Topic and receive them from the Queue. All of your clients will open a subscription to the Topic and send their outbound messages to the Queue. When you send messages, their delivery mode should be non-persistent.
I ended up using XMPP via the Smack API. What led me to this decision was its native support for presence (is the client online/offline?) and robust connection handling (it automatically reconnects if the underlying connection breaks). Another benefit of XMPP is that it's compatible with Google Talk, so I don't need to set up a server. Thanks for the suggestions. In case anyone is interested, I have released the code on Google Code http://code.google.com/p/xbee-xmpp/
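For anyone curious, the core of sending a hex-encoded packet over XMPP with Smack looks roughly like this (a Smack 3.x-style sketch, not the released code; the server, JIDs and credentials are placeholders):

```java
import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smack.packet.Message;

public class SerialGateway {
    public static void main(String[] args) throws XMPPException {
        // Hypothetical account/host values, for illustration only.
        ConnectionConfiguration config =
                new ConnectionConfiguration("talk.google.com", 5222, "gmail.com");
        XMPPConnection connection = new XMPPConnection(config);
        connection.connect();
        connection.login("gateway@example.com", "password");

        byte[] packet = {0x7E, 0x00, 0x04};       // raw bytes read from the serial device
        StringBuilder hex = new StringBuilder();
        for (byte b : packet) {
            hex.append(String.format("%02X", b & 0xFF));   // hex-encode for the message body
        }

        Message message = new Message("client@example.com", Message.Type.chat);
        message.setBody(hex.toString());
        connection.sendPacket(message);

        connection.disconnect();
    }
}
```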