I'm using Netty to implement a client/server application. I'm also using Gson to send data between the client and server in JSON format and convert it to and from Java POJOs.
The problem is that if the data exceeds a certain size, the message is truncated and cannot be used by the program. So I'm trying to find a more compact format (better than the JSON produced by Gson), or maybe a way to compress the JSON string, so the messages are not truncated.
Any help will be appreciated.
If the protocol you are using is TCP/IP, you have no guarantee that the message you send will arrive in one piece. You should add some framing data to your message that lets the client determine whether it has received the whole message (e.g. you can put the message length at the beginning of the message, or a delimiter at the end of it).
On the client side you should check whether the whole message has arrived, and if not, wait for the rest of it. If you are using Netty on the client side, you should put a frame decoder at the beginning of the channel pipeline (e.g. DelimiterBasedFrameDecoder in the case of a delimiter, LengthFieldBasedFrameDecoder in the case of a length field).
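For example, here is a minimal sketch of length-prefixed framing with Netty 4. The initializer class name and the 10 MB maximum frame size are illustrative only; both client and server would install the matching handlers, and your own Gson handler would sit after them.

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.CharsetUtil;

public class JsonFrameInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          // inbound: wait for the 4-byte length prefix, strip it, and emit one complete frame
          .addLast(new LengthFieldBasedFrameDecoder(10 * 1024 * 1024, 0, 4, 0, 4))
          .addLast(new StringDecoder(CharsetUtil.UTF_8))
          // outbound: prepend the 4-byte length prefix to every frame that is written
          .addLast(new LengthFieldPrepender(4))
          .addLast(new StringEncoder(CharsetUtil.UTF_8));
        // Your own handler that runs the JSON string through Gson goes after these.
    }
}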
Related
I have an application which processes messages in 3 different formats, and I am using a Netty client to receive the messages over a TCP listener.
The issue I am facing is that, for receiving messages over TCP, I have to use a ByteBuf in my decoder class, so the messages are concatenated one after the other and I am not able to split them.
I searched the Internet and found that LineBasedFrameDecoder, DelimiterBasedFrameDecoder or FixedLengthFrameDecoder can be used to resolve this. But my messages have no fixed size, so FixedLengthFrameDecoder is out. I also cannot use LineBasedFrameDecoder, because it splits messages on new lines ('\n' or '\r') and my messages can contain new lines themselves, so it would give me partial messages. Finally, there is no specific delimiter marking the end of my messages, so I can't use DelimiterBasedFrameDecoder either.
Please suggest an approach to resolve this problem.
Also, is there anything I can add to my pipeline for TCP so that my ByteBuf object does not contain a concatenation of messages, and every decode call gives me a single message that I can parse easily, just as with UDP, where each datagram packet carries a single message?
Thanks in advance.
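One possible approach, sketched below under an assumption not present in the question: if you can change the sender to prepend each message with a 4-byte length field, a small ByteToMessageDecoder can hand every decode call exactly one complete message. If you cannot change the sender, the decoder would instead have to recognise your three formats itself.

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

public class LengthPrefixedDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return;                          // length prefix has not fully arrived yet
        }
        in.markReaderIndex();
        int length = in.readInt();           // assumed 4-byte big-endian length prefix
        if (in.readableBytes() < length) {
            in.resetReaderIndex();           // body incomplete: wait for more bytes
            return;
        }
        out.add(in.readRetainedSlice(length)); // exactly one complete message per list element
    }
}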
This is actually two questions in one, but they are closely related.
I have read plenty of times that you have to send either JSON/string or binary data over a websocket like socket.io, but that you cannot mix these types. But then I was puzzled to find the following example in the official documentation of the socket.io Java client implementation:
// Sending an object
JSONObject obj = new JSONObject();
obj.put("hello", "server");
obj.put("binary", new byte[42]);
socket.emit("foo", obj);

// Receiving an object
socket.on("foo", new Emitter.Listener() {
    @Override
    public void call(Object... args) {
        JSONObject obj = (JSONObject) args[0];
    }
});
where the "binary" element of that json is clearly binary as the name suggests. The documentation talks about socket.io using org.json, but i couldnt find that this library supports adding binary data to json files anywhere.
is this functionality now supported? if so, what is socket.io doing in the background? is it splitting the emit in two separate messages and then remerging it? or is it simply saving the binary data in base64?
A bit of background.
I am trying to add private chat functionality to my app so that a user can have multiple private, two-party, audio-message based chat conversations with several other users. I am having trouble working out how to tell my server where to forward each message. If I use JSON, I can simply add a sender and a receiver to it and have the server read the receiver's id and forward the message accordingly. But I am not sure how to handle messages containing only binary data: I have no idea how to add metadata (such as a sender and a receiver id) to them so that the server knows to whom they are addressed. I have heard the suggestion of sending a JSON message with a sender id, a receiver id and an MD5 hash of the file I am trying to send, then sending the binary data alone separately and having the server match the two messages via the MD5 signature, but that seems to come with problems of its own. I don't know how calculating the MD5 of a ton of audio files on the server will affect performance, and there is also the issue of potentially receiving the audio byte array before the JSON specifying its destination has arrived.
There is always the alternative of encoding my audio files in base64 and sending them as JSON, as I have been doing so far, but I have been told this is bad practice and should be avoided, as it inflates message sizes.
I feel like there are already a bunch of messaging apps out there, and I bet at least some are based on websockets. I would like to know whether there are any best practices on how to route binary data over a websocket to a specific receiving connection.
I'll upvote any answer to the questions above, as well as any hint on how to tackle the problem mentioned in the background part.
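For what it's worth, here is a rough sketch of the routing-metadata idea from the background, using the socket.io Java client. The "private_audio" event name, the field names, and the server URL are made up for illustration; the server side (which would read "to" and forward the payload to that user's connection) is not shown.

import io.socket.client.IO;
import io.socket.client.Socket;
import org.json.JSONObject;

public class AudioChatClient {
    public static void main(String[] args) throws Exception {
        Socket socket = IO.socket("http://localhost:3000");   // example server URL
        socket.connect();

        JSONObject msg = new JSONObject();
        msg.put("from", "user-123");      // sender id (illustrative)
        msg.put("to", "user-456");        // receiver id the server routes on
        msg.put("audio", new byte[42]);   // raw audio bytes travel as a binary attachment
        socket.emit("private_audio", msg);
    }
}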
I am trying to implement a client-server application in Java. I have made the connection between them with sockets, and I am sending JSON objects as strings over the streams.
If I have a big object, is there a way to handle it so that I don't have to regroup it because of the size limit of a TCP packet? (I can't tell when a single object has been fully transferred to me as the client or not yet.)
Note: I am using Gson to convert objects to/from JSON.
If I have a big object, is there a way to handle it so I don't have to regroup it because of the size limit of a TCP packet? (I can't tell when the single object has been fully transferred to me as the client or not yet.)
Actually, the client can know when it has received a complete JSON object. When your client sees the } that matches the opening {, you have the complete object. Of course, this means that your client needs to understand JSON syntax, but you can use an off-the-shelf JSON parser to do that.
So the best way to do this is for the server to generate and send the JSON, and the client to parse the socket input stream using a normal JSON parser. If you do it that way, then you don't need to know whether the TCP/IP stack has broken the data stream into multiple packets. By the time the JSON parser sees them, they will have been reassembled into a stream of bytes.
If this doesn't answer your question, we need to see what your code is currently doing to generate and send the JSON on the server side.
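As a concrete illustration (not necessarily the poster's exact setup), here is a minimal sketch that reads consecutive JSON objects straight off a socket with Gson's JsonStreamParser; the Message POJO, host, and port are hypothetical.

import com.google.gson.Gson;
import com.google.gson.JsonElement;
import com.google.gson.JsonStreamParser;

import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class JsonSocketClient {
    // Hypothetical POJO; substitute your own Gson-mapped class.
    static class Message {
        String text;
    }

    public static void main(String[] args) throws Exception {
        Gson gson = new Gson();
        try (Socket socket = new Socket("localhost", 9000)) {   // example host/port
            // JsonStreamParser hands back one complete top-level JSON value at a time,
            // no matter how TCP split the underlying bytes into packets.
            JsonStreamParser parser = new JsonStreamParser(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
            while (parser.hasNext()) {
                JsonElement element = parser.next();
                Message msg = gson.fromJson(element, Message.class);
                System.out.println("Received: " + msg.text);
            }
        }
    }
}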
Is there a way to handle it so I don't have to regroup it because of the size limit of a TCP packet?
You don't have to care about the size of TCP packets. Just write the data. TCP will segment and packetize it for you.
(I can't tell when the single object has been fully transferred to me as the client or not yet.)
Yes, you can: you reach the closing '}', as @StephenC mentions. Your JSON parser should take care of that for you in any case.
Your question is founded on false assumptions.
I have a question related to Camel and JMS messages.
My system contains a JMS topic and a JMS queue, say, TopicInput and QueueInput. A process of mine listens to QueueInput and processes the messages sent into this queue. The result is then passed to another topic, say, TopicOutput.
The process that handles the messages uses Java and Apache Camel. The response my Camel route sends out is a String, so a String is sent to TopicOutput.
My problem is that when I send my message to QueueInput directly, everything is fine: I get a String response from TopicOutput. However, if I send the request message to TopicInput, which internally bridges to QueueInput anyway, the result I get from TopicOutput is a byte array representation of the String.
Does anyone know how this could happen? I am not even sure whether this is a Camel problem or a JMS problem.
Any suggestions or hints will be helpful.
Many thanks.
Not quite sure what's going on exactly in your logic.
JMS has BytesMessage and TextMessage. To get a String directly, the message has to be a TextMessage; otherwise a String must be constructed from a byte array, which you can retrieve from the message.
When sending messages with Camel, Camel tries to map the payload to the most suitable JMS message type; the mapping table in the Camel JMS component documentation shows which payload types produce which message types.
To be sure you always produce a TextMessage (which arrives as a String), convert the payload to a String before sending it with the JMS producer. Make sure you know what the message type and payload are at every step of your flow, and you should be able to solve this easily.
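A minimal sketch of what that could look like in a Camel route, assuming the queue/topic names from the question and a JMS component registered as "jms":

import org.apache.camel.builder.RouteBuilder;

public class InputToOutputRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:QueueInput")
            // ... whatever processing produces the response ...
            .convertBodyTo(String.class)     // a String body maps to a JMS TextMessage
            .to("jms:topic:TopicOutput");
    }
}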
I am trying to send some very large files (>200MB) through an HTTP output stream from a Java client to a servlet running in Tomcat.
My protocol currently packages the file contents in a byte[], which is placed in a Map<String, Object> along with some metadata (filename, etc.), each part under a "standard" key ("FILENAME" -> "Foo", "CONTENTS" -> byte[], "USERID" -> 1234, etc.). The Map is written to the URL connection's output stream (urlConnection.getOutputStream()). This works well when the file contents are small (<25MB), but I am running into Tomcat memory issues (OutOfMemoryError) when the file size is very large.
I thought of sending the metadata Map first, followed by the file contents, and finally by a checksum on the file data. The receiver servlet can then read the metadata from its input stream, then read bytes until the entire file is finished, finally followed by reading the checksum.
Would it be better to send the metadata in connection headers? If so, how? If I send the metadata down the socket first, followed by the file contents, is there some kind of standard protocol for doing this?
You will almost certainly want to use a multipart POST to send the data to the server. Then on the server you can use something like commons-fileupload to process the upload.
The good thing about commons-fileupload is that it understands that the server may not have enough memory to buffer large files and will automatically stream the uploaded data to disk once it exceeds a certain size, which is quite helpful in avoiding OutOfMemoryError type problems.
Otherwise you are going to have to implement something comparable yourself. It doesn't really make much difference how you package and send your data, so long as the server can 1) parse the upload and 2) redirect data to a file so that it doesn't ever have to buffer the entire request in memory at once. As mentioned both of these come free if you use commons-fileupload, so that's definitely what I'd recommend.
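For reference, here is a sketch of the servlet side using commons-fileupload's streaming API (FileUpload 1.x with javax.servlet; the /tmp target directory is just an example). The file part is copied straight to disk instead of being buffered whole in memory.

import org.apache.commons.fileupload.FileItemIterator;
import org.apache.commons.fileupload.FileItemStream;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import org.apache.commons.fileupload.util.Streams;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            ServletFileUpload upload = new ServletFileUpload();   // streaming API: no factory needed
            FileItemIterator items = upload.getItemIterator(req);
            while (items.hasNext()) {
                FileItemStream item = items.next();
                try (InputStream in = item.openStream()) {
                    if (item.isFormField()) {
                        // metadata fields such as FILENAME or USERID
                        String value = Streams.asString(in);
                        // ... keep it, e.g. in a Map keyed by item.getFieldName() ...
                    } else {
                        // file content: streamed straight to disk, never buffered whole in memory
                        try (OutputStream out = new FileOutputStream("/tmp/" + item.getName())) {
                            Streams.copy(in, out, false);
                        }
                    }
                }
            }
        } catch (FileUploadException e) {
            throw new ServletException(e);
        }
    }
}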
I don't have a direct answer for you but you might consider using FTP instead. Apache Mina provides FTPLets, essentially servlets that respond to FTP events (see http://mina.apache.org/ftpserver/ftplet.html for details).
This would allow you to push your data in any format without requiring the receiving end to accommodate the entire data in memory.
Regards.