Mechanism to prevent "flood" for a websocket - java

I am writing a web application that makes intense use of websockets (standard JSR implementation).
Through the websocket I exchange information.
A Client sends a request (JSON) to the Server; the Server decodes the message and sends some info back (JSON).
How can I prevent a malicious client from flooding my Server with requests? For example, I want to limit the number of requests to 10 every 5 seconds (by "request" I mean the JSON message the client sends in order to get the information).
Is there a built-in way of doing this, or do I have to write my own mechanism?
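As far as I know there is no built-in rate limiting in the standard JSR websocket API, so a per-session check is the usual route. A minimal sketch of a sliding-window limiter (the "/info" path, the close behavior and the response text are illustrative):

    import java.util.ArrayDeque;
    import java.util.Deque;

    import javax.websocket.CloseReason;
    import javax.websocket.OnMessage;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // Sliding-window limiter: at most 10 messages per 5 seconds per session.
    @ServerEndpoint("/info")
    public class ThrottledEndpoint {

        private static final int MAX_REQUESTS = 10;
        private static final long WINDOW_MILLIS = 5_000;

        // JSR-356 creates one endpoint instance per connection by default,
        // so this deque tracks a single client's message timestamps.
        private final Deque<Long> timestamps = new ArrayDeque<>();

        @OnMessage
        public void onMessage(String json, Session session) throws Exception {
            long now = System.currentTimeMillis();
            // Discard timestamps that have fallen out of the 5-second window.
            while (!timestamps.isEmpty() && now - timestamps.peekFirst() > WINDOW_MILLIS) {
                timestamps.pollFirst();
            }
            if (timestamps.size() >= MAX_REQUESTS) {
                // Flooding: either silently drop the message or close the session.
                session.close(new CloseReason(
                        CloseReason.CloseCodes.VIOLATED_POLICY, "Rate limit exceeded"));
                return;
            }
            timestamps.addLast(now);
            // ...decode the JSON request and answer it as usual...
            session.getBasicRemote().sendText("{\"status\":\"ok\"}");
        }
    }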

Related

Why use websocket and what is the advantage of using it?

I tried reading some articles, but I'm still not clear on this topic.
Would someone explain the points below:
Why use websocket over http
what is full duplex communication
what do you mean by lower latency interaction
Why use websocket over http?
A webSocket is a continuous connection between client and server. That continuous connection allows the following:
Data can be sent from server to client at any time, without the client even requesting it. This is often called server-push and is very valuable for applications where the client needs to know fairly quickly when something happens on the server (like a new chat message has been received or a price has been updated). Data cannot be pushed to a client over plain http. The client would have to regularly poll by making an http request every few seconds in order to get timely new data. Client polling is not efficient.
Data can be sent either way very efficiently. Because the connection is already established and a webSocket data frame is very efficiently organized (mostly 6 extra bytes, 2 bytes for header and 4 bytes for Mask), one can send data a lot more efficiently than via an HTTP request that necessarily contains headers, cookies, etc.
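To make server-push concrete, here is a minimal sketch using the standard JSR API (the "/prices" path and the pushed text are illustrative):

    import java.io.IOException;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.websocket.OnClose;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // The server keeps the open sessions and can write to them at any time,
    // without waiting for a client request.
    @ServerEndpoint("/prices")
    public class PushEndpoint {

        private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();

        @OnOpen
        public void onOpen(Session session) { SESSIONS.add(session); }

        @OnClose
        public void onClose(Session session) { SESSIONS.remove(session); }

        // Called from server-side code, e.g. when a price is updated.
        public static void pushToAll(String message) {
            for (Session s : SESSIONS) {
                if (s.isOpen()) {
                    try {
                        s.getBasicRemote().sendText(message);
                    } catch (IOException e) {
                        // Broken connection; @OnClose will remove it.
                    }
                }
            }
        }
    }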
what is full duplex communication?
Full duplex means that data can be sent either way on the connection at any time.
what do you mean by lower latency interaction?
Low latency means that there is very little delay between the time you request something and the time you get a response. As it applies to webSockets, it just means that data can be sent quicker (particularly over slow links) because the connection has already been established so no extra packet roundtrips are required to establish the TCP connection.
For a comparison of what's involved in sending some data via an http request vs. an already established webSocket connection, see the steps listed in this answer: websocket vs rest API for real time data?
These other references may also be useful:
Server-push whenever a function is called: Ajax or WebSockets
For a push notification, is a websocket mandatory?
HTML5 WebSocket: A Quantum Leap in Scalability for the Web

REST Server to client communication

I'm developing a Java API for an Android app in Spring. Right now my API is 100% REST and stateless. For the client to receive data, it must send a request first.
However, what I need is for the server to send data to the client (not the client to the server first) whenever it is ready with its task.
I think that some kind of session must be created between the two parties.
My question is: How can I achieve this functionality of the SERVER sending data to the CLIENT when it's ready with its task? (It is unknown how long the task will take.)
What kind of API should I develop for this purpose?
One idiotic workaround is sending a request to the server every n seconds, but I'm looking for a more intelligent approach.
There are multiple options available. You can choose what suits you best.
Http Long Polling - In this, the server holds the request open until it's ready with its task (in your case). Here, you don't have to make multiple requests every few seconds (which would be plain Http Polling).
Server Sent Events - In this, the server sends updates to the client without long-polling. It is a standardized part of HTML5 - https://www.w3.org/TR/eventsource/
Websockets - Well, websockets work in full-duplex mode over a persistent TCP connection. Once the TCP connection is established, both server and client can send data to and fro. Supported by most modern browsers. For Android you can check websocket libraries like autobahn and Java websocket.
SockJs - I would recommend going with this option instead of plain WebSocket; see the configuration sketch after this list. http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#websocket-fallback-sockjs-enable
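Since you are on Spring already, a minimal sketch of that SockJS option (the "/tasks" path and the handler are illustrative):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.socket.WebSocketSession;
    import org.springframework.web.socket.config.annotation.EnableWebSocket;
    import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
    import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;
    import org.springframework.web.socket.handler.TextWebSocketHandler;

    // WebSocket endpoint with SockJS fallback: clients that cannot use raw
    // WebSockets fall back to HTTP-based transports automatically.
    @Configuration
    @EnableWebSocket
    public class WebSocketConfig implements WebSocketConfigurer {

        @Override
        public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
            registry.addHandler(new TaskHandler(), "/tasks").withSockJS();
        }

        static class TaskHandler extends TextWebSocketHandler {
            @Override
            public void afterConnectionEstablished(WebSocketSession session) throws Exception {
                // Keep the session; when the server-side task finishes, push the
                // result with session.sendMessage(new TextMessage(resultJson)).
            }
        }
    }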

Sending SOAP http response in one package

We have a Java web service with document style and http protocol. Locally this service works smoothly and fast (~6 ms). But calling the service methods remotely takes over 200 ms.
One main reason for this delay is that:
the server first sends the response http header,
the client sends an ACK in return, and
then the server sends the response http body.
This second step, where the client sends the ACK, costs the most time, almost the whole 200 ms. I would like to avoid this step and save the time.
Hence my question: is it possible to send the whole response in one package? And how and where do I configure that?
Thanks for any advice.
I'm not fully understanding the question.
Why is the server sending the first message? Shouldn't the client be requesting the web service via HTTP initially?
From what I understand, SOAP requests are wrapped within an http message. HTTP messages assume a TCP connection and require a response. This suggests that a client must respond when the server sends an http header.
Basically, whatever one end sends to the other, the other end must acknowledge. The ACK from your step 2 will always be present.
EDIT:
I think the reason for the difference in time when requesting via local and remote is simply the routing that happens in the real network versus on your local machine. It's not the number of steps taken in your SOAP request and response.

Implementing a protocol exchanging four messages within a single persistent connection

I have a complete implementation of a protocol where four messages are exchanged between the client (a native Android application) and the server (a standalone Java server) in the following way using a persistent connection through Java sockets:
(client->server): message1
(server->client): message2
(client->server): message3
(server->client): message4
Between sending each message, both client and server have to do heavy mathematical (cryptographic) operations (pairing-based computations over elliptic curves).
This project works properly on my local development machine, using sockets and keeping the socket open from message1 to message4 between the Android app and the Java server. Now I need to do the same with Google AppEngine, but since it does not allow opening sockets, I do not know how I can do it. I already checked the Channel and XMPP APIs, but I do not know whether my use case applies to those APIs. Is using the Channel and XMPP APIs from AppEngine the right method? Is it possible to emulate the functionality implemented on my local machine through sockets on AppEngine?
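For reference, the current socket-based exchange looks roughly like this minimal sketch (host, port, the newline framing and the placeholder computations are illustrative):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // The whole protocol runs over one persistent socket.
    public class ProtocolClient {

        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("example.com", 9000);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {

                out.println(computeMessage1());          // (client->server): message1
                String message2 = in.readLine();         // (server->client): message2

                out.println(computeMessage3(message2));  // (client->server): message3
                String message4 = in.readLine();         // (server->client): message4
                // ...verify message4; the heavy pairing-based computations
                // happen inside the compute* methods and on the server.
            }
        }

        // Hypothetical placeholders for the cryptographic steps.
        private static String computeMessage1() { return "message1"; }
        private static String computeMessage3(String m2) { return "message3"; }
    }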
Thank you for your response.
Google App Engine doesn't support persistent connections.
You will need to significantly re-design your protocol to run over HTTP.
As an example, message1 can be sent over an HTTP request, and message2 can be returned with the matching HTTP response. At that point, your socket connection ends.
You'll have to issue a second HTTP request to open a new socket with message3, and you can return message4 with the second HTTP response.
You can "connect" the first and second HTTP request by using an HTTP session. A session is basically an id with extra data stored on the server side. You'd create the session in the first HTTP request, and pass it as a parameter to the second HTTP request. The server can look up the session id and the associated data when processing the second request.
You can find more info about sessions on SO: How to use session on Google app engine
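A minimal sketch of that two-request redesign with plain servlets (the step parameter, the attribute name and the crypto helpers are hypothetical placeholders):

    import java.io.IOException;

    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // message1/message2 travel in the first HTTP exchange, message3/message4
    // in the second; the session carries the crypto state in between.
    @WebServlet("/protocol")
    public class ProtocolServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            HttpSession session = req.getSession(true);

            if ("1".equals(req.getParameter("step"))) {
                String message1 = req.getParameter("payload");
                Object state = computeAfterMessage1(message1); // heavy crypto here
                session.setAttribute("cryptoState", state);
                resp.getWriter().write(buildMessage2(state));
            } else {
                Object state = session.getAttribute("cryptoState");
                String message3 = req.getParameter("payload");
                resp.getWriter().write(buildMessage4(state, message3));
                session.invalidate(); // protocol finished
            }
        }

        // Hypothetical placeholders for the pairing-based computations.
        private Object computeAfterMessage1(String m1) { return m1; }
        private String buildMessage2(Object state) { return "message2"; }
        private String buildMessage4(Object state, String m3) { return "message4"; }
    }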
The XMPP API will not help you; it's for communicating between the GAE server-side code and other XMPP clients that use HTTP as a communication protocol.
The Channel API can be used to send data from the server->client, but it's actually implemented as an HTTP long poll. Multiple long HTTP requests are used, and you are not guaranteed to have a single socket that stays open; multiple sockets are opened and closed in the process. It will be more complicated than what I described above, and more expensive.

Request-reply model for hybrid SOAP over HTTP/JMS over middleware

One of our products implements the following one-way web service structure:
Server <-- SOAP over JMS (queue) -- Middleware <-- SOAP over HTTP -- Client
In this model, clients send SOAP messages over HTTP to our middleware (Progress SonicMQ). The messages get pushed into JMS queues by SonicMQ and our server fetches them from there. However, as you can see, the server does not send a response to the client (asynchronous JMS).
We would like to implement a response channel for this model. The often-suggested solution is to create a temporary replyTo-queue (on the fly) in the middleware, allowing the server to send a response to that queue. Then, the client can fetch the response and the replyTo-queue is closed. This sounds convenient enough, but unfortunately our clients operate over plain HTTP and not over JMS, so they cannot easily set up replyTo-queues.
One approach to achieving a response channel in such a hybrid HTTP/JMS SOAP model would be to configure the middleware to open the replyTo-queue on each successful SOAP receive, append the replyTo-queue and sender information to the SOAP message, and push the message to the queue, where it would be fetched by the server. After receiving and processing the message, the server could send a response to the indicated replyTo-queue in the middleware. Finally, the middleware would send the response (SOAP) over HTTP back to the original client, using the data from the SOAP message (the data that was inserted there in the middleware when the request was first received).
While probably possible, this sounds kind of hacky. So the question is: are there any cleaner ways of achieving such a request/response model in our case? The server end has been implemented in Java.
The solution:
Progress SonicMQ supports a "Content Reply Send" HTTP Acceptor, which makes it easy to send a JMS reply. The Content Reply Send acceptor works in the following way:
Acceptor receives the HTTP message a client sent
Acceptor creates a temporary JMS queue
Acceptor builds up a JMS message, containing the HTTP body, and adds the temporary queue's identification to the newly created JMS message
Acceptor pushes the JMS message into its destination queue (not the temporary queue)
Acceptor starts consuming the temporary reply-To queue
When the client fetches the message from the original destination queue, it contains the reply-To queue identification
Client consumes message
Client sends reply to the reply-To queue
Acceptor receives message from the queue
Acceptor sends message as HTTP to the client that originally sent the HTTP message
Should the consumer ("server" in our case) fail to send a reply, causing a timeout, Sonic's HTTP Acceptor sends an HTTP message to the client indicating the timeout. This is a very standard feature in SonicMQ. I suppose it exists in other products as well.
This allows using standard SOAP over JMS (see skaffman's answer) on the "server" end and avoids any custom programming in the middleware.
I still see some problems in the JMS model though, but this is definitely an improvement.
Update 2009-11-05:
After researching this issue some more, it turns out my suspicion about HTTP<-->middleware<-->JMS was justified.
There are a few critical problems in this model. A synchronous-asynchronous model with middleware simply isn't convenient. Either have both ends implement a JMS connection (which should rock) or go with HTTP on both ends. Mixing them results only in headaches. Of these two, SOAP-over-HTTP is simpler and better supported than SOAP-over-JMS.
Once more: if you are designing this kind of a system... DON'T.
I don't think your suggested solution is a hack at all; I think that's the right solution. You have the client-middle layer with a synchronous protocol, and then the middle-server layer using an asynchronous protocol, to which you have to add a reply path in order to satisfy the synchronous semantics. That's what middleware is for. Remember that JMS provides explicit support for temporary reply-to queues, so you won't need to mess with the payload at all.
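On the server end, that reply-to support is just a header on the request message. A minimal sketch of the consumer side (the queue name, connection factory and processing step are illustrative, and this assumes a JMS 2.0 client library for try-with-resources on the connection):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    // Consume a request and reply to whatever destination the middleware
    // put into the JMSReplyTo header (the temporary queue).
    public class RequestReplyConsumer {

        public void serveOne(ConnectionFactory factory) throws Exception {
            try (Connection connection = factory.createConnection()) {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer =
                        session.createConsumer(session.createQueue("soap.requests"));

                Message request = consumer.receive();
                String soapResponse = process(((TextMessage) request).getText());

                // Send the reply to the temporary queue, correlated to the request.
                Destination replyTo = request.getJMSReplyTo();
                TextMessage reply = session.createTextMessage(soapResponse);
                reply.setJMSCorrelationID(request.getJMSMessageID());
                session.createProducer(replyTo).send(reply);
            }
        }

        // Hypothetical placeholder for the actual SOAP processing.
        private String process(String soapRequest) { return soapRequest; }
    }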
A more left-field possibility is to leverage the fact that SOAP 1.2 was designed with JMS in mind, so you could use a web service layer between the middleware and server layers that does SOAP-over-JMS. That means you can keep SOAP end-to-end, with the middleware changing only the transport.
The only web service stack I know of that supports JMS transport is Spring Web Services, where the process and development are documented here. This would also give you the opportunity to port your SOAP layer to Spring-WS, which kicks ass :)
Why not add a link to a page that lets users check to see when a response is ready, a la a Fed Ex tracker ID? Give your users the tracker ID when they send the request.
This would fit into the HTTP request/response idiom, and your users would still know that the request is "fire and forget".
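A minimal sketch of that tracker-ID idea as a Spring REST controller (the paths, the in-memory result map and the placeholder task are illustrative):

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RestController;

    // POST returns a tracker id immediately; the client polls GET /tasks/{id}
    // until the result is ready, fitting the HTTP request/response idiom.
    @RestController
    public class TaskTrackerController {

        private final Map<String, String> results = new ConcurrentHashMap<>();

        @PostMapping("/tasks")
        public String submit() {
            String id = UUID.randomUUID().toString();
            // Kick off the long-running task; store the result when it finishes.
            new Thread(() -> results.put(id, runTask())).start();
            return id;
        }

        @GetMapping("/tasks/{id}")
        public String status(@PathVariable String id) {
            return results.getOrDefault(id, "PENDING");
        }

        // Hypothetical placeholder for the long-running work.
        private String runTask() { return "DONE"; }
    }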
