WebSocket app architecture - Java

Let's consider an application using WebSockets that can be divided into several independent modules. The simplest example would be a chat application where the client app can join/connect to several chat rooms at once (each chat room is independent of the others). What is the preferred approach to organizing the connections while developing such an application?
Open a new WebSocket connection in the client for each chat room. This way you'll have multiple instances of javax.websocket.server.ServerEndpoint on the server side, each with a different URL. Both the server and client apps will thus be a little less complex and can be separated into functional (reusable) blocks. The drawback is that the client will have to keep multiple connections open at once. In my case we're talking about up to ten at a time at most.
Open one WebSocket connection and multiplex the messages to chat rooms underneath, e.g. by a chat-room id field in the messages. Not a big deal to implement; it will make the app a little more complex, but is it worth it?
What is the preferred approach?

This is not easy to answer in general, since it depends on your specific setup. However, here are my thoughts on this:
I think option 2 is the better approach, because open connections are really a limited resource on many web servers. Remember that a WebSocket connection is different from a regular HTTP request and stays open for a long time. The additional complexity of the multiplexing protocol is really not an issue, I think. All implementations of WebSocket communication protocols that I know of use the latter approach, although I must admit I don't know many examples.
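In case it helps, here is a minimal sketch of option 2 built on a single javax.websocket.server.ServerEndpoint. The pipe-delimited wire format (JOIN/MSG plus a room id) is purely illustrative; a real app would more likely use JSON with an encoder/decoder:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/chat")
public class MultiplexedChatEndpoint {

    // roomId -> sessions subscribed to that room (static: shared across endpoint instances)
    private static final Map<String, Set<Session>> ROOMS = new ConcurrentHashMap<>();

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // Illustrative wire format: "JOIN|roomId" or "MSG|roomId|text"
        String[] parts = message.split("\\|", 3);
        if (parts.length < 2) {
            return; // ignore malformed frames in this sketch
        }
        String command = parts[0];
        String roomId = parts[1];

        if ("JOIN".equals(command)) {
            ROOMS.computeIfAbsent(roomId, id -> new CopyOnWriteArraySet<>()).add(session);
        } else if ("MSG".equals(command) && parts.length == 3) {
            for (Session member : ROOMS.getOrDefault(roomId, Collections.emptySet())) {
                if (member.isOpen()) {
                    member.getBasicRemote().sendText(roomId + "|" + parts[2]);
                }
            }
        }
    }

    @OnClose
    public void onClose(Session session) {
        ROOMS.values().forEach(members -> members.remove(session));
    }
}
```

Joining or leaving a room only touches the shared map, so adding more rooms never costs the client another connection.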

Related

What is the best way to implement a server-side push app for Android?

I'm developing an Android app that requires 2 (or more) devices to communicate with each other.
I tried using Google Cloud Messaging, but I was disappointed to find out that GCM's maximum capacity is 100 messages, which breaks my use case and does not fit my requirements.
I was thinking about java sockets. Every device will open a new socket (or keep its socket open) and communicate with a group of sockets (devices).
In order to communicate this way I need a server-side app that can send messages to the client (Android device). So I figured that HTTP or a web service won't help me. Am I right?
What is the best way for me to implement such a server-side app?
You can refer to this question I previously asked and implemented. It was for implementing my own notification mechanism, but it equally (or even more) applies to chat applications, since message queues fit that use case perfectly.
Building an Android notification server
I ended up not doing it and using GCM in the end, but I did have a fully working solution using ActiveMQ and Paho. You can research them both and understand their inner workings. It's easy in principle and definitely possible, but the problem is that you may not be able to do this on iOS or Windows Phone, since it requires running a background service (for the case where your app is not open and you want to make sure the messages are at least surfaced as a notification).
A possible solution to that problem would be to use the notification service (GCM or equivalent) for background notifications and your MQ for the actual communication, but I decided that was too much for my project.
If you look at Paho, it has a fully working MQTT solution that works even if the phone is not "online" (sleeping or otherwise), and there are plenty of samples for ActiveMQ and drivers for multiple programming languages.
I think this solution is much better than having open sockets between two apps, not least because message queues allow you to persist messages and guarantee delivery, which is important for a chat application.
As kha said, choosing one of the message queue protocols is the best solution. Three reasons in brief:
Delivery is guaranteed, regardless of the device being temporarily offline or experiencing long latency.
It's as simple as subscribe/publish; you no longer need to worry about the transport layer.
Brokers are available online, so you save the time and money of setting up your own.
For mobile devices, as in your case, I'd prioritize MQTT too. It's lightweight and stable. If you are totally new to message queues or MQTT, refer to this documentation and example code.
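To give a feel for how little code the subscribe/publish model takes, here is a rough sketch using the Eclipse Paho Java client (the broker URL and topic are placeholders, and on Android you would normally use the Paho Android service rather than a blocking main method):

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class MqttChatSketch {

    public static void main(String[] args) throws MqttException {
        // Placeholder broker URL; in production this would be your own or a hosted broker.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                MqttClient.generateClientId(), new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(false); // keep subscriptions and queued messages across reconnects
        client.connect(options);

        // The broker pushes messages for this topic to us as they arrive.
        client.subscribe("chat/room-42", 1, (topic, message) ->
                System.out.println(topic + ": " + new String(message.getPayload())));

        // Publish with QoS 1 (at-least-once delivery).
        MqttMessage msg = new MqttMessage("hello".getBytes());
        msg.setQos(1);
        client.publish("chat/room-42", msg);
    }
}
```

QoS 1 plus a non-clean session is what gives you the delivery guarantee mentioned above, even if the subscriber is temporarily offline.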

Realtime data transport architecture

I have a concept for a game system that includes (preferably) a Java server and multi-platform clients (Web, Android, iOS).
This is a 1vs1 player-vs-player realtime game. The server performs matchmaking of two players, so basically the server needs to handle many matches, each containing two players. Both players alter the same data, and each player should be updated in real time with the actions of the other player.
Can you suggest me:
1) A server-side framework/library that would ease the implementation, as I would rather not start learning node.js from scratch. :) Vert.x comes to mind.
2) Should clients hold a replica of the data and alter it locally (meaning the only data transferred are commands; here I see JMS as a good solution), or should only the server alter the data and then send the complete data set every time a change occurs?
3) How should the data be transferred? Considering the multi-platform requirement, the only thing I see as viable is WebSockets.
4) An example/tutorial of a server handling pairing of WebSocket connections? All I have ever found are 1-to-1 connections.
5) Considering scalability, can you explain how all this could work in a distributed environment?
1) I don't think node.js is such a big deal to learn. I would personally prefer a well-known, broadly used framework.
2) If you are considering mobile, the first option probably seems more sound. You should consider sending/pushing deltas during the game, and still provide functionality to retrieve the full state of the game in case the client disconnects and reconnects with the same ID.
3) WebSocket would be the best option: a push approach, a TLS option, and well supported. Another option is a WebRTC data channel, which is peer-to-peer most of the time. I say most of the time because if one of the users is behind a restrictive NAT router or firewall it won't be possible, and you will need a TURN (relay) server. In any case, it is less widely supported than WebSockets.
4) You should not "pair WebSockets". The WS connections just feed commands into your logic, and your logic broadcasts events to whomever it wants (see the sketch after this list). Despite it being a 1vs1 game, you probably want to inspect the flow of events for later debugging or analysis. So treat WS as a transport, not as an entity.
5) A very, very, very broad question. But assuming that you are going to use WS, and that your application will be so successful that you will need multiple servers: assume that you cannot count on two matched users connecting to the same server, so you should consider a message bus that allows users on one server to play with users on another server. An EDA (Event-Driven Architecture) would make sense.
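To illustrate point 4, here is a rough sketch of that separation; MatchRegistry and its methods are made-up names for this example, and a WebSocket endpoint's message handler would simply call join/broadcast after applying a player's command to the game state:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

import javax.websocket.Session;

// Game-logic side: WebSocket sessions are just transports that commands arrive on
// and that events are broadcast to. Nothing here "pairs" two sockets directly.
public class MatchRegistry {

    private static final Map<String, Set<Session>> MATCHES = new ConcurrentHashMap<>();

    public static void join(String matchId, Session session) {
        MATCHES.computeIfAbsent(matchId, id -> new CopyOnWriteArraySet<>()).add(session);
    }

    public static void leave(String matchId, Session session) {
        MATCHES.getOrDefault(matchId, Collections.emptySet()).remove(session);
    }

    // Called by the game logic after a command has been applied; every event could
    // also be logged here for the debugging/analysis mentioned above.
    public static void broadcast(String matchId, String event) {
        for (Session s : MATCHES.getOrDefault(matchId, Collections.emptySet())) {
            if (s.isOpen()) {
                s.getAsyncRemote().sendText(event); // async send: one slow client can't block the other
            }
        }
    }
}
```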

Android multi-player game over the network

I'm programming an Android multi-player game, which basically consists of a server to which the clients connect and exchange messages. When a player connects to the server, a player list is returned to him/her. A player can then select a user to challenge; of course, he must select a player from the player list, which only contains connected users.
When player1 challenges player2, a message needs to be transmitted from player1 to the server, which in turn must send a message to player2 notifying them of the challenge. Player2 can then accept/decline the challenge.
I can use the following techniques to make this happen:
Use a custom server/client with Java socket programming. The server basically accepts a connection from the client, spawning a new thread for each connected client. The problems with this are:
There needs to be a persistent connection open from the client to the server, wasting battery life on the Android phone. This is not really a big limitation, since the battery isn't consumed that much.
When I want to develop another game, I'll have to rewrite the client/server code from scratch, and also choose another port to listen on for incoming connections; the whole concept gets rather difficult to maintain.
I'm also not sure whether this is the way to do it. Spawning another thread for each client sounds like quite a lot if thousands of clients are connecting at the same time. But I'm guessing PC games do it like this. Not sure about Android.
Use Java REST (Jersey) to build the client-server communication on top of HTTP. This would be a perfect solution if the server could easily send notifications to clients. There are actually multiple design decisions here:
the client polls the server for any new data/notifications every few seconds; this is really bad, since we're stuck with unresponsiveness, delay, etc.
the client can send a waiting (long-polling) request to the server, so the client receives the response only once some data becomes available. This is better, but can still produce a delay when two notifications need to be sent to the user one after another. The first notification is sent instantly, since the client already has a connection open waiting for data. But we then have to wait for the client to initiate another long HTTP request to receive the second notification. The problem grows when multiple notifications need to be sent in a row to a specific client.
the client can initiate HTTP streaming, where the connection is left open after the request is handled, so the server can send multiple messages to the client whenever it wishes. The problem here is that I don't know how well this works on Android. I've looked at several implementations:
Java Jersey + Atmosphere: I didn't succeed in actually making it work. It seems the most promising, but I don't want to spend too much time on it, since I'm not even sure it does what I want.
Deacon: seems pretty neat, but after seeing the video tutorial on their official web page, I'm not sure it can do what I need. When player1 challenges player2, can it send a notification to player2 letting them know about the match request?
I would be glad to know how other multi-player games handle network communication when two players are playing a game over the network.
I'm also open to entirely new suggestions for how to achieve what I want. I can pretty much code anything, so don't hesitate to point me to a more involved way of achieving the network communication.
Let me also mention that I'd be glad to implement a method specific to my case, so it can be anything that gets the job done, but I'm also looking for a more general way for clients and the server to communicate, so that I can program an interface and reuse the code in other Android games and applications.
I hope I have presented the problem all right and that I'll receive some valuable answers.
Thank you
You should take a look at XMPP. It's a protocol (originally created for chat programs) that allows sending XML data between users.
It has a separate client-server relationship, so you can focus on developing a client application fit for phones, and a different server depending on your needs.
There is a load of information available on the protocol (I should know, I wrote a thesis about using the protocol in game applications), but you can start by looking it up on Wikipedia to see if it is what you want.
aSmack is a library for creating Android XMPP clients. It takes some tweaking to set up and get everything working, but once you do, it's neat.
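For a rough idea of what the challenge flow could look like over XMPP, here is a sketch using the newer Smack 4.x chat API (aSmack itself exposes the older 3.x API, and the domain, JIDs and credentials below are placeholders):

```java
import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.chat2.ChatManager;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;

public class ChallengeClient {

    public static void main(String[] args) throws Exception {
        XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
                .setXmppDomain("example.org")                 // placeholder XMPP domain
                .setUsernameAndPassword("player1", "secret")  // placeholder credentials
                .build();

        AbstractXMPPConnection connection = new XMPPTCPConnection(config);
        connection.connect().login();

        ChatManager chatManager = ChatManager.getInstanceFor(connection);

        // The XMPP server pushes incoming challenges to us; no polling needed.
        chatManager.addIncomingListener((from, message, chat) ->
                System.out.println("Challenge from " + from + ": " + message.getBody()));

        // Send a challenge to the opponent picked from the player list.
        EntityBareJid opponent = JidCreate.entityBareFrom("player2@example.org");
        chatManager.chatWith(opponent).send("CHALLENGE");
    }
}
```

In a real game you would send a structured payload rather than a bare string, but the push behaviour is the same.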
EDIT: regarding the answer suggesting the use of C2DM, from the C2DM docs, "Sending large numbers of C2DM messages":
Are you sending C2DM messages too frequently? If you need to communicate with your application frequently over a short period of time, C2DM is probably not the best solution. Instead, consider implementing XMPP or your own protocol to exchange messages, and use C2DM only to send the initial notification.
It sounds like Android Cloud-to-Device-Messaging might be what you need
Push notifications without the app having to keep a connection open
I would vote in favor of some message-passing technique, like ActiveMQ, RabbitMQ, ZeroMQ or something similar. On the server side you may stick with Java, or JavaScript (like node.js); such a solution would provide the best performance and minimal latency.
If latency is not that critical, you may as well use REST calls with JSON.

Persistent push with comet long-polling on Jetty?

I am trying to create a Jetty servlet that allows clients (web browsers, Java clients, ...) to get broadcast notifications from the web server.
The notifications should be sent in a JSON format.
My first idea was to make the client send a long-polling request, and the server respond when a notification is available using Jetty's Continuation API, then repeat.
The problem with this approach is that I am missing all the notifications that happen between 2 requests.
The only solution I found for this is to buffer the events on the server and use a timestamp mechanism to retransmit missed notifications. This works, but seems pretty heavy for what it does...
Any idea on how I could solve this problem more elegantly?
Thanks!
HTTP Streaming is most definitely a better solution than HTTP long-polling. WebSockets are an even better solution.
WebSockets offer the first standardised bi-directional full-duplex solution for realtime communication on the Web between any client (it doesn't have to be a web browser) and server. IMHO WebSockets are the way to go since they are a technology that will continue to be developed, supported and in demand and will only grow in usage and popularity. They're also super-cool :)
There appear to be a few WebSocket clients for Java, and Jetty also supports WebSockets.
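As a rough sketch of how the broadcast case looks with Jetty's own WebSocket annotations (the class and method names are illustrative), every notification is written to each currently open session, so nothing is lost between two polls:

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;

@WebSocket
public class NotificationSocket {

    // All currently connected clients.
    private static final Set<Session> SESSIONS = new CopyOnWriteArraySet<>();

    @OnWebSocketConnect
    public void onConnect(Session session) {
        SESSIONS.add(session);
    }

    @OnWebSocketClose
    public void onClose(Session session, int statusCode, String reason) {
        SESSIONS.remove(session);
    }

    // Called by the application whenever a notification is produced.
    public static void broadcast(String json) throws IOException {
        for (Session session : SESSIONS) {
            if (session.isOpen()) {
                session.getRemote().sendString(json);
            }
        }
    }
}
```

Clients that were disconnected at the time of a broadcast would still need the timestamp/replay mechanism described in the question, but connected ones no longer miss anything.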
Sorry for bumping this up, but I believe numerous people will come across this thread, and the accepted answer is, IMHO, at least outdated, not to say misleading.
In order of priority I'd put it as follows:
1) WebSockets are the solution nowadays. I've personally had the experience of introducing WebSockets in enterprise-oriented applications. All of the major browsers (Chrome, Firefox, IE - in alphabetical order :)) support WebSockets natively. All major servers/servlet containers (IIS, Tomcat, Jetty) do too, and there are quite a number of Java frameworks implementing the JSR 356 API. There is a problem with proxies, especially in cloud deployments, yet there is high awareness of WebSocket requirements, so Nginx has supported them for 1.5 years already. Anyway, the secured 'wss' protocol solves the proxy problem in 99.9% of cases (not 100%, just to be on the safe side; I've never experienced it myself).
2) Long polling is probably the second-best solution, and the 'probably' is due to the 'short polling' alternative. By long polling I mean a repeated request from client to server, which responds as soon as any data is available. Thus one poll can finish in a few milliseconds, another only at the maximum wait time.
Be sure to limit the poll time to something less than two minutes, since otherwise you'll usually need to handle timeout errors on the client side. I'd propose limiting the poll time to tens of seconds.
To be sure, once a poll finishes (on timeout or earlier) it is immediately repeated (though it's better to establish some simple protocol and give your server a chance to tell the client to 'suspend').
The con of long polling, which IMHO justifies continuing the list, is that it holds one of the just few (4? 8? still not that many) connections the browser allows each page to establish to a server. So it can eat up ~12% to ~25% of your website's client traffic resource.
3) Short polling is not much loved by many, but sometimes I prefer it. Its main con is, of course, the high load on the browser and the server from establishing new connections. Yet I believe that if connection pools are used properly, that overhead is much smaller than it looks at first glance.
4) HTTP streaming, be it page streaming via an IFrame or XHR streaming, is IMHO a highly BAD solution, since it accumulates the cons of all the rest and more:
you'll hold the connections open (browser and server resources);
you'll still be eating into the total available client connection limit;
most evil: you'll need to design/implement (or reuse a design/implementation of) the actual content delivery in order to be able to differentiate new content from old (be it by pushing scripts, oh my!, or by tracking the length of the accumulated content). Please don't do this.
Update (20/02/2019)
If WebSockets are not an option, Server-Sent Events are the second-best option IMHO; effectively, browsers have implemented HTTP streaming for you here at a lower level.
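As a rough illustration of how little Server-Sent Events ask of the server, here is a minimal servlet sketch (Servlet 3.0 async API; a real implementation would hand the AsyncContext to a broadcaster component and complete it on errors or disconnects):

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/events", asyncSupported = true)
public class SseServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");

        // Keep the response open; events are written to it as they occur.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(0); // disable the container-imposed timeout

        PrintWriter out = ctx.getResponse().getWriter();
        out.write("data: {\"type\":\"connected\"}\n\n"); // an SSE frame is "data: ..." plus a blank line
        out.flush();
    }
}
```

On the client, a plain EventSource('/events') receives each frame as a message, and the browser reconnects automatically if the connection drops.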
I have done this before using HTTP streaming via the Atmosphere framework, and it worked fine.
Visit Comet, Streaming
If you look at the Atmosphere tutorial, they give multiple examples.
You may want to check how they implemented this in CometD: http://cometd.org .
Or you may even consider using that tool, without having to reinvent the wheel.

Is server push or client pull better?

I am developing a chat website using JSP/servlets. I will be hosting my website on Google App Engine. Now I have some doubts about whether to use server push or client pull technology.
1) If I use server push and I don't close the servlet response, will it cause the server to slow down? How many simultaneous connections can a typical Tomcat server handle if I keep the socket open for the entire chat session between two clients?
2) Would server push or client pull be better?
If you are using servlets (prior to 3.0), then I guess you'll have to go with pull because of the servlet programming model. However, there ARE advantages to using a push model, primarily avoiding wasted load on the server and the latency penalty. That's why there are technologies such as Comet. Servlet 3.0 also supports a push model. These are commonly used in Ajax-based apps.
In fact, I believe a push model is better suited to a chat app because of the fast response time (= better user experience) it can provide.
If you use an NIO-based implementation for the push model, you can support thousands or even more than 10k concurrent connections (obviously, your mileage may vary).
If you use a conventional IO-based implementation, it will likely be in the range of hundreds of concurrent connections (don't take this estimate too seriously, though; I'm just giving these numbers to convey a very rough feeling).
As for Tomcat, the last time I checked, people were saying that it wouldn't have good push-model support until version 7.0. But I'm not following the current status, so I'm not sure (sorry, perhaps somebody else can help you with this). If that is the case, you might want to check out Jetty's Comet support.
Grizzly and Netty are also good NIO-based network frameworks, but if you want to use JSP and find that Tomcat is not sufficient, I guess Jetty would be the best bet.
edit: (some additional info)
In this "push models", it's not like the server opens a connection to the client. The connection will be kept alive, and the server will push messages as it sees fit.
Also, it's not like there are only "push" and "pull" models. You can have a hybrid, like long polling.
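For reference, here is a bare-bones sketch of that long-polling hybrid on the Servlet 3.0 async API (class and method names are illustrative, and responding with "no data" on timeout is left out):

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // Requests currently parked and waiting for a message.
    private static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000); // don't hold the request forever
        WAITING.add(ctx);
    }

    // Called by the chat logic when a new message should be pushed out.
    public static void push(String message) throws IOException {
        AsyncContext ctx;
        while ((ctx = WAITING.poll()) != null) {
            ctx.getResponse().setContentType("application/json");
            ctx.getResponse().getWriter().write(message);
            ctx.complete(); // the client immediately issues its next poll
        }
    }
}
```

Note that on App Engine the thirty-second request deadline (see the next answer) would cap how long such a poll may be parked.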
I don't know how you are thinking of achieving server push here. As far as I can see, the server needs a request to respond to over HTTP. So when there is a request, the server will respond to it.
If I use server push and I don't close the servlet response, will it cause the server to slow down?
App Engine will not let you do that. You have to finish your response within thirty seconds, or it will be killed. The thirty seconds is also an edge case; most calculations they do (for quota and such) are based on a 75-millisecond response time.
How many simultaneous connections can a typical Tomcat server handle?
Tomcat? I thought you were planning to use App Engine?
Pull. Always pull.
I know it's a manufacturing-oriented book but the advice from Lean Thinking (Womack & Jones) is invaluable in any context (roughly, from memory):
Start by defining value,
line up the activities that create value in the value-stream,
create flow across the value-stream,
let customers pull value from the value-stream,
compete against perfection rather than other organizations
If I misquoted them, I apologize. Anyway, all of those principles can easily be applied to the development of any software product, just as they could to the production of any physical product, but the one that matters for you is pull.
Letting consumers of a service pull rather than pushing to them not only makes your programming model easier, it aligns activity with demand. You can still use queuing to level load over time, if you have to, just as you could with push, but this way you have complete visibility into what exactly happens in any given transaction.
I don't quite get your first question but the answer is still pull.
The answer to your query depends on what underlying protocol you wish to use.
Since you have mentioned JSP/servlets, your app will be implemented over the HTTP protocol.
HTTP is a protocol over TCP. TCP is connection-oriented and stays alive until the connection is closed. HTTP connections, however, are effectively persistent only for the duration of a request-response cycle: even with keep-alive, the server cannot send anything to the client unless the client has an outstanding request. That should answer your doubt about how many socket connections a typical Tomcat server will be able to handle: the connections are not held open for the whole chat session; each one is only occupied for the duration of an HTTP request-response cycle.
Given this basic idea, I would suggest you use a client pull strategy to implement your app.
Even with "server push" over HTTP, despite the name, it is always the web client that polls the server at regular intervals, which merely gives the illusion of server push. The HTTP specification mandates that the client makes a request, to which the server responds.
I have considerable experience in developing chat applications (both mobile and web).
Let me know if you need any assistance; I will be more than willing to help.
