Situation: two small Java applications, both connecting to a remote service and sending some data there (the first application listens on a local socket, processes the data, sends it for verification to the remote service and processes the response; the second application starts at a scheduled time, processes some data from the database and sends it to the remote service). The problem is that the remote service allows only one connection (that connection is an SMPP session), which means that if one application is running and the other one starts and tries to connect, bad things will happen...
The idea is to combine those two applications into one (maybe there are other solutions?) and create some kind of control/workflow functionality whose responsibility will be to manage the applications and avoid collisions when connecting to the remote service.
Can someone give me any advice about that idea? Maybe there is some design pattern that would help me avoid pitfalls when implementing it? (It would be even better if there are open source applications that manage a similar kind of problem, so I could browse the source code and gather some useful information.)
Thank you.
Wrap the data in classes together with the necessary metadata.
Place your applications in separate threads and, instead of sending the data directly, add it to a queue.
Then, in another thread, read from the queue and send the data to the service.
I'd give BlockingQueue a try (http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html).
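A minimal sketch of that queue-plus-single-sender arrangement; the class and field names are illustrative, and the actual send call is just a placeholder for whatever SMPP client library you end up using:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Both applications (now threads) put work on the queue; a single sender
// thread owns the one allowed connection to the remote service.
public class OutboundDispatcher {

    // Wrapper for the payload plus whatever metadata the sender needs.
    public static class OutboundMessage {
        final String source;   // which application produced it
        final byte[] payload;
        OutboundMessage(String source, byte[] payload) {
            this.source = source;
            this.payload = payload;
        }
    }

    private final BlockingQueue<OutboundMessage> queue = new LinkedBlockingQueue<>();

    // Called by the two application threads instead of opening their own connection.
    public void submit(OutboundMessage message) throws InterruptedException {
        queue.put(message);
    }

    // The only thread that ever talks to the remote service.
    public void startSender() {
        Thread sender = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    OutboundMessage next = queue.take(); // blocks until work arrives
                    sendOverSharedSession(next);         // placeholder for the single SMPP session
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "smpp-sender");
        sender.setDaemon(true);
        sender.start();
    }

    private void sendOverSharedSession(OutboundMessage message) {
        // placeholder for the actual SMPP library call
        System.out.println("sending " + message.payload.length + " bytes from " + message.source);
    }
}
```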
The most obvious solution is to write a simple reverse proxy server that gathers requests into a queue and sends them one by one to your remote service. Alternatively, the proxy could run a new instance of the service for each request.
http://en.wikipedia.org/wiki/Reverse_proxy
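For illustration only, a very simplified sketch of such a gateway: it pretends the protocol is line-based (real SMPP exchanges binary PDUs, so an actual implementation would use an SMPP library), and the host names and ports are made up.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Owns the single connection to the remote service and serves local
// clients strictly one at a time, so the two applications never collide.
public class SingleConnectionGateway {
    public static void main(String[] args) throws IOException {
        try (Socket remote = new Socket("remote-service.example.com", 2775); // made-up host/port
             ServerSocket local = new ServerSocket(9999)) {                  // port both apps connect to
            BufferedReader remoteIn = new BufferedReader(new InputStreamReader(remote.getInputStream()));
            PrintWriter remoteOut = new PrintWriter(remote.getOutputStream(), true);
            while (true) {
                try (Socket client = local.accept();
                     BufferedReader clientIn = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter clientOut = new PrintWriter(client.getOutputStream(), true)) {
                    String request = clientIn.readLine();   // one request per client connection
                    remoteOut.println(request);             // forward over the shared session
                    clientOut.println(remoteIn.readLine()); // relay the response back
                }
            }
        }
    }
}
```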
I've been through different questions about this topic; however, none of them has cleared my doubts about the best approach for notifying the client side of a server-client IM app.
The Problem:
The whole problem is how to notify the client application of updates. I've already seen the following approaches:
Client keeps checking for updates: from time to time, the client app checks with the server to see whether there are updates for that specific user.
Problem: it does not perform well at all. Suppose you have one million users and each of them checks for new updates every second: the server would have to deal with one million requests per second. Won't work.
Client app opens a socket: the client app opens a socket and sends its address to the server. The server, in turn, persists this information and connects to that socket whenever it needs to notify the client of an update.
Problem: often the client will be behind a NAT, so the IP it reports is in a non-routable range. In order to send messages to such a client, port forwarding would have to be configured on the NAT, which can't be done.
Regardless of the technology, I think this approach will always be used in some form; however, I have no idea how the problem described above can be solved.
Google Cloud Messaging (GCM): use the GCM service to notify the client of any update. Problem: it doesn't seem right to use a third-party server to handle the IM, and it raises concerns about the scalability of the system. When the number of messages and users grows, it seems the service will go down. Besides that, passing the information through two servers before delivering it to the targets just adds bottlenecks to the process.
A combination of 2 and 3: use GCM to reach the client when the last persisted address is no longer reachable.
Problem: same as described in 2.
XMPP: I've seen many answers suggesting the use of XMPP for IM applications; however, XMPP is a protocol, as far as I've found on the web. I don't see how it can solve the problem described in 2, for instance.
Given the options above, can someone indicate which direction I should try to go? Which of these approaches has the best chance of success?
Thank y'all in advance.
Use Google Cloud Messaging. Contrary to what you stated, this service is built to scale to billions of users and will generally not introduce performance bottlenecks.
What you basically want to do is use the messaging service to wake up devices. If you insist, you can then still use your client-server approach, and thus your own protocol, to have the client look up new messages from the backend.
I have deployed a Java web application on Heroku.
Now, I want to change the back-end so that it can notify connected users about specific events. I thought I could use server-sent events for that, and the way I imagined it working is the following:
When the user opens the front-end, it would establish a connection for the server-sent events.
When the back-end receives such a request, it would create such a connection (basically an EventOutput) and store it somewhere along with the user's ID (let's say in a Map in memory).
When a new event comes along, the back-end will find the user that needs to be notified, retrieve his connection according to his ID and send him the notification.
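Roughly, I imagine the single-machine version looking something like this, assuming Jersey's SSE support (the resource path and the in-memory map are just illustrative):

```java
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

import org.glassfish.jersey.media.sse.EventOutput;
import org.glassfish.jersey.media.sse.OutboundEvent;
import org.glassfish.jersey.media.sse.SseFeature;

@Path("events")
public class EventResource {

    // userId -> open SSE connection; lives in the memory of this one process.
    private static final ConcurrentMap<String, EventOutput> CONNECTIONS = new ConcurrentHashMap<>();

    @GET
    @Path("{userId}")
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput subscribe(@PathParam("userId") String userId) {
        EventOutput output = new EventOutput();
        CONNECTIONS.put(userId, output);
        return output;
    }

    // Called by whatever part of the back-end produces the events.
    public static void notifyUser(String userId, String message) throws IOException {
        EventOutput output = CONNECTIONS.get(userId);
        if (output != null) {
            output.write(new OutboundEvent.Builder().data(String.class, message).build());
        }
    }
}
```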
This works just fine when you have only one machine handling the requests.
My problem starts when I want to scale up my app and introduce more machines. Then I cannot really store these connections in the memory of one machine anymore; I need to use some centralized location. But a centralized location would need to serialize/deserialize the connection, which means it's not the same connection anymore!
How do you usually do something like that?
One solution is to use session affinity (a.k.a. sticky sessions), which will ensure that a single session's requests are "always" routed to the same process (I say "always" because there are some caveats). You can turn this feature on by running this command:
$ heroku labs:enable http-session-affinity
In this way, you can keep things in memory and will not have to serialize the session.
Here is an article describing this feature in more detail: https://blog.heroku.com/archives/2015/4/28/introducing_session_affinity
You could use a pub-sub solution (ex: Redis pub-sub) that is accessible to each of your dynos.
On starting, your app subscribes to the appropriate channels. When an event happens, it is published to a channel. This means all instances of your app (spread across multiple dynos) receive that event, and any of them that have SSE connections open can respond to the event.
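A rough sketch of that approach, assuming the Jedis client and an in-memory connection map like the EventResource sketch above; the channel name and the "userId:payload" message format are made up:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// Each dyno runs one subscriber; whichever dyno holds the user's SSE
// connection in memory forwards the event, the others simply ignore it.
public class EventBus {

    private static final String CHANNEL = "user-events"; // made-up channel name

    // Call once on application startup (in its own thread, since subscribe() blocks).
    public static void startSubscriber(String redisHost) {
        new Thread(() -> {
            try (Jedis jedis = new Jedis(redisHost)) {
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        // message format "userId:payload" is just an illustration
                        String[] parts = message.split(":", 2);
                        try {
                            // only has an effect on the dyno holding that user's connection
                            EventResource.notifyUser(parts[0], parts[1]);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }, CHANNEL);
            }
        }, "redis-subscriber").start();
    }

    // Any dyno can publish; every subscribed dyno receives it.
    public static void publish(String redisHost, String userId, String payload) {
        try (Jedis jedis = new Jedis(redisHost)) {
            jedis.publish(CHANNEL, userId + ":" + payload);
        }
    }
}
```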
I have several PCs, and on each of them I have installed a small Swing application that gets data via a JSON request to one web server. Can I receive data from the web server without sending a request to it? In other words, can the web server send the data without the Java application asking for it?
If you have enough server resources, you can consider using WebSockets.
Every PC can open a socket to the server.
When you open the socket, you need to send the PC's unique ID to the server.
Then you need to store this ID in some database or file that will contain all online PCs and their sockets.
Then the server will be aware of which PCs are online and which socket to use to communicate with each of them. After this, you can send whatever information you need to that PC, depending on your application.
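A small sketch of the server side, assuming the Java WebSocket API (JSR 356); the endpoint path and the ID handling are illustrative:

```java
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

// Each PC connects to ws://server/push/{pcId}; the server keeps the open
// sessions by ID so it can push data whenever it wants.
@ServerEndpoint("/push/{pcId}")
public class PushEndpoint {

    private static final ConcurrentMap<String, Session> ONLINE = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session, @PathParam("pcId") String pcId) {
        ONLINE.put(pcId, session);
    }

    @OnClose
    public void onClose(Session session, @PathParam("pcId") String pcId) {
        ONLINE.remove(pcId);
    }

    // Called by the server-side code that has something to push.
    public static void pushTo(String pcId, String json) throws IOException {
        Session session = ONLINE.get(pcId);
        if (session != null && session.isOpen()) {
            session.getBasicRemote().sendText(json);
        }
    }
}
```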
This can be implemented in several ways. One common way would be to open a connection and do a blocking read in the client application. When something is received, it will look like a push from the server. Then you process the push and do another blocking read.
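On the client side that blocking-read loop might look roughly like this; the host, port and line-based framing are assumptions:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// Keep one connection open and block on read; every line that arrives is
// effectively a push from the server.
public class BlockingReadClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("server.example.com", 9000); // made-up host/port
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) { // blocks until the server sends something
                System.out.println("push received: " + line);
            }
        }
    }
}
```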
Another option would be to regularly check whether there is something for you on the web server. You set the polling interval frequently enough that it looks like a real-time push from your app's point of view.
If you use HTTP, I think the smartest way is to drop the real-time requirement and use a thread that polls the server every 5 seconds. Keeping an HTTP connection open all the time is expensive, as it blocks a request processor thread and limits the number of clients you can have.
You might also consider moving to something like a registration mechanism if you really need near-real-time updates, which is often not the case. You would have to open a server on the clients and have the server push updates after the clients have registered their address with it.
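A small sketch of the 5-second polling thread mentioned above, using java.net.HttpURLConnection; the URL and the handling of the response are placeholders:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Polls the server every 5 seconds; from the Swing app's point of view the
// result is close enough to a push for many use cases.
public class PollingClient {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://server.example.com/updates").openConnection();
                try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("update: " + line); // hand the JSON to the Swing app here
                    }
                }
            } catch (Exception e) {
                e.printStackTrace(); // keep polling even if one attempt fails
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}
```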
We have a Java (Spring) web application with Tomcat servlet container.
We have something like a blog.
But the blog must load its posts dynamically with Ajax.
The client's Ajax script checks for new posts every second.
I.e. the Ajax code must ask the server for new posts every second, and that will be very heavy on the database.
And what if we have hundreds of thousands of simultaneous connections?
I think we should retrieve all new posts with a cron job every second and then save them somewhere. But where? The main idea is to take the load off the database.
Any ideas about architecture?
Thanks in advance!
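For what it's worth, something like this is what I have in mind for the cron-style refresh, using Spring's @Scheduled support (the repository and Post class are placeholders, and @EnableScheduling would need to be turned on):

```java
import java.util.Collections;
import java.util.List;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Refreshes an in-memory snapshot of the latest posts once per second, so the
// per-second Ajax requests hit this cache instead of the database.
@Component
public class PostCache {

    private final PostRepository repository; // hypothetical DAO/repository
    private volatile List<Post> latestPosts = Collections.emptyList();

    public PostCache(PostRepository repository) {
        this.repository = repository;
    }

    @Scheduled(fixedRate = 1000)
    public void refresh() {
        latestPosts = repository.findLatest(); // one DB query per second, regardless of client count
    }

    // Called by the controller that serves the Ajax polling requests.
    public List<Post> getLatestPosts() {
        return latestPosts;
    }
}
```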
There is another polling architecture that could work better, depending on the case:
Long polling
Long polling is a variation of the traditional polling technique and allows emulation of an information push from a server to a client. With long polling, the client requests information from the server in a similar way to a normal poll. However, if the server does not have any information available for the client, instead of sending an empty response, the server holds the request and waits for some information to be available. Once the information becomes available (or after a suitable timeout), a complete response is sent to the client. The client will normally then immediately re-request information from the server, so that the server will almost always have an available waiting request that it can use to deliver data in response to an event. In a web/AJAX context, long polling is also known as Comet programming.
Long Polling
Example implementations of this technology:
Push Server
You could also use the observer pattern to register the pending requests and notify them when an update occurs.
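As a rough illustration, long polling in that observer style could be done with the Servlet 3.0 async API; how new posts actually arrive is just a placeholder here:

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Clients call GET /poll; the request is parked until newPostArrived() is
// invoked, so there is no per-second hammering of the server or the database.
@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    private static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext context = req.startAsync();
        context.setTimeout(30_000); // after 30 s the client simply re-polls
        WAITING.add(context);       // "register the request" (observer)
    }

    // Called by whatever code creates a new post ("notify the observers").
    public static void newPostArrived(String postJson) {
        AsyncContext context;
        while ((context = WAITING.poll()) != null) {
            try {
                context.getResponse().setContentType("application/json");
                context.getResponse().getWriter().write(postJson);
            } catch (IOException e) {
                // client probably went away; ignore and continue
            } finally {
                context.complete();
            }
        }
    }
}
```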
Hundreds of thousands of concurrent users all polling your site every second makes for a huge amount of traffic. If you truly expect this load, you are going to have to design your platform accordingly, probably by clustering multiple web, application and DB servers.
Remember that with a database connection pool you don't need a DB connection for every user.
I'm not as familiar with Tomcat, but in WebSphere we can set up connection pools to prepare a certain number of connections.
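In plain Java, a pooled DataSource could be set up roughly like this, assuming Commons DBCP2 (Tomcat ships its own JDBC pool that works similarly); the driver, URL and credentials are placeholders:

```java
import org.apache.commons.dbcp2.BasicDataSource;

// A bounded pool: all user requests share these connections instead of
// each request opening its own.
public class Database {

    private static final BasicDataSource POOL = new BasicDataSource();

    static {
        POOL.setDriverClassName("com.mysql.jdbc.Driver");   // placeholder driver
        POOL.setUrl("jdbc:mysql://localhost:3306/blog");    // placeholder URL
        POOL.setUsername("blog");
        POOL.setPassword("secret");
        POOL.setInitialSize(10);
        POOL.setMaxTotal(50);                               // upper bound on concurrent DB connections
    }

    public static BasicDataSource getPool() {
        return POOL;
    }
}
```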
Also, are you mainly worried about reads, or about a comparable number of writes?
Plus, you may also want to have the database "split" by region, etc. This way there is no single heavy load on the entire database; it can be partitioned and even load balanced.
There are also "NoSQL" databases to look into as well. Maybe something to consider. Just ideas to help out.
We have a string processing service (C++, uses stdin/stdout for input/output) that has different layouts. Each layout runs separately (eventually they will run on separate machines), and each layout takes time to load, which is why it must keep running after the first run.
I must implement a system with a client that asks the master server to connect it to the relevant slave server, which actually runs the relevant layout service. The slave server passes the data from the client to the service, and when finished becomes available on the master server for other clients.
The question is: what is the best way to go about implementing the servers? Should I keep an open connection between slave and master until the process is complete, to notify the master that the connection is over, or keep some sort of variable in a synchronized function to check that?
Any other important input (or alternative designs) I have overlooked is also very welcome, thanks!
Assuming you can't replace the C++ stuff, here is how I would do it off the top of my head.
I would set up one master server. That server would run a process that accepts requests (probably over HTTP, so it'd be a webservice), and I would have it read the request, parse out what it is, and then call the correct slave. Basically it acts as a proxy. Once it receives the response from the slave, it forwards it back to the caller. The simplicity here means that if you start getting more of one type of request, you can set up additional servers for that type and round-robin requests to them.
The slaves would be webservices that open the C++ program and forward input and retrieve output. That's all it would do.
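A rough sketch of that slave side, keeping the long-lived C++ process alive and piping one request/response through its stdin/stdout; the executable path and the line-based framing are assumptions:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

// Wraps the already-loaded layout process; the web layer of the slave calls
// process(String) for each client request.
public class LayoutServiceWrapper {

    private final Process process;
    private final BufferedWriter toService;
    private final BufferedReader fromService;

    public LayoutServiceWrapper(String executablePath) throws IOException {
        // Started once, so the expensive layout load happens only at startup.
        process = new ProcessBuilder(executablePath).redirectErrorStream(true).start();
        toService = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
        fromService = new BufferedReader(new InputStreamReader(process.getInputStream()));
    }

    // One request at a time; the master only routes one client per slave anyway.
    public synchronized String process(String input) throws IOException {
        toService.write(input);
        toService.newLine();
        toService.flush();
        return fromService.readLine(); // assumes the service answers one line per input line
    }
}
```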
I wouldn't bother keeping open connections (except between the slave and the C++ program based on your description). Just using a web request for this stuff will keep the connection between the master and the slave open during the process, but it shouldn't be a problem. This way you don't need to worry about this detail.
Now, if I were you, I would seriously look at reimplementing the C++ code in Java, or calling it via JNI or something. If you can, I think avoiding the Java-wrapper-around-C++ arrangement would be a good design goal. The Java code could do whatever expensive processing is needed once during startup, and then hold things ready in memory like the C++ code does.
I hope this helps.
Depending on your scalability needs, you may want to take a look at the Java NIO package. This will give you a starting point to build a scalable, non-blocking server implementation.
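For reference, a skeleton non-blocking server built on NIO's Selector; it only echoes bytes back, and the port is arbitrary, so it is just a starting point rather than the actual master or slave implementation:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Single-threaded, non-blocking server: one Selector multiplexes all client
// channels, so no thread is ever parked waiting on a single connection.
public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9100)); // arbitrary port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();        // peer closed the connection
                    } else {
                        buffer.flip();
                        client.write(buffer);  // echo back; real code would parse and route here
                    }
                }
            }
        }
    }
}
```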