I have a Java TCP/IP client and server program that exchanges data based on requests from the clients. I now need to implement multiple instances of this server, with each one holding a subset of the data, with the client still only connecting to the master server. What is the best way to go about this, preferably without multithreading?
(I agree that multithreading would improve availability and performance, though it's not strictly necessary.)
If you want a scheme like this:

                          { ---> slave server 1
client ---> master server { ---> slave server 2
                          { ---> slave server 3

... then you'll have to add a client API to the master server, because it will play a double role: as a server, to receive requests from the client(s), and as a client, to send requests to the slave servers.
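A minimal single-threaded sketch of that double role, assuming a simple line-oriented protocol and a hypothetical slave at slave-host-1:9001 (neither of which is from the original post):

import java.io.*;
import java.net.*;

// Sketch only: the master accepts a client request on one socket and, acting as a
// client itself, forwards it to the slave that owns that subset of the data.
public class MasterServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9000)) {             // example port
            while (true) {
                try (Socket client = listener.accept()) {
                    BufferedReader fromClient = new BufferedReader(
                            new InputStreamReader(client.getInputStream()));
                    PrintWriter toClient = new PrintWriter(client.getOutputStream(), true);
                    String request = fromClient.readLine();
                    try (Socket slave = new Socket("slave-host-1", 9001);   // hypothetical slave
                         BufferedReader fromSlave = new BufferedReader(
                                 new InputStreamReader(slave.getInputStream()));
                         PrintWriter toSlave = new PrintWriter(slave.getOutputStream(), true)) {
                        toSlave.println(request);                           // forward the request
                        toClient.println(fromSlave.readLine());             // relay the slave's answer
                    }
                }
            }
        }
    }
}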
If you have already implemented a client/server communication protocol, it would be useful to reuse that same protocol for the master/slave communication.
You must also address the discovery problem: how does the master know how many slaves there are, and where they are?
If they run on the same host, the master itself could start the slaves.
If they run on different hosts, you'll have to provide this addressing information to the master: either through static configuration in the master, or by having each slave send the master an address message as soon as it starts (this adds some complexity to the master/slave protocol).
And there's still the availability problem: What if one of the slaves shuts down?
If the data is wisely distributed between the slaves, with a high amount of redundancy, the problem gets solved by having the master poll the slaves one by one until it gets all the needed data (of course, this only covers one or two slaves being off simultaneously; if many of the slaves shut down, there is no guarantee the data remains available. Redundancy has a limit).
If there is no redundancy at all, the master will have to detect this situation, and react properly:
If they run on the same host, the master can restart any of the slave servers dynamically.
If they run on different hosts, the master can do nothing other than report the problem to the client through an appropriate error message.
Data synchronization can be an issue if the slaves share writable data. In this case, the master will have to broadcast the same write to each affected slave.
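For the write case, a rough sketch of that broadcast, reusing the same line-oriented protocol assumed above (the slave address list would come from static configuration or from the registration messages mentioned earlier):

import java.io.*;
import java.net.*;
import java.util.List;

// Sketch only: push the same write to every slave so their copies stay in sync.
class WriteBroadcaster {
    static void broadcastWrite(List<InetSocketAddress> slaves, String writeCommand) {
        for (InetSocketAddress addr : slaves) {
            try (Socket slave = new Socket(addr.getHostString(), addr.getPort());
                 PrintWriter toSlave = new PrintWriter(slave.getOutputStream(), true)) {
                toSlave.println(writeCommand);
            } catch (IOException e) {
                // An unreachable slave is exactly the availability problem described above.
                System.err.println("Slave " + addr + " unreachable: " + e.getMessage());
            }
        }
    }
}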
I need to design something similar to WhatsApp or another messenger-like module.
High-level flow:
Client > Load Balancer > Web servers (assume 10 clustered servers for now) > REST-based controller > Service > DAO > DB
Challenge:
Say Friend 1 and Friend 2 are online. Friend 1 has established an HTTP web connection to web server 1 and Friend 2 has established an HTTP web connection to web server 2.
Friend 1 sends a message to Friend 2.
Now, as soon as the message arrives at web server 1, I need to convey it to web server 2 so that it can be pushed to Friend 2 through the already established web connection.
I have a couple of related questions here:
I can use distributed queues to propagate the message from one server to another. As soon as a message arrives at one server, that server pushes it to a distributed queue (distributed because of load balancing and high availability) with the message content, fromUserId, and toUserId. My question is: how will the right server (in this case web server 2) be notified?
If I use a JMS queue, only one server will be notified through its listener. If I use a topic, all servers will be notified; in that case all servers can reject the message except the one where toUserId's connection resides. Is there a better way, where the queue notifies only the right server based on some metadata?
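One standard JMS mechanism for notifying only the right server is a message selector on a topic subscription: each web server subscribes with a selector matching its own id, and the sending server stamps each message with the id of the server holding the recipient's connection. A hedged sketch; the topic, the property names, and the lookupServerFor(...) registry lookup are illustrative, not an existing API:

import javax.jms.*;

// Sketch: route each chat message to exactly one web server using a JMS message selector.
class ChatFanout {
    private final Session session;
    private final Topic chatTopic;

    ChatFanout(Session session, Topic chatTopic) {
        this.session = session;
        this.chatTopic = chatTopic;
    }

    // Each web server subscribes only to messages stamped with its own id.
    void subscribe(String myServerId) throws JMSException {
        MessageConsumer consumer =
                session.createConsumer(chatTopic, "serverId = '" + myServerId + "'");
        consumer.setMessageListener(msg -> {
            try {
                String toUserId = msg.getStringProperty("toUserId");
                // push the message body to the already-established connection held for toUserId
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }

    // The sending server stamps the message with the recipient's server before publishing.
    void publish(String toUserId, String body) throws JMSException {
        TextMessage msg = session.createTextMessage(body);
        msg.setStringProperty("toUserId", toUserId);
        msg.setStringProperty("serverId", lookupServerFor(toUserId));
        session.createProducer(chatTopic).send(msg);
    }

    private String lookupServerFor(String userId) {
        return "web-server-2"; // placeholder for a shared userId -> serverId registry
    }
}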
Also, if the destination user is offline, I need to put the message back on the queue. I'm not sure how that can be achieved; I believe we need some other queue implementation (probably a Java-based in-memory queue) instead of a JMS queue/topic, where the server code removes the message from the custom queue only once it gets an acknowledgement from the client.
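On the "remove only after acknowledgement" point, JMS itself can defer removal: with Session.CLIENT_ACKNOWLEDGE the broker keeps the message (and can redeliver it) until acknowledge() is called. A minimal sketch, where pushToClientAndWaitForAck is a hypothetical stand-in for pushing the message over the open connection and waiting for the device's confirmation:

import javax.jms.*;

class AckingConsumer {
    // Sketch: the broker keeps the message until acknowledge() is called, which we
    // do only after the recipient has confirmed receipt; otherwise it is redelivered.
    void consumePending(Connection connection, Queue pendingMessages) throws JMSException {
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(pendingMessages);
        Message msg = consumer.receive();                   // blocks until a message arrives
        if (pushToClientAndWaitForAck(msg)) {
            msg.acknowledge();                              // broker may now discard it
        }
        // otherwise do not acknowledge; the broker redelivers the message later
    }

    private boolean pushToClientAndWaitForAck(Message msg) {
        return false; // placeholder for the real push-and-wait-for-ack step
    }
}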
If any client is offline at the time the message is sent, then when he comes back online he will send a pull request. The server will make a request to the distributed queue, and the distributed queue will pull the message from the right physical queue. My question is: should the distributed queue keep the destinationUserId in the metadata and the message as the value?
DB vs. queue: With this approach I believe there is no need to store the message in a DB, which would be more costly (in terms of latency) than an in-memory queue in a highly real-time application like a messenger. We just need to store user/group details in the DB.
Update: I found a related link on Quora where the last point, i.e. "What protocol is used in WhatsApp?" by Kah Seng Tay, also confirms a similar approach using queues, but my questions above about the queue are still unanswered.
I am working on a LAN messenger. How can I check whether somebody else's system is plugged into the LAN?
That is, is that person online?
Also, our hostel has a LAN, and I've tried running the client/server program many times; it runs fine on my system (two clients on the same machine as the server), but it doesn't run when the server and client are on different machines.
The code is perfectly fine.
What could be the reason? Are there any special firewall settings that need to be changed to allow the packets through?
I'm creating a chat server using sockets right now and the way I'm doing it is I have every user query the server about every 20-30 seconds. The server keeps track of the last time a user "refreshed" itself. If a user's gone a certain time period or more without doing so, then the server tells anyone trying to contact this user that they are offline.
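A minimal sketch of that last-refresh bookkeeping on the server side (the timeout value and class names are just examples):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: record the last time each user polled; anyone silent for too long is "offline".
class PresenceTracker {
    private static final long TIMEOUT_MS = 30_000;          // example: 30 seconds
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    void refresh(String userId) {                           // call this on every client poll
        lastSeen.put(userId, System.currentTimeMillis());
    }

    boolean isOnline(String userId) {
        Long ts = lastSeen.get(userId);
        return ts != null && System.currentTimeMillis() - ts < TIMEOUT_MS;
    }
}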
Here is a VERY good reference to work off of. Take a look at the Server folder for the server-side and the src folder for the client-side:
https://code.google.com/p/simple-android-instant-messaging-application/source/browse/trunk/#trunk%2FServer%253Fstate%253Dclosed
If you only want to communicate within a LAN, then the socket implementation in that link is definitely what you want. If you want to communicate globally (a user in one LAN to a user in some other LAN), then you'll want to redesign it a little so that your server socket is actually on some server accepting client connections. The current implementation creates a server socket within each client and accepts connections from other clients trying to communicate with it. This design breaks due to NAT routers (for reasons I'd rather not explain unless you really want to know).
I have a server written in Java which basically awaits requests from different clients and serves their requests. I am running this server from Eclipse. This server is accessible on the local network but I want to be able to access this service from outside the local network. Is there any way to do this please?
P.S. I am a real beginner in these things
You can open a port on the router and forward it to the one the server is listening on. You then connect to your public IP. This IP can be found on any "what is my IP" website.
As @Java Player said, the problem is that your router (NAT) denies any incoming packets to your local network. Briefly, there are several solutions for this:
Third-party server: you must have a dedicated server that plays the role of intermediary between your client and server programs.
Pros:
Completely solves the NAT problem.
Cons:
In addition to your client and server, you must write a third program that forwards packets to the desired destination.
It also gets a little heavy (it wastes bandwidth on the relay).
Reversed connection: the roles of the server and client programs are reversed, meaning the client becomes a server and the server becomes a client (a technique used by many trojans); see the sketch after this list.
Pros:
This approach is very easy to implement.
Cons:
You must still have at least one open port.
UDP hole punching: this approach is used by perhaps all peer-to-peer solutions (e.g. Skype, uTorrent...).
Pros:
You don't need any router configuration.
Direct connection between peers.
Cons:
You also need a third-party server, called a STUN server, to get information about your router.
Not all routers work with UDP hole punching, so you should also consider the first solution as a fallback.
Writing a hole-punching solution is not an easy task.
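As a rough illustration of the reversed-connection option above (host name and port are examples only): the program behind NAT dials out, so nothing has to be opened on its router; only the other side needs a reachable port.

import java.io.*;
import java.net.*;

// Sketch of a "reversed connection": the machine behind NAT connects OUT to the peer.
class ReversedConnection {
    // Runs on the machine behind NAT (the original "server"): it now acts as a client.
    static Socket dialOut() throws IOException {
        return new Socket("peer.example.com", 5000);        // outbound, so NAT allows it
    }

    // Runs on the reachable peer (the original "client"): it now accepts the connection.
    static Socket waitForPeer() throws IOException {
        try (ServerSocket listener = new ServerSocket(5000)) {
            return listener.accept();                       // the NATed machine connects to us
        }
    }
}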
You could also download something like Hamachi, install it and sign into your network on the other PCs and Macs (and Linux, which is currently in beta), and then you'll be able to access your server over that virtual network.
Main Hamachi product page
Linux Beta
I am trying to set up a P2P instant messaging system, and while I haven't hit the issue yet, I expect I will have some issues if the client is behind NAT on a local LAN (read: Everyone.)
Let me explain the algorithm and you'll see what I mean. There are three components: a server and two clients - Client Alice wants to initiate a chat with Client Bob. The server only keeps track of who is online, but the actual conversation doesn't go through the server (for the clients' privacy).
So, Alice and Bob both sign in to the server, connecting to the server's static listening port from an ephemeral port. They tell the server what static port they are listening on for incoming chat requests. Alice asks the server how she can contact Bob. The server responds with the IP address and listening port, among other things. Alice sends the request to Bob on that IP address and port to establish the connection. Hope that makes sense.
If Bob is behind NAT, then sure, he can talk to the server because he's the one who starts the communication. But Alice's request won't get to him, because the NAT relationship hasn't been set up yet for the port he's listening on for chat requests from Alice's IP address.
Is there some kind of black magic that someone knows to make this work? Will it be a non issue? Development isn't that far along, I haven't actually hit this problem yet.
To state the obvious, I don't want to have to make end users configure port forwarding for their listening ports.
For the aforementioned black magic, the client and server are both in Java, but I'm just generally after the algorithm (and whether it's even possible).
Check out ICE (Interactive Connectivity Establishment).
Most P2P frameworks, like JXTA in Java, use the principle of relay servers.
Say A wants to connect to B and B is behind a firewall.
- both A and B establish outbound synchronous (or full-duplex/WebSocket) connections to relay server R
- A signals to R that it wants to transmit data to B
- R 'binds' the inbound connection from A to the outbound connection to B (the synchronous HTTP response to B for instance)
- A sends data to R which is relayed to B
The key thing here is that all connections are established outbound (and usually using a firewall-friendly protocol like HTTP on well-known ports).
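A stripped-down sketch of that relay step, using plain sockets instead of HTTP/WebSockets and only two fixed peers, purely to show the "bind A's inbound connection to B's" idea (the port is an example):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch only: relay R accepts two OUTBOUND connections (from A, then from B)
// and copies whatever A sends onto B's connection. Real relays multiplex many peers.
class Relay {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(8000)) {
            Socket a = listener.accept();                         // A connects out to the relay
            Socket b = listener.accept();                         // B connects out to the relay
            a.getInputStream().transferTo(b.getOutputStream());   // relay A -> B
        }
    }
}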
Things obviously get a bit more involved when you have distributed relays; you then need "routers" that maintain routes to the various peers via the relays, relying on distributed hash tables (DHTs) to maintain that information.
There is no black magic. If both clients are behind NAT the message has to go through a third party (the server).
And I would consider using such an architecture for all communication if it's only about text messages (you can add some kind of encryption if privacy is an issue). The server (or servers) will be more loaded, but you get a simpler (and in some cases more reliable) architecture. For instance, if Alice sends a message to Bob and Bob has some network issues, the server can queue the message for some time and deliver it later (even if Alice goes offline). Another thing is conference (group) chat. Handling it with P2P only is much more challenging (but can be very interesting). But if all clients are behind NAT, you get the same problem.
I would also strongly suggest implementing an application-level acknowledgment mechanism for all transmitted and received messages (both on the client and the server). TCP alone guarantees delivery only to the remote socket, not that the receiving application actually processed the message.
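A hedged example of what an application-level acknowledgment can look like in practice: tag each message with an id and have the receiver echo it back, retransmitting until the ack arrives (the wire format, timeout, and retry count are all illustrative):

import java.io.*;
import java.net.*;
import java.util.UUID;

// Sketch: each message carries an id; the peer is expected to reply "ACK <id>"
// within a timeout, otherwise the sender retransmits.
class ReliableSender {
    static void sendWithAck(Socket peer, String text) throws IOException {
        peer.setSoTimeout(2_000);                           // example ack timeout
        BufferedReader in = new BufferedReader(new InputStreamReader(peer.getInputStream()));
        PrintWriter out = new PrintWriter(peer.getOutputStream(), true);
        String id = UUID.randomUUID().toString();
        for (int attempt = 0; attempt < 3; attempt++) {     // example retry policy
            out.println("MSG " + id + " " + text);
            try {
                if (("ACK " + id).equals(in.readLine())) {
                    return;                                 // the peer processed the message
                }
            } catch (SocketTimeoutException ignored) {
                // no ack in time; fall through and retransmit
            }
        }
        throw new IOException("peer never acknowledged message " + id);
    }
}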
We have a Java EE-based web application running on a Glassfish app server cluster. The incoming traffic will mainly be RESTful requests for XML-based representations of our application resources, but perhaps 5% of the traffic might be for JSON- or XHTML/CSS-based representations.
We're now investigating load-balancing solutions to distribute incoming traffic across the Glassfish instances in the cluster. We're also looking into how to offload the cluster using memcached, an in-memory distributed hash map whose keys would be the REST resource names (eg, "/user/bob", "/group/jazzlovers") and whose values are the corresponding XML representations.
One approach that sounds promising is to kill both birds with one stone and use the lightweight, fast nginx HTTP server/reverse proxy. Nginx would handle each incoming request by first looking its URI up in memcached to see if there's an unexpired XML representation already there. If not, nginx sends the request on to one of the Glassfish instances. The nginx memcached module is described in this short writeup.
What is your overall impression with nginx and memcached used this way, how happy are you with them? What resources did you find most helpful for learning about them? If you tried them and they didn't suit your purposes, why not, and what did you use instead?
Note: here's a related question.
Update: I later asked the same question on ServerFault.com. The answers there are mainly suggesting alternatives to nginx (helpful, but indirectly).
Assume you have a bank of upstream application servers delivering data to the users.
upstream webservices {
    server 10.0.0.1:80;
    server 10.0.0.2:80;
    server 10.0.0.3:80;
}

server {
    ... default nginx stuff ...

    location /dynamic_content {
        set $memcached_key $uri;
        memcached_pass localhost:11211;
        default_type text/html;
        error_page 404 502 = @dynamic_content_cache_miss;
    }

    location @dynamic_content_cache_miss {
        proxy_pass http://webservices;
    }
}
What the above nginx.conf snippet does is direct all traffic for http://example.com/dynamic_content/* DIRECTLY to the memcached server. If memcached has the content, your upstream servers will not see ANY of that traffic.
If the memcached lookup fails with a 404 or 502 (the key is not in the cache, or memcached cannot be reached), nginx passes the request to the upstream servers. Since there are three servers in the upstream definition, you also get a transparent load-balancing proxy.
Now the only caveat is that you have to make sure that your backend application servers keep the data in memcache fresh. I use nginx + memcached + web.py to create simple little systems that handle thousands of requests per minute on relatively modest hardware.
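If the backend is Java (as in the question) rather than web.py, the same idea with the spymemcached client might look roughly like this (host, port, and TTL are examples; the key must match whatever nginx uses as $memcached_key):

import java.io.IOException;
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

// Sketch: after building the representation for a resource, write it into the same
// memcached instance nginx reads from, keyed by the request URI.
class RepresentationCache {
    private final MemcachedClient cache;

    RepresentationCache() throws IOException {
        cache = new MemcachedClient(new InetSocketAddress("localhost", 11211));
    }

    void store(String requestUri, String xml, int secondsToCache) {
        cache.set(requestUri, secondsToCache, xml);         // e.g. store("/user/bob", xml, 60)
    }
}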
The general pseudo-code for the application server looks like this for web.py:
import web
import memcache

cache = memcache.Client(['127.0.0.1:11211'])   # the same memcached instance nginx reads from
seconds_to_cache_content = 60                  # example TTL for cached pages

class some_page:
    def GET(self):
        output = 'Do normal page generation stuff'
        web_url = web.url().encode('ASCII')    # key must match nginx's $memcached_key ($uri)
        cache.set(web_url, str(output), seconds_to_cache_content)
        return output
The important thing to remember in the above web.py pseudo-code is that content coming from memcached via nginx cannot be changed at all. nginx uses simple byte strings, not Unicode. If you store Unicode output in memcached, you'll get at the very least weird characters at the start and end of your cached content.
I use nginx and memcached for a sports related website where we get huge pulses of traffic that only last for a few hours. I could not get by without nginx and memcached. Server load during our last big Fourth of July sports event dropped from 70% to 0.6% after implementing the above changes. I can't recommend it enough.