So the problem is that I have fifteen clients which need to be able to communicate with each other. My question is: how should this be done? Clearly one way is to simply make each client also act as a server, but that means 105 unique connections (15 × 14 / 2) to fully connect the fifteen clients. I'd rather not do this, as it seems messy.
Current solution:
Each new connection has the server spin off a separate thread for listening to it. Each client has a separate thread monitoring the channel for incoming information.
Server acts as a message router: Process 1 needs to send a message to Process 2 and sends a message to the server indicating intended recipient, sender, and message.
Upon receiving the message, the server passes it on to Process 2; the listening thread on Process 2 detects it and hands it to the process.
And so on for each message between the clients.
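For reference, a minimal sketch of this routing pattern as described (the class name MessageRouter and the pipe-delimited wire format "recipient|sender|message" are illustrative assumptions, not the original code):

    import java.io.*;
    import java.net.*;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of the "server as message router" pattern: one listening thread per
    // client, messages forwarded by recipient id.
    public class MessageRouter {
        private final Map<String, PrintWriter> clients = new ConcurrentHashMap<>();

        public void start(int port) throws IOException {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    Socket socket = server.accept();
                    new Thread(() -> handle(socket)).start(); // one thread per connection
                }
            }
        }

        private void handle(Socket socket) {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                String clientId = in.readLine();           // first line identifies the client
                if (clientId == null) return;
                clients.put(clientId, out);
                String line;
                while ((line = in.readLine()) != null) {
                    String[] parts = line.split("\\|", 3); // recipient|sender|message
                    PrintWriter recipient = clients.get(parts[0]);
                    if (recipient != null) {
                        recipient.println(line);           // forward to the intended recipient
                    }
                }
                clients.remove(clientId);
            } catch (IOException ignored) {
            }
        }

        public static void main(String[] args) throws IOException {
            new MessageRouter().start(5000);
        }
    }

Each client would connect, send its id as the first line, and then write recipient|sender|message lines which the router forwards.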
This seems clunky. Is there a better methodology/package to use for this?
A UDP multicast system would work for this, but it will get complicated to do yourself (you have to handle synchronization and fault detection/correction, as well as nodes dropping in and out of the group).
There are various middleware solutions, including distributed caches, that already address this problem pretty well. Look at Infinispan. If that's too high-level and you just want a lower-level solution, try JGroups. I only list those because I know they are quick and usable, but there are many others out there.
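For instance, a minimal JGroups sketch (written against the JGroups 4.x API; the cluster name "chat-cluster" is an arbitrary example) in which every node joins one cluster and broadcasts to the group:

    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;

    // Every node joins the same cluster and receives messages sent by any member.
    public class ChatNode extends ReceiverAdapter {

        public static void main(String[] args) throws Exception {
            JChannel channel = new JChannel();        // default UDP/multicast protocol stack
            channel.setReceiver(new ChatNode());
            channel.connect("chat-cluster");          // join (or create) the group
            channel.send(new Message(null, "hello from " + channel.getAddress())); // null dest = broadcast
            Thread.sleep(5000);
            channel.close();
        }

        @Override
        public void receive(Message msg) {
            System.out.println(msg.getSrc() + ": " + msg.getObject());
        }
    }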
Related
I've set up sockets for communication between a server and client and have threads running on the server for multiple client connections. Furthermore, I'm now sending byte arrays between server and client for data; however, I'm thinking of implementing cyclic barriers to make the server wait for a specific number of clients to connect before a different message is sent to each client.
This communication and waiting will need to be ongoing: for example, once this threshold of client connections is reached and the message sent out, the server should wait again for a message to come back from each client, probably a different message. This should continue for at least a few iterations. I'm wondering: if I implement cyclic barriers for this process, would that be the best solution?
Is this the intended use of cyclic barriers, or would there be a better alternative to my idea?
To keep it simple, I intend to wait for 2 clients to connect. There will also be timeout conditions enforced to deal with possible failure.
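For reference, a minimal sketch of the barrier-based flow described above, assuming the 2-client threshold and a 30-second timeout; the class name and per-client handling are illustrative:

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    // Wait for 2 clients to connect, then run a barrier action that sends the next
    // round of messages; a CyclicBarrier can be reused for the following rounds.
    public class BarrierServer {
        public static void main(String[] args) throws Exception {
            CyclicBarrier barrier = new CyclicBarrier(2,
                    () -> System.out.println("Both clients ready - send round of messages"));

            try (ServerSocket server = new ServerSocket(6000)) {
                for (int i = 0; i < 2; i++) {
                    Socket client = server.accept();
                    new Thread(() -> {
                        try {
                            // ... exchange byte arrays with this client over 'client' ...
                            barrier.await(30, TimeUnit.SECONDS);   // wait for the other client
                        } catch (InterruptedException | BrokenBarrierException | TimeoutException e) {
                            // timeout or broken barrier: handle the failure case here
                        }
                    }).start();
                }
            }
        }
    }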
After googling how messages are sent/received in a chat messenger like WhatsApp, I found that they use a queue-based messaging system. I am just trying to figure out what the high-level design of this feature could be.
HLD per my understanding:
Say Friend 1 and Friend 2 are online. Friend 1 has established an HTTP web connection to web server 1 and Friend 2 has established an HTTP web connection to web server 2. Friend 1 sends a message to Friend 2.
Now, as soon as the message reaches web server 1, I need to convey it to web server 2 so that it can be pushed to Friend 2 through the already established web connection.
I believe custom distributed Java queues can be used here to propagate the message from one server to another. As soon as a message comes to one server, it will push the message to a distributed queue (distributed for load balancing and high availability) with the message content, fromUserId, and toUserId. There will be a listener on the queue which will look at the destination userId of the just-popped message and find which web server that userId is active on. If the user is active, pop the message and push it to the client; otherwise store it in a DB so that it can be pulled once the user gets online. To see which user is active on which server, we can maintain a TreeMap with userId as key and serverName as value for efficient lookup.
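For illustration, a rough single-JVM sketch of that listener logic (the ChatMessage class, the in-memory queue, and the userId-to-server map are simplified stand-ins for the real distributed queue and shared presence store):

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Simplified sketch of the routing listener described above.
    public class MessageDispatcher {

        static class ChatMessage {
            final String fromUserId, toUserId, content;
            ChatMessage(String from, String to, String content) {
                this.fromUserId = from; this.toUserId = to; this.content = content;
            }
        }

        // In production these would be a distributed queue and a shared presence store.
        private final BlockingQueue<ChatMessage> queue = new LinkedBlockingQueue<>();
        private final Map<String, String> userToServer = new ConcurrentHashMap<>();

        void listen() throws InterruptedException {
            while (true) {
                ChatMessage msg = queue.take();                 // pop the next message
                String server = userToServer.get(msg.toUserId); // where is the recipient connected?
                if (server != null) {
                    pushToServer(server, msg);                  // deliver via the recipient's web server
                } else {
                    storeForLater(msg);                         // recipient offline: persist for later pull
                }
            }
        }

        private void pushToServer(String server, ChatMessage msg) { /* forward over that server's connection */ }
        private void storeForLater(ChatMessage msg) { /* write to the message store / DB */ }
    }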
The actual design is probably more complex/scalable than this brief outline. I would like to know if this is the right direction for a scalable chat messenger.
Also, I believe we need multiple distributed queues instead of one for such a scalable application. But if we have multiple distributed queues, how will the system ensure FIFO message delivery across them?
I would like to know if this is the right direction for a scalable chat messenger?
Designing this application using message queues has the following benefits:
Decoupling of client and server, and a reduced failure blast radius: queues can gracefully handle traffic peaks by temporarily growing in size, and they shrink back to normal once traffic returns to normal (or any transient failures have been fixed).
In a messaging application, clients (mobiles) can be offline for long periods. As a result, a synchronous design would not work, since the clients might not be accessible for message delivery. However, with an asynchronous design such as message queues, the responsibility for message delivery is on the client side: the client can poll for new messages as soon as it comes online.
So yes, this design could be quite scalable in terms of performance and usability. The only thing to keep in mind is that it would require a separate queue for each user, so the number of queues would scale linearly with the number of the application's users (which could be a significant financial & scalability issue).
But if we have multiple distributed queues, how will the system ensure FIFO message delivery across them?
Many queues, either open-source (RabbitMQ, ActiveMQ) or commercial (AWS SQS), support FIFO ordering. However, the FIFO guarantee inside the queue is not enough, since the messages sent by a single client could be delivered to the queue in a different order due to asynchronicity in the network (unless you are using a single, non-distributed queue over TCP, which guarantees ordered delivery).
However, you could implement FIFO ordering on the client side. Following this approach, the messages would include a timestamp, which each client would use to sort the messages when receiving them. The only side effect is that a client could see a message without having seen all the previous messages first. However, when the previous messages arrive, they will be shown in the correct order in the client's UI, so eventually the user sees all the messages in the correct order.
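A minimal sketch of that client-side reordering, assuming each message carries a sender-assigned timestamp (the class names are illustrative):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Keep received messages sorted by the sender's timestamp, so late arrivals
    // slot into the correct position in the conversation view.
    public class ConversationView {

        static class ChatMessage {
            final long senderTimestamp;   // assigned by the sender when the message was written
            final String text;
            ChatMessage(long senderTimestamp, String text) {
                this.senderTimestamp = senderTimestamp;
                this.text = text;
            }
        }

        private final List<ChatMessage> messages = new ArrayList<>();

        void onReceive(ChatMessage msg) {
            messages.add(msg);
            messages.sort(Comparator.comparingLong((ChatMessage m) -> m.senderTimestamp)); // re-sort on arrival
            // ... refresh the UI from the sorted list ...
        }
    }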
I would like to know if this is the right direction for a scalable chat messenger?
I would probably prefer a slightly different approach. Your ideas are correct, but I would like to add a bit more to them. I happened to create such a chat messenger a few years ago, and it was supposed to be quite similar to WhatsApp. I am sure that when you googled, you came across XMPP (Extensible Messaging and Presence Protocol). We were using Openfire as the server that maintains connections. The concept that you explained, where
Say Friend 1 and Friend 2 are online. Friend 1 has established an HTTP web connection to web server 1 and Friend 2 has established an HTTP web connection to web server 2. Friend 1 sends a message to Friend 2.
is called federation, and Openfire can be run in a federated mode. After reading through your comments, I came across the one-queue-per-user point. I am sure you already know that this approach is not scalable, as it is very resource-intensive. A good approach would be to use an actor framework such as Akka. Each actor is like a lightweight thread in Java, and each actor has an inbox, so messaging is taken care of in this case.
So your scenario becomes: Friend 1 opens a connection to the Openfire XMPP server and initializes a Friend 1 actor. When he types a message, it is transferred to the Friend 1 actor's inbox (each actor in Akka has an in-memory inbox). This is communicated to the XMPP server. The server has a database of its own, and since it is federated with other XMPP servers, it will try to find out whether Friend 2 is online. The XMPP server will keep the message in its DB until Friend 2 comes online. Once Friend 2 establishes a connection to any of the XMPP servers, a Friend 2 actor is created, its presence is propagated to all other servers, and XMPP server 1 will notify Friend 2's actor. Friend 2's actor's inbox will now get the message.
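For illustration, a minimal sketch of such a per-friend actor using the Akka classic Java API; the XMPP/Openfire integration is omitted and the names are just examples:

    import akka.actor.AbstractActor;
    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;

    // One lightweight actor per connected friend; its mailbox holds incoming chat messages.
    public class FriendActor extends AbstractActor {

        public static Props props(String userId) {
            return Props.create(FriendActor.class, () -> new FriendActor(userId));
        }

        private final String userId;

        private FriendActor(String userId) {
            this.userId = userId;
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(String.class, text ->
                            System.out.println(userId + " received: " + text)) // hand off to the XMPP layer here
                    .build();
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("chat");
            ActorRef friend1 = system.actorOf(FriendActor.props("friend1"), "friend1");
            friend1.tell("hello from friend 2", ActorRef.noSender()); // lands in friend1's mailbox
        }
    }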
Optional: there is also the option of delivery receipts. Once Friend 2 reads the message, a delivery receipt can be sent to Friend 1 to indicate the status of the message, i.e. read, unread, delivered, not delivered, etc.
Well, I am new to this and I don't know how to do it, so senior fellows, please help!
There is a situation described below:
An HTTP client sends a request (the request can be of any type; I'm not concerned with the request type) that directly hits a load balancer. The load balancer then redirects the traffic, based on load, towards a "Gateway" system running on two V440 servers. The GW logic, written in Java, logically routes this request towards another two server nodes which actually process the request.
Now the situation is something like this: several parallel connections are established with this Gateway from several HTTP clients, one connection per client. It has been observed that, while connections to the GW are being made, for some clients the CPU utilization goes up to 98-99%.
Each client creates one connection with the GW on a particular port; the GW opens a server socket:
    ServerSocket _ss = new ServerSocket(_port); // bind and listen on the configured port
    Socket s = _ss.accept();                    // block until a client connects
and then the GW waits for input from the client.
Now my questions are:
1. Why is this happening, when everything seems fine for the rest of the clients and their connections? Is it only a few of the clients connecting to the GW that cause the situation?
2. Is there any way we can track these clients' IPs, so that we can tell whether it is the same clients causing this every time?
3. Is there any resolution for this?
Since it is not happening for all the clients, we are certainly not going to find an immediate answer. However, this is what my limited research on the question yields:
Firstly, Question 2
Configure your F5 to capture the client's IP. Since it is HTTP, there are multiple ways of tracking the requests. One is to sniff the X-FORWARDED-FOR header, which will give the client's IP address.
Or try adding this rule in your logging engine
    when CLIENT_ACCEPTED {
        log local0. "clientIP:[IP::client_addr] accessed"
    }
If you also need other data, such as the resources accessed, you can use one of the other events, such as HTTP_REQUEST:
    when HTTP_REQUEST {
        log local0. "clientIP:[IP::client_addr] accessed [HTTP::host][HTTP::uri]"
    }
Refer to the link for the above here.
Secondly, Question 1
For this, you need to look at your available traffic-statistics mechanisms. I read this, this and this. Enable the statistics, monitor them live, test with mock requests, and analyze the output. I do not know of any options other than these right now.
Another option, if you can modify your Java program, is to include some sort of performance-logging mechanism for each request. But that means a lot of development work, which I do not recommend at all.
Thirdly, Question 3
This is primarily opinion-based. As far as I can tell, once you figure out the problem, you can resolve it.
I need to implement a client/server instant messenger using pure sockets in Java.
The server should serve a large number of clients, and I need to decide which sockets I should use: TCP or UDP.
Thanks, Costa.
TCP
Reason:
TCP: "There is absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent."
UDP: "There is no guarantee that the messages or packets sent would reach at all."
Learn more at: http://www.diffen.com/difference/TCP_vs_UDP
Would you want your chat messages to possibly be lost?
Edit: I missed the part about a "large" chat program. I think that, because of the nature of a chat program, it needs to be a TCP server; I cannot imagine sending the actual text content typed by users over a UDP protocol.
Note that a TCP server is not hard-capped at 65,536 simultaneous connections; that figure is the number of ports per IP address, while each connection is distinguished by the full client/server address-and-port tuple, so the practical limits are file descriptors, memory, and CPU. If you really need to go past what one server can handle, you could create a dispatcher server that sends incoming connections to the appropriate server depending on current server load.
You could use both. Use TCP for exchanging the actual messages, so no data is lost and streaming large messages (e.g. containing JPEGs) is possible. Use UDP only for sending short 'connectNow' messages to clients for which there are messages queued. The clients could have states like (NotLoggedIn, TCPconnected, TCPdisconnected, LoggedOut), with various timeouts to control the state transitions as well as the normal message-exchange events. The UDP 'connectNow' message would instruct clients in 'TCPdisconnected' to connect and so move to 'TCPconnected', where they would stay, exchanging messages, until some inactivity timer instructs the client to disconnect for now. This would, of course, be unreliable, so you may wish to repeat the 'connectNow' message every X seconds for N times until the client connects. The client should, in any case, attempt a poll every X minutes, just in case...
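A minimal sketch of sending such a 'connectNow' nudge over UDP (the client address, port, and payload are arbitrary examples; a real implementation would repeat the datagram as described above):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Server side: fire a short, unreliable "connectNow" datagram at a client that
    // has queued messages, prompting it to open a TCP connection.
    public class ConnectNowSender {
        public static void main(String[] args) throws Exception {
            byte[] payload = "connectNow".getBytes(StandardCharsets.UTF_8);
            InetAddress client = InetAddress.getByName("192.168.1.42"); // example client address
            try (DatagramSocket socket = new DatagramSocket()) {
                DatagramPacket packet = new DatagramPacket(payload, payload.length, client, 9876);
                socket.send(packet); // best effort: repeat every X seconds until the client connects
            }
        }
    }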
It depends on whether the user needs to know if the messages have been delivered to the server. UDP packets have no inherent acknowledgement. If the client sends an IM message to the server and it gets lost in transit, neither the client nor the server will know about it.
(The short answer is "use TCP" ... but it is worth thinking through the design implications for yourself.)
TCP would give you reliability, which is certainly desirable for instant messaging -- you would not want messages to be dropped mid-conversation.
However, if you intend to use group messaging, then you might end up using multicast. For such cases, UDP would be the right choice, since UDP can handle point-to-multipoint. Using TCP for multicast applications would be hard, since the sender would have to keep track of retransmissions and the sending rate for multiple receivers. One alternative could be to use TCP for point-to-point chat and UDP for group messaging.
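For example, a minimal UDP multicast receiver for the group-messaging case (the group address and port are arbitrary; joinGroup(InetAddress) is the older, simpler overload):

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.nio.charset.StandardCharsets;

    // Every group member joins the same multicast group and sees each datagram sent to it.
    public class GroupChatReceiver {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1"); // example multicast group
            try (MulticastSocket socket = new MulticastSocket(4446)) {
                socket.joinGroup(group);
                byte[] buf = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    String text = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                    System.out.println("group message: " + text);
                }
            }
        }
    }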
I have an application where the user (client #1) enters a local IP and a port, and the application sends a picture to client #2 (who is also using the same application). However, for the final application I do not want the user to enter the local IP, because they will not know this information, and I want my program to figure it out automatically.
My first idea:
Originally, I thought that I could scan all the local IPs for an open port, but this would take way too long.
My second idea:
My next idea was to have the clients send their local hostnames to a remote server which then swaps them and sends them back to the clients.
However, I do not want to run a dedicated server for my second idea.
Because this is more of a design question, I am not including any code but I will do so if necessary.
Do you guys have any ideas on how I should design my application to automatically figure out the local IPs?
I did try to google this but couldn't figure out a solution, and so I gave up after an hour and just put my question here.
You can use something like JGroups (which allows discovery based on multicast [LAN], etc.) or some peer-to-peer implementations for that, although the latter require at least some servers for initial discovery.
In principle it works like this: the clients send a message out to "the world" using some well-known address and wait for someone to answer. Meanwhile, each client also listens for such a message and replies to it with information on how to "connect" to it. This can be done via a so-called blackboard, where the blackboard is either a special multicast address (the OS/network stack delivers the message to all clients listening concurrently) or one or more servers (seeds) that take and coordinate the requests and the "membership" lists. Anyway, there are some tools ;)
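As a rough illustration of the LAN-multicast flavour of that blackboard discovery (the group address, port, and announcement format are all assumptions):

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.nio.charset.StandardCharsets;

    // Each client announces itself to a well-known multicast group and listens for
    // the announcements of other peers (it will also see its own announcement).
    public class PeerDiscovery {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("239.255.10.10"); // example discovery group
            int port = 8888;

            try (MulticastSocket socket = new MulticastSocket(port)) {
                socket.joinGroup(group);

                // Announce "how to connect to me" (here: the TCP port this client listens on).
                byte[] hello = "PEER 5000".getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(hello, hello.length, group, port));

                // Listen for announcements from other peers on the LAN.
                byte[] buf = new byte[256];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    String msg = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                    System.out.println("discovered peer at " + packet.getAddress() + " -> " + msg);
                }
            }
        }
    }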