It's my first question here, so my apologies if I've asked it the wrong way.
In my experiment, multiple Android devices are connected using Wi-Fi Direct. To take advantage of the broadcast nature of wireless transmission, all the devices join a single multicast group to exchange their information. My intention is to have a sender transmit only one copy of its information, while all of its 1-hop neighbors receive it.
My trouble is that nodes further away are also receiving it.
Consider the example:
A----B----C
at the same time:
A----D
1) the connections are made via Wi-Fi Direct;
2) they join a single multicast group for message exchange.
What I want: if A sends, B and D can receive it, but not C; if B sends, A and C can receive it, but not D. Basically, the so-called "1-hop broadcast".
What I get: if A sends, B and D receive it, and B relays it (due to MAC-layer multicast triggered by the UDP multicast, I guess?), so C also receives it.
I did some searching: MulticastSocket has a setTimeToLive() method, whose parameter values mean:
0: not sent on network, only local use;
1: only local network, not going through router;
...
But I somehow need something between 0 and 1, so I can limit the transmission to only 1 hop. I couldn't find a solution for this.
You might ask why I need to limit the scope: it's to prevent flooding and thus reduce network resource consumption.
You might ask why I don't use unicast to each neighbor. That has a scalability issue in terms of neighbor-set cardinality, which multicast/broadcast should solve efficiently. Unless Wi-Fi Direct actually "simulates" multicast/broadcast using unicast at the MAC layer?
You might also ask why I don't create one distinct MulticastSocket per node and let its neighbors join it. I have thought about this, but I'm not sure about the complexity of managing all those sockets.
Sorry for having written so much. I'm looking forward to any suggestions.
EDIT:
--- We tried setTimeToLive(1), but nodes 2 hops away from a sender can still receive the message.
--- We checked the default TTL and confirmed that the default value is already 1.
--- My feeling is that the TTL is not decremented hop by hop; it merely limits the transmission to a "local network", i.e., it doesn't go through routers. With wireless nodes connected by Wi-Fi Direct, the whole network may be treated as a single "local network", hence the relay to all multicast group members.
--- So I doubt there is any way to explicitly limit the transmission hop count for a MulticastSocket.
--- My two UGLY backup plans are:
1) unicast from a sender to each of its 1-hop neighbors; or
2) each node maintains its own MulticastSocket and lets each of its neighbors join it, so nodes 2 hops away will have joined different multicast groups.
But both solutions require creating and closing lots of sockets, and both are subject to the scalability issue (i.e., density).
Can anyone suggest a better way to do this? Basically, the key target is to implement 1-hop broadcast functionality so that wireless nodes can share local information with their 1-hop neighbors.
Best
Zhang Bo
In C, you have to set the socket option for TTL (Time to Live):
u_char ttl = 1;  /* desired hop limit */
setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));
In Java you have different options:
MulticastSocket.setTimeToLive: http://docs.oracle.com/javase/7/docs/api/java/net/MulticastSocket.html#setTimeToLive(int)
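A minimal sketch of that first option (the group address 230.0.0.1 and port 4446 below are placeholders):
import java.net.InetAddress;
import java.net.MulticastSocket;

MulticastSocket socket = new MulticastSocket(4446);
InetAddress group = InetAddress.getByName("230.0.0.1");
socket.joinGroup(group);
socket.setTimeToLive(1); // keep multicast datagrams on the local network; routers won't forward them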
Other options:
Use StandardSocketOptions class to set socket options: http://docs.oracle.com/javase/7/docs/api/java/net/StandardSocketOptions.html#IP_MULTICAST_TTL
To use StandardSocketOptions you need to work with DatagramChannel:
http://docs.oracle.com/javase/7/docs/api/java/nio/channels/DatagramChannel.html
import java.net.StandardSocketOptions;
import java.nio.channels.DatagramChannel;

DatagramChannel channel = DatagramChannel.open();
int ttlValue = 1; // TTL of 1: stay on the local network
channel.setOption(StandardSocketOptions.IP_MULTICAST_TTL, ttlValue);
OK, so let's clarify the questions...
I'm studying sockets in Java. From what I've understood so far, the points related to this subject are:
To let multiple clients connect to a single address/port on the server, it is necessary to assign each client connection to its own thread.
Based on that, I got confused about some things AND could not find any acceptable answer here or on Google so far.
If Socket is synchronous, what happens if 2 clients try to connect AT THE SAME TIME, and how does the server decide who gets to connect first?
How does the server process multiple messages from one client? I mean, does it process them in order? Does it return them in order?
Same question as above, but with multiple messages from multiple clients?
If the messages are not ordered, how do I achieve that? (in Java)
Sorry about all those questions, but for me they are all related...
Edit:
As the comment said, I misunderstood the concept of synchronization, so I changed that part.
Guys, we ask here to LEARN, not to get judged by other SO users; think about that before giving a -1 vote, OK?
what happens if 2 clients try to connect AT THE SAME TIME
It is impossible for 2 clients to connect at exactly the same time: the networking infrastructure guarantees it. Two transmissions happening at exactly the same time are called a collision (Wikipedia), and the network handles it one way or another: either through collision detection or through collision avoidance.
How does the server process multiple messages from one client? I mean, does it process them in order?
Yes. The Socket class uses the TCP/IP protocol, which includes sequence numbers in every segment and re-orders segments so that they are processed in the order they were sent, which may be different from the order in which they were received.
If you used DatagramSocket instead, that would use UDP, which does not guarantee ordering.
Same question as above, but with multiple messages from multiple clients?
There are no guarantees of the relative ordering of segments sent from multiple sources.
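Since the question mentions assigning each client connection to its own thread, here is a minimal sketch of that pattern (the class name and port 9000 are arbitrary placeholders); within one connection, TCP delivers messages in the order they were sent:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerClientServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();           // connections are accepted one at a time
                new Thread(() -> handle(client)).start();  // each client gets its own thread
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println("received: " + line);   // per-connection messages arrive in send order
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}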
I have been working on socket programming in Java recently, and something is confusing me. I have three questions about it.
The first one is:
There is a ServerSocket class in Java. Its constructor can take up to 3 parameters: port, backlog, and IP address. The backlog is the number of clients that can wait in a queue to connect to the server. Now let's think about this situation.
What happens if 10 clients try to connect to this server at the same time?
Does the server drop the last 5 clients that tried to connect? Let's increase the number of clients to 1 million per hour. How can I handle all of them?
The second question is:
Can a client send messages concurrently without waiting for the server's response? What happens if a client sends 5 messages to a server that has a backlog size of 5?
The last one is not actually a question. I have a plan for load balancing in mind. Let's assume we have 3 servers running on a machine.
Let the servers' names be A, B and C, and all of them are running smoothly. According to my plan, if I give them a priority based on incoming messages, then the smallest priority means the most available server. For example:
Initial priorities -> A(0), B(0), C(0), and a response completes at the end of the 5th time unit.
1.Message -> A (1), B(0), C(0)
2.Message -> A (1), B(1), C(0)
3.Message -> A (1), B(1), C(1)
4.Message -> A (2), B(1), C(1)
5.Message -> A (2), B(2), C(1)
6.Message -> A (1), B(2), C(2)
.
.
.
Is this logic good? I bet there is far better logic. What should I do to handle roughly a few million requests a day?
PS: All this logic is going to be implemented in a Java Spring Boot project.
Thanks
What happens if 10 clients try to connect to this server at the same time?
The javadoc explains it:
The backlog argument is the requested maximum number of pending connections on the socket. Its exact semantics are implementation specific. In particular, an implementation may impose a maximum length or may choose to ignore the parameter altogether.
Let's increase the number of clients to 1 million per hour. How can I handle all of them?
By accepting them fast enough to handle them all in one hour. Either the conversations are so quick that you can just handle them one after another, or, more realistically, you handle the various messages in several threads, or use non-blocking IO.
Can a client send messages concurrently without waiting for the server's response?
Yes.
What happens if a client sends 5 messages to a server that has a backlog size of 5?
Sending messages has nothing to do with the backlog size. The backlog is for pending connections. Messages can only be sent once you're connected.
All this logic is going to be implemented in a Java Spring Boot project.
Spring Boot is, most of the time, not used for low-level socket communication, but to expose web services. You should probably do that, and let standard solutions (a reverse proxy, software or hardware) do the load balancing for you, especially given that you don't yet seem to understand how sockets, non-blocking IO, threads, etc. work.
So, for your first question: the backlog queue is where clients are held waiting while you are busy handling other work (e.g., IO with an already connected client). If the queue grows beyond the backlog, those new clients will get a "connection refused". You should be fine with 10 clients connecting at the same time. It's a long discussion, but keep a thread pool: as soon as you get a connected socket from accept, hand it to your thread pool and go back to waiting in accept. You can't "practically" support millions of clients on one single server, period! You'll need to load-balance.
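A rough sketch of that accept-and-hand-off pattern (the class name, port 9000, backlog 50, and pool size 100 are arbitrary values for illustration):
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(100);  // worker threads for connected clients
        try (ServerSocket server = new ServerSocket(9000, 50)) {   // port, requested backlog
            while (true) {
                Socket client = server.accept();       // take the next pending connection off the queue
                pool.submit(() -> handle(client));     // hand it to the pool, go straight back to accept
            }
        }
    }

    private static void handle(Socket client) {
        try {
            // read from / write to the client here
        } finally {
            try { client.close(); } catch (Exception ignored) {}
        }
    }
}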
Your second question is not clear. Clients can't send messages while they are in the queue; they are taken off the queue once you accept them, and after that the queue length is irrelevant.
And lastly, about your load-balancing question: if you are going to serve millions of clients, I'd suggest investing in a good dedicated load balancer :) that can do round robin as well as the scheme you described.
With all that said, don't reinvent the wheel :). There are some good open-source Java servers; my favorite: https://netty.io/
I have an issue that is driving me crazy! Both design-wise and tech-wise.
I need to listen to a LOT of multicast addresses. They are divided into 3 groups per item that I am monitoring/collecting. I have gone down the road of having one process spin up 100 threads. Each thread uses 2 ports and three addresses/groups (2 of the groups are on the same port). I am using a MulticastChannel for each port and using select to monitor for data. (I have used datagram but found NIO MulticastChannel much better.)
Anyway, I am seeing issues where I can subscribe to about a thousand of these threads and data hums along nicely. The problem is that after a while some of them stop receiving data. I have confirmed with the system (CentOS) that I am still subscribed to these addresses, but the data just stops. I have monitors in my threads that detect data drops and out-of-order delivery via the RTP headers. When I detect that a thread has stopped getting data, I do a DROP/JOIN and the data then resumes.
I am thinking that a router in my path is dropping my subscription.
I am at my wits' end writing code to stabilize this process.
Has anyone ever sent IGMP joins out onto the network to keep the data flowing? Is this possible, or even reasonable?
BTW: The computer is an HP DL380 Gen9 with a 10G fiber connection to a 6509 switch.
Any pointers on where to look would really help.
Please do not ask for any code examples.
The joinGroup() operation already sends out IGMP requests on the network. It shouldn't be necessary to send them out yourself, and it isn't possible in pure Java anyway.
You could economize on sockets and threads. A socket can join up to about 20 groups on most operating systems, and if you're using NIO and selectors there's no need for more than one thread anyway.
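A rough sketch of that approach, assuming one non-blocking DatagramChannel per port registered with a single Selector (the interface name eth0, port 5000, and group addresses are placeholders):
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.net.StandardProtocolFamily;
import java.net.StandardSocketOptions;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// One channel per port; each channel joins several groups; one selector (and one thread) serves them all.
NetworkInterface nic = NetworkInterface.getByName("eth0");   // placeholder interface name
Selector selector = Selector.open();

DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET)
        .setOption(StandardSocketOptions.SO_REUSEADDR, true)
        .bind(new InetSocketAddress(5000));                  // placeholder port
channel.setOption(StandardSocketOptions.IP_MULTICAST_IF, nic);
channel.configureBlocking(false);
channel.join(InetAddress.getByName("239.1.1.1"), nic);       // join as many groups as you need
channel.join(InetAddress.getByName("239.1.1.2"), nic);
channel.register(selector, SelectionKey.OP_READ);

ByteBuffer buf = ByteBuffer.allocate(1500);
while (selector.select() > 0) {
    for (SelectionKey key : selector.selectedKeys()) {
        DatagramChannel ch = (DatagramChannel) key.channel();
        buf.clear();
        ch.receive(buf);                                     // read one datagram; no extra threads needed
    }
    selector.selectedKeys().clear();
}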
I have used datagram but found NIO MulticastChannel much better.
I don't know what this means. If you're referring to DatagramSocket, you can't use it for receiving multicasts, so the sentence is pointless. If you aren't, the sentence is meaningless.
I'm programming a mobile ad hoc network routing protocol in Java (using UDP). The routing protocol uses a ring topology (each node has one predecessor node and one successor node).
First, I've combined one transmitter (one thread) and one receiver (one thread) to form one node. But I'm facing some problems, like:
I'd like a third node to be able to listen to a transmission from one node to another. For example,
node A sends a packet to node B, and if node C is in range of node A then it might overhear that transmission too.
I'd also like to set one channel per ring to reduce interference, but I don't know which Java network API mechanism I should use.
I'd appreciate your guidance.
Thank you in advance (sorry for my poor English)!
For example, node A sends a packet to node B, and if node C is in range of node A then it might overhear that transmission too.
This is expected behavior for a wireless ad-hoc network. If C is not the destination (according to the MAC address), you can drop the received message.
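Java does not expose MAC addresses on received datagrams, so in practice that filtering happens at the application level, e.g. by checking the sender address or a destination ID you put in your own packet header. A rough sketch, assuming a hypothetical convention where the first payload byte carries the destination node ID (the port and ID are placeholders):
import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Hypothetical convention: the first byte of the payload is the destination node ID.
byte myNodeId = 3;                                // placeholder ID for this node
DatagramSocket socket = new DatagramSocket(6000); // placeholder port
byte[] buf = new byte[1024];

while (true) {
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    socket.receive(packet);
    if (buf[0] != myNodeId) {
        continue;                                 // overheard traffic meant for another node: drop it
    }
    // process packet.getData() from offset 1 onward
}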
I'd also like to set one channel per ring to reduce interference.
One channel per ring would, on the contrary, increase interference, especially if you expect high load and many messages being routed around. But a single channel is much easier to manage.
You need to think more about what your environment and requirements are.
Are you using 802.11 at the MAC level?
Do you want reliable, guaranteed delivery?
I have a machine with two network interface cards. I was wondering: if I want to send out a multicast message on one of the LANs, is it mandatory to use the machine's own IP on that LAN, or can I pass in any IP from that LAN?
That is, let's say the machine's IP is:
190.20.20.20
and another machine in that LAN is:
190.20.20.1
can I put:
multicastSocket.setInterface(InetAddress.getByName("190.20.20.1"));
If so, does that machine have to be turned on?
Thank you.
While multicast has its own routing peculiarities, it is important to understand that an explicit choice of the NIC (for both sender and receiver) matters and does make a difference (unless you want to rely on a particular OS's "automagic" behavior, which can get very tricky in production).
Firstly, let us clarify that java.net.MulticastSocket can be and is used for both sending and receiving messages (this does not mean that sending MC and receiving MC are similar; a receiver must perform IGMP joins, is visible via ip maddr, etc.).
In general, the more you specify while creating these sockets, the better (otherwise you will be at the mercy of the OS, with not-so-easy-to-debug situations bound to happen).
For the receiver socket you should:
specify the port (in the constructor); the firewall should be open for inbound UDP traffic on that port,
specify the interface (via socket.setNetworkInterface); otherwise the OS will pick one of the available interfaces (check via ip maddr),
specify the MC group (e.g. via socket.joinGroup(InetAddress.getByName("230.0.0.0"))); this triggers the IGMP mechanism (visible via tcpdump -i your_interface -n ether multicast).
For the sender socket you should, similarly:
specify the MC group (same call),
specify the interface (same call).
I'd put two arguments in favor of the above (a short sketch follows below):
I tested simple senders and receivers on CentOS 7 with two NICs (enp0s3 and enp0s8) (code here). Only by setting the NIC explicitly on the receiver was I able to say with confidence on which NIC the IGMP join would be issued and on which NIC the program would be listed (via ip maddr) as having joined the MC group. By pinning the sender to specific NICs, it is readily verified that a send on enp0s3 will only reach a receiver on enp0s3, and not one on enp0s8 (similarly, tcpdump shows the packets as outbound on the chosen interfaces). This is so, of course, without starting to play with MC routing on the OS, which can be made to route the packets as you wish.
You may imagine a setup like yours, where two different NICs lead to completely separate LANs, which may further have all the elements of MC routing (designated routers, perhaps rendezvous points, etc.). Thus, selecting the proper NIC for IGMP joins is essential and determines everything: if they are issued on the wrong NIC, you may not be able to receive MC traffic; if they are issued on both NICs, you may get more than you want.
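As a sketch of the receiver/sender checklist above (the NIC name enp0s3, port 4446, and group 230.0.0.0 are placeholders):
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

// Receiver: bind the port, pin the NIC, then join the group.
NetworkInterface nic = NetworkInterface.getByName("enp0s3");
InetAddress group = InetAddress.getByName("230.0.0.0");

MulticastSocket receiver = new MulticastSocket(4446);
receiver.setNetworkInterface(nic);   // the IGMP join will be issued on this NIC
receiver.joinGroup(group);

// Sender: pin the same NIC so the datagram leaves on the intended LAN.
MulticastSocket sender = new MulticastSocket();
sender.setNetworkInterface(nic);
byte[] payload = "hello".getBytes();
sender.send(new DatagramPacket(payload, payload.length, group, 4446));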
I hope this helps (even if the question is 6 years old by now).
IIRC, MulticastSocket is for receiving multicast messages, and you need to configure it with joinGroup to listen for multicast messages bound for a specific multicast IP address.
If you want to send a multicast message, things are much simpler: you just send your message to that specific multicast IP address, and the router/gateway will handle the actual multicast routing for you. (So you do need a router/gateway which supports multicast properly.)
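A minimal sketch of the sending side (group 230.0.0.1 and port 4446 are placeholders); a plain DatagramSocket is enough, since addressing the packet to a multicast group is what makes it a multicast send:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

DatagramSocket socket = new DatagramSocket();
InetAddress group = InetAddress.getByName("230.0.0.1"); // placeholder multicast group
byte[] data = "hello".getBytes();
socket.send(new DatagramPacket(data, data.length, group, 4446)); // placeholder port
socket.close();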
EDIT: the Java tutorials cover this topic as well: http://docs.oracle.com/javase/tutorial/networking/datagrams/broadcasting.html
The IP address in the setInterface() method is the address of one of your own interfaces. It is used in the case where you have multiple NICs, all connected to different subnets, and you want your multicast join- and leave-group messages to go out on a subnet that isn't the default route according to the IP routing tables.
In the case you mention there is no need to call setInterface() at all.
If you want a machine to receive messages, it does have to be turned on.
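If you did want to pin the NIC explicitly, a minimal sketch would pass your own 190.20.20.20 address, not the neighbor's (the port and group here are placeholders):
import java.net.InetAddress;
import java.net.MulticastSocket;

MulticastSocket socket = new MulticastSocket(4446);          // placeholder port
socket.setInterface(InetAddress.getByName("190.20.20.20"));  // an address of one of THIS machine's NICs
socket.joinGroup(InetAddress.getByName("230.0.0.1"));        // placeholder multicast group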