Java socket send latency when MSS exceeded - java

Trying to solve an issue where there is a significant amount of latency on outgoing messages that seems to be related to the socket flush behaviour. I've been taking packet captures of outgoing FIX messages from a quickfixj initiator to an acceptor.
To summarise the environment, the Java initiator makes a socket connection to a server socket on another server. Both servers are running Red Hat Enterprise Linux 5.10. The MSS reported by netstat on the interfaces is 0. The MTU of the NICs is 1500 (infinite, I believe, for the loopback interface). On the application side the messages are encoded into a byte array by quickfixj and written to the socket. The socket is configured with TCP_NODELAY enabled.
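For illustration, this is roughly what enabling that option looks like on a plain Java socket (the host name is a placeholder; in practice QuickFIX/J applies the option through its session settings rather than code like this):

import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch only: disable Nagle's algorithm (TCP_NODELAY) so small writes
// are pushed out immediately instead of being coalesced.
Socket socket = new Socket();
socket.setTcpNoDelay(true);
socket.connect(new InetSocketAddress("acceptor.example.com", 6082));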
I am almost sure I can eliminate the application as the cause of the latency, as when the acceptor (the ServerSocket) is run on the same server as the Initiator using the loopback interface, there is no sender latency. This is an example of some packet capture entries using the loopback interface:
"No.","Time","Source","Destination","Protocol","Length","SendingTime (52)","MsgSeqNum (34)","Destination Port","Info","RelativeTime","Delta","Push"
"0.001606","10:23:29.223638","127.0.0.1","127.0.0.1","FIX","1224","20150527-09:23:29.223","5360","6082","MarketDataSnapshotFullRefresh","0.001606","0.000029","Set"
"0.001800","10:23:29.223832","127.0.0.1","127.0.0.1","FIX","1224","20150527-09:23:29.223","5361","6082","MarketDataSnapshotFullRefresh","0.001800","0.000157","Set"
"0.001823","10:23:29.223855","127.0.0.1","127.0.0.1","FIX","1224","20150527-09:23:29.223","5362","6082","MarketDataSnapshotFullRefresh","0.001823","0.000023","Set"
"0.002105","10:23:29.224137","127.0.0.1","127.0.0.1","FIX","825","20150527-09:23:29.223","5363","6082","MarketDataSnapshotFullRefresh","0.002105","0.000282","Set"
"0.002256","10:23:29.224288","127.0.0.1","127.0.0.1","FIX","2851","20150527-09:23:29.224,20150527-09:23:29.224,20150527-09:23:29.224","5364,5365,5366","6082","MarketDataSnapshotFullRefresh","0.002256","0.000014","Set"
"0.002327","10:23:29.224359","127.0.0.1","127.0.0.1","FIX","825","20150527-09:23:29.224","5367","6082","MarketDataSnapshotFullRefresh","0.002327","0.000071","Set"
"0.287124","10:23:29.509156","127.0.0.1","127.0.0.1","FIX","1079","20150527-09:23:29.508","5368","6082","MarketDataSnapshotFullRefresh","0.287124","0.284785","Set"
The main things of interest here are: 1) despite the packet length (the biggest here is 2851), the PUSH flag is set on each packet; and 2) the measure of latency I'm using is the difference between the "SendingTime" set in the message just before it is encoded and the packet capture time ("Time"). The packet capture is being done on the same server as the initiator that is sending the data. For a capture of 10,000 packets there is no great difference between "SendingTime" and "Time" when using loopback. For this reason I think I can eliminate the application as the cause of the sending latency.
When the acceptor is moved to another server on the LAN, the sending latency starts to get worse on packets that are larger than the MTU. This is a snippet of a capture:
"No.","Time","Source","Destination","Protocol","Length","SendingTime (52)","MsgSeqNum (34)","Destination Port","Info","RelativeTime","Delta","Push"
"68.603270","10:35:18.820635","10.XX.33.115","10.XX.33.112","FIX","1223","20150527-09:35:18.820","842","6082","MarketDataSnapshotFullRefresh","68.603270","0.000183","Set"
"68.603510","10:35:18.820875","10.XX.33.115","10.XX.33.112","FIX","1223","20150527-09:35:18.820","843","6082","MarketDataSnapshotFullRefresh","68.603510","0.000240","Set"
"68.638293","10:35:18.855658","10.XX.33.115","10.XX.33.112","FIX","1514","20150527-09:35:18.821","844","6082","MarketDataSnapshotFullRefresh","68.638293","0.000340","Not set"
"68.638344","10:35:18.855709","10.XX.33.115","10.XX.33.112","FIX","1514","20150527-09:35:18.821","845","6082","MarketDataSnapshotFullRefresh","68.638344","0.000051","Not set"
What's significant here is that when the packets are smaller than the MSS (derived from the MTU), the PUSH flag is set and there is no sender latency. This would be expected, as disabling Nagle's algorithm causes PUSH to be set on these smaller packets. When the packet size is bigger than the MSS - a packet size of 1514 in this case - the difference between the time the packet is captured and the SendingTime jumps to 35ms.
It doesn't seem likely that this 35ms latency is caused by the application encoding the messages, as large messages were sent in <1ms on the loopback interface. The capture also takes place on the sender side, so it doesn't seem that MTU segmentation can be the cause either. The most likely reason seems to me that, because there is no PUSH flag set - as the packet is larger than the MSS - the socket and/or TCP stack at the OS level is deciding not to flush it until 35ms later. The test acceptor on the other server is not a slow consumer and is on the same LAN, so ACKs are timely.
Can anyone give any pointers as to what could cause this socket sending latency for packets larger than the MSS? Against a real counterparty in the US this sender latency reaches as high as 300ms. I thought that if a packet was greater than the MSS it would be sent immediately regardless of previous ACKs (as long as the socket buffer size was not exceeded). netstat generally shows zero socket queue and window sizes, and the issue seems to occur on all larger-than-MSS packets, even from startup. It looks like the socket is deciding not to flush immediately for some reason, but I'm unsure what factors could cause that.
Edit: As pointed out by EJP, there is no flush in Linux. The socket send just puts the data in the Linux kernel's network buffers, as I understand it. And it seems that for these non-push packets, the kernel is waiting for the ACK from the previous packet before it delivers the next one. This isn't what I'd expect; in TCP I'd expect packets to keep being delivered until the socket buffers filled up.

This is not a comprehensive answer as TCP behaviour will differ depending on a lot of factors. But in this case, this was the reason for the problem we faced.
The congestion window, in the TCP congestion control implementation, allows an increasing number of packets to be sent without an acknowledgement as long as no signs of congestion are detected, i.e. retransmissions. Generally speaking, when retransmissions occur, the congestion control algorithm resets the congestion window, limiting the number of packets that can be sent before an acknowledgement is received. This manifests itself in the sender latency we witnessed, as packets were held in the kernel buffer awaiting acknowledgements for prior packets. There are no TCP_NODELAY, TCP_CORK etc. type options that will override the congestion control behaviour in this regard.
In our case this was made worse by a long round-trip time to the other venue. However, as it was a dedicated line with very little packet loss per day, it was not retransmissions that caused the congestion control to kick in. In the end the problem appears to have been solved by disabling the following flag in Linux (net.ipv4.tcp_slow_start_after_idle). When set, this flag also causes the congestion window to be reset, but through detecting idleness rather than packet loss:
"tcp_slow_start_after_idle - BOOLEAN
If set, provide RFC2861 behavior and time out the congestion
window after an idle period. An idle period is defined at
the current RTO. If unset, the congestion window will not
be timed out after an idle period.
Default: 1
(Note that if you face these issues it is also worth investigating congestion control algorithms other than the one your kernel is currently configured to use.)

Related

Dimension of packets sent through a socket

I have implemented a system similar to BitTorrent, and I would like to know at what size I should set the packets of each chunk. I was not able to find how BitTorrent does it, what size packets they use. I currently use 100 kilobyte packets, is that a lot?
TCP breaks data into packets automatically. You don't have to worry about the size of network packets.
The size of a TCP packet is constrained by the MTU (maximum transmission unit) of the network, typically around 1500 bytes. If you were making a game or a multimedia program where low latency is important you might have to keep in mind that data is sent in packets, but for a file transfer program it doesn't matter.
There is no such thing as a TCP packet. It's a byte stream. Under the hood it is broken into segments, in a way that is entirely out of your control, and further under the hood those segments are wrapped in IP packets, ditto.
Just write as much as you like in each write, the more the better.
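For example, a minimal sketch of that advice (host, port and chunk size are arbitrary): hand the stream a large chunk in a single write and let TCP segment it however it likes.

import java.io.BufferedOutputStream;
import java.io.OutputStream;
import java.net.Socket;

// Sketch: one large write; the TCP stack decides how to break it into
// segments and IP packets, so the application never sees "packets".
Socket socket = new Socket("peer.example.com", 9000);
OutputStream out = new BufferedOutputStream(socket.getOutputStream());
byte[] chunk = new byte[100 * 1024];   // e.g. a 100 KB piece of the file
out.write(chunk);
out.flush();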

Read most recent UDP packet by disabling SO_RCVBUF in Java DatagramSocket?

I need to read the most recent incoming UDP packet, regardless of dropped packets in between reads. Incoming packets are coming in 3x faster than the maximum application processing speed. In an attempt to achieve this, I used setReceiveBufferSize(int size) of Java's DatagramSocket class to set the SO_RCVBUF to be the same size as my expected packet in bytes.
However, there is still a three packet delay before I get the most recent packet (and if the incoming rate is 10x the receive rate, there is a 10 packet delay). This suggests that SO_RCVBUF contains more than just the newest packet.
First, are the units of setReceiveBufferSize(int size) in bytes? It is not explicitly stated in the javadocs. Second, is there a way to disable SO_RCVBUF so that I only receive the most recent incoming packet? For example, zero is an illegal argument to the function, but I could theoretically set the receive buffer size to one.
This looks like an unusual problem ;)
I would recommend splitting your application into separate threads (a sketch follows below):
receiver (minimal work, no parsing etc.)
handles the incoming packets and puts the last object read into an asynchronous variable
processing (from what you wrote, it looks like this takes a long time)
reads the object from the asynchronous slot and processes it (don't forget to ignore the previous one)
If you need to hack things like SO_RCVBUF, I think you should step a bit closer to the I/O processing subsystem with C/C++.
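A rough sketch of that two-thread split (the port, buffer size and field names are mine, not from the answer): the receiver thread only reads and overwrites a "latest packet" slot, and the slow processing loop takes whatever is newest and ignores anything older.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicReference;

// Receiver: minimal work, just keep the most recent datagram.
final AtomicReference<byte[]> latest = new AtomicReference<>();
final DatagramSocket socket = new DatagramSocket(9876);

Thread receiver = new Thread(() -> {
    byte[] buf = new byte[2048];
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    while (true) {
        try {
            socket.receive(packet);
            latest.set(Arrays.copyOf(packet.getData(), packet.getLength()));
        } catch (Exception e) {
            break;   // socket closed or interrupted
        }
    }
});
receiver.setDaemon(true);
receiver.start();

// Processing loop: always work on the newest packet, dropping older ones.
while (true) {
    byte[] newest = latest.getAndSet(null);
    if (newest != null) {
        // ... slow processing of 'newest' goes here ...
    }
    // optionally back off briefly here when nothing new has arrived
}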
You've done exactly the wrong thing. Set the receive buffer as large as possible. 512k for example. Setting it low only increases the probability of dropped packets. And either speed up the receiving code or slow down the sending code. There's no point in sending packets that can't be received.

What would cause UDP packets to be dropped when being sent to localhost?

I'm sending very large (64000 bytes) datagrams. I realize that the MTU is much smaller than 64000 bytes (a typical value is around 1500 bytes, from my reading), but I would suspect that one of two things would happen - either no datagrams would make it through (everything greater than 1500 bytes would get silently dropped or cause an error/exception to be thrown) or the 64000 byte datagrams would get chunked into about 43 1500 byte messages and transmitted transparently.
Over a long run (2000+ 64000 byte datagrams), about 1% (which seems abnormally high for even a LAN) of the datagrams get dropped. I might expect this over a network, where datagrams can arrive out of order, get dropped, filtered, and so on. However, I did not expect this when running on localhost.
What is causing the inability to send/receive data locally? I realize UDP is unreliable, but I didn't expect it to be so unreliable on localhost. I'm wondering if it's just a timing issue since both the sending and receiving components are on the same machine.
For completeness, I've included the code to send/receive datagrams.
Sending:
DatagramSocket socket = new DatagramSocket(senderPort);
int valueToSend = 0;
while (valueToSend < valuesToSend || valuesToSend == -1) {
    byte[] intBytes = intToBytes(valueToSend);
    byte[] buffer = new byte[bufferSize - 4];
    // this makes sure that the data is put into an array of the size we want to send
    byte[] bytesToSend = concatAll(intBytes, buffer);
    System.out.println("Sending " + valueToSend + " as " + bytesToSend.length + " bytes");
    DatagramPacket packet = new DatagramPacket(bytesToSend,
            bufferSize, receiverAddress, receiverPort);
    socket.send(packet);
    Thread.sleep(delay);
    valueToSend++;
}
Receiving:
DatagramSocket socket = new DatagramSocket(receiverPort);
while (true) {
    DatagramPacket packet = new DatagramPacket(
            new byte[bufferSize], bufferSize);
    System.out.println("Waiting for datagram...");
    socket.receive(packet);
    int receivedValue = bytesToInt(packet.getData(), 0);
    System.out.println("Received: " + receivedValue
            + ". Expected: " + expectedValue);
    if (receivedValue == expectedValue) {
        receivedDatagrams++;
        totalDatagrams++;
    } else {
        droppedDatagrams++;
        totalDatagrams++;
    }
    expectedValue = receivedValue + 1;
    System.out.println("Expected Datagrams: " + totalDatagrams);
    System.out.println("Received Datagrams: " + receivedDatagrams);
    System.out.println("Dropped Datagrams: " + droppedDatagrams);
    System.out.println("Received: "
            + ((double) receivedDatagrams / totalDatagrams));
    System.out.println("Dropped: "
            + ((double) droppedDatagrams / totalDatagrams));
    System.out.println();
}
Overview
What is causing the inability to send/receive data locally?
Mostly buffer space. Imagine sending a constant 10MB/second while only able to consume 5MB/second. The operating system and network stack can't keep up, so packets are dropped. (This differs from TCP, which provides flow control and re-transmission to handle such a situation.)
Even when data is consumed without overflowing buffers, there might be small time slices where data cannot be consumed, so the system will drop packets. (Such as during garbage collection, or when the OS task switches to a higher-priority process momentarily, and so forth.)
This applies to all devices in the network stack. A non-local network, an Ethernet switch, router, hub, and other hardware will also drop packets when queues are full. Sending a 10MB/s stream through a 100MB/s Ethernet switch while someone else tries to cram 100MB/s through the same physical line will cause dropped packets.
Increase both the application's socket buffer size and the operating system's socket buffer limits.
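On the application side, that request looks something like this in Java (the port and the 8 MB figure are just examples). Note that the kernel silently caps the value at net.core.rmem_max, so it is worth reading the size back:

import java.net.DatagramSocket;

// Request a large receive buffer; the OS may clamp it to net.core.rmem_max.
DatagramSocket socket = new DatagramSocket(9876);
socket.setReceiveBufferSize(8 * 1024 * 1024);
System.out.println("Effective SO_RCVBUF: " + socket.getReceiveBufferSize());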
Linux
The default socket buffer size is typically 128k or less, which leaves very little room for pausing the data processing.
sysctl
Use sysctl to increase the transmit (write memory [wmem]) and receive (read memory [rmem]) buffers:
net.core.wmem_max
net.core.wmem_default
net.core.rmem_max
net.core.rmem_default
For example, to bump the value to 8 megabytes:
sysctl -w net.core.rmem_max=8388608
To make the setting persist, update /etc/sysctl.conf as well, such as:
net.core.rmem_max=8388608
An in-depth article on tuning the network stack dives into far more details, touching on multiple levels of how packets are received and processed in Linux from the kernel's network driver through ring buffers all the way to C's recv call. The article describes additional settings and files to monitor when diagnosing network issues. (See below.)
Before making any of the following tweaks, be sure to understand how they affect the network stack. There is a real possibility of rendering your network unusable. Choose numbers appropriate for your system, network configuration, and expected traffic load:
net.core.rmem_max=8388608
net.core.rmem_default=8388608
net.core.wmem_max=8388608
net.core.wmem_default=8388608
net.ipv4.udp_mem='262144 327680 434274'
net.ipv4.udp_rmem_min=16384
net.ipv4.udp_wmem_min=16384
net.core.netdev_budget=600
net.ipv4.ip_early_demux=0
net.core.netdev_max_backlog=3000
ethtool
Additionally, ethtool is useful to query or change network settings. For example, if ${DEVICE} is eth0 (use ip address or ifconfig to determine your network device name), then it may be possible to increase the RX and TX ring buffers using:
ethtool -G ${DEVICE} rx 4096
ethtool -G ${DEVICE} tx 4096
iptables
By default, iptables performs connection tracking on packets, which consumes CPU time, albeit minimal. For example, you can disable connection tracking of UDP packets on port 6004 using:
iptables -t raw -I PREROUTING 1 -p udp --dport 6004 -j NOTRACK
iptables -I INPUT 1 -p udp --dport 6004 -j ACCEPT
Your particular port and protocol will vary.
Monitoring
Several files contain information about what is happening to network packets at various stages of sending and receiving. In the following list ${IRQ} is the interrupt request number and ${DEVICE} is the network device:
/proc/cpuinfo - shows number of CPUs available (helpful for IRQ-balancing)
/proc/irq/${IRQ}/smp-affinity - shows IRQ affinity
/proc/net/dev - contains general packet statistics
/sys/class/net/${DEVICE}/queues/QUEUE/rps_cpus - relates to Receive Packet Steering (RPS)
/proc/softirqs - per-CPU software interrupt counts (including NET_RX/NET_TX)
/proc/net/softnet_stat - for packet statistics, such as drops, time squeezes, CPU collisions, etc.
/proc/sys/net/core/flow_limit_cpu_bitmap - enables per-CPU flow limiting (can help prevent large flows from drowning out small flows)
/proc/net/snmp
/proc/net/udp
Summary
Buffer space is the most likely culprit for dropped packets. There are numerous buffers strewn throughout the network stack, each having its own impact on sending and receiving packets. Network drivers, operating systems, kernel settings, and other factors can affect packet drops. There is no silver bullet.
Further Reading
https://github.com/leandromoreira/linux-network-performance-parameters
http://man7.org/linux/man-pages/man7/udp.7.html
http://www.ethernetresearch.com/geekzone/linux-networking-commands-to-debug-ipudptcp-packet-loss/
UDP packet scheduling may be handled by multiple threads at the OS level. That would explain why you receive them out of order even on 127.0.0.1.
Your expectations, as expressed in your question and in numerous comments to other answers, are wrong. All the following can happen even in the absence of routers and cables.
1. If you send a packet to any receiver and there is no room in his socket receive buffer it will get dropped.
2. If you send a UDP datagram larger than the path MTU it will get fragmented into smaller packets, which are subject to (1).
3. If all the packets of a datagram don't arrive, the datagram will never get delivered.
4. The TCP/IP stack has no obligation to deliver packets or UDP datagrams in order.
UDP packets are not guaranteed to reach their destination, whereas TCP delivery is guaranteed!
I don't know what makes you expect a dropped-packet percentage of less than 1% for UDP.
That being said, based on RFC 1122 (see section 3.3.2), the maximum buffer size guaranteed not to be split into multiple IP datagrams is 576 bytes. Larger UDP datagrams may be transmitted but they will likely be split into multiple IP datagrams to be reassembled at the receiving end point.
I would imagine that a reason contributing to the high rate of dropped packets you're seeing is that if one IP packet that was part of a large UDP datagram is lost, the whole UDP datagram will be lost. And you're counting UDP datagrams - not IP packets.

Java dropping half of UDP packets

I have a simple client/server setup. The server is in C and the client that is querying the server is Java.
My problem is that, when I send bandwidth-intensive data over the connection, such as Video frames, it drops up to half the packets. I make sure that I properly fragment the udp packets on the server side (udp has a max payload length of 2^16). I verified that the server is sending the packets (printf the result of sendto()). But java doesn't seem to be getting half the data.
Furthermore, when I switch to TCP, all the video frames get through but the latency starts to build up, adding several seconds delay after a few seconds of runtime.
Is there anything obvious that I'm missing? I just can't seem to figure this out.
Get a network tool like Wireshark so you can see what is happening on the wire.
UDP makes no retransmission attempts, so if a packet is dropped somewhere, it is up to the program to deal with the loss. TCP will work hard to deliver all packets to the program in order, discarding dups and requesting lost packets on its own. If you are seeing high latency, I'd bet you'll see a lot of packet loss with TCP as well, which will show up as retransmissions from the server. If you don't see TCP retransmissions, perhaps the client isn't handling the data fast enough to keep up.
Any UDP-based application protocol will inevitably be susceptible to packet loss, reordering and (in some circumstances) duplicates. The "U" in UDP could stand for "Unreliable", as in Unreliable Datagram Protocol. (OK, it really stands for "User" ... but it is certainly a good way to remember UDP's characteristics.)
UDP packet losses typically occur because your traffic is exceeding the buffering capacity of one or more of the "hops" between the server and client. When this happens, packets are dropped ... and since you are using UDP, there is no transport protocol-level notification that this is occurring.
If you use UDP in an application, the application needs to take account of UDP's unreliable nature, implementing its own mechanisms for dealing with dropped and out-of-order packets and for doing its own flow control. (An application that blasts out UDP packets with no thought to the effect that this may have on an already overloaded network is a bad network citizen.)
(In the TCP case, packets are probably being dropped as well, but TCP is detecting and resending the dropped packets, and the TCP flow control mechanism is kicking in to slow down the rate of data transmission. The net result is "latency".)
EDIT - based on the OP's comment, the cause of his problem was that the client was not "listening" for a period, causing the packets to (presumably) be dropped by the client's OS. The way to address this is to:
use a dedicated Java thread that just reads the packets and queues them for processing (see the sketch below), and
increase the size of the kernel packet queue for the socket.
But even when you take these measures you can still get packets dropped. For example, if the machine is overloaded, the application may not get execution time-slices frequently enough to read and queue all packets before the kernel has to drop them.
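A sketch of that arrangement (queue size, port and buffer size are arbitrary): the reader thread does nothing but socket.receive() and hands copies to a queue that the decoding thread drains at its own pace.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Reader thread: receive as fast as possible, queue copies for processing.
final DatagramSocket socket = new DatagramSocket(5000);
final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);

Thread reader = new Thread(() -> {
    byte[] buf = new byte[65535];
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    while (true) {
        try {
            socket.receive(packet);
            // If the queue is full, offer() drops this packet in the
            // application instead of blocking and letting the kernel
            // drop newer ones behind it.
            queue.offer(Arrays.copyOf(packet.getData(), packet.getLength()));
        } catch (Exception e) {
            break;
        }
    }
});
reader.start();

// Processing thread: drain the queue at whatever rate decoding allows.
while (true) {
    byte[] frame = queue.take();
    // ... decode/render the frame data here ...
}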
EDIT 2 - There is some debate about whether UDP is susceptible to duplicates. It is certainly true that UDP has no innate duplicate detection or prevention. But it is also true that the IP packet routing fabric that is the internet is unlikely to spontaneously duplicate packets. So duplicates, if they do occur, are likely to occur because the sender has decided to resend a UDP packet. Thus, to my mind while UDP is susceptible to problems with duplicates, it does not cause them per se ... unless there is a bug in the OS protocol stack or in the IP fabric.
Although UDP supports packets up to 65535 bytes in length (including the UDP header, which is 8 bytes - but see note 1), the underlying transports between you and the destination do not support IP packets that long. For example, Ethernet frames have a maximum size of 1500 bytes - taking into account overhead for the IP and UDP headers, that means that any UDP packet with a data payload length of more than about 1450 is likely to be fragmented into multiple IP datagrams.
A maximum-size UDP packet is going to be fragmented into at least 45 separate IP datagrams - and if any one of those fragments is lost, the entire UDP packet is lost. If your underlying packet loss rate is 1%, your application will see a loss rate of about 36% (1 - 0.99^45 ≈ 0.36)!
If you want to see fewer packets lost, don't send huge packets - limit the data in each packet to about 1400 bytes (or even do your own path MTU discovery to figure out the maximum size you can safely send without fragmentation).
Of course, UDP is also subject to the limitations of IP, and IP datagrams have a maximum size of 65535, including the IP header. The IP header ranges in size from 20 to 60 bytes, so the maximum amount of application data transportable within a UDP packet might be as low as 65467.
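For instance, a sketch of the chunking approach suggested above (the 1400-byte figure is a conservative guess for a 1500-byte Ethernet MTU, and the method name is mine); the receiver still has to deal with ordering, loss and reassembly itself:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Split 'data' into datagrams small enough to avoid IP fragmentation.
void sendChunked(DatagramSocket socket, byte[] data, InetAddress dest, int port)
        throws Exception {
    final int chunk = 1400;
    for (int off = 0; off < data.length; off += chunk) {
        int len = Math.min(chunk, data.length - off);
        socket.send(new DatagramPacket(data, off, len, dest, port));
    }
}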
The problem might be to do with your transmit buffer getting filled up in your DatagramSocket. Only send the number of bytes in one go indicated by DatagramSocket.getSendBufferSize(). Use setSendBufferSize(int size) to increase this value. From the javadoc:
If #send() is used to send a DatagramPacket that is larger than the setting of SO_SNDBUF then it is implementation specific if the packet is sent or discarded.
IP supports packets up to 65535 bytes including a 20 byte IP packet header. UDP supports datagrams up to 65507 bytes, plus the 20 byte IP header and the 8 byte UDP header. However the network MTU is the practical limit; don't forget that it includes these 28 bytes of headers (the Ethernet frame header is additional, outside the MTU). The real practical limit for unfragmented UDP is the minimum MTU of 576 bytes less all the overheads.

How to measure the response time between a server and a client that communicate using the UDP protocol?

The aim of the test is to check the shape of the network response time between two hosts (client and server). Network response = the round-trip time it takes to send a packet of data and receive it back. I am using the UDP protocol. How could I compute the response time? I could just compute TimeOfClientResponseReceived - TimeOfClientRequest, but I'm not sure if this is the best approach. I can't do this only from inside the code, and I'm thinking that the OS and computer load might interfere in the measuring process initiated by the client. By the way, I'm using Java.
I would like to listen to your ideas.
Just use ping - RTT ( round trip time ) is one of the standard things it measures. If the size of the packets you're sending matters then ping also lets you specify the size of the data in each packet.
For example, I just sent 10 packets each with a 1024 byte payload to my gateway displaying only the summary statistics:
ping -c 10 -s 1024 -q 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 1024(1052) bytes of data.
--- 192.168.2.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9004ms
rtt min/avg/max/mdev = 2.566/4.921/8.411/2.035 ms
The last line starting with rtt ( round trip time ) is the info you're probably looking for.
I think the method you mention is fine. OS and computer load might interfere, but their effect would probably be negligible compared to the amount of time it takes to send the packets over the network.
To even things out a bit, you could always send several packets back and forth and average the times out.
If you have access to the code, then yes, just measure the time between when the request was sent and the receipt of the answer. Bear in mind that the standard timer in Java only has millisecond resolution.
Alternatively, use Wireshark to capture the packets on the wire - that software also records the timestamps against packets.
Clearly in both cases the measured time depends on how fast the other end responds to your original request.
If you really just want to measure network latency and control the far end yourself, use something like the echo 7/udp service that many UNIX servers still support (albeit it's usually disabled to prevent its use in reflected DDoS attacks).
It would be nice if you could send ICMP packets - I guess, because they are answered directly by the network layer, your answer would lose no time in user mode on the server.
Sending ICMP packets in Java, however, does not seem to be possible. You could do:
boolean status = InetAddress.getByName(host).isReachable(timeOut);
This will send an ICMP packet, but that is not what you want.
However, if you start the responder daemon on the server side with a higher priority, you will reduce the effect of server load.
Actually, server load does not play a role as long as it is below 100% CPU.
Use ping first, but you can also measure the RTT by sending a packet and having the other end send the packet back.
It is important that you measure when the boxes are under typical load because that will tell you the RTT you can expect to typically get.
You can average the latencies over many packets, millions or even billions to get a consistent value.
A few answers have already mentioned that using an ICMP ping to measure the RTT is a good way.
I'd like to offer another way to measure the RTT via UDP, if you control both the server and client side. The basic flow is as follows:
Send a UDP packet P1 with time-stamp C1 from client to server
Server puts current time-stamp S1 into the packet when it receives P1
Server-side processing
Server puts current time-stamp S2 into the packet just before sending it back to the client
Client records current time-stamp C2 when it receives the reply
After that, we can calculate RTT = (C2 - C1) - (S2 - S1).
It may not be as accurate as an ICMP ping and needs extra control on both the client and server side, but it is manageable.
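A rough sketch of that exchange in Java (the port, 24-byte packet layout and method names are my own choices, not from the answer). System.nanoTime() values are only ever compared on the same host, so clock differences between client and server cancel out:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class UdpRttSketch {

    // Run on the server host: stamp S1 on receipt, S2 just before the reply.
    public static void echoServer(int port) throws Exception {
        try (DatagramSocket sock = new DatagramSocket(port)) {
            byte[] buf = new byte[24];   // [0..7]=C1, [8..15]=S1, [16..23]=S2
            while (true) {
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                sock.receive(p);
                long s1 = System.nanoTime();
                // ... any server-side processing goes here ...
                long s2 = System.nanoTime();
                ByteBuffer.wrap(buf).putLong(8, s1).putLong(16, s2);
                sock.send(new DatagramPacket(buf, buf.length, p.getAddress(), p.getPort()));
            }
        }
    }

    // Run on the client host: RTT = (C2 - C1) - (S2 - S1), in nanoseconds.
    public static long measureRtt(String host, int port) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] buf = new byte[24];
            long c1 = System.nanoTime();
            ByteBuffer.wrap(buf).putLong(0, c1);
            sock.send(new DatagramPacket(buf, buf.length, InetAddress.getByName(host), port));
            sock.receive(new DatagramPacket(buf, buf.length));
            long c2 = System.nanoTime();
            long s1 = ByteBuffer.wrap(buf).getLong(8);
            long s2 = ByteBuffer.wrap(buf).getLong(16);
            return (c2 - c1) - (s2 - s1);
        }
    }
}

Averaging measureRtt() over many round trips, as suggested in other answers, smooths out scheduling noise on either host.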
While ping is a good start for measuring latency, it uses the ICMP protocol instead of UDP. Packets of different protocols usually have different priorities on routers etc.
You could use netperf to measure the UDP roundtrip time:
http://www.netperf.org/netperf/training/Netperf.html#0.2.2Z141Z1.SUJSTF.9R2DBD.T
