Can anyone tell me how to get network statistics using Java? Or how to measure network performance with simple time-related metrics?
Thanks a lot!
It depends on what you mean.
If you want to measure network speed when downloading or uploading something, you can create a network connection, send some garbage data, and measure how long it takes to send. You will get better results if both the server and client sides are under your control.
But you can even write a simple client that downloads a file from an external URL and measures how long it takes.
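For example, a minimal download timer might look like this (a rough sketch; the URL is a placeholder for a file you control or are allowed to fetch):

```java
import java.io.InputStream;
import java.net.URL;

public class DownloadSpeedTest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/testfile.bin"); // placeholder URL

        byte[] buf = new byte[8192];
        long totalBytes = 0;

        long start = System.nanoTime();
        try (InputStream in = url.openStream()) {
            int n;
            while ((n = in.read(buf)) != -1) {
                totalBytes += n;
            }
        }
        double seconds = (System.nanoTime() - start) / 1e9;

        System.out.printf("Downloaded %d bytes in %.2f s (%.1f KB/s)%n",
                totalBytes, seconds, totalBytes / seconds / 1024);
    }
}
```

Use a reasonably large file, otherwise connection setup time dominates and the throughput figure is meaningless.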
If you wish to measure the network parameters of other applications, I recommend reading about JPcap.
If you are looking for Bandwidth monitoring using Java, check the link below.
http://docstore.mik.ua/orelly/java-ent/dist/ch08_04.htm
I've tried to look around for data concerning how much of a bandwidth hog a chat application is.
In this case it would be either a Java/AJAX implementation or just plain Java, using a client/server relationship.
I want to find out how much bandwidth such a system would use when it's written in Java. The benchmark could be 15-20 users from all over the world, peaking at maybe 8 or 10 connected at a time. I know it might seem vague, but I simply can't seem to find data on this specific situation.
Can anyone point me to some resources regarding this? Or chip in if possible?
Unless the chat application is sending photos or files, it will use a trivial amount of data. With a maximum of ten users connected at once, you could wrap the messages in bandwidth-hogging XML and I would still stick with my answer: it will use a trivial amount of bandwidth.
Say all ten of your users are fast typists and very chatty. They type non-stop at 100 words per minute. Break that down to 10 sentences per minute and wrap each of these in a message to the server. Add some XML data describing who the message came from and whether it is private to another user or sent to a group of users, and maybe you get 1K per message. So each user is then sending 1K to the server every 6 seconds. With 10 users, we get 10K sent to the server every 6 seconds.
That works out to under 2 KB/s, so by my estimate we could connect your server to a 56K modem from 1995 and you'd be fine.
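If you want to sanity-check that estimate, here is the back-of-envelope arithmetic as code (all numbers are the assumptions from above):

```java
public class ChatBandwidthEstimate {
    public static void main(String[] args) {
        int users = 10;
        int messageBytes = 1024;              // generous XML-wrapped message
        double messagesPerSecond = 1.0 / 6.0; // one message per user every 6 s

        double bytesPerSecond = users * messageBytes * messagesPerSecond;
        double kilobitsPerSecond = bytesPerSecond * 8 / 1000;

        // Prints roughly 1707 B/s, ~13.7 kbit/s: well within a 56K modem.
        System.out.printf("~%.0f B/s, ~%.1f kbit/s%n",
                bytesPerSecond, kilobitsPerSecond);
    }
}
```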
The reason you can't find data about this is that there is nothing particularly Java- or AJAX-related here. Bandwidth usage depends on the data you send and receive over the network, and is therefore determined by the protocol you design to pass data around; it has nothing to do with whether you use Java only, AJAX in combination with Java, CGI scripts, PL/I or Assembler.
You can code a chat application in Assembler that will be a worse bandwidth hog than a chat application coded in Java.
In order to know your bandwidth impact, you need to analyze your data model, data flow and your overall communication protocol: namely, what data is being sent, in what structure, and how frequently.
I have coded a server in Java that will have several clients connected to it. I want to be able to see how much data is sent to each client so I can make decisions like allowing more or fewer clients, or increasing/decreasing the frequency at which the data is sent.
How can I do that?
I'm currently using Java's Socket API, but if another library gives me this easily, then a change can be made. The server will run on a Linux flavor, likely Ubuntu, so an OS-specific answer is welcome too.
When you write data to the socket, you need to remember how much you sent. There really isn't a smarter way to do this.
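For example, you could wrap each client socket's OutputStream in a small counting wrapper (a minimal sketch, assuming one stream per client):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicLong;

/** Wraps an OutputStream and counts every byte written through it. */
public class CountingOutputStream extends FilterOutputStream {
    private final AtomicLong bytesWritten = new AtomicLong();

    public CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        bytesWritten.incrementAndGet();
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        bytesWritten.addAndGet(len);
    }

    public long getBytesWritten() {
        return bytesWritten.get();
    }
}
```

Construct one per client with new CountingOutputStream(socket.getOutputStream()), write through it as usual, and poll getBytesWritten() whenever you want per-client totals.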
Generally speaking, you would allow the server to have a limited number of connections. Trying to tune the system based on bandwidth restrictions is very hard to get right.
I have to create a Java program that simulates around 50-100 nodes. I want to test a few routing algorithms and analyse network performance. I tried simulating nodes with threads, but my CPU utilization shoots up when I use more threads. Is there a way to simulate a network in Java? If so, how?
You can create a proxy server which passes traffic after a delay which can include a delay based on a bandwidth limitation. This is not as good as a real LAN in showing all the problems you can have, but it can be a good start.
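A minimal sketch of such a proxy is below; the host, ports, and delay are placeholder values, and sleeping before each forwarded chunk is only a crude latency model:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DelayProxy {
    public static void main(String[] args) throws Exception {
        int listenPort = 9000;           // placeholder: port clients connect to
        String targetHost = "localhost"; // placeholder: the real node's host
        int targetPort = 9001;           // placeholder: the real node's port
        long delayMillis = 50;           // simulated one-way latency

        try (ServerSocket server = new ServerSocket(listenPort)) {
            while (true) {
                Socket client = server.accept();
                Socket target = new Socket(targetHost, targetPort);
                pump(client.getInputStream(), target.getOutputStream(), delayMillis);
                pump(target.getInputStream(), client.getOutputStream(), delayMillis);
            }
        }
    }

    // Copies bytes from in to out on a daemon thread, delaying each chunk.
    private static void pump(InputStream in, OutputStream out, long delayMillis) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[4096];
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    Thread.sleep(delayMillis); // crude latency model
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (Exception ignored) {
                // connection closed; let the thread die
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```

To model a bandwidth cap as well, you could shrink the buffer and compute the sleep from the number of bytes forwarded instead of using a fixed delay.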
I have a Java Swing application that uploads files to a server. It uses all the available upload bandwidth, which is okay when I'm at home, but it eats a massive amount of upload bandwidth when I'm at work, so I would like some setting to limit bandwidth usage. How do I do it?
It's a multithreaded application so overriding the read method and adding extra logic would make the code more complex.
Is there a simple JVM setting for that? Or is there some Java method like SomeJREClass.setMaximumAllowedBandwidth(int)?
Thanks in advance
There is an open source library, token-bucket, which implements the token bucket algorithm in Java. Maybe it can solve your problem.
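If you'd rather not take on a dependency, the algorithm itself is small. Here is a hand-rolled sketch (my own illustration, not the library's API) where each byte sent costs one token:

```java
/** Throttles writers to roughly bytesPerSecond using a token bucket. */
public class ThrottledSender {
    private final long bytesPerSecond; // bucket capacity and refill rate
    private double tokens;
    private long lastRefillNanos;

    public ThrottledSender(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.tokens = bytesPerSecond;
        this.lastRefillNanos = System.nanoTime();
    }

    /** Blocks until count bytes' worth of tokens are available. */
    public synchronized void acquire(int count) throws InterruptedException {
        while (true) {
            // Refill tokens in proportion to elapsed time, capped at capacity.
            long now = System.nanoTime();
            tokens = Math.min(bytesPerSecond,
                    tokens + (now - lastRefillNanos) / 1e9 * bytesPerSecond);
            lastRefillNanos = now;

            if (tokens >= count) {
                tokens -= count;
                return;
            }
            wait(10); // release the lock while tokens accrue
        }
    }
}
```

Share one instance across your upload threads and call acquire(chunk.length) before each socket write; keep each chunk no larger than the configured rate, or acquire() can never be satisfied.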
I'm working on a multiplayer project in Java and I am trying to refine how I gather my latency measurement results.
My current setup is to send a batch of UDP packets at regular intervals that get timestamped by the server and returned; latency is then calculated and recorded. I take a number of samples and work out the average to get the latency.
Does this seem like a reasonable solution to work out the latency on the client side?
I would have the client timestamp the outgoing packet, and have the response preserve the original timestamp. This way you can compute the round-trip latency while side-stepping any issues caused by the server and client clocks not being exactly synchronized.
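A minimal sketch of that approach, assuming the server simply echoes the payload back unchanged (host and port are placeholders):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class PingClient {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress server = InetAddress.getByName("localhost"); // placeholder
            int port = 7777;                                         // placeholder

            // Stamp the packet with the client's own clock.
            byte[] payload = ByteBuffer.allocate(Long.BYTES)
                    .putLong(System.nanoTime())
                    .array();
            socket.send(new DatagramPacket(payload, payload.length, server, port));

            // The echoed packet still carries our original timestamp.
            byte[] buf = new byte[Long.BYTES];
            DatagramPacket response = new DatagramPacket(buf, buf.length);
            socket.receive(response);

            long sentNanos = ByteBuffer.wrap(response.getData()).getLong();
            long rttMillis = (System.nanoTime() - sentNanos) / 1_000_000;
            System.out.println("Round-trip time: " + rttMillis + " ms");
        }
    }
}
```

Because both timestamps come from the same clock, the server/client clock offset never enters the calculation.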
You could also timestamp the packets already used in your game protocol, so you have more data to feed into your statistics. (This method also avoids the overhead of an additional burst of data: you simply use the data you are already exchanging to do your stats.)
You could also start to use other metrics (for example, variance) in order to make a more accurate estimate of your connection quality.
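For example, a small helper like this (a sketch) summarizes a batch of round-trip samples with mean and variance, where a high variance signals jitter even when the average looks fine:

```java
public class LatencyStats {
    /** Returns { mean, variance } of the given round-trip samples in ms. */
    public static double[] meanAndVariance(long[] samplesMillis) {
        double mean = 0;
        for (long s : samplesMillis) mean += s;
        mean /= samplesMillis.length;

        double variance = 0;
        for (long s : samplesMillis) variance += (s - mean) * (s - mean);
        variance /= samplesMillis.length;

        return new double[] { mean, variance };
    }
}
```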
If you haven't really started your project yet, consider using a networking framework like KryoNet, which has RMI and efficient serialisation and which will automatically send ping requests using UDP. You can get the ping time values easily.
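As I recall the KryoNet API (worth double-checking against its docs), reading the ping value looks roughly like this; the host and ports are placeholders:

```java
import com.esotericsoftware.kryonet.Client;

public class KryoPing {
    public static void main(String[] args) throws Exception {
        Client client = new Client();
        client.start();
        client.connect(5000, "localhost", 54555, 54777); // placeholder endpoint

        client.updateReturnTripTime(); // asks the server for a ping reply
        Thread.sleep(500);             // crude wait for the response to arrive
        System.out.println("Ping: " + client.getReturnTripTime() + " ms");

        client.stop();
    }
}
```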
If you are measuring round-trip latency, factors like clock drift and the precision of the hardware clock and OS APIs will affect your measurement. Without spending money on hardware, the closest you can get is the RDTSC instruction. But RDTSC doesn't come without its own problems; you have to be careful how you call it.