I have to create a Java program that simulates around 50-100 nodes. I want to test a few routing algorithms and analyse network performance. I tried simulating the nodes with threads, but my CPU utilization climbs sharply as I add more threads. Is there a better way to simulate a network in Java, and if so, what is it?
You can create a proxy server which forwards traffic after a delay, and that delay can be scaled to model a bandwidth limitation. This is not as good as a real LAN at exposing all the problems you can run into, but it can be a good start.
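A minimal sketch of that idea (the host, ports, and fixed delay are placeholders; a fuller version would also scale the delay with the buffer size to model bandwidth):

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Minimal sketch of a delaying TCP proxy (host/ports are placeholders).
// Each accepted connection is piped to the target, and every buffer is
// held back by LATENCY_MS to emulate a slow link.
public class DelayingProxy {
    static final int LISTEN_PORT = 9000;
    static final String TARGET_HOST = "localhost";
    static final int TARGET_PORT = 9001;
    static final long LATENCY_MS = 50;       // artificial one-way delay

    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newCachedThreadPool();
        try (ServerSocket server = new ServerSocket(LISTEN_PORT)) {
            while (true) {
                Socket client = server.accept();
                Socket target = new Socket(TARGET_HOST, TARGET_PORT);
                pool.submit(() -> pipe(client, target));   // client -> target
                pool.submit(() -> pipe(target, client));   // target -> client
            }
        }
    }

    static void pipe(Socket from, Socket to) {
        byte[] buf = new byte[1024];
        try (InputStream in = from.getInputStream();
             OutputStream out = to.getOutputStream()) {
            int n;
            while ((n = in.read(buf)) != -1) {
                Thread.sleep(LATENCY_MS);    // crude latency injection
                out.write(buf, 0, n);
                out.flush();
            }
        } catch (IOException | InterruptedException ignored) {
            // connection closed or interrupted; let the pipe end (fine for a sketch)
        }
    }
}
```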
I have a simple protocol that does, for example, file transfer from one PC to another. I have a test which uses a "virtual UDP path" and two clients. The test tries to send a given file from one instance to the other. The instances talk to each other through the "virtual UDP path", which I have implemented with two blocking queues in place of the UDP sockets used in the real world.
Now I want to start catching regressions by monitoring the data transferred through my virtual UDP path. I also want to simulate packet loss and network lag with some logic in the virtual UDP path implementation.
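Roughly what I have in mind (a simplified, illustrative sketch; one of these per direction, with the loss and lag hooks I want to add, and a byte counter for the regression metric):

```java
import java.util.Random;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative one-way "virtual UDP path": a blocking queue that can
// drop packets and add delay. Two of these give a bidirectional link.
public class VirtualUdpPath {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final AtomicLong bytesTransferred = new AtomicLong();
    private final Random random = new Random();
    private final double lossRate;   // e.g. 0.05 = drop 5% of packets
    private final long delayMs;      // artificial one-way lag

    public VirtualUdpPath(double lossRate, long delayMs) {
        this.lossRate = lossRate;
        this.delayMs = delayMs;
    }

    // Called instead of DatagramSocket.send(...)
    public void send(byte[] packet) {
        if (random.nextDouble() < lossRate) {
            return;                              // simulate packet loss: silently drop
        }
        bytesTransferred.addAndGet(packet.length);
        queue.offer(packet);
    }

    // Called instead of DatagramSocket.receive(...)
    public byte[] receive() throws InterruptedException {
        byte[] packet = queue.take();
        TimeUnit.MILLISECONDS.sleep(delayMs);    // simulate network lag (crude)
        return packet;
    }

    // Custom performance value I want to report at the end of the test.
    public long getBytesTransferred() {
        return bytesTransferred.get();
    }
}
```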
Is it possible to monitor anything other than the execution speed of a test? I want to monitor a custom performance value for a test (for example, the number of bytes transferred to complete it) and report those custom values at the end of the test. Is this possible with TestNG, and if so, how?
I want the monitoring to be done in a way that is natural for the TestNG framework, so that, for example, these custom performance values end up in the output XML file (testng-results.xml) and can possibly be visualized in Jenkins through the Performance plugin.
Short answer: yes.

Long answer: performance always relates to something else; it could be speed performance or memory performance. In Java it is a little bit tricky to know the exact memory allocation, because you don't explicitly allocate memory yourself.
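One way to record a custom value like transferred bytes, as a sketch: attach it to the TestNG result and pick it up in a listener. Whether the attribute also appears in testng-results.xml depends on your TestNG version, so verify that; runTransferAndCountBytes below is a hypothetical placeholder for the real test logic.

```java
import org.testng.ITestResult;
import org.testng.Reporter;
import org.testng.TestListenerAdapter;
import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

// Sketch: attach a custom metric to the current TestNG result and
// collect it in a listener when the test finishes.
@Listeners(FileTransferTest.BytesReporter.class)
public class FileTransferTest {

    @Test
    public void transferFile() {
        long bytesTransferred = runTransferAndCountBytes();   // hypothetical test logic
        // Record the custom performance value on the current result.
        Reporter.getCurrentTestResult().setAttribute("bytesTransferred", bytesTransferred);
    }

    private long runTransferAndCountBytes() {
        return 123_456L;                                      // placeholder
    }

    public static class BytesReporter extends TestListenerAdapter {
        @Override
        public void onTestSuccess(ITestResult result) {
            Object bytes = result.getAttribute("bytesTransferred");
            if (bytes != null) {
                System.out.println(result.getName() + " bytesTransferred=" + bytes);
            }
        }
    }
}
```

A custom IReporter could also write the same values into a separate CSV/XML file in whatever format the Jenkins Performance plugin expects.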
I'm working on a multiplayer project in Java and I am trying to refine how I gather my latency measurement results.
My current setup is to send a batch of UDP packets at regular intervals; they get timestamped by the server and returned, then the latency is calculated and recorded. I take a number of samples and then work out the average to get the latency.
Does this seem like a reasonable solution to work out the latency on the client side?
I would have the client timestamp the outgoing packet, and have the response preserve the original timestamp. This way you can compute the roundtrip latency while side-stepping any issues caused by the server and client clocks not being exactly synchronized.
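For example, a bare-bones ping measured entirely on the client's clock might look like this (the server address and port are placeholders; the server is assumed to echo the payload back unchanged):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Sketch of client-side round-trip measurement: the client timestamps the
// ping, the server echoes the packet back, and the client computes RTT
// using only its own clock, so clock synchronization is irrelevant.
public class PingClient {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress server = InetAddress.getByName("example.game.server");
            byte[] out = ByteBuffer.allocate(Long.BYTES)
                                   .putLong(System.nanoTime())   // client timestamp
                                   .array();
            socket.send(new DatagramPacket(out, out.length, server, 7777));

            byte[] in = new byte[Long.BYTES];
            DatagramPacket reply = new DatagramPacket(in, in.length);
            socket.receive(reply);                               // server echoes payload

            long sentAt = ByteBuffer.wrap(reply.getData()).getLong();
            long rttMs = (System.nanoTime() - sentAt) / 1_000_000;
            System.out.println("round-trip latency: " + rttMs + " ms");
        }
    }
}
```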
You could also timestamp the packets already used in your game protocol, so you have more data points for your statistics. (This also avoids the overhead of an additional burst of traffic: you simply use the data you are already exchanging to compute your stats.)
You could also start to use other metrics (for example variance) in order to make a more accurate estimation of your connection quality.
If you haven't really started your project yet, consider using a networking framework like KryoNet, which has RMI and efficient serialisation and which will automatically send ping requests using UDP. You can get the ping time values easily.
If you are measuring round-trip latency, factors like clock drift and the precision of the hardware clock and OS APIs will affect your measurement. Without spending money on hardware, the closest you can get is the RDTSC instruction. But RDTSC doesn't come without its own problems; you have to be careful how you call it.
Please stop me before I make a big mistake :) - I'm trying to write a simple multi-player quiz game for Android phones to get some experience writing server code.
I have never written server code before.
I have experience in Java, and using sockets seems like the easiest option for me. A browser game would mean platform independence, but I don't know how to get around the lack of server-to-browser push over HTTP.
This is how the game would play out, it should give some idea of what I require;
A user starts the App and it connects using a Socket to my server.
The server waits for 4 players, groups them into a game and then broadcasts the first question for the quiz.
After all the players have submitted their answers (or 5 seconds have elapsed), the server distributes the correct answer along with the next question.
That's the basics, you can probably fill in the finer details, it's just a toy project really.
MY QUESTION IS:
What are the pitfalls of using a simple JAR on the server to handle client requests? The server code registers a ServerSocket when it is first run and creates a thread pool for dealing with incoming client connections. Is there an option that is inherently better for connecting to multiple clients in real time with two-way communication?
A simple example is in the Sun tutorials; at the bottom you can see the source for a multithreaded server. My server is largely the same, except that I create a pool of threads up front to reduce overhead.
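For reference, the shape of that pooled server is roughly the following (a simplified sketch; the port and pool size here are arbitrary):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Compact version of the pooled accept loop described above: one
// ServerSocket, a fixed thread pool, one task per client connection.
public class QuizServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(32);   // size is a guess
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Socket client = server.accept();
                pool.submit(() -> handleClient(client));
            }
        }
    }

    static void handleClient(Socket client) {
        try (Socket c = client) {
            // read answers, group players into games, broadcast questions, etc.
        } catch (IOException e) {
            // client dropped; clean up game state here
        }
    }
}
```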
How many clients would you expect this system to be able to handle? With a new thread for each client I can see that being a limit, along with the number of free sockets for concurrent players. Threads seem to top out at around 6,500, with the number of available sockets nearly ten times that.
To be honest, if my game could handle 20 concurrent players that would be fine, but I'm trying to learn whether this approach is inherently stupid. Any articles on setting up a simple chess server or something similar would be amazing; I just can't find any.
Thanks in advance oh knowledgeable ones,
Gav
You can handle 20 concurrent players fine with a Java server. The biggest thing to make sure you do is avoid any kind of blocking I/O like it was the devil itself.
As a bonus, if you stick with non-blocking I/O you can probably do the whole thing single-threaded.
Scaling much past 100 users or so may require moving to multiple processes/servers, depending on how much load each user places on the server.
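A bare-bones sketch of that single-threaded, non-blocking shape, just to show the structure (the port is arbitrary):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// Single-threaded non-blocking server using a Selector: all clients are
// multiplexed on one thread, with no thread per connection.
public class NioQuizServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(5000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        // parse the answer, update game state, queue a response
                    }
                }
            }
        }
    }
}
```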
It should be able to do it without an issue as long as you code it properly.
Project Darkstar
You can get around the "push from server to client over HTTP" problem by using the Long Poll method.
However, using TCP sockets for this will be fine too. Plenty of games have been written this way.
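For illustration, long polling from the client side might look roughly like this (the URL is a placeholder; the assumption is that the server holds each request open until it has something to push, such as the next question):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Client-side long polling sketch: each request stays open until the
// server has an event to deliver, then the client immediately reconnects.
public class LongPollClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/quiz/events");   // placeholder
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setReadTimeout(30_000);   // server may hold the request up to ~30 s
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String event = in.readLine();      // blocks until the server responds
                if (event != null) {
                    System.out.println("server pushed: " + event);
                }
            } catch (java.net.SocketTimeoutException timedOut) {
                // no event this round; just poll again
            }
        }
    }
}
```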
We are doing some Java stress runs (involving network I/O). Initially everything is fine and the system responds very fast (average latency in the test is around 2 ms). But hours later, when I redo the same test, I observe that performance has degraded (20-60 ms). It's the same JAR files, the same JVM, and the same LAN over which the stress is running. I don't understand the reason for this behavior.
The LAN is 1 Gbps, and for the stress requirements I'm sure we are not using all of it.
So my questions:
Could it be caused by some of the switches in the LAN?
Does the machine slow down after some time? (The machines were last restarted about 6 months ago, well before the stress runs started; they are RHEL5, Xeon 64-bit quad-core.)
What is the general way to debug such issues?
A few questions...
How much of the environment is under your control, and are you putting any measures in place to ensure it is consistent for each run? For example, are you sharing the network with other systems, and is the machine you're using dedicated solely to your stress testing?
The way I'd look at this is to start gathering details on what your machine and code are up to. That means using perfmon (Windows) or sar (Unix) to find out what the OS and hardware are doing, and attaching a profiler to make sure your code is behaving the same way and to help pinpoint where the bottleneck is occurring from a code perspective.
Nothing terribly detailed but something I hope that will help get you started.
The general way is "measure everything". In particular, that might mean:
Ensure the time on all servers is the same (use NTP or something similar);
Measure how long it takes to generate a request (what if the request generator has a bug?);
Measure when the request leaves the client machine(s), or at least how long the I/O takes. Sometimes it is enough to know the average time across many requests.
Measure when the request arrives.
Measure how long it takes to generate a response.
Measure how long it takes to send the response.
You can probably start with the fifth item, as that is (you believe) your critical path. But it is best to log as much as you can, since, by your own account, it takes a long time to produce different results.
If you don't want to modify your code, look for cases where you can sniff data without intervening (e.g. define a servlet filter in your web.xml).
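A sketch of that servlet-filter approach, just to show the shape (it only logs server-side processing time, so it needs no changes to application code, only an entry in web.xml):

```java
import java.io.IOException;
import javax.servlet.*;

// Sketch of a timing filter (declared in web.xml): logs per-request
// server-side processing time without modifying application code.
public class TimingFilter implements Filter {
    @Override
    public void init(FilterConfig config) {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            chain.doFilter(req, res);                // let the request proceed
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("request handled in " + elapsedMs + " ms");
        }
    }

    @Override
    public void destroy() {}
}
```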
I have 100+ channels of video streams to process, all at the same time. I need to capture the video, generate thumbnails, and serve them out as a web service. For the thumbnail generation I can use JMF etc. (I noticed there is another post talking about how to generate and access better-quality thumbnails from larger image files). But my concern is: how do I scale? Java EE EJBs or simply Java SE threads? What are the pros and cons? And how would I scale horizontally using EJB?
I am not that familiar with scalability issues, and I really appreciate your kind suggestions.
Thanks.
Agreed... threads should help you scale on a single machine. If you want to scale across different machines, use Terracotta.
Java SE Threads can help you scale on a single machine, but if you are going to need to scale horizontally across different machines, EJB would be one way to do it.
If it was me, I'd probably farm it out to a separate web service tier that could run on as many machines as needed, and then load balance between those machines.
I don't see a reason to use EJB in this situation. You have to ask yourself where the bottleneck is; my bet is the video processing. I would profile the application and see how many threads can be processing before they spend more time waiting for their timeslice than actually processing. Past a certain point, adding more threads will not add more throughput. At that point you know what one machine can do and how many machines you will need to sustain a given throughput. How you scale across machines is another question.
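A rough way to find that point is to run the same batch of jobs with increasing pool sizes and watch where throughput stops improving. This is only a sketch; generateThumbnail here is a placeholder for the real per-frame work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Rough throughput probe: run the same batch of (placeholder) thumbnail
// jobs with increasing pool sizes and see where throughput flattens out.
public class ThroughputProbe {
    public static void main(String[] args) throws Exception {
        int jobs = 200;
        for (int threads = 1; threads <= 16; threads *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.nanoTime();
            List<Future<?>> futures = new ArrayList<>();
            for (int i = 0; i < jobs; i++) {
                futures.add(pool.submit(ThroughputProbe::generateThumbnail));
            }
            for (Future<?> f : futures) {
                f.get();                                   // wait for all jobs
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%2d threads: %d jobs in %d ms%n", threads, jobs, elapsedMs);
            pool.shutdown();
        }
    }

    static void generateThumbnail() {
        // placeholder for the real CPU/I/O-bound frame capture and thumbnail work
        try { Thread.sleep(20); } catch (InterruptedException ignored) {}
    }
}
```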
These are two distinct problems.
Capturing/processing sounds like a render-farm-style problem. Those scale trivially in the horizontal direction. Most solutions involve a queue of jobs; you don't even need to do this in Java, just find a simple solution you like. Searching Google for "render farm ffmpeg" or something similar should yield results.
Your 'serving out as a web service' part is somewhat undefined. If you want those videos to be accessible, you might just need to put them on an HTTP server; those can be load-balanced easily and are thus horizontally scalable. Storage speed or network bandwidth would probably be your first bottlenecks there.
Leave the formal J2EE stack behind.
Instead, use a nice message queue that talks JMS, with X number of JVMs running Y number of threads as consumers.
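A rough sketch of that shape using the plain JMS API (the queue name is made up, and the ConnectionFactory comes from whichever broker you pick, e.g. ActiveMQ):

```java
import javax.jms.*;

// Rough shape of the suggested design: each JVM starts Y consumer threads,
// each pulling capture/thumbnail jobs off a shared JMS queue.
public class ThumbnailWorker {
    public static void startConsumers(ConnectionFactory factory, int threads) throws JMSException {
        Connection connection = factory.createConnection();
        connection.start();
        for (int i = 0; i < threads; i++) {
            // one session + consumer per thread of concurrency
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("video.jobs"));
            consumer.setMessageListener(message -> {
                try {
                    String streamId = ((TextMessage) message).getText();
                    processStream(streamId);     // capture a frame, write the thumbnail
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
        }
    }

    static void processStream(String streamId) {
        // placeholder for the actual capture/thumbnail work
    }
}
```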