This might be a really stupid question, but I haven't really found answers online (at least none I can understand); I have only found some benchmark results like these, produced with specific benchmark software.
Let me give a bit of context: I am currently developing a multithreaded Java program on an Ubuntu machine. I have a Windows 10 machine in the office for test purposes, and the program is supposed to run on a Windows 7 machine connected to a production line.
I am not interested in Ubuntu performance in this case, as the customer wants to run it on a Windows machine. When I test the software on the Windows 10 machine, it just "feels like" it runs faster than on the Windows 7 PC. I must say that I don't have full access to the Windows 7 machine, so I can't take the time to test it properly. Also, as far as I know, the machines have exactly the same hardware components and run the same Java version (always the latest update of Java 8).
Does this make sense? Can updating from Windows 7 to Windows 10 make a Java program run better (by optimizing threads, maybe)? This question is based on pure speculation and no actual data at all, so I am sorry if it makes no sense.
Thank you all for your patience.
Yes. The order of thread execution is not guaranteed by the JVM, and it can be influenced by the other processes the OS is handling.
Yes, the OS will have an effect on thread performance: Java threads execute on threads of the JVM, and the JVM in turn is allocated threads by the OS it runs on, since the JVM cannot interact with the processor's threads directly.
So a multithreaded program's performance will be affected by the underlying OS, which allocates threads to the JVM.
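To make that concrete, here is a tiny, purely illustrative example; the order in which these threads print is decided by the scheduler, not by your code, so it will usually differ from run to run and can differ between operating systems:

```java
// Illustrative only: the interleaving of the output is up to the OS scheduler.
public class SchedulingDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            final int id = i;
            Thread t = new Thread(() -> {
                for (int j = 0; j < 3; j++) {
                    System.out.println("thread " + id + " step " + j);
                }
            });
            t.start();
        }
    }
}
```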
Related
I am a newbie to Java, and I have this app, developed by someone else, which is having some issues.
This Java app works well on Windows XP 32-bit, but there is a delay when it runs on a 64-bit Windows 2008 R2 server. I have asked the customer to make sure that they are running the 32-bit version of the JRE. I have checked the application's traces, and it always has an issue when calling one particular synchronized block. This synchronized block adds data into a queue, from which it is picked up by another process. I have checked the traces to see whether some other process is holding the block, but it isn't. The only confusing part is that the same app runs perfectly on Windows XP 32-bit.
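Simplified, the pattern looks roughly like this (a sketch from memory, not the actual code; the class and field names are made up):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical reconstruction of the pattern described above: one thread adds
// items to a shared queue inside a synchronized block, another thread drains it.
public class SharedQueue {
    private final Queue<String> items = new ArrayDeque<String>();
    private final Object lock = new Object();

    // Called by the producing thread; this is the synchronized block in question.
    public void add(String item) {
        synchronized (lock) {
            items.add(item);
            lock.notifyAll(); // wake the consumer
        }
    }

    // Called by the consuming thread; blocks until an item is available.
    public String take() throws InterruptedException {
        synchronized (lock) {
            while (items.isEmpty()) {
                lock.wait();
            }
            return items.remove();
        }
    }
}
```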
After googling, I came to learn that there are threading issues on 64-bit Windows.
Help me with this.
This isn't exactly an answer, but it might be of some help and it's more than a comment:
Most likely your 64-bit machine has more cores than the 32-bit machine, making it more likely that two or more threads really will run at the same time, so any synchronization problems that never, or rarely, arise on the 32-bit machine will quickly pop up on the 64-bit one. Also, newer machines tend to execute more instructions at once than older machines, and both types reorder them as they do so. (Compilers often reorder the code as well.) So the 64-bit machines are a bit worse to start with, and when you throw in the more extensive, real multithreading, the problems multiply.
(This can work the other way too: many-core machines tend to run threads in a predictable order, whereas a single-core machine can wait many minutes to run something you expected would be executed immediately.)
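To make the reordering/visibility point concrete, here is the classic textbook example (not your code, just an illustration): without volatile on the flag, the reader thread may never observe the write once the two threads really run on separate cores, even though the same code can appear to work fine on a single-core box.

```java
// Illustrative visibility bug: 'stop' should be declared volatile.
public class VisibilityDemo {
    static boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy loop; may spin forever if the write to 'stop' is never seen
            }
            System.out.println("worker saw stop");
        });
        worker.start();
        Thread.sleep(100);
        stop = true; // without volatile there is no guarantee the worker sees this
    }
}
```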
I've usually found the best way to fix bugs is to stare at the code and make sure everything is precisely right. (This works better if you know when something is precisely right.) Failing that, you need logging messages that only print out when they're useful (it's hard to sort through a billion of them). You seem to have no trouble reproducing your problems (which almost makes me think it may not be a multithreading problem at all), which should make it a bit easier to figure out.
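For the logging part, a minimal sketch using java.util.logging level filtering (the class and messages are made up); the FINE messages stay silent until you explicitly turn that level on, so you can keep them in the code without drowning in output:

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class QueueWorker {
    private static final Logger LOG = Logger.getLogger(QueueWorker.class.getName());

    public static void main(String[] args) {
        // Enable fine-grained output only while hunting the problem.
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINE);
        LOG.addHandler(handler);
        LOG.setLevel(Level.FINE);

        LOG.fine("about to enter synchronized block"); // silent at the default INFO level
        LOG.info("item queued");
    }
}
```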
I have developed a Java application that normally runs on Linux. It's a POJO application with Swing, and performance is reasonably good.
Now I have tried to run it on Windows XP with 2 GB of RAM, on a machine with similar or greater power, and performance is much worse. I observe that it uses 100% CPU.
For example:
A process that creates a very heavy window with many components: 5 seconds on Linux, 12 on Windows.
A process that runs a heavy query against a PostgreSQL DB (the server and the JDBC driver are the same): 23 seconds on Linux, 43 on Windows.
I also tried a virtualized Windows machine with similar specs, and the result was significantly better!
Is it normal? What parameters can I assign to improve performance?
Unless you are comparing Linux and Windows XP on the same machine, it is very hard to say what the difference is. It could be that while the CPU is faster, the graphics card and disk subsystem are slower.
Java passes all of this I/O and graphics activity to the underlying OS, and the only thing you can do differently is do less work, or work more efficiently. This is likely to make both systems faster, as there is nothing particular to one OS that you can tune.
Try running Java Visual VM (which is distributed as part of the JDK): attach to your application, then use the CPU Profiler to determine precisely where all that CPU time is going.
There may be subtle differences in the behavior of JRE parts (Swing comes to mind), where the JRE responds very unforgivingly to bad practice (like doing things from the wrong thread in Swing).
Since you have no other clues, I would try profiling the same use case in both environments and see if any significant differences turn up in where the time is spent. This will hopefully reveal a hint.
Edit: And make sure you are not running Windows with the brakes on (a.k.a. antivirus and other 'useful' software that can kill system performance).
We are benchmarking existing Java programs. They are threaded applications designed to benefit from multi-core CPUs. We would like to measure the effect of the number of cores on the running speed, but we are unwilling (and unable) to change the code of these applications.
Of course, we could test the software on different machines, but this is expensive and complicated. We would rather have a software solution.
Note: you can assume that the testing platform is either Windows, Linux or Mac. Ideally, we would like to be able to run the tests on either of these platforms.
It's called setting CPU affinity, and it's an OS setting for processes, not specific to Java.
On Linux: http://www.cyberciti.biz/tips/setting-processor-affinity-certain-task-or-process.html
On Windows: http://www.addictivetips.com/windows-tips/how-to-set-processor-affinity-to-an-application-in-windows/
On Mac it doesn't look like you can set it: https://superuser.com/questions/149312/how-to-set-processor-affinity-on-os-x
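Whichever mechanism you use, a quick sanity check is to have the application print how many processors the JVM sees. Note this is only a hint: whether the value reflects the affinity mask depends on the JVM version and the OS, so don't rely on it alone.

```java
// Illustrative check of how many processors the JVM reports.
public class CoreCount {
    public static void main(String[] args) {
        System.out.println("JVM sees " + Runtime.getRuntime().availableProcessors() + " processors");
    }
}
```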
Turn the problem upside down; instead of figuring out how to restrict the application to only use n cores on an m-core machine, figure out how to make sure there are only n cores available to your application. You could write a very simple program that eats up one core by doing something like while (true); and start m-n instances of that program running at the highest priority, thus ensuring that your code only has n cores available.
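A minimal version of such a core eater could look like this (illustrative only; raise its process priority from the OS when you launch it, and start one instance per core you want to take away from the application under test):

```java
// Spins forever, keeping one CPU core busy.
public class CoreEater {
    public static void main(String[] args) {
        while (true) {
            // busy-wait
        }
    }
}
```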
Alternatively, if your BIOS or OS allows disabling CPU cores, that works out too. I know CHUD tools on Mac OS X used to do it, but I haven't touched those tools since the G5 era, so that might not work anymore.
On many forums I found that people use Solaris for their Java applications.
I am interested in what the main advantages of such a combination are.
My first assumption is that Solaris is very fast.
I also found out that on Solaris it is possible to match Java threads one-to-one with kernel threads; as I understand it, this again results in very fast thread creation.
Please correct me if I'm wrong, and are there any other main points?
What Solaris gives you (as software, not hardware) over Linux or Windows is greater system manageability and low-level tracing such as DTrace.
What you appear to be asking about is having more threads running concurrently, which is a feature of the hardware. If you run Solaris x86, Linux or Windows on the same hardware, you will have the same number of logical threads. However, if you run Solaris on certain SPARC processors, which can run lots of logical threads (32 or more) concurrently, that reduces overhead if you actually need that many threads.
The SPARC T3 processor (http://en.wikipedia.org/wiki/SPARC_T3) supports up to 512 logical threads across 16 cores. This can really improve performance where you need that many threads, e.g. when using many blocking IO connections.
However, if you need only one to six critical threads (and a bunch of non-critical threads), a plain x64 processor will be much faster, and cheaper, as it is designed to run fewer threads faster and is mass-produced on a larger scale.
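To illustrate the kind of workload that soaks up that many logical threads, here is a bare-bones thread-per-connection echo server (a sketch with a made-up port, not production code): most of its threads just sit blocked on I/O, which is exactly the situation where a chip like the T3 shines.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Thread-per-connection blocking I/O server: one Java thread per client.
public class BlockingServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();          // blocks until a client connects
                new Thread(() -> handle(client)).start(); // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            int b;
            while ((b = c.getInputStream().read()) != -1) { // blocks on I/O
                c.getOutputStream().write(b);               // echo back
            }
        } catch (IOException ignored) {
            // connection closed
        }
    }
}
```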
We use Solaris for java applications at my workplace. I do not know about any exact performance advantage, but the reasons we decided to use Solaris were:
Solaris Service Management Facility (http://www.oreillynet.com/pub/a/sysadmin/2006/04/13/using-solaris-smf.html )
Ability to copy the entire zone backup to another box in case of HW failure.
We run application servers such as WebLogic, and it helps that SMF restarts them if they crash for any reason. Also, we back up our zones at regular intervals, and from what I hear, a zone can be moved to another machine in case of HW failure, bringing the application back to normal.
Hi, I'm trying to test my Java app on Solaris SPARC and I'm getting some weird behavior. I'm not looking for flame wars; I'm just curious to know what is happening or what is wrong...
I'm running the same JAR on Intel and on the T1000, and while on the Windows machine I'm able to get 100% CPU utilisation (Performance Monitor), on the Solaris machine I can only get 25% (prstat).
The application is a custom server app I wrote that uses netty as the network framework.
On the Windows machine I'm able to reach just over 200 requests/responses per second, including full business logic and access to outside third parties, while on the Solaris machine I get about 150 requests/responses at only 25% CPU.
One can only imagine how many more requests/responses I could get out of the SPARC if I could make it use its full power.
The servers are...
Windows 2003 SP2 x64bit, 8GB, 2.39Ghz Intel 4 core
Solaris 10.5 64bit, 8GB, 1Ghz 6 core
Both are running JDK 1.6u21.
Any ideas?
The T1000 uses a multi-core CPU, which means the CPU can run multiple threads simultaneously. The CPU is at 100% utilization only when all cores are running at 100%. If your application uses fewer threads than there are cores, it cannot use all the cores, and therefore cannot use 100% of the CPU.
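A rough sketch of what "use as many threads as there are logical CPUs" looks like (illustrative only; on the T1000, availableProcessors() should report the number of hardware threads, not the 6 physical cores):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Size a worker pool to the number of logical CPUs the JVM reports.
public class PoolSizing {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        for (int i = 0; i < cpus; i++) {
            final int id = i;
            pool.submit(() -> System.out.println("worker " + id + " running"));
        }
        pool.shutdown();
    }
}
```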
Without any code, it's hard to help out. Some ideas:
Profile the Java app on both systems, and see where the difference is. You might be surprised. Because the T1 CPU lacks out-of-order execution, you might see performance lacking in strange areas.
As Erick Robertson says, try bumping up the number of threads to the number of virtual cores reported via prstat, NOT the number of regular cores. The T1000 uses UltraSparc T1 processors, which make heavy use of thread-level parallelism.
Also, note that you're using the latest-gen Intel processors and old Sun ones. I highly recommend reading Developing and Tuning Applications on UltraSPARC T1 Chip Multithreading Systems and Maximizing Application Performance on Chip Multithreading (CMT) Architectures, both by Sun.
This is quite an old question now, but we ran across similar issues.
An important fact to note is that the Sun T1000 is based on the UltraSPARC T1 processor, which has only a single FPU shared by its 8 cores.
So if your application does a lot of, or even some, floating-point calculation, this might become an issue, as the FPU will become the bottleneck.