In follow-up to my previous questions (especially this one: Java: VolatileImage slower than BufferedImage), I have noticed that simply drawing an Image (it doesn't matter whether it's buffered or volatile, since the computer has no accelerated memory* and tests show it doesn't change anything) tends to take a very long time.
(*) The following prints 0:

    System.out.println(GraphicsEnvironment.getLocalGraphicsEnvironment()
            .getDefaultScreenDevice().getAvailableAcceleratedMemory());
How long? For a 500x400 image, about 0.04 seconds. That is only for drawing the image onto the back buffer (obtained via a BufferStrategy).
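For reference, a minimal sketch of the kind of measurement I mean (not my exact code; the window setup is simplified and the names are just illustrative):

    import java.awt.Frame;
    import java.awt.Graphics;
    import java.awt.image.BufferStrategy;
    import java.awt.image.BufferedImage;

    public class DrawTiming {
        public static void main(String[] args) {
            Frame frame = new Frame("draw timing");
            frame.setSize(800, 600);
            frame.setVisible(true);
            frame.createBufferStrategy(2);            // double buffering
            BufferStrategy strategy = frame.getBufferStrategy();

            // Stand-in for the real image; mine is 500x400 and loaded from disk.
            BufferedImage image = new BufferedImage(500, 400, BufferedImage.TYPE_INT_RGB);

            long start = System.nanoTime();
            Graphics g = strategy.getDrawGraphics();
            g.drawImage(image, 0, 0, null);           // the call being timed
            g.dispose();
            strategy.show();                          // flip the back buffer
            System.out.println("drawImage + show took "
                    + (System.nanoTime() - start) / 1e6 + " ms");
        }
    }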
Now, considering that World of Warcraft runs on that netbook (though it is quite laggy) and that online Java games seem to have no problem whatsoever, this is quite thought-provoking.
I'm quite certain I didn't miss something obvious; I've searched the web extensively, but nothing will do. So do any of you Java whizzes have an idea of what obscure problem might be causing this (or maybe it is normal, though I doubt it)?
PS: As I'm writing this, I realized this might be caused by my Linux installation (Arch Linux), though I have the correct Intel driver. But my computer normally has an "Integrated Intel Graphics Media Accelerator 950", which would mean it should have accelerated video memory somehow. Any ideas about this side of things?
I'm also running Arch Linux and noticed my games going slow sometimes, especially when using alpha transparency with my images. It turns out that not only Linux but even Windows sometimes turns off hardware acceleration by default.
I looked for a solution to the problem and found this:
http://web.archive.org/web/20120926022918/http://www.systemparadox.co.uk/node/29
Enabling OpenGL considerably sped up my framerates, and I assume if you ran your tests again, you'd get faster draws.
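In case that link ever goes dead: the change boils down to turning on Java2D's OpenGL pipeline via the sun.java2d.opengl property, either on the command line or from code before anything is drawn (mygame.jar is just a placeholder here; using "True" with a capital T makes Java2D print whether the pipeline could actually be enabled, which helps when checking drivers):

    // On the command line:
    //   java -Dsun.java2d.opengl=true -jar mygame.jar

    // Or programmatically, before any AWT/Swing/Java2D code touches the screen:
    public class EnableOpenGL {
        public static void main(String[] args) {
            System.setProperty("sun.java2d.opengl", "True");
            // ... create your Frame / start your rendering loop here ...
        }
    }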
I don't know much about Java graphics, but if I were in your shoes, I would assume that the measurement means nothing without a comparison value, which it sounds like you might have but are not sharing. Add this information to your question, along with the specs of the comparison system (is it a desktop? does it have a dedicated video card? does it run Windows or Linux?).
Concerning your measurement that it's 10 times faster on another netbook, does that other notebook run Windows, or is that one also Linux? Linux has historically had very mediocre graphics drivers - they just don't run nearly as well as the Windows equivalents. In fact for a long time the only drivers you could get were not written by ATI/nVidia/etc., but rather by hobbyists. It would not surprise me at all if a Linux machine ran a graphical program ten times slower than a similar machine running Windows.
This was the situation as I understood it about five years ago. I doubt it's changed much.
I am a newbie to Java and I have this app developed by someone else which is having some issues.
This Java app works well on 32-bit Windows XP, but there is a delay when it runs on a 64-bit Windows Server 2008 R2 machine. I have asked the customer to make sure that they are running the 32-bit version of the JRE. I have checked the traces for the application, and it always has an issue when it calls a particular synchronized block. This synchronized block adds data into a queue from which it is picked up by some other process. I have checked the traces to see whether some other process is using the block, but it isn't. The only confusing part is that the same app runs perfectly on 32-bit Windows XP.
After googling, I came to know that there are threading issues on 64-bit Windows.
Help me with this.
This isn't exactly an answer, but it might be of some help and it's more than a comment:
Most likely your 64-bit machine has more cores than the 32-bit machine, making it more likely that two or more threads really will run at the same time, so any synchronization problems that never, or rarely, arise on the 32-bit machine will quickly pop up on the 64-bit one. Also, newer machines tend to execute more instructions at once than older machines, and both types reorder them as they do so. (Compilers often reorder the code, too.) So the 64-bit machines are a bit worse to start with, and when you throw in the more extensive, real multithreading, the problems multiply.
(This can work the other way, too: the many-core machines tend to run threads in a predictable order, whereas a single-core machine can wait many minutes to run something you expected to be executed immediately.)
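To make the reordering and visibility point concrete, here is a made-up example (not the questioner's code) of the kind of bug that can sit quietly on a single-core machine and show up immediately once threads really run in parallel: a shared flag with no synchronization or volatile, so the writes may be reordered or never become visible to the other thread.

    public class VisibilityBug {
        // Missing 'volatile': the JIT/CPU may cache 'done' or reorder the
        // writes, so the worker thread might never see the update.
        static boolean done = false;
        static int result = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!done) {
                    // busy-wait: tends to "work" when threads interleave on one
                    // core, but can spin forever on a multi-core machine
                }
                System.out.println("result = " + result); // may even print 0
            });
            worker.start();

            result = 42;   // these two writes can appear reordered
            done = true;   // when seen from the other thread
            worker.join();
        }
    }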
I've usually found the best way to fix bugs is to stare at the code and make sure everything is precisely right. (This works better if you know when something is precisely right.) Failing that, you need logging messages that only print out when they're useful (it's hard to sort through a billion of them). You seem to have no trouble reproducing your problems (which almost makes me think it may not be a multithreading problem at all), which should make it a bit easier to figure out.
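As for logging only what is useful, one hedged idea (all names here are made up, not taken from the actual application) is to time the lock acquisition around the enqueue and only print something when it takes suspiciously long:

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class TimedEnqueue {
        private final Object queueLock = new Object();
        private final Queue<String> queue = new ArrayDeque<String>();

        void enqueue(String item) {
            long before = System.nanoTime();
            synchronized (queueLock) {
                queue.add(item);
            }
            long waitedMs = (System.nanoTime() - before) / 1_000_000;
            if (waitedMs > 50) {
                // Only log when the lock took unusually long to acquire,
                // so the output stays small enough to actually read.
                System.err.println(Thread.currentThread().getName()
                        + " waited " + waitedMs + " ms to enqueue");
            }
        }
    }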
I am trying to make a visual stimulus for an EEG study. The video is simply a flicker between a black frame and a white frame, and the alternation should occur at a range of rates: 12Hz, 24Hz, 48Hz, 72Hz.
Our monitors have a refresh rate of 144Hz and the computers are also fancy, and I am measuring the success of the videos with an oscilloscope to ensure accuracy. So the hardware should not be an issue; theoretically, up to half the monitor's refresh rate should be possible. However, I have failed in both Java and MATLAB.
I have tried using MATLAB:
1) using imwrite() to make a gif
2) using the VideoWriter
3) using getframe() and movie2avi().
In all of these methods, the extra-high framerate is declared, and I can see in my command window that all the frames were inserted during the run. However, the final output file never exceeds 48Hz.
On top of that, the 48Hz as well as 24Hz and even 12Hz files also have serious timing issues with the frames.
I have also tried to make the files using Processing's MovieMaker: I set the framerate to 72Hz -- input a list of 72 .png files as frames -- and it should output a 1-second file that flickers at 72Hz.
However, the result only goes at 48Hz, and again the timing of the frames is not reliable.
I wouldn't be posting here if I hadn't exhausted my search; I'm really out of ideas. MATLAB and Processing were both recommended ways of achieving this kind of high-fps file, and both have big timing issues even at the lower flicker frequencies. If anyone has any tips on improving the temporal fidelity of the high-Hz flicker (graphics settings? codecs?), or on how to make it all the way to 72Hz, we'd greatly appreciate it!
As I said, I have only used Processing/Java and MATLAB, so please feel free to recommend another platform.
This is not an answer. It needs more than the comment box, though, so please bear with me.
There are fundamental issues involved:
Simply drawing to whatever facility your OS/graphics combo exposes does not at all guarantee that the drawn element will be present starting from the next frame (in any system I know of).
This simply stems from the fact that all these combos were explicitly NOT meant for an EEG stimulus, but for consumption by human visual perception.
Many combos offer lower-level facilities (e.g. OpenGL) that do carry such a promise, but they come with other sets of problems, one of which is the less comfortable programming environment.
With most OS/hardware combos, it might be less than trivial to sustain this stimulus: 144 Hz translates to less than 7 ms, a time slot that might be missed by a single bad scheduling decision of the OS or by the need for a double read even on a fast-spinning disk. You would need to aim for some realtime-oriented OS dialect.
EDIT
After re-reading your question, I see you use Java. Forget it: a single GC pause can easily be more than 7 ms.
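If you want to convince yourself of this, a rough probe like the following (plain Java, no drawing at all; the numbers are only illustrative) already shows how often a ~7 ms slot is missed just through scheduling and GC. The throwaway allocation is there purely to give the collector something to do:

    public class JitterProbe {
        public static void main(String[] args) throws InterruptedException {
            final long slotNanos = 7_000_000L;        // roughly one frame at 144 Hz
            long last = System.nanoTime();
            int missed = 0;

            for (int i = 0; i < 10_000; i++) {
                byte[] garbage = new byte[64 * 1024]; // provoke the GC a little
                Thread.sleep(1);                      // ask to be woken after ~1 ms
                long now = System.nanoTime();
                if (now - last > slotNanos) {
                    missed++;
                }
                last = now;
            }
            System.out.println("intervals longer than 7 ms: " + missed + " / 10000");
        }
    }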
There are a couple of free (as in beer and freedom) toolboxes for MATLAB that wrap the low-level OpenGL commands you need in order to gain the type of control you want.
MGL only runs on Mac but:
mgl is a set of matlab functions for displaying full screen visual stimuli from matlab. It is based on OpenGL functions, but abstracts these into more simple functions that can be used to code various kinds of visual stimuli.
Psychtoolbox runs on Mac, Windows, and Linux
The attraction of using computer displays for visual psychophysics is that they allow software specification of the stimulus. Programs to run experiments are often written in a low-level language (e.g. C or Pascal) to achieve full control of the hardware for precise stimulus display... The Psychophysics Toolbox is a software package that adds this capability to the Matlab and Octave application on Macintosh, Linux and Windows computers
It sounds like you are just starting out, in which case I would also suggest looking at the Python based PsychoPy
PsychoPy is an open-source package for running experiments in Python (a real and free alternative to Matlab). PsychoPy combines the graphical strengths of OpenGL with the easy Python syntax to give scientists a free and simple stimulus presentation and control package.
STORY
I've been coding in OpenGL for about a year now (on the same hardware), and I've only recently been getting artifacts like the ones in the image above. They show up after running my program several times consecutively in a short period (a couple of minutes), and they appear anywhere: from WordPad (see picture) to my desktop and taskbar, or even other games (the League of Legends launcher; it is a software renderer, I think).
They can take any form similar to what you see in the image: black/white blocks or pieces of texture from my application. The artifacts disappear after the affected application refreshes its screen.
INFO
My application itself looks just fine, no artifacts.
I am using an ATI M5800 card with the latest driver (I have an HP EliteBook 8540w), Windows 7 64-bit, and OpenGL 3.3 or lower.
Why am I not posting a code sample? Because (as obnoxious as it sounds) it does not seem to be a specific part of my code that is causing the artifacts; I can easily run the program for 15 minutes without a problem. My graphics card will heat up to 68 degrees Celsius and no artifacts will occur. The artifacts start to happen after multiple consecutive runs, though my video card never heats up past 68 degrees, even when the artifacts occur.
Why am I posting this here? Because I am still not sure whether this is caused by my code or by my hardware/drivers, and I think this question is technical enough that anywhere else I would only get the "buy a new graphics card" answer.
I use a lot of OpenGL: anything from framebuffers and shaders to 3D textures, buffer textures, texture arrays, and whatnot.
My hardware has not suffered any damage as far as I'm aware, though it is an EliteBook, so it is prone to overheating.
MY GUESS
(When I mention RAM, I mean video RAM, on my graphics card)
I don't know a whole lot about OpenGL and I know even less about graphics cards and their drivers, but I'll try to approach this and you can shoot me down at any point. The following is a disorganized train of thought; feel free to only read the bold parts.
All internet sources I can find on graphics artifacts tell me I have bad RAM caused by overheating.
However, if this is the case, then why is this "bad RAM" only accessed after consecutive runs, and never after a fresh start? Shouldn't my OS clean up all graphics memory when my application stops, resetting the state of my graphics card? If I have bad RAM, it seems that my graphics card cannot keep up with disposing of my data and eventually accesses this piece of RAM when everything else is "taken". And if the malfunctioning of the RAM is based on temperature, then why can I run my application once for 15 minutes, but not 4 times in the same period if the temperature stays the same?
Also, if it is truly bad RAM, then why am I seeing part of my texture? Wouldn't this imply that the RAM worked fine in one part? (The blue blocks in the picture are part of a texture I use.)
And more importantly: Why do I only seem to get artifacts with my application and nowhere else? Not one other application I have installed produces these artifacts anywhere!
This would suggest that it is not my hardware but me, I am doing something wrong! (Or OpenGL is doing something wrong on my laptop, since most games likely run on DirectX).
And now the last part, which makes the entire thing a mystery: my hardware is funded partly by my university, which means a friend of mine has identical hardware (truly 100% identical), and he does not get any artifacts running my code.
So... is it a driver bug then? My friend is running the same driver as me... I am totally lost here.
All I can conclude is that my RAM is broken, and somehow everybody manages to avoid the bad parts except for me and my application.
What I am trying to ask is this: How did I get these artifacts? How are some applications able to avoid them? What is going on in the hardware/software?
PS: I understand this is quite unstructured and chaotic as a question, this is because I've been at this for a while and I've tried to include any piece of information I have found. I will appreciate ANY piece of information anyone might think is related to this subject, today or a year from now, and I will gladly restructure this post if any suggestions come up. I've searched alot about artifacts but a great deal of search results describe anomalies due to code, restricted to the application in question, which is of little help to me. This means I could have missed important sources, if you think I did, please link them.
Contrary to what this might look like, I am only asking for information and discussion, not for a direct solution; the best solution is obviously to buy a new graphics card.
Important source: Diagnosing video card problems
All internet sources I can find on graphics artifacts tell me I have bad RAM caused by overheating.
And they're most likely right. This is the typical kind of error seen with defective graphics RAM.
However, if this is the case, then why is this "bad RAM" only accessed after consecutive runs, and never after a fresh start?
Because after a fresh start only a small portion of the graphics RAM is actually used, something like only the first 64 MiB or so. That's a rather minuscule amount of memory compared to the huge quantities of RAM present on modern graphics cards (several hundred MiB).
After your program has run for a little while, it consumes some RAM, even more so if it creates and frees a lot of RAM. For performance reasons, every allocation of graphics RAM must be contiguous. And to keep things simple and fast, the driver will merely advance the base pointer for further allocations to the beginning of unused, contiguous RAM.
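Purely as an illustration of that bump-pointer idea (a toy model, not how any real driver works), you can picture the allocator like this; without a full reset, each further "run" starts at a higher offset and eventually lands in the region where a defective chip would show up:

    // Toy model of a bump-pointer video memory allocator, for illustration only.
    public class BumpAllocator {
        private final long totalBytes;
        private long next = 0;                 // the "base pointer" into video RAM

        BumpAllocator(long totalBytes) { this.totalBytes = totalBytes; }

        long allocate(long bytes) {
            if (next + bytes > totalBytes) {
                next = 0;                      // wrap around, greatly simplified
            }
            long address = next;
            next += bytes;                     // just bump the pointer forward
            return address;
        }

        public static void main(String[] args) {
            BumpAllocator vram = new BumpAllocator(256L * 1024 * 1024);
            for (int run = 1; run <= 4; run++) {
                long firstAddress = vram.allocate(64L * 1024 * 1024);
                System.out.println("run " + run + " starts at offset "
                        + firstAddress / (1024 * 1024) + " MiB");
            }
            // Run 1 sits in the (healthy) first 64 MiB; later runs land higher,
            // where a defective chip would start to produce artifacts.
        }
    }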
Shouldn't my OS clean up all graphics memory when my application stops, resetting the state of my GC?
Well, say the OS reserved some new graphics memory while your program was running; even if your program terminates, those other allocations will stay. Or, after your program terminates, the internal state of the driver will make it hand out different portions of memory than before.
So not all of your graphics RAM is defective, only later parts, which are accessed only once the first (non-defective) part of RAM has already been allocated.
Technically it would be possible to identify the defective parts of RAM and no longer use them, but there's no support for that in the drivers (that I know of).
So, on 64-bit Ubuntu I'm developing with LWJGL, but code that worked perfectly well on Windows (and Mac, though I've tested that much less) is having issues on my new machine.
Basically, if I attempt to initialize fullscreen mode, the application ends up in a window instead of taking over the view, and performance is very slow (about 1/2 to 1/3 of what it should be).
Funnily enough, rarely (about 5% of the time) everything works perfectly and performance is good.
After doing some research on Google, it seems this is due to issues with the X windowing system. I found an article here that suggests calling XInitThreads() in the application before setting anything else up. Unfortunately, I don't know how to make that call.
I realize that I can use
Process p = Runtime.getRuntime().exec("The system command goes here");
to execute system commands, but I don't know the command to use.
Unfortunately, you can't solve your problem with exec. The process -- in this case, the JVM process -- has to make that call. The link you reference is describing the unfortunate fact that the JVM does not make it. It is very unlikely that you can introduce this for yourself.
Talking to the X API is a fundamental activity of the JVM: it's how AWT is implemented in this environment. Since the JVM is already using X to talk to the display, you can't just introduce one little extra call. The necessary place to put that call is in the middle of the X init code in the JVM.
OpenJDK is open source. You could make your own version, but I can't recommend that.
I feel that while I love J2ME and Java it's hypocritical of them to have two APIs for Java. Java was designed with "One code, many platforms" in mind, and now it's more like "One API for every OS, and one API for everything smaller than a netbook." I see a lot of J2ME emulators and such being ported to things like the PSP, and other consoles for homebrew, and I wonder why no one is doing this with normal Java.
I'd love to write a game to play on my PC, then fire up a simple emulator and play the same game on the PSP or the Dreamcast, but I can't. J2ME can't even run on a PC; you need an emulator for it, which reduces your market greatly. Plus most emulators are bulky and not good.
With super-phones like the iPhone coming out, people are going to want more than little J2ME games, so if the standard JRE can't be ported to them, Java might find itself missing the boat like Microsoft did with the netbook boom.
It just feels like Sun needs to either work on making the standard JRE smaller and more portable, or make J2ME easily available on the PC.
I think this should be a community Wiki
But to the point: my view is that J2ME is going to die a horrible death and leave us with normal Java. The current netbook trend, combined with the more powerful smartphone trend, means that your average cellphone today is much stronger than the machines that ran J2SE when it first came out.
Hence, we can do away with J2ME, which was designed for ancient Nokias, and enjoy the standard Java on a smart doorknob (or a smartphone).
The only problem that Java faces is that the biggest player in smartphone applications - Apple - isn't going to allow a JVM anytime in the foreseeable future.
Even if your monitor had an accelerometer in it, you probably wouldn't want to use it for an iPhone app - so I'd say there are limits to portability after all.
If "write once, run anywhere" is misleading, that's because it was conceived before cell phones became prevalent. As far as the API goes, I agree a common subset would be preferable, but once again, the entire J2ME niche is completely new. The JVM is still useful: a web browser can run on Windows, Linux, and OS X, and a game can run on both Nokia and Samsung phones.
Is the original Java ideal dead?
It still meets the original demands of portable code from workstation to workstation, so no. But it sounds like you've set an even higher bar for future platforms.
There are many things that a virtual machine might choose to abstract away.
The OS abstracts away some of the common hardware, by providing standard interfaces to them (block i/o, character i/o, etc).
The JVM set out to abstract out the processor and the OS itself, a mighty goal by itself (at that time)! However, abstracting the peripheral hardware was, and will remain, a difficult goal to achieve.
Perhaps, when we see more convergence of hand-helds/laptops/desktops/servers, the need to abstract out the hardware will diminish.
With newer mobile platforms like Windows Mobile and Symbian having captured market share, J2ME and the like have taken a back seat, due to issues like not taking full advantage of the hardware.
J2ME is great. You can package and run J2ME applications with the lean and clean http://www.microemu.org/. Since I have been writing code for J2ME, I'm a better programmer: it forces you to be efficient with memory. I love the small, clean API. In the future all my client applications will be designed for J2ME and then ported to J2SE/Android/iPhone. The difficult thing is to build your in-house GUI framework flexibly enough for the application to run smoothly on any screen size. That takes time.
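As a rough idea of what I mean by flexible for any screen size (a minimal MIDP sketch, not my actual framework), everything is laid out from Canvas.getWidth()/getHeight() instead of hard-coded pixel positions:

    import javax.microedition.lcdui.Canvas;
    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Graphics;
    import javax.microedition.midlet.MIDlet;

    // Minimal sketch of screen-size-independent drawing in MIDP.
    public class ScalableMidlet extends MIDlet {
        protected void startApp() {
            Display.getDisplay(this).setCurrent(new Canvas() {
                protected void paint(Graphics g) {
                    int w = getWidth();   // query the real screen size
                    int h = getHeight();  // instead of assuming 176x208 etc.
                    g.setColor(0xFFFFFF);
                    g.fillRect(0, 0, w, h);
                    g.setColor(0x000000);
                    // Layout expressed as fractions of w/h, not absolute pixels.
                    g.drawRect(w / 10, h / 10, w * 8 / 10, h * 8 / 10);
                }
            });
        }
        protected void pauseApp() { }
        protected void destroyApp(boolean unconditional) { }
    }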