I just started making some of my first live wallpapers on Android, and I noticed an interesting behavior regarding the PixelFormat. If I use the SurfaceHolder's default PixelFormat, my live wallpaper is a bit laggy. If I set the PixelFormat to RGB_565, that seems to fix the problem, which really should not be too surprising. What was odd was that profiling revealed the rendering took just as long in both formats. Could anyone explain this behavior?
Thanks,
Xor
---Edit---
If it's of any help, I am rendering on a Canvas. All I do is call drawColor and draw 3 fairly simple, anti-aliased paths. Not really much to it.
PixelFormat shouldn't be a problem. You should even be able to set PixelFormat.RGBA_8888 with no performance hiccups. In some cases this format is useful to reduce color banding on gradients.
Using a Handler for animation may be fine for simple cases, but you should consider using a separate thread for this task. Some time ago I prepared a simple live wallpaper template. You can download the whole project from GitHub and experiment a bit with it. I'm sure that you'll get much better performance.
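As a rough sketch of the threaded approach (this is not the actual template; the class names and the 16 ms frame interval are made up for illustration, and a real engine should also pause rendering in onVisibilityChanged):

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.PixelFormat;
import android.service.wallpaper.WallpaperService;
import android.view.SurfaceHolder;

public class MyWallpaperService extends WallpaperService {
    @Override
    public Engine onCreateEngine() {
        return new MyEngine();
    }

    class MyEngine extends Engine {
        private RenderThread thread;

        @Override
        public void onSurfaceCreated(SurfaceHolder holder) {
            super.onSurfaceCreated(holder);
            holder.setFormat(PixelFormat.RGBA_8888); // no hiccups expected
            thread = new RenderThread(holder);
            thread.start();
        }

        @Override
        public void onSurfaceDestroyed(SurfaceHolder holder) {
            thread.quit();
            super.onSurfaceDestroyed(holder);
        }
    }

    // Dedicated render loop: lock the canvas, draw, post, sleep, repeat.
    static class RenderThread extends Thread {
        private final SurfaceHolder holder;
        private volatile boolean running = true;

        RenderThread(SurfaceHolder holder) { this.holder = holder; }

        void quit() {
            running = false;
            interrupt();
        }

        @Override
        public void run() {
            while (running) {
                Canvas canvas = holder.lockCanvas();
                if (canvas != null) {
                    canvas.drawColor(Color.BLACK);
                    // ... draw the anti-aliased paths here ...
                    holder.unlockCanvasAndPost(canvas);
                }
                try {
                    Thread.sleep(16); // roughly 60 fps
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }
}
```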
I'm looking to implement image distortions at specific places (such as the eyes of a face). Tools like this are used in many applications with facial filters, such as Snapchat.
[Image: example of a distorted face (Tom Hardy)]
I've already done some research and found several leads, but I can't figure out which approach is optimal, whether it works, and especially whether there's a better way to do this kind of distortion. I've seen that OpenCV has a remap function, but a lot of people seem to say that it takes time to execute. I saw that you could do this with OpenGL, but I didn't really find any concrete examples, and I'm afraid of starting to learn OpenGL for nothing.
Just for information, I am already able to get the necessary information about the faces; I just need a lead to get started on the distortion/deformation of the face.
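To make the remap lead concrete, here is a minimal sketch using OpenCV's Java bindings; the bulge helper and its parameters are my own illustration, not a standard API:

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class BulgeDistortion {
    // Warps a circular region around (cx, cy). The two maps tell remap,
    // for each destination pixel, which source pixel to sample.
    // strength in (0, 1) magnifies the region (bulge); negative pinches.
    public static Mat bulge(Mat src, double cx, double cy,
                            double radius, double strength) {
        int rows = src.rows(), cols = src.cols();
        float[] xBuf = new float[rows * cols];
        float[] yBuf = new float[rows * cols];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                double dx = x - cx, dy = y - cy;
                double d = Math.sqrt(dx * dx + dy * dy);
                double f = 1.0;
                if (d > 0 && d < radius) {
                    double t = d / radius;      // 0 at centre, 1 at edge
                    f = 1.0 - strength * (1.0 - t) * (1.0 - t);
                }
                int i = y * cols + x;
                xBuf[i] = (float) (cx + dx * f);
                yBuf[i] = (float) (cy + dy * f);
            }
        }
        Mat mapX = new Mat(rows, cols, CvType.CV_32FC1);
        Mat mapY = new Mat(rows, cols, CvType.CV_32FC1);
        mapX.put(0, 0, xBuf);                   // bulk upload in one call
        mapY.put(0, 0, yBuf);
        Mat dst = new Mat();
        Imgproc.remap(src, dst, mapX, mapY, Imgproc.INTER_LINEAR);
        return dst;
    }
}
```

Note that the maps depend only on the distortion, not on the frame, so for video they can be computed once per detected face position and reused; the remap call itself is then cheap per frame.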
I'm trying to use code from this answer (the first one, the highest-rated one):
Android - combine multiple images into one ImageView
After reading through the code, I found that it uses BitmapFactory extensively.
I'm trying to integrate the code into a performance-priority project, and Bitmap left me with the impression of being rather taxing on processors, which isn't really something I'm pleased with. I don't want this new part of the code to slow everything down noticeably.
My code is already capable of resizing PNGs, so I'm guessing one of the following is likely to be the case for the original author's use of BitmapFactory:
Resizing PNGs uses bitmap processing by default; just because I (the author of this question, not of the code in this question) did not explicitly call the relevant functions doesn't mean it isn't actively involved;
The code also features the capability of cutting and reshaping images, so that is exclusively the part that needs BitmapFactory; BitmapFactory isn't really necessary if nothing beyond resizing is required;
The code's primary function is to combine multiple images inside a single ImageView, so BitmapFactory is instrumental in achieving just that (I've read the code but couldn't find enough evidence to support this assumption).
I need an expert answer: a simple yes or no followed by a clear elaboration. Thanks in advance. You are, of course, welcome to point out my lapse of judgement in claiming that Bitmap slows things down.
To answer my own question:
In this particular scenario, unfortunately, I need to use a Bitmap to "map" (no pun intended) my target image, then resize it and put it into the same ImageView as the other images that went through the identical steps. Because I am trying to combine several images inside a single view, this is inevitable.
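A minimal sketch of those steps (resIds, tileWidth, tileHeight and imageView are hypothetical; the linked answer arranges things differently, but the Bitmap usage is the same in spirit):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;

// Decode each source, resize it, draw everything onto one backing Bitmap,
// and hand that single Bitmap to the ImageView.
Bitmap combined = Bitmap.createBitmap(tileWidth * resIds.length, tileHeight,
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(combined);
for (int i = 0; i < resIds.length; i++) {
    Bitmap src = BitmapFactory.decodeResource(getResources(), resIds[i]);
    Bitmap scaled = Bitmap.createScaledBitmap(src, tileWidth, tileHeight, true);
    if (scaled != src) src.recycle();   // free the full-size decode early
    canvas.drawBitmap(scaled, i * tileWidth, 0, null);
    scaled.recycle();                   // its pixels are already copied out
}
imageView.setImageBitmap(combined);
```

Decoding (BitmapFactory) is the expensive part; drawing already-decoded bitmaps onto a Canvas is comparatively cheap, so the cost is mostly the unavoidable decoding work rather than the combining step itself.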
Maybe there's someone out there who has spent time on this. I'm working on a graph visualization lib in Java and I just did some performance tests.
When I'm adding about 2000 vertices connected by 1000-3000 edges, it gets really, really slow. There are tools out there doing way better (Gephi, for example). How do they do it? Isn't Java2D hardware accelerated by default? Do I have to use some OpenGL lib?
I'm drawing the graphs inside a JComponent, which gets redrawn by a timer every few milliseconds (it doesn't really matter whether I give it 100 ms or 1 ms; it stays really slow).
Is my approach flawed or shouldn't I use Java2D for this?
Thank you for any help!
As Torious suggested, you probably want to use a VolatileImage if you are working in Java2D, to get the benefits of hardware acceleration.
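A minimal sketch of the VolatileImage pattern (the component and drawing code here are illustrative): render into the accelerated buffer, then blit it, revalidating because VRAM-backed contents can be lost at any time:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.VolatileImage;
import javax.swing.JComponent;

class GraphCanvas extends JComponent {
    private VolatileImage buffer;

    @Override
    protected void paintComponent(Graphics g) {
        GraphicsConfiguration gc = getGraphicsConfiguration();
        do {
            // (Re)create the buffer if missing, resized, or incompatible.
            if (buffer == null || buffer.getWidth() != getWidth()
                    || buffer.getHeight() != getHeight()
                    || buffer.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                buffer = gc.createCompatibleVolatileImage(getWidth(), getHeight());
                buffer.validate(gc);
            }
            Graphics2D g2 = buffer.createGraphics();
            g2.setColor(Color.WHITE);
            g2.fillRect(0, 0, getWidth(), getHeight());
            g2.setColor(Color.BLACK);
            // ... draw the vertices and edges here ...
            g2.dispose();
            g.drawImage(buffer, 0, 0, null);
        } while (buffer.contentsLost()); // surface lost? render again
    }
}
```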
However, if you want the absolute best performance, you are probably better off going for an OpenGL-based solution.
LWJGL ( http://lwjgl.org/ ) is designed for games but exposes pretty much all the relevant OpenGL functionality, so it is pretty good for visualisation as well. Might be worth giving it a try!
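For a sense of what that looks like, a minimal LWJGL 2 sketch that draws a graph's edges as GL lines (window size, edge array and sync rate are illustrative):

```java
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import static org.lwjgl.opengl.GL11.*;

public class GraphGL {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create();

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 800, 600, 0, -1, 1);     // 2D pixel coordinates

        // Hypothetical edge list: x1, y1, x2, y2 per edge.
        float[] edges = { 100, 100, 700, 500 };

        while (!Display.isCloseRequested()) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_LINES);
            for (int i = 0; i < edges.length; i += 4) {
                glVertex2f(edges[i], edges[i + 1]);
                glVertex2f(edges[i + 2], edges[i + 3]);
            }
            glEnd();
            Display.update();
            Display.sync(60);
        }
        Display.destroy();
    }
}
```

Even immediate mode like this comfortably handles a few thousand lines per frame; moving the edges into a vertex buffer object scales much further.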
I was wondering if anyone has knowledge of the reconstruction of 3D objects from a live video feed. Does anyone have any Java-based examples or papers that I could be linked to? I have read up on the algorithms used to produce such 3D objects. If possible I would like to construct something such as the program demonstrated in the link provided below.
Currently my program logs a live video feed.
http://www.youtube.com/watch?v=brkHE517vpo&feature=related
3D reconstruction of an object from a single point of view is not really possible. You have two basic alternatives: a) a stereo camera system capturing the object, or b) a single camera with the object rotating (so that you get different points of view of it), like the one in the video. This is a basic concept related to epipolar geometry.
There are other alternatives, but they are more intrusive. Some time ago I worked on a 3D scanner based on a single camera and a laser beam.
For this I used OpenCV, which is C++ code, but I think there are now ports for Java. Keep in mind that 3D reconstruction is not an easy task, and the resulting app will have to be heavily parameterized to achieve good results.
This isn't a solved problem: certain techniques can do it to a certain degree under the right conditions. For example, the linked video shows a fairly simple flat-faced object being analysed while moving slowly under relatively even lighting conditions.
The effectiveness of such techniques can also be considerably improved if you can get a second (stereo vision) video feed.
But you are unlikely to get it to work for general video feeds. Problems such as uneven lighting, objects moving in front of the camera, fast motion, focus issues, etc. make the problem extremely hard to solve. The best you can probably hope for is a partial reconstruction, which can then be reviewed and manually edited to correct the inevitable mistakes.
JavaCV and related projects are probably the best resource if you want to explore further. But don't get your hopes too high for a magic out-of-the-box solution!
I would like to implement a visualisation of this video in Java, as experience to help me understand all of the 'troubles' in creating visualisations. I have some experience with OpenGL and a good understanding of how to handle the physics involved. However, if anybody knows of any good game engines that may help (or at least do some of the heavy lifting involved in creating a visualisation of the above), I would be grateful.
Also, I noticed that the display in the linked video must use many separate jets to operate the way it does. Is it likely that it was created using something a little lower-level, such as C? Is it possible to use a higher-level language like Java to control such a system?
Honestly, if you want to implement "just that", I think using a game engine is overkill. Just implement a simple particle engine on your own and you are done.
Seriously, that problem is not so difficult; any language can be used for it. The basic principle behind it is the same as behind steam organs or player pianos: you have input data that describes the pattern to play, and you advance through it at a given rate.
Here is how I would build the basic control system. You take a black and white image. The width is exactly the number of "emitters" and the length is as long as the pattern needs to be. You read the image starting at the first line. You walk through each pixel in that line: if the pixel is black you emit a drop, and if it is white you don't. You then move at a fixed interval (maybe 25 ms) to the next line and set the emitters accordingly.
The cool thing with images is that you can simply paint them in any graphics program. To get the current time to work, you render the time into an image buffer in memory, then pass that into the above code. (You even get fonts if you like...)
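A minimal sketch of that loop in Java (the Emitter interface stands in for whatever valve-control API the hardware exposes):

```java
import java.awt.image.BufferedImage;

// Hypothetical stand-in for one controllable water-jet valve.
interface Emitter {
    void setOpen(boolean open);
}

class PatternPlayer {
    // One image row per time step, one column per emitter:
    // black pixel -> emit a drop, white pixel -> stay closed.
    static void play(BufferedImage pattern, Emitter[] emitters,
                     long rowIntervalMs) throws InterruptedException {
        for (int row = 0; row < pattern.getHeight(); row++) {
            for (int col = 0; col < emitters.length; col++) {
                boolean black = (pattern.getRGB(col, row) & 0xFFFFFF) == 0;
                emitters[col].setOpen(black);
            }
            Thread.sleep(rowIntervalMs); // e.g. 25 ms per row
        }
    }
}
```

For the clock, drawing the current time into a BufferedImage with Graphics2D.drawString and feeding that image to the same loop gives exactly the fonts-for-free behaviour described above.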
You can use jMonkeyEngine, a Java OpenGL game engine.