I've been working on a simple 2D endless runner Android game. I've opted to go the canvas route, as the learning curve for OpenGL seems relatively high. I've been following this tutorial so far:
http://www.javacodegeeks.com/tutorials/android-tutorials/android-game-tutorials/
And I'm using this game loop exactly as written:
http://www.javacodegeeks.com/2011/07/android-game-development-game-loop.html
My problem is that my game runs at a solid 60 FPS on my Galaxy Nexus, but when I put it on my girlfriend's Nexus 4, the FPS drops to 40 and performance is very choppy. Same with my cousin's Galaxy S2.
I'm currently using:
MainGamePanel.java
getHolder().setFormat(PixelFormat.RGBA_8888);
I've heard RGB_565 gives better performance, but it actually drops the FPS on my device down to 40, and the FPS on the other devices stays the same.
I have hardware acceleration enabled, though it doesn't seem to do much:
AndroidManifest
android:hardwareAccelerated="true"
In my MainGamePanel, I instantiate all objects that will be drawn onto the canvas before the thread is started:
public MainGamePanel(Context context) {
    super(context);
    getHolder().addCallback(this);
    getHolder().setFormat(PixelFormat.RGBA_8888);
    initScreenMetrics(context);
    initGameMetrics();
    BitmapOptions options = new BitmapOptions();
    drillBit = new Drill(getResources(), 0);
    drillBit.setX((screenWidth - drillBit.getBitmap().getWidth()) / 2);
    drillBackground = new DrillBackground(getResources(), drillBit.getX(), drillBit.getY());
    diamond = new Interactable(BitmapFactory.decodeResource(getResources(), R.drawable.diamond, options.getOptions()),
            screenWidth / 2, screenHeight + 30);
    potion = new Interactable(BitmapFactory.decodeResource(getResources(), R.drawable.potion, options.getOptions()),
            screenWidth / 4, screenHeight + 230);
    oil = new Interactable(BitmapFactory.decodeResource(getResources(), R.drawable.oil_1, options.getOptions()),
            3 * screenWidth / 4, screenHeight + 450);
    powerUp = new Interactable(BitmapFactory.decodeResource(getResources(), R.drawable.power_up, options.getOptions()),
            3 * screenWidth / 4, screenHeight + 130);
    background = new Background(getResources());
    thread = new MainThread(getHolder(), this);
    setFocusable(true);
}
To animate the drillBit object, I load 4 bitmaps in the constructor, and each time the game loops I switch between the bitmaps (can't think of another way to accomplish this):
Drill.java
public Drill(Resources resource, int x) {
    BitmapOptions options = new BitmapOptions();
    bitmap1 = BitmapFactory.decodeResource(resource, R.drawable.drill_1, options.getOptions());
    bitmap2 = BitmapFactory.decodeResource(resource, R.drawable.drill_2, options.getOptions());
    bitmap3 = BitmapFactory.decodeResource(resource, R.drawable.drill_3, options.getOptions());
    bitmap4 = BitmapFactory.decodeResource(resource, R.drawable.drill_4, options.getOptions());
    currentBitmap = bitmap1;
    //other stuff
}
Draw function in Drill:
if (currentBitmap == bitmap1) {
    currentBitmap = bitmap2;
} else if (currentBitmap == bitmap2) {
    currentBitmap = bitmap3;
} else if (currentBitmap == bitmap3) {
    currentBitmap = bitmap4;
} else {
    currentBitmap = bitmap1;
}
canvas.drawBitmap(currentBitmap, x, y, null);
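One way to decouple the frame switching from the frame rate is to keep the frames in an array and advance on a timer. A minimal sketch (FRAME_DURATION_MS is an assumed constant, and the frames array would hold bitmap1 through bitmap4):
// Sketch: advance the animation by elapsed time instead of once per draw call,
// so the drill spins at the same speed regardless of the device's FPS.
private Bitmap[] frames; // {bitmap1, bitmap2, bitmap3, bitmap4}
private int frameIndex = 0;
private long lastFrameTime = 0;
private static final long FRAME_DURATION_MS = 100; // assumed: 100 ms per frame

public void draw(Canvas canvas) {
    long now = System.currentTimeMillis();
    if (now - lastFrameTime >= FRAME_DURATION_MS) {
        frameIndex = (frameIndex + 1) % frames.length; // wrap back to the first frame
        lastFrameTime = now;
    }
    canvas.drawBitmap(frames[frameIndex], x, y, null);
}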
Any help is appreciated, thanks!
It happens because Canvas draws everything in raw pixels, and pixel density differs between devices with different resolutions. So something like (screenHeight + 30) lands at a different spot on each device. You can't write +30 directly; you need to convert 30 into density-dependent pixels.
I use this method to convert static values:
public int convertSizeToDeviceDependent(int value, Context mContext) {
    DisplayMetrics dm = new DisplayMetrics();
    ((Activity) mContext).getWindowManager().getDefaultDisplay().getMetrics(dm);
    return ((dm.densityDpi * value) / 160);
}
Where to use it:
diamond = new Interactable(BitmapFactory.decodeResource(getResources(),R.drawable.diamond, options.getOptions()), screenWidth/2, screenHeight+convertSizeToDeviceDependent(30,context));
Apply this method to all the places where you use hard-coded values in these lines:
diamond = new Interactable(BitmapFactory.decodeResource(getResources(),R.drawable.diamond, options.getOptions()), screenWidth/2, screenHeight+30);
potion = new Interactable(BitmapFactory.decodeResource(getResources(), R.drawable.potion, options.getOptions()), screenWidth/4, screenHeight+230);
oil = new Interactable(BitmapFactory.decodeResource(getResources(), R.drawable.oil_1, options.getOptions()), 3*screenWidth/4, screenHeight+450);
powerUp = new Interactable(BitmapFactory.decodeResource(getResources(), R.drawable.power_up, options.getOptions()),
3*screenWidth/4, screenHeight+130);
I'm new to OpenCV and I want to run a Java program for face detection using OpenCV.
Including only one haarcascade XML file doesn't give me the expected results, so I need to run two or three haarcascade files in the same program (specifically "haarcascade_frontalface_alt.xml" and "haarcascade_profileface.xml" together).
I tried to do it with the following code, but it didn't work. Please suggest how to proceed.
Thank you.
public class LiveFeed extends WatchDogBaseFrame {

    private DaemonThread myThread = null;
    int count = 0;
    VideoCapture webSource = null;
    Mat frame = new Mat();
    MatOfByte mem = new MatOfByte();
    CascadeClassifier faceDetector1 = new CascadeClassifier("/home/erandi/NetBeansProjects/WatchDog/src/ueg/watchdog/view/haarcascade_frontalface_alt.xml");
    CascadeClassifier faceDetector2 = new CascadeClassifier("/home/erandi/NetBeansProjects/WatchDog/src/ueg/watchdog/view/haarcascade_eye.xml");
    MatOfRect faceDetections = new MatOfRect();

    public LiveFeed(WatchDogBaseFrame parentFrame) {
        super(parentFrame);
        initComponents();
        super.setCloseOperation();
        jButtonExit.setVisible(false);
    }

    //class of daemon thread
    public class DaemonThread implements Runnable {
        protected volatile boolean runnable = false;

        @Override
        public void run() {
            synchronized (this) {
                while (runnable) {
                    if (webSource.grab()) {
                        try {
                            webSource.retrieve(frame);
                            Graphics graphics = jPanelVideo.getGraphics();
                            faceDetector1.detectMultiScale(frame, faceDetections);
                            faceDetector2.detectMultiScale(frame, faceDetections);
                            for (Rect rect : faceDetections.toArray()) {
                                // System.out.println("ttt");
                                Imgproc.rectangle(frame, new Point(rect.x, rect.y),
                                        new Point(rect.x + rect.width, rect.y + rect.height),
                                        new Scalar(0, 255, 0));
                            }
                            Imgcodecs.imencode(".bmp", frame, mem);
                            Image im = ImageIO.read(new ByteArrayInputStream(mem.toArray()));
                            BufferedImage buff = (BufferedImage) im;
                            if (graphics.drawImage(buff, 0, 0, getWidth(), getHeight() - 150, 0, 0, buff.getWidth(), buff.getHeight(), null)) {
                                if (runnable == false) {
                                    System.out.println("Paused ..... ");
                                    this.wait();
                                }
                            }
                        } catch (Exception ex) {
                            System.out.println("Error");
                        }
                    }
                }
            }
        }
    }
}
Object Detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their paper, "Rapid Object Detection using a Boosted Cascade of Simple Features" in 2001. It is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images.
OpenCV already contains many pre-trained classifiers for face, eyes, smile etc. Those XML files are stored in opencv/data/haarcascades/ folder.
You can't run many cascade files simultaneously to gain performance, but you can use them one by one in a loop, passing the input image through each.
Example code is given in this link: OpenCv sample code
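As a rough sketch of that loop (variable names follow the question's code; note the posted snippet reuses one faceDetections MatOfRect for both calls, so the second detectMultiScale overwrites the first set of results):
// Sketch: apply each cascade in turn, each with its own output MatOfRect,
// then draw every rectangle found by either detector.
// Assumes java.util.Arrays and java.util.ArrayList are imported.
List<CascadeClassifier> detectors = Arrays.asList(faceDetector1, faceDetector2);
List<Rect> allDetections = new ArrayList<>();
for (CascadeClassifier detector : detectors) {
    MatOfRect found = new MatOfRect(); // fresh output per cascade
    detector.detectMultiScale(frame, found);
    allDetections.addAll(found.toList());
}
for (Rect rect : allDetections) {
    Imgproc.rectangle(frame, new Point(rect.x, rect.y),
            new Point(rect.x + rect.width, rect.y + rect.height),
            new Scalar(0, 255, 0));
}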
I have been stuck on this problem for a couple of days. I want to make an Android app that takes a picture and extracts HOG features of that image for future processing. The problem is that the code below always returns the HOG descriptors with zero values.
@Override
public void onPictureTaken(byte[] data, Camera camera) {
    Log.i(TAG, "Saving a bitmap to file");
    // The camera preview was automatically stopped. Start it again.
    mCamera.startPreview();
    mCamera.setPreviewCallback(this);
    this.disableView();
    Bitmap bitmapPicture = BitmapFactory.decodeByteArray(data, 0, data.length);
    myImage = new Mat(bitmapPicture.getWidth(), bitmapPicture.getHeight(), CvType.CV_8UC1);
    Utils.bitmapToMat(bitmapPicture, myImage);
    Bitmap bm = Bitmap.createBitmap(myImage.cols(), myImage.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(myImage.clone(), bm);
    // find the imageview and draw it!
    ImageView iv = (ImageView) getRootView().findViewById(R.id.imageView);
    this.setVisibility(SurfaceView.GONE);
    iv.setVisibility(ImageView.VISIBLE);

    Mat forHOGim = new Mat();
    org.opencv.core.Size sz = new org.opencv.core.Size(64, 128);
    Imgproc.resize(myImage, myImage, sz);
    Imgproc.cvtColor(myImage, forHOGim, Imgproc.COLOR_RGB2GRAY);
    //forHOGim = myImage.clone();

    MatOfFloat descriptors = new MatOfFloat(); //an empty vector of descriptors
    org.opencv.core.Size winStride = new org.opencv.core.Size(64 / 2, 128 / 2); //50% overlap in the sliding window
    org.opencv.core.Size padding = new org.opencv.core.Size(0, 0); //no padding around the image
    MatOfPoint locations = new MatOfPoint(); //an empty vector of locations, so perform full search
    //HOGDescriptor hog = new HOGDescriptor();
    HOGDescriptor hog = new HOGDescriptor(sz, new org.opencv.core.Size(16, 16), new org.opencv.core.Size(8, 8), new org.opencv.core.Size(8, 8), 9);
    Log.i(TAG, "Constructed");
    hog.compute(forHOGim, descriptors, new org.opencv.core.Size(16, 16), padding, locations);
    Log.i(TAG, "Computed");
    Log.i(TAG, String.valueOf(hog.getDescriptorSize()) + " " + descriptors.size());
    Log.i(TAG, String.valueOf(descriptors.get(12, 0)[0]));
    double dd = 0.0;
    for (int i = 0; i < 3780; i++) {
        if (descriptors.get(i, 0)[0] != dd) Log.i(TAG, "NOT ZERO");
    }
    Bitmap bm2 = Bitmap.createBitmap(forHOGim.cols(), forHOGim.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(forHOGim, bm2);
    iv.setImageBitmap(bm2);
}
So in the logcat I never get the NOT ZERO message. Whatever changes I make to this code, I always get zeros in the descriptors MatOfFloat... And the strange part is, if I uncomment the HOGDescriptor hog = new HOGDescriptor(); line and use that one instead of the one I am using now, my application crashes...
The rest of the code runs fine; the picture is always taken and displayed on my ImageView as I expect.
Any help will be appreciated.
Thanks in advance.
The problem was inside the library. When I executed the same code with OpenCV 2.4.13 for Linux rather than for Android, it worked as expected. So I hope they fix the problems with HOGDescriptor in OpenCV4Android.
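For anyone who wants to reproduce the comparison, a minimal desktop-side sanity check might look like this (a sketch assuming the OpenCV 2.4.x Java bindings and a hypothetical test.png):
// Sketch: compute HOG descriptors for a 64x128 grayscale image on desktop
// and check that they are not all zero.
Mat img = Highgui.imread("test.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
Imgproc.resize(img, img, new Size(64, 128));
HOGDescriptor hog = new HOGDescriptor(new Size(64, 128), new Size(16, 16),
        new Size(8, 8), new Size(8, 8), 9);
MatOfFloat descriptors = new MatOfFloat();
hog.compute(img, descriptors);
double sum = Core.sumElems(descriptors).val[0];
System.out.println("descriptors: " + descriptors.rows() + ", sum = " + sum); // sum > 0 on a working build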
I am doing a project where I need to identify certain areas of the image. After processing the image and removing all the unnecessary things, I finally get the area I need, as shown in the image (the area inside the green circle).
I am unable to draw a circle around that area using OpenCV. I am currently using the Java version of OpenCV. If someone can point me in the right direction on how to draw that green circle over the image, it would be very helpful.
Things I have tried to detect that area:
Blob detector - did not achieve much.
Clustering - same as the blob detector.
HoughCircles - draws unnecessary circles in the image.
FindContour - did not draw anything, since the area is not a perfect circle, ellipse, or any other well-known polygon.
I appreciate your help.
Here is a solution:
Opening, in order to clean the image of all the thin/elongated patterns.
Connected component labeling, in order to count the remaining patterns.
Size counting of each remaining pattern.
The biggest pattern is the one you want to circle.
Note: if you want to perfectly preserve the pattern, you can replace the opening with an opening by reconstruction (erosion + geodesic reconstruction).
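A rough OpenCV/Java sketch of that pipeline, using findContours for the labeling step and minEnclosingCircle to draw the result (the binary input and the kernel size are assumptions):
// Sketch: opening removes thin/elongated patterns, contours act as the
// connected-component labeling, and the largest remaining pattern is circled.
static void circleBiggestPattern(Mat binary, Mat colorOut) {
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(9, 9));
    Imgproc.morphologyEx(binary, binary, Imgproc.MORPH_OPEN, kernel);
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(binary, contours, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    MatOfPoint biggest = null;
    double maxArea = 0;
    for (MatOfPoint c : contours) {
        double area = Imgproc.contourArea(c);
        if (area > maxArea) { maxArea = area; biggest = c; }
    }
    if (biggest != null) {
        Point center = new Point();
        float[] radius = new float[1];
        Imgproc.minEnclosingCircle(new MatOfPoint2f(biggest.toArray()), center, radius);
        Core.circle(colorOut, center, (int) radius[0], new Scalar(0, 255, 0), 2); // Imgproc.circle in OpenCV 3.x
    }
}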
I finally found a solution for my problem. I used the feature detector from the OpenCV library and gave the right threshold to the detector. That did the trick for me. The Java code is below.
public static void main(String[] args) {
    try {
        //Validation whether a file name is passed to the function
        if (args.length == 0) {
            System.out.println("here...");
            log.error("No file was passed to the function");
            throw new IOException();
        }
        //Read the image from the input
        Mat inputMat = Highgui.imread(args[0], Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        //Create a feature detector. In this case we are using the SURF (Speeded-Up Robust Features) detector.
        MatOfKeyPoint objectKeyPoints = new MatOfKeyPoint();
        FeatureDetector featureDetector = FeatureDetector.create(FeatureDetector.SURF);
        //A temporary file is created to pass the Hessian threshold to the SURF detector
        File tempFile = File.createTempFile("config", ".yml");
        String settings = "%YAML:1.0\nhessianThreshold: 7000.\noctaves: 3\noctaveLayers: 4\nupright: 0\n";
        FileWriter writer = new FileWriter(tempFile, false);
        writer.write(settings);
        writer.close();
        //Read the configuration from the temporary file to assign the threshold for the detector
        featureDetector.read(tempFile.getPath());
        //Detect the features in the image provided
        featureDetector.detect(inputMat, objectKeyPoints);
        //Iterate through the list of key points detected in the previous step and find the key point with the largest size
        List<KeyPoint> objectKeyPointList = objectKeyPoints.toList();
        KeyPoint impKeyPoint = new KeyPoint();
        for (int i = 0; i < objectKeyPointList.size(); i++) {
            if (impKeyPoint.size < objectKeyPointList.get(i).size) {
                impKeyPoint = objectKeyPointList.get(i);
            }
        }
        //Normalize the size of the key point to 120: shrink it if larger, grow it if smaller
        if (impKeyPoint.size != 120) {
            impKeyPoint.size = 120;
        }
        //Convert the key point to MatOfKeyPoint, since drawKeypoints accepts only MatOfKeyPoint
        MatOfKeyPoint impMatOfKeyPoint = new MatOfKeyPoint(impKeyPoint);
        //Mat for drawing the circle in the image
        Mat outputImage = new Mat(inputMat.rows(), inputMat.cols(), CvType.CV_8UC3);
        //Green color for the circle
        Scalar greenCircle = new Scalar(0, 255, 0);
        //Draw the circle around the optic nerve when detected
        Features2d.drawKeypoints(inputMat, impMatOfKeyPoint, outputImage, greenCircle, Features2d.DRAW_RICH_KEYPOINTS);
        //Write the image to a file
        Highgui.imwrite("surf_keypoints.png", outputImage);
    } catch (Exception e) {
        log.fatal(e.getMessage());
    }
}
Hope this is helpful for others.
I have quite the annoying problem. I'm building an app where one can share photos. On the SurfaceView where you take the actual photo, the resolution is great. However, when I retrieve that image and display it in a ListView using Picasso, the resolution goes to crap. The pixelation is real. Is there anything I'm doing horrendously wrong to cause this? The first code snippet below is where I actually save the photo, and the one below that is the getItemView() method in my adapter for the ListView. Thanks in advance.
Note that the "photo" variable you see in my code is a Parse subclass I've created to make it easier to work with the data associated with each photo. I think you can safely ignore it.
EDIT:
SurfaceView of Camera:
Note that I attempt to set the camera parameters to the highest quality allowed. Unfortunately, when I log size.width and size.height, I only get around 176x144. Is there a way to get a higher resolution from the supported camera sizes themselves?
camera.setDisplayOrientation(90);
Parameters parameters = camera.getParameters();
parameters.set("jpeg-quality", 70);
parameters.setPictureFormat(ImageFormat.JPEG);
List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
Size size = sizes.get(sizes.size() - 1);
parameters.setPictureSize(size.width, size.height);
camera.setParameters(parameters);
camera.setDisplayOrientation(90);
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
parameters.setPreviewSize(size2.width, size2.height);
camera.setPreviewDisplay(holder);
camera.startPreview();
Saving the photo:
// Freeze camera
camera.stopPreview();
// Resize photo
Bitmap mealImage = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap mealImageScaled = Bitmap.createScaledBitmap(mealImage, 640, 640, false);
// Override Android default landscape orientation and save portrait
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedScaledMealImage = Bitmap.createBitmap(mealImageScaled, 0,
0, mealImageScaled.getWidth(), mealImageScaled.getHeight(),
matrix, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
rotatedScaledMealImage.compress(Bitmap.CompressFormat.JPEG, 100, bos);
byte[] scaledData = bos.toByteArray();
// Save the scaled image to Parse with the date and time as its file name.
DateTime currentTime = new DateTime();
DateTimeFormatter fmt = DateTimeFormat.forPattern("HH MM SS");
photoFile = new ParseFile(currentTime.toString(fmt), scaledData);
photo.setPhotoFile(photoFile);
Displaying it:
final ParseImageView photoView = holder.photoView;
ParseFile photoFile = photo.getParseFile("photo");
Picasso.with(getContext())
        .load(photoFile.getUrl())
        .into(photoView, new Callback() {
            @Override
            public void onError() {
            }

            @Override
            public void onSuccess() {
            }
        });
The problem is not with Picasso. It's because of this line of code:
parameters.set("jpeg-quality", 70);
and these:
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
When you set up the camera, you already turned the quality down to 70% (per the Android documentation, the range of jpeg-quality is 0-100).
You also need to check whether the camera size is correct, because that code just assumes it (notice it reads from sizes instead of sizes2).
You can try this code to get the best preview size for your preferred width and height:
private Camera.Size getBestPreviewSize(int width, int height, Camera.Parameters parameters) {
    List<Camera.Size> sizeList = parameters.getSupportedPreviewSizes();
    Camera.Size bestSize = sizeList.get(0);
    for (int i = 1; i < sizeList.size(); i++) {
        if ((sizeList.get(i).width * sizeList.get(i).height) >
                (bestSize.width * bestSize.height)) {
            bestSize = sizeList.get(i);
        }
    }
    return bestSize;
}
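Hypothetical usage during camera setup (surfaceWidth and surfaceHeight stand in for whatever your SurfaceView reports):
// Sketch: plug the helper in instead of blindly taking sizes.get(0).
Camera.Parameters params = camera.getParameters();
Camera.Size best = getBestPreviewSize(surfaceWidth, surfaceHeight, params);
params.setPreviewSize(best.width, best.height);
// Pick the picture size the same way from getSupportedPictureSizes() before saving.
camera.setParameters(params);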
I hope this answer helps you. If you have another question about it, feel free to ask in the comments :)
How do I change the window size in libgdx for android/desktop? I am confused about window sizing and not sure how to solve this problem.
For the desktop window I want 500x500, but on Android I want full screen, so I can't hard-code it.
For some reason ANDROID_WIDTH is always equal to WINDOW_WIDTH.
int WINDOW_WIDTH = 500;
int WINDOW_HEIGHT = 500;
public void create() {
    if (Gdx.app.getType() == ApplicationType.Android) {
        int ANDROID_WIDTH = Gdx.graphics.getWidth();
        int ANDROID_HEIGHT = Gdx.graphics.getHeight();
        camera = new OrthographicCamera(ANDROID_WIDTH, ANDROID_HEIGHT);
        camera.translate(ANDROID_WIDTH / 2, ANDROID_HEIGHT / 2);
        camera.update();
    } else {
        camera = new OrthographicCamera(WINDOW_WIDTH, WINDOW_HEIGHT);
        camera.translate(WINDOW_WIDTH / 2, WINDOW_HEIGHT / 2);
        camera.update();
    }
    Gdx.input.setInputProcessor(new GameInputProcessor());
}
You cannot change the size of the window by changing the camera; they are two separate concepts.
You set the desktop window size in your main method through the LwjglApplicationConfiguration.
The Android application is full screen anyway.
LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
cfg.title = "Title";
cfg.useGL20 = true;
cfg.height = 640;
cfg.width = 360;
new LwjglApplication(new MyGame(), cfg);
You can give the window size on startup:
new LwjglApplication(yourApplicationListener(), "Title", 500, 500, true/false (useGL2));
You can also change it in your game using:
Gdx.graphics.setDisplayMode(500, 500, true/false (fullscreen));
You can surround this with an if statement, like stuntmania said:
if (Gdx.app.getType().equals(ApplicationType.Android)) {
    Gdx.graphics.setDisplayMode(500, 500, false);
} else {
    Gdx.graphics.setDisplayMode(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
}
EDIT:
In LibGDX 1.8 the method Gdx.graphics.setDisplayMode has been renamed to Gdx.graphics.setWindowedMode:
API Change: Graphics#setDisplayMode(int, int, boolean) has been renamed to
Graphics#setWindowedMode(int, int). This will NOT allow you to switch to fullscreen anymore,
use Graphics#setFullscreenMode() instead. If the window is in fullscreen mode, it will be
switched to windowed mode on the monitor the window was in fullscreen mode on.
(Source)
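On 1.8+ the per-platform branch above would therefore look something like this (a sketch mirroring the earlier snippet):
// Sketch: windowed 500x500 on desktop, explicit fullscreen elsewhere.
if (Gdx.app.getType().equals(ApplicationType.Android)) {
    Gdx.graphics.setFullscreenMode(Gdx.graphics.getDisplayMode());
} else {
    Gdx.graphics.setWindowedMode(500, 500);
}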
Do this in your desktop project:
cfg.width=500;
cfg.height=500;
and in your main class:
int ANDROID_WIDTH = Gdx.graphics.getWidth();
int ANDROID_HEIGHT = Gdx.graphics.getHeight();
camera = new OrthographicCamera();
camera.setToOrtho(false, ANDROID_WIDTH, ANDROID_HEIGHT);
camera.update();