How to draw a line on the camera screen with J2ME?

I am currently developing an application for decoding barcodes on mobile phones.
My problem is how to draw a line or a square on the camera screen so the user can easily line up the barcode.
What is the easiest way of doing this?

Unfortunately this isn't as easy as it sounds. If you have a preview image from a phone's camera, it is often rendered as an overlay. This means that the camera preview doesn't actually form any part of your application's canvas and you can't interact directly with its pixels. The phone simply draws the preview on top of your application, completely out of your control.
If you draw a line on your screen, then it will be drawn underneath the preview image.
The way around this isn't too pretty. You need to actually capture an image from the camera. Unfortunately this means capturing a JPEG or a PNG file into a byte buffer. You then load this image using Image.createImage and render that to the screen. You can then safely draw on top of that.
This also has the undesirable downside of giving you an appalling frame-rate. You might want to enumerate all the possible file formats you can capture in and try them all to see which one is quickest.
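To make that concrete, here is a minimal MMAPI (JSR-135) sketch of the capture-and-redraw approach. The class and method names are mine, snapshot support and encodings are very device-dependent, and you would drive captureFrame() from a timer or a separate thread:

import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;

public class BarcodeCanvas extends Canvas {
    private Player player;
    private VideoControl videoControl;
    private Image frame;

    public void startCamera() throws Exception {
        player = Manager.createPlayer("capture://video");
        player.realize();
        videoControl = (VideoControl) player.getControl("VideoControl");
        videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this);
        videoControl.setVisible(false); // hide the live overlay; we draw captured frames instead
        player.start();
    }

    // Call this repeatedly (e.g. from a timer thread) to refresh the display.
    public void captureFrame() throws Exception {
        byte[] raw = videoControl.getSnapshot(null); // default encoding; try others for speed
        frame = Image.createImage(raw, 0, raw.length);
        repaint();
    }

    protected void paint(Graphics g) {
        if (frame != null) {
            g.drawImage(frame, 0, 0, Graphics.TOP | Graphics.LEFT);
        }
        // The guide rectangle now sits on top of the captured frame,
        // not underneath a hardware preview overlay.
        g.setColor(0x00FF00);
        g.drawRect(getWidth() / 4, getHeight() / 4, getWidth() / 2, getHeight() / 2);
    }
}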

You can do this by using OverlayControl, assuming that your target devices support it.
I think I remember seeing a good example on the Sony Ericsson developer forums.
Edit: found this (does not involve use of OverlayControl)

Related

How to detect what the content on the display is?

I have an app that rotates other apps. It uses an accessibility service so I have the ability to retrieve screen content.
Say I'm rotating a game called Geometry Dash to portrait orientation. Sometimes it will rotate correctly and look like this:
But other times it incorrectly appears like this:
Is it possible to detect when half the screen is black like in the second pic?
Perhaps using AccessibilityNodeInfo is the key since that class gives information about what's on the screen?
You could do it easily by loading the content of the screen into a bitmap and splitting the bitmap into two halves.
Then check whether one of them is entirely black.
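A rough sketch of that check, assuming you already have the screen content as a Bitmap (for instance from AccessibilityService.takeScreenshot() on API 30+); the sampling step and the "black" thresholds are arbitrary choices:

import android.graphics.Bitmap;
import android.graphics.Color;

public final class BlackHalfDetector {

    // Returns true if either half of the screen bitmap is (almost) entirely black.
    public static boolean hasBlackHalf(Bitmap screen) {
        int w = screen.getWidth();
        int h = screen.getHeight();
        Bitmap left = Bitmap.createBitmap(screen, 0, 0, w / 2, h);
        Bitmap right = Bitmap.createBitmap(screen, w / 2, 0, w - w / 2, h);
        return isMostlyBlack(left) || isMostlyBlack(right);
    }

    private static boolean isMostlyBlack(Bitmap half) {
        int black = 0, total = 0;
        // Sample every 10th pixel; checking every pixel is unnecessary for this.
        for (int y = 0; y < half.getHeight(); y += 10) {
            for (int x = 0; x < half.getWidth(); x += 10) {
                int c = half.getPixel(x, y);
                if (Color.red(c) < 10 && Color.green(c) < 10 && Color.blue(c) < 10) {
                    black++;
                }
                total++;
            }
        }
        // Treat the half as "black" if about 95% of sampled pixels are near-black.
        return total > 0 && black >= total * 0.95;
    }
}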

ARCore - reading superimposed image (screenshot of an AR scene)

I've been experimenting with ARCore for the past few months and have read almost all the documentation. Talking in reference to the sample app, what I want to do is extract the superimposed image from the app, i.e. a frame containing the camera texture and also the bots drawn by OpenGL (like a screenshot). In preview 2 they provided the TextureReader class, which extracts just the camera texture. I've been trying a lot but haven't been able to get the superimposed image. Is there a way to do it, or is it just impossible?
Sample code specifically for the HelloAR sample to capture the image (and save it to the device) is in this answer: How to take picture with camera using ARCore
I think basically you want to have a screenshot from the OpenGL view. This question should help you: Screenshot on android OpenGL ES application
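Concretely, that usually means calling glReadPixels on the GL thread right after the frame has been drawn (e.g. at the end of the renderer's onDrawFrame), which captures the camera texture and everything rendered on top of it. A sketch, with the vertical flip that GL's bottom-left origin makes necessary:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.opengl.GLES20;

public final class GlScreenshot {

    // Must be called on the GL thread, after the frame has been drawn.
    public static Bitmap readCurrentFrame(int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
        buf.order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        buf.rewind();
        bmp.copyPixelsFromBuffer(buf);
        // GL's origin is bottom-left, so the bitmap comes out vertically
        // flipped; mirror it the right way up before saving or displaying.
        Matrix flip = new Matrix();
        flip.postScale(1, -1);
        return Bitmap.createBitmap(bmp, 0, 0, width, height, flip, false);
    }
}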

Real time logo detection in live camera preview with OpenCV on Android

I am new to computer vision but I am trying to code an android app which does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if found; if there is no match, don't draw the rectangle.
I already tried a couple of things including template-matching and feature detection using ORB.
Why those didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant of it, but a) the performance was really bad and b) the rectangle was of course always shown, since template matching always returns some best-match location. There was no way to actually confirm in the code whether the logo was found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked OK-ish. The other problem was that, again, I could never be sure whether the logo was in the picture or not: ORB found random matches even when the logo was not there.
Like I said, I am very new to this. I would appreciate help on what would be the best way to achieve the following:
Confirm whether a picture A (around 200x200 pixels) appears in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50 ms per frame. I don't know if that's even possible, though. If a correct way to do this takes a bit longer than that, I would just do the work in a separate thread and only analyze every fifth camera frame or so.
Would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend using an OpenCV Haar cascade classifier. It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few rules, like the minimum and maximum size of the logo to be detected and the regions of the image where it can appear, you can run the detector faster than the 5-6 fps you saw with ORB.
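A minimal sketch with OpenCV's Java bindings; logo_cascade.xml stands in for a cascade you would train yourself (e.g. with opencv_traincascade), and the size limits are example rules rather than recommendations:

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class LogoDetector {
    private final CascadeClassifier cascade =
            new CascadeClassifier("logo_cascade.xml"); // path to your trained cascade

    // Returns the bounding boxes of detected logos; an empty array means no
    // match, which answers the "was it actually there?" problem.
    public Rect[] detect(Mat frameRgba) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frameRgba, gray, Imgproc.COLOR_RGBA2GRAY);
        MatOfRect hits = new MatOfRect();
        // Restricting the scale range (here 100-400 px) prunes the image
        // pyramid and is what makes per-frame detection fast.
        cascade.detectMultiScale(gray, hits, 1.1, 3, 0,
                new Size(100, 100), new Size(400, 400));
        return hits.toArray();
    }
}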

Gauge Animation in Android

I'm trying to create a gauge/meter/dial animation as part of an application I'm trying to build.
What I'm trying to do is to display the acceleration values that I get via the sensors on a gauge. Now I'm able to successfully get the values from the sensors but I don't know how to display them as an animated gauge/dial.
I've looked at the frame-by-frame and tweened animations, but they don't suit my needs, because I can't seem to use them to change the animation based on input from the code.
I should be able to display an image for the gauge and an image for the needle, and then change the rotation of the needle image according to the values to make it look animated, but I don't know how to do this.
Is there a way a do what I'm trying to do?
Is there a better/alternate way?
I'm using the standard Android SDK and Eclipse, and trying to support devices with Android 2.2 upwards.
Hi Ayos. Draw the background; a solid color is best. Then draw your indicator in a different color (black). When you get a new value, test whether it is different, and if it is, draw the indicator in the background color (erasing it) and then draw the new indicator. You will need to figure out a way of scaling your raw data into degrees for drawing the indicator. It's best if your difference test skips the update when the change would move the indicator by less than a degree or two. If you use an image or pattern for the background, then erasing the indicator becomes a big job: you may have to redraw the background every time, which takes a lot of processing time and can make the gauge look jumpy. Cliff
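Cliff's erase-and-redraw trick matters on very slow devices; on most hardware it is simpler to let a custom View redraw the whole gauge in onDraw and rotate the needle with the Canvas. A sketch of that alternative; the drawable names, the value-to-degrees scaling, and the one-degree threshold are all illustrative assumptions:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.View;

public class GaugeView extends View {
    private final Bitmap gauge;
    private final Bitmap needle;
    private float degrees;

    public GaugeView(Context context, AttributeSet attrs) {
        super(context, attrs);
        // R.drawable.gauge / R.drawable.needle are placeholders for your images.
        gauge = BitmapFactory.decodeResource(getResources(), R.drawable.gauge);
        needle = BitmapFactory.decodeResource(getResources(), R.drawable.needle);
    }

    public void setValue(float value) {
        float newDegrees = value * 18f; // example scaling: 0..10 maps to 0..180 degrees
        // Skip sub-degree changes, as suggested above, to avoid needless redraws.
        if (Math.abs(newDegrees - degrees) >= 1f) {
            degrees = newDegrees;
            invalidate();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(gauge, 0, 0, null);
        canvas.save();
        canvas.rotate(degrees, getWidth() / 2f, getHeight() / 2f); // pivot at the dial center
        canvas.drawBitmap(needle, 0, 0, null);
        canvas.restore();
    }
}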

Prevent flipping of the front facing camera

I'm attempting to access the raw feed of android's front facing camera. By default, the front facing camera's preview is flipped horizontally so users can see themselves as if looking into a mirror - that's great, but not what I need. What's the best way to get the raw feed? Is there some way to disable the automatic flipping, or should I attempt to flip it in code myself? My application needs to display a real-time feed of the front facing camera without it being flipped like a mirror.
If you want to use a front-facing camera for barcode scanning you can use TextureView and apply a transformation matrix to it. When the texture is updated you can read the image data and use that.
See https://github.com/hadders/camera-reverse
Specifically from MainActivity.java
mCamera.setDisplayOrientation(90);   // rotate the preview into portrait
Matrix matrix = new Matrix();
matrix.setScale(-1, 1);              // mirror horizontally about x = 0...
matrix.postTranslate(width, 0);      // ...then shift the flipped image back on screen
mTextureView.setTransform(matrix);   // apply the flip to the TextureView preview
The data from the front camera is as the camera sees it, looking at you. The left side of its image is your right side. I think this is what you want already? When put onto a SurfaceView it is flipped so it acts as you say, but that's a separate cosmetic transformation.
At least, this is how every device I've seen works and I've looked hard at this to implement front camera support in Barcode Scanner / Barcode Scanner+.
