I'm new to Stack Overflow and to OpenCV programming.
I've opened my camera with some Java code, and it worked: the camera light came on. But when I tried to close the camera, I failed.
Code:
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

public class camera {
    public static void main(String[] args) {
        System.loadLibrary("opencv_java244");
        VideoCapture camera = new VideoCapture(0);
        if (camera.isOpened()) {
            System.out.println("Camera is ready!");
        } else {
            System.out.println("Camera Error!");
            return;
        }
        Mat newMat = new Mat();
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            //e.printStackTrace();
        }
        camera.read(newMat);
        Highgui.imwrite("testfile.jpg", newMat);
        camera.release();
        if (camera.isOpened()) {
            System.out.println("Camera is running!");
        } else {
            System.out.println("Camera closed!");
        }
    }
}
result:
Camera is ready!
Camera closed!
I really got the picture, but the light was still on!
P.S. Every time I open my camera, my computer launches a driver utility named YouCam, and I must close it manually to release the camera.
Try capture.retrieve() instead of capture.read(). Here is a snippet that works for me without even using Thread.sleep():
VideoCapture capture = new VideoCapture(0);
if (!capture.isOpened()) {
    imagePanel.add(new JLabel("Oops! Your camera is not working!"));
    return;
}
Mat frame = new Mat();
capture.retrieve(frame);
frame = FaceDetector.detect(frame);
BufferedImage image = GestureUtil.matToBufferedImage(frame);
imagePanel.setImage(image);
imagePanel.repaint();
String window_name = "Capture - Face detection.jpg";
Highgui.imwrite(window_name, frame);
capture.release();
I am using this along with Swing; however, you can ignore the Swing code. Hope this helps.
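One caveat worth adding (my note, not part of the answer above): in OpenCV's VideoCapture API, read() is equivalent to grab() followed by retrieve(), so retrieve() on its own only decodes the frame most recently grabbed. If retrieve() hands back an empty Mat, grab a frame first:

VideoCapture capture = new VideoCapture(0);
Mat frame = new Mat();
if (capture.grab()) {        // pull the next frame from the device
    capture.retrieve(frame); // decode the grabbed frame into the Mat
}
capture.release();           // free the device so the camera light goes off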
Related
I have an XR app where the display shows the (rear) camera feed. As such, capturing the screen is pretty much the same as capturing the camera feed...
So I take screenshots (Bitmaps) and then try to detect faces within them using Google's ML Kit.
I'm following the official guide to detect faces.
To do this, I first init my face detector:
FaceDetector detector;

public MyFaceDetector() {
    FaceDetectorOptions realTimeOpts =
            new FaceDetectorOptions.Builder()
                    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                    .build();
    detector = FaceDetection.getClient(realTimeOpts);
}
I then have a function which takes a bitmap. I first convert the bitmap to a byte array. I do this because InputImage.fromBitmap is very slow, and ML Kit actually tells me that I should use a byte array:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 85, byteArrayOutputStream);
byte[] byteArray = byteArrayOutputStream.toByteArray();
Next I make a mutable copy of the Bitmap (so that I can draw onto it), and set up a Canvas object, along with a color that will be used when drawing on to the Bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length, options);
Canvas canvas = new Canvas(bmp);
Paint p = new Paint();
p.setColor(Color.RED);
After all is set up, I create an InputImage (used by the FaceDetector), using the byte array:
InputImage image = InputImage.fromByteArray(byteArray, bmp.getWidth(), bmp.getHeight(),0, InputImage.IMAGE_FORMAT_NV21);
Note the image format... There is an InputImage.IMAGE_FORMAT_BITMAP, but using it throws an IllegalArgumentException. Anyway, I next try to process the Bitmap, detect faces, fill each detected face with the color defined earlier, and then save the Bitmap to disk:
Task<List<Face>> result = detector.process(image).addOnSuccessListener(
        new OnSuccessListener<List<Face>>() {
            @Override
            public void onSuccess(List<Face> faces) {
                Log.e("FACE DETECTION APP", "NUMBER OF FACES: " + faces.size());
                Thread processor = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        for (Face face : faces) {
                            Rect destinationRect = face.getBoundingBox();
                            canvas.drawRect(destinationRect, p);
                            canvas.save();
                            Log.e("FACE DETECTION APP", "WE GOT SOME FACES!!!");
                        }
                        File file = new File(someFilePath);
                        try {
                            FileOutputStream fOut = new FileOutputStream(file);
                            bmp.compress(Bitmap.CompressFormat.JPEG, 85, fOut);
                            fOut.flush();
                            fOut.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                processor.start();
            }
        })
        .addOnFailureListener(
                new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception
                        // ...
                    }
                });
}
While this code runs (i.e. no exceptions) and the bitmap is correctly written to disk, no faces are ever detected (faces.size() is always 0). I've tried rotating the image. I've tried changing the quality of the Bitmap. I've tried with and without the thread to process any detected faces. I've tried everything I can think of.
Anyone have any ideas?
ML Kit's InputImage.fromByteArray only supports the YV12 and NV21 formats. You will need to convert the bitmap to one of these formats for the ML Kit pipeline to process it. (In your code the byte array actually contains JPEG data from bitmap.compress, not NV21 pixels, which is why no faces are found.) Also, if the original image you have is a bitmap, you can probably just use InputImage.fromBitmap to construct an InputImage; it shouldn't be slower than your current approach.
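For illustration, constructing the InputImage straight from the Bitmap avoids both the JPEG round-trip and the format mismatch. A minimal sketch (the rotation of 0 is an assumption; pass the real rotation if your frames are rotated):

// Build the InputImage directly from the Bitmap; no byte-array conversion needed.
InputImage image = InputImage.fromBitmap(bitmap, 0);
detector.process(image)
        .addOnSuccessListener(faces ->
                Log.d("FACE DETECTION APP", "NUMBER OF FACES: " + faces.size()))
        .addOnFailureListener(Exception::printStackTrace);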
I was having the same issue; use InputImage.fromMediaImage(..., ...):
override fun analyze(image: ImageProxy) {
    val mediaImage: Image = image.image.takeIf { it != null } ?: run {
        image.close()
        return
    }
    val inputImage = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
    // TODO: Your ML Code
}
Check here for more details
https://developers.google.com/ml-kit/vision/image-labeling/android
I got this error and I don't know what to do; I couldn't find any other solution on this site. I run Rserve in the background on my computer and connect to localhost, but I can't get the frame to pop up.
Here is my code:
package rservedemo;

/**
 *
 * @author Carl
 */
import java.awt.*;
import java.awt.event.*;
import org.rosuda.REngine.*;
import org.rosuda.REngine.Rserve.*;

public class PlotDemo extends Canvas {
    public static void main(String[] args) {
        try {
            String device = "jpeg";
            RConnection c = new RConnection((args.length > 0) ? args[0] : "127.0.0.1");
            if (c.parseAndEval("supressWarnings(require('Cairo',quietly=TRUE))").asInteger() > 0)
                device = "CarioJPEG";
            else
                System.out.println("(Consider installing Cairo package for better bitmap output)");
            REXP xp = c.parseAndEval("Try(" + device + "('test.jpg,quality=90))");
            if (xp.inherits("Try error")) {
                System.err.println("Can't open " + device + " graphics device:\n" + xp.asString());
                REXP w = c.eval("If (exists('last.warning') && length(last.warning)>0) names(last.warning)[1] else 0");
                if (w.isString()) System.err.println(w.asString());
                return;
            }
            c.parseAndEval("data(iris); plot(iris$Sepal.Length, iris$Petal.Length); dev.off()");
            xp = c.parseAndEval("r=readBin('test.jpg','raw',1024*1024); unlink('test.jpg'); r");
            Image img = Toolkit.getDefaultToolkit().createImage(xp.asBytes());
            Frame f = new Frame("Test image");
            f.add(new PlotDemo(img));
            f.addWindowListener(new WindowAdapter() {
                public void windowClosing(WindowEvent e) { System.exit(0); }
            });
            f.pack();
            f.setVisible(true);
            c.close();
        } catch (RserveException rse) {
            System.out.println(rse);
        } catch (REXPMismatchException mme) {
            System.out.println(mme);
            mme.printStackTrace();
        } catch (Exception e) {
            System.out.println("Seomthing went wrong, but it's not Rserve: " + e.getMessage());
            e.printStackTrace();
        }
    }

    Image img;

    public PlotDemo(Image img) {
        this.img = img;
        MediaTracker mediaTracker = new MediaTracker(this);
        mediaTracker.addImage(img, 0);
        try {
            mediaTracker.waitForID(0);
        } catch (InterruptedException ie) {
            System.err.println(ie);
            System.exit(1);
        }
        setSize(img.getWidth(null), img.getHeight(null));
    }

    public void paint(Graphics g) {
        g.drawImage(img, 0, 0, null);
    }
}
And here is the error. I have tried to change line 27 but couldn't do anything useful. When I run
c.parseAndEval("data(iris); plot(iris$Sepal.Length, iris$Petal.Length); dev.off()");
in R directly it works, so that doesn't seem to be the problem.
Seomthing went wrong, but it's not Rserve: eval failed, request status: error code: 127
org.rosuda.REngine.REngineException: eval failed, request status: error code: 127
at org.rosuda.REngine.Rserve.RConnection.parseAndEval(RConnection.java:454)
at org.rosuda.REngine.REngine.parseAndEval(REngine.java:108)
at rservedemo.PlotDemo.main(PlotDemo.java:27)
Thankful for any help.
Usually process exit code 127 means "file not found".
In your case the problematic line may be:
REXP xp = c.parseAndEval("Try("+device+"('test.jpg,quality=90))");
because you have a mistake (typo) in this line:
(c.parseAndEval("supressWarnings(require('Cairo',quietly=TRUE))").asInteger()>0) device="CarioJPEG";
Note: CarioJPEG instead of CairoJPEG.
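There are further typos in the same strings that R itself will reject: supressWarnings should be suppressWarnings, Try(...) and If(...) must be lowercase try(...) and if(...) since R is case-sensitive, the error class is "try-error" rather than "Try error", and 'test.jpg is missing its closing quote. With those fixed, the relevant lines would look roughly like this (a sketch of the intent, not the poster's exact code):

// corrected package check: suppressWarnings and CairoJPEG
if (c.parseAndEval("suppressWarnings(require('Cairo',quietly=TRUE))").asInteger() > 0)
    device = "CairoJPEG";

// corrected open call: lowercase try() and a closing quote on 'test.jpg'
REXP xp = c.parseAndEval("try(" + device + "('test.jpg', quality=90))");

// on failure R tags the result with class "try-error"
if (xp.inherits("try-error"))
    System.err.println("Can't open " + device + " graphics device:\n" + xp.asString());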
I just finished an app that has a JPanel showing images from the webcam, using OpenCV 2.4.8 and VideoCapture. When I close the frame in NetBeans the program keeps running, so I have to stop it with NetBeans' stop button. When I run the app's *.jar in Windows, it works fine the first time, but when I close it and open it again there is a problem with the camera, as if it were already in use! Any ideas?
Here is a piece of the code:
public static void main(String[] args) {
    Permisos objPer = new Permisos();
    objPer.setLocationRelativeTo(null);
    objPer.setBounds(0, 0, 468, 328);
    objPer.setVisible(true);
    objPer.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    webFrame panel = new webFrame();
    pan1.add(panel);
    Mat webcam = new Mat();
    video = new VideoCapture(0);
    if (video.isOpened()) {
        while (true) {
            video.read(webcam);
            panel.MatToBufferedImage(webcam);
            panel.setSize(webcam.width() + 40, webcam.height() + 60);
            panel.repaint();
        }
    } else {
        System.out.println("no video open");
    }
}
I get the camera to work again when I unplug it and plug it back into the PC.
Got it working! The VideoCapture class has a method named release(); I just needed to call it when the app exits... Thanks for the help, Mayur!
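For anyone hitting the same thing, a minimal sketch of that fix (it assumes the question's Permisos frame and static video field; the listener itself is my illustration, not the poster's exact code):

objPer.addWindowListener(new java.awt.event.WindowAdapter() {
    @Override
    public void windowClosing(java.awt.event.WindowEvent e) {
        // Release the camera before exiting so the device is free
        // the next time the application starts.
        if (video != null && video.isOpened()) {
            video.release();
        }
        System.exit(0);
    }
});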
Extremely slow frame rates when using this OpenCV Java method in Android for detecting circular-shaped objects in images:
Imgproc.HoughCircles(mGray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 50);
When I remove this method it runs fast, but after adding it inside this callback,
public Mat onCameraFrame(final CvCameraViewFrame inputFrame) {
the frame rate slows to 1 to 2 frames per second. I don't understand why it gets so slow. I tried putting this method in a separate thread, and it did not help; the only thing that worked was to use a counter and an if statement to run the method every 10 frames.
In the OpenCV examples there is a sample project called face detection, with both native C++ and Java camera versions, and both are very fast. How is it possible that when I use similar code I get such slow, constipated performance from OpenCV?
Is there something I am doing wrong here? In the face detection project from the OpenCV examples they process every frame and don't launch a separate thread. How do I fix this problem and make my code run as fast as the sample projects in OpenCV?
In a different project I am also having the same slow-frame-rate problem. That practice project does not use OpenCV at all, just the Android Camera class: I take the image from the onPreviewFrame(byte[] data, Camera camera) method and do some light processing, like converting the YUV byte array into a bitmap and putting it into another view on the same screen as the camera view, and the result is a very slow frame rate.
EDIT: In some additional experimentation I added the Imgproc.HoughCircles() method to the OpenCV face detection sample project, putting it inside the onCameraFrame method of the Java detector.
The result was the same as in my project: it became very slow. So the HoughCircles method probably takes more processing power than the face-detection method CascadeClassifier.detectMultiScale(). However, that does not square with the circle-detection projects I have watched on YouTube, whose videos show no slowdown in frame rate. That is why I think there is something wrong with what I am doing.
Here is a sample of the code I am using:
public class CircleActivity extends Activity implements CvCameraViewListener2 {

    Mat mRgba;
    Mat mGray;
    File mCascadeFile;
    CascadeClassifier mJavaDetector;
    CameraBridgeViewBase mOpenCvCameraView;
    LinearLayout linearLayoutOne;
    ImageView imageViewOne;
    int counter = 0;

    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS: {
                    Log.i("OPENCV", "OpenCV loaded successfully");
                    mOpenCvCameraView.enableView();
                } break;
                default: {
                    super.onManagerConnected(status);
                } break;
            }
        }
    };

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        if (!OpenCVLoader.initDebug()) {
            // Handle initialization error
        }
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_coffee);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.fd_activity_surface_view);
        mOpenCvCameraView.setCvCameraViewListener(this);
    }

    @Override
    public void onPause() {
        super.onPause();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }

    @Override
    public void onResume() {
        super.onResume();
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this, mLoaderCallback);
    }

    public void onDestroy() {
        super.onDestroy();
        mOpenCvCameraView.disableView();
    }

    public void onCameraViewStarted(int width, int height) {
        mGray = new Mat();
        mRgba = new Mat();
    }

    public void onCameraViewStopped() {
        mGray.release();
        mRgba.release();
    }

    public Mat onCameraFrame(final CvCameraViewFrame inputFrame) {
        mRgba = inputFrame.rgba();
        mGray = inputFrame.gray();
        if (counter == 9) {
            MatOfRect circles = new MatOfRect();
            Imgproc.HoughCircles(mGray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 50);
            // returns number of circular objects found
            Log.e("circle check", "circles.cols() " + circles.cols());
        }
        counterAdder();
        return mRgba;
    } // end onCameraFrame

    public void counterAdder() {
        if (counter > 10) {
            counter = 0;
        }
        counter++;
    }
}
Reducing the resolution of the camera frames might help:
mOpenCvCameraView.setMaxFrameSize(640, 480);
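In the question's code this call would go in onCreate, after the view is obtained and before it is enabled, for example (based on the code above):

mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.fd_activity_surface_view);
mOpenCvCameraView.setMaxFrameSize(640, 480); // request smaller frames before streaming starts
mOpenCvCameraView.setCvCameraViewListener(this);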
From my brief experience, the running time of HoughCircles depends greatly on the image. A textured image with a lot of potential circles takes much longer than an image with a uniform background. Hope this helps.
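Narrowing the parameter space is one way to bound that cost. A sketch using the full HoughCircles overload (the threshold and radius values are illustrative assumptions, not tuned constants):

MatOfRect circles = new MatOfRect();
Imgproc.HoughCircles(mGray, circles, Imgproc.CV_HOUGH_GRADIENT,
        1,    // dp: accumulator has the same resolution as the input
        50,   // minDist: minimum distance between detected centers
        100,  // param1: upper threshold for the internal Canny edge detector
        100,  // param2: accumulator threshold; higher means fewer, stronger circles
        20,   // minRadius in pixels
        60);  // maxRadius in pixels; bounding the radius range shrinks the search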
I've faced this problem too.
I've tried to decrease the camera resolution with mOpenCvCameraView.setMaxFrameSize(1280, 720);
however, it is still slow. I've been trying to work in parallel with threads, but it is still at 3.5 FPS.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    //System.gc();
    carrierMat = inputFrame.gray();
    Thread thread = new Thread(new MultThread(carrierMat, this));
    thread.start();
    try {
        // join() blocks the camera callback until the worker finishes,
        // so the processing is effectively still sequential
        thread.join();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return carrierMat;
}
My MultThread class looks like this:
public class MultThread implements Runnable {
    private Mat source;
    private Context context;

    public MultThread(Mat source, Context context) {
        this.source = source;
        this.context = context;
    }

    @Override
    public void run() {
        //output = General.Threshold(source);
        int x = General.MSERP(source);
        Log.i("MtMTxtDtc:Main", "x: " + x);
        if (x > 10) {
            ((Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE)).vibrate(500);
        }
    }
}
You have to perform the Hough circle transform in the background, not on the main thread!
Otherwise your app's response will be too slow, and it may be killed by the operating system with an Application Not Responding (ANR) error.
You need to add this class to your main activity and you are good to go.
private class HoughCircleTransformTask
        extends AsyncTask<Mat, Void, Integer> {

    @Override
    protected Integer doInBackground(Mat... mats) {
        Mat mGray = mats[0];
        MatOfRect circles = new MatOfRect();
        Imgproc.HoughCircles(mGray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 50);
        // returns number of circular objects found
        // then display it from onPostExecute()
        return circles.cols();
    }

    @Override
    protected void onPostExecute(Integer circlesCols) {
        // This is only logging
        // You can display it in a TextView as well in the main activity
        Log.e("circle check", "circles.cols() " + circlesCols);
    }
}
And just call it from onCameraFrame with only one line of code:
public Mat onCameraFrame(final CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    mGray = inputFrame.gray();
    if (counter == 9) {
        // call AsyncTask
        new HoughCircleTransformTask().execute(mGray);
    }
    counterAdder();
    return mRgba;
} // end onCameraFrame
For the last few weeks I have been attempting to alter ZXing to take a photo immediately upon scan. Thanks to help I am at a point where I can consistently save an image from the onPreviewFrame method within PreviewCallback.java.
The code I use within the onPreviewFrame method follows, and then a short rundown of how my app works.
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    android.hardware.Camera.Parameters parameters = camera.getParameters();
    android.hardware.Camera.Size size = parameters.getPreviewSize();
    int height = size.height;
    int width = size.width;
    System.out.println("HEIGHT IS" + height);
    System.out.println("WIDTH IS" + width);
    if (cameraResolution != null && thePreviewHandler != null) {
        YuvImage im = new YuvImage(data, ImageFormat.NV21, width, height, null);
        Rect r = new Rect(0, 0, width, height);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        im.compressToJpeg(r, 50, baos);
        try {
            FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
            output.write(baos.toByteArray());
            output.flush();
            output.close();
            System.out.println("Attempting to save file");
            System.out.println(data);
        } catch (FileNotFoundException e) {
            System.out.println("Saving to file failed");
        } catch (IOException e) {
            System.out.println("Saving to file failed");
        }
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
My application centers around its own GUI and functionality, but it can engage ZXing via intent (ZXing is built into the app's build path; yes, this is bad, as it can interfere if ZXing is already installed). Once ZXing has scanned a QR code, the information encoded in it is returned to my app and stored, and then after a short delay ZXing is automatically re-initiated.
My current code saves an image every frame while ZXing is running; the functionality I would like is for only the frame from the actual scan to be saved. Although ZXing stops saving images in the short window where my app takes over again, ZXing is quickly re-initialized, and I may not have time to manipulate the data. A possible workaround is quickly renaming the saved file so that ZXing doesn't start overwriting it and manipulation can be performed in the background. Nevertheless, saving an image every frame is a waste of resources and less than preferable.
How do I only save an image upon scan?
Thanks in advance.
Updated to show the found instances of multiFormatReader, as requested:
private final CaptureActivity activity;
private final MultiFormatReader multiFormatReader;
private boolean running = true;

DecodeHandler(CaptureActivity activity, Map<DecodeHintType, Object> hints) {
    multiFormatReader = new MultiFormatReader();
    multiFormatReader.setHints(hints);
    this.activity = activity;
}

@Override
public void handleMessage(Message message) {
    if (!running) {
        return;
    }
    if (message.what == R.id.decode) {
        decode((byte[]) message.obj, message.arg1, message.arg2);
    } else if (message.what == R.id.quit) {
        running = false;
        Looper.myLooper().quit();
    }
}

private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        //here?
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
}
ZXing decodes every received frame until it finds valid information. The point to save the image is when ZXing returns a non-null result. In addition, you can save the file under a different name (timestamp + ".jpg") so that the previous file is not overwritten.
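A minimal sketch of that idea inside the decode(...) method shown above (the saveFrame helper and the path scheme are illustrative assumptions, wrapping the YuvImage-to-JPEG code from onPreviewFrame):

try {
    rawResult = multiFormatReader.decodeWithState(bitmap);
    if (rawResult != null) {
        // Only persist the frame that actually produced a scan result,
        // with a timestamped name so earlier captures are not overwritten.
        String fileName = "/sdcard/" + System.currentTimeMillis() + ".jpg";
        saveFrame(data, width, height, fileName); // hypothetical helper
    }
} catch (ReaderException re) {
    // no barcode in this frame; keep scanning
} finally {
    multiFormatReader.reset();
}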