When I call glGetTexImage() on a texture from a Framebuffer, I get this error:
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00007ffabf967a2b, pid=1816, tid=14712
Here is my code:
MagicaAdventura.GAME.gameBuffer.bindTexture();
glGetTexImage(GL_TEXTURE_2D, 0, GL12.GL_BGR, GL_UNSIGNED_BYTE, buf); // line that causes the error
And here is the code for the bind method:
public void bind(int texture) {
    if (currentBound[texture] != resource.getID() || lastBoundInFramebuffer != Renderer.CURRENT_FRAMEBUFFER) {
        GL13.glActiveTexture(GL13.GL_TEXTURE0 + texture);
        lastBoundInFramebuffer = Renderer.CURRENT_FRAMEBUFFER;
        currentBound[texture] = resource.getID();
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, resource.getID());
    }
}
The texture binding code works fine for other things.
The buffer wasn't allocating space for the padding of the texture up to a power of two, so I switched to using glReadPixels.
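For reference, a minimal sketch of the glReadPixels route, assuming the framebuffer is still bound and that width and height (placeholders here) are the dimensions of the region actually rendered rather than the padded texture size:
// Sketch only: same GL_BGR / GL_UNSIGNED_BYTE formats as the original glGetTexImage call;
// 3 bytes per pixel for GL_BGR, so the buffer is sized for exactly the rendered region.
ByteBuffer buf = BufferUtils.createByteBuffer(width * height * 3);
GL11.glPixelStorei(GL11.GL_PACK_ALIGNMENT, 1); // rows are tightly packed
GL11.glReadPixels(0, 0, width, height, GL12.GL_BGR, GL11.GL_UNSIGNED_BYTE, buf);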
I have a Python script that draws landmarks on the contours of the eye region in an input image, using a model loaded from checkpoint (ckpt) files and OpenCV.
I want to draw those points (landmarks) on the same picture in Android.
I got the prediction points for the picture and tried to draw those (x, y) points using Canvas, but the results are different.
Difference between the two images:
Landmarks drawn using the Python script (OpenCV)
Landmarks drawn using Java code (Canvas)
I have tried many ways, and I use the Canvas API to draw points on an ImageView (I loaded the same image from the assets folder), but this doesn't solve my problem.
This is the Python code that draws the landmarks on the image:
predictions = estimator.predict(input_fn=_predict_input_fn)
for _, result in enumerate(predictions):
    img = cv2.imread(result['name'].decode('ASCII') + '.jpg')
    print(result['logits'])
    print(result['name'])
    marks = np.reshape(result['logits'], (-1, 2)) * IMG_WIDTH
    print("reshape values " + str(np.reshape(result['logits'], (-1, 2))))
    print("marks " + str(marks))
    for mark in marks:
        cv2.circle(img, (int(mark[0]), int(mark[1])), 1, (0, 255, 0), -1, cv2.LINE_AA)
    try:
        img = cv2.resize(img, (512, 512))
        cv2.imshow('result', img)
    except Exception as e:
        print(str(e))
    # output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
    # print(output_node_names)
    cv2.waitKey()
These are the print logs from the Python code:
[0.33135968 0.19592011 0.34212315 0.17297666 0.36624995 0.16413747
0.3894139 0.17440952 0.39828074 0.1978043 0.3891497 0.22268474
0.36345637 0.22974193 0.3401759 0.2193309 0.30167252 0.20411113
0.3167112 0.19134495 0.33793524 0.18388326 0.3642417 0.18049955
0.3903508 0.18533507 0.40906873 0.1957745 0.42142123 0.21091096
0.40550107 0.21829814 0.38345626 0.22071144 0.35900232 0.22142673
0.3363348 0.21877256 0.3161971 0.2133534 0.62843406 0.21482795
0.6389724 0.1914106 0.6628249 0.1835615 0.6858679 0.19583184
0.6946868 0.22111627 0.6840309 0.24444285 0.66027373 0.25241333
0.6351568 0.24192403 0.60499936 0.22642238 0.6210091 0.21289764
0.6423563 0.2042976 0.6685919 0.20277795 0.69201195 0.20948553
0.70882106 0.22015369 0.71931773 0.23518339 0.7076659 0.24166131
0.69054717 0.24350837 0.6694564 0.24258481 0.64537776 0.23927754
0.62199306 0.23511863]
b'C:\\Users\\*******\\cnn-facial-landmark\\targetiris\\irisdata-300VW_Dataset_2015_12_14-017-000880'
reshape values [[0.33135968 0.19592011]
[0.34212315 0.17297666]
[0.36624995 0.16413747]
[0.3894139 0.17440952]
[0.39828074 0.1978043 ]
[0.3891497 0.22268474]
[0.36345637 0.22974193]
[0.3401759 0.2193309 ]
[0.30167252 0.20411113]
[0.3167112 0.19134495]
[0.33793524 0.18388326]
[0.3642417 0.18049955]
[0.3903508 0.18533507]
[0.40906873 0.1957745 ]
[0.42142123 0.21091096]
[0.40550107 0.21829814]
[0.38345626 0.22071144]
[0.35900232 0.22142673]
[0.3363348 0.21877256]
[0.3161971 0.2133534 ]
[0.62843406 0.21482795]
[0.6389724 0.1914106 ]
[0.6628249 0.1835615 ]
[0.6858679 0.19583184]
[0.6946868 0.22111627]
[0.6840309 0.24444285]
[0.66027373 0.25241333]
[0.6351568 0.24192403]
[0.60499936 0.22642238]
[0.6210091 0.21289764]
[0.6423563 0.2042976 ]
[0.6685919 0.20277795]
[0.69201195 0.20948553]
[0.70882106 0.22015369]
[0.71931773 0.23518339]
[0.7076659 0.24166131]
[0.69054717 0.24350837]
[0.6694564 0.24258481]
[0.64537776 0.23927754]
[0.62199306 0.23511863]]
marks [[37.112286 21.943052]
[38.317795 19.373386]
[41.019993 18.383396]
[43.614357 19.533867]
[44.607445 22.154081]
[43.584766 24.940691]
[40.707115 25.731096]
[38.0997 24.565062]
[33.787323 22.860447]
[35.471653 21.430634]
[37.848747 20.594925]
[40.79507 20.21595 ]
[43.719288 20.757528]
[45.815697 21.926743]
[47.199177 23.622028]
[45.41612 24.44939 ]
[42.9471 24.71968 ]
[40.20826 24.799793]
[37.6695 24.502527]
[35.414074 23.89558 ]
[70.38461 24.06073 ]
[71.56491 21.437988]
[74.23639 20.558887]
[76.81721 21.933167]
[77.80492 24.765022]
[76.61146 27.3776 ]
[73.95066 28.270294]
[71.137566 27.095491]
[67.759926 25.359306]
[69.553024 23.844536]
[71.9439 22.881332]
[74.88229 22.71113 ]
[77.50534 23.46238 ]
[79.387955 24.657213]
[80.56358 26.34054 ]
[79.25858 27.066067]
[77.341286 27.272938]
[74.97912 27.169498]
[72.28231 26.799084]
[69.66322 26.333286]]
Java code (Android)
private void drawpoint(ImageView imageView, float x, float y, int radius) {
    myOptions.inDither = true;
    myOptions.inScaled = false;
    myOptions.inPreferredConfig = Bitmap.Config.ARGB_8888; // important
    myOptions.inPurgeable = true;
    canvas.drawCircle(x, y, radius, paint);
    imageView = (ImageView) findViewById(R.id.imageView);
    imageView.setAdjustViewBounds(true);
    imageView.setImageBitmap(mutableBitmap);
}
drawpoint(image2, 38, 19,1);
drawpoint(image2,41,18,1);
drawpoint(image2,43,19,1);
drawpoint(image2,40,25,1);
drawpoint(image2,38,24,1);
How can I solve this problem?
Problem solved.
I used the OpenCV library for drawing in Android instead of the Canvas API.
Specifically, I used this function:
Imgproc.circle()
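For completeness, a rough sketch of that approach (bitmap, marks, and imageView are placeholders here; it assumes the OpenCV Android SDK is initialized so Utils and Imgproc are available):
// Sketch only: draw each predicted (x, y) point on a Mat with Imgproc.circle,
// then convert back to a Bitmap for the ImageView.
Mat mat = new Mat();
Utils.bitmapToMat(bitmap, mat);
for (double[] mark : marks) {
    Imgproc.circle(mat, new Point(mark[0], mark[1]), 1, new Scalar(0, 255, 0), -1); // filled green dot
}
Utils.matToBitmap(mat, bitmap);
imageView.setImageBitmap(bitmap);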
I received a crash report from Google play store for one of my Android apps created using LibGDX.
Huawei MediaPad T3 7 (hwbg2), Android 6.0
java.lang.IllegalStateException:
at com.badlogic.gdx.graphics.glutils.GLFrameBuffer.build (GLFrameBuffer.java:233)
at com.badlogic.gdx.graphics.glutils.GLFrameBuffer.<init> (GLFrameBuffer.java:87)
at com.badlogic.gdx.graphics.glutils.FrameBuffer.<init> (FrameBuffer.java:51)
at com.badlogic.gdx.graphics.glutils.GLFrameBuffer$FrameBufferBuilder.build (GLFrameBuffer.java:474)
at com.badlogic.gdx.graphics.glutils.FrameBuffer.createFrameBuffer (FrameBuffer.java:72)
at com.badlogic.gdx.graphics.glutils.FrameBuffer.createFrameBuffer (FrameBuffer.java:56)
at MY_PACKAGE.editor.Backup.<init> (Backup.java:21)
at MY_PACKAGE.editor.EditingImage.<init> (EditingImage.java:277)
at MY_PACKAGE.screens.EditingScreen.<init> (EditingScreen.java:227)
at MY_PACKAGE.screens.Screens.<init> (Screens.java:42)
at MY_PACKAGE.MAIN_CLASS$2.run (MAIN_CLASS.java:121)
at MY_PACKAGE.screens.SplashScreen.render (SplashScreen.java:93)
at com.badlogic.gdx.Game.render (Game.java:46)
at com.badlogic.gdx.backends.android.AndroidGraphics.onDrawFrame (AndroidGraphics.java:495)
at android.opengl.GLSurfaceView$GLThread.guardedRun (GLSurfaceView.java:1599)
at android.opengl.GLSurfaceView$GLThread.run (GLSurfaceView.java:1295)
Code at GLFrameBuffer.java:233
if (result == GL20.GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT)
    throw new IllegalStateException("frame buffer couldn't be constructed: incomplete attachment");
EditingImage.java is as follows:
class EditingImage {
    public static final int pixmapWidth = 1024;

    public EditingImage() {
        frameBuffer = FrameBuffer.createFrameBuffer(Pixmap.Format.RGB888, pixmapWidth, pixmapWidth, false);
        ....
        for (int i = 0; i < 50; i++) {
            final Backup backup = new Backup(pixmapWidth);
            availableBackups.add(backup);
        }
    }
}
Backup.java is as follows:
Backup(int width) {
    frameBuffer = FrameBuffer.createFrameBuffer(Pixmap.Format.RGB888, width, width, false);
    ....
}
The app crashed inside Backup.java when creating the FrameBuffer (after how many loop iterations, I don't know).
As you can see, the FrameBuffer created in EditingImage did not crash, and it is created before the Backup objects are instantiated.
It works normally on my phone (Huawei Y6II). I have also tested it on some Samsung phones.
Please help!
After 1 year and 11 months I found the issue.
According to the documentation:
https://libgdx.badlogicgames.com/ci/nightlies/docs/api/com/badlogic/gdx/graphics/glutils/FrameBuffer.html
the format passed as the constructor parameter should be RGB565, RGBA4444, or RGB5_A1:
format - the format of the color buffer; according to the OpenGL ES 2.0 spec, only RGB565, RGBA4444 and RGB5_A1 are color-renderable
Whereas in my case I had used RGB888.
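A minimal sketch of the fix under that reading of the docs, using the same createFrameBuffer call from the question but with a color-renderable format (RGB565) in place of RGB888:
// Sketch: RGB565 is listed as color-renderable in the GLES 2.0 spec, so the
// incomplete-attachment IllegalStateException should no longer be caused by the format.
frameBuffer = FrameBuffer.createFrameBuffer(Pixmap.Format.RGB565, width, width, false);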
I can't find information about face detection on the preview with android.hardware.camera2. Would anybody help me with a complete example?
I saw some questions with camera2 examples on GitHub, but I can't understand them.
I used the Camera2Basic sample from Google: https://github.com/googlesamples/android-Camera2Basic.
I set the face detection mode to FULL.
mPreviewRequestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE, CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL);
Also I checked STATISTICS_INFO_MAX_FACE_COUNT and STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES:
int max_count = characteristics.get(
        CameraCharacteristics.STATISTICS_INFO_MAX_FACE_COUNT);
int[] modes = characteristics.get(
        CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES);
Output: maxCount : 5 , modes : [0, 2]
My CaptureCallback:
private CameraCaptureSession.CaptureCallback mCaptureCallback = new CameraCaptureSession.CaptureCallback() {

    private void process(CaptureResult result) {
        Integer mode = result.get(CaptureResult.STATISTICS_FACE_DETECT_MODE);
        Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
        if (faces != null && mode != null)
            Log.e("tag", "faces : " + faces.length + " , mode : " + mode);
    }

    @Override
    public void onCaptureProgressed(CameraCaptureSession session, CaptureRequest request,
                                    CaptureResult partialResult) {
        process(partialResult);
    }

    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                   TotalCaptureResult result) {
        process(result);
    }
};
Output: faces : 0 , mode : 2
public static final int STATISTICS_FACE_DETECT_MODE_FULL = 2;
The faces length is constantly 0. It looks like it doesn't detect a face properly, or I missed something.
I know the approach with FaceDetector. I just wanted to check how it works with the new camera2 Face API.
I need to detect faces on the camera2 preview!
I think that you can't use CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL, because some devices do not support this type of face detection. Can you verify whether your device supports STATISTICS_FACE_DETECT_MODE_FULL?
If the answer is "no", please try to use STATISTICS_FACE_DETECT_MODE_SIMPLE instead.
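A small sketch of that check, reusing the characteristics and mPreviewRequestBuilder objects from the question (the constants are the standard CameraMetadata face-detect modes):
// Sketch: pick FULL if the device reports it, otherwise fall back to SIMPLE (or OFF).
int[] availableModes = characteristics.get(
        CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES);
int faceDetectMode = CameraMetadata.STATISTICS_FACE_DETECT_MODE_OFF;
for (int mode : availableModes) {
    if (mode == CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL) {
        faceDetectMode = mode;
        break;
    }
    if (mode == CameraMetadata.STATISTICS_FACE_DETECT_MODE_SIMPLE) {
        faceDetectMode = mode; // keep SIMPLE unless FULL shows up later
    }
}
mPreviewRequestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE, faceDetectMode);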
Also look at this Samsung example:
https://developer.samsung.com/galaxy/camera#techdocs
It includes a sample explaining how to use face detection with the camera2 API.
I use:
cvFindContours(gray, mem, contours, Loader.sizeof(CvContour.class) , CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
and as a result I get a CvSeq contours object to iterate over (as far as I understand it).
So I use it like this:
if (contours != null) {
    for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
        // ...do something with ptr
    }
}
It works, but from time to time (quite often) I get:
Exception in thread "Thread-3" java.lang.NullPointerException: This pointer address is NULL.
at com.googlecode.javacv.cpp.opencv_core$CvSeq.h_next(Native Method)
at pl..filter(FullFilter.java:69)
at pl..Window$1.run(Window.java:41)
at java.lang.Thread.run(Unknown Source)
The line in which the exception is thrown is the line with ptr.h_next().
I tried to check for nulls, but it doesn't help:
System.out.println("PTR=" + ptr); // it's not null here!
if (ptr.h_next() == null)         // exception in this line!
    System.out.println("NULL");
System.out.println(ptr.h_next());
The first line shows:
PTR=com.googlecode.javacv.cpp.opencv_core$CvSeq[address=0x0,position=0,limit=1,capacity=1,deallocator=com.googlecode.javacpp.Pointer$NativeDeallocator#66d53ea4]
I also tried invoking contours.total(), but it always throws the same exception.
So, what is the proper way to use such C-like sequences in Java?
EDIT:
my full method:
public IplImage filter(IplImage image) {
    IplConvKernel kernel = cvCreateStructuringElementEx(2, 2, 1, 1, CV_SHAPE_RECT, null);
    cvDilate(image, image, kernel, 1);
    kernel = cvCreateStructuringElementEx(5, 5, 2, 2, CV_SHAPE_RECT, null);
    cvErode(image, image, kernel, 1);
    IplImage resultImage = cvCloneImage(image);
    IplImage gray = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);
    cvCvtColor(image, gray, CV_BGR2GRAY);
    CvMemStorage mem = CvMemStorage.create();
    CvSeq contours = new CvSeq();
    CvSeq ptr = new CvSeq();
    cvThreshold(gray, gray, 20, 255, opencv_imgproc.CV_THRESH_BINARY);
    double thresh = 20;
    Canny(gray, gray, thresh, thresh * 2, 3, true);
    cvFindContours(gray, mem, contours, Loader.sizeof(CvContour.class), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    int i = 0;
    CvRect boundbox;
    if (contours != null) {
        for (ptr = contours; ptr != null; ptr = ptr.h_next()) { // EXCEPTION HERE!
            System.out.println((i++) + "\t" + ptr);
            cvDrawContours(resultImage, ptr, CvScalar.BLUE, CvScalar.RED, -1, 3, 8, cvPoint(0, 0));
            System.out.println("PTR=" + ptr);
        }
    }
    return resultImage;
}
It works fine for some time and then suddenly (probably when no contours are found?) it ends with the following exception:
Exception in thread "Thread-3" java.lang.NullPointerException: This pointer address is NULL.
at com.googlecode.javacv.cpp.opencv_core$CvSeq.h_next(Native Method)
at pl.kasprowski.eyetracker.FullFilter2.filter(FullFilter2.java:39)
at pl.kasprowski.eyetracker.Window$1.run(Window.java:42)
at java.lang.Thread.run(Unknown Source)
I'm feeding the method directly with images taken from the camera (once a second).
EDIT:
After some experiments, it turns out that from time to time cvFindContours (invoked as above) returns a contours object which is NOT null, but invoking any method on it, like contours.h_next() or contours.total(), results in the exception above. What can be wrong? Or: how do I check whether the contours object is OK? Of course I could catch NullPointerException, but I don't think that's the correct way to solve the problem...
Problem solved.
I added an additional condition. Instead of:
if(contours!=null) {
I wrote
if(contours!=null && !contours.isNull()) {
and it works. I don't see exactly why it is necessary, but I think it's connected to the Java <-> C semantic gap.
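Applied to the loop from the question, the guard looks roughly like this; isNull() tests whether the underlying native pointer is 0x0, which is exactly what the exception complained about:
// Sketch: guard both the Java reference and the native pointer before touching the sequence.
if (contours != null && !contours.isNull()) {
    for (CvSeq ptr = contours; ptr != null && !ptr.isNull(); ptr = ptr.h_next()) {
        cvDrawContours(resultImage, ptr, CvScalar.BLUE, CvScalar.RED, -1, 3, 8, cvPoint(0, 0));
    }
}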
Try to use cvGetSeqElem(contours, i)
Example:
for (int i = 0; i < contours.total(); i++) {
    CvRect rect = cvBoundingRect(cvGetSeqElem(contours, i), 0);
    // ....... Your code ....//
}
I am trying to use an ArrayList across two different Activities. It is declared:
public static ArrayList<Mat> Video = new ArrayList<Mat>();
I read frames from the camera, and when I have 50 I go to my next Activity.
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat frame = inputFrame.rgba();
    if (Video.size() < 50) {
        Log.i(TAG, "Added frame");
        Video.add(frame);
    } else {
        String e = Integer.toString(Video.get(1).cols());
        Log.v(TAG1, e);
        e = Integer.toString(Video.get(1).rows());
        Log.v(TAG1, e);
        Intent intent = new Intent(this, Analysis.class);
        startActivity(intent);
    }
    return inputFrame.rgba();
}
The log output for this method is:
11-15 22:53:30.225: V/Values(32362): 800
11-15 22:53:30.225: V/Values(32362): 480
These are the correct width and height for the device this is running on (a Galaxy S2).
Then in my next Activity's onCreate() I directly access Video:
String e = Integer.toString(HomeScreen.Video.get(1).cols());
Log.v(TAG, e);
String h = Integer.toString(HomeScreen.Video.get(1).rows());
Log.v(TAG, h);
But this time the log reads:
11-15 22:53:30.840: V/Values2:(32362): 800
11-15 22:53:30.840: V/Values2:(32362): 0
So my question is: why is the rows() value not 480 in both logs? I need a List of frames because I am recording all the frames, and then in another Activity I am going to operate on them and output the display (which I need the number of rows for).
I replaced the line:
Video.add(frame);
With
Video.add(frame.clone());
and it works perfectly!
I think the original line was only copying the Mat header and not the entire frame's contents.
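For anyone hitting the same thing, a minimal sketch of the corrected callback; clone() makes a deep copy of the pixel data, so the stored Mat no longer points at a buffer the camera backend reuses for the next frame:
// Sketch: store deep copies; the Mat returned by inputFrame.rgba() is recycled by the camera.
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat frame = inputFrame.rgba();
    if (Video.size() < 50) {
        Video.add(frame.clone()); // deep copy instead of just the Mat header
    } else {
        startActivity(new Intent(this, Analysis.class));
    }
    return frame;
}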