I am using OpenGL ES 3.2 and the NVIDIA driver 361.00 on a Pixel C tablet with Tegra X1 GPU. I would like to use a compute shader to write data to a colour map and then later I will use some graphics shaders to display the image.
I already have this concept working using desktop GL and now I want to port it to mobile. I am implementing the GL in Java rather than in native code. I extend GLSurfaceView and GLSurfaceView.Renderer, and during the onSurfaceCreated callback I initialise the shader programs, textures, etc.
The compute shader compiles just fine without any errors:
#version 310 es
layout(binding = 0, rgba32f) uniform highp image2D colourMap;
layout(local_size_x = 128, local_size_y = 1, local_size_z = 1) in;
void main()
{
imageStore(colourMap, ivec2(gl_GlobalInvocationID.xy), vec4(1.0f, 0.0f, 0.0f, 1.0f));
}
And I initialise a texture
// Generate a 2D texture
GLES31.glGenTextures(1, colourMap, 0);
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, colourMap[0]);
// Set interpolation to linear
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MAG_FILTER, GLES31.GL_LINEAR);
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MIN_FILTER, GLES31.GL_LINEAR);
// Create some dummy texture to begin with so we can see if it changes
float texData[] = new float[texWidth * texHeight * 4];
for (int j = 0; j < texHeight; j++)
{
for (int i = 0; i < texWidth; i++)
{
// Set a few pixels in here...
}
}
Buffer texDataBuffer = FloatBuffer.wrap(texData);
GLES31.glTexImage2D(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_RGBA32F, texWidth, texHeight, 0, GLES31.GL_RGBA, GLES31.GL_FLOAT, texDataBuffer);
After this I originally set the image unit in the shader as shown below. EDIT: I don't do this now; I just assume it will be assigned automatically when the shader program is created, as per solidpixel's answer.
GLES31.glUseProgram(idComputeShaderProgram);
int loc = GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap");
if (loc == -1) Log.e("Error", "Cannot locate variable");
GLES31.glUniform1i(loc, 0);
After every call to GL I check for errors using GLES31.glGetError() -- left out here for clarity.
EDIT: When I dispatch compute I bind the image texture but first query the unit assignment:
GLES31.glUseProgram(idComputeShaderProgram);
int[] unit = new int[1];
GLES31.glGetUniformiv(idComputeShaderProgram, GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap"), unit, 0);
GLES31.glBindImageTexture(unit[0], velocityMap[0], 0, false, 0, GLES31.GL_WRITE_ONLY, GLES31.GL_RGBA32F);
This final line is the one which errors now. The error code translates to GL_INVALID_OPERATION. The shader compiles correctly and the program object is valid and active. The location of the variable is also valid. I have even used glGetActiveUniform() to get the type of the variable and it returns a type of 36941 which translates to GL_IMAGE_2D which I believe is an integer.
I still think I'm misunderstanding something here but not sure what.
You can't assign your own unit identities for image uniforms. See the OpenGL ES 3.2 specification, section 7.6:
An INVALID_OPERATION error is generated if any of the following conditions occur:
an image uniform is loaded with any of the Uniform* commands.
You need to query the automatic unit assignment using glGetUniformiv(prog, loc, &unit) to get the unit name.
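Since the compute shader in the question already declares layout(binding = 0) on the image uniform, you can also skip the query and bind straight to unit 0. A minimal sketch of the dispatch path under that assumption (the work-group maths assumes texWidth is a multiple of the local size of 128):
GLES31.glUseProgram(idComputeShaderProgram);
// Unit 0 comes from the shader's layout(binding = 0) qualifier,
// so no glUniform1i / glGetUniformiv call is needed for the image uniform.
GLES31.glBindImageTexture(0, colourMap[0], 0, false, 0,
        GLES31.GL_WRITE_ONLY, GLES31.GL_RGBA32F);
GLES31.glDispatchCompute(texWidth / 128, texHeight, 1);
// Make the imageStore() writes visible to later texture fetches in the draw pass
GLES31.glMemoryBarrier(GLES31.GL_TEXTURE_FETCH_BARRIER_BIT);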
I'm working on a 2D engine. It already works quite well, but I keep getting pixel errors.
For example, my window is 960x540 pixels, I draw a line from (0, 0) to (959, 0). I would expect that every pixel on scan-line 0 will be set to a color, but no: the right-most pixel is not drawn. Same problem when I draw vertically to pixel 539. I really need to draw to (960, 0) or (0, 540) to have it drawn.
As I was born in the pixel-era, I am convinced that this is not the correct result. When my screen was 320x200 pixels big, I could draw from 0 to 319 and from 0 to 199, and my screen would be full. Now I end up with a screen with a right/bottom pixel not drawn.
This can be due to different things:
I expect the OpenGL line primitive to be drawn from one pixel to another pixel inclusive, but that last pixel is actually exclusive? Is that it?
my projection matrix is incorrect?
I am under the false assumption that when I have a backbuffer of 960x540, it actually has one pixel more?
Something else?
Can someone please help me? I have been looking into this problem for a long time now, and every time when I thought it was ok, I saw after a while that it actually wasn't.
Here is some of my code; I tried to strip it down as much as possible. When I call my line function, 0.375 is added to every coordinate to make it correct on both ATI and NVIDIA adapters.
int width = resX();
int height = resY();
for (int i = 0; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(1, 0, 0, 1));
for (int i = 1; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(0, 1, 0, 1));
// when I do this, one pixel to the right remains undrawn
void rendermachine::line(int x1, int y1, int x2, int y2, const vec4f &color)
{
... some code to decide what std::vector the coordinates should be pushed into
// m_z is a z-coordinate, I use z-buffering to preserve correct drawing orders
// vec2f(0, 0) is a texture-coordinate, the line is drawn without texturing
target->push_back(vertex(vec3f((float)x1 + 0.375f, (float)y1 + 0.375f, m_z), color, vec2f(0, 0)));
target->push_back(vertex(vec3f((float)x2 + 0.375f, (float)y2 + 0.375f, m_z), color, vec2f(0, 0)));
}
void rendermachine::update(...)
{
... render target object is queried for width and height, in my test it is just the back buffer so the window client resolution is returned
mat4f mP;
mP.setOrthographic(0, (float)width, (float)height, 0, 0, 8000000);
... all vertices are copied to video memory
... drawing
if (there are lines to draw)
glDrawArrays(GL_LINES, (int)offset, (int)lines.size());
...
}
// And the (very simple) shader to draw these lines
// Vertex shader
#version 120
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 mP;
varying vec4 vColor;
void main(void) {
gl_Position = mP * vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
}
// Fragment shader
#version 120
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vColor;
void main(void) {
gl_FragColor = vColor;
}
In OpenGL, lines are rasterized using the "Diamond Exit" rule. This is almost the same as saying that the end coordinate is exclusive, but not quite...
This is what the OpenGL spec has to say:
http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node47.html
Also have a look at the OpenGL FAQ, http://www.opengl.org/archives/resources/faq/technical/rasterization.htm, item "14.090 How do I obtain exact pixelization of lines?". It says "The OpenGL specification allows for a wide range of line rendering hardware, so exact pixelization may not be possible at all."
Many will argue that you should not use lines in OpenGL at all. Their behaviour is based on how ancient SGI hardware worked, not on what makes sense. (And lines with widths >1 are nearly impossible to use in a way that looks good!)
Note that OpenGL coordinate space has no notion of integers; everything is a float, and the "centre" of an OpenGL pixel is really at 0.5,0.5 instead of its top-left corner. Therefore, if you want a 1px-wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This becomes especially apparent if you turn on anti-aliasing: if you try to draw from 50,0 to 50,100 you may see a blurry 2px-wide line, because the line falls in between two pixels.
I'm trying to write a real-time raytracer.
I use Java and the JogAmp bindings of OpenGL and OpenCL for it (called JOGL and JOCL).
I already have the raytracing code in my .cl kernel and it works well. I get the output as a FloatBuffer and pass it to an OpenGL texture via glTexImage2D. Now I want to go real-time, and to achieve this I want to remove the FloatBuffer copy which happens twice in my program (first from the OpenCL kernel result to RAM, and second from RAM to the OpenGL texture). Obviously there should be a way to point the OpenCL buffer directly at the OpenGL texture, since all calculations happen on the GPU.
I know that there is the cl_khr_gl_sharing extension for OpenCL which does what I want, but I can't figure out how to use it through the Java JogAmp bindings (JOCL/JOGL). Can somebody help me or give some sample Java code (not C++, which really differs in the details)?
So, after a few days of research I found out how to do it. Posting an answer for anybody who is interested.
In the init method of JOGL's GLEventListener you create the GL context. You must create the CL context in that method too.
My sample code for this:
public void init(GLAutoDrawable drawable) {
GL4 gl4 = drawable.getGL().getGL4();
gl4.glDisable(GL4.GL_DEPTH_TEST);
gl4.glEnable(GL4.GL_CULL_FACE);
gl4.glCullFace(GL4.GL_BACK);
buildScreenVAO(gl4);
FloatBuffer pixelBuffer = GLBuffers.newDirectFloatBuffer(width * height * 4);
this.textureIndex = GLUtils.initTexture(gl4, width, height, pixelBuffer);
this.samplerIndex = GLUtils.initSimpleSampler(gl4);
if (clContext == null) {
try {
gl4.glFinish();
this.clContext = CLGLContext.create(gl4.getContext());
this.clDevice = clContext.getMaxFlopsDevice();
//if (device.getExtensions().contains("cl_khr_gl_sharing"))
this.clCommandQueue = clDevice.createCommandQueue();
this.clProgram = clContext.createProgram(new FileInputStream(new File(ResourceLocator.getInstance().kernelsPath + "raytracer.cl"))).build(); // load sources, create and build program
this.clKernel = clProgram.createCLKernel("main");
this.clTexture = (CLGLTexture2d<FloatBuffer>) clContext.createFromGLTexture2d(GL4.GL_TEXTURE_2D, textureIndex, 0, Mem.WRITE_ONLY);
this.viewTransform = clContext.createFloatBuffer(16 * 4, Mem.READ_ONLY);
this.w = clContext.createFloatBuffer(1, Mem.READ_ONLY);
clKernel.putArg(clTexture).putArg(width).putArg(height).putArg(viewTransform).putArg(w);
fillViewTransform(viewTransform);
fillW(w);
clCommandQueue.putWriteBuffer(viewTransform, false);
clCommandQueue.putWriteBuffer(w, false);
clCommandQueue.putAcquireGLObject(clTexture);
clCommandQueue.put1DRangeKernel(clKernel, 0, width * height, 0);
clCommandQueue.putReleaseGLObject(clTexture);
} catch (Exception e) {
e.printStackTrace();
}
}
buildShaderProgram(gl4);
bindObjects(gl4);
}
The core line is: clContext.createFromGLTexture2d(GL4.GL_TEXTURE_2D, textureIndex, 0, Mem.WRITE_ONLY);
You should create an OpenCL texture object for your previously created OpenGL texture. Here is the code for creating the OpenGL texture:
gl4.glGenTextures(1, indexBuffer);
int textureIndex = indexBuffer.get();
indexBuffer.clear();
gl4.glBindTexture(GL4.GL_TEXTURE_2D, textureIndex);
gl4.glTexParameterf(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MIN_FILTER, GL4.GL_LINEAR);
gl4.glTexParameterf(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MAG_FILTER, GL4.GL_LINEAR);
gl4.glTexImage2D(GL4.GL_TEXTURE_2D, 0, GL4.GL_RGBA32F, width, height, 0, GL4.GL_RGBA, GL4.GL_FLOAT, pixelBuffer); //TODO
gl4.glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_BASE_LEVEL, 0);
gl4.glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MAX_LEVEL, 0);
int[] swizzle = new int[] { GL4.GL_RED, GL4.GL_GREEN, GL4.GL_BLUE, GL4.GL_ONE };
gl4.glTexParameterIiv(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_SWIZZLE_RGBA, swizzle, 0);
gl4.glBindTexture(GL4.GL_TEXTURE_2D, 0);
return textureIndex;
And lastly, you must use the right data type for the texture argument in your OpenCL kernel. In my case the kernel method has the following signature:
kernel void main(write_only image2d_t dst, const uint width, const uint height, global float* viewTransform, global float* w){
and I use the write_imagef built-in OpenCL function to write float data (0.0f - 1.0f) into this texture.
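For completeness, the per-frame part looks roughly like this in the display() callback. This is only a sketch of how it can be wired up, assuming the fields created in init() above plus a shader program id and screen VAO id coming from buildShaderProgram()/buildScreenVAO():
public void display(GLAutoDrawable drawable) {
    GL4 gl4 = drawable.getGL().getGL4();
    // Let GL finish before OpenCL touches the shared texture
    gl4.glFinish();
    clCommandQueue.putAcquireGLObject(clTexture);
    clCommandQueue.put1DRangeKernel(clKernel, 0, width * height, 0);
    clCommandQueue.putReleaseGLObject(clTexture);
    clCommandQueue.finish(); // kernel must be done before GL samples the texture
    // Draw a fullscreen quad that samples the shared texture
    gl4.glUseProgram(shaderProgramId);     // assumed id from buildShaderProgram()
    gl4.glActiveTexture(GL4.GL_TEXTURE0);
    gl4.glBindTexture(GL4.GL_TEXTURE_2D, textureIndex);
    gl4.glBindVertexArray(screenVaoId);    // assumed id from buildScreenVAO()
    gl4.glDrawArrays(GL4.GL_TRIANGLE_STRIP, 0, 4);
}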
Feel free to ask me about this approach if you are interested.
I am a relative newbie in game programming. I know how to draw pixels to a BufferedImage using setPixel(). It is horribly slow on larger formats so I moved on and found VolatileImage (took me a week or so). It is fairly easy to draw lines, strings, rects, etc but I can't draw individual pixels. I already tried using drawLine(x,y,x,y) but I get 3-4 FPS on an 800x600 image.
The fact that Java didn't include setPixel() or setRGB() in VolatileImage makes me pretty angry and confused.
I have 4 questions:
Is there a way to draw individual pixels on a VolatileImage? (on 1440x900 formats with FPS > 40)
Can I draw pixels in a BufferedImage with a faster method? (same 1440x900, FPS > 40)
Is there any other way to draw pixels fast enough for 3D games?
Can I make my BufferedImage hardware accelerated? (I tried using setAccelerationPriority(1F) but it doesn't work.)
Please tell me if you have any idea. I can't continue making my game without this information. I already made 3D rendering algorithms, but I need to be able to draw pixels fast. I have a good feeling about this game.
Here's the code if it can help you help me:
public static void drawImageRendered (int x, int y, int w, int h) { // This is just a method to test the performance
int a[] = new int[3]; // The array containing the R, G and B values for each pixel
bImg = Launcher.contObj.getGraphicsConfiguration().createCompatibleImage(800, 600); // Creates a compatible image for the JPanel object I am working with (800x600)
bImg.setAccelerationPriority(1F); // I am trying to get this image accelerated
WritableRaster wr = bImg.getRaster(); // The image's writable raster
for (int i = 0; i < bImg.getWidth(); i++) {
for (int j = 0; j < bImg.getHeight(); j++) {
a[0] = i % 256;
a[2] = j % 256;
a[1] = (j * i) % 256;
wr.setPixel(i, j, a); // Sets the pixels (You get a nice pattern)
}
}
g.drawImage(bImg, x, y, w, h, null);
}
I would much prefer not using OpenGL or any other external libraries, just plain Java.
Well, you're basically drawing one pixel after the other using the CPU. There's no way this can be accelerated, so such a method simply does not make sense for a VolatileImage. The low FPS you get suggests that this even adds significant overhead, as each pixel drawing operation is sent to the graphics card (with information such as location and colour), which takes longer than modifying 3 or 4 bytes of RAM.
I suggest either not drawing each pixel separately or figuring out a way to make your drawing algorithm run directly on the graphics card (which most likely requires a language other than Java).
It's been over 4 years since this post got an answer. I was looking for an answer to this question as well and stumbled on this post. After some more searching, I got it to work. Below I'll post the source to rendering pixels with a VolatileImage.
It seems Java hides our ability to plot pixels directly to a VolatileImage, but we can draw buffered images to it, and for good reason: plotting pixels in software doesn't really benefit from acceleration (in Java, it seems). If you plot pixels to a BufferedImage and then render it onto a VolatileImage, you may get a speed bonus, since it's hardware accelerated from that point.
The source down below is a self-contained example. You can copy-paste practically all of it into your project and run it.
https://github.com/Miekpeeps/JavaSnippets-repo/blob/master/src/graphics_rendering/pixels_03/PlottingVolatile.java
In the constructor I save the Graphics environment of the app/game.
private GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
private GraphicsConfiguration gc = ge.getDefaultScreenDevice().getDefaultConfiguration();
Then, when I call a method to enable hardware acceleration, we create a buffer. I set the transparency to opaque; in my little engine I deal with transparency/alpha blending on another thread in the pipeline.
public void setHardwareAcceleration(Boolean hw)
{
useHW = hw;
if (hw)
{
vbuffer = gc.createCompatibleVolatileImage(width, height, Transparency.OPAQUE);
System.setProperty("sun.java2d.opengl", hw.toString()); // may not be needed.
}
}
For each frame I update, I get the Graphics from the VolatileImage and render my buffer there. Nothing gets rendered if I don't flush().
@Override
public void paintComponent(Graphics g)
{
if(useHW)
{
g = vbuffer.getGraphics();
g.drawImage(buffer, 0, 0, null);
vbuffer.flush();
}
else
{
g.drawImage(buffer, 0, 0, null);
buffer.flush();
}
}
There is still a little bit of overhead when plotting a pixel on the BufferedImage's writable raster, but when we update the screen we get a speed boost from using the VolatileImage instead of the BufferedImage.
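If the raster writes themselves are still too slow, a common trick (my addition, not part of the original answer) is to use an INT-typed BufferedImage and write packed RGB values straight into its backing int[] via DataBufferInt, which avoids the per-call overhead of setPixel():
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

// Create an int-backed image once and keep a reference to its raw pixel array.
BufferedImage buffer = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
int[] pixels = ((DataBufferInt) buffer.getRaster().getDataBuffer()).getData();

// Plot pixels by writing packed 0xRRGGBB values directly into the array.
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int r = x % 256, g = (x * y) % 256, b = y % 256;
        pixels[y * width + x] = (r << 16) | (g << 8) | b;
    }
}
// Then draw 'buffer' onto the VolatileImage as in paintComponent() above.
Grabbing the DataBuffer does stop Java2D from trying to cache that particular BufferedImage in video memory, but that doesn't matter here because it is copied to the VolatileImage every frame anyway.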
Hope this helps some folks out. Cheers.
Can some expert please explain to me whether we can use the cvHaarDetectObjects() method to detect squares and get their widths and heights? I found code that uses this method for face detection, but I need to know whether I can use it for rectangle detection.
String src="src/squiredetection/MY.JPG";
IplImage grabbedImage = cvLoadImage(src);
IplImage grayImage = IplImage.create(grabbedImage.width(), grabbedImage.height(), IPL_DEPTH_8U, 1);
cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
CvSeq faces = cvHaarDetectObjects(grayImage, cascade, storage, 1.1, 3, 0);//*
for (int i = 0; i < faces.total(); i++) {
CvRect r = new CvRect(cvGetSeqElem(faces, i));
cvRectangle(grabbedImage, cvPoint(r.x(), r.y()), cvPoint(r.x()+r.width(), r.y()+r.height()), CvScalar.RED, 1, CV_AA, 0);
/* hatPoints[0].x = r.x-r.width/10; hatPoints[0].y = r.y-r.height/10;
hatPoints[1].x = r.x+r.width*11/10; hatPoints[1].y = r.y-r.height/10;
hatPoints[2].x = r.x+r.width/2; hatPoints[2].y = r.y-r.height/2;*/
// cvFillConvexPoly(grabbedImage, hatPoints, hatPoints.length, CvScalar.GREEN, CV_AA, 0);
}
When I use the above method it throws the following exception:
OpenCV Error: Bad argument (Invalid classifier cascade) in unknown function, file C:\slave\WinInstallerMegaPack\src\opencv\modules\objdetect\src\haar.cpp, line 1036
Exception in thread "main" java.lang.RuntimeException: C:\slave\WinInstallerMegaPack\src\opencv\modules\objdetect\src\haar.cpp:1036: error: (-5) Invalid classifier cascade
at com.googlecode.javacv.cpp.opencv_objdetect.cvHaarDetectObjects(Native Method)
at com.googlecode.javacv.cpp.opencv_objdetect.cvHaarDetectObjects(opencv_objdetect.java:243)
at squiredetection.Test2.main(Test2.java:52) (I have put * on this line)
Please be kind enough to give a simple code example for that.
cvHaarDetectObjects() is used for detecting objects or shapes in general, not only faces; what it finds depends on the Haar cascade classifier you pass in.
If you pass the face Haar cascade XML it will return an array of faces, and you can likewise use the eye, nose, etc. Haar cascade XML files. You can also make a custom Haar cascade XML by creating your own positive and negative samples and training with opencv_traincascade.exe.
CvSeq faces = cvHaarDetectObjects(grayImage, classifier, storage,
1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
for (int i = 0; i < faces.total(); i++) {
// its ok
}
More detail is in the OpenCV documentation.
For rectangle detection:
There is an example of rectangle detection in OpenCV; they use it to detect the squares of a chessboard. Have a look at squares.c in the ..\OpenCV\samples\c\ directory.
See also the chessboard detection sample in OpenCV.
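If all you need is the width and height of square or rectangular shapes, the contour plus polygon-approximation approach from squares.c is usually simpler than training a cascade. Below is a rough JavaCV-flavoured sketch of that idea; the wrapper method names mirror the old com.googlecode.javacv C API, so check them against your JavaCV version:
// Assumes 'edges' is a binary/edge image (e.g. from cvCanny or cvThreshold)
// and 'storage' is a CvMemStorage, as in the face-detection code above.
CvSeq contour = new CvSeq(null);
cvFindContours(edges, storage, contour, Loader.sizeof(CvContour.class),
        CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
while (contour != null && !contour.isNull()) {
    // Approximate the contour with a polygon; 4 convex vertices ~ a rectangle
    CvSeq approx = cvApproxPoly(contour, Loader.sizeof(CvContour.class), storage,
            CV_POLY_APPROX_DP, cvArcLength(contour, CV_WHOLE_SEQ, 1) * 0.02, 0);
    if (approx.total() == 4
            && Math.abs(cvContourArea(approx, CV_WHOLE_SEQ, 0)) > 1000
            && cvCheckContourConvexity(approx) != 0) {
        CvRect box = cvBoundingRect(approx, 0);
        System.out.println("rectangle: " + box.width() + " x " + box.height());
    }
    contour = contour.h_next();
}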
The "Invalid classifier cascade in unknown function" error means the classifier you passed is not correctly formatted or something is missing. Check that your classifier XML file is valid.
cvHaarDetectObjects returns multiple faces detected in an image. You have to declare an array of CvSeq to store the result, not just a single CvSeq.
// There can be more than one face in an image.
// So create a growable sequence of faces.
// Detect the objects and store them in the sequence
CvSeq* faces = cvHaarDetectObjects( img, cascade, storage,
1.1, 2, CV_HAAR_DO_CANNY_PRUNING,
cvSize(40, 40) );
The code above was extracted from this site:
http://opencv.willowgarage.com/wiki/FaceDetection
I'm trying to detect the positions of billiard balls on a table from an image taken at a perspective angle. I'm using the getPerspectiveTransform() method to find the transformation matrix, and I want to apply it to only the circles I detect using HoughCircles. I'm trying to go from a rather large trapezoidal shape to a smaller rectangular shape. I don't want to do the transformation on the image first and then find the HoughCircles, because the image gets too warped for HoughCircles to provide useful results.
Here's my code:
CvMat mmat = cvCreateMat(3,3,CV_32FC1);
double srcX1 = 462;
double srcX2 = 978;
double srcX3 = 1440;
double srcX4 = 0;
double srcY = 241;
double srcHeight = 772;
double dstX = 56.8;
double dstY = 33.5;
double dstWidth = 262.4;
double dstHeight = 447.3;
CvSeq seq = cvHoughCircles(newGray, circles, CV_HOUGH_GRADIENT, 2.1d, (double)newGray.height()/40, 85d, 65d, 5, 50);
JavaCV.getPerspectiveTransform(new double[]{srcX1, srcY, srcX2,srcY, srcX3, srcHeight, srcX4, srcHeight},
new double[]{dstX, dstY, dstWidth, dstY, dstWidth, dstHeight, dstX, dstHeight}, mmat);
cvWarpPerspective(seq, seq, mmat);
for(int j=0; j<seq.total(); j++){
CvPoint3D32f point = new CvPoint3D32f(cvGetSeqElem(seq, j));
float xyr[] = {point.x(),point.y(),point.z()};
CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
int radius = Math.round(xyr[2]);
cvCircle(gray, center, 3, CvScalar.GREEN, -1, 8, 0);
cvCircle(gray, center, radius, CvScalar.BLUE, 3, 8, 0);
}
The problem is I get this error on the warpPerspective() method:
error: (-215) seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size in function cv::Mat cv::cvarrToMat(const CvArr*, bool, bool, int)
Also I guess it's worth mentioning that I'm using JavaCV, in case the method calls look a bit different than what you're used to. Thanks for any help.
Answer:
The problem with what you want to do (besides the obvious: OpenCV won't let you) is that the radius can't really be warped correctly. AFAIK the x,y coordinates are easy enough to calculate:
x' = (m00*x + m01*y + m02) / (m20*x + m21*y + m22)
y' = (m10*x + m11*y + m12) / (m20*x + m21*y + m22)
where m is the transformation matrix and mij means the entry m(i,j). The radius you can hack by transforming all the points of the original circle and then finding the max distance between (x', y') and those transformed points (at least if the radius in the warped image is expected to cover all those points).
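To make that concrete: instead of handing the CvSeq to cvWarpPerspective you can apply the formula to each circle centre yourself. A minimal sketch in JavaCV terms, assuming the mmat from the question and a hypothetical warpPoint() helper:
// Hypothetical helper: map one (x, y) point through the 3x3 perspective matrix m.
static double[] warpPoint(CvMat m, double x, double y) {
    double w  = m.get(2, 0) * x + m.get(2, 1) * y + m.get(2, 2);
    double xp = (m.get(0, 0) * x + m.get(0, 1) * y + m.get(0, 2)) / w;
    double yp = (m.get(1, 0) * x + m.get(1, 1) * y + m.get(1, 2)) / w;
    return new double[] { xp, yp };
}

// Inside the loop over the HoughCircles results (xyr = {x, y, r}):
double[] c   = warpPoint(mmat, xyr[0], xyr[1]);           // warped centre
double[] rim = warpPoint(mmat, xyr[0] + xyr[2], xyr[1]);  // one point on the original rim
double warpedRadius = Math.hypot(rim[0] - c[0], rim[1] - c[1]); // crude radius estimate
Warping several rim points and taking the maximum distance, as described above, gives a safer radius than this single-point estimate.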
End Answer.
Everything I write is according to the C++ version; I've never used JavaCV, but from what I could see it's just a wrapper that calls the native C++ library.
CvSeq is a sequence data structure that behaves like a linked list.
The assert your application crashes at is
CV_Assert(seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size);
which means that either your seq instance is empty (total is the number of elements in the sequence) or the inner seq flags are somehow corrupted.
I'd recommend that you check the total member of your CvSeq, and the cvHoughCircles call.
All of this occurs before the actual implementation of cvWarpPerspective (it's the first line of the implementation, which only converts your CvSeq to a cv::Mat), so it's not the warping but what you're doing before it.
Anyway, to understand what's wrong with cvHoughCircles we'll need more info about how newGray and circles are created.
Here is an example I found on the JavaCV page (link):
IplImage gray = cvCreateImage( cvSize( img.width, img.height ), IPL_DEPTH_8U, 1 );
cvCvtColor( img, gray, CV_RGB2GRAY );
// smooth it, otherwise a lot of false circles may be detected
cvSmooth(gray,gray,CV_GAUSSIAN,9,9,2,2);
CvMemStorage circles = CvMemStorage.create();
CvSeq seq = cvHoughCircles(gray, circles.getPointer(), CV_HOUGH_GRADIENT,
2, img.height/4, 100, 100, 0, 0);
for(int i=0; i<seq.total; i++){
float xyr[] = cvGetSeqElem(seq,i).getFloatArray(0, 3);
CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
int radius = Math.round(xyr[2]);
cvCircle(img, center.byValue(), 3, CvScalar.GREEN, -1, 8, 0);
cvCircle(img, center.byValue(), radius, CvScalar.BLUE, 3, 8, 0);
}
From what I've seen in the implementation of cvHoughCircles, the result is saved in the circles buffer and at the end the returned CvSeq is created from it, so if you've allocated the circles buffer wrongly, it won't work.
EDIT:
As you can see, the CvSeq returned from cvHoughCircles is a list of point values; that is probably why the assertion failed. You cannot convert this CvSeq into a cv::Mat, because it's just not a cv::Mat. To get the circles returned from cvHoughCircles into a cv::Mat instance, you'll need to create a new cv::Mat and then draw onto it all the circles in the CvSeq, as seen in the example provided above.
Then the warping will work (you'll have a cv::Mat instance, which is what the function expects).
END EDIT
Here is the C++ reference for CvSeq.
And if you want to fiddle with the source code, then:
cvarrToMat is in matrix.cpp
CV_ELEM_SIZE is in types_c.h
cvWarpPerspective is in imgwarp.cpp
cvHoughCircles is in hough.cpp
I hope that will help.
BTW, your next error will probably be:
cv::warpPerspective in the C++ OpenCV asserts that dst.data != src.data
thus
cvWarpPerspective(seq, seq, mmat);
won't work, because your source mat and destination mat reference the same data.
Not all functions in OpenCV (and image processing in general) work in place, either because there is no in-place algorithm or because it is slower than the out-of-place version (e.g. transposing an n*n mat works in place, but an n*m mat where n != m is harder to do in place and might be slower).
So you can't assume that using the src matrix as the dst will work.
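So if you do end up warping the image itself (rather than just the circle coordinates), give cvWarpPerspective its own destination. A minimal sketch using the JavaCV C wrappers from the question:
// Allocate a separate destination image instead of warping in place.
IplImage warped = cvCreateImage(cvGetSize(newGray), newGray.depth(), newGray.nChannels());
cvWarpPerspective(newGray, warped, mmat,
        CV_INTER_LINEAR | CV_WARP_FILL_OUTLIERS, cvScalarAll(0));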