I am using libgdx to generate some 3D meshes in code. Right now I am attempting to generate a flat plane with many vertices in the middle just to test things out; however, I receive Exception in thread "LWJGL Application" java.nio.BufferOverflowException when I call mesh.setIndices(indices);, where the variable indices is a short array.
I have no trouble if I use fewer than roughly 150-180 indices. I wasn't able to pin down the exact number by trial and error, but I am certain that the exception is thrown if I have more than 180 indices.
Here is my code for creating the mesh:
Firstly, here is how I generate my vertices (I don't think this is the problem, but I'll include it anyway). Note that my vertex attributes are VertexAttribute.Position(), VertexAttribute.Normal(), and VertexAttribute.ColorUnpacked().
private float[] generateVertices(int width, int height) {
    int index = 0;
    float[] vertices = new float[width * height * 10];
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            // vertex coordinates
            vertices[index] = i;
            vertices[index + 1] = 0;
            vertices[index + 2] = j;
            // normal
            vertices[index + 3] = 0;
            vertices[index + 4] = 1;
            vertices[index + 5] = 0;
            // random colors!!!
            vertices[index + 6] = MathUtils.random(0.3f, 0.99f);
            vertices[index + 7] = MathUtils.random(0.3f, 0.99f);
            vertices[index + 8] = MathUtils.random(0.3f, 0.99f);
            vertices[index + 9] = 1;
            index += 10;
        }
    }
    return vertices;
}
Secondly here is how I generate my indices:
private short[] generateIndices(int width, int height) {
    int index = 0;
    short[] indices = new short[(width - 1) * (height - 1) * 3 * 2];
    for (int i = 0; i < width - 1; i++) {
        for (int j = 0; j < height - 1; j++) {
            indices[index] = (short) ((j * height) + i);
            indices[index + 1] = (short) ((j * height) + i + 1);
            indices[index + 2] = (short) (((j + 1) * height) + i);
            indices[index + 3] = (short) (((j + 1) * height) + i);
            indices[index + 4] = (short) ((j * height) + i + 1);
            indices[index + 5] = (short) (((j + 1) * height) + i + 1);
            index += 6;
        }
    }
    return indices;
}
Thirdly, this is how I set the vertices and indices (note that 6 and 7 are the width and height of the plane):
mesh.setVertices(generateVertices(6, 7));
mesh.setIndices(generateIndices(6, 7));
Finally, here is how I render my mesh through a custom shader:
shaderProgram.setUniformMatrix("u_projectionViewMatrix", camera.combined);
shaderProgram.setUniformMatrix("uMVMatrix", mat4);
// rendering with triangles
mesh.render(shaderProgram, GL20.GL_TRIANGLES);
I don't know what is causing this exception; any help or suggestions are welcome.
Thanks in advance.
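A BufferOverflowException from setIndices usually means the Mesh was constructed with a maxIndices capacity smaller than the array being set. The Mesh constructor call isn't shown in the question, so this is a guess, but a 6 x 7 grid needs exactly 180 indices, which matches the observed threshold. A minimal sketch of the required capacities, with the per-vertex float count taken from the generator above:

```java
// Sketch: required buffer capacities for a width x height grid mesh.
// Derived from the generators in the question, not from the (unshown)
// Mesh constructor call.
public class GridCapacity {
    static final int FLOATS_PER_VERTEX = 10; // position(3) + normal(3) + unpacked color(4)

    static int requiredVertexFloats(int width, int height) {
        return width * height * FLOATS_PER_VERTEX;
    }

    static int requiredIndices(int width, int height) {
        // two triangles (6 indices) per grid cell
        return (width - 1) * (height - 1) * 6;
    }

    public static void main(String[] args) {
        // For the 6 x 7 plane in the question:
        System.out.println(requiredIndices(6, 7));      // 180
        System.out.println(requiredVertexFloats(6, 7)); // 420
    }
}
```

If the mesh is built with libgdx's new Mesh(true, maxVertices, maxIndices, attributes...), maxVertices counts vertices (42 here, not floats) and maxIndices would need to be at least 180.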
Related
I've been trying to animate a model in OpenGL using Assimp. The result of my attempts is this.
Loading bones:
List<Bone> getBones(AIMesh mesh) {
    List<Bone> bones = new ArrayList<>();
    for (int i = 0; i < mesh.mNumBones(); i++) {
        AIBone aiBone = AIBone.create(mesh.mBones().get(i));
        Bone bone = new Bone(aiBone.mName().dataString());
        bone.setOffset(aiMatrixToMatrix(aiBone.mOffsetMatrix()).transpose());
        bones.add(bone);
    }
    return bones;
}
Loading vertices:
VertexData processVertices(AIMesh mesh) {
    float[] weights = null;
    int[] boneIds = null;
    float[] vertices = new float[mesh.mNumVertices() * 3];
    boolean calculateBones = mesh.mNumBones() != 0;
    if (calculateBones) {
        weights = new float[mesh.mNumVertices() * 4];
        boneIds = new int[mesh.mNumVertices() * 4];
    }
    int i = 0;
    int k = 0;
    for (AIVector3D vertex : mesh.mVertices()) {
        vertices[i++] = vertex.x();
        vertices[i++] = vertex.y();
        vertices[i++] = vertex.z();
        // bone data if any
        if (calculateBones) {
            for (int j = 0; j < mesh.mNumBones(); j++) {
                AIBone bone = AIBone.create(mesh.mBones().get(j));
                for (AIVertexWeight weight : bone.mWeights()) {
                    if (weight.mVertexId() == i - 3) {
                        k++;
                        boneIds[k] = j;
                        weights[k] = weight.mWeight();
                    }
                }
            }
        }
    }
What am I doing wrong?
Are all the matrices required for the bind pose, or can I use only the offset for testing?
If I read your code right, you do not get the indices from the faces. You need to iterate over the faces of your mesh to get the correct indices, if I understand the approach you are using.
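To illustrate the suggestion above: with the LWJGL Assimp bindings you would loop over mesh.mFaces() and read each AIFace's index buffer rather than assuming vertices are consecutive. A minimal sketch of just the flattening step, using plain int arrays as a hypothetical stand-in for AIFace (the real structs need a loaded scene):

```java
import java.util.ArrayList;
import java.util.List;

// Concept sketch: build a flat index list by iterating faces, as the answer
// suggests. An int[] per face stands in for AIFace.mIndices(); with the real
// bindings you would loop mesh.mFaces() and read each face's index buffer.
public class FaceIndices {
    static int[] flattenFaceIndices(List<int[]> faces) {
        List<Integer> out = new ArrayList<>();
        for (int[] face : faces) {
            for (int idx : face) {
                out.add(idx); // indices come from the face, not from vertex order
            }
        }
        int[] result = new int[out.size()];
        for (int i = 0; i < result.length; i++) result[i] = out.get(i);
        return result;
    }

    public static void main(String[] args) {
        List<int[]> faces = new ArrayList<>();
        faces.add(new int[]{0, 1, 2}); // two triangles sharing an edge
        faces.add(new int[]{2, 1, 3});
        int[] indices = flattenFaceIndices(faces);
        System.out.println(indices.length); // 6
    }
}
```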
So I've got a school project where we have to work with a couple of classes our prof gave us and write our own to make an image organizer.
The first part consists of writing a set of static methods to edit the images themselves as 2D arrays of Color arrays (the ColorImage type).
The first problem is making a tool to downscale an image by a factor of f (an f-sided square of pixels in the original becomes 1 pixel in the output). Mine works, but I think it shouldn't, and I can't figure out why it works, so any help is appreciated. Specifically I'm talking about the loop that averages the colours of each position in the buffer array (avgArr[][]) (line 16). I'm thinking the values of reds, blues, and greens would just be overwritten on each iteration, and avgColor would just get the value of the last pixel whose RGB values it read off avgArr.
static ColorImage downscaleImg(ColorImage img, int f) {
    ColorImage dsi = new ColorImage(img.getWidth() / f, img.getHeight() / f);
    Color[][] avgArr = new Color[f][f];
    int reds = 0;
    int greens = 0;
    int blues = 0;
    for (int i = 0; i < dsi.getWidth(); i++) {
        for (int j = 0; j < dsi.getHeight(); j++) {
            for (int x = i * f, xc = 0; x < i * f + (f - 1); x++, xc++) {
                for (int y = j * f, yc = 0; y < j * f + (f - 1); y++, yc++) {
                    avgArr[xc][yc] = img.getColor(x, y);
                }
            }
            for (int k = 0; k < f - 1; k++) {
                for (int w = 0; w < f - 1; w++) {
                    reds = avgArr[k][w].getR();
                    greens = avgArr[k][w].getG();
                    blues = avgArr[k][w].getB();
                }
            }
            int count = f * f;
            Color avgColor = new Color(reds / count, greens / count, blues / count);
            dsi.setColor(i, j, avgColor);
            reds = 0;
            greens = 0;
            blues = 0;
        }
    }
    return dsi;
}
Thanks.
EDIT: Turns out it was in fact just taking the colour of the last position of avgArr that it looked at. Any suggestions to correct this are welcome.
I think you can solve your problem by summing the reds/greens/blues and then dividing them by the total pixels at the end to find the average:
int reds = 0;
int greens = 0;
int blues = 0;
...
for (int k = 0; k < f - 1; k++) {
    for (int w = 0; w < f - 1; w++) {
        reds += avgArr[k][w].getR(); // <-- note the +=
        greens += avgArr[k][w].getG();
        blues += avgArr[k][w].getB();
    }
}
int count = (f - 1) * (f - 1);
Color avgColor = new Color(reds / count, greens / count, blues / count);
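For comparison, here is a self-contained sketch of the same averaging idea that covers the full f x f block (note the loop bounds run to i*f + f, not i*f + (f-1)). Plain int arrays stand in for the assignment's ColorImage class; the same summing pattern applies per colour channel:

```java
// Sketch: downscale a grayscale image stored as int[][] by factor f, averaging
// each full f x f block. Plain arrays are a stand-in for the ColorImage class
// from the assignment; apply the same pattern to each of R, G, and B.
public class Downscale {
    static int[][] downscale(int[][] img, int f) {
        int w = img.length / f, h = img[0].length / f;
        int[][] out = new int[w][h];
        for (int i = 0; i < w; i++) {
            for (int j = 0; j < h; j++) {
                int sum = 0;
                for (int x = i * f; x < i * f + f; x++) {     // full f columns,
                    for (int y = j * f; y < j * f + f; y++) { // full f rows
                        sum += img[x][y];
                    }
                }
                out[i][j] = sum / (f * f); // average over the whole block
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {
            {10, 20, 30, 40},
            {10, 20, 30, 40},
            {10, 20, 30, 40},
            {10, 20, 30, 40},
        };
        int[][] small = downscale(img, 2);
        System.out.println(small[0][0] + " " + small[0][1]); // 15 35
    }
}
```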
There is a matrix in [x][y] order. I want to print its values in clockwise order.
I have tried several methods but have been unable to write the logic. I'm trying it in Java, but the logic is what matters, so you can help me in any language.
When I read your post I started to play around, so I'll post my code; maybe it will be helpful for you. I did it for a square matrix; if you want a rectangular one, you need separate stepX and stepY. SIZE would be an input parameter in your case; I have it final static for testing.
public class clockwise {
    private static final int SIZE = 3;

    public static void main(String[] args) {
        // int[][] test_matrix = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
        int[][] test_matrix = {{1,2,3},{5,6,7},{9,10,11}};
        int[][] direction = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}}; // {x,y}
        for (int i = 0; i < SIZE; i++) {
            for (int j = 0; j < SIZE; j++)
                System.out.print(test_matrix[i][j] + " ");
            System.out.println("");
        }
        int x = 0;
        int y = 0;
        int directionMove = 0;
        int stepSize = SIZE;
        boolean changeStep = true;
        int stepCounter = 0;
        for (int i = 0; i < SIZE * SIZE; i++) {
            System.out.print(test_matrix[x][y] + " ");
            stepCounter++;
            if (stepCounter % stepSize == 0) {
                directionMove++;
                directionMove = directionMove % 4;
                if (changeStep) { // after the first edge, decrease the step only after passing two edges
                    stepSize--;
                    changeStep = false;
                } else {
                    changeStep = true;
                }
                stepCounter = 0;
            }
            x += direction[directionMove][0];
            y += direction[directionMove][1];
        }
    }
}
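An alternative sketch that also works for rectangular matrices uses four shrinking boundaries instead of a direction table. It assumes the conventional matrix[row][col] layout, so "clockwise" here means right, then down, then left, then up:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: clockwise (spiral) traversal of a rectangular matrix using four
// shrinking boundaries. Assumes matrix[row][col] layout.
public class Spiral {
    static List<Integer> clockwise(int[][] m) {
        List<Integer> out = new ArrayList<>();
        if (m.length == 0) return out;
        int top = 0, bottom = m.length - 1, left = 0, right = m[0].length - 1;
        while (top <= bottom && left <= right) {
            for (int j = left; j <= right; j++) out.add(m[top][j]);   // top row, left to right
            top++;
            for (int i = top; i <= bottom; i++) out.add(m[i][right]); // right column, downward
            right--;
            if (top <= bottom) {                                      // bottom row, right to left
                for (int j = right; j >= left; j--) out.add(m[bottom][j]);
                bottom--;
            }
            if (left <= right) {                                      // left column, upward
                for (int i = bottom; i >= top; i--) out.add(m[i][left]);
                left++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] m = {{1, 2, 3}, {5, 6, 7}, {9, 10, 11}};
        System.out.println(clockwise(m)); // [1, 2, 3, 7, 11, 10, 9, 5, 6]
    }
}
```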
I've been trying to convert some OpenCV C++ code to OpenCV Java, and I can't seem to get pixel division to work properly. I take a mean-shift-segmented Mat that I convert to grayscale and then to 32F.
I then compare the most downsampled-then-upsampled image (which is built from the gray mean shift Mat) to the original gray mean shift Mat.
I've already read Using get() and put() to access pixel values in OpenCV for Java;
however, it and others like it do not work. The error message I am getting is invalid mat type 5. However, even if I were able to see the saliency map, I am positive it is wrong, because when I pass in image 001.jpg in C++ I am supposed to see the original image plus a red square around the objects. In Java, I only see the original image at the end.
NOTE:
AbstractImageProvider.deepCopy(AbstractImageProvider.matToBufferedImage(Saliency),disp);
is an API call that works when I attempt to show the original Mat, the MeanShift Mat, and the gray MeanShift Mat. It fails when showing Saliency.
C++
I only did a channel split because I was testing out other colorspaces; however, in Java I only want to use grayscale.
input = MeanShift.clone();
input.convertTo(input, CV_32F);
for (int i = 0; i < Pyramid_Size; i++) { DS_Pyramid[i] = input.clone(); }
for (int i = 0; i < Pyramid_Size; i++) {
    for (int k = 0; k <= i; k++) { // Why don't I just downsample x3 a copy of MeanShift.clone(), then upsample x3 that same one? ...
        pyrDown(DS_Pyramid[i], DS_Pyramid[i], Size(DS_Pyramid[i].cols / 2, DS_Pyramid[i].rows / 2));
        US_Pyramid[i] = DS_Pyramid[i].clone();
    }
    for (int j = 0; j <= i; j++) {
        pyrUp(US_Pyramid[i], US_Pyramid[i], Size(US_Pyramid[i].cols * 2, US_Pyramid[i].rows * 2));
    }
}
top = US_Pyramid[Pyramid_Size - 1].clone(); // most downsampled layer, upsampled
split(top, top_chs);
split(input.clone(), meanShift_chs); // split into channels result
split(input.clone(), sal_chs);       // holder to use for compare
float top_min = 1.0;
float ms_min = 1.0;
for (int i = 0; i < top.rows; i++) {     // find the smallest value in both top and meanShift
    for (int k = 0; k < top.cols; k++) { // this is so you can sub out the 0 with the min value
        for (int j = 0; j < top.channels(); j++) { // later on
            float a = top_chs[j].at<float>(i, k);
            float b = meanShift_chs[j].at<float>(i, k);
            if (a < top_min && a >= 0) { top_min = a; } // make sure you don't have a top_min of zero... that'd be bad.
            if (b < ms_min && b >= 0) { ms_min = b; }
        }
    }
}
for (int i = 0; i < top.rows; i++) {
    for (int k = 0; k < top.cols; k++) {
        for (int j = 0; j < top.channels(); j++) {
            float a, b, c;
            a = top_chs[j].at<float>(i, k);
            b = meanShift_chs[j].at<float>(i, k);
            if (a <= 0) { a = top_min; } // make sure you don't divide by zero
            if (b <= 0) { b = ms_min; } // make sure you really don't divide by zero
            if (a <= b) { c = 1.0 - a / b; }
            else        { c = 1.0 - b / a; }
            // c = sqrt(c); // makes stuff more salient, but makes noise pop out too
            sal_chs[j].at<float>(i, k) = c;
        }
    }
}
merge(sal_chs, Saliency); // combine into saliency map
imshow("saliency", Saliency);
Java
MeanShift = inputImage.clone();
Imgproc.pyrMeanShiftFiltering(MeanShift, MeanShift, MeanShift_spatialRad, MeanShift_colorRad);
Imgproc.cvtColor(MeanShift, MeanShift, Imgproc.COLOR_BGR2GRAY);
MeanShift.convertTo(MeanShift, CvType.CV_32F); // 32F between 0 - 1. ************** IMPORTANT LINE
for (int i = 0; i < PyrSize; i++) {
    DS_Pyramid.add(new Mat());
    UP_Pyramid.add(new Mat());
}
for (int i = 0; i < PyrSize; i++) {
    DS_Pyramid.set(i, MeanShift);
}
for (int i = 0; i < PyrSize; i++) {
    for (int k = 0; k <= i; k++) { // At 0 it is downsampled once, at 1 twice, at 2 three times.
        Imgproc.pyrDown(DS_Pyramid.get(i), DS_Pyramid.get(i)); // pyrDown by default: img.width / 2, img.height / 2
        Mat a = new Mat(); // save the downsampled mat at i
        a = DS_Pyramid.get(i);
        UP_Pyramid.add(a);
    }
    for (int j = 0; j <= i; j++) {
        Imgproc.pyrUp(UP_Pyramid.get(i), UP_Pyramid.get(i));
    }
}
top = UP_Pyramid.get(PyrSize - 1);
bot = MeanShift.clone();
Saliency = MeanShift.clone();
// http://answers.opencv.org/question/5/how-to-get-and-modify-the-pixel-of-mat-in-java/
// http://www.tutorialspoint.com/java_dip/applying_weighted_average_filter.htm
for (int i = 0; i < top.rows(); i++) {
    for (int j = 0; j < top.cols(); j++) {
        int index = i * top.rows() + j;
        float[] top_temp = top.get(i, j);
        float[] bot_temp = bot.get(i, j);
        float[] sal_temp = bot.get(i, j);
        if (top_temp[0] <= bot_temp[k]) { sal_temp[0] = 1.0f - (top_temp[0] / bot_temp[0]); }
        else { sal_temp[0] = 1.0f - (bot_temp[0] / top_temp[0]); }
        Saliency.put(i, j, sal_temp);
    }
}
AbstractImageProvider.deepCopy(AbstractImageProvider.matToBufferedImage(Saliency),disp);
Found a simple and working solution after a lot of searching. This might help you get past the error invalid mat type 5.
Code:
Mat img = Highgui.imread("Input.jpg"); // Reads the image from the file system into a matrix
int rows = img.rows();                 // Number of rows
int cols = img.cols();                 // Number of columns
int ch = img.channels();               // Number of channels (grayscale: 1, RGB: 3, etc.)
for (int i = 0; i < rows; i++)
{
    for (int j = 0; j < cols; j++)
    {
        double[] data = img.get(i, j); // Stores the element in an array
        for (int k = 0; k < ch; k++)   // Runs for the available number of channels
        {
            data[k] = data[k] * 2;     // Pixel modification done here
        }
        img.put(i, j, data);           // Puts the element back into the matrix
    }
}
Highgui.imwrite("Output.jpg", img); // Writes the image back to the file system using the modified matrix
Note: An important point that does not seem to be mentioned anywhere online is that the put method does not write pixels back to Input.jpg; it merely updates the values of the matrix img. Therefore the code above does not alter the input image at all. To produce a visible output, the matrix img needs to be written to a file, i.e., Output.jpg in this case. Also, img.get(i, j) seems to be a better way of handling the matrix elements than the accepted solution above, as it helps in visualizing and working with the image matrix and does not require a large contiguous memory allocation.
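Applied to the question, the per-pixel operation inside the loops is 1 - min(a, b) / max(a, b) with zero guards. A plain-Java sketch of just that arithmetic, with float arrays standing in for the CV_32F Mats and minA/minB playing the role of top_min/ms_min:

```java
// Sketch: the per-pixel saliency ratio from the question, 1 - min(a,b)/max(a,b),
// on plain float arrays standing in for the CV_32F Mats. minA and minB are the
// precomputed minimum values used as divide-by-zero guards (top_min / ms_min).
public class SaliencyRatio {
    static float[] saliency(float[] top, float[] bot, float minA, float minB) {
        float[] sal = new float[top.length];
        for (int i = 0; i < top.length; i++) {
            float a = top[i] <= 0 ? minA : top[i]; // guard against divide-by-zero
            float b = bot[i] <= 0 ? minB : bot[i];
            sal[i] = a <= b ? 1.0f - a / b : 1.0f - b / a;
        }
        return sal;
    }

    public static void main(String[] args) {
        float[] top = {0.5f, 0.25f};
        float[] bot = {0.5f, 0.75f};
        float[] sal = saliency(top, bot, 0.01f, 0.01f);
        // equal pixels give 0; differing pixels give 1 - ratio
        System.out.println(sal[0] + " " + sal[1]);
    }
}
```

On the real Mats, the element-wise division itself could also be delegated to OpenCV's Core.divide, keeping only the min/compare logic in Java.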
public CompressImage() {
}

// compress image method
public static short[] compress(short image[][]) {
    // get image dimensions
    int imageLength = image.length;   // row count
    int imageWidth = image[0].length; // column count
    // convert vertical to horizontal
    // store transposed image
    short[][] transposeImage = new short[imageWidth][imageLength];
    // rotate by +90
    for (int i = 0; i < imageWidth; i++)
    {
        for (int j = 0; j < imageLength; j++)
        {
            short temp = image[i][j];
            transposeImage[i][j] = image[j][i];
            transposeImage[j][i] = temp;
        }
    }
short temp = image[i][j];
transposeImage[i][j] = image[j][i];
transposeImage[j][i] = temp;
Why are you swapping here? That doesn't make sense - transposeImage is a new matrix, so you don't have to do in-place editing. This is guaranteed to break if imageWidth != imageLength - see if you can figure out why.
And actually, you're not even swapping. The three lines above are equivalent to:
transposeImage[i][j] = image[j][i];
transposeImage[j][i] = image[i][j];
The body of the nested for loop should really just be:
transposeImage[i][j] = image[j][i];
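Putting the answer together, the transpose needs only that one assignment per element, with the loop bounds arranged so it is also safe for non-square images. A minimal sketch:

```java
// Sketch: transpose as the answer suggests, one assignment per element, with
// bounds that stay in range even when the image is not square (the question's
// code indexes out of bounds when imageWidth != imageLength).
public class Transpose {
    static short[][] transpose(short[][] image) {
        int rows = image.length;
        int cols = image[0].length;
        short[][] t = new short[cols][rows];
        for (int i = 0; i < cols; i++) {
            for (int j = 0; j < rows; j++) {
                t[i][j] = image[j][i]; // just the one assignment; no swapping needed
            }
        }
        return t;
    }

    public static void main(String[] args) {
        short[][] img = {{1, 2, 3}, {4, 5, 6}}; // 2 x 3
        short[][] t = transpose(img);           // 3 x 2
        System.out.println(t.length + " x " + t[0].length + ", t[2][1] = " + t[2][1]);
    }
}
```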