UnsatisfiedLinkError with TessAPI1 while using tess4j - java

I'm working on macOS, using Tesseract to do the OCR.
I installed Tesseract with Homebrew.
Tesseract works well from the command line, and the Java program works well with the basic Tesseract.getInstance() example.
But since I want to get the confidence value of each character, I switched to TessAPI1 and got the error below:
Exception in thread "main" java.lang.UnsatisfiedLinkError: Error looking up function 'TessResultRendererAddError': dlsym(0x7f9df9c38c20, TessResultRendererAddError): symbol not found
at com.sun.jna.Function.<init>(Function.java:208)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:536)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:513)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:499)
at com.sun.jna.Native.register(Native.java:1509)
at com.sun.jna.Native.register(Native.java:1396)
at com.sun.jna.Native.register(Native.java:1156)
at net.sourceforge.tess4j.TessAPI1.<clinit>(Unknown Source)
at TesseractWrapper.doOCR(TesseractWrapper.java:71)
at OCR.main(OCR.java:6)
The error occurred at
handle = TessAPI1.TessBaseAPICreate();
The code looks like below:
TessAPI1.TessBaseAPI handle = TessAPI1.TessBaseAPICreate();
File tiff = new File(this.path);
BufferedImage image = ImageIO.read(new FileInputStream(tiff)); // requires the jai-imageio lib to read TIFF
ByteBuffer buf = ImageIOHelper.convertImageData(image);
int bpp = image.getColorModel().getPixelSize();
int bytespp = bpp / 8;
int bytespl = (int) Math.ceil(image.getWidth() * bpp / 8.0);
TessAPI1.TessBaseAPIInit3(handle, "tessdata", lang);
TessAPI1.TessBaseAPISetPageSegMode(handle, TessAPI1.TessPageSegMode.PSM_AUTO);
TessAPI1.TessBaseAPISetImage(handle, buf, image.getWidth(), image.getHeight(), bytespp, bytespl);
TessAPI1.TessBaseAPIRecognize(handle, null);
TessAPI1.TessResultIterator ri = TessAPI1.TessBaseAPIGetIterator(handle);
TessAPI1.TessPageIterator pi = TessAPI1.TessResultIteratorGetPageIterator(ri);
TessAPI1.TessPageIteratorBegin(pi);
I found this code in another question, and I gather that what I need is to get an iterator so I can retrieve each character with its confidence value, one by one.


How to save video into a file using OpenCV and Java?

Using OpenCV and Java, I can live-stream video from the camera in a Java JFrame application, so accessing the camera and capturing video from it both work.
Now I want to save the video into a file. Below is my code:
import ...;

public final class Main {
    VideoCapture videoCapture;
    Size size;
    Mat matrix;
    VideoWriter videoWriter;

    public Main() throws SocketException, UnknownHostException, IOException {
        System.load("C:\\opencv\\build\\java\\x64\\" + Core.NATIVE_LIBRARY_NAME + ".dll");
        System.load("C:\\opencv\\build\\bin\\opencv_videoio_ffmpeg453_64.dll");
        videoCapture = new VideoCapture(CAMERA_ID, CV_CAP_DSHOW);
        if (videoCapture.isOpened()) {
            Mat m = new Mat();
            videoCapture.read(m);
            int fourcc = VideoWriter.fourcc('H', '2', '6', '4');
            // int fourcc = VideoWriter.fourcc('x','2','6','4'); // Have tried, did not work.
            // int fourcc = (int) videoCapture.get(Videoio.CAP_PROP_FOURCC); // Have tried, did not work.
            // int fourcc = VideoWriter.fourcc('m','j','p','g'); // Have tried, did not work.
            // int fourcc = VideoWriter.fourcc('D','I','V','X'); // Have tried, did not work.
            double fps = videoCapture.get(Videoio.CAP_PROP_FPS);
            Size s = new Size((int) videoCapture.get(Videoio.CAP_PROP_FRAME_WIDTH),
                              (int) videoCapture.get(Videoio.CAP_PROP_FRAME_HEIGHT));
            videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.h264", fourcc, fps, s, true);
            // Have tried, did not work:
            // videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.avi", fourcc, fps, s, true);
            // videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.Mjpeg", fourcc, fps, s, true);
            // videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.mjpeg", fourcc, fps, s, true);
            while (videoCapture.read(m)) {
                videoWriter.write(m);
                // Have tried, did not work:
                // int i = 0;
                // Mat clonedMatrix = m.clone();
                // Imgproc.putText(clonedMatrix, ("frame" + i), new Point(100, 100), 1, 2, new Scalar(200, 0, 0), 3);
                // videoWriter.write(clonedMatrix);
                // i++;
            }
        }
        videoCapture.release();
        videoWriter.release();
    }
}
When I ran the above code, there was no error at all and the file bbbbbbbbbbbbb_1000.h264 was created, but neither VLC Media Player nor Windows Media Player could play the video.
I am using Windows 10. Below is the OpenCV 4.5.3 build information; I did not build the library from source myself.
Video I/O:
DC1394: NO
FFMPEG: YES (prebuilt binaries)
avcodec: YES (58.134.100)
avformat: YES (58.76.100)
avutil: YES (56.70.100)
swscale: YES (5.9.100)
avresample: YES (4.0.0)
GStreamer: NO
DirectShow: YES
Media Foundation: YES
DXVA: NO
How can I save the video into a file using OpenCV and Java? It can be MJPEG or H.264 or anything else, as long as I get a file I can play back.
According to getBuildInformation(), your OpenCV was built with FFMPEG, so that's good.
The issue is the file extension you tried to use.
The file extension .h264 does not represent an actual container format. .h264 is the extension given to a naked H.264 stream that sits in a file without a proper container structure.
You need to give your file the extension of a real container format: .mp4, Matroska (.mkv), or an MPEG transport stream (.ts) would all be suitable.
Further, H264 may not be a valid "fourcc". avc1 is a valid fourcc.
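For reference, a FourCC is nothing more than four ASCII characters packed into a 32-bit integer, lowest byte first; OpenCV's VideoWriter.fourcc helper does exactly this packing. Here's a minimal, stdlib-only Java sketch of that computation (class name invented for illustration):

```java
public class FourccDemo {
    // Packs four ASCII characters into an int, lowest byte first,
    // mirroring OpenCV's VideoWriter.fourcc helper.
    static int fourcc(char c1, char c2, char c3, char c4) {
        return (c1 & 0xFF) | ((c2 & 0xFF) << 8) | ((c3 & 0xFF) << 16) | ((c4 & 0xFF) << 24);
    }

    public static void main(String[] args) {
        int avc1 = fourcc('a', 'v', 'c', '1');
        // Printed as hex, the bytes spell "avc1" from low to high.
        System.out.println(Integer.toHexString(avc1)); // 31637661
    }
}
```

This is why the fourcc is case- and order-sensitive: 'a','v','c','1' and 'A','V','C','1' produce different integers, and the writer backend only recognizes specific ones.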
OpenCV has builtin support for the AVI container (.avi) and an MJPEG video codec (MJPG). That will always work. More containers and codecs may be available if OpenCV was built with ffmpeg or it can use Media APIs of the operating system.
Here's some Python, but the API usage is the same:
import numpy as np
import cv2 as cv

# fourcc = cv.VideoWriter.fourcc(*"H264")  # might not be supported
fourcc = cv.VideoWriter.fourcc(*"avc1")  # that's the proper fourcc by standard
# calling with (*"avc1") is equivalent to calling with ('a', 'v', 'c', '1')

(width, height) = (640, 360)

vid = cv.VideoWriter("foo.mp4", fourcc=fourcc, fps=16, frameSize=(width, height), isColor=True)
assert vid.isOpened()

frame = np.zeros((height, width, 3), np.uint8)

for k in range(256):
    frame[:, :] = (0, k, 255)  # BGR order
    vid.write(frame)

vid.release()  # finalize the file (write headers and whatnot)
This video will be 256 frames at 16 fps (16 seconds), fading from red to yellow. It will be an MP4 file containing H.264 video.
Just got it to work creating an AVI file. To create an AVI file, the fourcc must be as follows:
int fourcc = VideoWriter.fourcc('M','J','P','G');
The file name must have .avi as the extension.
On some OSes, videoCapture.get(Videoio.CAP_PROP_FPS) might return 0.0. Set a default frames-per-second value when it does; I've set it to 25.
Thanks to @Christoph Rackwitz and his Python answer above: to save the video as H.264, the fourcc can be as follows:
int fourcc = VideoWriter.fourcc('a', 'v', 'c', '1');
And the file extension can be .ts.

How to save 16bit depth Image to file in Arcore (java)

I want to save the depth info from ARCore to storage.
Here is the example from the developer guide:
public int getMillimetersDepth(Image depthImage, int x, int y) {
    // The depth image has a single plane, which stores depth for each
    // pixel as 16-bit unsigned integers.
    Image.Plane plane = depthImage.getPlanes()[0];
    int byteIndex = x * plane.getPixelStride() + y * plane.getRowStride();
    ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
    short depthSample = buffer.getShort(byteIndex);
    return depthSample;
}
So I want to save this ByteBuffer to a local file, but my output txt file is not readable. How can I fix this?
Here is what I have:
Image depthImage = frame.acquireDepthImage();
Image.Plane plane = depthImage.getPlanes()[0];
int format = depthImage.getFormat();
ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
byte[] data = new byte[buffer.remaining()];
buffer.get(data);
File mypath = new File(super.getExternalFilesDir("depDir"), Long.toString(lastPointCloudTimestamp) + ".txt");
FileChannel fc = new FileOutputStream(mypath).getChannel();
fc.write(buffer);
fc.close();
depthImage.close();
I tried to decode them with
String s = new String(data, "UTF-8");
System.out.println(StandardCharsets.UTF_8.decode(buffer).toString());
but the output is still strange like this
.03579:<=>#ABCDFGHJKMNOPRQSUWY]_b
In order to obtain the depth data provided by the ARCore session, you need to write raw bytes into your local file. A Buffer object is a container: it contains a finite sequence of elements of a specific primitive type (here, bytes for a ByteBuffer). So what you need to write into your file is your data variable, which holds the bytes previously drained from the buffer (via buffer.get(data)). Note that after buffer.get(data) the buffer's position is at its limit, so fc.write(buffer) has nothing left to write.
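As a minimal, stdlib-only sketch of that idea, with two invented sample values standing in for real DEPTH16 plane data (no ARCore session is assumed here):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Path;

public class DepthDumpSketch {
    public static void main(String[] args) throws IOException {
        // Pretend these two native-order shorts came from the depth plane.
        ByteBuffer buffer = ByteBuffer.allocate(4).order(ByteOrder.nativeOrder());
        buffer.putShort((short) 1234).putShort((short) 5678);
        buffer.flip();

        // Drain the buffer into a byte array, then write the raw bytes.
        byte[] data = new byte[buffer.remaining()];
        buffer.get(data);
        Path out = Files.createTempFile("depth", ".bin");
        Files.write(out, data);

        // Reading the file back recovers the original 16-bit samples.
        ByteBuffer back = ByteBuffer.wrap(Files.readAllBytes(out)).order(ByteOrder.nativeOrder());
        System.out.println(back.getShort() + " " + back.getShort()); // 1234 5678
    }
}
```

The key point is that the bytes are written as-is, with no text encoding, so a binary-aware reader (like np.fromfile below) can reinterpret them as 16-bit integers.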
It works fine for me: I managed to draw the resulting depth map with the following Python code (the idea behind it can easily be adapted to Java):
import numpy as np
import cv2 as cv

depthData = np.fromfile('depthdata.txt', dtype=np.uint16)
H = 120
W = 160

def extractDepth(x):
    depthConfidence = (x >> 13) & 0x7
    if depthConfidence > 6:
        return 0
    return x & 0x1FFF

depthMap = np.array([extractDepth(x) for x in depthData]).reshape(H, W)
depthMap = cv.rotate(depthMap, cv.ROTATE_90_CLOCKWISE)
For further details, read the documentation on the depth map format (DEPTH16) here: https://developer.android.com/reference/android/graphics/ImageFormat#DEPTH16
Also be aware that the depth map resolution is 160x120 pixels and that it is oriented in landscape format.
Make sure as well to surround your code with a try/catch block to handle an IOException.
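The same DEPTH16 unpacking can be sketched in plain Java, mirroring the Python extractDepth above: the low 13 bits of each sample hold the range in millimeters and the top 3 bits hold a confidence value (sample value below is invented for illustration):

```java
public class Depth16Sketch {
    // Low 13 bits: depth in millimeters (per Android's DEPTH16 format).
    static int depthMillimeters(short sample) {
        return sample & 0x1FFF;
    }

    // Top 3 bits: confidence field.
    static int confidenceBits(short sample) {
        return (sample >> 13) & 0x7;
    }

    public static void main(String[] args) {
        // Invented sample: confidence 2 in the top bits, 1500 mm in the low bits.
        short sample = (short) ((2 << 13) | 1500);
        System.out.println(depthMillimeters(sample) + " mm, confidence bits " + confidenceBits(sample)); // 1500 mm, confidence bits 2
    }
}
```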

GLFW_PLATFORM_ERROR when using glfwSetWindowIcon()

I'm currently coding a project using LWJGL 3. Since GLFW 3.2 it's possible to set a window icon. I used it as you can see below, but I keep getting the error, and I have no clue why. What am I doing wrong?
What I have tried already:
I have made sure that the file is present in "res/img/icon.png".
The PNGLoader (Found here) works, as I am using textures and it loads them without any problems.
Code:
PNGDecoder dec = null;
try {
    dec = new PNGDecoder(new FileInputStream("res/img/icon.png"));
    int width = dec.getWidth();
    int height = dec.getHeight();
    ByteBuffer buf = BufferUtils.createByteBuffer(width * height * 4);
    dec.decode(buf, width * 4, PNGDecoder.Format.RGBA);
    buf.flip();
    Buffer images = new Buffer(buf);
    GLFW.glfwSetWindowIcon(window, images);
} catch (IOException e) {
    e.printStackTrace();
}
Here's the error that I get:
[LWJGL] GLFW_PLATFORM_ERROR error
Description : Win32: Failed to create RGBA bitmap
Stacktrace :
org.lwjgl.glfw.GLFW.nglfwSetWindowIcon(GLFW.java:1802)
org.lwjgl.glfw.GLFW.glfwSetWindowIcon(GLFW.java:1829)
ch.norelect.emulator.designer.Designer.initialize(Designer.java:81)
ch.norelect.emulator.designer.Designer.start(Designer.java:48)
ch.norelect.emulator.Main.main(Main.java:27)
You are passing the raw pixel data as a GLFWImage.Buffer, but GLFW needs the width and height as well. GLFWImage.Buffer is just a convenience class for laying several GLFWImage structs next to each other; each one holds the properties of a single image.
GLFWImage image = GLFWImage.malloc();
image.set(width, height, buf);
GLFWImage.Buffer images = GLFWImage.malloc(1);
images.put(0, image);
glfwSetWindowIcon(window, images);
images.free();
image.free();

Scan, detect and decode UPC code from an image

I am working on Android development. Once I get a byte array from a Google Glass frame, I try to scan the array using the ZXing library to detect a 1D barcode (UPC code).
I have tried this code snippet:
BufferedImage image = ImageIO.read(game);
BufferedImageLuminanceSource bils = new BufferedImageLuminanceSource(image);
HybridBinarizer hb = new HybridBinarizer(bils);
BitMatrix bm = hb.getBlackMatrix();
MultiDetector detector = new MultiDetector(bm);
DetectorResult dResult = detector.detect();
if (dResult == null) {
    System.out.println("Image does not contain any barcode");
} else {
    BitMatrix QRImageData = dResult.getBits();
    Decoder decoder = new Decoder();
    DecoderResult decoderResult = decoder.decode(QRImageData);
    String QRString = decoderResult.getText();
    System.out.println(QRString);
}
It works fine for QR codes: it detects and decodes them well, but it does not detect UPC codes.
I also tried this code snippet:
InputStream barCodeInputStream = new FileInputStream(game);
BufferedImage barCodeBufferedImage = ImageIO.read(barCodeInputStream);
BufferedImage image = ImageIO.read(game);
LuminanceSource source = new BufferedImageLuminanceSource(image);
BinaryBitmap bitmap = new BinaryBitmap(new GlobalHistogramBinarizer(source));
RSSExpandedReader rssExpandedReader = new RSSExpandedReader();
int rowNumber = bitmap.getHeight() / 2;
BitArray row = bitmap.getBlackRow(0, null);
Result theResult = rssExpandedReader.decodeRow(rowNumber, row, new Hashtable());
In both cases I get "Exception in thread "main" com.google.zxing.NotFoundException".
Does anyone know how to fix this issue?
getBlackMatrix() -
Converts a 2D array of luminance data to 1 bit. As above, assume this method is expensive and do not call it repeatedly. This method is intended for decoding 2D barcodes and may or may not apply sharpening. Therefore, a row from this matrix may not be identical to one fetched using getBlackRow(), so don't mix and match between them.
getBlackRow()-
Converts one row of luminance data to 1 bit data. May actually do the conversion, or return cached data. Callers should assume this method is expensive and call it as seldom as possible. This method is intended for decoding 1D barcodes and may choose to apply sharpening.

How to combine two or more TIFF image files into one multipage TIFF image in Java

I have 5 single-page TIFF images and want to combine them into one multipage TIFF image.
I am using the Java Advanced Imaging (JAI) API. I have read the JAI API documentation and the tutorials from Sun, but I am new to JAI and only know core Java, so I don't understand them.
Please tell me how to combine 5 TIFF image files into one multipage TIFF image, and give me some guidance on this topic.
I have been searching the internet for this but haven't found a single clue.
I hope you have the computer memory to do this; TIFF image files are large.
You're correct that you need the Java Advanced Imaging (JAI) API to do this.
First, you have to convert the TIFF images to java.awt.image.BufferedImage objects. Here's some code that will probably work; I haven't tested it:
BufferedImage[] image = new BufferedImage[numImages];
for (int i = 0; i < numImages; i++) {
    SeekableStream ss = new FileSeekableStream(input_dir + file[i]);
    ImageDecoder decoder = ImageCodec.createImageDecoder("tiff", ss, null);
    PlanarImage op = new NullOpImage(decoder.decodeAsRenderedImage(0), null, null, OpImage.OP_IO_BOUND);
    image[i] = op.getAsBufferedImage();
}
Then, you convert the BufferedImage array back into a multipage TIFF image. I haven't tested this code either:
TIFFEncodeParam params = new TIFFEncodeParam();
OutputStream out = new FileOutputStream(output_dir + image_name + ".tif");
ImageEncoder encoder = ImageCodec.createImageEncoder("tiff", out, params);
Vector vector = new Vector();
for (int i = 0; i < numImages; i++) {
    vector.add(image[i]);
}
params.setExtraImages(vector.listIterator(1)); // guard against an empty vector here to avoid IndexOutOfBoundsException
encoder.encode(image[0]);
out.close();
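If you can use Java 9 or newer, you can skip JAI entirely: the JDK's ImageIO ships with a TIFF plugin, and ImageWriter's sequence methods produce a multipage TIFF. A hedged, self-contained sketch (class name and dummy pages invented for illustration; in practice the pages would be your 5 loaded TIFFs):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class MultipageTiff {
    // Writes all pages into one TIFF file using the JDK's built-in TIFF plugin (Java 9+).
    static void writeMultipageTiff(BufferedImage[] pages, File out) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("tiff").next();
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
            writer.setOutput(ios);
            writer.prepareWriteSequence(null); // default stream metadata
            for (BufferedImage page : pages) {
                writer.writeToSequence(new IIOImage(page, null, null), null);
            }
            writer.endWriteSequence();
        } finally {
            writer.dispose();
        }
    }

    public static void main(String[] args) throws IOException {
        // Two dummy pages stand in for real single-page TIFF images.
        BufferedImage[] pages = {
            new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB),
            new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB)
        };
        File out = File.createTempFile("combined", ".tif");
        writeMultipageTiff(pages, out);
        System.out.println("Wrote " + out.length() + " bytes");
    }
}
```

Reading the file back with an ImageReader and calling getNumImages(true) should report one image per page written.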
