Using OpenCV and Java I was able to live-stream video from the camera in a Java JFrame application, so accessing the camera and capturing video from it works.
Now I want to save the video to a file. Below is my code:
import ...;

public final class Main {

    VideoCapture videoCapture;
    Size size;
    Mat matrix;
    VideoWriter videoWriter;

    public Main() throws SocketException, UnknownHostException, IOException {
        System.load("C:\\opencv\\build\\java\\x64\\" + Core.NATIVE_LIBRARY_NAME + ".dll");
        System.load("C:\\opencv\\build\\bin\\opencv_videoio_ffmpeg453_64.dll");

        videoCapture = new VideoCapture(CAMERA_ID, CV_CAP_DSHOW);

        if (videoCapture.isOpened()) {
            Mat m = new Mat();
            videoCapture.read(m);

            int fourcc = VideoWriter.fourcc('H', '2', '6', '4');
            // int fourcc = VideoWriter.fourcc('x','2','6','4');             // Have tried, did not work.
            // int fourcc = (int) videoCapture.get(Videoio.CAP_PROP_FOURCC); // Have tried, did not work.
            // int fourcc = VideoWriter.fourcc('m','j','p','g');             // Have tried, did not work.
            // int fourcc = VideoWriter.fourcc('D','I','V','X');             // Have tried, did not work.

            double fps = videoCapture.get(Videoio.CAP_PROP_FPS);
            Size s = new Size((int) videoCapture.get(Videoio.CAP_PROP_FRAME_WIDTH),
                              (int) videoCapture.get(Videoio.CAP_PROP_FRAME_HEIGHT));

            videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.h264", fourcc, fps, s, true);
            // Have tried, did not work:
            // videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.avi", fourcc, fps, s, true);
            // videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.Mjpeg", fourcc, fps, s, true);
            // videoWriter = new VideoWriter("C:\\Users\\oconnor\\Documents\\bbbbbbbbbbbbb_1000.mjpeg", fourcc, fps, s, true);

            while (videoCapture.read(m)) {
                videoWriter.write(m);
                // Have tried, did not work:
                // int i = 0;
                // Mat clonedMatrix = m.clone();
                // Imgproc.putText(clonedMatrix, ("frame" + i), new Point(100, 100), 1, 2, new Scalar(200, 0, 0), 3);
                // videoWriter.write(clonedMatrix);
                // i++;
            }
        }

        videoCapture.release();
        videoWriter.release();
    }
}
When I ran the above code there was no error at all, and the file bbbbbbbbbbbbb_1000.h264 was created, but neither VLC Media Player nor Windows Media Player could play the video.
I am using Windows 10. Below is the OpenCV 4.5.3 build information; I did not build the library from source myself.
Video I/O:
DC1394: NO
FFMPEG: YES (prebuilt binaries)
avcodec: YES (58.134.100)
avformat: YES (58.76.100)
avutil: YES (56.70.100)
swscale: YES (5.9.100)
avresample: YES (4.0.0)
GStreamer: NO
DirectShow: YES
Media Foundation: YES
DXVA: NO
How can I save the video to a file using OpenCV and Java? It can be MJPEG or H.264 or anything else, as long as I end up with a file I can play back.
According to getBuildInformation(), your OpenCV was built with FFMPEG, so that's good.
The issue is the file extension you tried to use.
The file extension .h264 does not represent an actual container format. .h264 is the extension given to a naked H.264 stream that sits in a file without a proper container structure.
You need to give your file the extension .mp4 or that of some other suitable container format; Matroska (.mkv) and MPEG transport streams (.ts) would also work.
Further, H264 may not be a valid "fourcc". avc1 is a valid fourcc.
OpenCV has builtin support for the AVI container (.avi) and an MJPEG video codec (MJPG). That will always work. More containers and codecs may be available if OpenCV was built with ffmpeg or if it can use the media APIs of the operating system.
Here's some Python, but the API usage is the same:
import numpy as np
import cv2 as cv

# fourcc = cv.VideoWriter.fourcc(*"H264")  # might not be supported
fourcc = cv.VideoWriter.fourcc(*"avc1")  # that's the proper fourcc by standard
# calling with (*"avc1") is equivalent to calling with ('a', 'v', 'c', '1')

(width, height) = (640, 360)

vid = cv.VideoWriter("foo.mp4", fourcc=fourcc, fps=16, frameSize=(width, height), isColor=True)
assert vid.isOpened()

frame = np.zeros((height, width, 3), np.uint8)

for k in range(256):
    frame[:,:] = (0, k, 255)  # BGR order
    vid.write(frame)

vid.release()  # finalize the file (write headers and whatnot)
This video will be 256 frames, 16 fps, 16 seconds, and fade red to yellow.
It will also be MP4 and H.264.
I just got it to work creating an AVI file. To create an AVI file, the fourcc must be as follows:
int fourcc = VideoWriter.fourcc('M','J','P','G');
The file name must have .avi as the extension.
On some operating systems, videoCapture.get(Videoio.CAP_PROP_FPS) might return 0.0. Set a default frame rate if that happens; I've set mine to 25.
Thanks to @Christoph Rackwitz and his Python answer above; to save the video as H.264, the fourcc can be as follows:
int fourcc = VideoWriter.fourcc('a', 'v', 'c', '1');
And the file extension can be .ts.
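Putting those pieces together, here is a minimal Java sketch of the whole capture-and-record loop. The camera index, output file name and the rough 10-second cap are placeholder choices of mine, not from the answers above:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.videoio.VideoCapture;
import org.opencv.videoio.VideoWriter;
import org.opencv.videoio.Videoio;

public final class RecordToAvi {
    public static void main(String[] args) {
        // Assumes the OpenCV DLLs are on java.library.path;
        // System.load() with full paths works too, as in the question.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        VideoCapture capture = new VideoCapture(0, Videoio.CAP_DSHOW);
        if (!capture.isOpened()) return;

        double fps = capture.get(Videoio.CAP_PROP_FPS);
        if (fps == 0.0) fps = 25; // some OSes report 0.0, so fall back to a default

        Size size = new Size(capture.get(Videoio.CAP_PROP_FRAME_WIDTH),
                             capture.get(Videoio.CAP_PROP_FRAME_HEIGHT));

        // MJPG frames in an .avi container always work with OpenCV's builtin backend.
        // For H.264, use fourcc('a','v','c','1') and an "out.ts" file instead
        // (that path needs the ffmpeg DLL).
        int fourcc = VideoWriter.fourcc('M', 'J', 'P', 'G');
        VideoWriter writer = new VideoWriter("out.avi", fourcc, fps, size, true);

        Mat frame = new Mat();
        for (int i = 0; i < (int) (fps * 10) && capture.read(frame); i++) {
            writer.write(frame); // record roughly 10 seconds of video
        }

        writer.release(); // finalize the file (write the container index)
        capture.release();
    }
}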
Related
I want to save the depth information from ARCore to storage.
Here is the example from the developer guide:
public int getMillimetersDepth(Image depthImage, int x, int y) {
    // The depth image has a single plane, which stores depth for each
    // pixel as 16-bit unsigned integers.
    Image.Plane plane = depthImage.getPlanes()[0];
    int byteIndex = x * plane.getPixelStride() + y * plane.getRowStride();
    ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
    short depthSample = buffer.getShort(byteIndex);
    return depthSample;
}
So I want to save this ByteBuffer into a local file, but my output txt file is not readable. How can I fix this?
Here is what I have:
Image depthImage = frame.acquireDepthImage();
Image.Plane plane = depthImage.getPlanes()[0];
int format = depthImage.getFormat();

ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
byte[] data = new byte[buffer.remaining()];
buffer.get(data);

File mypath = new File(super.getExternalFilesDir("depDir"), Long.toString(lastPointCloudTimestamp) + ".txt");
FileChannel fc = new FileOutputStream(mypath).getChannel();
fc.write(buffer);
fc.close();
depthImage.close();
I tried to decode them with
String s = new String(data, "UTF-8");
System.out.println(StandardCharsets.UTF_8.decode(buffer).toString());
but the output is still strange, like this:
.03579:<=>#ABCDFGHJKMNOPRQSUWY]_b
In order to persist the depth data provided by the ARCore session, you need to write bytes into your local file. A Buffer object is a container: it contains a finite sequence of elements of a specific primitive type (here bytes, for a ByteBuffer). So what you need to write into your file is your data variable, which holds the information previously stored in the buffer (after buffer.get(data)).
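For example, a minimal sketch of the corrected write, reusing the variables from the code above (the file will contain raw DEPTH16 bytes, so the .txt extension is only a label; the content will never be human-readable text):

byte[] data = new byte[buffer.remaining()];
buffer.get(data); // drain the plane's buffer into the array

try (FileOutputStream fos = new FileOutputStream(mypath)) {
    fos.write(data); // write the drained bytes, not the now-empty buffer
}
depthImage.close();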
It works fine for me; I managed to draw the provided depth map with Python code (but the idea behind the following code can easily be adapted to Java):
import numpy as np
import cv2 as cv

depthData = np.fromfile('depthdata.txt', dtype=np.uint16)

H = 120
W = 160

def extractDepth(x):
    depthConfidence = (x >> 13) & 0x7
    if (depthConfidence > 6): return 0
    return x & 0x1FFF

depthMap = np.array([extractDepth(x) for x in depthData]).reshape(H, W)
depthMap = cv.rotate(depthMap, cv.ROTATE_90_CLOCKWISE)
For further details, read the information about the depth map format (DEPTH16) here: https://developer.android.com/reference/android/graphics/ImageFormat#DEPTH16
You must also be aware that the depth map resolution is set to 160x120 pixels and that it is oriented in landscape format.
Also make sure to surround your code with a try/catch block in case of an IOException.
I'm currently working on an application that plays back sound. I implemented playback for standard WAV files with the Java Sound API; no problems there, everything works fine. Now I want to add support for MP3 as well, but I'm having a strange problem: the playback gets distorted. I'm trying to figure out what I'm doing wrong; I would appreciate any leads in the right direction.
I'm using Mp3SPI (http://www.javazoom.net/mp3spi/documents.html) for playing back the MP3 files.
I have already taken a look at the output: I recorded a wav file of the output I get from the MP3, then compared the waveforms of the original and the recorded file. As it turns out, the recorded file contains a lot of samples that are 0, or very close to it. Longer tones get broken up, and the waveform keeps dropping to 0 and then jumping back to where the original waveform is.
I open the file like this:
private AudioInputStream mp3;
private AudioInputStream rawMp3;

private void openMP3(File file) {
    // open the audio input stream
    try {
        rawMp3 = AudioSystem.getAudioInputStream(file);
        AudioFormat baseFormat = rawMp3.getFormat();
        AudioFormat decodedFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                baseFormat.getSampleRate(),
                16,
                baseFormat.getChannels(),
                baseFormat.getChannels() * 2,
                baseFormat.getSampleRate(),
                false);
        mp3 = AudioSystem.getAudioInputStream(decodedFormat, rawMp3);
    } catch (UnsupportedAudioFileException | IOException ex) {
        Logger.getLogger(SoundFile.class.getName()).log(Level.SEVERE, null, ex);
    }
}
The part where I read the MP3 file:

byte[] data = new byte[length];

// read the data into the buffer, advancing the offset so earlier
// bytes are not overwritten on each pass
int totalRead = 0;
int nBytesRead = 0;
while (nBytesRead != -1 && totalRead < length) {
    nBytesRead = mp3.read(data, totalRead, length - totalRead);
    if (nBytesRead > 0) {
        totalRead += nBytesRead;
    }
}
I also convert the byte array to doubles; perhaps I'm doing something wrong here (I'm fairly new to bitwise operators, so maybe that's where the problem is):
double[][] frameBuffer = new double[2][1024]; // 2 channel stereo buffer
int nFramesRead = 0;
int byteIndex = 0;

// convert the data into doubles and write it to frameBuffer
for (int i = 0; i < length; ++i) {
    for (int c = 0; c < 2; ++c) {
        byte a = data[byteIndex++];
        byte b = data[byteIndex++];
        // a is the least significant byte; mask it to avoid sign extension.
        // b supplies the high byte (and the sign) via the shift.
        int val = (a & 0xFF) | (b << 8);
        frameBuffer[c][i] = (double) val / (double) Short.MAX_VALUE;
        nFramesRead++;
    }
}
The double array is then used later to play back the sound. When playing a wav file, I do exactly the same thing to the buffer, so I'm pretty sure it has to be something in the read process, not me sending faulty bytes to the output.
I would expect this to work out of the box with Mp3SPI, but somehow something breaks the audio along the way.
I am also open to trying other libraries to play back MP3 if you have any recommendations; just a decoder for the raw MP3 data would actually be enough.
As it turns out, the AudioFormat of the MP3 (input) and the AudioFormat of the output didn't match, which obviously resulted in distortion. With those matched up, playback is fine!
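For anyone hitting the same thing, here is a minimal sketch of what "matching" means, assuming playback goes through a javax.sound.sampled.SourceDataLine; the key point is that the line is opened with the same decoded PCM format that the decoded stream produces:

// 'mp3' is the decoded AudioInputStream from openMP3() above
AudioFormat format = mp3.getFormat(); // the decoded PCM format, not the raw MP3 format

DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
line.open(format); // open the output with exactly the format the decoder emits
line.start();

byte[] chunk = new byte[4096];
int n;
while ((n = mp3.read(chunk, 0, chunk.length)) != -1) {
    line.write(chunk, 0, n); // bytes pass straight through, no format mismatch
}
line.drain();
line.close();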
I want to convert pictures from RGB to grayscale using Java and OpenCV.
Images with all other extensions work correctly and I get the gray image, but if I use a .GIF image (not animated) it gives me this error:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cv::cvtColor
The Java code:
Mat scrImg = Highgui.imread(path);
Mat dstImg = new Mat(scrImg.rows(), scrImg.cols(), scrImg.type());
Imgproc.cvtColor(scrImg, dstImg, Imgproc.COLOR_RGB2GRAY);

private static BufferedImage Mat2BufferedImage(Mat matrix) {
    BufferedImage bimOut;
    int type;
    if (matrix.channels() == 1)
        type = BufferedImage.TYPE_BYTE_GRAY;
    else
        type = BufferedImage.TYPE_3BYTE_BGR;
    int dataLength = matrix.channels() * matrix.cols() * matrix.rows();
    byte[] buffer = new byte[dataLength];
    bimOut = new BufferedImage(matrix.cols(), matrix.rows(), type);
    matrix.get(0, 0, buffer);
    final byte[] bimPixels = ((DataBufferByte) bimOut.getRaster().getDataBuffer()).getData();
    System.arraycopy(buffer, 0, bimPixels, 0, buffer.length);
    return bimOut;
}
According to the official documentation:
Currently, the following file formats are supported:
Windows bitmaps - *.bmp, *.dib (always supported)
JPEG files - *.jpeg, *.jpg, *.jpe (see the Notes section)
JPEG 2000 files - *.jp2 (see the Notes section)
Portable Network Graphics - *.png (see the Notes section)
WebP - *.webp (see the Notes section)
Portable image format - *.pbm, *.pgm, *.ppm *.pxm, *.pnm (always supported)
Sun rasters - *.sr, *.ras (always supported)
TIFF files - *.tiff, *.tif (see the Notes section)
OpenEXR Image files - *.exr (see the Notes section)
Radiance HDR - *.hdr, *.pic (always supported)
Raster and Vector geospatial data supported by Gdal (see the Notes section)
Apparently support is not included because GIF is a proprietary format. As a result, imread silently returns an empty Mat for a GIF, which is why cvtColor's assertion (scn == 3 || scn == 4) fails: the empty Mat has no channels.
http://answers.opencv.org/question/72134/cv2imread-cannot-read-gifs/
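As a workaround, a sketch of one option: read the GIF with javax.imageio (which does support GIF) and copy the pixels into a Mat yourself before calling cvtColor. The gifToMat helper name and the TYPE_3BYTE_BGR conversion are my own choices, not part of OpenCV:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public final class GifLoader {
    // Load a GIF via ImageIO and hand it to OpenCV as an 8-bit BGR Mat.
    static Mat gifToMat(String path) throws IOException {
        BufferedImage gif = ImageIO.read(new File(path));

        // Redraw into TYPE_3BYTE_BGR so the backing byte array matches an 8UC3 Mat.
        BufferedImage bgr = new BufferedImage(gif.getWidth(), gif.getHeight(),
                BufferedImage.TYPE_3BYTE_BGR);
        bgr.getGraphics().drawImage(gif, 0, 0, null);

        byte[] pixels = ((DataBufferByte) bgr.getRaster().getDataBuffer()).getData();
        Mat mat = new Mat(bgr.getHeight(), bgr.getWidth(), CvType.CV_8UC3);
        mat.put(0, 0, pixels);
        return mat;
    }
}

The returned Mat is BGR (OpenCV's usual channel order), so the grayscale conversion becomes Imgproc.cvtColor(mat, dstImg, Imgproc.COLOR_BGR2GRAY).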
I'm working on macOS, using Tesseract to do the OCR.
I installed Tesseract with Homebrew.
Tesseract works well from the command line, and the Java program works well with the basic Tesseract.getInstance() example.
But since I want to get the confidence value of each character, I switched to using TessAPI1 and got the error below:
Exception in thread "main" java.lang.UnsatisfiedLinkError: Error looking up function 'TessResultRendererAddError': dlsym(0x7f9df9c38c20, TessResultRendererAddError): symbol not found
at com.sun.jna.Function.<init>(Function.java:208)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:536)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:513)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:499)
at com.sun.jna.Native.register(Native.java:1509)
at com.sun.jna.Native.register(Native.java:1396)
at com.sun.jna.Native.register(Native.java:1156)
at net.sourceforge.tess4j.TessAPI1.<clinit>(Unknown Source)
at TesseractWrapper.doOCR(TesseractWrapper.java:71)
at OCR.main(OCR.java:6)
The error occurred at
handle = TessAPI1.TessBaseAPICreate();
The code looks like this:
TessAPI1.TessBaseAPI handle;
handle = TessAPI1.TessBaseAPICreate();

new File(this.path);
BufferedImage image = ImageIO.read(new FileInputStream(tiff)); // requires the jai-imageio lib to read TIFF

ByteBuffer buf = ImageIOHelper.convertImageData(image);
int bpp = image.getColorModel().getPixelSize();
int bytespp = bpp / 8;
int bytespl = (int) Math.ceil(image.getWidth() * bpp / 8.0);

TessAPI1.TessBaseAPIInit3(handle, "tessdata", lang);
TessAPI1.TessBaseAPISetPageSegMode(handle, TessAPI1.TessPageSegMode.PSM_AUTO);
TessAPI1.TessBaseAPISetImage(handle, buf, image.getWidth(), image.getHeight(), bytespp, bytespl);
TessAPI1.TessBaseAPIRecognize(handle, null);

TessAPI1.TessResultIterator ri = TessAPI1.TessBaseAPIGetIterator(handle);
TessAPI1.TessPageIterator pi = TessAPI1.TessResultIteratorGetPageIterator(ri);
TessAPI1.TessPageIteratorBegin(pi);
I found this code in another question, and I guess what I need is to get an iterator so that I can retrieve each character together with its confidence value, one by one.
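For context, here is a sketch of the symbol-by-symbol loop I'm aiming for, continuing from the iterator code above. It is based on Tess4J's TessAPI1 bindings as I understand them (Pointer is com.sun.jna.Pointer), so treat the exact signatures as unverified until the UnsatisfiedLinkError itself, presumably a tess4j/libtesseract version mismatch, is resolved:

int level = TessAPI1.TessPageIteratorLevel.RIL_SYMBOL;
do {
    // Get the recognized symbol and its confidence at the character level.
    Pointer symbolPtr = TessAPI1.TessResultIteratorGetUTF8Text(ri, level);
    String symbol = symbolPtr.getString(0);
    float confidence = TessAPI1.TessResultIteratorConfidence(ri, level);
    System.out.println(symbol + " -> " + confidence);
} while (TessAPI1.TessPageIteratorNext(pi, level) == TessAPI1.TRUE);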
This question already has answers here:
Encode a series of Images into a Video
(4 answers)
Closed 8 years ago.
I would like to encode video from a sequence of images using Java only in my current Android project, i.e. without any external tools such as the NDK.
Also, are there any Java libraries available for encoding video from a sequence of images?
You can use a pure Java library called JCodec (http://jcodec.org). It contains a primitive yet working H.264 (AVC) encoder and a fully functioning MP4 (ISO BMF) muxer.
Here's a corrected code sample that uses the low-level API:
public void imageToMP4(BufferedImage bi) {
    // A transform to convert RGB to YUV colorspace
    RgbToYuv420 transform = new RgbToYuv420(0, 0);
    // A JCodec native picture that will hold the source image in YUV colorspace
    Picture toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
    // Perform conversion
    transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
    // Create MP4 muxer; 'sink' is a SeekableByteChannel opened on the output file,
    // e.g. NIOUtils.writableFileChannel(new File("out.mp4"))
    MP4Muxer muxer = new MP4Muxer(sink, Brand.MP4);
    // Add a video track
    CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
    // Create H.264 encoder
    H264Encoder encoder = new H264Encoder();
    // Allocate a buffer that will hold an encoded frame
    ByteBuffer _out = ByteBuffer.allocate(bi.getWidth() * bi.getHeight() * 6);
    // Allocate storage for SPS/PPS; they need to be stored separately in a special place of the MP4 file
    List<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
    List<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();
    // Encode the image into an H.264 frame; the result is stored in the '_out' buffer
    ByteBuffer result = encoder.encodeFrame(_out, toEncode);
    // Based on the frame above, form a correct MP4 packet
    H264Utils.encodeMOVPacket(result, spsList, ppsList);
    // Add the packet to the video track
    outTrack.addFrame(new MP4Packet(result, 0, 25, 1, 0, true, null, 0, 0));
    // Push the saved SPS/PPS to a special storage in the MP4
    outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
    // Write the MP4 header and finalize the recording
    muxer.writeHeader();
}
You can download the JCodec library from the project web site or via Maven; for the latter, add the snippet below to your pom.xml:
<dependency>
    <groupId>org.jcodec</groupId>
    <artifactId>jcodec</artifactId>
    <version>0.1.3</version>
</dependency>
[UPDATE 1] Android users can use something like the code below to convert an Android Bitmap object to a JCodec native Picture:
public static Picture fromBitmap(Bitmap src) {
    Picture dst = Picture.create(src.getWidth(), src.getHeight(), ColorSpace.RGB);
    fromBitmap(src, dst);
    return dst;
}

public static void fromBitmap(Bitmap src, Picture dst) {
    int[] dstData = dst.getPlaneData(0);
    int[] packed = new int[src.getWidth() * src.getHeight()];
    src.getPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
    for (int i = 0, srcOff = 0, dstOff = 0; i < src.getHeight(); i++) {
        for (int j = 0; j < src.getWidth(); j++, srcOff++, dstOff += 3) {
            // Unpack the ARGB int into separate R, G, B samples
            int rgb = packed[srcOff];
            dstData[dstOff] = (rgb >> 16) & 0xff;     // R
            dstData[dstOff + 1] = (rgb >> 8) & 0xff;  // G
            dstData[dstOff + 2] = rgb & 0xff;         // B
        }
    }
}
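A possible per-frame usage sketch (my own glue, not from the answer): convert each Bitmap with the helper above, then push it through the same RGB-to-YUV transform and encoder as in imageToMP4. This reuses transform, encoder, _out, spsList, ppsList and outTrack from that method, with frameNo counting frames from 0:

Picture rgb = fromBitmap(bitmap); // 'bitmap' is the current Android Bitmap
Picture yuv = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
transform.transform(rgb, yuv); // same RgbToYuv420 transform as before

ByteBuffer encoded = encoder.encodeFrame(_out, yuv);
H264Utils.encodeMOVPacket(encoded, spsList, ppsList);
outTrack.addFrame(new MP4Packet(encoded, frameNo, 25, 1, frameNo, true, null, frameNo, 0));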