How to add a watermark to a video using mp4parser in Android? - java

I have successfully converted a recorded video into an "out.h264" file and the audio into an ".aac" file using mp4parser. Now I want to put a watermark image on my recorded video. Is it possible to add a watermark to a video with mp4parser? I have also checked GPUImage, but there is no way to apply an effect to a video; its examples only show effects for images. So my question is: how can I add a watermark to a video?
Below is my code for the audio and video:
File sdCard = Environment.getExternalStorageDirectory();
IsoFile isoFile = new IsoFile(videosPath);
TrackBox trackBox = (TrackBox) Path.getPath(isoFile, "/moov/trak/mdia/minf/stbl/stsd/avc1/../../../../../");
SampleList sl = new SampleList(trackBox);
File out = new File(sdCard + "/out.h264");
if (out.exists()) {
out.delete();
}
FileChannel fc = new RandomAccessFile(out, "rw").getChannel();
ByteBuffer separator = ByteBuffer.wrap(new byte[] { 0, 0, 0, 1 });
fc.write((ByteBuffer) separator.rewind());
// Write SPS
fc.write(ByteBuffer.wrap(((AvcConfigurationBox) Path.getPath(trackBox, "mdia/minf/stbl/stsd/avc1/avcC")).getSequenceParameterSets().get(0)));
// Warning:
// There might be more than one SPS (I've never seen that but it is possible)
fc.write((ByteBuffer) separator.rewind());
// Write PPS
fc.write(ByteBuffer.wrap(((AvcConfigurationBox) Path.getPath(trackBox, "mdia/minf/stbl/stsd/avc1/avcC")).getPictureParameterSets().get(0)));
// Warning:
// There might be more than one PPS (I've never seen that but it is possible)
int lengthSize = ((AvcConfigurationBox) Path.getPath(trackBox, "mdia/minf/stbl/stsd/avc1/avcC")).getLengthSizeMinusOne() + 1;
for (Sample sample : sl) {
ByteBuffer bb = sample.asByteBuffer();
while (bb.remaining() > 0) {
int length = (int) IsoTypeReaderVariable.read(bb, lengthSize);
fc.write((ByteBuffer) separator.rewind());
fc.write((ByteBuffer) bb.slice().limit(length));
bb.position(bb.position() + length);
}
}
fc.close();
Log.e(TAG, "Converted Path: " + out.getAbsolutePath() + " Start Time Convert: " + new Date());
H264TrackImpl h264Track = new H264TrackImpl(new FileDataSourceImpl(out.getAbsoluteFile()));
AACTrackImpl aacTrack = new AACTrackImpl(new FileDataSourceImpl(audioPath));
CroppedTrack aacTrackShort = new CroppedTrack(aacTrack, 1, aacTrack.getSamples().size());
// MP3TrackImpl accTrackImpl = new MP3TrackImpl(new FileDataSourceImpl(audioPath));
Movie movie = new Movie();
movie.addTrack(h264Track);
movie.addTrack(aacTrackShort);
Container mp4file = new DefaultMp4Builder().build(movie);
File output = new File(sdCard + "/output_KanAK.mp4");
if (output.exists()) {
output.delete();
}
@SuppressWarnings("resource")
FileChannel fc1 = new RandomAccessFile(output, "rw").getChannel();
mp4file.writeContainer(fc1);
fc1.close();
Bitmap largeIcon = BitmapFactory.decodeResource(getResources(), R.drawable.velfee);
gpuImage.saveToPictures(largeIcon, output, 100, new OnPictureSavedListener() {
@Override
public void onPictureSaved(Uri uri) {
// TODO Auto-generated method stub
Log.e(TAG, "Picture save Uri");
GPUImageDifferenceBlendFilter filter = new GPUImageDifferenceBlendFilter();
filter.setBitmap(BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher));
}
});
Any help would be appreciated!! Thanks in advance.

You can use a String as a watermark by following the subtitle example on the GitHub page. Set the limits to your own values, but of course you won't be able to use an image.
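For reference, a rough sketch based on mp4parser's subtitle example (the text and timings here are made up; movie is the same Movie the video and audio tracks are added to):
TextTrackImpl subtitles = new TextTrackImpl();
subtitles.getTrackMetaData().setLanguage("eng");
// show the watermark text from 0 ms to 10000 ms
subtitles.getSubs().add(new TextTrackImpl.Line(0, 10000, "Recorded with MyApp"));
movie.addTrack(subtitles);
Container mp4WithText = new DefaultMp4Builder().build(movie);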

Related

android: how to get a screenshot for the whole screen programmtically

Here is code to take a screenshot, but the problem is that it only captures the application, not the whole screen (the back and home buttons and the notification bar).
Is there any way I can take a screenshot of the whole screen, not only the application?
private void takeScreenshot() {
Date now = new Date();
String timestamp = android.text.format.DateFormat.format("yyyy-MM-dd_hh:mm:ss", now).toString();
try {
// image naming and path; appends the name you choose for the file to the sd card path
String mPath = Environment.getExternalStorageDirectory().toString() + "/" + timestamp + ".jpg";
// create bitmap screen capture
View v1 = getWindow().getDecorView().getRootView();
v1.setDrawingCacheEnabled(true);
Bitmap bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
File imageFile = new File(mPath);
FileOutputStream outputStream = new FileOutputStream(imageFile);
int quality = 100;
bitmap.compress(Bitmap.CompressFormat.JPEG, quality, outputStream);
outputStream.flush();
outputStream.close();
// openScreenshot(imageFile);
} catch (Throwable e) {
// Several errors may occur with file handling
e.printStackTrace();
}
}

ML Kit Barcode scanning: Invalid image data size

I would like to detect a barcode within a captured image. I capture an image using android's camera2. Following this, the image's metadata is retrieved and the image is saved to the device. The metadata is all passed along to the next activity, which is where the application attempts to detect a barcode.
This next activity creates a byte[] from the File saved previously. Next, the relevant FirebaseVision objects are created using the data passed with the intent. Finally, the application attempts to call the detectInImage() method, where an error is thrown:
"java.lang.IllegalArgumentException: Invalid image data size."
I suspect this is from the captured image being too large; however, I cannot seem to figure out how to capture a smaller image, and I also cannot find anything in the reference documentation regarding the maximum size allowed. Information regarding this error and how to solve it would be very much appreciated. Below is what I believe to be the relevant code.
private final ImageReader.OnImageAvailableListener onImageAvailableListener
= new ImageReader.OnImageAvailableListener() {
#Override
public void onImageAvailable(ImageReader imageReader) {
try{
// Semaphore ensures data is recorded before starting next activity
storeData.acquire();
Image resultImg = imageReader.acquireNextImage(); // Image from camera
imgWidth = resultImg.getWidth();
imgHeight = resultImg.getHeight();
ByteBuffer buffer = resultImg.getPlanes()[0].getBuffer();
data = new byte[buffer.remaining()]; // Byte array with the images data
buffer.get(data);
String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
// Note: mediaFile directs to Pictures/"ThisProject" folder
File media = new File(mediaFile.getPath() +
File.separator + "IMG_" + timeStamp + ".jpg");
// Saving the image
FileOutputStream fos = null;
try {
fos = new FileOutputStream(media);
fos.write(data);
uri = Uri.fromFile(media);
} catch (IOException e) {
Log.e(TAG, e.getMessage());
} finally {
if (fos != null) {
try {
fos.close();
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
}
}
resultImg.close();
} catch (InterruptedException e) {
Log.e(TAG, e.getMessage());
}
storeData.release();
}
};
This essentially retrieves the image height & width, then writes it to a file.
The data sent to the next activity consists of the: Image width, Image height, Image rotation, and the Uri directing to the file.
Using this, I try to detect a barcode using Firebase ML Kit:
// uri is the uri referencing the saved image
File f = new File(uri.getPath());
data = new byte[(int) f.length()];
try{
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(f));
DataInputStream dis = new DataInputStream(bis);
dis.readFully(data);
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
FirebaseVisionBarcodeDetectorOptions options = new FirebaseVisionBarcodeDetectorOptions.Builder().setBarcodeFormats(
FirebaseVisionBarcode.FORMAT_QR_CODE,
FirebaseVisionBarcode.FORMAT_DATA_MATRIX
).build();
FirebaseVisionBarcodeDetector detector = FirebaseVision.getInstance().getVisionBarcodeDetector(options);
FirebaseVisionImage image;
int rotationResult;
switch (imgRotation) {
case 0: {
rotationResult = FirebaseVisionImageMetadata.ROTATION_0;
break;
}
case 90: {
rotationResult = FirebaseVisionImageMetadata.ROTATION_90;
break;
}
case 180: {
rotationResult = FirebaseVisionImageMetadata.ROTATION_180;
break;
}
case 270: {
rotationResult = FirebaseVisionImageMetadata.ROTATION_270;
break;
}
default: {
rotationResult = FirebaseVisionImageMetadata.ROTATION_0;
break;
}
}
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
.setWidth(imgWidth)
.setHeight(imgHeight)
.setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
.setRotation(rotationResult)
.build();
image = FirebaseVisionImage.fromByteArray(data, metadata);
Task<List<FirebaseVisionBarcode>> result = detector.detectInImage(image);
A few things:
Your image format should not be NV21 if you use camera2. See here for all camera2-supported image formats:
https://developer.android.com/reference/android/media/Image#getFormat()
Your byte[] is not NV21, but you specified IMAGE_FORMAT_NV21, which leads to the error.
The most intuitive integration with camera2 is like below:
Specify the JPEG format when you instantiate the ImageReader.
onImageAvailable will give you back an android.media.Image, and you can directly use FirebaseVisionImage.fromMediaImage(...) to create a FirebaseVisionImage. (You can find how to compute the rotation info in the official doc here.) A rough sketch of this approach follows below.
If you must use two Activities, then you need to work around the fact that android.media.Image is not Parcelable. I'd suggest you convert it to a Bitmap first, which is Parcelable, and set it directly as an Intent extra. (Up to you; just thinking from the end user's perspective, it's uncommon to see a barcode photo saved to the image gallery, so you might want to consider skipping the step of saving it to a file.) Later, in your 2nd Activity, you can use FirebaseVisionImage.fromBitmap(...).
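Here is a minimal sketch of that ImageReader approach (not the poster's code; width, height, the rotation constant, backgroundHandler, and the detector built above are placeholders/assumptions):
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2);
reader.setOnImageAvailableListener(r -> {
    Image mediaImage = r.acquireNextImage();
    // Wrap the media Image directly; no manual byte[] or NV21 metadata needed
    FirebaseVisionImage visionImage =
            FirebaseVisionImage.fromMediaImage(mediaImage, FirebaseVisionImageMetadata.ROTATION_0);
    detector.detectInImage(visionImage)
            .addOnSuccessListener(barcodes -> { /* handle detected barcodes */ })
            .addOnFailureListener(e -> Log.e(TAG, "Detection failed", e))
            .addOnCompleteListener(task -> mediaImage.close()); // release the Image once detection finishes
}, backgroundHandler);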

From getExternalStorageDirectory to internal storage

Again I need a little help from you. I have this code for a simple photo app, but it saves the edited image to the SD card, and I want to change it to save the image to the phone's internal memory.
private File captureImage() {
// TODO Auto-generated method stub
OutputStream output;
Calendar cal = Calendar.getInstance();
Bitmap bitmap = Bitmap.createBitmap(ll1.getWidth(), ll1.getHeight(),
Config.ARGB_8888);
/*
* bitmap = ThumbnailUtils.extractThumbnail(bitmap, ll1.getWidth(),
* ll1.getHeight());
*/
Canvas b = new Canvas(bitmap);
ll1.draw(b);
// Find the SD Card path
File filepath = Environment.getExternalStorageDirectory();
// Create a new folder in SD Card
File dir = new File(filepath.getAbsolutePath() + "/background_eraser/");
dir.mkdirs();
mImagename = "image" + cal.getTimeInMillis() + ".png";
// Create a name for the saved image
file = new File(dir, mImagename);
// Show a toast message on successful save
Toast.makeText(SelectedImgActivity.this, "Image Saved to SD Card",
Toast.LENGTH_SHORT).show();
try {
output = new FileOutputStream(file);
// Compress into png format image from 0% - 100%
bitmap.compress(Bitmap.CompressFormat.PNG, 100, output);
output.flush();
output.close();
}
catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return file;
}
Any suggestions on how to do this? I think I only need to change Environment.getExternalStorageDirectory to something else, but what?
Thank you!
Edited:
I changed this line to File filepath = Environment.getDataDirectory(); and I think it works. But this creates the new folder in the root folder... I want it in Pictures... How do I achieve this?
Edited 2:
Now I have edited the code to this:
private File captureImage() {
// TODO Auto-generated method stub
OutputStream output;
Calendar cal = Calendar.getInstance();
Bitmap bitmap = Bitmap.createBitmap(ll1.getWidth(), ll1.getHeight(),
Config.ARGB_8888);
/*
* bitmap = ThumbnailUtils.extractThumbnail(bitmap, ll1.getWidth(),
* ll1.getHeight());
*/
Canvas b = new Canvas(bitmap);
ll1.draw(b);
// Find the SD Card path
File filepath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);
// File filepath = Environment.getDataDirectory(Environment.DIRECTORY_PICTURES);
// Create a new folder in SD Card
File dir = new File(filepath.getAbsolutePath() + "/Background Remover/");
dir.mkdirs();
mImagename = "image" + cal.getTimeInMillis() + ".png";
// Create a name for the saved image
file = new File(dir, mImagename);
// Show a toast message on successful save
Toast.makeText(SelectedImgActivity.this, "Image Saved",
Toast.LENGTH_SHORT).show();
try {
output = new FileOutputStream(file);
// Compress into png format image from 0% - 100%
bitmap.compress(Bitmap.CompressFormat.PNG, 100, output);
output.flush();
output.close();
}
catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return file;
}
Everything works fine, except for the Toast that shows...
Replace:
File dir = new File(filepath.getAbsolutePath() + "/background_eraser/");
With:
File dir = new File(context.getFilesDir(), "background_eraser");
To read a file saved there back in, you can use:
FileInputStream fis = context.openFileInput(name);
getFilesDir() (added in API level 1) returns the absolute path to the directory on the filesystem where files created with openFileOutput(String, int) are stored.
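Here is a rough sketch of the whole save step under those assumptions (context and bitmap are whatever you already have in the activity; the method name and file name are made up):
private File saveToInternalStorage(Context context, Bitmap bitmap) throws IOException {
    String name = "image" + System.currentTimeMillis() + ".png";
    // openFileOutput writes into the app's private files directory (getFilesDir())
    try (FileOutputStream out = context.openFileOutput(name, Context.MODE_PRIVATE)) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
    }
    return new File(context.getFilesDir(), name); // the same file as a File object
}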

How do you access an attachment stored as MIME Part?

It seems to me there are two ways to store an attachment in a NotesDocument.
Either as a RichTextField or as a "MIME Part".
If they are stored as RichText you can do stuff like:
document.getAttachment(fileName)
That does not seem to work for an attachment stored as a MIME Part. See screenshot
I have thousands of documents like this in the backend. This is NOT a UI issue where I need to use the file Download control of XPages.
Each document has only 1 attachment: an image, a JPG file. I have 3 databases for different sizes: Original, Large, and Small. Originally I created everything from documents that had the attachment stored as RichText, but my code saved them as MIME Part. That's just what it did; it wasn't really my intent.
What happened is I lost some of my "Small" pictures, so I need to rebuild them from the Original pictures that are now stored as MIME Part. So my ultimate goal is to get the image from the NotesDocument into a Java BufferedImage.
I think I have the code to do what I want, but I just "simply" can't figure out how to get the attachment off the document and then into a Java BufferedImage.
Below is some rough code I'm working with. My goal is to pass in the document with the original picture. I already have the fileName because I stored that out in metaData. But I don't know how to get that from the document itself. And I'm passing in "Small" to create the Small image.
I think I just don't know how to work with attachments stored in this manner.
Any ideas/advice would be appreciated! Thanks!!!
public Document processImage(Document inputDoc, String fileName, String size) throws IOException {
// fileName is the name of the attachment on the document
// The goal is to return a NEW BLANK document with the image on it
// The Calling code can then deal with keys and meta data.
// size is "Original", "Large" or "Small"
System.out.println("Processing Image, Size = " + size);
//System.out.println("Filename = " + fileName);
boolean result = false;
Session session = Factory.getSession();
Database db = session.getCurrentDatabase();
session.setConvertMime(true);
BufferedImage img;
BufferedImage convertedImage = null; // the output image
EmbeddedObject image = null;
InputStream imageStream = null;
int currentSize = 0;
int newWidth = 0;
String currentName = "";
try {
// Get the Embedded Object
image = inputDoc.getAttachment(fileName);
System.out.println("Input Form : " + inputDoc.getItemValueString("form"));
if (null == image) {
System.out.println("ALERT - IMAGE IS NULL");
}
currentSize = image.getFileSize();
currentName = image.getName();
// Get a stream of the image
imageStream = image.getInputStream();
img = ImageIO.read(imageStream); // this is the buffered image we'll work with
imageStream.close();
Document newDoc = db.createDocument();
// Remember this is a BLANK document. The calling code needs to set the form
if ("original".equalsIgnoreCase(size)) {
this.attachImage(newDoc, img, fileName, "JPG");
return newDoc;
}
if ("Large".equalsIgnoreCase(size)) {
// Now we need to convert the LARGE image
// We're assuming FIXED HEIGHT of 600px
newWidth = this.getNewWidth(img.getHeight(), img.getWidth(), 600);
convertedImage = this.getScaledInstance(img, newWidth, 600, false);
this.attachImage(newDoc, convertedImage, fileName, "JPG");
return newDoc;
}
if ("Small".equalsIgnoreCase(size)) {
System.out.println("converting Small");
newWidth = this.getNewWidth(img.getHeight(), img.getWidth(), 240);
convertedImage = this.getScaledInstance(img, newWidth, 240, false);
this.attachImage(newDoc, convertedImage, fileName, "JPG");
System.out.println("End Converting Small");
return newDoc;
}
return newDoc;
} catch (Exception e) {
// HANDLE EXCEPTION HERE
// SAMPLE WRITE TO LOG.NSF
System.out.println("****************");
System.out.println("EXCEPTION IN processImage()");
System.out.println("****************");
System.out.println("picName: " + fileName);
e.printStackTrace();
return null;
} finally {
if (null != imageStream) {
imageStream.close();
}
if (null != image) {
LibraryUtils.incinerate(image);
}
}
}
I believe it will be some variation of the following code snippet. You might have to change which MIMEEntity has the content; it might be in the parent or another child entity, depending on the document.
Stream stream = session.createStream();
doc.getMIMEEntity().getFirstChildEntity().getContentAsBytes(stream);
ByteArrayInputStream bais = new ByteArrayInputStream(stream.read());
return ImageIO.read(bais);
EDIT:
session.setConvertMime(false);
Stream stream = session.createStream();
Item itm = doc.getFirstItem("ParentEntity");
MIMEEntity me = itm.getMIMEEntity();
MIMEEntity childEntity = me.getFirstChildEntity();
childEntity.getContentAsBytes(stream);
ByteArrayOutputStream bo = new ByteArrayOutputStream();
stream.getContents(bo);
byte[] mybytearray = bo.toByteArray();
ByteArrayInputStream bais = new ByteArrayInputStream(mybytearray);
return ImageIO.read(bais);
David, have a look at DominoDocument: http://public.dhe.ibm.com/software/dw/lotus/Domino-Designer/JavaDocs/XPagesExtAPI/8.5.2/com/ibm/xsp/model/domino/wrapped/DominoDocument.html
There you can wrap every Notes document.
The DominoDocument has a DominoDocument.AttachmentValueHolder where you can access the attachments.
I explained it at Engage; it is very powerful.
http://www.slideshare.net/flinden68/engage-use-notes-objects-in-memory-and-other-useful-java-tips-for-x-pages-development

create mp4 from pictures and mp3 java using xuggler

I'm trying to combine a list of pictures into an mp4 movie and add an mp3 file.
For the length of the movie, the user can either take the length of the mp3 file or choose it manually.
If the user chooses it manually (length != mp3 file length), the mp3 file should be cut or looped.
Right now it works with the pictures but without sound :(
private void convertImageToVideo() {
IMediaWriter writer = ToolFactory.makeWriter(outputFilename);
long delay = videotime / PicPathList.size();
long milliseconds = 0;
//adds Pictures to the mp4 stream
for (int i = 0; i < PicPathList.size(); i++) {
BufferedImage bi;
try {
bi = ImageIO.read(new File(PicPathList.get(i)));
bi = Tools.prepareForEncoding(bi);
int width=bi.getWidth();
int height=bi.getHeight();
if(width%2==1){
width++;
}
if(height%2==1){
height++;
}
if (i == 0) {
writer.addVideoStream(0, 0, ID.CODEC_ID_H264, width, height);
}
//debug
// System.out.println(PicPathList.get(i) + ", bi:" + bi.getWidth() + "x"
// + bi.getHeight() + ", ms:" + milliseconds);
writer.encodeVideo(0, bi, milliseconds, TimeUnit.MILLISECONDS);
milliseconds += delay;
} catch (IOException e) {
e.printStackTrace();
System.out.println("Error");
}
}
writer.close();
// at this point I'm trying to combine the mp4 file generated above with the mp3 file
String inputVideoFilePath = outputFilename;
String inputAudioFilePath = this.musicFile.getAbsolutePath();
String outputVideoFilePath = "outputFilename";
IMediaWriter mWriter = ToolFactory.makeWriter(outputVideoFilePath);
IContainer containerVideo = IContainer.make();
IContainer containerAudio = IContainer.make();
// check files are readable
containerVideo.open(inputVideoFilePath, IContainer.Type.READ, null);
containerAudio.open(inputAudioFilePath, IContainer.Type.READ, null);
// read video file and create stream
IStreamCoder coderVideo = containerVideo.getStream(0).getStreamCoder();
IPacket packetvideo = IPacket.make();
int width = coderVideo.getWidth();
int height = coderVideo.getHeight();
// read audio file and create stream
IStreamCoder coderAudio = containerAudio.getStream(0).getStreamCoder();
IPacket packetaudio = IPacket.make();
mWriter.addAudioStream(1, 0,coderAudio.getCodecID(), coderAudio.getChannels(), coderAudio.getSampleRate());
mWriter.addVideoStream(0, 0, width, height);
while (containerVideo.readNextPacket(packetvideo) >= 0) {
containerAudio.readNextPacket(packetaudio);
// video packet
IVideoPicture picture = IVideoPicture.make(coderVideo.getPixelType(), width, height);
coderVideo.decodeVideo(picture, packetvideo, 0);
if (picture.isComplete())
mWriter.encodeVideo(0, picture);
// audio packet
IAudioSamples samples = IAudioSamples.make(512, coderAudio.getChannels(), IAudioSamples.Format.FMT_S32);
coderAudio.decodeAudio(samples, packetaudio, 0);
if (samples.isComplete())
mWriter.encodeAudio(1, samples);
}
coderAudio.close();
coderVideo.close();
containerAudio.close();
containerVideo.close();
mWriter.close();
}
I answered this question here, and it is a complete answer:
JAVA - Xuggler - Play video while combining an MP3 audio file and a MP4 movie
You may use another jar file to merge your video and audio. Please note this is not the right way to do it, but I didn't have any choice or time to dig into the Xuggler code.
I hope it works for you, too.
package MP4;
/**
*
* @author Pasban
*/
import com.coremedia.iso.boxes.Container;
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.Track;
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder;
import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
public class MuxMp4 {
public static void merge(String audio, String video, String output) throws IOException {
Movie countVideo = MovieCreator.build(video);
Movie countAudioEnglish = MovieCreator.build(audio);
Track audioTrackEnglish = countAudioEnglish.getTracks().get(0);
audioTrackEnglish.getTrackMetaData().setLanguage("eng");
countVideo.addTrack(audioTrackEnglish);
Container out = new DefaultMp4Builder().build(countVideo);
FileOutputStream fos = new FileOutputStream(new File(output));
out.writeContainer(fos.getChannel());
fos.close();
}
}
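A call to it might look like this (the paths are made up, and merge() throws IOException):
MuxMp4.merge("/path/to/audio.mp4", "/path/to/video.mp4", "/path/to/output.mp4");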
Check the MP4Parser sample codes for more information.
It is worth mentioning that both of your input files should be mp4, so you need to convert your mp3 to mp4 as well (a rough sketch of doing that with mp4parser follows below), and your video should not contain any sound, which in your case it does not.
As I mentioned earlier, this is not the right way to get the job done.
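One way to wrap the MP3 in an MP4 container is mp4parser's MP3TrackImpl, which the first question above already references; an untested sketch, with made-up paths, called from a method that declares throws IOException:
Movie audioMovie = new Movie();
audioMovie.addTrack(new MP3TrackImpl(new FileDataSourceImpl("/path/to/music.mp3")));
Container audioMp4 = new DefaultMp4Builder().build(audioMovie);
FileOutputStream audioFos = new FileOutputStream("/path/to/music.mp4");
audioMp4.writeContainer(audioFos.getChannel());
audioFos.close();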
