Zxing PDF417 edit padding / quiet zone - java

I am generating a PDF417 barcode with the zxing library. All good with that...
writer = new PDF417Writer();
bitMatrix = writer.encode(barcodeMessage.getData(),
        BarcodeFormat.PDF_417, WIDTH, WIDTH / 2, // to maintain a width/height ratio
        ImmutableMap.of(
                EncodeHintType.PDF417_COMPACT, Boolean.TRUE,
                EncodeHintType.CHARACTER_SET, messageEncoding
        )
);
barcodeBg = MatrixToImageWriter.toBufferedImage(bitMatrix);
But I was wondering: is there any way to control how much quiet zone is left on each side? Something similar to this... http://www.racoindustries.com/barcodegenerator/2d/pdf417.aspx which lets you choose how much space to leave on each side.
I have tried adding EncodeHintType.MARGIN, but it does not work for this barcode type.
Any ideas?

Support for EncodeHintType.MARGIN for PDF417 was added to zxing in the newest release 3.0.0.
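Once on 3.0.0, the hint can simply be added to the hints map from the question. A minimal sketch based on that code (the margin value of 10 is an arbitrary example):
bitMatrix = writer.encode(barcodeMessage.getData(),
        BarcodeFormat.PDF_417, WIDTH, WIDTH / 2,
        ImmutableMap.of(
                EncodeHintType.PDF417_COMPACT, Boolean.TRUE,
                EncodeHintType.CHARACTER_SET, messageEncoding,
                EncodeHintType.MARGIN, 10 // quiet zone on each side
        )
);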


Why are complex emojis not merged but split up when drawn on Android canvas?

I want to implement an emoji selector for my keyboard app Keyboard Designer. To do so I want to draw emojis based on unicodes in hexadecimal format. The emoji "\u1F636\u200D\u1F32B\uFE0F" is shown correctly when I write it in the text field (two eyes behind a cloud), but when I draw it on my canvas, it looks like two separate emojis (left top corner of the keyboard):
Hint: the hearts have nothing to do with the question; they only mark favourite emojis.
My source is nested in objects and methods, but to show you how it works, I have flattened it into linear commands:
ArrayList<char[]> utf16Chars = new ArrayList<>();
String hexCode = "\\u1F636\\u200D\\u1F32B\\uFE0F";
int distance;
if (hexCode.startsWith("\\")) {
    for (int i = 0; i < hexCode.length(); i += distance) {
        distance = hexCode.indexOf("\\", i + 1) - i;
        if (distance < 0)
            distance = hexCode.length() - i;
        String utf16Code = hexCode.substring(i, i + distance);
        // strip the leading "\u" and parse the hex code point
        int decimalCode = Integer.parseInt(utf16Code.length() >= 6 ? utf16Code.substring(2) : utf16Code, 16);
        // convert the code point to its UTF-16 char units (a surrogate pair if needed)
        char[] utf16Units = Character.toChars(decimalCode);
        utf16Chars.add(utf16Units);
    }
}
StringBuilder stringBuilder = new StringBuilder();
for (char[] utf16Units : utf16Chars)
    for (char character : utf16Units)
        stringBuilder.append(character);
String emoji = stringBuilder.toString();
canvas.drawText(emoji, 0, emoji.length(), x, y, paint);
Does anyone have an idea what I am doing wrong?
Update
Also see this bug report.
The Face in Clouds Emoji was added in Emoji Version 13.1 and will probably be generally available in later versions of Android. My answer, below, works for API 31 but not for API 30 or before. There is a backward compatibility issue with that solution.
There is a class, EmojiCompat, that can provide backward compatibility for displaying the "Face in Clouds" emoji on Android versions before API 31.
I have put together an EmojiCompat project based on the EmojiCompat app in Google's user-interface-samples. A couple of notes on this demo:
The EmojiCompatApplication class has some important setup (a sketch of the essential part follows this list).
The dependency on version 1.1.0 of androidx.emoji:emoji-bundled was updated to version 1.2.0-alpha03. Without this update, the "Face in Clouds" emoji displays as two emojis and not one. As new emojis are released (yearly, I think), this library will need to be updated. I believe an alternative is to use downloadable emoji fonts, but I do not address downloadable fonts here.
In MainActivity, I left everything as it was in the Google project except that I added processing for "MyView" which creates a StaticLayout and displays the content using Layout.draw(canvas) as specified in my previous solution which is what the OP was requesting. Canvas.drawText() is still discouraged.
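The essential part of that setup is initializing EmojiCompat with the bundled font in Application.onCreate(). A minimal sketch, assuming the androidx.emoji:emoji-bundled artifact is on the classpath:
import android.app.Application;
import androidx.emoji.bundled.BundledEmojiCompatConfig;
import androidx.emoji.text.EmojiCompat;

public class EmojiCompatApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Load the bundled emoji font so newer emoji render on older API levels
        EmojiCompat.init(new BundledEmojiCompatConfig(this));
    }
}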
Here is the output of the demo app on an emulator running API 24:
This was more involved than I thought at first and I could not find a good tutorial online. Maybe someone knows of one and can suggest it.
I used the following simple code to create the combined emoji.
// val cloudy = "\u1F636\u200D\u1F32B\uFE0F"
val cloudyFace = intArrayOf(0x1F636, 0x200D, 0x1F32B, 0xFE0F)
val sb = StringBuilder()
for (i in 0 until cloudyFace.size) {
sb.append(getUtf16FromInt(cloudyFace[i]))
}
binding.textView.text = sb.toString()
fun getUtf16FromInt(unicode: Int) = String(Character.toChars(unicode))
Instead of using Canvas.drawText(), use layout.draw(canvas), where layout is a StaticLayout. From the documentation:
This is used by widgets to control text layout. You should not need to use this class directly unless you are implementing your own widget or custom display object, or would be tempted to call Canvas.drawText() directly.
Bottom line: Don't use Canvas.drawText().
You may also use the BoringLayout class if that better suits your needs.
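As a minimal sketch of the StaticLayout approach (API 23+; the text size and availableWidth are assumptions, while emoji, canvas, x and y come from the question):
TextPaint textPaint = new TextPaint(TextPaint.ANTI_ALIAS_FLAG);
textPaint.setTextSize(64f); // assumed size
StaticLayout layout = StaticLayout.Builder
        .obtain(emoji, 0, emoji.length(), textPaint, availableWidth)
        .build();
canvas.save();
canvas.translate(x, y); // position the layout instead of passing x/y to drawText
layout.draw(canvas);
canvas.restore();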

Video Transcode with Android MediaCodec

Struggling with Android MediaCodec, I'm looking for a straightforward way to change the resolution of a video file on Android.
For now I'm trying a single-threaded transcoding method that does all the work step by step so I can understand it well. At a high level it looks as follows:
public void TranscodeVideo()
{
    // Extract
    MediaTrack[] tracks = ExtractTracks(InputPath);

    // Decode
    MediaTrack videoTrack = tracks.Where(o => o.IsVideo).FirstOrDefault();
    MediaTrack rawVideoTrack = DecodeTrack(videoTrack);

    // Edit?
    // ResizeVideoTrack(rawVideoTrack);

    // Encode
    MediaFormat newFormat = MediaHelper.CreateVideoOutputFormat(videoTrack.Format);
    MediaTrack encodedVideoTrack = EncodeTrack(rawVideoTrack, newFormat);

    // Mux
    encodedVideoTrack.Index = videoTrack.Index;
    tracks[Array.IndexOf(tracks, videoTrack)] = encodedVideoTrack;
    MuxTracks(OutputPath, tracks);
}
Extraction works fine, returning a track with audio only and a track with video only. Muxing works fine, combining the two previous tracks again. Decoding works, but I don't know how to verify it; the raw frames on the track weigh much more than the originals, so I assume it's right.
Problem
The encoder's input buffer is smaller than the raw frame size, and its size is also related to the configured encoding format, so I assume I need to resize the frames in some way, but I can't find anything useful. Am I correct on this? Am I missing something? What is the way to resize raw video frames? Any help? :S
PS
You may notice that I'm using C# (Xamarin.Android) for more fun, but the underlying API is of course Java.
I'm using ByteBuffers, not Surfaces, because it seems easier. Using Surfaces will be the next step; any advice is welcome.
I know the single-threaded process is highly inefficient, but it keeps things simple. Connecting the decoder output buffer to the encoder input buffer will be another next step.
I dug through the PhilLab, Grafika and Bigflake examples, but nothing seemed very useful for me.
I'm avoiding ffmpeg on Android.
Thank you everyone for your time.
Going off of the comment above to implement libVLC:
Add this to your app root's build.gradle:
allprojects {
    repositories {
        ...
        maven {
            url 'https://jitpack.io'
        }
    }
}
Add this to your dependent app's build.gradle:
dependencies {
    ...
    implementation 'com.github.masterwok:libvlc-android-sdk:3.0.13'
}
Here is an example of loading an RTSP stream in an activity:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.camera_stream_layout);

    // Get URL
    this.rtspUrl = getIntent().getExtras().getString(RTSP_URL);
    Log.d(TAG, "Playing back " + rtspUrl);

    this.mSurface = findViewById(R.id.camera_surface);
    this.holder = this.mSurface.getHolder();

    ArrayList<String> options = new ArrayList<>();
    options.add("-vvv"); // verbosity
    // Add VLC transcoder options here
    this.libvlc = new LibVLC(getApplicationContext(), options);
    this.holder.setKeepScreenOn(true);
    //this.holder.setFixedSize();

    // Create media player
    this.mMediaPlayer = new MediaPlayer(this.libvlc);
    this.mMediaPlayer.setEventListener(this.mPlayerListener);

    // Set up video output
    final IVLCVout vout = this.mMediaPlayer.getVLCVout();
    vout.setVideoView(this.mSurface);

    // Size the video to fit the app screen
    DisplayMetrics displayMetrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
    ViewGroup.LayoutParams videoParams = this.mSurface.getLayoutParams();
    videoParams.width = displayMetrics.widthPixels;
    videoParams.height = displayMetrics.heightPixels;
    vout.setWindowSize(videoParams.width, videoParams.height);
    vout.addCallback(this);
    vout.attachViews();

    final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
    // Use m.addOption(...) to add transcoder options here
    this.mMediaPlayer.setMedia(m);
    this.mMediaPlayer.play();
}
Here is the documentation for the VLC transcoder options:
https://wiki.videolan.org/Documentation:Streaming_HowTo_New/
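Transcoder options are attached to the Media via addOption(). Purely as an illustration (the option string below is a hypothetical example; check the wiki above for the exact syntax your build supports):
// Hypothetical transcode chain; vcodec/width/height values are placeholders
m.addOption(":sout=#transcode{vcodec=h264,width=640,height=360}:display");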
You are right, the input buffer size of the encoder is smaller because it expects input of the specified dimensions. The encoder only, as the name suggests, encodes.
I read your question as more of a "why" than a "how" question, so I'll only point you to where you'll find the "whys".
The decoded frame is a YUV image (I suggest quickly skimming the Wikipedia article), usually NV21 if I'm not mistaken, though it might differ from device to device. To resize it, I suggest using a library, as every plane of the image needs to be scaled down differently, and a library usually takes care of filtering. Check out libYUV. If you are interested in the actual resizing algorithms, check out this, and for implementations, this.
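To make the per-plane point concrete, here is a minimal Java sketch of nearest-neighbor scaling of a single plane (illustrative only; a real pipeline would use libYUV, which also handles filtering). In NV21/I420 the Y plane is full resolution while the chroma planes are half resolution in each dimension, so each plane is scaled with its own dimensions:
// Nearest-neighbor scale of one packed 8-bit plane (method name is hypothetical)
static byte[] scalePlane(byte[] src, int srcW, int srcH, int dstW, int dstH) {
    byte[] dst = new byte[dstW * dstH];
    for (int y = 0; y < dstH; y++) {
        int srcY = y * srcH / dstH; // map destination row to source row
        for (int x = 0; x < dstW; x++) {
            int srcX = x * srcW / dstW; // map destination column to source column
            dst[y * dstW + x] = src[srcY * srcW + srcX];
        }
    }
    return dst;
}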
If you are not required to handle the decoding and encoding with ByteBuffers, I suggest using a Surface, as you already mentioned. It has multiple benefits over decoding to ByteBuffers (a sketch of the wiring follows this list):
It is more memory efficient, as there is no copy between the native buffer and an app-allocated buffer; the native buffers simply get swapped from and to the Surface.
If you plan to render the frame, be it for resizing or displaying, it can be done by the device's graphics processor. On how to do that, check out Bigflake's DecodeEditEncode test.
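For orientation, a hedged Java sketch of the Surface wiring (variable names are illustrative; note that actual resizing still needs an OpenGL stage between the two codecs, as in the DecodeEditEncode test, since this direct hookup suits same-size transcoding):
// Configure the encoder first: it provides the input Surface
MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", dstWidth, dstHeight);
outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInput = encoder.createInputSurface();

// The decoder then renders its output directly onto the encoder's input Surface
MediaCodec decoder = MediaCodec.createDecoderByType(srcMime);
decoder.configure(srcFormat, encoderInput, null, 0);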
I hope this answers some of your questions.

Android copy built-in video recording quality and framerate using camera2

The image quality and the framerate I get when using the camera2 API does not match the one I get when I manually record a video using the camera app to a file.
I am trying to do real-time image processing using OpenCV on Android. I have manually recorded a video using the built-in camera application and everything worked perfectly: the image quality was good, the framerate was a stable 30 FPS.
My min SDK version is 22, so I am using the camera2 API's repeating requests. I have set it up together with an ImageReader and the YUV_420_888 format. I have tried both the PREVIEW and the RECORD capture request templates, and tried manually setting 18 capture request parameters in the builder (e.g. disabling auto-white-balance, setting the color correction mode to fast), but the FPS was still around 8-9 and the image quality was poor as well. Another phone yielded the same results, despite its max FPS being 16.67 (instead of 30).
The culprit is not my image processing (which happens in another thread, except for reading the image's buffer): I checked the FPS when I don't do anything with the frame (I didn't even display the image), and it was still around 8-9.
You can see the relevant code for that here:
// constructor:
HandlerThread thread = new HandlerThread("MyApp:CameraCallbacks",
        Process.THREAD_PRIORITY_MORE_FAVORABLE);
thread.start();
captureCallbackHandler = new Handler(thread.getLooper());

// some UI event:
cameraManager.openCamera(cameraId, new CameraStateCallback(), null);

// CameraStateCallback#onOpened:
// size is 1280x720, same as the manually captured video's
imageReader = ImageReader.newInstance(size.getWidth(), size.getHeight(),
        ImageFormat.YUV_420_888, 1);
imageReader.setOnImageAvailableListener(new ImageAvailableListener(), captureCallbackHandler);
camera.createCaptureSession(Collections.singletonList(imageReader.getSurface()),
        new CaptureStateCallback(), captureCallbackHandler);

// CaptureStateCallback#onConfigured:
CaptureRequest.Builder builder = activeCamera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
builder.addTarget(imageReader.getSurface());
// setting the FPS range has no effect: this phone only has one option
session.setRepeatingRequest(builder.build(), null, captureCallbackHandler);

// ImageAvailableListener#onImageAvailable:
long current = System.nanoTime();
deltaTime += (current - last - deltaTime) * 0.1; // exponential moving average of the frame time
Log.d("MyApp", "onImageAvailable FPS: " + (1000000000 / deltaTime)); // prints around 8.7
last = current;
try (Image image = reader.acquireLatestImage()) { }
On a Samsung Galaxy J3 (2016), calling Camera.Parameters#setRecordingHint(true) (while using the deprecated camera API) achieves exactly what I wanted: the video quality and the framerate become the same as the built-in video recorder's. Unfortunately, it also means that I was unable to modify the resolution, and setting that hint did not achieve the same effect on a Doogee X5 MAX.
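For reference, a minimal sketch of applying that hint with the deprecated android.hardware.Camera API (camera selection and error handling omitted):
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
params.setRecordingHint(true); // hint that the stream will be recorded
camera.setParameters(params);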

How do I hide the hand/cursor?

I want to hide the hand (that's the thing in the middle, right?) of my circular gauge. So far, I have tried this:
myCircularGauge.getHand().setVisible(false);
However, that seems to produce a crash when the diagram is painted; the top of the stack trace is Choreographer.doCallbacks(int, long) line: 558. How can I successfully hide the hand?
What version are you using? I'm using TeeChart Java for Android v3.2012.0808 here.
You are right that the following seems to crash:
getHand().setVisible(false);
However, this seems to work fine:
getCenter().setVisible(false);
We'll investigate what's happening with the Hand.
Thanks for reporting it.
Try replacing the cursor with a transparent image so it is not visible (this is for AWT/Swing):
// All-zero ARGB pixels yield a fully transparent 16x16 image
int[] pixels = new int[16 * 16];
Image image = Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(16, 16, pixels, 0, 16));
Cursor transparentCursor = Toolkit.getDefaultToolkit().createCustomCursor(image, new Point(0, 0), "Transparent");
// set the transparent cursor
frame.setCursor(transparentCursor);

Copy an OpenOffice slide from one presentation to another w/ Java

I'm building a Java application using the OOo SDK in which I'm manipulating slides in an OpenOffice Impress presentation. I know how to get an object containing a single slide; now I'm looking for a way to copy a slide from one presentation to another.
This is (shortened) what I do to open the files and select the slide:
String filename = "file://....odp";
int offset = 2;
XComponent xSourceComponent = xComponentLoader.loadComponentFromURL(
        filename, "_blank", 0, loadProps);
XComponent xTargetComponent = xComponentLoader.loadComponentFromURL(
        "private:factory/simpress", "_blank", 0, loadProps);
XDrawPages xDrawPages = ((XDrawPagesSupplier) UnoRuntime.queryInterface(
        XDrawPagesSupplier.class, xSourceComponent)).getDrawPages();
XPresentationPage xPage = (XPresentationPage) UnoRuntime.queryInterface(
        XPresentationPage.class, xDrawPages.getByIndex(offset));
Based on that, I tried to get a transferable object like this:
XTransferable t = (XTransferable) UnoRuntime.queryInterface(
        XTransferable.class, xPage);
But that doesn't seem to be supported. Does anybody have an idea how to do this?
Oh man, good luck. I looked at trying to do something like this about a year ago and ended up using Apache POI instead -- I'm not necessarily sure the OO SDK can't do it, but the documentation for the API is so esoteric that I couldn't figure it out. In POI it's as easy as:
SlideShow ss = new SlideShow(new FileInputStream(inputFile1));
// copy the shapes of the first slide onto a newly created slide
Slide newSlide = ss.createSlide();
for (Shape shape : ss.getSlides()[0].getShapes()) {
    newSlide.addShape(shape);
}
That may not really help you since you're dealing with OO not PPT, but if you're desperate for a solution and not getting help on the OpenOffice front, you could probably string together JODConverter (http://www.artofsolving.com/opensource/jodconverter) and POI.
