Disable text below barcode image in Code39Bean - java

This code creates a barcode image along with text below the image. I need to remove the text from the image.
Code39Bean does not appear to have any property to disable this.
public static ByteArrayOutputStream generateBarcodeImg(String inputId)
throws Exception {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Code39Bean bean = new Code39Bean();
final int dpi = 150;
/**
* Configure the barcode generator and make the narrow bar width
* exactly one pixel.
*/
bean.setModuleWidth(UnitConv.in2mm(1.0f / dpi));
bean.setWideFactor(3);
bean.doQuietZone(false);
try {
/** Set up the canvas provider for monochrome PNG output. */
BitmapCanvasProvider canvas = new BitmapCanvasProvider(baos,
BarCodeConstant.CONTENT_TYPE, dpi,
BufferedImage.TYPE_BYTE_BINARY, false, 0);
/** Generate the bar code. */
bean.generateBarcode(canvas, inputId);
/** Signal end of generation. */
canvas.finish();
} catch (IOException e) {
logger.error(
"Exception occured in BarcodeGeneration: generateBarcodeImg "
+ e.getLocalizedMessage(), e);
throw new MobileResourceException(
"Exception occured in BarcodeGeneration: generateBarcodeImg",
null);
}
return baos;
}
}

You can remove the text from the image through the HumanReadablePlacement property:
bean.setMsgPosition(HumanReadablePlacement.HRP_NONE);
This suppresses the human-readable text below the barcode.
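For context, a minimal sketch of where that call fits in the setup from the question, assuming the standard Barcode4J packages (org.krysalis.barcode4j); everything else mirrors the original code:
import org.krysalis.barcode4j.HumanReadablePlacement;
import org.krysalis.barcode4j.impl.code39.Code39Bean;
import org.krysalis.barcode4j.tools.UnitConv;

Code39Bean bean = new Code39Bean();
final int dpi = 150;
bean.setModuleWidth(UnitConv.in2mm(1.0f / dpi)); // narrow bar = exactly one pixel at 150 dpi
bean.setWideFactor(3);
bean.doQuietZone(false);
bean.setMsgPosition(HumanReadablePlacement.HRP_NONE); // no human-readable text under the bars
// ...then generate the barcode exactly as in the question.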

Related

Android MLKit face detection not detecting faces when using Bitmap

I have an XR app where the display shows the rear camera feed, so capturing the screen is pretty much the same as capturing the camera feed.
To that end, I take screenshots (Bitmaps) and then try to detect faces within them using Google's ML Kit.
I'm following the official guide to detect faces.
To do this, I first init my face detector:
FaceDetector detector;
public MyFaceDetector(){
FaceDetectorOptions realTimeOpts =
new FaceDetectorOptions.Builder()
.setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
.build();
detector = FaceDetection.getClient(realTimeOpts);
}
I then have a function which takes a bitmap. I first convert the bitmap to a byte array. I do this because InputImage.fromBitmap is very slow, and ML Kit actually tells me that I should use a byte array:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 85, byteArrayOutputStream);
byte[] byteArray = byteArrayOutputStream.toByteArray();
Next I make a mutable copy of the Bitmap (so that I can draw onto it), and set up a Canvas object, along with a color that will be used when drawing on to the Bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length, options);
Canvas canvas = new Canvas(bmp);
Paint p = new Paint();
p.setColor(Color.RED);
After all is set up, I create an InputImage (used by the FaceDetector), using the byte array:
InputImage image = InputImage.fromByteArray(byteArray, bmp.getWidth(), bmp.getHeight(),0, InputImage.IMAGE_FORMAT_NV21);
Note the image format... There is an InputImage.IMAGE_FORMAT_BITMAP, but using it throws an IllegalArgumentException. Anyway, I next try to process the Bitmap, detect faces, fill each detected face with the color defined earlier, and then save the Bitmap to disk:
Task<List<Face>> result = detector.process(image).addOnSuccessListener(
new OnSuccessListener<List<Face>>() {
@Override
public void onSuccess(List<Face> faces) {
Log.e("FACE DETECTION APP", "NUMBER OF FACES: " + faces.size());
Thread processor = new Thread(new Runnable() {
@Override
public void run() {
for (Face face : faces) {
Rect destinationRect = face.getBoundingBox();
canvas.drawRect(destinationRect, p);
canvas.save();
Log.e("FACE DETECTION APP", "WE GOT SOME FACCES!!!");
}
File file = new File(someFilePath);
try {
FileOutputStream fOut = new FileOutputStream(file);
bmp.compress(Bitmap.CompressFormat.JPEG, 85, fOut);
fOut.flush();
fOut.close();
} catch (Exception e) {
e.printStackTrace();
}
}
});
processor.start();
}
})
.addOnFailureListener(
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
// Task failed with an exception
// ...
}
});
}
While this code runs (i.e. no exceptions) and the bitmap is correctly written to disk, no faces are ever detected (faces.size() is always 0). I've tried rotating the image. I've tried changing the quality of the Bitmap. I've tried with and without the thread to process any detected faces. I've tried everything I can think of.
Anyone have any ideas?
ML Kit's InputImage.fromByteArray only supports the YV12 and NV21 formats. You will need to convert the bitmap to one of these formats for the ML Kit pipeline to process it. Also, if the original image you have is a Bitmap, you can probably just use InputImage.fromBitmap to construct an InputImage. It shouldn't be slower than your current approach.
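A minimal sketch of that suggestion, assuming the bmp and detector variables from the question (InputImage.fromBitmap takes the bitmap plus a rotation in degrees):
// Build the InputImage straight from the Bitmap; 0 means no extra rotation
InputImage image = InputImage.fromBitmap(bmp, 0);
detector.process(image)
.addOnSuccessListener(faces -> Log.d("FACE DETECTION APP", "faces: " + faces.size()))
.addOnFailureListener(e -> Log.e("FACE DETECTION APP", "detection failed", e));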
I was having the same issue; using InputImage.fromMediaImage(..., ...) fixed it:
override fun analyze(image: ImageProxy) {
val mediaImage: Image = image.image ?: run {
image.close()
return
}
val inputImage = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
// TODO: Your ML Code
}
Check here for more details
https://developers.google.com/ml-kit/vision/image-labeling/android

GIF is not animating through BufferedImage

I am working on a project in which I need to draw graphics (a BufferedImage) on top of another image.
I am using the ZK framework for the front end and Java for the back end.
Here is my sample code and output:
public void createImage() throws Exception{
ladleList = fD.getListLocation();
baseImage = ImageIO.read(new File(BASE_IMG));
Graphics2D baseGraphics = baseImage.createGraphics();
for(LtsLadleLocation ladLoc : ladleList){
setLadlePostion(ladLoc);
setLadleImage(ladle);
Graphics2D imageGraphics = ladleImage.createGraphics();
imageGraphics.setFont(font);
String ladleNo = ladle.getLadleId().intValue() <=9 ? " "+ladle.getLadleId().toString() : ladle.getLadleId().toString();
imageGraphics.drawString(ladleNo, 12, 45);
baseGraphics.drawImage(ladleImage, position.getxVal(), position.getyVal(), 62,62,null,null);
img.setContent(baseImage);
BindUtils.postNotifyChange(null, null, this, "img");
ladle=null;
position = null;
ladleImage = null;
}
}
/**
* @Desc : selects the ladle and position values from the position list.
* posList holds the x and y values for particular locations.
* @param : ladLoc and position
*/
public void setLadlePostion(LtsLadleLocation ladLoc) {
for(Position pos:posList){
if(ladLoc.getLocationdescription().equalsIgnoreCase(pos.getLocName())){
ladle = ladLoc;
position = pos;
break;
}
}
}
/**
* @Desc : This method is used to get the relevant image according to the location
* and according to the status.
* @param ladleImageObj
*/
public void setLadleImage(LtsLadleLocation ladleImageObj) throws Exception {
if("LF_2".equalsIgnoreCase(ladleImageObj.getLocationdescription())){
setLFLadle(ladleImageObj);
}else{
setNormalLadle(ladleImageObj);
}
}
/**
* @Desc : This method is used to set the normal ladle images (circle-shaped images)
* @param ladleImageObj
*/
public void setNormalLadle(LtsLadleLocation ladleImageObj) throws Exception {
if("OUT".equalsIgnoreCase(ladleImageObj.getStatus())){
ladleImage = ImageIO.read(new File(HEATER_OUT));
}else if("IN".equalsIgnoreCase(ladleImageObj.getStatus())){
if("F".equalsIgnoreCase(ladleImageObj.getLadleStatusFlag())){
ladleImage = ImageIO.read(new File(HEATER_FILLED));
}else{
ladleImage = ImageIO.read(new File(HEATER_EMPTY));
}
}
}
/**
* @Desc : For setting LF images
* @param ladleImageObj
*/
public void setLFLadle(LtsLadleLocation ladleImageObj) throws Exception {
if("OUT".equalsIgnoreCase(ladleImageObj.getStatus())){
ladleImage = ImageIO.read(new File(LADLE_OUT));
}else if("IN".equalsIgnoreCase(ladleImageObj.getStatus())){
if("F".equalsIgnoreCase(ladleImageObj.getLadleStatusFlag())){
ladleImage = ImageIO.read(new File(LADLE_FILLED));
}else{
ladleImage = ImageIO.read(new File(LADLE_EMPTY));
}
}
}
Here img is the id of the ZK Image element, and my ZK code is:
<window title="Hello World!!" border="normal" apply="org.zkoss.bind.BindComposer"
viewModel = "#id('vm') #init('com.practice.image.ImageViewModel')">
<image id="img" >
<custom-attributes org.zkoss.zul.image.preload="true" />
</image>
<timer id="refresh" repeats="true" onTimer="#command('createImage')" delay="10000"/>
</window>
I need to show a GIF (blinking image) on ladle 7, but it's not animating. Please help me with this.
The output is shown in the attached screenshot.

Taking a photo upon QR code scan

I have an application that has ZXing integrated. I've been looking at trying to store a photo when a QR code is scanned. Sean Owen recommended the following:
"The app is getting a continuous stream of frames from the camera to analyze. You can store off any of them by intercepting them in the preview callback."
As far as I am aware, the only instance of a preview callback is within the CameraManager.java class (https://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/camera/CameraManager.java).
In particular:
public synchronized void requestPreviewFrame(Handler handler, int message) {
Camera theCamera = camera;
if (theCamera != null && previewing) {
previewCallback.setHandler(handler, message);
theCamera.setOneShotPreviewCallback(previewCallback);
}}
Since this runs every frame, I don't have a method of saving (preferably as byte data) any particular frame. I would have assumed there to be a point where something is passed back to the CaptureActivity.java class (link given at the bottom), however I haven't found anything myself.
Anyone who has used ZXing will know that after a scan a ghostly image of the scan data is shown on screen; if it is possible to hijack this part of the code and convert and/or save that data as byte data, that may also be useful.
Any help, or other ideas would be very appreciated. Requests for any further information will be responded to quickly. Thank you.
Full code available within this folder: https://code.google.com/p/zxing/source/browse/trunk#trunk%2Fandroid%2Fsrc%2Fcom%2Fgoogle%2Fzxing%2Fclient%2Fandroid
Update:
So far the following sections of code appear to be possible places to save byte data; both are within the DecodeHandler.java class.
private void decode(byte[] data, int width, int height) {
long start = System.currentTimeMillis();
Result rawResult = null;
PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
if (source != null) {
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
//here?
try {
rawResult = multiFormatReader.decodeWithState(bitmap);
} catch (ReaderException re) {
// continue
} finally {
multiFormatReader.reset();
}
}
Handler handler = activity.getHandler();
if (rawResult != null) {
// Don't log the barcode contents for security.
long end = System.currentTimeMillis();
Log.d(TAG, "Found barcode in " + (end - start) + " ms");
if (handler != null) {
Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
Bundle bundle = new Bundle();
Bitmap grayscaleBitmap = toBitmap(source, source.renderCroppedGreyscaleBitmap());
//I believe this bitmap is the one shown on screen after a scan has been performed
bundle.putParcelable(DecodeThread.BARCODE_BITMAP, grayscaleBitmap);
message.setData(bundle);
message.sendToTarget();
}
} else {
if (handler != null) {
Message message = Message.obtain(handler, R.id.decode_failed);
message.sendToTarget();
}
}}
private static Bitmap toBitmap(LuminanceSource source, int[] pixels) {
int width = source.getWidth();
int height = source.getHeight();
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
//saving the bitmap at this point, or slightly sooner before greyscaling, could work.
return bitmap;
}
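For reference, a minimal sketch of writing such a Bitmap to disk once you have it; saveBitmap is a hypothetical helper (Bitmap.compress and FileOutputStream are standard Android APIs, and the JPEG quality of 90 is just an example):
// Hypothetical helper: write a bitmap to disk as a JPEG
private static void saveBitmap(Bitmap bitmap, File outFile) {
FileOutputStream out = null;
try {
out = new FileOutputStream(outFile);
bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
out.flush();
} catch (IOException e) {
Log.e("DecodeHandler", "Failed to save bitmap", e);
} finally {
if (out != null) {
try { out.close(); } catch (IOException ignored) {}
}
}
}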
Update: Requested code found within PreviewCallback.java
public void onPreviewFrame(byte[] data, Camera camera) {
Point cameraResolution = configManager.getCameraResolution();
Handler thePreviewHandler = previewHandler;
if (cameraResolution != null && thePreviewHandler != null) {
Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
cameraResolution.y, data);
message.sendToTarget();
previewHandler = null;
} else {
Log.d(TAG, "Got preview callback, but no handler or resolution available");
}
The data from the preview callback is in NV21 format, so if you want to save it you could use code like this:
YuvImage im = new YuvImage(byteArray, ImageFormat.NV21, width,
height, null);
Rect r = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
im.compressToJpeg(r, 50, baos);
try {
FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
output.write(baos.toByteArray());
output.flush();
output.close();
} catch (FileNotFoundException e) {
// the output path could not be opened; log or handle this rather than swallowing it
} catch (IOException e) {
// writing failed; log or handle this rather than swallowing it
}
The point to save is when ZXing has decoded the byte[] and returned the content String successfully.

ANDROID ZXING: Saving a photo in onPreviewFrame saves a photo every frame. How to only save a single photo upon scan?

For the last few weeks I have been attempting to alter ZXing to take a photo immediately upon scan. Thanks to help I am at a point where I can consistently save an image from the onPreviewFrame method within PreviewCallback.java.
The code I use within the onPreviewFrame method follows, and then a short rundown of how my app works.
public void onPreviewFrame(byte[] data, Camera camera) {
Point cameraResolution = configManager.getCameraResolution();
Handler thePreviewHandler = previewHandler;
android.hardware.Camera.Parameters parameters = camera.getParameters();
android.hardware.Camera.Size size = parameters.getPreviewSize();
int height = size.height;
int width = size.width;
System.out.println("HEIGHT IS" + height);
System.out.println("WIDTH IS" + width);
if (cameraResolution != null && thePreviewHandler != null) {
YuvImage im = new YuvImage(data, ImageFormat.NV21, width,
height, null);
Rect r = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
im.compressToJpeg(r, 50, baos);
try {
FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
output.write(baos.toByteArray());
output.flush();
output.close();
System.out.println("Attempting to save file");
System.out.println(data);
} catch (FileNotFoundException e) {
System.out.println("Saving to file failed");
} catch (IOException e) {
System.out.println("Saving to file failed");
}
Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
cameraResolution.y, data);
message.sendToTarget();
previewHandler = null;
} else {
Log.d(TAG, "Got preview callback, but no handler or resolution available");
}}
My application centers around its own GUI and functionality, but can engage ZXing via an intent (ZXing is built into the app's build path; yes, this is bad, as it can interfere if ZXing is already installed). Once ZXing has scanned a QR code, the information encoded on it is returned to my app and stored, and then after a short delay ZXing is automatically re-initiated.
My current code saves an image every frame while ZXing is running; the functionality I would like is for only the frame captured on scan to be saved. ZXing stops saving images in the short window where my app takes over again, but ZXing is quickly re-initialized and I may not have time to manipulate the data. A possible workaround is quickly renaming the saved file so that ZXing doesn't start overwriting it and manipulation can be performed in the background. Nevertheless, saving an image every frame is a waste of resources and less than preferable.
How do I only save an image upon scan?
Thanks in advance.
Update: here are the instances of multiFormatReader I found, as requested:
private final CaptureActivity activity;
private final MultiFormatReader multiFormatReader;
private boolean running = true;
DecodeHandler(CaptureActivity activity, Map<DecodeHintType,Object> hints) {
multiFormatReader = new MultiFormatReader();
multiFormatReader.setHints(hints);
this.activity = activity;
}
@Override
public void handleMessage(Message message) {
if (!running) {
return;
}
if (message.what == R.id.decode) {
decode((byte[]) message.obj, message.arg1, message.arg2);
} else if (message.what == R.id.quit) {
running = false;
Looper.myLooper().quit();
}}
private void decode(byte[] data, int width, int height) {
long start = System.currentTimeMillis();
Result rawResult = null;
PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
if (source != null) {
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
//here?
try {
rawResult = multiFormatReader.decodeWithState(bitmap);
} catch (ReaderException re) {
// continue
} finally {
multiFormatReader.reset();
}
}
ZXing decodes every received frame until it finds valid information. The point to save the image is when ZXing returns a non-null result. In addition, you can save the file under a different name ("timestamp" + ".jpg") each time, so the previous file will not be overwritten.
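A minimal sketch of that idea, placed inside the decode(byte[], int, int) method shown above, after the try/catch around decodeWithState; saveFrameAsJpeg is a hypothetical helper wrapping the YuvImage code from onPreviewFrame, moved out of the per-frame callback, and the /sdcard path is just an example:
if (rawResult != null) {
// A barcode was decoded in this frame, so persist only this one frame,
// using a timestamped name so earlier scans are not overwritten.
String path = "/sdcard/scan_" + System.currentTimeMillis() + ".jpg";
saveFrameAsJpeg(data, width, height, path); // hypothetical helper using YuvImage.compressToJpeg
}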

Saving an image to a .jpg file: Java

The way I am saving a JFreeChart to a JPEG file is:
JFreeChart chart = ChartFactory.createXYLineChart(
"Hysteresis Plot", // chart title
"Pounds(lb)", // domain axis label
"Movement(inch)", // range axis label
dataset, // data
PlotOrientation.VERTICAL, // orientation
false, // include legend
true, // tooltips
false // urls
);
Then:
image = chart.createBufferedImage(300, 200);
The image appears as:
My save function is:
public static void saveToFile(BufferedImage img)
throws FileNotFoundException, IOException
{
FileOutputStream fos = new FileOutputStream("D:/Sample.jpg");
JPEGImageEncoder encoder2 =
JPEGCodec.createJPEGEncoder(fos);
JPEGEncodeParam param2 =
encoder2.getDefaultJPEGEncodeParam(img);
param2.setQuality((float) 200, true);
encoder2.encode(img,param2);
fos.close();
}
I am calling it as:
try{
saveToFile(image);
}catch(Exception e){
e.printStackTrace();
}
The saved image appears as:
Any suggestion where I am wrong, or how to save it the way it appears? Or maybe I need to save it as .png; can anyone let me know how to save it as .png?
Thanks
A simple Solution:
public static void saveToFile(BufferedImage img)
throws FileNotFoundException, IOException
{
File outputfile = new File("D:\\Sample.png");
ImageIO.write(img, "png", outputfile);
}
This saved the image the way it appears.
I would rather suggest that, instead of using ImageIO.write to save your image, you use the following method:
ChartUtilities.saveChartAsJPEG(new File("name of your file"), chart, width, height);
because then you can manage the size of the picture and also save the picture without filters.
Here is an example of how this can be done:
File imageFile = new File("C:\\LineChart.png");
int width = 640;
int height = 480;
try {
ChartUtilities.saveChartAsPNG(imageFile, chart, width, height);
} catch (IOException ex) {
System.err.println(ex);
}
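The JPEG variant mentioned above looks much the same; this assumes JFreeChart 1.0.x's ChartUtilities and java.io.File, and the path and size are just examples:
File imageFile = new File("C:\\LineChart.jpg");
try {
// writes the chart straight to a JPEG file at the given size
ChartUtilities.saveChartAsJPEG(imageFile, chart, 640, 480);
} catch (IOException ex) {
System.err.println(ex);
}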
