Has anybody been able to read PDF417 barcodes using the ZXing library on Android? They list support for it, and according to their page that support is in the 'alpha' stage.
We are not looking for a perfect solution. Since PDF417 is quite complex and needs a very good camera with auto-focus, we can accept that it will work only on a few pre-selected high-end devices.
We have also tried the Barcode Scanner+ app available on the Android Market; it has a PDF417 option in the settings, but whatever we try to read, it always fails.
We also looked for a commercial SDK, including here on Stack Overflow, but with no luck.
Any help is appreciated.
Kind Regards,
STeN
It really depends on what you expect. Simple PDF417 reads pretty instantly, like... this or this.
This will never be scanned.
Borderline is stuff that is small or moderately complex: example 1 and example 2.
I can read the first but not the second, even though the first is denser -- size helps.
Make sure to enable PDF417 decoding; it's off by default (see the sketch after these tips).
A quiet zone (white space around the code) is required.
Focus and light help a lot.
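For reference, a minimal sketch (not from the original answer) of enabling PDF417 through ZXing's decode hints; the class and hint names assume a reasonably recent zxing-core release, and the BinaryBitmap is assumed to be built elsewhere from a camera frame:

import com.google.zxing.BarcodeFormat;
import com.google.zxing.BinaryBitmap;
import com.google.zxing.DecodeHintType;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.NotFoundException;
import com.google.zxing.Result;

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

public final class Pdf417Decoder {

    // Decodes one frame, explicitly asking ZXing to try PDF417,
    // since that format may not be in the default set on older releases.
    public static Result decodePdf417(BinaryBitmap frame) throws NotFoundException {
        Map<DecodeHintType, Object> hints = new EnumMap<DecodeHintType, Object>(DecodeHintType.class);
        hints.put(DecodeHintType.POSSIBLE_FORMATS, EnumSet.of(BarcodeFormat.PDF_417));
        hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE); // slower, but more tolerant of noise
        return new MultiFormatReader().decode(frame, hints);
    }
}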
You can try the PDF417.mobi SDK. It should work on low-end phones as long as they have an auto-focus camera. It's a commercial library, but it's free for developers and for non-commercial purposes.
You can try the demo here or play with the code directly from GitHub.
The official web site is http://pdf417.mobi/
Disclaimer: I'm part of the team working on PDF417.mobi
I have used it; it can scan the PDF417 format. Make sure you try it on a device with an auto-focus camera. I tried it on a Samsung Galaxy Tab and it works like a charm.
ZXing's solution did not work for me. I used the DataSymbol decoder (turn on 2D codes; they are off by default) on my Samsung Charge. In less than a second I captured my driver's license...
I got similar results to those described by Sean Owen, in that only simple PDF417 codes were being read. It feels like the ZXing library doesn't have the same error correction for PDF417 that it does for QR codes. However, with user assistance we were able to eliminate noise and create an artificial quiet zone by:
requiring the user to hold the phone in landscape mode (this maximizes the pixels captured from the camera, even in 640x480 mode)
requiring the user to fit the barcode inside a 50:18 clipping rectangle (this ratio seems to best fit the US driver's license, and such a clipping rectangle lets the user crop away most of the noise; see the cropping sketch below)
allowing the user to control focus and correct for tilt distortion
By following the above, even some of the notoriously difficult PDF417 images can be scanned.
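To give an idea of the clipping step, here is a hedged sketch (not from the original answer) of cropping a landscape NV21 preview frame to a centered 50:18 window before handing it to ZXing; the constructor signature is as I recall it from zxing-core:

import com.google.zxing.BinaryBitmap;
import com.google.zxing.PlanarYUVLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

public final class LicenseCropper {

    // Crops the luminance data to a centered 50:18 rectangle so that most of
    // the noise around the driver's license barcode is discarded before decoding.
    public static BinaryBitmap cropToLicense(byte[] yuvData, int width, int height) {
        int cropWidth = width;                    // use the full landscape width
        int cropHeight = cropWidth * 18 / 50;     // 50:18 aspect ratio
        if (cropHeight > height) {
            cropHeight = height;
            cropWidth = cropHeight * 50 / 18;
        }
        int left = (width - cropWidth) / 2;
        int top = (height - cropHeight) / 2;

        PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(
                yuvData, width, height, left, top, cropWidth, cropHeight, false);
        return new BinaryBitmap(new HybridBinarizer(source));
    }
}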
Google's ML Kit Barcode Scanning, which grew out of Google's Mobile Vision library, lists support for PDF-417 barcodes.
It automatically parses QR Code, Data Matrix, PDF-417, and Aztec values for the following supported content types:
URL
Contact information (VCARD, etc.)
Calendar event
Email
Phone
SMS
ISBN
WiFi
Geo-location (latitude and longitude)
AAMVA driver license/ID
Review the Getting Started page or clone the Git project to get started; a minimal setup is sketched below.
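As a rough sketch (package paths and method names follow a recent ML Kit release and may differ by version), restricting the scanner to PDF-417 looks something like this:

import android.graphics.Bitmap;

import com.google.mlkit.vision.barcode.BarcodeScanner;
import com.google.mlkit.vision.barcode.BarcodeScannerOptions;
import com.google.mlkit.vision.barcode.BarcodeScanning;
import com.google.mlkit.vision.barcode.common.Barcode;
import com.google.mlkit.vision.common.InputImage;

import java.util.List;

public final class MlKitPdf417Scanner {

    // Configures ML Kit to look only for PDF-417 and processes a single bitmap frame.
    public static void scan(Bitmap frame) {
        BarcodeScannerOptions options = new BarcodeScannerOptions.Builder()
                .setBarcodeFormats(Barcode.FORMAT_PDF417)
                .build();
        BarcodeScanner scanner = BarcodeScanning.getClient(options);

        InputImage image = InputImage.fromBitmap(frame, 0); // 0 = rotation in degrees

        scanner.process(image)
                .addOnSuccessListener((List<Barcode> barcodes) -> {
                    for (Barcode barcode : barcodes) {
                        // For AAMVA licenses, barcode.getDriverLicense() exposes parsed fields.
                        System.out.println(barcode.getRawValue());
                    }
                })
                .addOnFailureListener(Throwable::printStackTrace);
    }
}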
I am working on a big project with Codename One (I can't attach my code because it's really big). I built the Android app and it works on Android devices, but recently I got an iOS build for this project and it doesn't work on an iOS device (it just shows a white page instead of the map).
My project is a map framework that renders tiles and ... using graphics (I used the Graphics class for drawing, transforming, writing text, and more).
I used an InputStream for working with files because File is not supported.
I need a way to debug the iOS build and find out why the tiles aren't shown.
In fact, I don't know anything about iOS or Objective-C.
Thanks in advance.
Most of the logging functionality that allows inspecting issues is for pro developers (you can try the trial). It's discussed in this video (mostly focused on crashes): http://www.codenameone.com/how-do-i---use-crash-protection-get-device-logs.html
From your description I would guess you created a really large mutable image (larger than the screen bounds) and are drawing onto that. This would be slow both on iOS and on newer Android devices, and it might actually produce exactly that result if the image exceeds the maximum texture size of the device.
If that is not the case you would need to explain what you are doing more precisely.
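If the large-mutable-image guess is right, here is a hedged sketch of the usual alternative: draw only the visible tiles directly in paint() instead of composing the whole map into one Image (the tile array and tile size here are hypothetical placeholders):

import com.codename1.ui.Component;
import com.codename1.ui.Graphics;
import com.codename1.ui.Image;

class TileMapView extends Component {
    private final Image[][] tiles;      // pre-loaded tile images (hypothetical)
    private static final int TILE_SIZE = 256;

    TileMapView(Image[][] tiles) {
        this.tiles = tiles;
    }

    // Paint only the tiles that intersect the visible area; nothing larger than
    // the screen is ever allocated, so the GPU texture-size limit is never hit.
    public void paint(Graphics g) {
        int firstCol = Math.max(0, getScrollX() / TILE_SIZE);
        int firstRow = Math.max(0, getScrollY() / TILE_SIZE);
        int cols = getWidth() / TILE_SIZE + 2;
        int rows = getHeight() / TILE_SIZE + 2;
        for (int r = firstRow; r < firstRow + rows && r < tiles.length; r++) {
            for (int c = firstCol; c < firstCol + cols && c < tiles[r].length; c++) {
                g.drawImage(tiles[r][c],
                        getX() + c * TILE_SIZE - getScrollX(),
                        getY() + r * TILE_SIZE - getScrollY());
            }
        }
    }
}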
I'm trying to figure out how to resize/scale an MP4 video using mp4parser in an Android app. After quite a bit of googling and looking through the mp4parser source and examples, I'm still not sure how to go about doing this.
Does mp4parser have some built in way of doing this?
If not, can I grab the raw video data and resize it myself using mp4parser? (a link to an example would be awesome if possible)
NOTES:
mp4parser website https://code.google.com/p/mp4parser/
I'm willing to consider using a different library than mp4parser, but I'd like something with licensing similar to LGPL. In other words, I am willing to supply library source code and give credit where credit is due, but I'd rather not be forced to make my source code publicly available. (This app will eventually be commercially available).
I need this functionality to append 2 files together that have different resolutions (taken from front camera and back camera).
I have successfully used mp4parser to append 2 files of the same resolutions.
I'm pretty new to video editing.
While I've relied on Stack Overflow for many years, this is the very first question I've asked. Please be gentle; I'll gladly take constructive criticism on the proper way to ask questions here.
mp4parser will not be able to do this. To rescale a video, you must decode each frame, rescale it, and re-encode. ffmpeg (libavformat, libavcodec, swscale) can do this. As for LGPL compatibility, you may be able to achieve it for some codecs, but not all. I assume you are looking for LGPL so you can include this in a commercial app? If so, you must also license the codecs. For example, x264 is free/open-source software, but distributing the videos it creates may require you to pay MPEG-LA.
I searched for this for a long time and couldn't find any samples or examples.
But I found a working app on Google Play.
click to see
I am new to programming on Android and would like to create an app which requires the ability to scan barcodes to function the way I want it to. I found an open source library called ZXing, but after some reading around I found that it requires you to have the ZXing app on your device in order to use it. I do not want that app to be a requirement, for a couple of reasons, but the main one is that I plan on selling my app and feel that a paid app should be fully functional by itself.
A few people in other forums have mentioned that it would not be a trivial task to implement all the features contained within the ZXing library, but I do not need all the functionality it has. All I need is the ability to take GTIN-12 or EAN-13 barcodes (I think those are the types of barcodes commonly used on books, CDs, and other household items) and convert them to an arbitrary (or not arbitrary) integer. The numbers don't have to be related in any way to the product or to what the barcode is actually supposed to represent. I am not interested in using them to look up products or do anything similar to what various other applications can already do well enough.
My problem is that I don't understand how to process an image taken by the camera in such a way that would allow me to do this. For example, how would I crop out the rest of the image (everything besides the barcode itself) and measure the widths of the lines and spaces contained within it?
Try http://developer.scanlife.com/products/scanlife-sdk
You will need to register, though, and I don't know what the level of freedom with that API is. Analyzing the barcode from scratch would be no small feat, so I suggest you use one of the available options (such as ZXing, which you mentioned; yours would not be the first app that requires a barcode scanner to be installed).
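For what it's worth, the zxing-core jar can also be embedded directly in your own app so the separate scanner app isn't required. A hedged sketch (class names as I recall them from zxing-core) of decoding an EAN-13/UPC-A from a captured Bitmap:

import android.graphics.Bitmap;

import com.google.zxing.BarcodeFormat;
import com.google.zxing.BinaryBitmap;
import com.google.zxing.DecodeHintType;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.NotFoundException;
import com.google.zxing.RGBLuminanceSource;
import com.google.zxing.Result;
import com.google.zxing.common.HybridBinarizer;

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

public final class Ean13Decoder {

    // Decodes an EAN-13 or UPC-A barcode from a photo; ZXing does the line-width
    // measurement internally, so there is no need to crop or measure bars yourself.
    public static String decode(Bitmap photo) throws NotFoundException {
        int[] pixels = new int[photo.getWidth() * photo.getHeight()];
        photo.getPixels(pixels, 0, photo.getWidth(), 0, 0, photo.getWidth(), photo.getHeight());

        RGBLuminanceSource source =
                new RGBLuminanceSource(photo.getWidth(), photo.getHeight(), pixels);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));

        Map<DecodeHintType, Object> hints = new EnumMap<DecodeHintType, Object>(DecodeHintType.class);
        hints.put(DecodeHintType.POSSIBLE_FORMATS,
                EnumSet.of(BarcodeFormat.EAN_13, BarcodeFormat.UPC_A));

        Result result = new MultiFormatReader().decode(bitmap, hints);
        return result.getText(); // the 12/13-digit number printed under the barcode
    }
}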
I am developing a Java ME application which uses the camera to take a snapshot and then decodes it (using the ZXing library). The target is Nokia phones.
I need to use focus to get a clear image; without it, the image is difficult to decode.
Since Series 40, the VideoControl and SnapshotControl controls have been available. I thought it would be the same for FocusControl, but it isn't.
I discovered that it is almost non-existent: not only on Series 40 (only some phones), but (more surprisingly) also on Series 60 and Symbian^3.
You can see that in Java ME API support on Nokia devices.
These mobile phones support JSR-234, but only for audio and music, not for the camera.
As you can imagine, this is very disappointing; Nokia is not doing its job well here.
Did you find any solution? Perhaps another hand-made control? I'm afraid I'll have to start programming in C++, because I haven't got much time.
The solution has been to use Nokia's APIBridge (an extensible mechanism to access device features from WRT, Flash Lite, and Java applications). You can launch the camera software installed on the phone; if it supports auto-focus it will use it, and it returns the image you take.
See Tool details for APIBridge for further details.
The implementation is easy (you install the SIS file for the APIBridge in the device, and you can package your application and this SIS file together).
You use the following code:
// Get the singleton bridge and bind it to the running MIDlet
APIBridge bridge = APIBridge.getInstance();
bridge.Initialize(midlet);
// Ask the bridge for the "new file" service, which wraps the native camera
NewFileService service = (NewFileService) bridge.createService("service.newfileservice");
// Restrict the result to images (photos taken with the camera)
Hashtable filter = new Hashtable();
filter.put("NewFileType", "Image");
// Launch the native camera (with auto-focus where available) and get back the captured file
BridgeResult res = service.TakePhoto(filter);
Many phones' hardware simply doesn't support focus. Some Sony Ericsson phones (e.g. the G502) expose FocusControl, but it doesn't let you do anything because the hardware does not support it.
I'm afraid you can probably do nothing about this problem in Java ME.
If the phone supports focus control but it is not available from Java ME, there are probably two ways to work around it:
Let the user take the photo with the built-in camera application and load it (preferably the last photo) from the Camera album.
Try to use camera focus from an S60 API.
Note that I'm not an S60 developer.
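On handsets that do expose JSR-234, probing for a FocusControl before falling back to one of the two options above might look like this (a sketch; whether setFocus actually does anything is still hardware-dependent):

import javax.microedition.amms.control.camera.FocusControl;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;

public final class CameraFocusHelper {

    // Returns true only if the handset exposes a usable JSR-234 auto-focus control.
    public static boolean tryAutoFocus(Player cameraPlayer) {
        FocusControl focus = (FocusControl) cameraPlayer.getControl(
                "javax.microedition.amms.control.camera.FocusControl");
        if (focus == null || !focus.isAutoFocusSupported()) {
            return false; // no focus control; fall back to the built-in camera app
        }
        try {
            focus.setFocus(FocusControl.AUTO);
            return true;
        } catch (MediaException e) {
            return false; // reported as supported but still failed on this handset
        }
    }
}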
I am looking to create a video training program which records video from the webcam, captures the user's screen, and captures sound. The main problem is that I need a cross-platform (Mac and Windows) solution.
I know it's possible to use Flash to record webcam + audio, but it's not possible to record the user's screen via Flash.
So I am wondering if I should use Java (which I believe will work on Mac and Windows). I do not want to develop two separate versions because of the cost involved.
Please guide me as I am new to this.
Thank you.
UPDATE
Hello again,
I had a look at the following sites: www.screencast-o-matic.com and www.screentoaster.com. I see that they have developed a Java applet which interacts with Windows/Mac to record the screen.
I am wondering how to go about developing something like that and integrating it with Flash (for webcam and audio recording).
Is this a better idea?
This is not an answer to your question, but I strongly recommend against using video for educational programmes. Our company delivers university courses on-line, and we long ago learned that video feeds are only effective under particular scenarios. In general, a talking head is a waste of bandwidth. You're much better off to put together a well designed powerpoint presentation, record a voice-over (and edit it!) and then assemble the whole thing as a flash presentation. This is a non-trivial amount of work, but it provides a much more interesting product for the student.
When to use video:
1) When you are demonstrating something dynamic - Mechanics or Chemistry for example.
2) When you are acting out a scenario or case as an illustration -- For example, threat de-escalation techniques for high school teachers.
When you solve the screen recording problem, seriously consider whether you need full motion or if you can get away with stills. Often the motion is distracting, and a still with good voice over can be more effective. (Hint: Replace mouse pointers with something HUGE before recording -- Like Fox did with hockey pucks)
Try CamStudio. I don't know if it works on Mac, but on Windows it's the best solution I know of. It's open source, so you can use its source code if you want to :)
If you're looking to build an application that does all of the recording and screen capture itself, then you might consider using Adobe AIR (essentially, Flash running on the desktop) in combination with Merapi. Merapi is essentially a bridge between Adobe AIR and Java. So for example, for your project, you might use Java to handle the lower-level (but still cross-platform) stuff you can't do natively in AIR, and use Merapi to wire the Java application to your AIR UI.
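For the "lower-level but still cross-platform" part, here is a minimal sketch using the standard java.awt.Robot class to grab one screen frame; a real recorder would call this in a loop and hand the frames to an encoder:

import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public final class ScreenGrabber {

    // Captures the primary screen once and writes it out as a PNG.
    public static void captureToFile(File out) throws AWTException, IOException {
        Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
        Robot robot = new Robot();
        BufferedImage frame = robot.createScreenCapture(
                new Rectangle(0, 0, screen.width, screen.height));
        ImageIO.write(frame, "png", out);
    }
}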
This is by no means a simple project; let's get that said and out of the way. There are open-source (and cross-platform) options for each element, but nothing I know of that will do everything for you.
I think the "cleanest" option would be to use Flash for the webcam and audio, as you said, and run a VNC server to send the screen video... The only platform-specific code would be the VNC launching code, and that should be pretty simple to maintain!
That raises a problem, because most people are behind NAT firewalls these days and setting up port forwarding is a pain in the behind. I've used an app called Gitso before, which allows people to connect to me and send their desktop to my screen (for tech support). It's VNC-based, and all it really does is add another layer on top of the VNC connection, so rather than me connecting to them, they connect to me. That makes the whole business of port forwarding a non-issue.
And once you've recorded everything, there's the final issue of syncing it all back together... Might not be so hard.
Well, Camtasia provides a way to get this done. It can record the on-screen activity as well as the webcam video and put them in the same player template. Another screen recorder, DemoCreator, can publish the screen recording as a Flash movie, but it cannot record the webcam.