Is it possible to create a touch draw area to take signatures in a form?
I am working on an application to collect information through a form, and I would like to collect signatures by letting users of the app draw a signature on the screen. How would I do this for Android? Preferably I would like the signature stored as an image to save somewhere.
Does anyone have any idea?
Many thanks,
Check out Google's API example of FingerPaint -- the full app is included in the SDK samples -- but the code for the main Java file is here.
Basically, make some small section of the view a canvas control (like in FingerPaint) and then, when the user presses done, save the Canvas as an image (using the Bitmap object attached to the Canvas).
You may want to turn rotation off for this particular view; otherwise the signature will get cleared when the device rotates -- unless you implement some way of rotating it.
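On Android this would be a custom View that collects points in onTouchEvent, draws them in onDraw, and finally renders the drawing into a Bitmap. The same capture-and-save idea, sketched in desktop Java for illustration (the class and method names here are made up for the example):

```java
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;

// Collects stroke points (as onTouchEvent would on Android) and
// renders them into an image, mirroring Canvas/Bitmap on Android.
public class SignatureCapture {
    private final List<int[]> points = new ArrayList<>();

    // Call from your touch/mouse handler with each sampled position.
    public void addPoint(int x, int y) {
        points.add(new int[] { x, y });
    }

    // Render the collected stroke into an image, like drawing the
    // View's Path into a Bitmap-backed Canvas on Android.
    public BufferedImage render(int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, width, height);
        g.setColor(Color.BLACK);
        g.setStroke(new BasicStroke(3f));
        for (int i = 1; i < points.size(); i++) {
            int[] a = points.get(i - 1);
            int[] b = points.get(i);
            g.drawLine(a[0], a[1], b[0], b[1]);
        }
        g.dispose();
        return image;
    }

    // Save as PNG; on Android you would use Bitmap.compress() instead.
    public void save(File file, int width, int height) throws IOException {
        ImageIO.write(render(width, height), "png", file);
    }
}
```

A single stroke is just line segments between consecutive samples; if you need multiple strokes, keep a list of point lists and draw each one.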
Related
How do I take a picture using Project Tango?
I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API,
which works for grabbing a frame, but the picture quality is not great. Is there a takePicture equivalent?
Note that the Java API
public void onFrameAvailable(int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        mTangoCameraPreview.onFrameAvailable();
    }
}
does not provide RGB data. If I use the Android camera to take a picture, Tango cannot sense depth. Therefore I will have to use TangoCameraPreview.
Thanks
You don't have to use TangoCameraPreview to get frames in Java. It is really just a convenience class provided to help with getting video on the screen. It appears to be implemented entirely in Java with calls to com.google.atap.tangoservice.Tango (i.e. no calls to unpublished APIs). In fact, if you look inside the Tango SDK jar file, you can see that someone accidentally included a version of the source file -- it has some diff annotations and may not be up to date, but examining it is still instructive.
I prefer not to use TangoCameraPreview and instead call Tango.connectTextureId() and Tango.updateTexture() myself to load frame pixels into an OpenGL texture that I can then use however I want. That is exactly what TangoCameraPreview does under the hood.
The best way to capture a frame in pure Java is to draw the texture at its exact size (1280x720) to an offscreen buffer and read it back. This also has the side effect of converting the texture from whatever YUV format it has into RGB (which may or may not be desirable). In OpenGL ES you do this using a framebuffer and renderbuffer.
Adding the framebuffer/renderbuffer stuff to a program that can already render to the screen isn't a lot of code - about on par with the amount needed to save a file - but it is tricky to get right when you do it for the first time. I created an Android Studio sample capture app that saves a Tango texture as a PNG to the pictures folder (when you tap the screen) in case that is helpful for anyone.
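As an aside on the YUV-to-RGB point: if you ever end up with raw YUV samples instead of going through the GL texture, the per-pixel conversion that the render-and-read-back path does for you is roughly the following BT.601-style math. This is a minimal sketch, not Tango-specific, and the exact coefficients vary slightly between conventions:

```java
// Minimal BT.601-style YUV -> RGB conversion for one pixel.
// Inputs are unsigned 8-bit Y, U, V samples (0..255); the result
// is a packed ARGB int like BufferedImage/Bitmap use.
public class YuvToRgb {
    public static int toArgb(int y, int u, int v) {
        float yf = y;
        float uf = u - 128;
        float vf = v - 128;
        int r = clamp(Math.round(yf + 1.402f * vf));
        int g = clamp(Math.round(yf - 0.344f * uf - 0.714f * vf));
        int b = clamp(Math.round(yf + 1.772f * uf));
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    // Clamp to the valid 8-bit range; the matrix can overshoot it.
    private static int clamp(int c) {
        return c < 0 ? 0 : (c > 255 ? 255 : c);
    }
}
```

Doing this on the CPU for every pixel of a 1280x720 frame is exactly the work the GPU does for free when you draw the texture into an RGB framebuffer, which is another argument for the read-back approach above.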
I'm making a desktop app in Java Swing.
In my app I do some image processing on an image, which is a 16-bit, grayscale TIFF image.
In my app the user can open images from a tree by dragging and dropping an image into a JDesktopPane.
Now, when the user has done some processing on the image, like Remove Noise or Set Contrast, and then closes it, my app should ask whether they want to save the changes to the image.
So how can I check at runtime whether the original image has been changed?
The java.awt.image.Raster contained in a BufferedImage does not override Object#equals(). This is largely because iterating over w * h pixels can get expensive: O(wh). Any optimization depends on the nature of the change. If you're only looking for global changes, such as noise or contrast, comparing a number of samples may suffice. You'll also want to profile your intended usage.
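A sketch of that full O(wh) comparison, for reference (a dirty flag set by your edit operations is usually cheaper; also note that getRGB collapses samples to 8 bits, so for a 16-bit grayscale image you would compare Raster samples instead to keep full precision):

```java
import java.awt.image.BufferedImage;

// Pixel-by-pixel comparison of two images: O(w*h), which is why
// sampling a subset of pixels or tracking a "dirty" flag scales better.
public class ImageCompare {
    public static boolean sameImage(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }
}
```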
I've done a bit of searching and I don't think this has been asked yet (if it has, then I have been searching with the wrong terminology)
I'm trying to find out how to record touch gestures/actions done on an Android touchscreen (with respect to time), then use the path of that gesture as the path for a graphic to follow (using Android's tweening capabilities).
I'm also looking to be able to save the animation so that it can be loaded later on or exported as a file.
My thought is to take the touch point's (x, y) coordinates and save the pair at a set interval of time. The coordinates can then be loaded from the file later on for an ImageView (or some other view) to be tweened across the device.
Plus, I figure a method like this could make it cross-compatible on screens of different sizes: if I saved the coordinates as percentages instead of absolute values, they could then be scaled to the screen size of the device.
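That percentage idea can be sketched in plain Java like this (the class and method names are invented for the example): store each sample as a normalized (x/width, y/height, time) triple, then scale back up for whatever screen replays it.

```java
import java.util.ArrayList;
import java.util.List;

// Records touch samples as screen-size-independent fractions so the
// same path can be replayed on a device with a different resolution.
public class GestureRecorder {
    public static class Sample {
        public final float nx, ny; // normalized 0..1 coordinates
        public final long timeMs;  // time since recording started
        Sample(float nx, float ny, long timeMs) {
            this.nx = nx; this.ny = ny; this.timeMs = timeMs;
        }
    }

    private final int width, height;
    private final List<Sample> samples = new ArrayList<>();

    public GestureRecorder(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // Call with raw pixel coordinates from each touch event.
    public void record(float x, float y, long timeMs) {
        samples.add(new Sample(x / width, y / height, timeMs));
    }

    // Convert a sample back to pixels for a (possibly different) screen.
    public float[] pixelAt(int index, int targetWidth, int targetHeight) {
        Sample s = samples.get(index);
        return new float[] { s.nx * targetWidth, s.ny * targetHeight };
    }

    public int size() { return samples.size(); }
}
```

The timestamps let you keep the original pacing when animating; interpolating between consecutive samples gives a smoother path than jumping from point to point.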
My questions are: Am I on the right track? Or would this be an inefficient way of doing it?
If this is the right idea, what is the best way of recording the positions and then using the tweening capabilities to animate the object (or is there a better method than tweening to provide a smooth animation)?
And if not, what would be the suggested solution to my problem?
All answers appreciated!
Bitwize
There is a tool to record gestures among the developer sample applications. You can find it on the emulator: GestureBuilder.
Here is a tutorial about gestures.
That said, gestures are meant to be recognized as gestures, not used as animation paths. But I believe you can extract the data from a gesture and get the path of a given single-touch gesture. Here is the main class representing a gesture.
It has a toPath() method that could be useful to you.
Best reader,
I'm stuck on one of my concepts.
I'm making a program with which classroom children can measure themselves.
This is what the program includes;
- 1 webcam (only used for a simple webcam view.)
- 2 phidgets (don't mind these.)
So, this was my plan: I'll draw a rectangle on the webcam view and make it repaint itself constantly.
When the repainting is stopped by one of the phidgets, the rectangle's value will be returned in centimeters or meters.
I've already written the code of the rectangle that's repainting itself and this was my result:
(It's a roundRectangle, the lines are kind of hard to see in this image, sorry about that.)
As you can see, the background is now simply black.
I want to set the background of this JFrame as a webcam view (if possible) and then draw the
rectangle over the webcam view instead of the black background.
I've already looked into JMF, FMJ and such, but I'm getting errors even after checking my webcam path and adding the needed JAR libraries. So I want to try other options.
So;
- I simply want to open my webcam, use it as background (yes live stream, if possible in some way).
And then draw this rectangle over it.
I'm thus wondering if this is possible, or if there's other options for me to achieve this.
Hope you understand my situation, and please ask if anything's unclear.
EDIT:
I've now got my camera to open through Java. The running camera is of type Process.
This is where I got the code for my camera to open: http://www.linglom.com/2007/06/06/how-to-run-command-line-or-execute-external-application-from-java/
I adjusted mine a little so it'll open my camera instead.
But now I'm wondering; is it possible to set a process as background of a JFrame?
Or can I somehow add the process to a JPanel and then add it to a JFrame?
I've tried several things without any success.
My program as it is now, when I run it, opens the measuring frame and the camera view separately.
But the goal is to fuse them and make the repainting-rectangle paint over the camera view.
Help much appreciated!
I don't think it's a matter of setting a webcam stream as the background for your interface. More likely, you need to create a media player component, add it to your GUI, and then overlay your rectangles on top of that component.
As you probably know from searching for Java webcam solutions in Stack Overflow already, it's not easy, but hopefully the JMF Specs and API Guide will help you through it. The API guide is a PDF and has sections on receiving media streams, as well as sample code.
I am trying to create an image of one of my JPanels at a fixed resolution, or at a resolution that may be bigger than the current screen resolution. Consequently I cannot use a simple screen capture method as it causes my image resolution to be dependent on the resolution of the screen, which the user sets. Is there a way around this?
Alternatively, is there a way to do this in openGL? Create a virtual buffer, render into it, then create an image based on that virtual space?
Just create the control; you don't need to add it to any JFrame or otherwise cause it to be displayed. You can then use the print method on it to render it to a Graphics object. You can set the size and such as you like without having to stay within the screen boundaries (as the control is never displayed on screen).
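A minimal sketch of that approach (the panel is never shown on screen, so the target size can be larger than the display):

```java
import java.awt.Dimension;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

public class OffscreenRender {
    // Renders a component at an arbitrary fixed size, independent of
    // the screen resolution, by sizing it and calling print().
    public static BufferedImage render(JPanel panel, int width, int height) {
        panel.setSize(new Dimension(width, height));
        // Lay out children even though the panel was never displayed.
        panel.doLayout();
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = image.createGraphics();
        panel.print(g); // print() avoids double-buffering artifacts
        g.dispose();
        return image;
    }
}
```

From there, ImageIO.write can save the BufferedImage at the fixed resolution you wanted.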
Look at JxCapture. It's a commercial product, but you can get a free license if you're developing an open-source (or maybe even non-commercial) project.
Check out the Screen Image class.