Desktop TCP Streaming (Java)

I want to stream desktop screen captures over sockets.
I didn't know the exact way to do this, so I went with AWT's Robot :)
Robot robot = new Robot();
BufferedImage image = robot.createScreenCapture(screenRectangle);
The problem is that the images coming from the Robot are too large to stream.
A 1440x900 capture is about 0.3 MB, and I can't transfer frames fast enough to produce a smooth 24 fps stream.
Currently I'm using a TCP socket, because I had problems cutting the image into multiple parts and sending them over UDP.
Probably this isn't the right method, but what is? How are HD video streams transferred?
Thanks in advance

I think you'll need an external library to create video (it may be platform dependent).
The approach with images is simple, but you'll need to send every frame in full. With a video codec the size is smaller because it sends only some frames at full size; the others contain only the changed parts of the picture.
See here:
http://en.wikipedia.org/wiki/Key_frame
http://en.wikipedia.org/wiki/I-frame
Here are some open-source libs I just googled:
https://code.google.com/p/java-screen-recorder/
http://www.xuggle.com/xuggler/
I think you can also find some libs that create a video stream from images...

How are HD video streams transferred?
Typically as a video stream, which a 'group of images' is not. Video codecs often have clever ways to compress groups of images further, e.g. by only encoding the part of the next frame that differs from the previous one.
You might also want to look into encoding the images as a high compression JPEG.
Having said that, I doubt you'll get a very good transfer rate at that size in pixels.

Related

Capturing SurfaceView into video file based on google/grafika examples

I would like to load video from a file, apply some transformation to it and render it back into a file. The transformation is mainly two videos overlapping, with one of them shifted in time. Grafika has some examples relevant to this issue: RecordFBOActivity.java contains some code for rendering a video file from a surface. I'm having trouble changing two things:
instead of rendering primitives in motion I need to render previously decoded and transformed video
I would like to render the surface to a file as fast as possible, not along with playback
My only success so far was to load an .mp4 file and add some basic seeking features to PlayMovieActivity.java. In my research I came across these examples, which also use generated video. I didn't find them very useful, because I couldn't swap the generated video for one decoded from a file.
Is it possible to modify the code of RecordFBOActivity.java so it can display video from a file instead of a generated animation?
You can try INDE Media for Mobile, tutorials are here: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
Sample code showing how to enable editing or make transformation is on github: https://github.com/INDExOS/media-for-mobile
It has transcoding/remuxing functionality in the MediaComposer class and the possibility to edit or transform frames. Since it uses the MediaCodec API internally, encoding is done on the GPU, so it is very battery friendly and works as fast as possible.

How to scale down the size and quality of a BufferedImage?

I'm working on a project, a client-server application named 'remote desktop control'. What I need to do is take a screen capture of the client computer and send this screen capture to the server computer. I would probably need to send 3 to 5 images per second. But considering that sending a BufferedImage directly will be too costly, I need to reduce the size of the images. The image quality need not be lossless.
How can I reduce the byte size of the image? Any suggestions?
You can compress it with ZIP very easily by using GZIPInputStream and its output counterpart on the other end of the socket.
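A minimal sketch of that idea, deflating a frame's raw bytes with GZIP before they go over the socket (the class and method names are mine; a real client would wrap the socket streams directly in GZIPOutputStream/GZIPInputStream instead of using byte arrays):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipFrame {

    // Deflate the raw pixel bytes of one frame.
    static byte[] compress(byte[] rawPixels) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(rawPixels);
        }
        return bos.toByteArray();
    }

    // Inflate on the receiving end of the socket.
    static byte[] decompress(byte[] gzipped) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }
}
```

Screen content with large flat areas compresses very well this way; photographic or noisy content much less so.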
Edit:
Also note that you can create delta images for transmission. You can use a "transparent color", for example magic pink (#FF00FF), to indicate that no change was made on that part of the screen. On the other side you can draw the new image over the last one, ignoring these magic pixels.
Note that if the picture already contains this color, you can change the real pink pixels to #FF00FE, for example. This is unnoticeable.
Another option is to transmit a 1-bit mask with every image (after painting the no-change pixels to an arbitrary color). For this you can pick the color that is most used in the picture, to get the best compression ratio (optimal Huffman coding).
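A sketch of the magic-pink delta described above (the #FF00FF value and the #FF00FE nudge come from the answer; the class and method names are mine):

```java
import java.awt.image.BufferedImage;

public class DeltaFrame {
    static final int MAGIC = 0xFF00FF; // "no change" marker

    // Build a delta: unchanged pixels become magic pink, real pink
    // in the source is nudged to 0xFF00FE to avoid collisions.
    static BufferedImage delta(BufferedImage prev, BufferedImage curr) {
        int w = curr.getWidth(), h = curr.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = curr.getRGB(x, y) & 0xFFFFFF;
                if (p == MAGIC) p = 0xFF00FE;
                boolean same = p == (prev.getRGB(x, y) & 0xFFFFFF);
                out.setRGB(x, y, same ? MAGIC : p);
            }
        }
        return out;
    }

    // Receiver side: paint the delta over the last frame, skipping pink.
    static void apply(BufferedImage screen, BufferedImage delta) {
        for (int y = 0; y < delta.getHeight(); y++) {
            for (int x = 0; x < delta.getWidth(); x++) {
                int p = delta.getRGB(x, y) & 0xFFFFFF;
                if (p != MAGIC) screen.setRGB(x, y, p);
            }
        }
    }
}
```

The payoff is that the delta image, being mostly one flat colour, compresses far better than the full frame.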
Vbence's solution of using a GZIPInputStream is a good suggestion. The way this is done in most commercial software (Windows Remote Desktop, VNC, etc.) is that only changes to the screen buffer are sent. So you keep a copy on the server of what the client 'sees', and with each consecutive capture you calculate what is different in terms of screen areas. Then you only send those screen areas to the client, along with their top-left coords, width and height, and update the server's copy of the client 'view' with just these new areas.
That will MASSIVELY reduce the amount of network data you use. While I have been typing this answer, only 400 or so pixels (20x20) have changed with each keystroke. On a 1920x1080 screen that is roughly 1/5,000th of the screen, so clearly worth thinking about.
The only expensive part is how you calculate the 'difference' between one frame and the next. There are plenty of libraries out there to do that cheaply, most of them very mathematical (discrete cosine transform type stuff, way over my head), but it can be done relatively cheaply.
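The diff step can be sketched much more naively than DCT-based methods: a single pass that finds the bounding rectangle of all changed pixels. Real remote-desktop software splits the screen into tiles and diffs each one, but one dirty rectangle already shows the idea (class and method names are mine):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class DirtyRect {

    // Bounding rectangle of every pixel that differs between two
    // same-sized frames, or null if nothing changed.
    static Rectangle diff(BufferedImage a, BufferedImage b) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    minX = Math.min(minX, x);
                    minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x);
                    maxY = Math.max(maxY, y);
                }
            }
        }
        if (maxX < 0) return null;
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}
```

You would then send only `b.getSubimage(r.x, r.y, r.width, r.height)` plus the coordinates, exactly as the answer describes.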
See this thread for how to encode to JPG with controllable compression/quality. The slider on the left is used to control the level.
Ultimately it would be better to encode the images directly to a video codec that can be streamed, but I am a little hazy on the details.
One way would be to use the ImageIO API:
ImageIO.write(buffimg, "jpg", new File("buffimg.jpg"));
As for the quality and other parameters, I'm not sure, but it should be possible; just dig deeper.
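The quality knob hinted at above does exist in ImageIO: instead of the one-line `ImageIO.write`, grab a JPEG ImageWriter and set an explicit compression quality (0.0f to 1.0f) through ImageWriteParam. A sketch (the `toJpeg` name is mine):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;

public class JpegQuality {

    // Encode a frame as JPEG bytes at a chosen quality level.
    static byte[] toJpeg(BufferedImage img, float quality) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality); // 0.0f = smallest, 1.0f = best
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (MemoryCacheImageOutputStream out =
                 new MemoryCacheImageOutputStream(bos)) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(img, null, null), param);
        } finally {
            writer.dispose();
        }
        return bos.toByteArray();
    }
}
```

For 3 to 5 screen captures per second, a quality around 0.3 to 0.5 is usually a reasonable trade-off between size and legibility.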

Video steganography in Java

I need to create steganographic videos (videos with data hidden in them) for my project.
I need to carry this out by extracting all the frames from a video, hiding data in selected frames by replacing bits in the LSB of the pixel color values, and then encoding all the frames to create a new video (note here that lossless formats are required, otherwise I might end up losing the hidden data).
My research motivated me to use Xuggler for manipulating videos, the PNG format to save the extracted images as it is lossless (handling them as BufferedImage objects), and AVI video files.
As of now I am able to extract all frames from a video and encode my hidden data in the LSBs.
But I am having problems creating the new AVI video file using Xuggler. When I extract the frames from the new video they lose the hidden data. I don't understand how to get this right and keep the data intact. This could be due to some lossy compression technique being used to create the new video. The size of the new video does not matter to me. I also can't find the correct codec_id to create the new video. I am extensively using the Xuggler tutorial available on the wiki.
decode and capture frames http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/ws/workingcopy/src/com/xuggle/mediatool/demos/DecodeAndCaptureFrames.java
I can post my code as required...
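The LSB embed/extract step the question describes can be sketched like this, hiding one payload bit in the least significant bit of each pixel's blue channel (class and method names are mine; as the answer notes, this only survives lossless codecs):

```java
import java.awt.image.BufferedImage;

public class LsbSteg {

    // Write each payload bit into the blue-channel LSB, one pixel per bit.
    static void embed(BufferedImage img, byte[] data) {
        int w = img.getWidth();
        for (int i = 0; i < data.length * 8; i++) {
            int bit = (data[i / 8] >> (7 - i % 8)) & 1;
            int x = i % w, y = i / w;
            int rgb = img.getRGB(x, y);
            img.setRGB(x, y, (rgb & ~1) | bit);
        }
    }

    // Read the bits back in the same pixel order.
    static byte[] extract(BufferedImage img, int numBytes) {
        byte[] out = new byte[numBytes];
        int w = img.getWidth();
        for (int i = 0; i < numBytes * 8; i++) {
            int bit = img.getRGB(i % w, i / w) & 1;
            out[i / 8] |= bit << (7 - i % 8);
        }
        return out;
    }
}
```

In practice you would prefix the payload with its length so the extractor knows how many bytes to read.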
The problem is in the algorithm you are using: since MPEG and the other common video compression techniques are lossy, you will lose data when you convert the frames back into a video stream. So with lossy video codecs you cannot use LSB techniques for steganography.
What you can do instead is change the motion vectors of the video in some way to hide the steganographic data. The problem here is that Xuggler, being a higher-level API, might not give you a way to find/alter the motion vectors of the P/B frames. FFmpeg, which Xuggler uses, does have an option to visualize motion vectors, so your best bet for a motion-vector algorithm is to alter the source code of FFmpeg, as it is an open-source project. Do reply back if you find a better way to work with motion vectors.
Well, there is a simpler video steganography method: you can refer to Real Steganography with TrueCrypt.
But if you really want to go with MPEG video compression, you can refer to the wonderful paper Steganography in Compressed Video Stream. The problem still remains extracting and manipulating the motion vectors.

Java2D thumbnails. Can I get thumbnails from the OS?

I'm developing an application in which I want to display a grid with a list of images. For each image I create an instance of a class MyImage. MyImage extends JComponent, creates a thumbnail and then draws it by overriding paintComponent(Graphics g).
All is OK, but with large images there is a lot of delay while creating the thumbnail.
Now I'm thinking that when I scan the folders for images (to build the list I mentioned above), I could create a thumbnail for each image and save it to disk, with a database record per image holding the image path and the thumbnail path. Is this a good solution to the problem?
Or is there a way to get the thumbnails the system creates for each image in the file manager? Or a more effective solution than what I'm trying?
Thank you!!
Your best bet is to use something like ImageMagick to convert the image and create the thumbnail. There's a project called JMagick which provides JNI hooks into ImageMagick, but running it as an external process works too.
Imagemagick is heavily optimized C code for manipulating images. It will also be able to handle images that Java won't and with much less memory usage.
I work for a website where we let users upload art and create thumbnails on the fly, and it absolutely needs to be fast, so that's what we use.
The following is Groovy code, but it can modified to Java code pretty easily:
public boolean createThumbnail(InputStream input, OutputStream output) {
    // pipe the image through ImageMagick's convert; nothing touches the disk
    def cmd = "convert -colorspace RGB -auto-orient -thumbnail 125x125 -[0] jpg:-"
    Process p = cmd.execute()
    p.consumeProcessErrorStream(System.out)
    p.consumeProcessOutputStream(output)
    p.out << input
    p.out.close()
    p.waitForOrKill(8000)
    return p.exitValue() == 0
}
This creates a thumbnail using pipes without actually writing any data to disk. The outputStream can be to a file if you wanted to immediately write it as well.
One way to avoid OS dependance is to use getScaledInstance(), as shown in this example. See the cited articles for certain limitations. If it's taking too long, use a SwingWorker to do the load and scale in the background.
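Given the getScaledInstance limitations those articles mention, an alternative pure-Java sketch scales with Graphics2D and bilinear interpolation, caching the result so paintComponent never has to rescale the full image (class and method names are mine):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class Thumbnails {

    // Scale so the longest side becomes maxSide, preserving aspect ratio.
    static BufferedImage thumbnail(BufferedImage src, int maxSide) {
        double scale = (double) maxSide
                / Math.max(src.getWidth(), src.getHeight());
        int w = Math.max(1, (int) Math.round(src.getWidth() * scale));
        int h = Math.max(1, (int) Math.round(src.getHeight() * scale));
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return out;
    }
}
```

For large downscales (say 4000px to 125px), doing two or three halving passes before the final draw gives noticeably better results than one big step.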
I haven't used it for the creation of thumbnails, but you may also want to take a look at the ImageIO API.

Produce a simple Video - uncompressed, frame by frame

I need an algorithm to write frames (pictures) into a file which can then be read by some video-cutting/producing software to work with.
So I have frames, and I want to feed them into a function/method.
Let's do it in Java.
How can I do this?
Is there a simple way to write video files without using any system codecs?
I just need an uncompressed video with a constant framerate (25 fps or 50 fps)
that will take my true-color pictures (2D arrays of colors), so that I can use that video in my video program to work with.
I never found any file format that fits.
Can You help me?
Greetings from Austria, and thanks. Flo.
Depending on the program you want to use to further process your movie, you can also simply create PNGs (or TGAs or BMPs) for the single frames. VirtualDub, for example, can use a sequence of images as the frames of a movie.
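A sketch of that single-frame approach: dump each frame as a numbered, lossless PNG into a folder, then let a tool like VirtualDub import the sequence at whatever framerate you choose (25 or 50 fps per the question; the class name, file-name pattern and directory handling are my assumptions):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class FrameDump {

    // Write frames as frame_000001.png, frame_000002.png, ... into dir.
    static void writeFrames(BufferedImage[] frames, File dir)
            throws IOException {
        dir.mkdirs();
        for (int i = 0; i < frames.length; i++) {
            File f = new File(dir, String.format("frame_%06d.png", i + 1));
            ImageIO.write(frames[i], "png", f); // PNG is lossless
        }
    }
}
```

Zero-padded names matter: they keep the frames in order when the importing tool sorts them lexicographically.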
The AVI container format can contain streams of uncompressed video, with many pixel formats to choose from. Have a look at http://fourcc.org/ for the RGB and YUV formats, and at http://www.alexander-noe.com/video/documentation/avi.pdf for details on the AVI file format.
