I'm developing a sort of remote desktop application, where the Android client is able to show the screen of the desktop (and control it as well).
The problem is streaming the video. My solution was to capture screenshots using the Java Robot class, resize them according to the target device resolution, and send them over DatagramPackets to the Android client. Then I found out that raw images were too big to be sent in a single UDP packet (because they were over 62KB), and so it would give me one of these:
IOException: Message too long
So I compressed the images to JPEG with a quality setting of 0.25, which finally got the frame sizes reasonably below 62KB. But even then, an image sometimes gets too large and the frame is not sent. The code for this at present is:
@Override
public void run() {
while (running) {
try {
BufferedImage screenshot = robot.createScreenCapture(
new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
BufferedImage resizedSS = Scalr.resize(screenshot,
Scalr.Method.BALANCED, screenSizeX, screenSizeY);
byte[] imageBuff = compressToJPEG(resizedSS, 0.25f);
DatagramPacket packet = new DatagramPacket(imageBuff, imageBuff.length, address, port);
socket.send(packet);
} catch (IOException e) {
LOGGER.log(Level.SEVERE, "LiveScreenSender.run: error while sending screenshot");
e.printStackTrace();
break;
}
}
}
The code for JPEG compression (note: this is a modified version of code I found somewhere on SO, and I don't fully understand how it works):
private static byte[] compressToJPEG(BufferedImage image, float quality) throws IOException {
    // Grab the first available JPEG writer
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    // Write into an in-memory buffer instead of a file
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ImageOutputStream ios = ImageIO.createImageOutputStream(out);
    writer.setOutput(ios);
    // Compress to the given quality (0.0f = smallest file, 1.0f = best quality)
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(quality);
    writer.write(null, new IIOImage(image, null, null), param);
    ios.close();
    writer.dispose();
    return out.toByteArray();
}
The client (Android) part is also fairly similar:
@Override
public void run() {
while (running) {
try {
socket.receive(dataBuffPacket);
if (dataBuffPacket.getLength() == 0) break;
Bitmap bitmap = BitmapFactory.decodeByteArray(dataBuffPacket.getData(), 0, dataBuffPacket.getLength());
onFrameReceive(bitmap);
} catch (IOException e) {
e.printStackTrace();
}
}
}
Now, if UDP packet lengths are limited, what is the conventional way of doing live video streaming over UDP? I need some suggestions on which direction to go in to solve this; any library recommendations would be most appreciated. As far as I've seen on the internet, most questions and blog posts deal with streaming content stored on a server, not video generated in real time frame by frame.
My knowledge of networking and A/V playback is fairly limited, so I would also appreciate good tutorials or links where I could learn about these topics.
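For what it's worth, here is a rough, untested sketch of the kind of thing I imagine might be needed: splitting one JPEG frame across several datagrams with a small reassembly header. The 60KB chunk size and the 12-byte (frameId, chunkIndex, chunkCount) header are arbitrary choices for illustration, not working code from my project.
// Hypothetical sketch only: split one JPEG frame into chunks of at most MAX_CHUNK
// bytes, each prefixed with (frameId, chunkIndex, chunkCount) so the receiver can
// reassemble the frame. Uses java.nio.ByteBuffer; socket, address and port are the
// same fields used in the run() method above.
private static final int MAX_CHUNK = 60 * 1024;

private void sendFrame(byte[] jpegBytes, int frameId) throws IOException {
    int chunkCount = (jpegBytes.length + MAX_CHUNK - 1) / MAX_CHUNK;
    for (int i = 0; i < chunkCount; i++) {
        int offset = i * MAX_CHUNK;
        int length = Math.min(MAX_CHUNK, jpegBytes.length - offset);
        ByteBuffer chunk = ByteBuffer.allocate(12 + length);
        chunk.putInt(frameId).putInt(i).putInt(chunkCount);
        chunk.put(jpegBytes, offset, length);
        socket.send(new DatagramPacket(chunk.array(), chunk.position(), address, port));
    }
}
The receiving side would then buffer chunks per frameId and decode the JPEG once all chunkCount pieces have arrived, simply dropping incomplete frames since UDP gives no delivery guarantee.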
Related
I am developing a Swing application that displays profile information along with a photo. After loading about 120 photos I get the exception
java.lang.OutOfMemoryError: Java heap space
I need to display around 1000+ profiles.
This is how I load my images into the JTable:
try{
byte[] imageAsByteArray = getImageAsByteArray("E:\\Project\\WinPak\\Database\\UserImage\\"+employee.getLink3().trim()+"-1.jpg");
if(imageAsByteArray != null)
{
InputStream imageInputStream =new ByteArrayInputStream(imageAsByteArray);
Image img = ImageIO.read(imageInputStream);
ImageIcon imgIcon = new ImageIcon(img.getScaledInstance(100,100,Image.SCALE_AREA_AVERAGING));
data[index][10] = imgIcon; // data is an Object[][] that I use to populate the JTable
imageInputStream.close();
}
}
catch(Exception e)
{
e.printStackTrace();
}
public byte[] getImageAsByteArray(String url)
{
try
{
InputStream is = new BufferedInputStream(new FileInputStream(url));
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
int nRead;
byte[] data = new byte[16384];
while ((nRead = is.read(data, 0, data.length)) != -1) {
buffer.write(data, 0, nRead);
}
buffer.flush();
is.close();
return buffer.toByteArray();
}
catch(Exception e)
{
e.printStackTrace();
return null;
}
}
How do I overcome this problem? Is there any other way to display the information in Swing?
If you are not aware of the -Xmx command-line argument to Java, you should learn about it and try it as a short-term workaround. See the documentation here.
The larger issue, though, is whether you really need to keep all of these images in memory at the same time. For a table that displays images, you might want to load an image only if it is currently supposed to be displayed, or is in a row near what is being displayed.
I don't see anything obviously wrong with the way that you are reading the images, but I haven't worked with I/O of images in Java much, so perhaps that could be improved too.
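To make the lazy-loading idea concrete, here is a rough sketch (my illustration, not your code) of a small, bounded thumbnail cache that the model or renderer can call into, so off-screen rows hold no image data; the cache limit of 50 and the getThumbnail helper name are assumptions.
// Hypothetical sketch: keep only a bounded number of thumbnails in memory and load
// them on demand. Assumes java.util.LinkedHashMap/Map, javax.imageio.ImageIO,
// javax.swing.ImageIcon and java.awt.Image are imported.
private final Map<String, ImageIcon> thumbCache =
        new LinkedHashMap<String, ImageIcon>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, ImageIcon> eldest) {
                return size() > 50; // evict the least recently used thumbnail
            }
        };

private ImageIcon getThumbnail(String path) {
    ImageIcon icon = thumbCache.get(path);
    if (icon == null) {
        try {
            Image full = ImageIO.read(new File(path));
            icon = new ImageIcon(full.getScaledInstance(100, 100, Image.SCALE_SMOOTH));
            thumbCache.put(path, icon);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return icon;
}
Your renderer (or getValueAt) would call getThumbnail(path) instead of storing an ImageIcon for every row up front.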
In outline,
Create thumbnail-sized copies of your images, as shown here and here, to make instances of ImageIcon; use a background thread, as shown here, to keep the GUI responsive.
In your TableModel, return ImageIcon.class from getColumnClass() for the image column; the default renderer will display it automatically (a rough sketch follows this outline).
Use an adjacent component or popup to display the full size image on demand.
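A rough sketch of that TableModel idea (illustrative only; the column index 10 simply mirrors the data[index][10] assignment in the question, and the class name is made up):
// Hypothetical sketch: a minimal model whose image column reports ImageIcon.class,
// so JTable's default renderer draws the icon without a custom renderer.
// Assumes javax.swing.table.AbstractTableModel and javax.swing.ImageIcon.
class ProfileTableModel extends AbstractTableModel {
    private static final int IMAGE_COLUMN = 10; // assumed image column, as in data[index][10]
    private final Object[][] data;

    ProfileTableModel(Object[][] data) { this.data = data; }

    @Override
    public Class<?> getColumnClass(int column) {
        return column == IMAGE_COLUMN ? ImageIcon.class : Object.class;
    }

    @Override
    public int getRowCount() { return data.length; }

    @Override
    public int getColumnCount() { return IMAGE_COLUMN + 1; }

    @Override
    public Object getValueAt(int row, int column) { return data[row][column]; }
}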
I'm trying to build an application where a client sends its screen to the server. The client only sends its screen if there is a difference between the last sent screen and the latest captured screen (so that the program is easy on the network), and the server uses a JFrame and JLabel to display the image. But after a minute or two the server gives a java.lang.OutOfMemoryError: Java heap space.
Please consider my code
public void go() throws Exception
{
s=new Socket("127.0.0.1",5000);
remoteIP = s.getInetAddress();
remoteIPOnly = remoteIP.toString().split("\\/");
frame=new JFrame(remoteIPOnly[1]);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
InputStream iss=s.getInputStream();
ObjectInputStream is=new ObjectInputStream(iss);
JLabel a=new JLabel();
while(!s.isClosed())
{
if((ImageIcon)is.readObject()!=null)
{
System.out.println("I got here");
imageIcon=(ImageIcon) is.readObject();
image=imageIcon.getImage();
rendered = null;
if (image instanceof RenderedImage)
{
rendered = (RenderedImage)image;
}
else
{
buffered = new BufferedImage(
imageIcon.getIconWidth(),
imageIcon.getIconHeight(),
BufferedImage.TYPE_INT_RGB
);
g = buffered.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
rendered = buffered;
}
frame.setSize(rendered.getWidth(),rendered.getHeight());
a.setIcon(imageIcon);
frame.add(a);
frame.setVisible(true);
}
}
}
And here is my other piece of code; it also shows the same problem. Please help me optimize it.
while(true)
{
s=serversocket.accept();
os=s.getOutputStream();
oss=new ObjectOutputStream(os);
image1=r.createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
imageicon=new ImageIcon(image1);
oss.writeObject(imageicon);
while(!s.isClosed()){
image2=r.createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
b=checkIfImagesAreEqual(image1,image2);
System.out.println(b);
if(b==false){
image1=image2;
imageicon1=new ImageIcon(image2);
oss.writeObject(imageicon1);
oss.flush();
}
Can anyone tell me whether my logic for this purpose is correct, why I am getting the java.lang.OutOfMemoryError: Java heap space, and whether extending the heap size will help, given that I'm planning for more than one client to be able to connect to the server?
Sorry if my question is dumb; any help will be appreciated. Thank you.
You keep adding the label to the frame. Shouldn't you just add it once?
There is another problem: if((ImageIcon)is.readObject()!=null) will read an image and then lose it. You should instead keep the result and not read it again inside the if block. For instance:
if((imageIcon = (ImageIcon)is.readObject()) != null)
You need to manually signal for garbage collection after disposing of large objects. General optimization will also help.
I would recommend (in your client):
Create the frame
Create a BufferedImage of sufficient size to draw incoming images.
Add said BufferedImage to an ImageIcon
Add said ImageIcon to your JLabel
Add said JLabel to said frame
Size your frame
Display the frame
Then in your loop
Read in an Image (only the Image, not an ImageIcon)
Get the above BufferedImage's graphics context
Draw the received image to said context.
Erase pointer to image
call System.gc();
See how that goes. Oh, and if your inbound images are of varying sizes, you'll probably want to wipe the BufferedImage before drawing on it again, or else you'll get funky borders :-)
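Put together, a rough sketch of that client structure might look like the following (my illustration, not a drop-in fix: it assumes the frames arrive as JPEG byte arrays, reuses the jpegBytesToBufferedImage helper shown further down, and the 1280x720 canvas size is arbitrary).
// Hypothetical sketch: build the Swing components and the backing BufferedImage once,
// then only draw incoming frames into that shared image inside the loop.
void runClient(Socket s, ObjectInputStream is) throws Exception {
    JFrame frame = new JFrame("Remote screen");
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    BufferedImage canvas = new BufferedImage(1280, 720, BufferedImage.TYPE_INT_RGB); // assumed size
    JLabel label = new JLabel(new ImageIcon(canvas));
    frame.add(label);   // add the label once, outside the loop
    frame.pack();
    frame.setVisible(true);

    while (!s.isClosed()) {
        BufferedImage incoming = jpegBytesToBufferedImage((byte[]) is.readObject());
        Graphics2D g = canvas.createGraphics();
        g.drawImage(incoming, 0, 0, null); // draw into the shared canvas
        g.dispose();
        label.repaint();                   // repaint the label that wraps the canvas
    }
}
The incoming reference goes out of scope on every iteration, which covers the "erase pointer to image" step from the outline.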
As for your server, it looks all right; I'd just ditch the ImageIcons and pass plain old Images instead. Something like this:
Begin loop
accept socket (as you're doing)
get object output stream (as you're doing)
get image1 (as you're doing)
write image1
Begin inner loop
get image2 (as you're doing)
compare images (as you're doing)
if(!b) (b==false works; it's just a somewhat odd way to write it)
image1 = image2
System.gc() (because that previous assignment removed the pointer to the old image1)
write image1
flush stream (as you're doing)
And if a perfectly clear image isn't mandatory, you might also consider compressing the image before sending it and just dealing with it as a byte array; you'd send much less data over the wire.
public static byte[] bufferedImageToJPEGBytes(BufferedImage bi){
try{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(bi, "jpg", baos);
return baos.toByteArray();
} catch (IOException e){
return null;
}
}
public static BufferedImage jpegBytesToBufferedImage(byte[] bytes){
try{
ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
return ImageIO.read(bais);
} catch (IOException e){
return null;
}
}
and then just use
oos.writeObject(bufferedImageToJPEGBytes(image1)); //server side
and
image = jpegBytesToBufferedImage((byte[]) ois.readObject()); //client side
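And, purely as a sketch of how those pieces might fit together in your server's inner loop (the oss.reset() call is my addition, not something from your code; without it the ObjectOutputStream keeps a reference to every frame it has already written):
// Hypothetical revised inner loop for the server, using the byte-array helpers above;
// r, s, oss, image1 and checkIfImagesAreEqual are the ones from the question.
while (!s.isClosed()) {
    BufferedImage image2 = r.createScreenCapture(
            new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
    if (!checkIfImagesAreEqual(image1, image2)) {
        image1 = image2;                                   // keep only the latest frame
        oss.writeObject(bufferedImageToJPEGBytes(image1)); // send compressed bytes
        oss.flush();
        oss.reset(); // let previously written frames become eligible for GC
    }
}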
I've been searching for solutions on the internet, yet I still haven't found an answer to my problem.
I've been working on a program that gets an image file from my PC, edits it using Java Graphics to add some text/objects/etc., and then saves the newly modified image using Java ImageIO.
So far I have been able to do it nicely, but I have a problem with the size of the image: the original image and the modified image don't have the same size.
The original is a 2x3 inch image, while the modified one, which is supposed to be 2x3 inches too, sadly came out at 8x14 inches. So it has become BIGGER than the original.
What is the solution/code that would give me a 2x3 inch output image that still has 'nice quality'?
UPDATE:
So, here's the code I used.
public Picture(String filename) {
try {
File file = new File(filename);
image = ImageIO.read(file);
width = image.getWidth();
}
catch (IOException e) {
throw new RuntimeException("Could not open file: " + filename);
}
}
private void write(int id) {
try {
ImageIO.write(image, "jpg", new File("newpic.jpg"));
} catch (IOException e) {
e.printStackTrace();
}
}
2nd UPDATE:
I now know what the problem with the new image is. When I check it in Photoshop, it has a different image resolution than the original: the original is 300 pixels/inch while the new image is 72 pixels/inch.
How will I be able to change the resolution using Java?
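For reference, the pixels-per-inch value lives in the JPEG's JFIF/EXIF metadata rather than in the pixel data, so presumably the fix involves writing that metadata explicitly. Below is a rough, untested sketch of writing the image through an ImageWriter with the JFIF density set to 300 dpi; the writeWithDpi name and the reuse of newpic.jpg are just for illustration.
// Hypothetical sketch: write the JPEG via an ImageWriter and set the JFIF density
// fields in the image metadata. Needs javax.imageio.*, javax.imageio.metadata.IIOMetadata,
// javax.imageio.stream.ImageOutputStream and org.w3c.dom.Element.
private void writeWithDpi(BufferedImage image, File output, int dpi) throws IOException {
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    ImageWriteParam param = writer.getDefaultWriteParam();
    IIOMetadata metadata = writer.getDefaultImageMetadata(
            ImageTypeSpecifier.createFromRenderedImage(image), param);

    // The JPEG native metadata tree contains an app0JFIF node with the density fields.
    Element tree = (Element) metadata.getAsTree("javax_imageio_jpeg_image_1.0");
    Element jfif = (Element) tree.getElementsByTagName("app0JFIF").item(0);
    jfif.setAttribute("resUnits", "1"); // 1 = dots per inch
    jfif.setAttribute("Xdensity", Integer.toString(dpi));
    jfif.setAttribute("Ydensity", Integer.toString(dpi));
    metadata.setFromTree("javax_imageio_jpeg_image_1.0", tree);

    try (ImageOutputStream ios = ImageIO.createImageOutputStream(output)) {
        writer.setOutput(ios);
        writer.write(null, new IIOImage(image, null, metadata), param);
    } finally {
        writer.dispose();
    }
}
It would be called as writeWithDpi(image, new File("newpic.jpg"), 300) in place of the plain ImageIO.write call above.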
I am resizing many JPEG images using Apache Sanselan, which also deals with CMYK colors.
I have a problem when trying to convert JPEG images that have an alpha channel: the result is an image with different colors, and I guess that Java somehow handles this type of image as a different color format.
As I said, RGB resizing works fine, as does CMYK; ARGB images turn out with different colors.
Any suggestions? Can I somehow force it to ignore the alpha channel and handle the image as an RGB image, or convert it to an RGB image without losing the real colors?
The code that handles this image is:
ImageInputStream stream = ImageIO.createImageInputStream(file);
Iterator<ImageReader> iter = ImageIO.getImageReaders(stream);
while (iter.hasNext()) {
ImageReader reader = iter.next();
reader.setInput(stream);
BufferedImage image = null;
ICC_Profile profile = null;
try {
image = reader.read(0);
} catch (IIOException e) {
... (CMYK conversion if needed)
}
return image;
}
return null;
Thanks in advance
I found a good solution here (first solution worked great):
problem using ImageIO.write jpg file
Edit:
There is a new open source library which supports CMYK processing.
All you need to do is add the dependency to your project, and a new reader will be added to the list of readers (whereas the standard JPEGImageReader can't deal with CMYK).
You will probably want to iterate over these readers and read the image using the first reader that doesn't throw an exception.
This package is a release candidate, but I am using it and it solved a huge problem that we had a hard time dealing with.
http://mvnrepository.com/artifact/com.twelvemonkeys.imageio/imageio-jpeg/3.0-rc5
You can do the iteration this way to get the BufferedImage; once you have that, the rest is easy (you can use any existing image-conversion package to save it in another format):
try (ImageInputStream input = ImageIO.createImageInputStream(source)) {
// Find potential readers
Iterator<ImageReader> readers = ImageIO.getImageReaders(input);
// For each reader: try to read
while (readers != null && readers.hasNext()) {
ImageReader reader = readers.next();
try {
reader.setInput(input);
BufferedImage image = reader.read(0);
return image;
} catch (IIOException e) {
// Try next reader, ignore.
} catch (Exception e) {
// Unexpected exception. do not continue
throw e;
} finally {
// Close reader resources
reader.dispose();
}
}
// Couldn't resize with any of the readers
throw new IIOException("Unable to resize image");
}
I have a servlet to convert and cache smaller versions of photographs. It is implemented using java.awt.image + javax.imageio and a third-party resample filter. The originals are all uploaded with an sRGB color profile. When I resample them and save them again they are still in sRGB; however, this is not recorded in the saved file.
How can I make sure this information is saved in the file?
In case you wondered whether it makes a difference: images without a profile are much more saturated on my screen (Safari + OS X + calibrated screen) than when they have the correct sRGB profile. Also, I'm sure it's the missing profile information and not the resampling algorithm.
It turns out it is enough to include an EXIF tag ColorSpace=1 that says the image should be processed as sRGB. I succeeded in doing this using Apache Commons Sanselan. This library is unfortunately not complete, so it can only be used to modify the EXIF data after the file has been created.
Relevant code, based on Sanselan example:
public void addExifMetadata(File jpegImageFile, File dst)
throws IOException, ImageReadException, ImageWriteException {
OutputStream os = null;
try {
TiffOutputSet outputSet = new TiffOutputSet();
TiffOutputField colorspace = TiffOutputField.create(
TiffConstants.EXIF_TAG_COLOR_SPACE, outputSet.byteOrder, new Integer(1));
TiffOutputDirectory exifDirectory = outputSet.getOrCreateExifDirectory();
exifDirectory.add(colorspace);
os = new FileOutputStream(dst);
os = new BufferedOutputStream(os);
new ExifRewriter().updateExifMetadataLossless(jpegImageFile, os, outputSet);
os.close();
os = null;
} finally {
if (os != null)
try {
os.close();
} catch (IOException e) {
}
}
}