Need to understand some Java code

I'm fairly new to Android programming, but I am a quick learner.
So I found an interesting piece of code here: http://code.google.com/p/camdroiduni/source/browse/trunk/code/eclipse_workspace/camdroid/src/de/aes/camdroid/CameraView.java
And it's about live streaming from your device's camera to your browser.
But I want to know how the code actually works.
These are the things I want to understand:
1) How do they stream to the web browser? I understand that they send an index.html file to the IP address of the device (on Wi-Fi) and that file reloads the page every second. But how do they send the index.html file to the desired IP address with sockets?
2) http://code.google.com/p/camdroiduni/wiki/Status#save_pictures_frequently , Here they mention they are using video, but I am still convinced they take pictures and send them, as I don't see a MediaRecorder anywhere.
Now my question is how they keep sending AND saving those images into the SD folder (I think). I think it's done with this code, but how does it work? With Camera.takePicture() it takes too long to save and start previewing again, so that's not an option for live streaming.
public synchronized byte[] getPicture() {
    try {
        while (!isPreviewOn) wait();
        isDecoding = true;
        mCamera.setOneShotPreviewCallback(this);
        while (isDecoding) wait();
    } catch (Exception e) {
        return null;
    }
    return mCurrentFrame;
}
private LayoutParams calcResolution(int origWidth, int origHeight, int aimWidth, int aimHeight) {
    double origRatio = (double) origWidth / (double) origHeight;
    double aimRatio = (double) aimWidth / (double) aimHeight;
    if (aimRatio > origRatio)
        return new LayoutParams(origWidth, (int) (origWidth / aimRatio));
    else
        return new LayoutParams((int) (origHeight * aimRatio), origHeight);
}
private void raw2jpg(int[] rgb, byte[] raw, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = 0;
            if (yp < raw.length) {
                y = (0xff & ((int) raw[yp])) - 16;
            }
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                if (uvp < raw.length) {
                    v = (0xff & raw[uvp++]) - 128;
                    u = (0xff & raw[uvp++]) - 128;
                }
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
@Override
public synchronized void onPreviewFrame(byte[] data, Camera camera) {
    int width = mSettings.PictureW();
    int height = mSettings.PictureH();

    // API 8 and above
    // YuvImage yuvi = new YuvImage(data, ImageFormat.NV21, width, height, null);
    // Rect rect = new Rect(0, 0, yuvi.getWidth(), yuvi.getHeight());
    // OutputStream out = new ByteArrayOutputStream();
    // yuvi.compressToJpeg(rect, 10, out);
    // byte[] ref = ((ByteArrayOutputStream) out).toByteArray();

    // API 7
    int[] temp = new int[width * height];
    OutputStream out = new ByteArrayOutputStream();
    // byte[] ref = null;
    Bitmap bm = null;
    raw2jpg(temp, data, width, height);
    bm = Bitmap.createBitmap(temp, width, height, Bitmap.Config.RGB_565);
    bm.compress(CompressFormat.JPEG, mSettings.PictureQ(), out);
    /*ref*/ mCurrentFrame = ((ByteArrayOutputStream) out).toByteArray();
    // mCurrentFrame = new byte[ref.length];
    // System.arraycopy(ref, 0, mCurrentFrame, 0, ref.length);
    isDecoding = false;
    notify();
}
I really hope someone can explain these things as well as possible. That would be much appreciated.

OK, if anyone is interested, I have the answer.
The code repeatedly takes a snapshot from the camera preview using setOneShotPreviewCallback() to call onPreviewFrame(). The frame is delivered in YUV format, so raw2jpg() converts it into 32-bit ARGB for the JPEG encoder. NV21 is a YUV planar format, as described here.
getPicture() is called, presumably by the application, produces the JPEG data for the image in the private byte array mCurrentFrame, and returns that array. What happens to it afterwards is not in that code fragment. Note that getPicture() does a couple of wait()s. This is because the image acquisition code runs in a separate thread from the application's.
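As a side note, the commented-out "API 8 and above" branch in onPreviewFrame() does the same NV21-to-JPEG conversion with the built-in YuvImage class; a minimal sketch of that path (the helper name is illustrative):
// Sketch of the API 8+ path hinted at by the commented-out code above.
// Uses android.graphics.YuvImage, ImageFormat, Rect and java.io.ByteArrayOutputStream.
private byte[] nv21ToJpeg(byte[] data, int width, int height, int quality) {
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), quality, out);
    return out.toByteArray();
}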
In the main activity, the public static byte[] CurrentJPEG gets this: cameraFrame.getPicture(); in public void run(). In this web service it is sent with a socket to the desired IP.
Correct me if I'm wrong.
Now I just still wonder how the image is displayed in the browser as a picture, because you send byte data to it, right? Please check this out: http://code.google.com/p/camdroiduni/source/browse/trunk/code/eclipse_workspace/camdroid/src/de/aes/camdroid/WebServer.java
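For what it's worth, a browser will render raw JPEG bytes as a picture as long as they arrive as an ordinary HTTP response with an image/jpeg content type. A minimal sketch of that idea over a plain socket (illustrative only, not the actual WebServer.java code; the method name is made up):
// Illustrative only: serving the JPEG bytes returned by getPicture() over a plain
// java.net.Socket so that a browser can render them as an image.
void sendJpeg(Socket client, byte[] jpeg) throws IOException {
    OutputStream out = client.getOutputStream();
    String header = "HTTP/1.0 200 OK\r\n"
            + "Content-Type: image/jpeg\r\n"
            + "Content-Length: " + jpeg.length + "\r\n"
            + "\r\n";
    out.write(header.getBytes("US-ASCII"));
    out.write(jpeg);
    out.flush();
}
Presumably the index.html that reloads every second just points an img tag (or the page itself) at a URL served this way, so each reload fetches a fresh frame.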

Nothing in that code is sending any data to any URL. The getPicture() method returns a byte array, which is probably being used as an output stream in some other method/class that then funnels it to a web service through some sort of protocol (likely UDP).

Related

Create simple 2D image preview from panoramic HDR

Is there some really simple and basic code for making preview for HDR images (like getting 2D BufferedImage output or something)?
I am using this HDR image.
I tried this (it uses TwelveMonkeys), but with no success at all (it simply gets stuck/freezes at ImageReader reader = readers.next();).
I edited it a bit to suit my needs, like this, testing where it got broken/stuck/frozen... and it always happens after TEST 1, that is, TEST 2 is never reached, though no IllegalArgumentException is thrown - if I remove the if() section, then TEST 3 is never reached (I am using NetBeans IDE v12.4, Win7 x64):
public BufferedImage hdrToBufferedImage(File hdrFile) throws IOException {
    BufferedImage bi = null;
    // Create input stream
    // I WROTE DOWN THE STRING FOR THIS EXAMPLE, normally it is taken from the hdrFile
    // HDR image size is 23.7MB if it matters at all?
    ImageInputStream input = ImageIO.createImageInputStream(new File("Z:/HDR/spiaggia_di_mondello_4k.hdr"));
    try {
        // Get the reader
        Iterator<ImageReader> readers = ImageIO.getImageReaders(input);
        System.err.println("=====>>> TEST 1");
        if (!readers.hasNext()) {
            throw new IllegalArgumentException("No reader for: " + hdrFile);
        }
        System.err.println("=====>>> TEST 2");
        ImageReader reader = readers.next();
        System.err.println("=====>>> TEST 3");
        try {
            reader.setInput(input);
            // Disable default tone mapping
            HDRImageReadParam param = (HDRImageReadParam) reader.getDefaultReadParam();
            param.setToneMapper(new NullToneMapper());
            // Read the image, using settings from param
            bi = reader.read(0, param);
        } finally {
            // Dispose reader in finally block to avoid memory leaks
            reader.dispose();
        }
    } finally {
        // Close stream in finally block to avoid resource leaks
        input.close();
    }
    // Get float data
    float[] rgb = ((DataBufferFloat) bi.getRaster().getDataBuffer()).getData();
    // Convert the image to something easily displayable
    BufferedImage converted = new ColorConvertOp(null).filter(bi, new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_INT_RGB));
    return converted;
}
Well, if you don't mind the occasional extreme, hallucinogenic oversaturation of some colors here and there (I was unable to solve that issue; if anyone knows how to, please feel free to update my code), you can try this (it uses JavaHDR). I also added a bit of brightness and contrast, as all the HDR images I tested looked too dark for a preview; if you don't like that, you can remove that part from the code:
public int rgbToInteger(int r, int g, int b) {
    int rgb = r;
    rgb = (rgb << 8) + g;
    rgb = (rgb << 8) + b;
    return rgb;
}
public BufferedImage hdrToBufferedImage(File hdrFile) throws IOException {
    HDRImage hdr = HDREncoder.readHDR(hdrFile, true);
    int width = hdr.getWidth();
    int height = hdr.getHeight();
    BufferedImage bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            int r = (int) (hdr.getPixelValue(x, y, 0) * 255);
            int g = (int) (hdr.getPixelValue(x, y, 1) * 255);
            int b = (int) (hdr.getPixelValue(x, y, 2) * 255);
            bi.setRGB(x, y, rgbToInteger(r, g, b));
        }
    }
    //***** YOU CAN REMOVE THIS SMALL SECTION IF YOU FEEL THE IMAGE IS TOO BRIGHT FOR YOU
    float brightness = 2f;
    float contrast = 20f;
    RescaleOp rescaleOp = new RescaleOp(brightness, contrast, null);
    rescaleOp.filter(bi, bi);
    //*****
    return bi;
}
I can compile and run the code you posted (changing the path, obviously) without problems on my two macOS machines, testing on all the LTS Java versions (8, 11 and 17). In addition, I run code similar to this as part of the CI/CD pipeline of my project, which tests on Windows and Linux as well. I think there is something wrong with the setup in your IDE or the Java installation on your computer. I am not able to reproduce the "freeze" situation you describe...
Here is the output of running the program (I also printed the resulting BufferedImage for verification):
=====>>> TEST 1
=====>>> TEST 2
=====>>> TEST 3
image = BufferedImage@5a42bbf4: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0 IntegerInterleavedRaster: width = 1024 height = 512 #Bands = 3 xOff = 0 yOff = 0 dataOffset[0] 0
Running with the code as-is (with the NullToneMapper and no post-processing), the image looks like this, due to unnormalized values:
Running with the default/built-in tone mapper, or simply reading the image with ImageIO.read(hdrFile) as suggested in the comments, the image will look like this:
Finally, playing a bit with the code using a custom global tone mapper; param.setToneMapper(new DefaultToneMapper(0.75f)), I get a result like this:
After a long discussion with @HaraldK and his code addition, I am posting the final, correct code for this problem. It is in fact a mix of @qraqatit's code, updated a bit with @HaraldK's addition that corrects the wrong color tone mapping. Here it is:
public int rgbToInteger(int r, int g, int b) {
    int rgb = r;
    rgb = (rgb << 8) + g;
    rgb = (rgb << 8) + b;
    return rgb;
}
public BufferedImage hdrToBufferedImage(File hdrFile) throws IOException {
    HDRImage hdr = HDREncoder.readHDR(hdrFile, true);
    int width = hdr.getWidth();
    int height = hdr.getHeight();
    BufferedImage bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    float colorToneCorrection = 0.75f;
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            float r = hdr.getPixelValue(x, y, 0);
            int red = (int) ((r / (colorToneCorrection + r)) * 255);
            float g = hdr.getPixelValue(x, y, 1);
            int green = (int) ((g / (colorToneCorrection + g)) * 255);
            float b = hdr.getPixelValue(x, y, 2);
            int blue = (int) ((b / (colorToneCorrection + b)) * 255);
            bi.setRGB(x, y, rgbToInteger(red, green, blue));
        }
    }
    //MAKE THE RESULTING IMAGE A BIT BRIGHTER
    float brightness = 1.35f;
    float contrast = 0f;
    RescaleOp rescaleOp = new RescaleOp(brightness, contrast, null);
    rescaleOp.filter(bi, bi);
    return bi;
}
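For completeness, a short usage sketch for the method above (the file paths are placeholders); it simply writes the generated preview out as a PNG with ImageIO:
// Assumed usage of hdrToBufferedImage(); the paths are placeholders.
File hdrFile = new File("Z:/HDR/spiaggia_di_mondello_4k.hdr");
BufferedImage preview = hdrToBufferedImage(hdrFile);
ImageIO.write(preview, "PNG", new File("Z:/HDR/preview.png"));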

RAW image data to Android Bitmap

I've been working on an extension of a current application to stream webcam data to an Android device. I can obtain the raw image data in the form of an RGB byte array. The color space is sRGB. I need to send that array over the network to an Android client, which constructs it into a Bitmap image to display on the screen. My problem is that the color data is skewed. The arrays have the same hashcode before and after being sent, so I'm positive this isn't a data-loss problem. I've attached a sample image of how the color looks; you can see that skin tones and darker colors reconstruct okay, but lighter colors end up with a lot of yellow/red artifacts.
Server (Windows 10) code:
while (socket.isConnected()) {
    byte[] bufferArray = new byte[width * height * 3];
    ByteBuffer buff = cam.getImageBytes();
    for (int i = 0; i < bufferArray.length; i++) {
        bufferArray[i] = buff.get();
    }
    out.write(bufferArray);
    out.flush();
}
Client (Android) code:
while (socket.isConnected()) {
    int[] colors = new int[width * height];
    byte[] pixels = new byte[(width * height) * 3];
    int bytesRead = 0;
    for (int i = 0; i < (width * height * 3); i++) {
        int temp = in.read();
        if (temp == -1) {
            Log.d("WARNING", "Problem reading");
            break;
        } else {
            pixels[i] = (byte) temp;
            bytesRead++;
        }
    }
    int colorIndex = 0;
    for (int i = 0; i < pixels.length; i += 3) {
        int r = pixels[i];
        int g = pixels[i + 1];
        int b = pixels[i + 2];
        colors[colorIndex] = Color.rgb(r, g, b);
        colorIndex++;
    }
    Bitmap image = Bitmap.createBitmap(colors, width, height, Bitmap.Config.ARGB_8888);
    publishProgress(image);
}
The cam.getImageBytes() call is from an external library, but I have tested it and it works properly. Reconstructing the RAW data into a BufferedImage works perfectly, using this code:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
image.getRaster().setPixels(0,0,width,height, pixels);
But, of course, BufferedImages are not supported on Android.
I'm about to tear my hair out over this one; I've tried everything I can think of, so any and all insight would be extremely helpful!
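One thing worth double-checking here (an assumption, not a verified diagnosis): Java bytes are signed, so any component above 127 becomes negative when assigned straight to an int, and Color.rgb() then packs garbage into the pixel for bright values, which would match artifacts showing up mostly in lighter colors. Masking each byte with 0xff keeps the channels in 0..255:
// Assumption: the skew comes from signed bytes. Mask each component to 0..255
// before handing it to android.graphics.Color.rgb().
int colorIndex = 0;
for (int i = 0; i < pixels.length; i += 3) {
    int r = pixels[i] & 0xff;
    int g = pixels[i + 1] & 0xff;
    int b = pixels[i + 2] & 0xff;
    colors[colorIndex++] = Color.rgb(r, g, b);
}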

Different code (.java file) for different platforms?

I have code where image data is passed from a Bitmap to an FFmpeg frame recorder and converted to a video. But I need to make small changes when running it on an LG G3 (ARMv7) versus an Asus Zenfone 5 (x86).
Following are the class variables that create the issue (declared under class MainActivity):
inputWidth = 1024;
inputHeight = 650;
Following is the method where the issue occurs:
byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
    int[] argb = new int[inputWidth * inputHeight];
    bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
            // meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
            // pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
Working CODE:
LG G3: I can use the above variables anywhere in the code to get the required output.
Bitmap size returned = 2734200
Asus Zenfone 5: Except when creating the bitmap, I have to use bitmap.getHeight() and bitmap.getWidth() everywhere else to get the required output.
Surprisingly, here the bitmap size returned = 725760 (so it's not being set according to the bitmap parameters I specified?)
INCORRECT CODE:
LG G3: If I use bitmap.getHeight() and bitmap.getWidth(), I get java.lang.ArrayIndexOutOfBoundsException: length = 102354, index = 102354 in the getNV21 method.
Asus Zenfone 5: If I use inputWidth and inputHeight, I get
java.lang.IllegalArgumentException: x + width must be <= bitmap.width() in the getNV21 method.
How can I generalize the above code for both phones?
In cases like this you can use the Strategy pattern.
The Strategy pattern allows you to change algorithms at runtime based on your environment. Basically, you define an interface for your strategy. Something like this:
interface MyStrategy {
    byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap);
}
Then you make multiple implementations of your interface, one for LG, one for Asus and, for example, one for all other devices (device neutral):
class MyStrategyForLG implements MyStrategy {
    public byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
        // ...
    }
}
class MyStrategyForAsus implements MyStrategy {
    public byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
        // ...
    }
}
class DefaultMyStrategy implements MyStrategy {
    public byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
        // ...
    }
}
You can create a factory for MyStrategy so you can avoid use of if-else in your MainActivity. Something like this:
class MyStrategyFactory {
    public MyStrategy createMyStrategy() {
        // ...
        if (deviceIsAsus) {
            return new MyStrategyForAsus();
        }
        if (deviceIsLg) {
            return new MyStrategyForLG();
        }
        return new DefaultMyStrategy();
    }
}
In your MainActivity you can invoke your strategy like this:
// ...
MyStrategy strategy = new MyStrategyFactory().createMyStrategy();
byte[] bytes = strategy.getNV21(width, height, image);
// ...
The advantage of this approach is that you do not need to modify the calling site when you add another device, for example when you notice that Samsung is also a bit weird. Instead, you implement MyStrategyForSamsung and change the factory to return it when the code is executed on a Samsung device.
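For the deviceIsAsus / deviceIsLg checks in the factory, android.os.Build is the usual source of device information; a small sketch (the exact manufacturer strings are assumptions and should be verified on real devices):
// Illustrative: one way to fill in the deviceIsAsus / deviceIsLg checks above.
String maker = android.os.Build.MANUFACTURER.toLowerCase(java.util.Locale.ROOT);
boolean deviceIsAsus = maker.contains("asus");
boolean deviceIsLg = maker.contains("lg");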

Java - Remove pixels below a certain alpha value

I have an image with a lot of anti-aliased lines in it and I am trying to remove pixels that fall below a certain alpha-channel threshold (anything above the threshold gets converted to full 255 alpha). I've got this coded up and working; it's just not as fast as I would like when running it on large images. Does anyone have an alternative method they could suggest?
//This will convert all pixels with > minAlpha to 255
public static void flattenImage(BufferedImage inSrcImg, int minAlpha)
{
    //loop through all the pixels in the image
    for (int y = 0; y < inSrcImg.getHeight(); y++)
    {
        for (int x = 0; x < inSrcImg.getWidth(); x++)
        {
            //get the current pixel (with alpha channel)
            Color c = new Color(inSrcImg.getRGB(x, y), true);
            //if the alpha value is above the threshold, convert it to full 255
            if (c.getAlpha() >= minAlpha)
            {
                inSrcImg.setRGB(x, y, new Color(c.getRed(), c.getGreen(), c.getBlue(), 255).getRGB());
            }
            //otherwise set it to 0
            else
            {
                inSrcImg.setRGB(x, y, new Color(0, 0, 0, 0).getRGB()); //fully transparent
            }
        }
    }
}
Per @BenoitCoudour's comments I've modified the code accordingly, but it appears to be affecting the resulting RGB values of pixels; any idea what I might be doing wrong?
public static void flattenImage(BufferedImage src, int minAlpha)
{
    int w = src.getWidth();
    int h = src.getHeight();
    int[] rgbArray = src.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < w * h; i++)
    {
        int a = (rgbArray[i] >> 24) & 0xff;
        int r = (rgbArray[i] >> 16) & 0xff;
        int b = (rgbArray[i] >> 8) & 0xff;
        int g = rgbArray[i] & 0xff;
        if (a >= minAlpha) { rgbArray[i] = (255 << 24) | (r << 16) | (g << 8) | b; }
        else { rgbArray[i] = (0 << 24) | (r << 16) | (g << 8) | b; }
    }
    src.setRGB(0, 0, w, h, rgbArray, 0, w);
}
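As an aside (an observation about packed ARGB, not a verified diagnosis of the code above): in a TYPE_INT_ARGB pixel, green occupies bits 8-15 and blue occupies bits 0-7, so the extraction and the recomposition have to use the same order. A minimal sketch of the extraction in that order:
// In a packed ARGB int, green sits in bits 8-15 and blue in bits 0-7.
int a = (rgbArray[i] >> 24) & 0xff;
int r = (rgbArray[i] >> 16) & 0xff;
int g = (rgbArray[i] >> 8) & 0xff;
int b = rgbArray[i] & 0xff;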
What may slow you down is the instantiation of a Color object for every pixel.
Please see this answer to iterate over pixels in a BufferedImage and access the alpha channel: https://stackoverflow.com/a/6176783/3721907
I'll just paste the code below
public Image alpha2gray(BufferedImage src) {
    if (src.getType() != BufferedImage.TYPE_INT_ARGB)
        throw new RuntimeException("Wrong image type.");
    int w = src.getWidth();
    int h = src.getHeight();
    int[] srcBuffer = src.getData().getPixels(0, 0, w, h, (int[]) null);
    int[] dstBuffer = new int[w * h];
    for (int i = 0; i < w * h; i++) {
        int a = (srcBuffer[i] >> 24) & 0xff;
        dstBuffer[i] = a | a << 8 | a << 16;
    }
    return Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(w, h, dstBuffer, 0, w));
}
This is very close to what you want to achieve.
You have a theoretical complexity of O(n) which you optimize by performing byte manipulation.
You can go further and use threads (you have an embarrassingly parallel problem), but since most user machines have at most 8 physical threads it will not get you too far. You could add another level of optimization on top of this by processing one part of the image at a time, sized to fit the memory buffers and different cache levels in your system.
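If you do want to try threads before moving to the GPU, the loop splits naturally across rows; a minimal sketch of the body of the int[]-based flattenImage above using parallel streams (java.util.stream.IntStream), with the thresholding logic unchanged:
// Parallel variant of the int[]-based loop, split by row; minAlpha is the method parameter.
int w = src.getWidth();
int h = src.getHeight();
int[] rgbArray = src.getRGB(0, 0, w, h, null, 0, w);
java.util.stream.IntStream.range(0, h).parallel().forEach(y -> {
    for (int x = 0; x < w; x++) {
        int i = y * w + x;
        int a = (rgbArray[i] >> 24) & 0xff;
        rgbArray[i] = (a >= minAlpha ? 0xff000000 : 0x00000000) | (rgbArray[i] & 0x00ffffff);
    }
});
src.setRGB(0, 0, w, h, rgbArray, 0, w);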
Since, as I already mentioned, you have an embarrassingly parallel problem, the best solution is GPU programming.
You can follow this tutorial on simple image processing with CUDA and change the code of the filter to something like this:
// CUDA kernel adapted from the tutorial's blur filter: one thread per pixel,
// assuming 4 bytes per pixel (RGBA); the threshold is passed in as a parameter here.
__global__ void blur(unsigned char* input_image, unsigned char* output_image,
                     int width, int height, unsigned char threshold) {
    const unsigned int offset = blockIdx.x * blockDim.x + threadIdx.x;
    const int currentoffset = offset * 4;
    if (offset < width * height) {
        unsigned char output_red, output_green, output_blue, output_alpha;
        if (input_image[currentoffset + 3] >= threshold) {
            output_red   = input_image[currentoffset];
            output_green = input_image[currentoffset + 1];
            output_blue  = input_image[currentoffset + 2];
            output_alpha = 255;
        } else {
            output_red   = 0;
            output_green = 0;
            output_blue  = 0;
            output_alpha = 0;
        }
        output_image[currentoffset]     = output_red;
        output_image[currentoffset + 1] = output_green;
        output_image[currentoffset + 2] = output_blue;
        output_image[currentoffset + 3] = output_alpha;
    }
}
If you are set on using Java, there is a great answer here on how to get started using Java with an NVIDIA GPU.

Refresh BufferedImage periodically

I have a BufferedImage displayed in a JFrame which I want to refresh periodically with raw R, G, B data I receive through a Socket (byte[] buffer). The sequence of actions should look something like this:
Receive a byte[1280 * 480] array of pure RGB data (one byte per component) -> this part works flawlessly
Iterate through the byte array and call BufferedImage.setRGB(x, y, rgb) for every pixel
I have no problem receiving and displaying one frame, but when I wrap the code that does steps 1 and 2 in a loop, I receive data regularly yet not a single frame is ever shown. My guess was that receiving data is significantly faster than displaying images; in other words, my received image somehow gets overwritten by the new image, etc.
My next idea was to hand over the buffer with the image to a background thread and block the main thread, which does the network communication, until the background thread has finished 'displaying' the image.
Then I read that it can easily be done with SwingWorker here: Can a progress bar be used in a class outside main? But it does exactly the same thing as when I was doing everything on one thread: no image is ever shown. Here is my code:
public class ConnectionManager {

    public static final String serverIp = "192.168.1.10";
    public static final int tcpPort = 7;
    public static final int bufferSize = 1280;

    private Socket client;
    private BufferedInputStream networkReader;
    private PrintStream printStream;
    byte[] buffer;

    public ConnectionManager() {}

    public void connect() throws IOException {
        int dataRead;
        while (true) {
            client = new Socket(serverIp, tcpPort);
            printStream = new PrintStream(client.getOutputStream());
            networkReader = new BufferedInputStream(client.getInputStream());
            dataRead = 0;
            buffer = new byte[1280 * 480];
            printStream.println(""); // CR is code to server to send data
            while (dataRead < (1280 * 480)) {
                dataRead += networkReader.read(buffer, dataRead, (1280 * 480) - dataRead);
            }
            DrawBack d = new DrawBack();
            d.execute();
            try {
                d.get(); // here I'm purposely trying to block the main thread
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
    }

    private class DrawBack extends SwingWorker<Void, Void> {

        @Override
        protected Void doInBackground() throws Exception {
            byte Y, U, V;
            int R, G, B, RGB, Yi, Ui, Vi;
            boolean alternate = false;
            for (int i = 0; i < 480; ++i) {
                for (int j = 1; j < 1280; j += 2) {
                    if (alternate) {
                        Y = buffer[i * 1280 + j];
                        V = buffer[i * 1280 + j - 1];
                        U = buffer[i * 1280 + j - 3];
                    } else {
                        Y = buffer[i * 1280 + j];
                        U = buffer[i * 1280 + j - 1];
                        V = buffer[i * 1280 + j + 1];
                    }
                    Yi = Y & 0xFF;
                    Ui = U & 0xFF;
                    Vi = V & 0xFF;
                    R = (int) (Yi + 1.402 * (Vi - 128));
                    G = (int) (Yi - 0.34414 * (Ui - 128) - 0.71414 * (Vi - 128));
                    B = (int) (Yi + 1.772 * (Ui - 128));
                    RGB = R;
                    RGB = (RGB << 8) + G;
                    RGB = (RGB << 8) + B;
                    alternate = !alternate;
                    Masapp.window.getImage().setRGB(j / 2, i, RGB); // reference to buffered image on JFrame
                    if ((i == 100) && (j == 479)) {
                        System.out.println(Yi + " " + Ui + " " + Vi);
                    }
                }
            }
            return null;
        }
    }
}
I even tried to wait for completion with while:
DrawBack d = new DrawBack(); // DrawBack extends SwingWorker<Void, Void>
d.execute();
while(!d.isDone());
but it makes no improvement. I also tried calling BufferedImage.flush() and JFrame.invalidate() after setting all the pixels.
My question basically is: how do I refresh and display the BufferedImage periodically?
Your implementation is incorrectly synchronized in that it updates the GUI from the worker's background thread rather than the event dispatch thread. The resulting behavior is unpredictable. Instead, define a SwingWorker<BufferedImage, BufferedImage> and publish() the image for later rendering in your implementation of process(). For improved liveness, publish portions of the image as they are ready, e.g. publish() a BufferedImage containing one row at a time. Compare the example cited with this related example to see the approach.
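A minimal sketch of that shape (the class, helper, and component names are placeholders, not from the original code): doInBackground() converts frames off the EDT and publish()es them, while process() runs on the event dispatch thread and paints only the newest frame:
// Illustrative SwingWorker shape: conversion off the EDT, painting on the EDT.
// readFrameFromSocket(), convertToImage() and imagePanel are placeholders.
private class FrameWorker extends SwingWorker<BufferedImage, BufferedImage> {

    @Override
    protected BufferedImage doInBackground() throws Exception {
        BufferedImage img = null;
        while (!isCancelled()) {
            byte[] frame = readFrameFromSocket();   // blocking network read
            img = convertToImage(frame);            // YUV/RGB -> BufferedImage
            publish(img);                           // hand the frame to the EDT
        }
        return img;
    }

    @Override
    protected void process(List<BufferedImage> frames) {
        // Runs on the event dispatch thread; paint only the most recent frame.
        imagePanel.setImage(frames.get(frames.size() - 1));
        imagePanel.repaint();
    }
}
Showing only the last element of the published list keeps the display from falling behind when frames arrive faster than they can be painted.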
