I am trying to take screenshots with Robot in one method, and then, in another method, write the BufferedImages produced by Robot without interrupting Robot from capturing screenshots. So far I have come up with these:
Screenshot Generation:
public static void startRecord2() {
Thread recordThread = new Thread() {
//BufferedImage[] img;
@Override
public void run() {
//int vector_index = -1 ,
int phase_counter=0;
Robot rt;
BufferedImage[] img = null;
try {
rt = new Robot();
record = true;
int buffer_index = 0;
//boolean newCreatorStarted = false;
long very_start = System.currentTimeMillis(), phase_start = 0, phase_end = 0;
while (record) { //(cnt == 0 || record) {
if(buffer_index==0){
img = new BufferedImage[max_limit];
phase_start = System.currentTimeMillis();
phase_counter++;
}
//System.out.println("total_frame_created = "+total_frame_created);
img[ buffer_index++ ] = rt.createScreenCapture(new Rectangle(screenWidth,screenHeight));
total_frame_created++;
//ImageIO.write(img, "jpeg", new File("./"+store+"/"+ System.currentTimeMillis() + ".jpeg"));
if(buffer_index==max_limit||!record) {
buffer_index=0;
CreateImage(img, phase_counter);
img = null;
System.gc();
phase_end = System.currentTimeMillis();
System.out.println("Time taken in phase #"+phase_counter+" = "+ String.valueOf((phase_end-phase_start)/1000.0));
}
}
long very_end = System.currentTimeMillis();
System.out.println("Time taken to capture "+total_frame_created+" shots = "+ (very_end-very_start)/1000 );
} catch (Exception e) {
e.printStackTrace();
}
}
};
recordThread.start();
}
ImageWriting:
public static void CreateImage(BufferedImage[] img, int phase){//, Thread capturerThread) {
Thread imageCreatorThread = new Thread(){
@Override
public void run(){
int index = 0;
while(index<max_limit){
try {
if(img[index]!=null) {
ImageIO.write(img[index++ ], "png", new File("./"+store+"/"+phase+"_"+index+".png"));
img[index-1]=null;
}else{
index-=1;
break;
}
} catch (IOException ex) {
Logger.getLogger(Recorder.class.getName()).log(Level.SEVERE, null, ex);
}
}
System.gc();
System.err.println("\t\t\tWritten "+index+" images to disk");
total_image_created+=index;
}
};
imageCreatorThread.start();
}
What I am doing here is: after some number of frames (I tried to denote it with max_limit, set to 30), the record method calls the CreateImage method, which starts another thread to process the BufferedImages it received through the parameter. Also, I want the recorder method to keep running and take screenshots continuously (or after some interval). But the problem is, always after capturing 60 or so frames (sometimes 61), a memory error occurs for Robot:
Exception in thread "Thread-1" java.lang.OutOfMemoryError: Java heap space
at sun.awt.windows.WRobotPeer.getRGBPixels(WRobotPeer.java:64)
at java.awt.Robot.createScreenCapture(Robot.java:444)
at Recorder$2.run(Recorder.java:124)
Written 60 images to disk
I tried changing the heap memory to 512m and 1024m. I also tried setting the BufferedImage array used in startRecord2 to null and calling the garbage collector (I don't know if that can work). Nothing worked. What can I do to make my approach work?
Thanks in advance.
It looks like GC cannot reclaim memory the way your code is written.
Before resolving the issue, you first have to find the code that is causing the leak.
Increasing the heap size will not resolve the issue, since it is almost certainly a memory leak.
You can use any one of the tools below to find the leak:
1) Java Monitor
2) JConsole
3) VisualVM etc.
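One likely contributor is that the capture loop allocates a fresh BufferedImage[max_limit] for each phase while the writer thread may still be holding the previous array, so two phases' worth of full-screen frames can be live at once. A bounded BlockingQueue gives backpressure instead: the capture thread blocks when the writer falls behind, capping memory use. A minimal sketch of the pattern (it simulates capture with small blank images so it runs headless; with a display, the capture loop would call robot.createScreenCapture instead):

```java
import java.awt.image.BufferedImage;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedCapture {
    // Bounded queue: the capture thread blocks in put() when the writer
    // falls behind, so at most CAPACITY frames are ever held in memory.
    static final int CAPACITY = 8;
    // Sentinel frame telling the writer that capture has finished.
    static final BufferedImage POISON = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);

    public static int run(int frames) throws InterruptedException {
        BlockingQueue<BufferedImage> queue = new ArrayBlockingQueue<>(CAPACITY);
        AtomicInteger written = new AtomicInteger();

        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    BufferedImage img = queue.take();
                    if (img == POISON) break;       // capture is done
                    // ImageIO.write(img, "png", ...) would go here
                    written.incrementAndGet();
                }
            } catch (InterruptedException ignored) { }
        });
        writer.start();

        // Stand-in for rt.createScreenCapture(...): one blank frame per iteration.
        for (int i = 0; i < frames; i++) {
            queue.put(new BufferedImage(64, 48, BufferedImage.TYPE_INT_RGB));
        }
        queue.put(POISON);
        writer.join();
        return written.get();
    }
}
```

Here the queue capacity, not the heap size, bounds how many frames exist at once; tune CAPACITY to trade latency for memory.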
Related
I have a file A.txt of 100,000,000 records, numbered 1 to 100000000, one record per line. I have to read file A and then write to files B and C, such that even lines go to file B and odd lines go to file C.
The required read-and-write time must be less than 40 seconds.
Below is the code I already have, but its runtime is more than 50 seconds.
Does anyone have another solution to reduce the runtime?
Threading.java
import java.io.*;
import java.util.concurrent.LinkedBlockingQueue;
public class Threading implements Runnable {
LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
String file;
Boolean stop = false;
public Threading(String file) {
this.file = file;
}
public void addQueue(String row) {
queue.add(row);
}
public void Stop() {
stop = true;
}
public void run() {
try {
BufferedWriter bw = new BufferedWriter(new FileWriter(file));
while(!stop) {
try {
String row = queue.take();
bw.write(row + "\n");
} catch (Exception e) {
e.printStackTrace();
}
}
bw.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
ThreadCreate.java
// I used 2 threads to write to 2 files B and C
import java.io.*;
import java.util.List;
public class ThreadCreate {
public void startThread(File file) {
Threading t1 = new Threading("B.txt");
Threading t2 = new Threading("C.txt");
Thread td1 = new Thread(t1);
Thread td2 = new Thread(t2);
td1.start();
td2.start();
try {
BufferedReader br = new BufferedReader(new FileReader(file));
String line;
long start = System.currentTimeMillis();
while ((line = br.readLine()) != null) {
if (Integer.parseInt(line) % 2 == 0) {
t1.addQueue(line);
} else {
t2.addQueue(line);
}
}
t1.Stop();
t2.Stop();
br.close();
long end = System.currentTimeMillis();
System.out.println("Time to read file A and write file B, C: " + ((end - start)/1000) + "s");
} catch (Exception e) {
e.printStackTrace();
}
}
}
Main.java
import java.io.*;
public class Main {
public static void main(String[] args) throws IOException {
File file = new File("A.txt");
//Write file B, C
ThreadCreate t = new ThreadCreate();
t.startThread(file);
}
}
Why are you making threads? That just slows things down. Threads are useful if the bottleneck is either the calculation itself or the blocking nature of the operation, and they only hurt if it is not. Here, it isn't: The CPU is just idling (the bottleneck will be the disk), and the nature of what it is blocking on means that multithreading does not help either: Telling a single SSD to write 2 boatloads of bytes in parallel is probably no faster (only slower, as it needs to bounce back and forth). If the target disk is a spinning disk, it is way slower - the write head cannot make clones of itself to go any faster, and by making it multithreaded, you are wasting a ton of time by asking the write head to bounce back and forth between the different write locations.
There's nothing that immediately strikes me as ripe for significant speedups.
Sometimes, writing a ton of data to a disk just takes 50 seconds. If that's not acceptable, buy a faster disk.
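In that spirit, a plain single-threaded split with large I/O buffers is usually hard to beat for this workload. A sketch under the question's assumptions (one positive integer per line; file names and buffer sizes are arbitrary):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class Splitter {
    // Reads A line by line, routing even numbers to B and odd numbers to C.
    // Assumes every line is a non-empty decimal integer, as in the question.
    public static void split(File a, File b, File c) throws IOException {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(new FileInputStream(a), StandardCharsets.US_ASCII), 1 << 20);
             BufferedWriter even = new BufferedWriter(new FileWriter(b), 1 << 20);
             BufferedWriter odd  = new BufferedWriter(new FileWriter(c), 1 << 20)) {
            String line;
            while ((line = in.readLine()) != null) {
                // Checking the last digit avoids a full Integer.parseInt per line.
                char last = line.charAt(line.length() - 1);
                BufferedWriter out = ((last - '0') % 2 == 0) ? even : odd;
                out.write(line);
                out.newLine();
            }
        }
    }
}
```

With no queues and no thread hand-off, the disk stays the only bottleneck.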
Try memory-mapped files:
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

byte[] buffer = "foo bar foo bar text\n".getBytes();
int number_of_lines = 100000000;
FileChannel file = new RandomAccessFile("writeFIle.txt", "rw").getChannel();
ByteBuffer wrBuf = file.map(FileChannel.MapMode.READ_WRITE, 0, buffer.length * number_of_lines);
for (int i = 0; i < number_of_lines; i++)
{
wrBuf.put(buffer);
}
file.close();
On my computer (Dell, i7 processor, SSD, 32 GB RAM), this code took a little over half a minute to run.
I’m creating an empty file with a specified size, as below.
final long size = 10000000000L;
final File file = new File("d://file.mp4");
Thread t = new Thread(new Runnable() {
@Override
public void run() {
try {
RandomAccessFile raf = new RandomAccessFile(file, "rw");
raf.setLength(size);
} catch (Exception e) {
e.printStackTrace();
}
}
});
t.start();
For big sizes like 5 GB or more, this process takes a long time on Android devices. Now my question is: how can I cancel the file-creation process whenever I want?
Thanks.
raf.setLength calls seek under the hood, which is a native function, so it's not clear if the operation is actually cancellable through an interrupt or by other means.
Can you chunk the creation of the file yourself, something like:
final long size = 10000000000L;
final File file = new File("d://file.mp4");
volatile boolean cancelled = false;
Thread t = new Thread(new Runnable() {
@Override
public void run() {
long bytesRemaining = size;
long currentSize = 0;
try {
RandomAccessFile raf = new RandomAccessFile(file, "rw");
while ( bytesRemaining > 0 && !cancelled ) {
// !!!THIS IS NOT EXACTLY CORRECT SINCE
// YOU WILL NEED TO HANDLE EDGE CONDITIONS
// AS YOU GET TO THE END OF THE FILE.
// IT IS MEANT AS AN ILLUSTRATION ONLY!!!
currentSize += CHUNK_SIZE; // you decide how big chunk size is
raf.setLength(currentSize);
bytesRemaining -= CHUNK_SIZE;
}
} catch (Exception e) {
e.printStackTrace();
}
}
});
t.start();
// some other thread could cancel the writing by setting the cancelled flag
Disclaimer: I don't know what kind of performance this will have at the size files you are creating. It will likely have some overhead for each call to seek. Try it out, and see what performance looks like.
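A self-contained sketch of that chunked idea, with the cancel flag as an AtomicBoolean (so another thread can flip it safely without a shared field) and the final partial chunk clamped to the target size; CHUNK_SIZE is an arbitrary choice:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.atomic.AtomicBoolean;

public class Preallocator {
    static final long CHUNK_SIZE = 1024 * 1024; // 1 MiB per step; tune as needed

    // Grows the file in chunks, checking the cancel flag between steps.
    // Returns the length actually reached (== size unless cancelled early).
    public static long preallocate(File file, long size, AtomicBoolean cancelled) throws IOException {
        long current = 0;
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
            while (current < size && !cancelled.get()) {
                current = Math.min(current + CHUNK_SIZE, size); // clamp the last chunk
                raf.setLength(current);
            }
            return raf.length();
        }
    }
}
```

Cancellation latency is then at most one setLength call, at the cost of one system call per chunk.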
I'm trying to find a reliable method of measuring disk read speed, but I'm failing to remove the cache from the equation.
In How to measure Disk Speed in Java for Benchmarking there is, in the answer from simgineer, a utility for exactly this, but for some reason I failed to replicate its behaviour, and running the utility does not yield anything precise for reads either.
Following a suggestion in a different answer, setting the test file to something bigger than main memory seems to work, but I cannot afford to spend a whole four minutes waiting for the system to allocate a 130 GB file. (Not writing anything into the file results in a sparse file and returns bogus times.)
Sufficient file size seems to be somewhere between
Runtime.getRuntime().maxMemory()
and
Runtime.getRuntime().maxMemory()*2
The source code of my current solution:
File file = new File(false ? "D:/work/bench.dat" : "./work/bench.dat");
RandomAccessFile wFile = null, rFile = null;
try {
System.out.println("Allocating test file ...");
int blockSize = 1024*1024;
long size = false ? 10L*1024L*(long)blockSize : Runtime.getRuntime().maxMemory()*2;
byte[] block = new byte[blockSize];
for(int i = 0; i<blockSize; i++) {
if(i % 2 == 0) block[i] = (byte) (i & 0xFF);
}
System.out.println("Writing ...");
wFile = new RandomAccessFile(file,"rw");
wFile.setLength(size);
for(long i = 0; i<size-blockSize; i+= blockSize) {
wFile.write(block);
}
wFile.close();
System.out.println("Running read test ...");
long t0 = System.nanoTime();
rFile = new RandomAccessFile(file,"r");
int blockCount = (int)(size/blockSize)-1;
Random rnd = new Random();
for(int i = 0; i<testCount; i++) {
rFile.seek((long)rnd.nextInt(blockCount)*(long)blockSize);
rFile.readFully(block, 0, blockSize);
}
rFile.close();
long t1 = System.nanoTime();
double readB = ((double)testCount*(double)blockSize);
double timeNs = (double)(t1-t0);
return (readB/(1024*1024))/(timeNs/(1000*1000*1000));
} catch (Exception e) {
Logger.logError("Failed to benchmark drive speed!", e);
return 0;
} finally {
if(wFile != null) {try {wFile.close();} catch (IOException e) {}}
if(rFile != null) {try {rFile.close();} catch (IOException e) {}}
if(file.exists()) {file.delete();}
}
I somewhat hoped to get a benchmark that finishes within seconds (caching results for subsequent runs), with only the first execution being a bit slower.
I could technically crawl the filesystem and benchmark reads on files already on the drive, but that smells like a lot of undefined behaviour, and firewalls are not happy about it either.
Any other options left? (platform dependent libraries are off the table)
In the end I decided to solve the problem by scouring the local work folder for files and loading those, hoping we packaged enough with the application to reach spec speeds. In my current test case the answer is luckily yes, but there are no guarantees, so I keep the approach from the question as a backup plan.
This is not exactly a perfect solution, but it somewhat works, reaching spec speed at about 2000 test files. Bear in mind that this test cannot be rerun with the same results, as all test files from the previous execution are now probably cached.
You can always call flushmem (https://chadaustin.me/flushmem/) by Chad Austin, but that takes about as much time to execute as the original approach, so I would advise simply caching the result of the first run and hoping for the best.
Used code:
final int MIN_FILE_SIZE = 1024*10;
final int MAX_READ = 1024*1024*50;
final int FILE_COUNT_FRACTION = 4;
// Scour the location of the runtime for any usable files.
ArrayList<File> found = new ArrayList<>();
ArrayList<File> queue = new ArrayList<>();
queue.add(new File("./"));
while(!queue.isEmpty() && found.size() < testCount) {
File tested = queue.remove(queue.size()-1);
if(tested.isDirectory()) {
queue.addAll(Arrays.asList(tested.listFiles()));
} else if(tested.length()>MIN_FILE_SIZE){
found.add(tested);
}
}
// If amount of found files is not sufficient, perform test with new file.
if(found.size() < testCount/FILE_COUNT_FRACTION) {
Logger.logInfo("Disk to CPU transfer benchmark failed to find "
+ "sufficient amount of files to read, slow version "
+ "will be performed!", found.size());
return benchTransferSlowDC(testCount);
}
System.out.println(found.size());
byte[] block = new byte[MAX_READ];
Collections.shuffle(found);
RandomAccessFile raf = null;
long readB = 0;
try {
long t0 = System.nanoTime();
for(int i = 0; i<Math.min(found.size(), testCount); i++) {
File file = found.get(i);
int size = (int) Math.min(file.length(), MAX_READ);
raf = new RandomAccessFile(file,"r");
raf.read(block, 0, size);
raf.close();
readB += size;
}
long t1 = System.nanoTime();
return ((double)readB/(1024*1024))/((double)(t1-t0)/(1000*1000*1000));
//return (double)(t1-t0) / (double)readB;
} catch (Exception e) {
Logger.logError("Failed to benchmark drive speed!", e);
if(raf != null) try {raf.close();} catch(Exception ex) {}
return 0;
}
I want to play streaming media received from an internet service. The media player works fine, but is sometimes interrupted due to a poor download rate.
On receiving media data I run a thread that does decoding and other manipulations; the abstract code looks like this:
private void startConsuming(final InputStream input) {
consumingThread = new Thread() {
public void run() {
runConsumingThread(input);
}
};
consumingThread.start();
}
My idea is to calculate the buffer size needed to prevent interruptions, and to start media playback once the buffer is filled (or, of course, if the stream ends).
private void startConsuming(final InputStream input) {
consumingThread = new Thread() {
public void run() {
runConsumingThread(input);
}
};
Thread fillBufferThread = new Thread() {
public void run() {
try {
while(input.available() < RECEIVING_BUFFER_SIZE_BYTES) {
log.debug("available bytes: " + input.available());
sleep(20);
}
} catch (Exception ex) {
// ignore
}
consumingThread.start();
}
};
fillBufferThread.start();
}
In the debugger I continuously get "available bytes: 0" while the stream arrives, and the while loop never breaks. I have already recognized that EOFException will of course not occur, since I do not read from the InputStream.
How can I handle this? I thought input.available() would increase on data arrival.
Why does runConsumingThread(input) work correctly in nearly the same manner, while my while loop in fillBufferThread does not?
EDIT: The following code nearly works (except that it wrongly consumes the input stream, which is then not played in consumingThread, but that will be easy to solve), but there must be a smarter solution.
[...]
Thread fillBufferThread = new Thread() {
public void run() {
final DataInputStream dataInput = new DataInputStream(input);
try {
int bufferSize = 0;
byte[] localBuffer = new byte[RECEIVING_BUFFER_SIZE_BYTES];
while(bufferSize < RECEIVING_BUFFER_SIZE_BYTES) {
int len = dataInput.readInt();
if(len > localBuffer.length){
if (D) log.debug("increasing buffer length: " + len);
localBuffer = new byte[len];
}
bufferSize += len;
log.debug("available bytes: " + bufferSize);
dataInput.readFully(localBuffer, 0, len);
}
consumingThread.start();
} catch (IOException e) {
e.printStackTrace();
}
}
};
[...]
It can't be efficient to read from the stream just to learn whether a certain number of bytes has arrived, or is it?
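A smarter route than polling available() or re-framing the stream is to block on read() while filling a fixed-size prefix buffer, then hand the player a stream that replays the prefix followed by the rest of the data. A hedged sketch; the prefixSize parameter stands in for RECEIVING_BUFFER_SIZE_BYTES from the question:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;

public class PreBuffer {
    // Blocks until prefixSize bytes have arrived (or EOF), then returns a
    // stream containing the buffered prefix followed by the remaining data,
    // so the consuming thread still sees the whole stream.
    public static InputStream buffered(InputStream input, int prefixSize) throws IOException {
        byte[] prefix = new byte[prefixSize];
        int filled = 0;
        while (filled < prefixSize) {
            int n = input.read(prefix, filled, prefixSize - filled); // blocks, no polling
            if (n < 0) break; // stream ended before the buffer filled
            filled += n;
        }
        return new SequenceInputStream(
                new ByteArrayInputStream(prefix, 0, filled), input);
    }
}
```

The call returns exactly when enough data has arrived, with no sleep loop and no bytes lost to the pre-buffering step.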
I have two threads that increase the CPU overhead.
1. Reading from the socket in a synchronous way.
2. Waiting to accept connections from other clients
Problem 1: I'm just trying to read any data that comes from the client, and I cannot use readLine, because the incoming data contains newlines that I use to mark the end of a message header. So I'm using the approach below in a thread, but it increases the CPU overhead.
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getSocket().getInputStream()));
// At this point it is too early to read. So it most likely return false
System.out.println("Buffer Reader ready? " + reader.ready());
// StringBuilder to hold the response
StringBuilder sb = new StringBuilder();
// Indicator to show if we have started to receive data or not
boolean dataStreamStarted = false;
// How many times we went to sleep waiting for data
int sleepCounter = 0;
// How many times (max) we will sleep before bailing out
int sleepMaxCounter = 5;
// Sleep max counter after data started
int sleepMaxDataCounter = 50;
// How long to sleep for each cycle
int sleepTime = 5;
// Start time
long startTime = System.currentTimeMillis();
// This is a tight loop. Not sure what it will do to CPU
while (true) {
if (reader.ready()) {
sb.append((char) reader.read());
// Once started we do not expect server to stop in the middle and restart
dataStreamStarted = true;
} else {
Thread.sleep(sleepTime);
if (dataStreamStarted && (sleepCounter >= sleepMaxDataCounter)) {
System.out.println("Reached max sleep time of " + (sleepMaxDataCounter * sleepTime) + " ms after data started");
break;
} else {
if (sleepCounter >= sleepMaxCounter) {
System.out.println("Reached max sleep time of " + (sleepMaxCounter * sleepTime) + " ms. Bailing out");
// Reached max timeout waiting for data. Bail..
break;
}
}
sleepCounter++;
}
}
long endTime = System.currentTimeMillis();
System.out.println(sb.toString());
System.out.println("Time " + (endTime - startTime));
return sb.toString();
}
Problem 2: I don't know the best way of doing this. I just have a thread that constantly waits for other clients and accepts them, but this also takes a lot of CPU overhead.
// Listner to accept any client connection
@Override
public void run() {
while (true) {
try {
mutex.acquire();
if (!welcomeSocket.isClosed()) {
connectionSocket = welcomeSocket.accept();
// Thread.sleep(5);
}
} catch (IOException ex) {
Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
} catch (InterruptedException ex) {
Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
}
finally
{
mutex.release();
}
}
}
}
A profiler snapshot would also help, but I'm wondering why the SwingWorker thread takes that much time?
Updated code for Problem 1:
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
byte[] resultBuff = new byte[0];
byte[] buff = new byte[65534];
int k = -1;
k = socket.getSocket().getInputStream().read(buff, 0, buff.length);
byte[] tbuff = new byte[resultBuff.length + k]; // temp buffer size = bytes already read + bytes last read
System.arraycopy(resultBuff, 0, tbuff, 0, resultBuff.length); // copy previous bytes
System.arraycopy(buff, 0, tbuff, resultBuff.length, k); // copy current lot
resultBuff = tbuff; // call the temp buffer as your result buff
return new String(resultBuff);
}
}
Just get rid of the ready() call and block. Everything you do while ready() is false is literally a complete waste of time, including the sleep. The read() will block for exactly the right amount of time. A sleep() won't. You are either not sleeping for long enough, which wastes CPU time, or too long, which adds latency. Once in a while you may sleep for the correct time, but this is 100% luck, not good management. If you want a read timeout, use a read timeout.
You appear to be waiting until there is not more data after some timeout.
I suggest you use Socket.setSoTimeout(timeout in milliseconds).
A better solution is to not need to do this at all, by having a protocol that lets you know when the end of the data is reached. You would only resort to timeouts if the server is poorly implemented and you have no way to fix it.
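One common such protocol is length-prefixed framing: each message carries a fixed-size length header, so the reader always knows exactly how many bytes to wait for. A minimal sketch (not the poster's actual wire format):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Framing {
    // Length-prefixed messages: a 4-byte big-endian length, then the payload.
    // The reader never needs timeouts or sleep loops to find a message boundary.
    public static void writeMessage(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    public static byte[] readMessage(DataInputStream in) throws IOException {
        int len = in.readInt();          // blocks until the header arrives
        byte[] payload = new byte[len];
        in.readFully(payload);           // blocks until the whole body arrives
        return payload;
    }
}
```

Wrap the socket streams in DataInputStream/DataOutputStream on both ends and the end-of-message problem disappears.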
For Problem 1: 100% CPU may be because you are reading a single char at a time from BufferedReader.read(). Instead you can read a chunk of data into an array and append it to your StringBuilder.
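Concretely, a blocking bulk read removes both the busy-wait and the per-character overhead; each read() call sleeps inside the kernel until data arrives, costing no CPU. A sketch using a plain InputStream rather than the question's socket wrapper:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class BulkReader {
    // Reads until end-of-stream in 8 KiB chunks; each read() blocks until
    // at least one char is available, so no CPU is burned while waiting.
    public static String readAll(InputStream in) throws IOException {
        Reader reader = new InputStreamReader(in, StandardCharsets.UTF_8);
        StringBuilder sb = new StringBuilder();
        char[] chunk = new char[8192];
        int n;
        while ((n = reader.read(chunk)) != -1) { // blocks; -1 means EOF
            sb.append(chunk, 0, n);
        }
        return sb.toString();
    }
}
```

On a socket, pair this with setSoTimeout so a stalled peer surfaces as a SocketTimeoutException instead of hanging forever.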