I am working on an Android app that changes the CPU frequency when the foreground app changes. The frequencies for each foreground app are defined in my application itself. But to change the frequencies, my app has to open multiple system files and replace their contents with my frequency values. This makes my UI slow, and when I switch apps continuously it makes the SystemUI crash. What can I do to write these multiple files together at the same time?
I have tried using AsyncTaskLoader, but that too crashes the SystemUI later.
public static boolean setFreq(String max_freq, String min_freq) {
    ByteArrayInputStream inputStream = new ByteArrayInputStream(max_freq.getBytes(Charset.forName("UTF-8")));
    ByteArrayInputStream inputStream1 = new ByteArrayInputStream(min_freq.getBytes(Charset.forName("UTF-8")));
    SuFileOutputStream outputStream;
    SuFileOutputStream outputStream1;
    try {
        if (max_freq != null) {
            int cpus = 0;
            while (true) {
                SuFile f = new SuFile(CPUActivity.MAX_FREQ_PATH.replace("cpu0", "cpu" + cpus));
                SuFile f1 = new SuFile(CPUActivity.MIN_FREQ_PATH.replace("cpu0", "cpu" + cpus));
                outputStream = new SuFileOutputStream(f);
                outputStream1 = new SuFileOutputStream(f1);
                ShellUtils.pump(inputStream, outputStream);
                ShellUtils.pump(inputStream1, outputStream1);
                if (!f.exists()) {
                    break;
                }
                cpus++;
            }
        }
    } catch (Exception ex) {
    }
    return true;
}
I assume SuFile and SuFileOutputStream are your custom implementations extending Java File and FileOutputStream classes.
A couple of points need to be fixed first:
1. The f.exists() check should come before initializing the OutputStream; otherwise the stream creates the file before you check whether it exists, which turns your while loop into an infinite loop.
2. As @Daryll suggested, determine the number of CPUs and use it in your while/for loop. I suggest a for loop.
3. Close your streams after the pump(..) calls.
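Putting those points together, here is a minimal corrected sketch, assuming the same SuFile/SuFileOutputStream/ShellUtils.pump APIs from your question. Note one more thing: a ByteArrayInputStream is exhausted after the first pump, so fresh ones are needed on every iteration.
public static boolean setFreq(String maxFreq, String minFreq) {
    if (maxFreq == null || minFreq == null) {
        return false;
    }
    for (int cpu = 0; ; cpu++) {
        SuFile f = new SuFile(CPUActivity.MAX_FREQ_PATH.replace("cpu0", "cpu" + cpu));
        SuFile f1 = new SuFile(CPUActivity.MIN_FREQ_PATH.replace("cpu0", "cpu" + cpu));
        if (!f.exists()) {
            break; // check BEFORE opening a stream, or the file gets created
        }
        try (OutputStream out = new SuFileOutputStream(f);
             OutputStream out1 = new SuFileOutputStream(f1)) {
            // Fresh input streams each iteration: a ByteArrayInputStream is
            // drained after one pump and would write nothing the second time.
            ShellUtils.pump(new ByteArrayInputStream(maxFreq.getBytes(StandardCharsets.UTF_8)), out);
            ShellUtils.pump(new ByteArrayInputStream(minFreq.getBytes(StandardCharsets.UTF_8)), out1);
        } catch (IOException ex) {
            return false;
        }
    }
    return true;
}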
If you want to keep the main thread free, you can do something like this code segment:
public static void setFreq(final String max_freq, final String min_freq) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            // Put all the stuff here
        }
    }).start();
}
This should solve your problem.
Determine the number of CPUs beforehand and use that number in your loop, rather than using while (true) and having to call SuFile.exists() every cycle.
I don't know what SuFileOutputStream is, but you may need to close those file output streams, or find a faster way to write the files if that implementation is too slow.
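For example, one common way to count the cores up front on Linux/Android is to list the cpuN entries under /sys (countCpus is a hypothetical helper name; this counts present cores whether online or not):
static int countCpus() {
    // Each core appears as /sys/devices/system/cpu/cpu0, cpu1, ...
    File[] entries = new File("/sys/devices/system/cpu")
            .listFiles(f -> f.isDirectory() && f.getName().matches("cpu[0-9]+"));
    return (entries == null) ? 1 : entries.length;
}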
Related
I need help with running parallel operations. The goal of the code is to extract a large number of small files from the same tar into different folders in a very short time.
This is the code:
public void decompress(File archive, File destination) throws RuntimeException {
    try (InputStream in = new FileInputStream(archive);
         BufferedInputStream buff = new BufferedInputStream(in);
         TarArchiveInputStream is = (TarArchiveInputStream) new ArchiveStreamFactory().createArchiveInputStream("tar", buff)
    ) {
        TarArchiveEntry entry;
        while ((entry = is.getNextTarEntry()) != null) {
            File file = new File(destination, entry.getName());
            file.getParentFile().mkdirs();
            Files.write(file.toPath(), is.readAllBytes());
        }
    } catch (IOException | ArchiveException e) {
        e.printStackTrace();
    }
}
When I execute this operation once, it takes ~900 ms.
But when I do something like the following to run the same operation multiple times in parallel, it takes 20000 ms:
ExecutorService EXECUTOR_SERVICE = Executors.newFixedThreadPool(20);
File archive = ...;
for (int i = 0; i < 5; i++) {
    File directory = new File("Dir_" + i);
    EXECUTOR_SERVICE.submit(() -> decompress(archive, directory));
}
or
File archive = ...;
for (int i = 0; i < 5; i++) {
    File directory = new File("Dir_" + i);
    new Thread(() -> decompress(archive, directory)).start();
}
One suspicion is that the directories contain many files, so File.mkdirs performs needlessly many checks.
The BufferedInputStream constructor accepts a custom buffer size. That never helped much in my experience, but it might with your disk; with parallelism it could also help to avoid excessive disk head movement.
You probably already tried Files.copy, but still, it might have better memory behavior than readAllBytes.
So the version becomes (eschewing File in favor of Path):
public void decompress(File archive, File destination) throws RuntimeException {
    final int bufferSize = 1024 * 128;
    Path destinationPath = destination.toPath();
    try (InputStream in = new FileInputStream(archive);
         BufferedInputStream buff = new BufferedInputStream(in, bufferSize);
         TarArchiveInputStream is = (TarArchiveInputStream)
                 new ArchiveStreamFactory().createArchiveInputStream("tar", buff)
    ) {
        Path oldFileParent = destinationPath;
        Files.createDirectories(oldFileParent);
        TarArchiveEntry entry;
        while ((entry = is.getNextTarEntry()) != null) {
            Path file = destinationPath.resolve(entry.getName());
            Path fileParent = file.getParent();
            // Only create directories when the parent actually changes.
            if (!fileParent.equals(oldFileParent)) {
                oldFileParent = fileParent;
                Files.createDirectories(oldFileParent);
            }
            Files.copy(is, file);
            //Files.write(file, is.readAllBytes());
        }
    } catch (IOException | ArchiveException e) {
        e.printStackTrace();
    }
}
Declaring throws RuntimeException and catching the IOException/ArchiveException without rethrowing it (e.g. as new IllegalStateException(e)) is a matter of taste.
Now to adding parallelism: disk output is probably the bottleneck. Writing two files to the same disk in parallel means skipping back and forth on the disk; with small files it might just work out.
A better approach seems to be to parallelize differently: read the next file in one thread, then write it in another.
Two threads might theoretically perform better than many threads with heightened disk traffic. readAllBytes might then be appropriate, so the writing thread does not use is.
Since the tar entry probably also carries the file size, you could check whether readAllBytes is efficient enough for large files.
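As a rough illustration of that idea, here is a minimal reader/writer sketch handing entries over a bounded queue (the queue capacity and the POISON terminator are arbitrary choices of mine, not from the original post):
final BlockingQueue<Map.Entry<Path, byte[]>> queue = new ArrayBlockingQueue<>(16);
final Map.Entry<Path, byte[]> POISON = Map.entry(Paths.get(""), new byte[0]);

// Writer thread: drains the queue and does all the disk output.
Thread writer = new Thread(() -> {
    try {
        Map.Entry<Path, byte[]> item;
        while ((item = queue.take()) != POISON) {
            Files.createDirectories(item.getKey().getParent());
            Files.write(item.getKey(), item.getValue());
        }
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
});
writer.start();

// Reader side, inside the existing loop over tar entries:
//   queue.put(Map.entry(destinationPath.resolve(entry.getName()), is.readAllBytes()));
// ...and after the loop:
//   queue.put(POISON);
//   writer.join();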
Logging was mentioned in this question. It is known that logging can consume much time, and with parallelism it becomes even more critical. But you seem to be aware of that: you wrote that you use your own logger. For a library, System.Logger is actually best. It is a façade that uses whatever logger the application provides. This would have prevented the logger vulnerability hidden in library dependencies of the past year.
Ignoring the fact that you are not decompressing the file in parallel here (you are running multiple threads decompressing the same file concurrently, essentially overwriting the result), there may be several reasons for this performance hit. I/O is one, so it depends on the underlying implementation. Also, what Logger are you using there? While other parts of your code don't seem to be shared among multiple threads, the static call to the Logger is.
Also note: java.nio uses FileChannels which provide synchronous I/O, so depending on how you create the channels, you may get into similar situations (though I don't believe this applies here).
I am given an assignment where we are not allowed to use a DB or libraries, only a text file for data storage.
But it has rather complex requirements, e.g. many validations; because of that, we need to "access the db" (i.e. read the text file) many times.
My question is: should I create a class like this:
class SomeRepository {
    static ArrayList<Users> users = new ArrayList<>();

    public SomeRepository() {
        // Instantiated on program load.
        // In the constructor, we read the text file, instantiate the users
        // and store everything inside the ArrayList.
    }

    // public getOneUser() { /* get methods don't read from the text file at all */ }
    // public save() { /* text-file saving code over here */ }
}
Is this a good approach to the above problem? Currently, we read and write the text file every time we want to retrieve some data or write something new.
Wouldn't this be too expensive in terms of heap memory? Or should I just read/write the text file in every method?
public class IOManager {
    public static void writeObjToTxtFile(String fileName, Object object) {
        File file = new File(fileName + ".txt"); // created in the directory where the program runs
        try (FileOutputStream fos = new FileOutputStream(file);
             ObjectOutputStream oos = new ObjectOutputStream(fos)) {
            oos.writeObject(object);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static Object readObjFromTxtFile(String fileName) {
        File file = new File(fileName + ".txt");
        try (FileInputStream fis = new FileInputStream(file);
             ObjectInputStream ois = new ObjectInputStream(fis)) {
            return ois.readObject();
        } catch (ClassNotFoundException | IOException e) {
            e.printStackTrace();
            return null;
        }
    }
}
Add this class to your project. Since it is written generically for Object, you can pass and receive objects like ArrayList<Users> as well. Play around and tinker with it to fit your specific purpose. Hint: you can write other custom methods that call these methods, e.g.:
public static void writeUsersToFile(ArrayList<Users> usersArrayList) {
    writeObjToTxtFile("users", usersArrayList);
}
P.S. Make sure your objects implement Serializable, e.g.:
public class Users implements Serializable {
}
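For example, a matching read helper could look like this (the unchecked cast is unavoidable with this design; readUsersFromFile is just an illustrative name):
@SuppressWarnings("unchecked")
public static ArrayList<Users> readUsersFromFile() {
    // Reads back whatever writeUsersToFile("users", ...) stored.
    return (ArrayList<Users>) readObjFromTxtFile("users");
}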
I would suggest reading the contents of your file into a dynamic list such as an ArrayList at the start of your program. Make the required queries/changes to your ArrayList, and then write that ArrayList back to your file when the program is about to close. This will save significant time over repeated file reads/writes.
This isn't without its drawbacks, though. You don't want to hog memory in the case of very large files, but considering this is an assignment, that may not be an issue. Additionally, should your program terminate before the final write, all changes made to your database during the current execution will be lost.
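A minimal sketch of that load-once/save-on-close idea, assuming a simple one-line-per-user text format (UserRepository and its method names are illustrative):
class UserRepository {
    private final Path file;
    private final List<String> users = new ArrayList<>();

    UserRepository(Path file) throws IOException {
        this.file = file;
        if (Files.exists(file)) {
            users.addAll(Files.readAllLines(file)); // read the file once, at startup
        }
    }

    List<String> findAll() { return users; }   // queries hit memory only
    void add(String user) { users.add(user); } // changes stay in memory

    void save() throws IOException {           // call once, when the program closes
        Files.write(file, users);
    }
}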
I want to write data to a file while it is open, but it doesn't work. Calendar's getTime() works fine; System.out.println() proves this. Please, any idea what's wrong?
Main class:
public static void main(String[] args) throws IOException {
    // TODO code application logic here
    CurrentTime ct = new CurrentTime();
}
CurrentTime class:
public class CurrentTime {
    public OutputStream output;
    public InputStream input;
    public Process npp;

    CurrentTime() throws IOException {
        Timer t = new Timer();
        npp = Runtime.getRuntime().exec("notepad");
        output = npp.getOutputStream();
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                String dateStr = Calendar.getInstance(new Locale("ua", "UA")).getTime().toString();
                System.out.println(dateStr);
                try {
                    output.write(dateStr.getBytes());
                    output.flush();
                } catch (IOException ex) {
                    Logger.getLogger(CurrentTime.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        };
        t.schedule(task, 1000, 2000);
    }
}
Maybe this code is wrong altogether, no problem. Either way, I want to understand this: is it possible at all?
UPDATE: This is no longer relevant, but just for the record: at the time I was trying to implement a kind of tailing operation into the text editor directly, and now I understand how abnormal that idea was. It had to be implemented in a totally different way, of course.
Interesting. Let's deal with this in a simple way:
1. Save a file test.txt somewhere.
2. Open that file and keep it open.
3. In Java, write to this file (standard code):
FileWriter fw = new FileWriter(new File("c:/test.txt"));
fw.write("ABC");
fw.flush();
Now go back to the file in your editor. I normally use TextPad; it refreshes automatically (via an alert) because we changed the file behind the scenes (in your case, through Java).
I hope that clarifies it a bit.
To be fair, accessing the generic notepad exe doesn't guarantee which file you will write to. I am not sure how Windows deals with it, because you can open 3 different files at one time, and which one would you expect to have your data written through Java?
You're doing it wrong; it's impossible. notepad completely ignores its input while it's running (like most GUI programs). If you want to show a textbox and write text into it, simply create one with Swing/SWT/...
If you just want to write into a file, create a new PrintWriter and use it to write the file: http://docs.oracle.com/javase/6/docs/api/java/io/PrintWriter.html
You shouldn't try to write through Notepad. Check out PrintWriter.
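For instance, here is a sketch of the same timer loop writing straight to a file with PrintWriter instead of notepad's stdin ("current_time.txt" is an arbitrary example path):
Timer t = new Timer();
final PrintWriter out = new PrintWriter(new FileWriter("current_time.txt", true)); // append mode
t.schedule(new TimerTask() {
    @Override
    public void run() {
        out.println(Calendar.getInstance().getTime());
        out.flush(); // so an editor that auto-reloads sees each new line right away
    }
}, 1000, 2000);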
I recently added file locks to my downloader AsyncTask:
FileOutputStream file = new FileOutputStream(_outFile);
file.getChannel().lock();
and after the download completes, file.close() to release the lock.
From a BroadcastReceiver (a different thread), I need to go through the files and see which are downloaded and which are still locked. I started with tryLock:
for (int i = 0; i < files.length; i++) {
    FileOutputStream test = null;
    try {
        System.out.print(files[i].getName());
        test = new FileOutputStream(files[i]);
        FileLock lock = test.getChannel().tryLock();
        if (lock != null) {
            lock.release();
            // Not a partial download. Do stuff.
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (test != null) {
            try { test.close(); } catch (IOException ignored) {}
        }
    }
}
Unfortunately, I read that the file is truncated (0 bytes) when the FileOutputStream is created.
I set it to append instead, but the lock doesn't seem to take effect: all files appear to be unlocked (fully downloaded).
Is there another way to check whether a write lock is currently applied to a file, or am I using the wrong methods here? Also, is there a way to debug file locks, from the ADB terminal or Eclipse?
None of this is going to work. Check the Javadoc. Locks are held on behalf of the entire process, i.e. the JVM, not by individual threads.
My first thought would be to open it for append, per the javadocs:
test = new FileOutputStream(files[i], true); // the true specifies for append
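A sketch of that suggestion applied to the loop above. Note that the caveat from the previous answer still applies, and that within a single JVM tryLock() throws OverlappingFileLockException (rather than returning null) if this process already holds an overlapping lock:
for (File f : files) {
    try (FileOutputStream test = new FileOutputStream(f, true)) { // append: no truncation
        FileLock lock = test.getChannel().tryLock();
        if (lock != null) {
            lock.release();
            // No lock was held: likely a finished download.
        }
    } catch (OverlappingFileLockException e) {
        // This JVM still holds the download lock: partial file.
    } catch (IOException e) {
        e.printStackTrace();
    }
}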
I'm writing a Play 2 application and I am struggling with a file streaming problem.
I retrieve my files using a third party API with a method having the following signature:
FileMetadata getFile(OutputStream destination, String fileId)
In a traditional Servlet application, if I wanted to send the content to my client I would have done something like:
HttpServletResponse resp;
myService.getFile(resp.getOutputStream(), fileId);
My problem is that in my Play 2 controller class I don't have access to the underlying OutputStream, so the simplest implementation of my controller method would be:
public static Result downloadFile(String id) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    myApi.getFile(baos, id); // load everything into a temporary byte array
    ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
    return ok(bais);
}
It will work, but it requires loading the whole content into memory before serving it, so it's not an option (files can be huge).
I was thinking of a solution consisting of:
1. defining a ByteArrayOutputStream (baos) inside my controller,
2. calling the third-party API with this baos as a parameter,
3. using the chunked return of the Play framework to send the content of the baos as soon as something is written into it by the third-party API.
The problem is that I don't know whether it's possible (the call to getFile is blocking, so it would require multiple threads with a shared OutputStream), nor whether it's overkill.
Has anyone ever faced this kind of problem and found a solution?
Could my proposed solution solve my problem?
Any insights will be appreciated.
Thanks
EDIT 1
Based on kheraud's suggestion, I have managed to get a working, but still not perfect, solution (code below).
Unfortunately, if a problem occurs during the call to the getFile method, the error is not sent back to the client (because Ok was already returned) and the browser waits indefinitely for a file that will never come.
Is there a way to handle this case?
public static Result downloadFile(String fileId) {
    Thread readerThread = null;
    try {
        PipedOutputStream pos = new PipedOutputStream();
        PipedInputStream pis = new PipedInputStream(pos);
        // Reading must be done in another thread
        readerThread = new DownloadFileWorker(fileId, pos);
        readerThread.start();
        return ok(pis);
    } catch (Exception ex) {
        ex.printStackTrace();
        return internalServerError(ex.toString());
    }
}
static class DownloadFileWorker extends Thread {
    String fileId;
    PipedOutputStream pos;

    public DownloadFileWorker(String fileId, PipedOutputStream pos) {
        super();
        this.fileId = fileId;
        this.pos = pos;
    }

    public void run() {
        try {
            myApi.getFile(pos, fileId);
            pos.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
EDIT 2
I found a way to avoid the infinite loading of the page by simply adding a pos.close() in the catch block of the worker thread. The client ends up with a zero-KB file, but I guess that's better than an infinite wait.
There is something in the Play 2 Scala framework made for exactly that: Enumerators. This is very close to what you are thinking about.
You should have a look at this doc page for details
I didn't find anything similar in the Play 2 Java API, but looking at the framework's source code, you have a:
public static Results.Status ok(java.io.InputStream content, int chunkSize)
method, which seems to be what you are looking for. The implementation can be found in the play.mvc.Results and play.core.j.JavaResults classes.
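For instance, combining that method with the piped-stream worker from EDIT 1 might look like this (the 8 KiB chunk size is an arbitrary choice of mine):
public static Result downloadFile(String fileId) {
    try {
        PipedOutputStream pos = new PipedOutputStream();
        PipedInputStream pis = new PipedInputStream(pos);
        new DownloadFileWorker(fileId, pos).start();
        return ok(pis, 8 * 1024); // stream in chunks instead of buffering everything
    } catch (IOException ex) {
        return internalServerError(ex.toString());
    }
}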
On the Play! mailing list, there was recently a discussion on the same topic:
https://groups.google.com/forum/#!topic/play-framework/YunJzgxPKsU/discussion
It includes a small snippet that allows non-Scala-literates (like myself) to use the Scala streaming interface of Play!.