Why can't the JVM write to GPIO on a Raspberry Pi? - java

I have a Java method that looks like this:
private void exportGpio() {
    String fullPath = path + "/export"; // /sys/class/gpio/export
    FileWriter writer = null;
    try {
        writer = new FileWriter(fullPath);
        writer.write("" + number);
    } catch (IOException e) {
        Log.e(TAG + number, "Could not export", e);
    } finally {
        if (writer != null) {
            try {
                writer.flush(); // <-- FAILING HERE
                writer.close();
            } catch (IOException e) {
                Log.e(TAG + number, "Could not close writer", e);
            }
        }
    }
}
Once it gets to the flush() call, it throws an exception:
java.io.IOException: Device or resource busy
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
    at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
    at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
    at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
    at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
    at lights.GPIO.exportGpio(GPIO.java:106)
    at lights.GPIO.<init>(GPIO.java:34)
    at lights.LightManager.<init>(LightManager.java:34)
    at main.Main.createSubsystems(Main.java:17)
    at main.Main.main(Main.java:34)
What is going on? Can Java not interact with sysfs on a Raspberry Pi?

No, it cannot do it directly that easily.
General Purpose I/O pins are inputs/outputs that we can drive to a high or low voltage, or from which we can read a high or low voltage.
They are not digital port interfaces into which we can directly write the bits and bytes of the digital world. You need some low-level programming interface to read from and write to GPIOs.
Such a low-level API translates your 0 or 1 into high and low voltages.
There is a very elegant library called Pi4J that you can use very easily in your code. It has very good documentation that walks you through working with the Raspberry Pi board. If you come from a high-level language like Java, it gives you the familiar flavor of event-based programming, with support for EventListeners instead of polling or interrupts when reading from I/O pins. If you are not forced to work directly on the device, it is a very good alternative.
Hope this helps.
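For illustration, here is a minimal sketch of that event-based style, based on the Pi4J v1 API; the specific pins and pull-resistor setting are assumptions for the example, not taken from the question:
import com.pi4j.io.gpio.GpioController;
import com.pi4j.io.gpio.GpioFactory;
import com.pi4j.io.gpio.GpioPinDigitalInput;
import com.pi4j.io.gpio.GpioPinDigitalOutput;
import com.pi4j.io.gpio.PinPullResistance;
import com.pi4j.io.gpio.PinState;
import com.pi4j.io.gpio.RaspiPin;
import com.pi4j.io.gpio.event.GpioPinDigitalStateChangeEvent;
import com.pi4j.io.gpio.event.GpioPinListenerDigital;

public class GpioSketch {
    public static void main(String[] args) throws InterruptedException {
        GpioController gpio = GpioFactory.getInstance();

        // Drive a pin high or low without touching /sys/class/gpio by hand.
        GpioPinDigitalOutput led =
                gpio.provisionDigitalOutputPin(RaspiPin.GPIO_01, PinState.LOW);
        led.high();

        // React to input changes with a listener instead of polling.
        GpioPinDigitalInput button =
                gpio.provisionDigitalInputPin(RaspiPin.GPIO_02, PinPullResistance.PULL_DOWN);
        button.addListener(new GpioPinListenerDigital() {
            @Override
            public void handleGpioPinDigitalStateChangeEvent(
                    GpioPinDigitalStateChangeEvent event) {
                System.out.println("Pin state changed to " + event.getState());
            }
        });

        Thread.sleep(10000); // keep the JVM alive long enough to see events
        gpio.shutdown();     // release pins and threads on exit
    }
}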

I did end up figuring this out. I looked at STaefi's answer and I think it's wrong: the OS drivers should handle the voltages. What I discovered is that Java's FileWriter class does not work well with these kinds of virtual files. I tried the PrintWriter class instead and everything works.
PrintWriter writer = null;
try {
    writer = new PrintWriter(fullPath, "UTF-8");
    writer.write("1");
} catch (IOException e) {
    Log.e(TAG + number, "Could not turn on", e);
} finally {
    if (writer != null) {
        try {
            writer.close();
        } catch (Exception e) {
            Log.e(TAG + number, "Could not close writer", e);
        }
    }
}
I'm still not exactly sure why FileWriter doesn't work, but this solution will at least help anyone else who gets stuck here along the way.
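For what it's worth, a possible explanation (my guess, not confirmed in the thread) is that sysfs control files expect the whole value in a single write. A minimal sketch using java.nio's one-shot write, which avoids FileWriter's StreamEncoder buffering, would look like this; exporting a pin that is already exported would presumably still fail with the same EBUSY error:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SysfsGpio {
    // Write the pin number to the sysfs export file in one shot.
    public static void export(int number) throws IOException {
        Files.write(Paths.get("/sys/class/gpio/export"),
                String.valueOf(number).getBytes());
    }
}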

Related

Writing huge json object to file

So I have a huge JSONObject that I need to write to a file. Right now my code works perfectly on 90% of devices; the problem is that on low-memory devices such as the Amazon Fire TV, the app crashes with a "java.lang.OutOfMemoryError".
I wonder, is there a more memory-friendly way to write that JSON object to a file?
That's my code:
try {
    Writer output = null;
    if (jsonFile.isDirectory()) {
        jsonFile.delete();
    }
    if (!jsonFile.exists()) {
        jsonFile.createNewFile();
    }
    output = new BufferedWriter(new FileWriter(jsonFile));
    output.write(mainObject.toString());
    output.close();
} catch (Exception e) {
    e.printStackTrace();
}
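No answer is recorded here, but the likely culprit is mainObject.toString(), which materializes the whole document as one giant String. A sketch of a memory-friendlier direction (my suggestion, not from the thread) is to stream the JSON with android.util.JsonWriter so only a small buffer is held at a time; this assumes the data can be emitted piece by piece rather than requiring a fully built JSONObject:
import android.util.JsonWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class StreamingJson {
    // Streams a large array of objects to disk without ever holding
    // the full JSON text in memory.
    static void writeJsonStreaming(File jsonFile) throws IOException {
        JsonWriter writer = new JsonWriter(new FileWriter(jsonFile));
        try {
            writer.beginObject();
            writer.name("items");
            writer.beginArray();
            for (int i = 0; i < 100000; i++) { // emit entries one at a time
                writer.beginObject();
                writer.name("id").value(i);
                writer.endObject();
            }
            writer.endArray();
            writer.endObject();
        } finally {
            writer.close();
        }
    }
}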

Is it necessary to call close() in a finally when writing files in Java?

There are a few examples where people call close() in their finally block when writing to a file, like this:
OutputStream out = null;
try {
    out = new FileOutputStream(file);
    out.write(data);
    out.close();
} catch (IOException e) {
    Log.e("Exception", "File write failed: " + e.toString());
} finally {
    try {
        if (out != null) {
            out.close();
        }
    } catch (IOException e) {
        Log.e("Exception", "File write failed: " + e.toString());
    }
}
But there are many more examples, including the official Android docs, where they don't do that:
try {
    OutputStream out = new FileOutputStream(file);
    out.write(data);
    out.close();
} catch (IOException e) {
    Log.e("Exception", "File write failed: " + e.toString());
}
Given that the second example is much shorter, is it really necessary to call close() in finally as shown above or is there some mechanism that would clean up the file handle automatically?
Every situation is a little different, but it's best to always close in a finally block OR use Java's try-with-resources syntax, which is effectively the same thing.
The risk you run by not closing in the finally block is that you end up with an unclosed stream object after the catch is triggered. Also, if your stream is buffered, you might need that final close to flush the buffer and ensure the write fully completes; finishing without it may be a critical error.
Not closing the file may mean the stream contents don't get flushed in a timely way; this is a concern even when the code completes normally and doesn't throw an exception.
If you don't close a FileOutputStream, its finalize method will try to close it. However, finalize is not called immediately and is not guaranteed to be called at all (see this question), so it would be better not to rely on it.
If JDK 7's try-with-resources is an option, then use it:
try (OutputStream out = new FileOutputStream(file)) {
    out.write(data);
}
Otherwise, for JDK 6, make sure an exception thrown on close() can't mask an exception thrown in the try block (try-with-resources handles this for you by recording the close() failure as a suppressed exception):
OutputStream out = new FileOutputStream(file);
try {
    out.write(data);
} finally {
    try {
        out.close();
    } catch (IOException e) {
        Log.e("Exception", "File write failed: " + e.toString());
    }
}
It may be verbose but it makes sure the file gets closed.
Yes, it is. Unless you let someone else manage that for you.
Some solutions for that:
Java try-with-resources (from Java 7 / API Level 19)
Guava Sources and Sinks (has a nice explanation)
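For reference, a minimal sketch of the Guava route (assuming Guava 21+, where Files.asCharSink is available; older releases used Files.write instead):
import com.google.common.io.Files;

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class GuavaWrite {
    // The sink opens, writes, flushes, and closes the file internally,
    // so there is no stream for the caller to leak.
    static void write(File file, String data) throws IOException {
        Files.asCharSink(file, StandardCharsets.UTF_8).write(data);
    }
}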
It is very important to always close when you are writing to or reading from a file. If you don't close the writer or the reader, you may not be able to write to the file until the reader is closed, and vice versa.
Also, if you don't close, you might cause memory leaks. Always remember to close whatever you have open; it doesn't hurt to close, and if anything it makes your code more efficient.

Attempting to overwrite files results in blank files

In the app I am working on right now, part of the functionality is to write data saved on the device to a flash drive connected via a USB-OTG adapter. Specifically, the device is a rooted Motorola Xoom running 4.2.2. I can successfully write files to the drive and read them on my computer. That part works fine. However, when I try to replace existing files with new information, the resulting files come out empty. I even delete the existing files before writing new data. What's weird is that after copying the contents of my internal file to the flash drive, I log the length of the resulting file. It always matches the input file and is always a non-0 number, yet the file still shows up as blank on my computer. Can anyone help with this problem? Relevant code from the AsyncTask that I have doing this work is below.
@Override
protected Void doInBackground(Void... params) {
    File[] files = context.getFilesDir().listFiles();
    for (File file : files) {
        if (file.isFile()) {
            List<String> nameSegments = Arrays.asList(file.getName().split("_"));
            Log.d("source file", "size: " + file.length());
            String destinationPath = "/storage/usbdisk0/"
                    + nameSegments.get(0) + "/" + nameSegments.get(1) + "/";
            File destinationPathFile = new File(destinationPath);
            if (!destinationPathFile.mkdirs()) {
                destinationPathFile.mkdirs();
            }
            File destinationFile = new File(destinationPathFile, nameSegments.get(2));
            FileReader fr = null;
            FileWriter fw = null;
            try {
                fr = new FileReader(file);
                fw = new FileWriter(destinationFile, false);
                int c = fr.read();
                while (c != -1) {
                    fw.write(c);
                    c = fr.read();
                }
                fw.flush();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    fr.close();
                    fw.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            Log.d("destination file", "size: " + new File(destinationFile.getPath()).length());
        }
    }
    return null;
}
EDIT:
Per @Simon's suggestion, I added output.flush() to my code. This does not change the result.
EDIT #2:
I did some further testing with this and found something interesting. If I go to Settings->Storage->Unmount USB Storage after writing to the flash drive but before removing it from the OTG adapter, everything works perfectly. However, failing to eject the drive after writing results in the data not being written. What's strange is that the folder structure and file itself are created on the drive, but the file is always empty. One more thing: if I go to a file manager application and open up the file prior to removing the drive, the files all exist as they should. However, even removing the device, plugging it straight back in to the tablet and opening any of the files results in the file looking empty. I can't make heads or tails of this, and this is incredibly frustrating. Can anyone help with this?
EDIT #3:
I also changed to using FileReaders and FileWriters just to see what would happen. I don't care about efficiency at this point; I simply want file writing to work reliably. This change did not affect the issue. Updated code is posted above.
Try using the FileReader.ready() method before your FileReader.read() call to ensure your FileReader actually has bytes to read.
Try this; I used a BufferedWriter for the writing:
try {
    fw = new FileWriter(destinationFile);
    BufferedWriter writer = new BufferedWriter(fw);
    writer.append(yourText); // append can be changed to write if you want to overwrite
    writer.close();
} catch (Exception e) {
    throw new RuntimeException(e);
} finally {
    if (fw != null) {
        try {
            fw.flush();
            fw.close();
        } catch (IOException e) {
            // nothing useful to do if close fails
        }
    }
}
I found the solution to my problem. It appears that the Android system buffers some files off of the SD card/flash drive, and then writes them to the flash drive upon eject. The following code after my file operations synchronizes the buffer with the filesystem and allows the flash drive to be immediately removed from the device without data loss. It's worth noting that this DOES require root access; it will not work on a non-rooted device.
try {
    Process p = Runtime.getRuntime().exec("su");
    DataOutputStream os = new DataOutputStream(p.getOutputStream());
    os.writeBytes("sync; sync\n");
    os.writeBytes("exit\n");
    os.flush();
} catch (Exception e) {
    e.printStackTrace();
}
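As a side note (my addition, not part of the original answer): the snippet never waits for the su shell to finish, so calling p.waitFor() after the flush() should guarantee the sync has actually completed before the drive is pulled.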
Source of my solution: Android 2.1 programatically unmount SDCard
It sounds like the filesystem is caching your changes, but not actually writing them to the flash drive until you eject it. I don't think there's a way to flush the filesystem cache, so the best solution seems to be just to unmount and then remount the flash drive.

Java Temporary File Multithreaded Application

I'm looking for a foolproof way to generate a temporary file that will always end up with a unique name on a per-JVM basis. Basically, I want to be sure in a multithreaded application that if two or more threads attempt to create a temporary file at the exact same moment in time, they will both end up with a unique temporary file and no exceptions will be thrown.
This is the method I have currently:
public File createTempFile(InputStream inputStream) throws FileUtilsException {
    File tempFile = null;
    OutputStream outputStream = null;
    try {
        tempFile = File.createTempFile("app", ".tmp");
        tempFile.deleteOnExit();
        outputStream = new FileOutputStream(tempFile);
        IOUtils.copy(inputStream, outputStream);
    } catch (IOException e) {
        logger.debug("Unable to create temp file", e);
        throw new FileUtilsException(e);
    } finally {
        try { if (outputStream != null) outputStream.close(); } catch (Exception e) {}
        try { if (inputStream != null) inputStream.close(); } catch (Exception e) {}
    }
    return tempFile;
}
Is this perfectly safe for my goal? I reviewed the documentation at the URL below, but I'm not sure.
See java.io.File#createTempFile
The answer posted at the below URL answers my question. The method I posted is safe in a multithreaded single JVM process environment. To make it safe in a multithreaded multi-JVM process environment (e.g. a clustered web app) you can use Chris Cooper's idea which involves passing a unique value in the prefix argument for the File.createTempFile method within each JVM process.
Is createTempFile thread-safe?
Just use the thread name and current time in millis to name the file.
You can supply a different prefix or suffix to the temporary files for this exact reason.
Assign a unique ID to each process at startup and use that unique ID as the prefix or suffix; multiple threads in the same VM will not clash, and now separate VMs will not clash either.
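A minimal sketch of that idea (my illustration; using the RuntimeMXBean name as the per-JVM token is an assumption, and any unique process identifier would do):
import java.io.File;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class TempFiles {
    // Fold a per-JVM token into the prefix so two JVMs on the same machine
    // draw from disjoint name spaces; within a single JVM,
    // File.createTempFile already guarantees unique names.
    public static File createUniqueTempFile() throws IOException {
        String jvmId = ManagementFactory.getRuntimeMXBean().getName()
                .replaceAll("\\W", ""); // typically "pid@host"; strip separators
        return File.createTempFile("app-" + jvmId + "-", ".tmp");
    }
}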

IFS file copy using JT400 in code

I have this piece of code that copies files from the IFS to a local drive, and I would like to ask for suggestions on how to make it better.
public void CopyFile(AS400 system, String source, String destination) {
    File destFile = new File(destination);
    IFSFile sourceFile = new IFSFile(system, source);
    if (!destFile.exists()) {
        try {
            destFile.createNewFile();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    IFSFileInputStream in = null;
    OutputStream out = null;
    try {
        in = new IFSFileInputStream(sourceFile);
        out = new FileOutputStream(destFile);
        // Transfer bytes from in to out
        byte[] buf = new byte[1024];
        int len;
        while ((len = in.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
    } catch (AS400SecurityException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            if (in != null) {
                in.close();
            }
            if (out != null) {
                out.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    } // end try catch finally
} // end method
Where
source = full IFS path + filename and
destination = full local path + filename
I would like to ask some things regarding the following:
a. Performance considerations
Would this have a big impact in terms of CPU usage on the host AS400 system?
Would this have a big impact on the JVM used (in terms of memory usage)?
Would including this in a web app affect app server performance (would it be a heavy task or not)?
Would using this to copy multiple files (running it repeatedly) be a big burden on all the resources involved?
b. Code Quality
Does my use of IFSFileInputStream suffice, or would a simple FileInputStream object do the job nicely?
AFAIK, I just needed the AS400 object to make sure the source file referenced is a file on the IFS.
I am a noob at AS400 and IFS and would like to ask for an honest opinion from experienced folks.
All in all it looks fine (without having tried it). It should not have a noticeable impact.
in.read() may return 0. Test for -1 instead.
Instead of manually buffering, just wrap in and out with their respective BufferedInputStream/BufferedOutputStream, read one character at a time, and test it for -1.
try-catch is hard to get pretty. This will do, but you will later get more experience and learn how to do it somewhat better.
Do NOT swallow exceptions and print them. The code calling you will have no idea whether it went well or not.
When done with an AS400 object, use as400.disconnectAllServices().
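Putting those points together, a rough sketch of the copy method (my illustration, untested, reusing the JT400 class names from the question):
import com.ibm.as400.access.AS400;
import com.ibm.as400.access.AS400SecurityException;
import com.ibm.as400.access.IFSFileInputStream;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public void copyFile(AS400 system, String source, String destination)
        throws IOException, AS400SecurityException {
    InputStream in = null;
    OutputStream out = null;
    try {
        // Buffered wrappers replace the hand-rolled byte[] buffer.
        in = new BufferedInputStream(new IFSFileInputStream(system, source));
        out = new BufferedOutputStream(new FileOutputStream(destination));
        int c;
        while ((c = in.read()) != -1) { // -1, not 0, signals end of stream
            out.write(c);
        }
        out.flush();
    } finally {
        // Exceptions propagate to the caller instead of being swallowed.
        if (in != null) try { in.close(); } catch (IOException ignored) {}
        if (out != null) try { out.close(); } catch (IOException ignored) {}
        system.disconnectAllServices(); // per the advice above, when done with the AS400 object
    }
}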
See IBM Help example code:
http://publib.boulder.ibm.com/infocenter/iadthelp/v7r1/index.jsp?topic=/com.ibm.etools.iseries.toolbox.doc/ifscopyfileexample.htm
Regards
