I have a .txt file which stores a player's money. I need the value in this file to be incremented or decremented by a certain amount depending on whether the player kills something or buys something from the shop.
The issue is that I do not know how to actually increment or decrement the contents. I can delete and recreate the .txt file with the new amount, but because multiple threads will be accessing the file, there is a risk that the file may not exist at the moment it is read, having been deleted and not yet regenerated.
Just to clarify, there will only be one thread at a time modifying the file. Other threads will only be reading the file.
So how would I do this without deleting the data/file first?
Here is the code: read the file first, then increment the value and store it again -
BufferedWriter out = null;
try {
// Read File Contents - score
BufferedReader br = new BufferedReader(new FileReader("c:\\a.txt"));
String storedScore="0";
int storedScoreNumber = 0;
while ((storedScore = br.readLine()) != null) {
storedScoreNumber = Integer.parseInt(storedScore);
}
br.close();
// Write File Contents - incremented score
out = new BufferedWriter(new FileWriter("c:\\a.txt", false));
out.write(String.valueOf(storedScoreNumber+1));
} catch (IOException e) {
e.printStackTrace();
} finally {
if (out != null) {
try {
out.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Have a singleton data accessor with a queue so that it is the only thing manipulating the file. If necessary, acknowledge to client threads after the write has completed.
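For what it's worth, here is a minimal sketch of that idea, assuming the score lives in c:\a.txt as in the question; the class name, the single-thread executor and the volatile in-memory cache of the current value are my own choices, not anything from your code:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class ScoreStore {
    private static final ScoreStore INSTANCE = new ScoreStore(Paths.get("c:\\a.txt"));

    private final Path file;
    // All writes are funnelled through this single thread, so they happen one at a time.
    private final ExecutorService writer = Executors.newSingleThreadExecutor();
    // Readers use the cached value and never have to open the file themselves.
    private volatile int score;

    private ScoreStore(Path file) {
        this.file = file;
        int initial = 0;
        try {
            if (Files.exists(file)) {
                initial = Integer.parseInt(
                        Files.readAllLines(file, StandardCharsets.UTF_8).get(0).trim());
            }
        } catch (IOException | RuntimeException e) {
            // missing, empty or unreadable file: start from 0
        }
        score = initial;
    }

    public static ScoreStore getInstance() {
        return INSTANCE;
    }

    public int getScore() {
        return score;                 // safe to call from any thread
    }

    public void addToScore(int delta) {
        writer.submit(() -> {         // queued; the caller returns immediately
            score += delta;           // only the writer thread ever mutates the value
            try {
                Files.write(file, String.valueOf(score).getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }
}

Killing a mob would then just be ScoreStore.getInstance().addToScore(10), and buying from the shop addToScore(-price); reader threads call getScore() and never see a half-written file.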
I am making a save/load feature for the settings in my application. Upon launching the program, it tries to find the file. If it fails, it tries to create a file with default settings (code below)
try (FileWriter fileWriter = new FileWriter(absolutePath))
{
fileWriter.write("theme=light\n");
fileWriter.write("resolution=1280x720\n");
fileWriter.write("printfps=false\n");
System.out.println("Reset settings");
load();
}
catch (FileNotFoundException e)
{
System.out.println("Settings File not found.");
}
catch (IOException e)
{
e.printStackTrace();
}
After it has written this, it goes on to load the file (by calling the load() method).
In the load method, the application reads the contents of the file (code below).
try (BufferedReader bufferedReader = new BufferedReader(new FileReader(absolutePath)))
{
String line = bufferedReader.readLine();
System.out.println(line);
while(line != null)
{
if (line.contains("="))
{
String key = line;
String value = line;
while (key.contains("="))
{
key = key.substring(0, key.length() - 1);
}
while (value.contains("="))
{
value = value.substring(1);
}
settings.put(key, value);
}
System.out.println(line);
line = bufferedReader.readLine();
}
System.out.println(settings);
}
However, load() reports that the file is empty. After messing with breakpoints, I can confirm that the file is indeed not updated at that point. The rather weird thing is that if I pause the application at a later time, the file does contain the text that was written to it, even though the file is not touched again later in the program.
This makes me believe that it takes some time for the file to update, thus not updating in time for the load() method. Is this correct, or am I missing something? And is there a workaround?
All help is appreciated :)
You're calling load() before the file has actually been saved. The FileWriter buffers its output and only guarantees it has reached the file once it is flushed or closed, and the try-with-resources block only closes it after load() has already run.
To save the file first, call fileWriter.close() (or at least flush()) before load(), or just move the load() call out of the try-with-resources block with the FileWriter:
try (FileWriter fileWriter = new FileWriter(absolutePath))
{
fileWriter.write("theme=light\n");
fileWriter.write("resolution=1280x720\n");
fileWriter.write("printfps=false\n");
}
catch (FileNotFoundException e)
{
System.out.println("Settings File not found.");
}
catch (IOException e)
{
e.printStackTrace();
}
// FileWriter closed now and the file contents saved
System.out.println("Reset settings");
load();
Only one instance of my Java application can run at a time. It runs on Linux. I need to ensure that one thread doesn't modify the file while the other thread is using it.
I don't know which file locking or synchronization method to use. I have never done file locking in Java and I don't have much Java or programming experience.
I looked into Java NIO and I read that "File locks are held on behalf of the entire Java virtual machine. They are not suitable for controlling access to a file by multiple threads within the same virtual machine." Right away I knew that I needed expert help because this is production code and I have almost no idea what I'm doing (and I have to get it done today).
Here's a brief outline of my code to upload some stuff (archive files) to a server. It gets the list of files to upload from a file (call it "listFile") -- and listFile can be modified while this method is reading from it. I minimize the chances of that by copying listFile to a temp file and using that temp file thereafter. But I think I need to lock the file during this copy process (or something like that).
package myPackage;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import com.example.my.FileHelper;
import com.example.my.Logger;
public class BatchUploader implements Runnable {
private volatile boolean stopRunning = false;
private volatile boolean isUploading = false;
private int filesUploadedCount = 0;
private String fileToUploadName;
private int processUploads() {
File copyOfListFile = null;
try {
copyOfListFile = new File("/path/to/temp/workfile");
File origFile = new File("/path/to/listFile"); //"listFile" - the file that contains a list of files to upload
DataWriter.copyFile(origFile, copyOfListFile); //see code below
} catch (IOException ex) {
Logger.log(ex);
}
try {
BufferedReader input = new BufferedReader(new FileReader(copyOfListFile));
try {
while (!stopRunning && (fileToUploadName = input.readLine()) != null) {
upload(new File(fileToUploadName));
}
} finally {
input.close();
isUploading = false;
}
} catch (IOException ex) {
Logger.log(ex);
}
return filesUploadedCount;
}
}
Here is the code that modifies the list of files to be uploaded, which is used by the code above:
public class DataWriter {
public void modifyListOfFilesToUpload(String uploadedFilename) {
StringBuilder content = new StringBuilder();
try {
File listOfFiles = new File("/path/to/listFile"); //file that contains a list of files to upload
if (!listOfFiles.exists()) {
//some code
}
BufferedReader input = new BufferedReader(new FileReader(listOfFiles));
try {
String line = "";
while ((line = input.readLine()) != null) {
if (!line.isEmpty() && line.endsWith(FILE_EXTENSION)) {
if (!line.contains(uploadedFilename)) {
content.append(String.format("%1$s%n", line));
} else {
//some code
}
} else {
//some code
}
}
} finally {
input.close();
}
this.write("/path/to/", "listFile", content.toString(), false, false, false);
} catch (IOException ex) {
Logger.debug("Error reading/writing uploads logfile: " + ex.getMessage());
}
}
public static void copyFile(File in, File out) throws IOException {
FileChannel inChannel = new FileInputStream(in).getChannel();
FileChannel outChannel = new FileOutputStream(out).getChannel();
try {
inChannel.transferTo(0, inChannel.size(), outChannel);
} catch (IOException e) {
throw e;
} finally {
if (inChannel != null) {
inChannel.close();
}
if (outChannel != null) {
outChannel.close();
}
}
}
private void write(String path, String fileName, String data, boolean append, boolean addNewLine, boolean doLog) {
try {
File file = FileHelper.getFile(fileName, path);
BufferedWriter bw = new BufferedWriter(new FileWriter(file, append));
bw.write(data);
if (addNewLine) {
bw.newLine();
}
bw.flush();
bw.close();
if (doLog) {
Logger.debug(String.format("Wrote %1$s%2$s", path, fileName));
}
} catch (java.lang.Exception ex) {
Logger.log(ex);
}
}
}
May I suggest a slightly different approach. As far as I remember, on Linux the file rename (mv) operation is atomic on local disks, so there is no chance for one process to see a 'half written' file.
Let XXX be a sequence number with three (or more) digits. You could let your DataWriter append to a file called listFile-XXX.prepare and write a fixed number N of filenames into it. When N names are written, close the file and rename it (atomically, see above) to listFile-XXX. With the next filename, start writing to listFile-YYY where YYY=XXX+1.
Your BatchUploader may at any time check whether it finds files matching the pattern listFile-XXX, open them, read them, upload the named files, then close and delete them. There is no chance for the threads to mess up each other's files.
Implementation hints:
Make sure to use a polling mechanism in BatchUploader that waits one or more seconds if it does not find a file ready for upload (to avoid a busy wait).
You may want to sort the listFile-XXX files according to XXX to make sure the uploading happens in sequence.
Of course you could vary the protocol for when listFile-XXX.prepare is closed. If DataWriter has nothing to do for a longer time, you don't want filenames that are ready for upload to sit around just because there are not yet N of them.
Benefits: no locking (which would be a pain to get right), no copying, and an easy overview of the work queue and its state in the file system.
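Here is a rough sketch of what the writer side of that protocol could look like; the directory, the class name and the batch size N are placeholders of my own, not anything from your code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Appends filenames to listFile-XXX.prepare and atomically publishes it as listFile-XXX.
public class ListFilePublisher {
    private static final int N = 100;                      // names per batch (assumption)
    private final Path dir = Paths.get("/path/to/queue");  // queue directory (placeholder)
    private int sequence = 0;
    private int namesInCurrentFile = 0;

    public synchronized void addFileToUpload(String name) throws IOException {
        Path prepare = dir.resolve(String.format("listFile-%03d.prepare", sequence));
        Files.write(prepare, (name + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        if (++namesInCurrentFile >= N) {
            publish(prepare);
        }
    }

    // Rename .prepare -> listFile-XXX. On Linux a rename within one filesystem is atomic,
    // so the uploader either sees the complete file or does not see it at all.
    private void publish(Path prepare) throws IOException {
        Path ready = dir.resolve(String.format("listFile-%03d", sequence));
        Files.move(prepare, ready, StandardCopyOption.ATOMIC_MOVE);
        sequence++;
        namesInCurrentFile = 0;
    }
}

The uploader then only has to list the directory for names matching listFile-XXX, sort them, upload the files named inside, and delete each list file when it is done with it.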
Here is a slightly different suggestion. Assuming your file names don't have '\n' characters in them (it's a big assumption on Linux, I know, but you can have your writer check for that), why not read only complete lines and ignore the incomplete ones? By incomplete lines, I mean lines that end at EOF rather than with '\n'.
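A rough sketch of that reading strategy, assuming the list file is small enough to read into memory in one go (the path argument and class name are placeholders):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CompleteLineReader {
    // Returns only the lines terminated by '\n'; a trailing, partially written line is ignored.
    public static List<String> readCompleteLines(String path) throws IOException {
        String content = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
        int lastNewline = content.lastIndexOf('\n');
        if (lastNewline <= 0) {
            return new ArrayList<>();   // nothing has been completely written yet
        }
        String complete = content.substring(0, lastNewline);
        return new ArrayList<>(Arrays.asList(complete.split("\n")));
    }
}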
This is my class TagVertex. It contains one method that reads the tag value (a string) from a text file and returns it:
public class TagVertex extends Vertex<String> {
@Override
public String computeVertexValue() {
String s = null;
try {
BufferedReader bf = new BufferedReader(new FileReader(MyDataSource.TagList1K));
for(int i = 1; i < Integer.parseInt(this.getVertexId().substring(this.getVertexId().indexOf("g")+1)); i++){
bf.readLine();
}
s= bf.readLine();
bf.close();
} catch (Exception e) {
e.printStackTrace();
}
this.setVertexValue(s);
return s;
}
}
The method is called 1000 times, so the file is read 1000 times as well.
Would it be better to use a database instead of a text file?
Accessing the hard drive is always a very slow operation. Databases usually also access the hard drive, so they aren't necessarily faster. They can be even slower, because when the database doesn't run on the same system, the network latency is added (even when it runs on localhost, you have latency due to interprocess communication).
I would recommend reading the file once and caching the value. If you need to be aware immediately when the file changes, you could use the WatchService API to reload the file when it is modified. Here is a tutorial. If it isn't that important that changes on the filesystem level are registered immediately, you could also record the time the vertex information was last read from the hard drive, and only re-read the value when it is older than a few seconds.
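A rough sketch of that first approach, assuming the whole tag file is simply re-read whenever a change is detected; the class name and constructor parameter are my own, not from your code:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.List;

// Caches the lines of the tag file and reloads them only when the file is modified.
public class CachedTagFile {
    private final Path file;
    private final WatchService watcher;
    private volatile List<String> lines;

    public CachedTagFile(String pathToFile) throws IOException {
        this.file = Paths.get(pathToFile);
        this.watcher = FileSystems.getDefault().newWatchService();
        // WatchService watches directories, so register the parent directory of the file.
        file.getParent().register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
        this.lines = Files.readAllLines(file, StandardCharsets.UTF_8);
    }

    public String getLine(int index) throws IOException {
        reloadIfChanged();
        return lines.get(index);
    }

    private void reloadIfChanged() throws IOException {
        WatchKey key = watcher.poll();      // non-blocking: null if nothing changed
        if (key != null) {
            key.pollEvents();               // drain the events; we reload regardless of detail
            key.reset();
            lines = Files.readAllLines(file, StandardCharsets.UTF_8);
        }
    }
}

computeVertexValue() could then simply delegate to getLine(...) instead of opening the file on every call.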
You can create your own in-memory database like this.
private static final List<String> lines = new ArrayList<>();
@Override
public String computeVertexValue() {
if (lines.isEmpty())
try {
BufferedReader br = new BufferedReader(new FileReader(MyDataSource.TagList1K));
for (String line; (line = br.readLine()) != null; )
lines.add(line);
br.close();
} catch (Exception e) {
e.printStackTrace();
}
return lines.get(Integer.parseInt(this.getVertexId().substring(this.getVertexId().indexOf("g") + 1)) - 1); // the ids are 1-based, the list index is 0-based
}
I'm currently building an application where the user will generate data over time and, should he/she have an internet connection, transmit it to the web. However, if there is no web access, I need to store this data on the phone until the user recovers access, at which point I'll need to retrieve the data and transmit it. However, I'm having a lot of trouble doing this, as described below.
Note: before anything else, I'm using a local Java-created file because I know of no other way to save/restore this data on the device. If you happen to know another way to store/access this data from within the device, please feel free to comment here.
Just for reference,
phantoms is an ArrayList containing objects with the data I need to
store,
Arquivador is the class that I'm using to make my data persistent and to recover it,
Funcionario is the class with the data generated by the program (just a few strings and numbers)
I am able to write a file to the file system through the code below, on my Activity:
try {
arq = new Arquivador();
arq.addFirstObjectInFile(
openFileOutput("dados.jlog", MODE_WORLD_WRITEABLE),
phantoms.get(0));
phantoms.remove(phantoms.get(0));
for (Funcionario func : phantoms) {
arq.addObjectInFile(openFileOutput("dados.jlog", MODE_APPEND),
func);
}
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
}
Here is the code inside Arquivador that adds the data to a file:
public void addObjectInFile(FileOutputStream arquivo,
Object objetoAAdicionar) {
try {
ObjectOutputStream aoos = new ObjectOutputStream(arquivo);
aoos.writeObject(objetoAAdicionar);
aoos.close();
} catch (IOException ioe) {
Log.d(TAG_NAME, "Erro no Appendable OOS.");
}
}
public void addFirstObjectInFile(FileOutputStream arquivo,
Object objetoAAdicionar) {
try {
AppendableObjectOutputStream aoos = new AppendableObjectOutputStream(
arquivo);
aoos.writeObject(objetoAAdicionar);
aoos.close();
} catch (IOException ioe) {
Log.d(TAG_NAME, "Erro no Appendable OOS.");
}
}
You will notice that I'm persisting the data in two steps: the first Object, and then the rest of them. This was an idea I saw in a post here on StackOverflow, to allow appending data to a Java-generated file. I have no problem with this code; it works perfectly.
Later on, back on my Activity, the internet connection is detected and I try to recover the file saved on the disk:
phantoms = new ArrayList<Funcionario>();
Object obj = arq.readObjectFromFile(openFileInput("dados.jlog"));
Funcionario func = null;
if (obj instanceof Funcionario) {
func = (Funcionario) obj;
}
while (func != null) {
phantoms.add(func);
arq.removeObjectFromFile(openFileInput("dados.jlog"), func,
getApplicationContext());
func = (Funcionario) arq
.readObjectFromFile(openFileInput("dados.jlog"));
}
The original idea was to read one object at a time, then attempt to transmit it and, if successful, erase that object from the file (so it wouldn't get retransmitted). However, I was getting too many errors with this. Instead, I decided to load all the objects at once, one by one, to see my problem more clearly.
Back to the Arquivador class:
public Object readObjectFromFile(FileInputStream arquivo) {
Object retorno = null;
if (arquivo == null) {
Log.e(TAG_NAME, "FIS is null!");
}
ObjectInputStream ois = null;
try {
ois = new ObjectInputStream(arquivo);
retorno = ois.readObject();
} catch (IOException ioex) {
} catch (ClassNotFoundException e) {
} finally {
try {
if (ois != null) ois.close();
} catch (IOException e) {
}
}
return retorno;
}
public void removeObjectFromFile(FileInputStream arqPrincipal,
Object objetoARemover, Context contexto) {
try {
// Construct the new file that will later be renamed to the original
// filename.
ObjectOutputStream oos = new ObjectOutputStream(
contexto.openFileOutput("dados.jlog.temp",
contexto.MODE_APPEND));
ObjectInputStream ois = new ObjectInputStream(arqPrincipal);
Object obj = null;
// Read from the original file and write to the new
// unless content matches data to be removed.
try {
while ((obj = ois.readObject()) != null) {
if (!(objetoARemover.equals(obj))) {
oos.writeObject(obj);
oos.flush();
}
}
} catch (EOFException eof) {
} finally {
oos.close();
ois.close();
// Delete the original file
File aDeletar = contexto.getFileStreamPath("dados.jlog");
File aRenomear = contexto.getFileStreamPath("dados.jlog.tmp");
if (!aDeletar.delete()) {
return;
} else {
// Rename the new file to the filename the original file
// had.
if (!aRenomear.renameTo(aDeletar)) Log.d(TAG_NAME,
"Error renaming file");
else Log.d(TAG_NAME, "Renaming successful");
}
}
} catch (FileNotFoundException ex) {
ex.printStackTrace();
Log.d(TAG_NAME, "Arquivo não encontrado");
} catch (IOException ex) {
ex.printStackTrace();
Log.d(TAG_NAME, "Erro de entrada/saída");
} catch (ClassNotFoundException e) {
Log.d(TAG_NAME, "Classe Não Encontrada.");
}
}
The method readObjectFromFile() seems to work just fine. I can even cast the read Object to the Funcionario class and read its data.
My problems appear when I use removeObjectFromFile(). The idea is to create a temporary file to store the objects from the "dados.jlog" file other than the one that has already been loaded in the main program; once this temp file is created, "dados.jlog" should be deleted and the temporary file renamed to replace it.
The first thing I found strange here is that ois.readObject() keeps throwing an EOFException. While this makes sense, the tutorial I read on the internet doesn't mention this error. In fact, their code indicates that when readObject() reaches EOF it should return null, but instead this class throws an EOFException. I handled this exception in the code, though I'm not sure this is the right way to do it.
Another thing I find strange is that this code fails to recognize the object that it should NOT copy. When I compare the object read from the file to the one received as an argument, no matter what I try (==, equals(), etc.) they appear to be different objects. The Funcionario class is serializable and has a serialVersionUID, so the object read from the file should be identical to the one I stored. Worse than this, the two objects being compared are read from the same file. They should be identical, right?
After creating the temporary file, I try to delete the original file and rename the temporary one. Though this seems to work, once removeObjectFromFile() ends the first time, the program is unable to read the data from "dados.jlog" again. I can't read the remaining data from the file and the program enters an endless loop, since the first object is never removed from the list in the file.
Please enlighten me with this matter.
Personally I'd use an SQLite database. Store each object in a row in the database. Once you've successfully transmitted an object you can remove its row from the database.
You can even reuse most of the code you've already written. The easiest way to get there from where you are is to use a separate file for each object and store only the filename of the object in the database. You can then iterate over the rows in the database. Each time you transmit an object to your server, simply delete that row from the database (and remove the file from the filesystem!). No rows in the database means no objects remain to be transmitted.
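A minimal sketch of the bookkeeping side of that idea, assuming one serialized object per file and a table that only stores the filenames; the table, column and class names here are my own, not anything from your code:

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;
import java.util.ArrayList;
import java.util.List;

// Tracks which serialized-object files are still waiting to be uploaded.
public class PendingUploadsDb extends SQLiteOpenHelper {

    public PendingUploadsDb(Context context) {
        super(context, "uploads.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE pending (filename TEXT PRIMARY KEY)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // nothing to migrate yet
    }

    public void addPending(String filename) {
        ContentValues values = new ContentValues();
        values.put("filename", filename);
        getWritableDatabase().insert("pending", null, values);
    }

    public List<String> listPending() {
        List<String> result = new ArrayList<>();
        Cursor cursor = getReadableDatabase()
                .query("pending", new String[] {"filename"}, null, null, null, null, null);
        while (cursor.moveToNext()) {
            result.add(cursor.getString(0));
        }
        cursor.close();
        return result;
    }

    // Call this after a successful upload; remember to delete the file itself as well.
    public void markUploaded(String filename) {
        getWritableDatabase().delete("pending", "filename = ?", new String[] {filename});
    }
}

After a successful transmission you would call markUploaded(filename) and delete the file; when the connection comes back, listPending() tells you exactly which files still need to go out.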
I wrote up a basic file handler for a Java homework assignment, and when I got the assignment back I had some notes about failing to handle a few cases:
Buffer from file could have been null.
File was not found
File stream wasn't closed
Here is the block of code that is used for opening a file:
/**
* Create a Filestream, Buffer, and a String to store the Buffer.
*/
FileInputStream fin = null;
BufferedReader buffRead = null;
String loadedString = null;
/** Try to open the file from user input */
try
{
fin = new FileInputStream(programPath + fileToParse);
buffRead = new BufferedReader(new InputStreamReader(fin));
loadedString = buffRead.readLine();
fin.close();
}
/** Catch the error if we can't open the file */
catch(IOException e)
{
System.err.println("CRITICAL: Unable to open text file!");
System.err.println("Exiting!");
System.exit(-1);
}
The one comment I had from him was that fin.close(); needed to be in a finally block, which I did not have at all. But I thought that the way I had written the try/catch would have prevented any issue with the file not opening.
Let me be clear on a few things: this is not for a current assignment (I'm not trying to get someone to do my work for me); I have already completed the project and been graded on it. I just did not fully understand my professor's reasoning. Finally, I do not have a lot of Java experience, so I was a little confused about why my catch wasn't good enough.
Buffer from file could have been null.
The file may be empty. That is, end-of-file is reached upon opening the file. loadedString = buffRead.readLine() would then have returned null.
Perhaps you should have fixed this by adding something like if (loadedString == null) loadedString = "";
File was not found
As explained in the documentation of the constructor of FileInputStream(String) it may throw a FileNotFoundException. You do catch this in your IOException clause (since FileNotFoundException is an IOException), so it's fine, but you could perhaps have done:
} catch (FileNotFoundException fnfe) {
System.err.println("File not fonud!");
} catch (IOException ioex {
System.err.println("Some other error");
}
File stream wasn't closed
You do call fin.close(), which in normal circumstances closes the file stream. Perhaps he means that it's not always closed. The readLine could potentially throw an IOException, in which case the close() is skipped. That's the reason for having it in a finally clause (which makes sure it gets called no matter what happens in the try-block). (*)
(*) As @mmyers correctly points out, putting the close() in a finally block will actually not be sufficient, since you call System.exit(-1) in the catch-block. If that really is the desired behavior, you could set an error flag in the catch-clause, and exit after the finally-clause if this flag is set.
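A minimal sketch of that error-flag idea; the helper method and its parameter are my own naming, while the messages come from the question:

private static String readFirstLineOrExit(String path) {
    String line = null;
    boolean failed = false;
    FileInputStream fin = null;
    try {
        fin = new FileInputStream(path);
        BufferedReader buffRead = new BufferedReader(new InputStreamReader(fin));
        line = buffRead.readLine();
    } catch (IOException e) {
        System.err.println("CRITICAL: Unable to open text file!");
        failed = true;          // remember the failure instead of exiting right here
    } finally {
        if (fin != null) {
            try {
                fin.close();    // runs even if readLine() threw
            } catch (IOException e) {
                // nothing sensible to do if closing fails
            }
        }
    }
    if (failed) {               // exit only after the stream has been closed
        System.err.println("Exiting!");
        System.exit(-1);
    }
    return line;
}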
But what if your program threw an exception on the second or third line of your try block?
buffRead = new BufferedReader(new InputStreamReader(fin));
loadedString = buffRead.readLine();
By this point, a filehandle has been opened and assigned to fin. You could trap the exception but the filehandle would remain open.
You'll want to move the fin.close() statement to a finally block:
} finally {
try {
if (fin != null) {
fin.close();
}
} catch (IOException e2) {
}
}
Say buffRead.readLine() throws an exception: will your FileInputStream ever be closed, or will that line be skipped? The purpose of a finally block is that even in exceptional circumstances, the code in the finally block will execute.
There are a lot of other errors that can happen besides failing to open the file.
In the end you may end up with a fin that may or may not have been assigned, which you have to guard against with a null check, and do not forget that closing the file can itself throw a new exception.
My advice is to capture this in a separate routine and let the IOExceptions fly out of it:
something like
private String readFile() throws IOException {
String s;
FileInputStream fin = null;
try {
fin = new FileInputStream(programPath + fileToParse);
BufferedReader buffRead = new BufferedReader(new InputStreamReader(fin));
s = buffRead.readLine();
} finally {
if (fin != null) {
fin.close();
}
}
return s;
}
and then where you need it:
try {
loadedString = readFile();
} catch (IOException e) {
// handle issue gracefully
}