Locking files while using Java Logger - java

I am creating a logger that will log things throughout my program. It seems to work fine. This is what the class looks like.
public class TESTLogger {
protected static File file;
protected static Logger logger = Logger.getLogger("");
public TESTLogger(String logName) {
setupLogger(logName);
}
private static void setupLogger(String logName) {
String basePath = Utils.getBasePath();
File logsDir = new File(basePath);
if(logsDir.exists() == false) {
logsDir.mkdir();
}
String filePath = basePath + File.separator + logName + ".%g.log";
file = new File(filePath);
try {
FileHandler fileHandler = new FileHandler(filePath, 5242880, 5, true);
fileHandler.setFormatter(new java.util.logging.Formatter() {
@Override
public String format(LogRecord logRecord) {
if(logRecord.getLevel() == Level.INFO) {
return "[INFO " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
} else if(logRecord.getLevel() == Level.WARNING) {
return "[WARN " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
} else if(logRecord.getLevel() == Level.SEVERE) {
return "[ERROR " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
} else {
return "[OTHER " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
}
}
});
logger.addHandler(fileHandler);
} catch (IOException e) {
}
}
private static void writeToFile(Level level, String writeThisToFile) {
logger.log(level, writeThisToFile);
}
private static String createDateTimeLog() {
String dateTime = "";
Date date = new Date();
SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd H:mm:ss");
dateTime = simpleDateFormat.format(date);
return dateTime;
}
public void error(String message) {
writeToFile(Level.SEVERE, message);
}
public void warn(String message) {
writeToFile(Level.WARNING, message);
}
public void info(String message) {
writeToFile(Level.INFO, message);
}
}
When my application starts, it creates the TESTLogger object. Then whenever I log, I call logger.info / logger.warn / logger.error with my log message. That is working great. However, multiple instances of my jar can be running at the same time. When that happens, it creates a new instance of the log file. For example, I could have myLog.0.log; when the second instance of the jar logs something, it goes into myLog.0.log.1, then myLog.0.log.2 and so on.
I don't want to create all these different instances of my log file. I thought I might use a FileLock (from the java.nio.channels package). However, I have not been able to figure out how to do that with the Java Logger class I am using (java.util.logging).
Any ideas on how to prevent this from happening would be great. Thanks in advance.
EDIT:
Ok. So I have rewritten writeToFile and it seems to work a little better. However, every now and again I still get a .1 log. It doesn't happen as much as it used to. And it NEVER gets to .2 (it used to get all the way up to .100). I would still like to prevent this .1, though.
This is what my code looks like now:
private static void writeToFile(Level level, String writeThisToFile) {
try {
File file = new File("FileLock");
FileChannel channel = new RandomAccessFile(file, "rw").getChannel();
FileLock lock = null;
try {
lock = channel.tryLock(0, Long.MAX_VALUE, true);
if(lock != null) {
logger.log(level, writeThisToFile);
}
} catch (OverlappingFileLockException e) {
}
finally {
if(lock != null) {
lock.release();
}
channel.close();
}
} catch (IOException e) {}
}
EDIT #2: What it currently looks like.
Entrance point into my JAR:
public class StartingPoint {
public static void main(String[] args) {
MyLogger logger = new MyLogger("myFirstLogger");
logger.info("Info test message");
logger.warn("Warning test message");
logger.error("Error test message");
}
}
MyLogger class:
public class MyLogger {
protected static File file;
protected static Logger logger = Logger.getLogger("");
public MyLogger(String loggerName) {
setupLogger(loggerName);
}
private void setupLogger(String loggerName) {
String filePath = loggerName + "_%g" + ".log";
file = new File(filePath);
try {
FileHandler fileHandler = new FileHandler(filePath, 5242880, 5, true);
fileHandler.setFormatter(new java.util.logging.Formatter() {
@Override
public String format(LogRecord logRecord) {
if(logRecord.getLevel() == Level.INFO) {
return "[INFO " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
} else if(logRecord.getLevel() == Level.WARNING) {
return "[WARN " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
} else if(logRecord.getLevel() == Level.SEVERE) {
return "[ERROR " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
} else {
return "[OTHER " + createDateTimeLog() + "] " + logRecord.getMessage() + "\r\n";
}
}
});
logger.addHandler(fileHandler);
logger.addHandler(new SharedFileHandler()); // <--- SharedFileHandler added
} catch (IOException e) {}
}
private void writeToFile(Level level, String writeThisToFile) {
logger.log(level, writeThisToFile);
}
private static String createDateTimeLog() {
String dateTime = "";
Date date = new Date();
SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd H:mm:ss");
dateTime = simpleDateFormat.format(date);
return dateTime;
}
public void error(String message) {
writeToFile(Level.SEVERE, message);
}
public void warn(String message) {
writeToFile(Level.WARNING, message);
}
public void info(String message) {
writeToFile(Level.INFO, message);
}
}
And finally... SharedFileHandler:
public class SharedFileHandler extends Handler {
private final FileChannel mutex;
private final String pattern;
public SharedFileHandler() throws IOException {
this("loggerLockFile");
}
public SharedFileHandler(String pattern) throws IOException {
setFormatter(new SimpleFormatter());
this.pattern = pattern;
mutex = new RandomAccessFile(pattern, "rw").getChannel();
}
@Override
public void publish(LogRecord record) {
if (isLoggable(record)) {
record.getSourceMethodName(); //Infer caller.
try {
FileLock ticket = mutex.lock();
try {
doPublish(record);
} finally {
ticket.release();
}
} catch (IOException e) {}
catch (OverlappingFileLockException e) {}
catch (NullPointerException e) {}
}
}
private void doPublish(LogRecord record) throws IOException {
final FileHandler h = new FileHandler(pattern, 5242880, 5, true);
try {
h.setEncoding(getEncoding());
h.setErrorManager(getErrorManager());
h.setFilter(getFilter());
h.setFormatter(getFormatter());
h.setLevel(getLevel());
h.publish(record);
h.flush();
} finally {
h.close();
}
}
@Override
public void flush() {}
@Override
public synchronized void close() throws SecurityException {
super.setLevel(Level.OFF);
try {
mutex.close();
} catch (IOException ioe) {}
}
}

The FileHandler does everything it can to prevent two concurrently running JVMs from writing to the same log file. If this behavior were allowed, the log file would be almost impossible to read and understand.
If you really want to write everything to one log file, then you have to do one of the following:
Prevent concurrent JVM processes from starting by changing how it is launched.
Have your code detect if another JVM is running your code and exit before creating a FileHandler.
Have each JVM write to a distinct log file and create code to safely merge the files into one (see the sketch after this list).
Create a proxy Handler that creates and closes a FileHandler for each log record. The proxy handler would use a predefined file name (different from the log file) and a FileLock to serialize access to the log file from different JVMs.
Use a dedicated process to write to the log file and have all the JVMs send log messages to that process.
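As an untested sketch of the third option, each JVM can be given its own predictable file name by embedding its process id in the FileHandler pattern (the class and method names below are assumptions for illustration, not part of the answer):
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.util.logging.FileHandler;
import java.util.logging.Logger;

public class PerProcessLogSetup {

    // Hypothetical helper: builds a FileHandler whose pattern contains this JVM's pid,
    // e.g. "myLog_12345.0.log", so concurrent processes never contend for the same file
    // and no ".1" conflict files appear.
    public static FileHandler createHandler(String baseName) throws IOException {
        // RuntimeMXBean.getName() is typically of the form "pid@hostname" on HotSpot JVMs.
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        return new FileHandler(baseName + "_" + pid + ".%g.log", 5242880, 5, true);
    }

    public static void main(String[] args) throws IOException {
        Logger logger = Logger.getLogger("");
        logger.addHandler(createHandler("myLog"));
        logger.info("per-process log file test");
    }
}
The per-process files can then be merged offline, for example by the timestamp each formatter already writes into every record.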
Here is an untested example of a proxy handler:
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.logging.*;
import static java.nio.file.StandardOpenOption.*;
public class SharedFileHandler extends Handler {
private final FileChannel mutex;
private final String pattern;
public SharedFileHandler() throws IOException {
this("%hjava%g.log");
}
public SharedFileHandler(String pattern) throws IOException {
setFormatter(new SimpleFormatter());
this.pattern = pattern;
Path p = Paths.get(new File(".").getCanonicalPath(),
pattern.replace("%", "") + ".lck");
mutex = FileChannel.open(p, CREATE, WRITE, DELETE_ON_CLOSE);
}
@Override
public void publish(LogRecord record) {
if (isLoggable(record)) {
record.getSourceMethodName(); //Infer caller.
try {
FileLock ticket = mutex.lock();
try {
doPublish(ticket, record);
} finally {
ticket.release();
}
} catch (IOException | OverlappingFileLockException ex) {
reportError(null, ex, ErrorManager.WRITE_FAILURE);
}
}
}
private synchronized void doPublish(FileLock ticket, LogRecord record) throws IOException {
if (!ticket.isValid()) {
return;
}
final FileHandler h = new FileHandler(pattern, 5242880, 5, true);
try {
h.setEncoding(getEncoding());
h.setErrorManager(getErrorManager());
h.setFilter((Filter) null);
h.setFormatter(getFormatter());
h.setLevel(getLevel());
h.publish(record);
h.flush();
} finally {
h.close();
}
}
@Override
public void flush() {
}
@Override
public synchronized void close() throws SecurityException {
super.setLevel(Level.OFF);
try {
mutex.close();
} catch (IOException ioe) {
this.reportError(null, ioe, ErrorManager.CLOSE_FAILURE);
}
}
}
Here is a simple test case
public static void main(String[] args) throws Exception {
Random rnd = new Random();
logger.addHandler(new SharedFileHandler());
String id = ManagementFactory.getRuntimeMXBean().getName();
for (int i = 0; i < 600; i++) {
logger.log(Level.INFO, id);
Thread.sleep(rnd.nextInt(100));
}
}

Related

RMI does not return response over internet

I have a simple RMI server and RMI client. When I run the server and client on the same network, my server function returns the result properly. But when my server and client are on different networks and the processing time is more than 3-4 minutes, the client cannot get the result, even though the server finishes the operation.
Here is my entire server code:
public class SimpleServer {
ServerRemoteObject mRemoteObject;
public static int RMIInPort = 27550;
public static int delay = 0;
public byte[] handleEvent(byte[] mMessage) throws Exception {
String request = new String(mMessage, "UTF-8");
// if ("hearthbeat".equalsIgnoreCase(request)) {
// System.out.println("returning for hearthbeat");
// return "hearthbeat response".getBytes("UTF-8");
// }
System.out.println(request);
Thread.sleep(delay);
System.out.println("returning response");
return "this is response".getBytes("UTF-8");
}
public void bindYourself(int rmiport) {
try {
mRemoteObject = new ServerRemoteObject(this);
java.rmi.registry.Registry iRegistry = LocateRegistry.getRegistry(rmiport);
iRegistry.rebind("Server", mRemoteObject);
} catch (Exception e) {
e.printStackTrace();
mRemoteObject = null;
}
}
public static void main(String[] server) {
int rmiport = Integer.parseInt(server[0]);
RMIInPort = Integer.parseInt(server[1]);
delay = Integer.parseInt(server[2]);
System.out.println("server java:" + System.getProperty("java.version"));
System.out.println("server started on:" + rmiport + "/" + RMIInPort);
System.out.println("server delay on:" + delay);
SimpleServer iServer = new SimpleServer();
iServer.bindYourself(rmiport);
while (true) {
try {
Thread.sleep(10000);
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
and here is my client code:
public class SimpleClient {
ISimpleServer iServer;
public SimpleClient(String p_strServerIp, String p_strCMName, int nRMIPort) {
try {
if (nRMIPort == 1099) {
iServer = (ISimpleServer) Naming.lookup("rmi://" + p_strServerIp + "/" + p_strCMName);
} else {
Registry rmiRegistry = null;
rmiRegistry = LocateRegistry.getRegistry(p_strServerIp, nRMIPort);
iServer = (ISimpleServer) rmiRegistry.lookup(p_strCMName);
}
} catch (Exception ex) {
ex.printStackTrace();
iServer = null;
}
}
public static void main(String... strings) {
String ip = strings[0];
int rmiport = Integer.parseInt(strings[1]);
System.out.println("client java:" + System.getProperty("java.version"));
System.out.println("client is looking for:" + ip + ":" + rmiport);
SimpleClient iClient = new SimpleClient(ip, "Server", rmiport);
try {
byte[] response = iClient.iServer.doaction("this is request".getBytes("UTF-8"));
System.out.println(new String(response, "UTF-8"));
} catch (Exception e) {
e.printStackTrace();
}
}
}
and here is my rmi-registry code:
public class SimpleRMI implements Runnable {
Registry mRegistry = null;
public SimpleRMI(int nPort) {
try {
mRegistry = new sun.rmi.registry.RegistryImpl(nPort);
} catch (RemoteException e1) {
e1.printStackTrace();
}
}
@Override
public void run() {
while (true) {
try {
Thread.sleep(360000);
} catch (Exception e) {
e.printStackTrace();
}
}
}
public static void main(String... strings) {
int rmiport = Integer.parseInt(strings[0]);
System.out.println("rmi java:" + System.getProperty("java.version"));
System.out.println("rmi started on:" + rmiport);
SimpleRMI iRegisry = new SimpleRMI(rmiport);
Thread tThread = new Thread(iRegisry);
tThread.start();
byte[] bytes = new byte[1];
while (true) {
try {
System.in.read(bytes);
if (bytes[0] == 13) {
try {
iRegisry.listRegistry();
} catch (Exception exc2) {
exc2.printStackTrace();
}
}
} catch (Exception exc) {
exc.printStackTrace();
}
}
}
private void listRegistry() {
String[] strList = null;
try {
strList = mRegistry.list();
if (strList != null) {
for (int i = 0; i < strList.length; i++) {
int j = i + 1;
String name = strList[i];
java.rmi.Remote r = mRegistry.lookup(name);
System.out.println(j + ". " + strList[i] + " -> "
+ r.toString());
}
}
System.out.println();
} catch (Exception exc) {
exc.printStackTrace();
}
}
}
and my remote interface and remote object:
public interface ISimpleServer extends java.rmi.Remote {
public byte[] doaction(byte[] message) throws java.rmi.RemoteException;
}
@SuppressWarnings("serial")
public class ServerRemoteObject extends UnicastRemoteObject implements ISimpleServer {
SimpleServer Server = null;
public ServerRemoteObject(SimpleServer pServer) throws RemoteException {
super(SimpleServer.RMIInPort);
Server = pServer;
}
@Override
public byte[] doaction(byte[] message) throws RemoteException {
try {
return Server.handleEvent(message);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
}
When I run the client and server on different networks (I run the client on my home network) and the delay is more than 3-4 minutes, the server prints "returning response" but the client still waits for the response. If the delay is only 1 minute, the client gets the result properly.
Can you please help me find where the problem is?

Cannot figure out an exception

I am writing simple code to asynchronously write logs to a file, but I am finding it difficult to figure out one issue.
I get java.util.NoSuchElementException in logNodes.removeFirst(). How can this happen if I check whether the list is empty first?
This issue mostly occurs when I log very frequently.
If anyone can explain why this is happening, it would be much appreciated.
My code:
private static class FileLogger extends Thread {
private File logFile;
private PrintWriter logWriter;
private final LinkedList<LogNode> logNodes = new LinkedList<>();
public FileLogger(Context context) {
String dateString = (String) DateFormat.format("yyyy-MM-dd_HH:mm:ss", new Date());
File logsDir = new File(context.getCacheDir(), "logs");
if (logsDir.exists()) {
for (File file : logsDir.listFiles()) {
file.delete();
}
}
try {
logFile = new File(logsDir, dateString + ".log");
if (!logFile.exists()) {
logFile.getParentFile().mkdirs();
logFile.createNewFile();
}
logWriter = new PrintWriter(new FileOutputStream(logFile));
start();
} catch (IOException ignored) {
}
}
public void log(Date date, String tag, String msg) {
if (isAlive()) {
logNodes.addLast(new LogNode(date, tag, msg));
synchronized (this) {
this.notify();
}
}
}
@Override
public void run() {
while (true) {
if (logNodes.isEmpty()) {
try {
synchronized (this) {
this.wait();
}
} catch (InterruptedException e) {
logWriter.flush();
logWriter.close();
return;
}
} else {
LogNode node = logNodes.removeFirst();
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS", Locale.US);
logWriter.println(String.format(
"%s %s.%s", dateFormat.format(node.date), node.tag, node.msg
));
logWriter.flush();
}
}
}
private class LogNode {
final Date date;
final String tag;
final String msg;
public LogNode(Date date, String tag, String msg) {
this.date = date;
this.tag = tag;
this.msg = msg;
}
}
}
Reason
You did not synchronize the multiple logging threads.
Suppose you have thread1 and thread2:
thread1 has written node1 into the queue.
FileLogger noticed node1 when it called isEmpty, while thread2 did not notice it.
thread2 thinks the list is still empty and sets the list's first and last node to node2, which means node1 gets overwritten.
Since there is no other synchronization, node2 might also not be noticed by FileLogger, and a NoSuchElementException will be thrown.
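For illustration only, a minimal sketch of fixing the hand-rolled list in place (an untested assumption about the intent, not the recommended approach) would be to guard every access to logNodes with the same monitor that wait()/notify() already use:
// Sketch: producer and consumer both synchronize on `this` around every logNodes access.
public void log(Date date, String tag, String msg) {
    if (isAlive()) {
        synchronized (this) {
            logNodes.addLast(new LogNode(date, tag, msg));
            this.notify();
        }
    }
}

@Override
public void run() {
    while (true) {
        LogNode node;
        synchronized (this) {
            while (logNodes.isEmpty()) {
                try {
                    this.wait();
                } catch (InterruptedException e) {
                    logWriter.flush();
                    logWriter.close();
                    return;
                }
            }
            node = logNodes.removeFirst();
        }
        // Formatting and I/O happen outside the lock so producers are not blocked.
        SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS", Locale.US);
        logWriter.println(String.format("%s %s.%s", dateFormat.format(node.date), node.tag, node.msg));
        logWriter.flush();
    }
}
The cleaner route, shown below, is to drop the hand-rolled queue entirely.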
Solution
Instead of implementing a blocking queue yourself, use a BlockingQueue provided by java.util.concurrent and let it do the synchronization for you.
private static class FileLogger extends Thread {
private File logFile;
private PrintWriter logWriter;
private final BlockingQueue<LogNode> logNodes = new LinkedBlockingQueue<>();
public FileLogger(Context context) {
String dateString = (String) DateFormat.format("yyyy-MM-dd_HH:mm:ss", new Date());
File logsDir = new File(context.getCacheDir(), "logs");
if (logsDir.exists()) {
for (File file : logsDir.listFiles()) {
file.delete();
}
}
try {
logFile = new File(logsDir, dateString + ".log");
if (!logFile.exists()) {
logFile.getParentFile().mkdirs();
logFile.createNewFile();
}
logWriter = new PrintWriter(new FileOutputStream(logFile));
start();
} catch (IOException ignored) {
}
}
public void log(Date date, String tag, String msg) {
if (isAlive()) {
logNodes.add(new LogNode(date, tag, msg));
}
}
@Override
public void run() {
while (true) {
try {
LogNode node = logNodes.take();
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS", Locale.US);
logWriter.println(String.format(
"%s %s.%s", dateFormat.format(node.date), node.tag, node.msg
));
logWriter.flush();
} catch (InterruptedException e) {
logWriter.flush();
logWriter.close();
return;
}
}
}
}

Splitting huge CSV by custom filter?

I have a huge (>5 GB) CSV file in the format:
username,transaction
I want to produce, as output, a separate CSV file for each user containing only that user's transactions, in the same format. I have a few ideas in mind, but I want to hear other ideas for an effective (fast and memory-efficient) implementation.
Here is what I have done up to now. The first test reads/processes/writes in a single thread; the second test uses many threads. Performance is not that good, so I think I'm doing something wrong. Please correct me.
public class BatchFileReader {
private ICsvBeanReader beanReader;
private double total;
private String[] header;
private CellProcessor[] processors;
private DataTransformer<HashMap<String, List<LoginDto>>> processor;
private boolean hasMoreRecords = true;
public BatchFileReader(String file, DataTransformer<HashMap<String, List<LoginDto>>> processor) {
try {
this.processor = processor;
this.beanReader = new CsvBeanReader(new FileReader(file), CsvPreference.STANDARD_PREFERENCE);
header = CSVUtils.getHeader(beanReader.getHeader(true));
processors = CSVUtils.getProcessors();
} catch (IOException e) {
e.printStackTrace();
}
}
public void read() {
try {
readFile();
} catch (IOException e) {
e.printStackTrace();
} finally {
if (beanReader != null) {
try {
beanReader.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
private void readFile() throws IOException {
while (hasMoreRecords) {
long start = System.currentTimeMillis();
HashMap<String, List<LoginDto>> usersBatch = readBatch();
long end = System.currentTimeMillis();
System.out.println("Reading batch for " + ((end - start) / 1000f) + " seconds.");
total +=((end - start)/ 1000f);
if (processor != null && !usersBatch.isEmpty()) {
processor.transform(usersBatch);
}
}
System.out.println("total = " + total);
}
private HashMap<String, List<LoginDto>> readBatch() throws IOException {
HashMap<String, List<LoginDto>> users = new HashMap<String, List<LoginDto>>();
int readLoginCount = 0;
while (readLoginCount < CONFIG.READ_BATCH_SIZE) {
LoginDto login = beanReader.read(LoginDto.class, header, processors);
if (login != null) {
if (!users.containsKey(login.getUsername())) {
List<LoginDto> logins = new LinkedList<LoginDto>();
users.put(login.getUsername(), logins);
}
users.get(login.getUsername()).add(login);
readLoginCount++;
} else {
hasMoreRecords = false;
break;
}
}
return users;
}
}
public class BatchFileWriter<T> {
private final String file;
private final List<T> processedData;
public BatchFileWriter(final String file, List<T> processedData) {
this.file = file;
this.processedData = processedData;
}
public void write() {
try {
writeFile(file, processedData);
} catch (IOException e) {
e.printStackTrace();
} finally {
}
}
private void writeFile(final String file, final List<T> processedData) throws IOException {
System.out.println("START WRITE " + " " + file);
FileWriter writer = new FileWriter(file, true);
long start = System.currentTimeMillis();
for (T record : processedData) {
writer.write(record.toString());
writer.write("\n");
}
writer.flush();
writer.close();
long end = System.currentTimeMillis();
System.out.println("Writing in file " + file + " complete for " + ((end - start) / 1000f) + " seconds.");
}
}
public class LoginsTest {
private static final ExecutorService executor = Executors.newSingleThreadExecutor();
private static final ExecutorService procExec = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
@Test
public void testSingleThreadCSVtoCSVSplit() throws InterruptedException, ExecutionException {
long start = System.currentTimeMillis();
DataTransformer<HashMap<String, List<LoginDto>>> simpleSplitProcessor = new DataTransformer<HashMap<String, List<LoginDto>>>() {
@Override
public void transform(HashMap<String, List<LoginDto>> data) {
for (String field : data.keySet()) {
new BatchFileWriter<LoginDto>(field + ".csv", data.get(field)).write();
}
}
};
BatchFileReader reader = new BatchFileReader("loadData.csv", simpleSplitProcessor);
reader.read();
long end = System.currentTimeMillis();
System.out.println("TOTAL " + ((end - start)/ 1000f) + " seconds.");
}
@Test
public void testMultiThreadCSVtoCSVSplit() throws InterruptedException, ExecutionException {
long start = System.currentTimeMillis();
System.out.println(start);
final DataTransformer<HashMap<String, List<LoginDto>>> simpleSplitProcessor = new DataTransformer<HashMap<String, List<LoginDto>>>() {
@Override
public void transform(HashMap<String, List<LoginDto>> data) {
System.out.println("transform");
processAsync(data);
}
};
final CountDownLatch readLatch = new CountDownLatch(1);
executor.execute(new Runnable() {
@Override
public void run() {
BatchFileReader reader = new BatchFileReader("loadData.csv", simpleSplitProcessor);
reader.read();
System.out.println("read latch count down");
readLatch.countDown();
}});
System.out.println("read latch before await");
readLatch.await();
System.out.println("read latch after await");
procExec.shutdown();
executor.shutdown();
long end = System.currentTimeMillis();
System.out.println("TOTAL " + ((end - start)/ 1000f) + " seconds.");
}
private void processAsync(final HashMap<String, List<LoginDto>> data) {
procExec.execute(new Runnable() {
@Override
public void run() {
for (String field : data.keySet()) {
writeASync(field, data.get(field));
}
}
});
}
private void writeASync(final String field, final List<LoginDto> data) {
procExec.execute(new Runnable() {
@Override
public void run() {
new BatchFileWriter<LoginDto>(field + ".csv", data).write();
}
});
}
}
Would it not be better to use Unix commands to sort and then split the original file?
Something like: cat txn.csv | sort > txn-sorted.csv
From there, get a listing of the unique usernames via grep, and then grep the sorted file for each username.
If you know Camel already, I'd write a simple Camel route to:
Read line from file
Parse the line
Write to the correct output file
It's a very simple route, but if you want it as fast as possible it is trivially easy to make it multithreaded.
E.g. your route would look something like:
from("file:/myfile.csv")
.beanRef("lineParser")
.to("seda:internal-queue");
from("seda:internal-queue")
.concurrentConsumers(5)
.to("fileWriter");
If you don't know Camel, then it's not worth learning it just for this one task. However, you are probably going to need to make it multithreaded to get the maximum performance. You'll have to experiment with where best to put the threading, as it will depend on which parts of the operation are slowest.
The multithreading will use up more memory, so you'll need to balance memory efficiency against performance.
I would open/append a new output file for each user. If you wanted to minimize memory usage and incur more I/O overhead, you could do something like the following, though you'd probably want to use a real CSV parser like Super CSV (http://supercsv.sourceforge.net/index.html):
Scanner s = new Scanner(new File("/my/dir/users-and-transactions.txt"));
while (s.hasNextLine()) {
String line = s.nextLine();
String[] tokens = line.split(",");
String user = tokens[0];
String transaction = tokens[1];
PrintStream out = new PrintStream(new FileOutputStream("/my/dir/" + user, true));
out.println(transaction);
out.close();
}
s.close();
If you've got a reasonable amount of memory, you could create a Map of user name to OutputStream. Each time you see a user string, you could get the existing OutputStream for that user name or create a new one if none exists.
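A sketch of that map-of-writers idea (untested; the class name, file paths, and the choice of append mode are assumptions):
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class SplitByUser {
    public static void main(String[] args) throws IOException {
        // One open writer per user so each output file is opened only once.
        Map<String, PrintWriter> writers = new HashMap<>();
        try (Scanner s = new Scanner(new File("/my/dir/users-and-transactions.txt"))) {
            while (s.hasNextLine()) {
                String[] tokens = s.nextLine().split(",", 2);
                if (tokens.length < 2) {
                    continue; // skip malformed lines
                }
                PrintWriter out = writers.computeIfAbsent(tokens[0], user -> {
                    try {
                        // Append mode, in case the program is re-run over new data.
                        return new PrintWriter(new FileWriter("/my/dir/" + user + ".csv", true));
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
                out.println(tokens[1]);
            }
        } finally {
            for (PrintWriter w : writers.values()) {
                w.close();
            }
        }
    }
}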

Watching a Directory for Changes in Java

I want to watch a directory for file changes, so I used WatchService from java.nio. I can successfully listen for the file-created event, but I can't listen for the file-modified event. I checked the official Java tutorial, but I am still struggling.
Here is the source code.
import static java.nio.file.LinkOption.NOFOLLOW_LINKS;
import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;
import static java.nio.file.StandardWatchEventKinds.OVERFLOW;
import static java.nio.file.StandardWatchEventKinds.ENTRY_DELETE;
import static java.nio.file.StandardWatchEventKinds.ENTRY_MODIFY;
import java.io.File;
import java.io.IOException;
import java.nio.file.FileSystem;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.WatchEvent;
import java.nio.file.WatchEvent.Kind;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
public class MainWatch {
public static void watchDirectoryPath(Path path) {
// Sanity check - Check if path is a folder
try {
Boolean isFolder = (Boolean) Files.getAttribute(path,
"basic:isDirectory", NOFOLLOW_LINKS);
if (!isFolder) {
throw new IllegalArgumentException("Path: " + path
+ " is not a folder");
}
} catch (IOException ioe) {
// Folder does not exists
ioe.printStackTrace();
}
System.out.println("Watching path: " + path);
// We obtain the file system of the Path
FileSystem fs = path.getFileSystem();
// We create the new WatchService using the new try() block
try (WatchService service = fs.newWatchService()) {
// We register the path to the service
// We watch for creation events
path.register(service, ENTRY_CREATE);
path.register(service, ENTRY_MODIFY);
path.register(service, ENTRY_DELETE);
// Start the infinite polling loop
WatchKey key = null;
while (true) {
key = service.take();
// Dequeueing events
Kind<?> kind = null;
for (WatchEvent<?> watchEvent : key.pollEvents()) {
// Get the type of the event
kind = watchEvent.kind();
if (OVERFLOW == kind) {
continue; // loop
} else if (ENTRY_CREATE == kind) {
// A new Path was created
Path newPath = ((WatchEvent<Path>) watchEvent)
.context();
// Output
System.out.println("New path created: " + newPath);
} else if (ENTRY_MODIFY == kind) {
// modified
Path newPath = ((WatchEvent<Path>) watchEvent)
.context();
// Output
System.out.println("New path modified: " + newPath);
}
}
if (!key.reset()) {
break; // loop
}
}
} catch (IOException ioe) {
ioe.printStackTrace();
} catch (InterruptedException ie) {
ie.printStackTrace();
}
}
public static void main(String[] args) throws IOException,
InterruptedException {
// Folder we are going to watch
// Path folder =
// Paths.get(System.getProperty("C:\\Users\\Isuru\\Downloads"));
File dir = new File("C:\\Users\\Isuru\\Downloads");
watchDirectoryPath(dir.toPath());
}
}
Actually, you have subscribed to the events incorrectly. Only the last registration, with the ENTRY_DELETE event type, has taken effect.
To register for all kinds of events at once you should use:
path.register(service, ENTRY_CREATE, ENTRY_MODIFY, ENTRY_DELETE);
Warning! Shameless self-promotion!
I have created a wrapper around Java 1.7's WatchService that allows registering a directory and any number of glob patterns. This class will take care of the filtering and only emit the events you are interested in.
DirectoryWatchService watchService = new SimpleDirectoryWatchService(); // May throw
watchService.register( // May throw
new DirectoryWatchService.OnFileChangeListener() {
@Override
public void onFileCreate(String filePath) {
// File created
}
@Override
public void onFileModify(String filePath) {
// File modified
}
@Override
public void onFileDelete(String filePath) {
// File deleted
}
},
<directory>, // Directory to watch
<file-glob-pattern-1>, // E.g. "*.log"
<file-glob-pattern-2>, // E.g. "input-?.txt"
... // As many patterns as you like
);
watchService.start();
Complete code is in this repo.
I made some classes for this.
public interface FileAvailableListener {
public void fileAvailable(File file) throws IOException;
}
and
public class FileChange {
private long lastModified;
private long size;
private long lastCheck;
public FileChange(File file) {
this.lastModified=file.lastModified();
this.size=file.length();
this.lastCheck = System.currentTimeMillis();
}
public long getLastModified() {
return lastModified;
}
public long getSize() {
return size;
}
public long getLastCheck() {
return lastCheck;
}
public boolean isStable(FileChange other,long stableTime) {
boolean b1 = (getLastModified()==other.getLastModified());
boolean b2 = (getSize()==other.getSize());
boolean b3 = ((other.getLastCheck()-getLastCheck())>stableTime);
return b1 && b2 && b3;
}
}
and
public class DirectoryWatcher {
private Timer timer;
private List<DirectoryMonitorTask> tasks = new ArrayList<DirectoryMonitorTask>();
public DirectoryWatcher() throws URISyntaxException, IOException, InterruptedException {
super();
timer = new Timer(true);
}
public void addDirectoryMonitoringTask(DirectoryMonitorTask task,long period) {
tasks.add(task);
timer.scheduleAtFixedRate(task, 5000, period);
}
public List<DirectoryMonitorTask> getTasks() {
return Collections.unmodifiableList(tasks);
}
public Timer getTimer() {
return timer;
}
}
and
class DirectoryMonitorTask extends TimerTask {
public final static String DIRECTORY_NAME_ARCHIVE="archive";
public final static String DIRECTORY_NAME_ERROR="error";
public final static String LOCK_FILE_EXTENSION=".lock";
public final static String ERROR_FILE_EXTENSION=".error";
public final static String FILE_DATE_FORMAT="yyyyMMddHHmmssSSS";
private String name;
private FileAvailableListener listener;
private Path directory;
private File directoryArchive;
private File directoryError;
private long stableTime;
private FileFilter filter;
private WatchService watchService;
private SimpleDateFormat dateFormatter = new SimpleDateFormat(FILE_DATE_FORMAT);
private Hashtable<File,FileChange> fileMonitor = new Hashtable<File,FileChange>();
public DirectoryMonitorTask(String name,FileAvailableListener listener,Path directory,long stableTime,FileFilter filter) throws IOException {
super();
this.name=name;
this.listener=listener;
this.directory=directory;
if (stableTime<1) {
stableTime=1000;
}
this.stableTime=stableTime;
this.filter=filter;
validateNotNull("Name",name);
validateNotNull("Listener",listener);
validateNotNull("Directory",directory);
validate(directory);
directoryArchive = new File(directory.toFile(),DIRECTORY_NAME_ARCHIVE);
directoryError = new File(directory.toFile(),DIRECTORY_NAME_ERROR);
directoryArchive.mkdir();
directoryError.mkdir();
//
log("Constructed for "+getDirectory().toFile().getAbsolutePath());
initialize();
//
watchService = FileSystems.getDefault().newWatchService();
directory.register(watchService,StandardWatchEventKinds.ENTRY_CREATE,StandardWatchEventKinds.ENTRY_DELETE,StandardWatchEventKinds.ENTRY_MODIFY);
log("Started");
}
private void initialize() {
File[] files = getDirectory().toFile().listFiles();
for (File file : files) {
if (isLockFile(file)) {
file.delete();
} else if (acceptFile(file)) {
fileMonitor.put(file,new FileChange(file));
log("Init file added -"+file.getName());
}
}
}
public SimpleDateFormat getDateFormatter() {
return dateFormatter;
}
public Path getDirectory() {
return directory;
}
public FileAvailableListener getListener() {
return listener;
}
public String getName() {
return name;
}
public WatchService getWatchService() {
return watchService;
}
public long getStableTime() {
return stableTime;
}
public File getDirectoryArchive() {
return directoryArchive;
}
public File getDirectoryError() {
return directoryError;
}
public FileFilter getFilter() {
return filter;
}
public Iterator<File> getMonitoredFiles() {
return fileMonitor.keySet().iterator();
}
@Override
public void run() {
WatchKey key;
try {
key = getWatchService().take();
// Poll all the events queued for the key
for (WatchEvent<?> event : key.pollEvents()) {
@SuppressWarnings("unchecked")
Path filePath = ((WatchEvent<Path>) event).context();
File file = filePath.toFile();
if ((!isLockFile(file)) && (acceptFile(file))) {
switch (event.kind().name()) {
case "ENTRY_CREATE":
//
fileMonitor.put(file,new FileChange(file));
log("File created ["+file.getName()+"]");
break;
//
case "ENTRY_MODIFY":
//
fileMonitor.put(file,new FileChange(file));
log("File modified ["+file.getName()+"]");
break;
//
case "ENTRY_DELETE":
//
log("File deleted ["+file.getName()+"]");
createLockFile(file).delete();
fileMonitor.remove(file);
break;
//
}
}
}
// reset is invoked to put the key back to ready state
key.reset();
} catch (InterruptedException e) {
e.printStackTrace();
}
Iterator<File> it = fileMonitor.keySet().iterator();
while (it.hasNext()) {
File file = it.next();
FileChange fileChange = fileMonitor.get(file);
FileChange fileChangeCurrent = new FileChange(file);
if (fileChange.isStable(fileChangeCurrent, getStableTime())) {
log("File is stable ["+file.getName()+"]");
String filename = getDateFormatter().format(new Date())+"_"+file.getName();
File lockFile = createLockFile(file);
if (!lockFile.exists()) {
log("File do not has lock file ["+file.getName()+"]");
try {
Files.createFile(lockFile.toPath());
log("Processing file ["+file.getName()+"]");
getListener().fileAvailable(file);
file.renameTo(new File(getDirectoryArchive(),filename));
log("Moved to archive file ["+file.getName()+"]");
} catch (IOException e) {
file.renameTo(new File(getDirectoryError(),filename));
createErrorFile(file,e);
log("Moved to error file ["+file.getName()+"]");
} finally {
lockFile.delete();
}
} else {
log("File do has lock file ["+file.getName()+"]");
fileMonitor.remove(file);
}
} else {
log("File is unstable ["+file.getName()+"]");
fileMonitor.put(file,fileChangeCurrent);
}
}
}
public boolean acceptFile(File file) {
if (getFilter()!=null) {
return getFilter().accept(file);
} else {
return true;
}
}
public boolean isLockFile(File file) {
int pos = file.getName().lastIndexOf('.');
String extension="";
if (pos!=-1) {
extension = file.getName().substring(pos).trim().toLowerCase();
}
return(extension.equalsIgnoreCase(LOCK_FILE_EXTENSION));
}
private File createLockFile(File file) {
return new File(file.getParentFile(),file.getName()+LOCK_FILE_EXTENSION);
}
private void createErrorFile(File file,IOException exception) {
File errorFile = new File(file.getParentFile(),file.getName()+ERROR_FILE_EXTENSION);
StringWriter sw = null;
PrintWriter pw = null;
FileWriter fileWriter = null;
try {
//
fileWriter = new FileWriter(errorFile);
if (exception!=null) {
sw = new StringWriter();
pw = new PrintWriter(sw);
exception.printStackTrace(pw);
fileWriter.write(sw.toString());
} else {
fileWriter.write("Exception is null.");
}
//
fileWriter.flush();
//
} catch (IOException e) {
} finally {
if (sw!=null) {
try {
sw.close();
} catch (IOException e1) {
}
}
if (pw!=null) {
pw.close();
}
if (fileWriter!=null) {
try {
fileWriter.close();
} catch (IOException e) {
}
}
}
}
private void validateNotNull(String name,Object obj) {
if (obj==null) {
throw new NullPointerException(name+" is null.");
}
}
private void validate(Path directory) throws IOException {
File file = directory.toFile();
if (!file.exists()) {
throw new IOException("Directory ["+file.getAbsolutePath()+"] do not exists.");
} else if (!file.isDirectory()) {
throw new IOException("Directory ["+file.getAbsolutePath()+"] is not a directory.");
} else if (!file.canRead()) {
throw new IOException("Can not read from directory ["+file.getAbsolutePath()+"].");
} else if (!file.canWrite()) {
throw new IOException("Can not write to directory ["+file.getAbsolutePath()+"] .");
}
}
private void log(String msg) {
//TODO
System.out.println("Task ["+getName()+"] "+msg);
}
}
package p1;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import static java.nio.file.LinkOption.NOFOLLOW_LINKS;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.List;
public class WatchForFile {
public void WatchMyFolder(String path )
{
File dir = new File(path);
Path myDir= dir.toPath();
try
{
Boolean isFolder = (Boolean) Files.getAttribute(myDir,"basic:isDirectory", NOFOLLOW_LINKS);
if (!isFolder)
{
throw new IllegalArgumentException("Path: " + myDir + " is not a folder");
}
}
catch (IOException ioe)
{
ioe.printStackTrace();
}
System.out.println("Watching path: " + myDir);
try {
WatchService watcher = myDir.getFileSystem().newWatchService();
myDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,StandardWatchEventKinds.ENTRY_DELETE, StandardWatchEventKinds.ENTRY_MODIFY);
WatchKey watckKey = watcher.take();
List<WatchEvent<?>> events = watckKey.pollEvents();
for (WatchEvent event : events) {
if (event.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
System.out.println("Created: " + event.kind().toString());
}
if (event.kind() == StandardWatchEventKinds.ENTRY_DELETE) {
System.out.println("Delete: " + event.context().toString());
}
if (event.kind() == StandardWatchEventKinds.ENTRY_MODIFY) {
System.out.println("Modify: " + event.context().toString());
}
}
}
catch (Exception e)
{
System.out.println("Error: " + e.toString());
}
}
}
Check this code:
https://github.com/omkar9999/FileWatcherHandler
This project allows watching files for different file events like create, modify & delete, and then acting on these events in a generic way.
How to Use?
Create a Path object representing the directory to monitor for file events.
Path path = Paths.get("/home/omkar/test");
Implement the FileHandler interface to perform an action when a registered file event is detected.
public class FileHandlerTest implements FileHandler {
private static final Logger LOGGER = Logger.getLogger(FileHandlerTest.class.getName());
/*
* This implemented method will delete the file
*
* @see com.io.util.FileHandler#handle(java.io.File,
* java.nio.file.WatchEvent.Kind)
*/
public void handle(File file, Kind<?> fileEvent) {
LOGGER.log(Level.INFO,"Handler is triggered for file {0}",file.getPath());
if(fileEvent == StandardWatchEventKinds.ENTRY_CREATE) {
try {
boolean deleted = Files.deleteIfExists(Paths.get(file.getPath()));
assertTrue(deleted);
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
Create an instance of an Implemented FileHandler
FileHandlerTest fileHandlerTest = new FileHandlerTest();
Create an instance of a FileWatcher by passing the path, your implemented FileHandler instance, and the types of file events that you want to monitor, separated by commas.
FileWatcher fileWatcher = new FileWatcher(path, fileHandlerTest, StandardWatchEventKinds.ENTRY_CREATE);
Now Create and start a new Thread.
Thread watcherThread = new Thread(fileWatcher);
watcherThread.start();
This thread will start polling for your registered file events and will invoke your custom handle method once any of the registered events are detected.
public class FileWatcher implements Runnable {
private static final Logger LOGGER =Logger.getLogger(FileWatcher.class.getName());
private WatchService watcher;
private FileHandler fileHandler;
private List<Kind<?>> watchedEvents;
private Path directoryWatched;
/**
* @param directory      Path of the directory to watch files in
* @param fileHandler    FileHandler implemented instance to handle the file event
* @param watchRecursive whether the directory is to be watched recursively
* @param watchedEvents  set of file events watched
*
* @throws IOException
*/
public FileWatcher(Path directory, FileHandler fileHandler, boolean watchRecursive,
WatchEvent.Kind<?>... watchedEvents) throws IOException {
super();
this.watcher = FileSystems.getDefault().newWatchService();
this.fileHandler = fileHandler;
this.directoryWatched = directory;
this.watchedEvents = Arrays.asList(watchedEvents);
if (watchRecursive) {
// register all subfolders
Files.walkFileTree(directory, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
LOGGER.log(Level.INFO, "Registering {0} ", dir);
dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE, StandardWatchEventKinds.ENTRY_DELETE,
StandardWatchEventKinds.ENTRY_MODIFY);
return FileVisitResult.CONTINUE;
}
});
} else {
directory.register(watcher, watchedEvents);
}
}
@SuppressWarnings({ "unchecked" })
public void run() {
LOGGER.log(Level.INFO, "Starting FileWatcher for {0}", directoryWatched.toAbsolutePath());
WatchKey key = null;
while (true) {
try {
key = watcher.take();
if (key != null) {
for (WatchEvent<?> event : key.pollEvents()) {
WatchEvent.Kind<?> kind = event.kind();
WatchEvent<Path> ev = (WatchEvent<Path>) event;
//directory in which file event is detected
Path directory = (Path) key.watchable();
Path fileName = ev.context();
if (watchedEvents.contains(kind)) {
LOGGER.log(Level.INFO, "Invoking handle on {0}", fileName.toAbsolutePath());
fileHandler.handle(directory.resolve(fileName).toFile(), kind);
}
}
key.reset();
}
} catch (InterruptedException ex) {
LOGGER.log(Level.SEVERE, "Polling Thread was interrupted ", ex);
Thread.currentThread().interrupt();
}
}
}
}

Java File handles won't close

Yes, this question has been asked before; however, the issue seems to be a little more complex. I have used all the solutions from previous questions that relate to this.
Relates to: Freeing Java File Handles, Java keeps file locks no matter what
package me.test;
import java.io.File;
import java.util.logging.FileHandler;
import java.util.logging.Formatter;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
public class Test {
Logger log = Logger.getAnonymousLogger();
FileHandler handle;
final static String newline = System.lineSeparator();
/**
* #param args
*/
public static void main(String[] args) {
Test t = new Test();
t.run();
}
public void run()
{
for (int i = 0; i < 6; i++) {
testLogs();
change();
}
testLogs();
if (handle != null)
{
handle.close();
log.removeHandler(handle);
}
}
public static FileHandler craftFileHandler(File file, boolean append)
{
if (file == null)
return null;
FileHandler fh = null;
try {
fh = new FileHandler(file.getPath(), append);
fh.setFormatter(new Formatter() {
@Override
public String format(LogRecord record) {
return "[test] " + "[" + record.getLevel().toString() + "]" + String.format(record.getMessage(), record.getParameters()) + newline;
}
});
return new FileHandler(file.getPath(), append);
} catch (Exception e) {
if (fh != null)
fh.close();
return null;
}
}
public void change()
{
if (handle != null)
{
handle.flush();
handle.close();
log.removeHandler(handle);
}
handle = null;
File f = new File("log.log");
handle = craftFileHandler(f, true);
System.out.println(f.getAbsolutePath());
if (handle != null)
log.addHandler(handle);
}
public void testLogs()
{
if (log == null)
{
log = Logger.getLogger("test");
log.setLevel(Level.ALL);
}
log.info("This is info #1");
log.warning("Warning 1");
log.info("meh info again.");
log.severe("SEVERE HELL YA NICE TEST");
log.info("You sure its good here?");
log.info("Handler count " + log.getHandlers().length);
}
}
This code is meant to be test code. I made this test file so I can figure out how to fix this issue in a project I have.
The reason I have a loop is that the issue occurs too fast to explain, so a loop was the best way to simulate it. In my project there is a config setting for the log file that chooses where to put it, but if the file is not changed in the config on reload, it tends to leave the file locked and create extra files on EVERY reload.
I would like to get this working. If this starts working properly, then I can properly implement it in my project.
You are getting multiple files created because you are creating a FileHandler and never closing it.
fh = new FileHandler(file.getPath(), append);
...
return new FileHandler(file.getPath(), append);
The fix?
return fh;
Finally or not makes absolutely no difference here. In this case, you actually do want to be closing in the catch block, since nothing else will be able to close the handler if you don't.
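Put together, the corrected method (as it would sit inside the Test class above) might look like this untested sketch:
public static FileHandler craftFileHandler(File file, boolean append)
{
    if (file == null)
        return null;
    FileHandler fh = null;
    try {
        fh = new FileHandler(file.getPath(), append);
        fh.setFormatter(new Formatter() {
            @Override
            public String format(LogRecord record) {
                return "[test] " + "[" + record.getLevel().toString() + "]"
                        + String.format(record.getMessage(), record.getParameters()) + newline;
            }
        });
        return fh; // return the handler that was configured, instead of opening a second one
    } catch (Exception e) {
        if (fh != null)
            fh.close(); // only close on failure; the caller closes it otherwise
        return null;
    }
}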
Always close resources inside a finally block:
try {
fh = new FileHandler(file.getPath(), append);
fh.setFormatter(new Formatter() {
@Override
public String format(LogRecord record) {
return "[test] " + "[" + record.getLevel().toString() + "]" + String.format(record.getMessage(), record.getParameters()) + newline;
}
});
return new FileHandler(file.getPath(), append);
} catch (Exception e) {
return null;
// never close in catch
} finally {
// lastly close anything that may be open
if (fh != null){
try {
fh.close();
} catch (Exception ex){
// error closing
}
}
}
After using the log method, close all handlers.
this.logger.log(Level.SEVERE, (exception.getClass().getName() + ": " + exception.getMessage()) + "\r\n" + exception.getCause() + "\r\n" + "\r\n");
for (Handler handler : this.logger.getHandlers())
{
handler.close();
}
Well, one issue is here:
try {
fh = new FileHandler(file.getPath(), append);
fh.setFormatter(new Formatter() {
@Override
public String format(LogRecord record) {
return "[test] " + "[" + record.getLevel().toString() + "]" + String.format(record.getMessage(), record.getParameters()) + newline;
}
});
return new FileHandler(file.getPath(), append);
} catch (Exception e) {
if (fh != null)
fh.close();
return null;
You never close the file handler in the try statement, only closing it if there is an error. You should close it as soon as you are done with it:
try {
fh = new FileHandler(file.getPath(), append);
fh.setFormatter(new Formatter() {
@Override
public String format(LogRecord record) {
return "[test] " + "[" + record.getLevel().toString() + "]" + String.format(record.getMessage(), record.getParameters()) + newline;
}
});
//close fh
fh.close();
return new FileHandler(file.getPath(), append);
} catch (Exception e) {
if (fh != null)
fh.close();
return null;
