Java logging to File: Alternative persistence scheme

New to Java. I would like to use the logger, but with a different file persistence scheme. Instead of rotating files and overwriting, I would like the logs to be created in a time-based file system hierarchy, where each log file contains the logs of one minute. Example: if a log is generated on 2015-03-08 13:05, it will be placed in log_05.txt under /home/myUser/logs/2015/03/08/13;
in other words, the full file path would be /home/myUser/logs/2015/03/08/13/log_05.txt.
Any suggestions?

I ended up implementing a library. Tested on Linux & Windows. It provides the desired file persistence scheme, and allows asynchronous logging. Would appreciate comments.
package com.signin.ems;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
/**
* The EMSLogger JAR wraps the Java Logger for two purposes:
* 1. Implement a custom file persistence scheme (other than a single file, or a rotating scheme).
* In particular, the scheme implemented is one minute files, placed in hourly directories.
* The file name format is <mm>.log (mm=00..59), and the directory name format is YYYYMMDD24HH.
*
* 2. Logging should be done asynchronously. For this, a dedicated thread is created. When a message is logged,
* the LogRecord is placed in a BlockingQueue instead of writing the LogRecord to file. The dedicated thread
* performs a blocking wait on the queue. Upon retrieving a LogRecord object, it writes the LogRecord to the
* proper file
*
*
*/
public class EMSLogger
{
private static final int m_iQueSize = 100000;
private static BlockingQueue<LogRecord> m_LogRecordQueue;
private static EMSLoggerThread m_EMSLoggerThread;
private static Thread m_thread;
private static final Logger m_instance = createInstance();
protected EMSLogger()
{
}
public static Logger getInstance() {
return m_instance;
}
private static Logger createInstance()
{
MyFileHandler fileHandler = null;
Logger LOGGER = null;
try
{
// initialize the Log queue
m_LogRecordQueue = new ArrayBlockingQueue<LogRecord>(m_iQueSize);
// get top level logger
LOGGER = Logger.getLogger("");
LOGGER.setLevel(Level.ALL);
// create our file handler
fileHandler = new MyFileHandler(m_LogRecordQueue);
fileHandler.setLevel(Level.ALL);
LOGGER.addHandler(fileHandler);
// create the logging thread
m_EMSLoggerThread = new EMSLoggerThread(m_LogRecordQueue, fileHandler);
m_thread = new Thread(m_EMSLoggerThread);
m_thread.start();
}
catch (IOException e)
{
e.printStackTrace();
}
return LOGGER;
}
public static void Terminate ()
{
m_thread.interrupt();
}
}
public class MyFileHandler extends FileHandler
{
private final BlockingQueue<LogRecord> m_queue;
private BufferedOutputStream m_BufferedOutputStream;
private String m_RootFolderName;
private String m_CurrentDirectoryName;
private String m_CurrentFileName;
private SimpleDateFormat m_SDfh;
private SimpleDateFormat m_SDfm;
public MyFileHandler (BlockingQueue<LogRecord> q) throws IOException, SecurityException
{
super ();
// use simple formatter. Do not use the default XML
super.setFormatter (new SimpleFormatter ());
// get root folder from which to create the log directory hierarchy
m_RootFolderName = System.getProperty ("user.home") + "/logs";
// Service can optionally set its name. All hourly directories will
// be created below the provided name. If no name is given, "Default"
// is used
String sName = System.getProperty ("EMS.ServiceName");
if (sName != null)
{
System.out.println ("EMS.ServiceName = " + sName);
}
else
{
sName = "Default";
System.out.println ("Using \"" + sName + "\" as service name");
}
m_RootFolderName += "/" + sName;
// make sure the root folder is created
new File (m_RootFolderName).mkdirs ();
// initialize format objects
m_SDfh = new SimpleDateFormat ("yyyyMMddHH");
m_SDfm = new SimpleDateFormat ("mm");
m_CurrentDirectoryName = "";
m_CurrentFileName = "";
m_BufferedOutputStream = null;
m_queue = q;
}
// post the record to the queue. Actual writing to the log is done in a dedicated thread.
// note that placing in the queue is done without blocking while waiting for available space
@Override
public void publish (LogRecord record)
{
m_queue.offer (record);
}
// check if a new file needs to be created
private void SetCurrentFile ()
{
boolean bChangeFile = false;
Date d = new Date (System.currentTimeMillis());
String newDirectory = m_RootFolderName + "/" + m_SDfh.format(d);
String newFile = m_SDfm.format(d);
if (!newDirectory.equals(m_CurrentDirectoryName))
{
// need to create a new directory and a new file
m_CurrentDirectoryName = newDirectory;
new File(m_CurrentDirectoryName).mkdirs();
bChangeFile = true;
}
if (!newFile.equals(m_CurrentFileName))
{
// need to create a new file
m_CurrentFileName = newFile;
bChangeFile = true;
}
if (bChangeFile)
{
try
{
if (m_BufferedOutputStream != null)
{
m_BufferedOutputStream.close ();
}
System.out.println("Creating File: " + m_CurrentDirectoryName + "/" + m_CurrentFileName + ".log");
m_BufferedOutputStream = new BufferedOutputStream
(new FileOutputStream (m_CurrentDirectoryName + "/" + m_CurrentFileName + ".log", true),2048);
this.setOutputStream(m_BufferedOutputStream);
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
// method _publish is called from the dedicated thread
public void _publish(LogRecord record)
{
// check if a new file needs to be created
SetCurrentFile ();
super.publish(record);
}
}
class EMSLoggerThread implements Runnable
{
private final BlockingQueue<LogRecord> m_queue;
private final MyFileHandler m_MyFileHandler;
// Constructor
EMSLoggerThread(BlockingQueue<LogRecord> q, MyFileHandler fh)
{
m_queue = q;
m_MyFileHandler = fh;
}
public void run()
{
try
{
while (true)
{
m_MyFileHandler._publish(m_queue.take());
}
}
catch (InterruptedException ex)
{
}
}
}
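A minimal usage sketch (assuming the classes above are on the classpath; the message text is illustrative):
import java.util.logging.Logger;
public class EMSLoggerDemo {
    public static void main(String[] args) {
        // getInstance() triggers creation of the queue, the file handler and the logging thread
        Logger logger = EMSLogger.getInstance();
        logger.info("service started");          // queued, then written by the dedicated thread
        logger.warning("something to look at");  // ends up in the current minute's file
        // stop the background logging thread before exiting
        EMSLogger.Terminate();
    }
}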

This is covered in How to create the log file for each record in a specific format using java util logging framework. You have to modify those examples to create the directories yourself, since FileHandler will not create directories. If you are going to create an asynchronous handler, you should follow the advice in Using java.util.logger with a separate thread to write on file.
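For reference, a minimal sketch of that modification, using the directory/file layout from the question (the class and method names are illustrative):
import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.logging.FileHandler;
import java.util.logging.SimpleFormatter;
public class MinuteFileHandlerFactory {
    // Build a FileHandler for the current minute, creating the directory tree first,
    // because FileHandler itself will not create missing directories.
    public static FileHandler forCurrentMinute(String root) throws IOException {
        Date now = new Date();
        String dir = root + "/" + new SimpleDateFormat("yyyy/MM/dd/HH").format(now);
        new File(dir).mkdirs(); // the constructor below fails if the directory is missing
        String pattern = dir + "/log_" + new SimpleDateFormat("mm").format(now) + ".txt";
        FileHandler handler = new FileHandler(pattern, true); // true = append
        handler.setFormatter(new SimpleFormatter());
        return handler;
    }
}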

Related

How to check if file or console is associated with Standard Output?

I am using the following code to redirect standard out and standard error to a log file, depending on the boolean value of a variable.
if (logToFile==true){
java.io.File outputFile = new java.io.File(logFilePath);
System.setOut(new java.io.PrintStream(new java.io.FileOutputStream(outputFile, true), true));
System.setErr(new java.io.PrintStream(new java.io.FileOutputStream(outputFile, true), true));
}
Moving further down my code, I want to find out whether my standard out and standard error are associated with a file - only then do I want to log a few things. At that point, I don't have access to the logToFile variable.
Is there any way to find out whether standard out and standard error are currently associated with a file or with the default console? And if they are associated with a file, can we get the file path?
Moving further down my code, I want to find out whether my standard out and standard error are associated with a file - only then do I want to log a few things. At that point, I don't have access to the logToFile variable.
What about storing the value of logToFile in a static variable, like for example:
if (logToFile) {
StandardStreams.redirectToFile(new File(logFilePath));
}
public class StandardStreams {
private static boolean redirectedToFile;
public static void redirectToFile(File file) throws FileNotFoundException {
PrintStream stream = new PrintStream(new FileOutputStream(file, true), true);
System.setOut(stream);
System.setErr(stream);
redirectedToFile = true;
}
public static boolean areRedirectedToFile() {
return redirectedToFile;
}
}
And then:
if (StandardStreams.areRedirectedToFile()) {
// Log few things
}
Is there any way to find out whether standard out and standard error are currently associated with a file or with the default console? And if they are associated with a file, can we get the file path?
Create your own PrintStream:
class ConsoleLinkedFile extends PrintStream {
private final File file;
ConsoleLinkedFile(File file) throws FileNotFoundException {
super(new FileOutputStream(file, true), true);
this.file = file;
}
File getFile() {
return file;
}
}
if (logToFile) {
PrintStream stream = new ConsoleLinkedFile(new File(logFilePath));
System.setOut(stream);
System.setErr(stream);
}
To find out and retrieve the file path:
public static Optional<File> getFileIfRedirected(PrintStream stream) {
if (stream instanceof ConsoleLinkedFile) {
ConsoleLinkedFile linkedFile = (ConsoleLinkedFile) stream;
return Optional.of(linkedFile.getFile());
}
return Optional.empty();
}
if (getFileIfRedirected(System.out).isPresent()) {
// Log few things
}
Note that the same PrintStream can be shared between standard output and standard error.
If you cannot create your own PrintStream, then you need to use reflection:
private static final VarHandle OUT, PATH;
static {
final Class<?> OUT_class = FilterOutputStream.class;
final Class<?> PATH_class = FileOutputStream.class;
MethodHandles.Lookup lookup = MethodHandles.lookup();
try {
OUT = MethodHandles.privateLookupIn(OUT_class, lookup)
.findVarHandle(OUT_class, "out", OutputStream.class);
PATH = MethodHandles.privateLookupIn(PATH_class, lookup)
.findVarHandle(PATH_class, "path", String.class);
} catch (ReflectiveOperationException e) {
throw new ExceptionInInitializerError(e);
}
}
private static Optional<String> getFileIfRedirected(PrintStream stream) {
Object out = OUT.get(stream);
if (out instanceof BufferedOutputStream) {
out = OUT.get(out);
}
return Optional.ofNullable((String) PATH.get(out));
}
VarHandle is faster than java.lang.reflect. In Java 8, you can use the latter:
private static final Field OUT, PATH;
static {
try {
OUT = FilterOutputStream.class.getDeclaredField("out");
OUT.setAccessible(true);
PATH = FileOutputStream.class.getDeclaredField("path");
PATH.setAccessible(true);
} catch (NoSuchFieldException e) {
throw new ExceptionInInitializerError(e);
}
}
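With those Field objects, the Java 8 version of getFileIfRedirected could look like this (a sketch; the "out" and "path" field names are JDK internals and may differ between implementations):
private static Optional<String> getFileIfRedirected(PrintStream stream) {
    try {
        Object out = OUT.get(stream);           // unwrap the PrintStream
        if (out instanceof BufferedOutputStream) {
            out = OUT.get(out);                 // unwrap the buffering layer as well
        }
        if (out instanceof FileOutputStream) {
            return Optional.ofNullable((String) PATH.get(out));
        }
        return Optional.empty();
    } catch (IllegalAccessException e) {
        return Optional.empty();
    }
}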

Switching database connection in mysql

Here is my problem:
I have two MySQL database directories and I want to use one after the other.
The only way I have actually found to switch from one database to the other is to shut down the mysql daemon and start it again pointing to the second database directory.
Is there any other way to do that?
Thanks
EDIT:
My application manages "mission directories" that each embed a database.
These missions are copied to a hard disk that is connected to an external device, which fills the database.
When a mission is done, we collect the mission and its database with the application to generate a report.
That is why we have multiple databases with the same schema placed in different locations. The databases also need to be read by an external application, which is why we need to have only one database open at a time.
My question is not whether it is possible to run two databases from two different directories at the same time (I know that is possible), but how to switch from one database to another without killing the daemon.
PS: I'm working on a Java application and I do all of this through system calls in Java, like Runtime.getRuntime().exec(MY_CMD), not by choice. Maybe it would be better to use a Java library; I already use Hibernate.
Here is the code to switch:
new Thread(new Task<T>() {
@Override
protected T call() throws Exception {
// Close the previous database
if (isDaemonRunning()) {
close();
}
// try to open the new one
if (!open()) {
notifyConnectedStatus(false);
return null;
}
// create the hibernate session object
_session = HibernateUtil.getSessionFactory().openSession();
notifyConnectedStatus(true);
// no return is waiting, then return null
return null;
}
}).start();
Here are the called methods:
private boolean open() {
int exitVal = 0;
try {
Process p = Runtime.getRuntime().exec(getRunDaemonCmd());
p.waitFor(1, TimeUnit.SECONDS);
if (p.isAlive()) {
return true;
}
exitVal = p.exitValue();
} catch (Exception e) {
_logger.log(Level.SEVERE, e.getMessage(), e);
return false;
}
return (0 == exitVal);
}
private void close() {
do {
try {
if (null != _session) {
_session.close();
_session = null;
}
Process p = Runtime.getRuntime().exec(SHUTDOWN_CMD);
p.waitFor();
} catch (Exception e) {
_logger.log(Level.SEVERE, e.getMessage(), e);
return;
}
} while (isDaemonRunning());
_connected = false;
}
private String[] getRunDaemonCmd() {
return new String[] { MYSQLD, INI_FILE_PARAM + _myIniFile, DATADIR_PARAM + _databasePath };
}
private boolean isDaemonRunning() {
int exitVal = 0;
try {
Process p = Runtime.getRuntime().exec(PING_CMD);
p.waitFor();
exitVal = p.exitValue();
} catch (Exception e) {
_logger.log(Level.SEVERE, e.getMessage(), e);
}
return (0 == exitVal);
}
And here are the constants:
private static final String MYSQLD = "mysqld";
private static final String INI_FILE_PARAM = "--defaults-file=";
private static final String DATADIR_PARAM = "--datadir=";
private static final String MYSQLADMIN = "mysqladmin";
private static final String USER_PARAM = "-u";
private static final String PASSWORD_PARAM = "-p";
private static final String SHUTDOWN = "shutdown";
private static final String PING = "ping";
private static final String[] PING_CMD = new String[] { MYSQLADMIN, PING };
private static final String[] SHUTDOWN_CMD = new String[] { MYSQLADMIN, USER_PARAM + DatabaseSettings.getUser(),
PASSWORD_PARAM + DatabaseSettings.getPassword(), SHUTDOWN };
private String _myIniFile = DatabaseSettings.getDefaultIniFile();
So, if you use Hibernate, you can use multiple persistence units to connect to multiple data sources or databases.
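A minimal sketch of that idea with plain Hibernate (the db1.cfg.xml / db2.cfg.xml resource names are placeholders; each would point at its own connection URL):
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
public class MultiDatabaseSessions {
    // One SessionFactory per database, built once and reused.
    private static final SessionFactory DB1 =
            new Configuration().configure("db1.cfg.xml").buildSessionFactory();
    private static final SessionFactory DB2 =
            new Configuration().configure("db2.cfg.xml").buildSessionFactory();
    public static Session openSession(boolean useFirstDatabase) {
        return (useFirstDatabase ? DB1 : DB2).openSession();
    }
}
Note that this only works if both databases are reachable at the same time, which may not match the single-daemon constraint described in the question.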

java.util.Logger creating more files than it should

Hi, this is my CustomLogger class using java.util.logging.Logger:
public class CustomLogger {
private String pathToLogFiles = "/tmp/sos/logs/";
private Logger logger;
public CustomLogger(String prefix) {
logger = Logger.getLogger(prefix);
if( Utils.detectEnvironment() == Environment.LIVE ) {
String date = new SimpleDateFormat("yyyy-MM-dd").format(new Date());
String filename = "log_" + date + ".txt";
FileHandler fileHandler = null;
try {
fileHandler = new FileHandler(this.pathToLogFiles + filename);
logger.addHandler(fileHandler);
fileHandler.setFormatter(new SimpleFormatter());
} catch (IOException e) {
logger.addHandler(new ConsoleHandler());
this.error(e.getMessage());
}
}
else {
logger.addHandler(new ConsoleHandler());
}
}
public void info(String message) {
logger.info(message);
}
public void error(String message) {
logger.warning(message);
}
}
On the development environment the logging to the console works fine, but on the live environment, instead of logging to the one file as it should, 12 different files are created, each containing XML for every log message sent.
:/tmp/sos/logs# ls
log_2016-09-09.txt log_2016-09-09.txt.1.lck log_2016-09-09.txt.2.lck log_2016-09-09.txt.3.lck log_2016-09-09.txt.4.lck log_2016-09-09.txt.5.lck log_2016-09-09.txt.6.lck
log_2016-09-09.txt.1 log_2016-09-09.txt.2 log_2016-09-09.txt.3 log_2016-09-09.txt.4 log_2016-09-09.txt.5 log_2016-09-09.txt.6 log_2016-09-09.txt.lck
Can somebody tell me what is wrong there?
Thanks
Every time a CustomLogger constructor is executed you are creating and opening a new FileHandler. You need to ensure that you are only creating and adding the FileHandler once per JVM process.
Otherwise, you need to determine a proper time to close the previous FileHandler before opening a new FileHandler.
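A minimal sketch of the first approach - one lazily created FileHandler shared for the whole JVM (the class, field and method names are illustrative):
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.SimpleFormatter;
public class SharedFileHandler {
    private static volatile FileHandler handler;
    // Returns the single FileHandler, creating and opening it only on the first call.
    static FileHandler get(String pattern) throws IOException {
        if (handler == null) {
            synchronized (SharedFileHandler.class) {
                if (handler == null) {
                    FileHandler fh = new FileHandler(pattern, true); // append, opened once
                    fh.setFormatter(new SimpleFormatter());
                    handler = fh;
                }
            }
        }
        return handler;
    }
}
The CustomLogger constructor would then call SharedFileHandler.get(...) and add the returned handler only if the logger does not already have it.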

TrueZip compression taking too much time

I am using TrueZip for compression. Here is what my code looks like
public String compress() throws IOException {
if (logLocations.isEmpty()) {
throw new IllegalStateException("no logs provided to compress");
}
removeDestinationIfExists(desiredArchive);
final TFile destinationArchive = new TFile(desiredArchive + "/diagnostics");
for (final String logLocation : logLocations) {
final TFile log = new TFile(logLocation);
if (!log.exists()) {
LOGGER.debug("{} does not exist, ignoring.", logLocation);
continue;
}
if (log.isDirectory()) {
log.cp_r(destinationArchive);
} else {
final String newLogLocation =
new TFile(destinationArchive.getAbsolutePath()) + SLASH +
getLogNameFromPath(logLocation);
log.cp(new TFile(newLogLocation));
}
}
return destinationArchive.getEnclArchive().getAbsolutePath();
}
and my test
@Test
public void testBenchMarkWithHprof() throws IOException {
final FileWriter logLocations;
String logLocationPath = "/Users/harit/Downloads/tmp/logLocations.txt";
{
logLocations = new FileWriter(logLocationPath);
logLocations.write("Test3");
logLocations.write("\n");
logLocations.close();
}
final LPLogCompressor compressor = new LPLogCompressor("/Users/harit/Downloads/tmp",
new File(logLocationPath),
"/Users/harit/Downloads/tmp/TestOut");
final long startTime = System.currentTimeMillis();
compressor.compress();
System.out.println("Time taken (msec): " + (System.currentTimeMillis() - startTime));
}
and my data directory Test3 looks like
Test3/
java_pid1748.hprof
The file size is 2.83GB
When I ran the test, it took over 22 minutes.
However, when I compress the same file using the native OS X compress (right click -> Compress), it takes only 2 minutes.
Why is there so much difference?
Thanks
UPDATE
Based on @Satnam's recommendation, I attached a debugger to see what's going on, and this is what I found:
none of the TrueZip threads are running? Really? Apologies, I am using a profiler for the first time.
The reason in this case was the default deflater level, which is Deflater.BEST_COMPRESSION.
I overrode the ZipDriver class to change the level as follows:
import de.schlichtherle.truezip.fs.archive.zip.ZipDriver;
import de.schlichtherle.truezip.socket.IOPoolProvider;
import java.util.zip.Deflater;
public class OverrideZipDriver extends ZipDriver {
public OverrideZipDriver(final IOPoolProvider ioPoolProvider) {
super(ioPoolProvider);
}
@Override
public int getLevel() {
return Deflater.DEFAULT_COMPRESSION;
}
}
and then in my Compressor class, I did
public LPLogCompressor(final String logProcessorInstallPath, final File logLocationsSource,
final String desiredArchive) throws IOException {
this.desiredArchive = desiredArchive + DOT + getDateTimeStampFormat() + ZIP;
logLocations = getLogLocations(logProcessorInstallPath, logLocationsSource);
enableLogCompression();
}
private static void enableLogCompression() {
TConfig.get().setArchiveDetector(
new TArchiveDetector(TArchiveDetector.NULL, new Object[][]{
{"zip", new OverrideZipDriver(IOPoolLocator.SINGLETON)},}));
TConfig.push();
}
You can read the thread here

how to append log in java using FileHandler

I am trying to store logs in a file using java.util.logging.
I tried to create a separate class:
public class MyLogger {
FileHandler fh;
Logger log;
public MyLogger(String className) {
log = Logger.getLogger(className);
try {
String location = this.getClass().getProtectionDomain().getCodeSource().getLocation().getPath().replace("%20", " ").replaceFirst("/", "") + "logs.log";
fh = new FileHandler(location);
log.addHandler(fh);
SimpleFormatter formatter = new SimpleFormatter();
fh.setFormatter(formatter);
} catch (Exception e) {
System.out.println("error in MyLogger class, method getLogger \n stack trace below \n");
e.printStackTrace();
}
}
public void log(Level l, String Message, Object o) {
log.log(l, Message, o);
flushStream();
}
public void log(Level l, String Message) {
log.log(l, Message);
flushStream();
}
private void flushStream() {
fh.flush();
fh.close();
}
}
I am calling this class every time I need to log some messages to the file.
It runs fine, but it overwrites the old log data every time it is called.
How can I append new logs using this code? Or is there any other way of logging?
There is another constructor for FileHandler which you can use to explicitly tell FileHandler that you want to append to the log file. The code is like this:
fh = new FileHandler(location, true);
Set the second argument of this constructor to true to append to the log.
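Put together, a minimal sketch (the file name is illustrative) that appends across runs and keeps the handler open instead of closing it after every message:
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
public class AppendingLoggerExample {
    public static void main(String[] args) throws IOException {
        Logger log = Logger.getLogger(AppendingLoggerExample.class.getName());
        FileHandler fh = new FileHandler("logs.log", true); // true = append to the existing file
        fh.setFormatter(new SimpleFormatter());
        log.addHandler(fh);
        log.info("this line is appended, not overwritten");
        fh.flush(); // flushing after each message is fine
        // do not close the handler until the application shuts down
    }
}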
