Has anyone tried to write a log4j log file directly to the Hadoop Distributed File System (HDFS)?
If so, how did you achieve it?
I think I will have to create an Appender for it.
Is that the right approach?
My requirement is to write logs to a file at particular intervals and query that data at a later stage.
I recommend using Apache Flume for this task. There is a Flume appender for Log4j: you send logs to Flume, and it writes them to HDFS. The nice thing about this approach is that Flume becomes the single point of communication with HDFS, which makes it easy to add new data sources without writing a bunch of HDFS interaction code again and again.
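For example, a minimal log4j.properties sketch (the host name, port, and appender name are placeholders; it assumes the flume-ng-log4jappender jar is on the classpath and a Flume agent is listening on an Avro source that feeds an HDFS sink):

# hypothetical settings; adjust host/port to match your Flume agent's Avro source
log4j.rootLogger=INFO, flume
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=flume-host.example.com
log4j.appender.flume.Port=41414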
Standard log4j (1.x) does not support writing to HDFS, but luckily log4j is very easy to extend. I wrote an HDFS FileAppender to write logs to MapR-FS (which is Hadoop-compatible); the file name can be something like "maprfs:///projects/example/root.log", and it works well in our projects. I have extracted the appender part of the code and pasted it below. The snippet may not run as-is, but it should give you an idea of how to write your own appender. In essence, you only need to extend org.apache.log4j.AppenderSkeleton and implement append(), close(), and requiresLayout(). For more information, you can also download the log4j 1.2.17 source code and see how AppenderSkeleton is defined; it will give you everything you need. Good luck!
Note: an alternative way to write to HDFS is to mount HDFS on all your nodes, so that you can write logs just as you would to a local directory. In practice that may be the better approach.
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.Layout;
import org.apache.hadoop.conf.Configuration;
import java.io.*;
public class HDFSFileAppender extends AppenderSkeleton {

    private String filepath = null;
    private Layout layout = null;

    public HDFSFileAppender(String filePath, Layout layout) {
        this.filepath = filePath;
        this.layout = layout;
    }

    @Override
    protected void append(LoggingEvent event) {
        String log = this.layout.format(event);
        try {
            InputStream logStream = new ByteArrayInputStream(log.getBytes());
            writeToFile(filepath, logStream, false);
            logStream.close();
        } catch (IOException e) {
            System.err.println("Exception when appending log to log file: " + e.getMessage());
        }
    }

    @Override
    public void close() {}

    @Override
    public boolean requiresLayout() {
        return true;
    }

    // here we write to HDFS
    // filePathStr: the file path in MapR, like 'maprfs:///projects/aibot/1.log'
    private boolean writeToFile(String filePathStr, InputStream inputStream, boolean overwrite) throws IOException {
        boolean success = false;
        int bytesRead = -1;
        byte[] buffer = new byte[64 * 1024 * 1024];
        Configuration conf = new Configuration();
        org.apache.hadoop.fs.Path filePath = new org.apache.hadoop.fs.Path(filePathStr);
        // resolve the FileSystem from the path so URIs like maprfs:/// are honoured
        org.apache.hadoop.fs.FileSystem fs = filePath.getFileSystem(conf);
        org.apache.hadoop.fs.FSDataOutputStream fsDataOutputStream = null;
        if (overwrite || !fs.exists(filePath)) {
            fsDataOutputStream = fs.create(filePath, overwrite, 512, (short) 3, 64 * 1024 * 1024);
        } else { // append to existing file.
            fsDataOutputStream = fs.append(filePath, 512);
        }
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            fsDataOutputStream.write(buffer, 0, bytesRead);
        }
        fsDataOutputStream.close();
        success = true;
        return success;
    }
}
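A hypothetical usage sketch (not part of the original answer): since this appender takes its file path and layout through the constructor, it can be attached programmatically, for example:

// assumes the appender above plus the Hadoop/MapR client jars are on the classpath;
// the path and pattern are placeholders
org.apache.log4j.Logger root = org.apache.log4j.Logger.getRootLogger();
root.addAppender(new HDFSFileAppender("maprfs:///projects/example/root.log",
        new org.apache.log4j.PatternLayout("%d{ISO8601} %-5p %c - %m%n")));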
Using Java 8 and Tomcat 8. I am loading a file into a Properties object, but I have a check before loading that returns the same Properties object if it has already been loaded (is not null). That is the normal case, but I want to know whether there is a way to detect when the target file changes, so that a trigger fires and all the Properties objects are refreshed. Here is my code.
public static String loadConnectionFile(String keyname) {
    String message = "";
    getMessageFromConnectionFile();
    if (propertiesForConnection.containsKey(keyname))
        message = propertiesForConnection.getProperty(keyname);
    return message;
}

public static synchronized void getMessageFromConnectionFile() {
    if (propertiesForConnection == null) {
        FileInputStream fileInput = null;
        try {
            File file = new File(Constants.GET_CONNECTION_FILE_PATH);
            fileInput = new FileInputStream(file);
            Reader reader = new InputStreamReader(fileInput, "UTF-8");
            propertiesForConnection = new Properties();
            propertiesForConnection.load(reader);
        } catch (Exception e) {
            Utilities.printErrorLog(Utilities.convertStackTraceToString(e), logger);
        } finally {
            try {
                fileInput.close();
            } catch (Exception e) {
                Utilities.printErrorLog(Utilities.convertStackTraceToString(e), logger);
            }
        }
    }
}
The loadConnectionFile method executes first and calls getMessageFromConnectionFile, which has the null check. If we remove that check it will load the updated file every time, but that will hurt performance. I am looking for an alternative.
I hope I have explained my question clearly.
Thanks in advance.
Java has a file watcher service (the WatchService API). You can "listen" for changes in files and directories, so you can listen for changes to your properties file, or to the directory in which your properties file is located. The Java Tutorials on Oracle's OTN web site have a section on the watcher service.
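A minimal sketch of that idea, assuming a hypothetical config directory and that clearing the cached Properties object is enough to force a reload:

import java.nio.file.*;

public class PropertiesFileWatcher {
    // Intended to run in its own thread: watches the config directory and
    // invalidates the cached Properties object whenever the file is modified.
    public static void watch(Path configDir) throws Exception {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        configDir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take();                 // blocks until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context();
                if (changed.toString().endsWith(".properties")) {
                    // hypothetical hook: clear the cached object so the next
                    // loadConnectionFile() call re-reads the file, e.g.
                    // propertiesForConnection = null;
                }
            }
            if (!key.reset()) {
                break;                                     // directory no longer accessible
            }
        }
    }
}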
Good Luck,
Avi.
I would like to know how to get a file from a Vaadin Upload component. There is an example on the Vaadin website, but it does not explain how to save the file beyond mentioning OutputStreams.
Help!
To receive a file upload in Vaadin, you must implement the Receiver interface, which provides you with a receiveUpload(filename, mimeType) method used to receive the data. The simplest code to do this would be (taken as an example from the Vaadin 7 docs):
class FileUploader implements Receiver {
    private File file;
    private String BASE_PATH = "C:\\";

    public OutputStream receiveUpload(String filename,
                                      String mimeType) {
        // Create upload stream
        FileOutputStream fos = null; // Stream to write to
        try {
            // Open the file for writing.
            file = new File(BASE_PATH + filename);
            fos = new FileOutputStream(file);
        } catch (final java.io.FileNotFoundException e) {
            new Notification("Could not open file<br/>",
                    e.getMessage(),
                    Notification.Type.ERROR_MESSAGE)
                    .show(Page.getCurrent());
            return null;
        }
        return fos; // Return the output stream to write to
    }
}
With that, the uploader will write the file to C:\. If you want to do something after the upload has completed (successfully or not), you can also implement SucceededListener or FailedListener. Extending the example above, the result (with a SucceededListener) could be:
class FileUploader implements Receiver, SucceededListener {
    // receiveUpload implementation

    public void uploadSucceeded(SucceededEvent event) {
        // Do some cool stuff here with the file
    }
}
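To wire it up, a hedged sketch based on the Vaadin 7 Upload API (the caption text and the layout variable are placeholders):

FileUploader receiver = new FileUploader();
Upload upload = new Upload("Upload your file here", receiver);
upload.addSucceededListener(receiver);   // the receiver also implements SucceededListener
layout.addComponent(upload);             // 'layout' is whatever layout you are adding the component to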
I've got an AppEngine app with two different instances, one for prod and one for staging. Accordingly, I'd like to configure the staging instance slightly differently, since it'll be used for testing. Disabling emails, talking to a different test backend for data, that kind of thing.
My first intuition was to use a .properties file, but I can't seem to get it to work. I'm using Gradle as a build system, so the file is saved in src/main/webapp/WEB-INF/staging.properties (and a matching production.properties next to it). I'm trying to access it like so:
public class Config {
    private static Config sInstance = null;
    private Properties mProperties;

    public static Config getInstance() {
        if (sInstance == null) {
            sInstance = new Config();
        }
        return sInstance;
    }

    private Config() {
        // Select properties filename.
        String filename;
        if (!STAGING) { // PRODUCTION SETTINGS
            filename = "/WEB-INF/production.properties";
        } else { // DEBUG SETTINGS
            filename = "/WEB-INF/staging.properties";
        }

        // Get handle to file.
        InputStream stream = this.getClass().getClassLoader().getResourceAsStream(filename);
        if (stream == null) {
            // --> Crashes here. <--
            throw new ExceptionInInitializerError("Unable to open settings file: " + filename);
        }

        // Parse.
        mProperties = new Properties();
        try {
            mProperties.load(stream);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}
The problem is that getResourceAsStream() is always returning null. I checked the build/exploded-app directory, and the .properties file shows up there. I also checked the .war file, and found the .properties file there as well.
I've also tried moving the file into /WEB-INF/classes, but that didn't make a difference either.
What am I missing here?
Try
BufferedReader reader = new BufferedReader(new FileReader(filename));
or
InputStream stream = this.getClass().getResourceAsStream(filename);
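For what it's worth, a hedged sketch of the first suggestion: on App Engine the files packaged in the .war are readable from disk relative to the application root, so a plain FileReader with a relative path (no leading slash) may be enough. The helper name below is hypothetical, and it needs the java.io and java.util.Properties imports.

// hypothetical helper; call with e.g. "WEB-INF/staging.properties"
private static Properties loadProps(String relativePath) {
    Properties props = new Properties();
    try (Reader reader = new BufferedReader(new FileReader(relativePath))) {
        props.load(reader);
    } catch (IOException e) {
        throw new ExceptionInInitializerError(e);
    }
    return props;
}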
I want to copy a file from a server to a client in Java. This is my code so far:
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.URL;

public class Copy {

    private ListDirectory dir = new ListDirectory();

    public Copy() {
    }

    public String getCopyPath(String file) throws Exception {
        String path = dir.getCurrentPath();
        path += "\\" + file;
        return path;
    }

    public void copyFile(String file) {
        try {
            File inputFile = new File(dir.getCurrentPath());
            URL copyurl;
            InputStream outputFile;
            copyurl = new URL(getCopyPath(file));
            outputFile = copyurl.openStream();
            FileOutputStream out = new FileOutputStream(inputFile);
            int c;
            while ((c = outputFile.read()) != -1)
                out.write(c);
            outputFile.close();
            out.close();
        } catch (Exception e) {
            System.out.println("Failed to Copy File from server");
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        String a = "put martin";
        String b = a.substring(0, 3);
        String c = a.substring(4);
        System.out.println(a);
        System.out.println(b);
        System.out.println(c);
    }
}
The problem is that the server is not online; the file is on my local drive, and the URL approach doesn't work. Is there another way? Is this approach correct? Thanks.
If you're expecting to access your file from the local file system (whether that be via network drive or a local disk), you'll need to treat this as if it is a straight file copy.
If you're expecting to access your file as if it is available for download from an HTTP server, you will need to treat it as an HTTP download (which is what it looks like you're trying to do with the URL).
If you want to test the HTTP download functionality using a file on your local system, just set up a simple HTTP server on your dev machine with a directory on your local system, and give your HTTP-downloading code a URL pointing to that local server (on http://localhost, or using your IP address).
Unfortunately, HTTP is a very different animal from a file system, and I don't think there's any way to use the same code to handle both scenarios. If you want your program to ultimately support both protocols, you should build methods/classes to handle both situations, and then have your program detect and use the appropriate protocol for a given path. You'll need to do the same for any other protocol you wish to support (FTP, SFTP, etc).
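For illustration, here is a hedged sketch of both cases using java.nio.file (the paths and the URL are hypothetical):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.*;

public class CopyExamples {

    // Case 1: the "server" file is reachable through the file system (local disk or network share).
    static void copyFromFileSystem(String source, String target) throws Exception {
        Files.copy(Paths.get(source), Paths.get(target), StandardCopyOption.REPLACE_EXISTING);
    }

    // Case 2: the file is served over HTTP and must be downloaded.
    static void downloadOverHttp(String url, String target) throws Exception {
        try (InputStream in = new URL(url).openStream()) {
            Files.copy(in, Paths.get(target), StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws Exception {
        copyFromFileSystem("\\\\server\\share\\martin.txt", "C:\\downloads\\martin.txt"); // hypothetical paths
        downloadOverHttp("http://localhost:8080/files/martin.txt", "C:\\downloads\\martin.txt"); // hypothetical URL
    }
}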
I'm trying to configure the Java Logging API's FileHandler to log my server to a file within a folder in my home directory, but I don't want to have to create those directories on every machine it's running.
For example in the logging.properties file I specify:
java.util.logging.FileHandler
java.util.logging.FileHandler.pattern=%h/app-logs/MyApplication/MyApplication_%u-%g.log
This would allow me to collect logs in my home directory (%h) for MyApplication and rotate them (using the %u and %g variables).
Log4j supports this when I specify in my log4j.properties:
log4j.appender.rolling.File=${user.home}/app-logs/MyApplication-log4j/MyApplication.log
It looks like there is a bug against the Logging FileHandler:
Bug 6244047: impossible to specify driectorys to logging FileHandler unless they exist
It sounds like they don't plan on fixing it or exposing any properties to work around the issue (beyond having your application parse the logging.properties or hard code the path needed):
It looks like the java.util.logging.FileHandler does not expect that the specified directory may not exist. Normally, it has to check this condition anyway. Also, it has to check the directory writing permissions as well. Another question is what to do if one of these check does not pass. One possibility is to create the missing directories in the path if the user has proper permissions. Another is to throw an IOException with a clear message what is wrong. The latter approach looks more consistent.
It seems that log4j version 1.2.15 handles this. Here is the snippet of code that does it:
public synchronized void setFile(String fileName, boolean append, boolean bufferedIO, int bufferSize)
        throws IOException {
    LogLog.debug("setFile called: " + fileName + ", " + append);

    // It does not make sense to have immediate flush and bufferedIO.
    if (bufferedIO) {
        setImmediateFlush(false);
    }

    reset();
    FileOutputStream ostream = null;
    try {
        //
        //   attempt to create file
        //
        ostream = new FileOutputStream(fileName, append);
    } catch (FileNotFoundException ex) {
        //
        //   if parent directory does not exist then
        //      attempt to create it and try to create file
        //      see bug 9150
        //
        String parentName = new File(fileName).getParent();
        if (parentName != null) {
            File parentDir = new File(parentName);
            if (!parentDir.exists() && parentDir.mkdirs()) {
                ostream = new FileOutputStream(fileName, append);
            } else {
                throw ex;
            }
        } else {
            throw ex;
        }
    }
    Writer fw = createWriter(ostream);
    if (bufferedIO) {
        fw = new BufferedWriter(fw, bufferSize);
    }
    this.setQWForFiles(fw);
    this.fileName = fileName;
    this.fileAppend = append;
    this.bufferedIO = bufferedIO;
    this.bufferSize = bufferSize;
    writeHeader();
    LogLog.debug("setFile ended");
}
This piece of code is from FileAppender; RollingFileAppender extends FileAppender.
It does not check whether we have permission to create the parent folders, but if the parent folders do not exist it will try to create them.
EDITED
If you want some additional functionality, you can always extend RollingFileAppender and override the setFile() method.
You can write something like this:
package org.log;

import java.io.IOException;

import org.apache.log4j.RollingFileAppender;

public class MyRollingFileAppender extends RollingFileAppender {

    @Override
    public synchronized void setFile(String fileName, boolean append,
            boolean bufferedIO, int bufferSize) throws IOException {
        // Your logic goes here
        super.setFile(fileName, append, bufferedIO, bufferSize);
    }
}
Then in your configuration
log4j.appender.fileAppender=org.log.MyRollingFileAppender
This works perfectly for me.
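For the directory-creation case specifically, a hedged sketch of what that "logic" might be (not from the original answer; note that log4j 1.2.15+ already does this in FileAppender.setFile(), as quoted above):

@Override
public synchronized void setFile(String fileName, boolean append,
        boolean bufferedIO, int bufferSize) throws IOException {
    // Create any missing parent directories before the stock implementation
    // tries to open the file (requires a java.io.File import); super.setFile()
    // will still fail if this did not succeed, e.g. because of permissions.
    File parentDir = new File(fileName).getParentFile();
    if (parentDir != null && !parentDir.exists()) {
        parentDir.mkdirs();
    }
    super.setFile(fileName, append, bufferedIO, bufferSize);
}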
To work around the limitations of the Java Logging framework, and the unresolved bug: Bug 6244047: impossible to specify driectorys to logging FileHandler unless they exist
I've come up with two approaches (although only the first will actually work); both require your app's static void main() method to initialize the logging system.
e.g.
public static void main(String[] args) {
    initLogging();
    ...
}
The first approach hard-codes the log directories you expect to exist and creates them if they don't exist.
private static void initLogging() {
    try {
        // Create the logging.properties-specified directory for logging in the home directory
        // TODO: If they ever fix this bug (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6244047) in the Java Logging API we wouldn't need this hack
        File homeLoggingDir = new File(System.getProperty("user.home") + "/webwars-logs/weblings-gameplatform/");
        if (!homeLoggingDir.exists()) {
            homeLoggingDir.mkdirs();
            logger.info("Creating missing logging directory: " + homeLoggingDir);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    try {
        logger.info("[GamePlatform] : Starting...");
    } catch (Exception exc) {
        exc.printStackTrace();
    }
}
The second approach could catch the IOException and create the directories listed in the exception; the problem with that is that the logging framework has already failed to create the FileHandler, so catching and resolving the error still leaves the logging system in a bad state.
As a possible solution I think there are 2 approaches (look at some of the previous answers). I can extend a Java Logging Handler class and write my own custom handler. I could also copy the log4j functionality and adapt it to the Java Logging framework.
Here's an example of copying the basic FileHandler and creating a CustomFileHandler; see pastebin for the full class.
The key is the openFiles() method, where it tries to create a FileOutputStream, checking for and creating the parent directory if it doesn't exist. (I also had to copy some package-private LogManager methods; why did they even make those package-private?)
// Private method to open the set of output files, based on the
// configured instance variables.
private void openFiles() throws IOException {
    LogManager manager = LogManager.getLogManager();
    ...
    // Create a lock file.  This grants us exclusive access
    // to our set of output files, as long as we are alive.
    int unique = -1;
    for (;;) {
        unique++;
        if (unique > MAX_LOCKS) {
            throw new IOException("Couldn't get lock for " + pattern);
        }
        // Generate a lock file name from the "unique" int.
        lockFileName = generate(pattern, 0, unique).toString() + ".lck";
        // Now try to lock that filename.
        // Because some systems (e.g. Solaris) can only do file locks
        // between processes (and not within a process), we first check
        // if we ourself already have the file locked.
        synchronized (locks) {
            if (locks.get(lockFileName) != null) {
                // We already own this lock, for a different FileHandler
                // object.  Try again.
                continue;
            }
            FileChannel fc;
            try {
                File lockFile = new File(lockFileName);
                if (lockFile.getParent() != null) {
                    File lockParentDir = new File(lockFile.getParent());
                    // create the log dir if it does not exist
                    if (!lockParentDir.exists()) {
                        lockParentDir.mkdirs();
                    }
                }
                lockStream = new FileOutputStream(lockFileName);
                fc = lockStream.getChannel();
            } catch (IOException ix) {
                // We got an IOException while trying to open the file.
                // Try the next file.
                continue;
            }
            try {
                FileLock fl = fc.tryLock();
                if (fl == null) {
                    // We failed to get the lock.  Try next file.
                    continue;
                }
                // We got the lock OK.
            } catch (IOException ix) {
                // We got an IOException while trying to get the lock.
                // This normally indicates that locking is not supported
                // on the target directory.  We have to proceed without
                // getting a lock.  Drop through.
            }
            // We got the lock.  Remember it.
            locks.put(lockFileName, lockFileName);
            break;
        }
    }
    ...
}
I generally try to avoid static code, but to work around this limitation here is the approach that worked on my project.
I subclassed java.util.logging.FileHandler and implemented all of its constructors with their super calls. I put a static block of code in the class that creates the folders for my app in the user.home folder if they don't exist.
In my logging properties file I replaced java.util.logging.FileHandler with my new class.
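A minimal sketch of that approach, with a hypothetical class name and log directory; note that when you register a FileHandler subclass, the keys in logging.properties (such as the .pattern property) are looked up under the subclass's fully qualified name:

package org.log;

import java.io.File;
import java.io.IOException;

// Hypothetical subclass: the static initializer runs before any constructor,
// so the directory exists by the time FileHandler tries to open its files.
public class DirCreatingFileHandler extends java.util.logging.FileHandler {

    static {
        File logDir = new File(System.getProperty("user.home"), "app-logs/MyApplication");
        if (!logDir.exists()) {
            logDir.mkdirs();
        }
    }

    public DirCreatingFileHandler() throws IOException, SecurityException {
        super();
    }

    public DirCreatingFileHandler(String pattern) throws IOException, SecurityException {
        super(pattern);
    }

    public DirCreatingFileHandler(String pattern, int limit, int count, boolean append)
            throws IOException, SecurityException {
        super(pattern, limit, count, append);
    }
}

And in logging.properties (handler and pattern keys use the subclass name):

handlers=org.log.DirCreatingFileHandler
org.log.DirCreatingFileHandler.pattern=%h/app-logs/MyApplication/MyApplication_%u-%g.log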