I have a log4j configuration with several loggers and appenders, used in a multi-threaded application. In one scenario, I try to connect to a remote service. If the connection fails, I try again repeatedly.
I would like log4j to use its original configuration only on the first attempt; for every subsequent attempt, I want a less verbose configuration. This should not change the logging configuration of other threads that might operate on the same objects. Note that I cannot know in advance which loggers are used inside the call that connects to the remote service.
So, is there a way to alter the logging globally for the duration of one call without changing the behavior of other concurrent threads?
Look at this part of the API:
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Logger.html
You can call the setLevel(Level level) method and make some changes. However, I'm not 100% sure that you won't affect other threads, since a Logger is normally shared per class rather than per thread.
I would think you would have to get a Logger object for each user session, but I'm not 100% sure whether that could cause heap issues.
Alternatively, you could control the output from your own code (for instance, only send some messages to the log on the first attempt), as in the sketch below.
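For illustration, here is a minimal sketch of the setLevel() idea. The logger name and the retry loop are hypothetical, and note the caveat above: this changes the level for the whole JVM, not just the current thread.

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietRetries {
    public static void connectWithRetries(Runnable connectAttempt, int maxAttempts) {
        // Hypothetical logger used by the remote-service code.
        Logger remoteLogger = Logger.getLogger("com.example.remote");
        Level original = remoteLogger.getLevel();
        try {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                if (attempt == 2) {
                    remoteLogger.setLevel(Level.WARN); // quieter from the second attempt on
                }
                try {
                    connectAttempt.run();
                    return; // connected
                } catch (RuntimeException retryable) {
                    // swallow and retry
                }
            }
        } finally {
            remoteLogger.setLevel(original); // restore the configured level
        }
    }
}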
Regards
I would like to inject a piece of information into all messages logged by anything, including third-party code. The reason is that, in our web-services-oriented application, requests come in with unique IDs and I want those unique IDs to be attached to all log messages that occur while processing a request, to assist in later analysis. I am already tracking the "current request" using ThreadLocal<> techniques, so I have the ability to fetch the "current request" from anywhere.
To that end, I would like to configure log4j such that I can inject the requestID into messages before they reach the root logger (or appender?). I know that I can just make a whole new Appender that implements append() and does whatever it wants with the output, but that's not what I'm asking for. I want output to ultimately go to whatever appender is configured at startup, but with the additional information attached.
I am using log4j 1.x, but if a move to log4j 2.x or using slf4j makes this significantly easier, I would consider it.
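One approach worth considering (a hedged sketch, assuming log4j 1.x's MDC is acceptable; the key name requestId and the wrapper class are my own): put the request ID into the MDC at the start of each request, and the appenders configured at startup will pick it up via %X{requestId} in their layout pattern, with no new Appender required.

import org.apache.log4j.MDC;

public final class RequestIdScope {
    // Wraps the handling of one request; requestId would come from the
    // ThreadLocal tracking described above.
    public static void runWithRequestId(String requestId, Runnable work) {
        MDC.put("requestId", requestId);
        try {
            work.run(); // everything logged on this thread now carries the ID
        } finally {
            MDC.remove("requestId"); // don't leak the ID to pooled threads
        }
    }
}

Adding %X{requestId} to the existing appender's ConversionPattern then attaches the ID to every message without replacing any appenders. Log4j 2.x and slf4j expose the same idea as ThreadContext and MDC, respectively.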
I am trying to implement row level security so our application can enforce more stringent access control.
One of the technologies we are looking into is Oracle's Virtual Private Database, which allows row level security by basically augmenting all queries against specific tables with a where clause predicate. Since we are in a web environment, we need to set up a special context within Oracle, inside a single request's thread. We use connection pooling with a service account.
I started to look into EclipseLink and Hibernate. EclipseLink seems to have events that fit perfectly into this model.
This would involve us migrating from Hibernate, which is not a problem, but we would then be bound to EclipseLink for these events.
Oracle seems to imply that they implement this at the data source level in their WebLogic product.
The context is set and cleared by the WebLogic data source code.
Question: Is it more appropriate to do this at the DataSource level with some series of events? What are the events or methods that I should pay the most attention to?
Added question: How would I extend a connection pool to safely initialize an Oracle context with some custom data? I am digging around in the Apache code, and it seems like extending BasicDataSource doesn't give me access to anything that would let me clean up the connection when Spring is done with it.
I need to set up a connection, and clean it up, as connections exit and enter the connection pool. I am hoping for an implementation so simple that no one can mess it up by breaking some delicate balance of products.
- Specifically, we are currently using the Apache Commons DBCP BasicDataSource
This would allow us to use various ways of connecting to the database and still have our security enforced. But I don't see a great example or set of events to work with, and rolling my own security life cycle is never a good idea.
I eventually solved my problem by extending some of the Apache components.
First I extended org.apache.commons.pool.impl.GenericObjectPool and overrode both borrowObject() and returnObject(). I knew the type of the objects in the pool (java.sql.Connection) so I could safely cast and work with them.
Since in my case I was using Oracle VPD, I was able to set information in the application context. I recommend you read about that in more depth; it is a little complicated, and there are a lot of different options for hiding or sharing data at various context levels, and across RAC nodes.
In essence, what I did was generate a nonce and use it to instantiate a session within Oracle, then set the user's access level in a variable in that session, which the Oracle VPD policy would then read and use to do the row-level filtering.
I instantiated and destroyed that information in my overridden borrowObject() and returnObject(). The SQL I ran was something like this:
CallableStatement callStat = conn.prepareCall(
        "{call namespace.cust_ctx_pkg.set_session_id(" + Math.random() + ")}");
callStat.execute();
Note that Math.random() isn't a good nonce.
Next was to simply extend org.apache.commons.dbcp.BasicDataSource and set my object pool by overriding createConnectionPool(). Note that the way I did this disabled some functionality I did not need, so you may need to rewrite more or less than I did.
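For reference, here is a hedged sketch of what the extended pool can look like (commons-pool 1.x API). The set_session_id call is taken from the snippet above; the clear_session_id procedure and the SecureRandom nonce are stand-ins of my own, so adapt them to your schema.

import java.security.SecureRandom;
import java.sql.CallableStatement;
import java.sql.Connection;
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;

public class VpdContextPool extends GenericObjectPool {

    private final SecureRandom random = new SecureRandom();

    public VpdContextPool(PoolableObjectFactory factory) {
        super(factory);
    }

    @Override
    public Object borrowObject() throws Exception {
        // The pool only ever holds java.sql.Connection objects, so the cast is safe.
        Connection conn = (Connection) super.borrowObject();
        CallableStatement cs = conn.prepareCall(
                "{call namespace.cust_ctx_pkg.set_session_id(?)}");
        try {
            cs.setLong(1, random.nextLong()); // a proper nonce, unlike Math.random()
            cs.execute();
        } finally {
            cs.close();
        }
        return conn;
    }

    @Override
    public void returnObject(Object obj) throws Exception {
        Connection conn = (Connection) obj;
        // Hypothetical cleanup procedure: clear the session context before the
        // connection goes back to the pool so nothing leaks to the next borrower.
        CallableStatement cs = conn.prepareCall(
                "{call namespace.cust_ctx_pkg.clear_session_id()}");
        try {
            cs.execute();
        } finally {
            cs.close();
        }
        super.returnObject(conn);
    }
}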
For simplicity, you can try any object-level security mechanism, like Spring Security ACL.
You will want to do this at the application layer. You will want a pre-commit hook and a post-read hook.
The pre-commit hook is used to ensure that data from the client is being presented by a user authorized to modify that data. This prevents an unauthorized user from overwriting data they shouldn't be able to access.
It's not intuitive, but the post-read hook is used to keep the client from accessing data the user shouldn't be allowed to view. The check happens after the read because it is enforced at the application layer, not the data layer: the application has no way of knowing whether the caller may access the data until it has been retrieved from the data layer. In the post-read hook you evaluate the credential on each returned row against the credential of the logged-in user to determine whether access is allowed. If access is denied on any row, an exception is raised and the data is not returned to the client.
Application-level security done in this way requires that you have a way to connect each row in a table to the permission/role required to access it, and a way to evaluate a user's permissions on the server at runtime. A sketch of the post-read side follows.
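To make the post-read hook concrete, here is a minimal hedged sketch; the Row and User types and the permission model are invented purely for illustration.

import java.util.List;

public class PostReadHook {

    interface Row { String requiredPermission(); }
    interface User { boolean hasPermission(String permission); }

    // Throws if the logged-in user lacks permission on any returned row,
    // so nothing is handed back to the client on a partial denial.
    static List<? extends Row> check(List<? extends Row> rows, User user) {
        for (Row row : rows) {
            if (!user.hasPermission(row.requiredPermission())) {
                throw new SecurityException("access denied to a protected row");
            }
        }
        return rows;
    }
}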
Hope that helps.
You will get better control by using one of the other Commons DBCP Datasources.
The Basic one is just that: basic :)
The ones in the org.apache.commons.dbcp.datasources package give you more fine-grained control. For example:
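A hedged wiring example (DBCP 1.x class names; the driver, URL, and credentials are placeholders):

import javax.sql.DataSource;
import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
import org.apache.commons.dbcp.datasources.SharedPoolDataSource;

public class FineGrainedDataSource {
    static DataSource create() throws Exception {
        // Adapter that turns a plain JDBC driver into a ConnectionPoolDataSource.
        DriverAdapterCPDS cpds = new DriverAdapterCPDS();
        cpds.setDriver("oracle.jdbc.OracleDriver");         // placeholder driver
        cpds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SVC"); // placeholder URL
        cpds.setUser("service_account");
        cpds.setPassword("secret");

        SharedPoolDataSource ds = new SharedPoolDataSource();
        ds.setConnectionPoolDataSource(cpds);
        ds.setMaxActive(20); // one of the pool-tuning knobs exposed here
        return ds;
    }
}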
I am developing an Eclipse RCP application and have gone to some pains to get log4j2 to work within the app. All seems to work fine now, and as a finishing touch I wanted to make all loggers asynchronous.
I've managed to get the LMAX Disruptor on the classpath, and think I've solved the issue of providing sun.misc as well. I've set the VM argument -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector in the run configuration and set up the log4j2.xml file correctly as well. I think. And that's where the problem is: I'd like to be able to verify that my application logs asynchronously in the proper fashion, so I can enjoy the benefits latency-wise.
How can I, then, verify that my loggers are working asynchronously, utilising the LMAX Disruptor in the process?
There are two types of async logger, handled by different classes.
All loggers async: the AsyncLogger class, activated when you use AsyncLoggerContextSelector.
Mixing sync and async loggers: the AsyncLoggerConfig class, used when your configuration file has <AsyncRoot> or <AsyncLogger> elements nested within <Loggers>.
In your case you are making all loggers async, so you want to put your breakpoint in AsyncLogger#logMessage(String, Level, Marker, Message, Throwable).
Another way to verify is to set <Configuration status="trace"> at the top of your configuration file. This outputs internal log4j messages about how log4j is configured. You should see something like "Starting AsyncLogger disruptor...". If you see this, all loggers are async.
Put a breakpoint in org.apache.logging.log4j.core.async.AsyncLoggerConfig#callAppenders. Then you can watch as the event is put into the disruptor. Likewise org.apache.logging.log4j.core.config.LoggerConfig#callAppenders should be getting hit for synchronous logging OR getting hit from the other side of the disruptor for async logging (at which point everything is synchronous again).
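A third, programmatic sanity check (hedged: it relies on log4j 2.x internals rather than public API): with the AsyncLoggerContextSelector active, LogManager should hand back instances of the async logger class.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AsyncCheck {
    public static void main(String[] args) {
        Logger logger = LogManager.getLogger(AsyncCheck.class);
        // Expect org.apache.logging.log4j.core.async.AsyncLogger when the
        // context selector took effect; a plain org.apache.logging.log4j.core.Logger
        // means the loggers are still synchronous.
        System.out.println(logger.getClass().getName());
    }
}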
I'm working with core Java and IBM WebSphere MQ 6.0. We have a standalone module, say DBComponent, that hits the database and fetches a result set based on a runtime query. The query is passed to the application via MQ messaging. We have a trigger configured for the queue which invokes the DBComponent whenever a message is available. The DBComponent consumes the message, constructs the query, and returns the result set to another queue. Throughout this process we use log4j to write audit statements to a log file.
The database connection is pooled using Apache pooling. I am trying to check whether the log messages are logged correctly using a sample program. The program places the input message on the queue and then checks for the corresponding entries in the log file. The trigger invocation is expected to complete before I check for the message in the log file, but every time, my check runs first, causing it to fail.
Even introducing a Thread.sleep(time) doesn't solve the case. How can I make my method wait until the trigger operation completes?
Any suggestion will be helpful.
I suggest you go and read up about the concurrency primitives that Java offers you. http://tutorials.jenkov.com/java-concurrency/index.html seems to cover the bases, the Thread Signalling chapter in particular. A small illustration of the signalling idea follows.
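For instance, a java.util.concurrent.CountDownLatch lets the checking thread block until the trigger handler signals completion. This hedged sketch assumes both sides run in the same JVM; if the trigger runs in a separate process, you would instead have to poll for an external effect (the log line, a reply message) with a timeout.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TriggerSignal {
    private static final CountDownLatch done = new CountDownLatch(1);

    // Called by the trigger-handling thread when processing has finished.
    public static void triggerCompleted() {
        done.countDown();
    }

    // Called by the test thread; waits until the trigger completes or times out.
    public static boolean awaitTrigger(long timeoutSeconds) throws InterruptedException {
        return done.await(timeoutSeconds, TimeUnit.SECONDS);
    }
}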
I would recommend against relying on log4j (or any logging functionality) even in a simple test program.
Have your test run as you would expect it to, putting debugging/tracing statements in the log as you see fit (be liberal about it; log4j is very fast!). Then, when it's done, check the log yourself.
Writing log-parsing code will only complicate your goals.
Write your test, view the result, view the logs. If you want automated testing, consider setting up a functional test. You can set up tests for free using Selenium (http://seleniumhq.org/). There's no need to write your own functional testing/parsing stuff when there are easy-to-configure, easy-to-use, easy-to-customize frameworks out there! :-)
There is only one file, and it is written to simultaneously as the web app copies run.
How do you filter one session's log messages out from the other log lines?
Using a servlet filter with either NDC or MDC information is the best way I've seen. A quick comparison of the two is available at http://wiki.apache.org/logging-log4j/NDCvsMDC.
I've found MDC has worked better for me in the past. Remember that you'll need to update your log4j properties file to include whichever version you prefer (pattern definitions at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html).
A full example of configuring MDC with a servlet filter is available at http://veerasundar.com/blog/2009/11/log4j-mdc-mapped-diagnostic-context-example-code/.
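Condensed, the filter looks something like this (a hedged sketch of the idea in that example; the MDC key name is arbitrary):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.apache.log4j.MDC;

public class MdcSessionFilter implements Filter {

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        try {
            // Tag every log statement on this thread with the HTTP session id.
            MDC.put("sessionId", ((HttpServletRequest) request).getSession().getId());
            chain.doFilter(request, response);
        } finally {
            MDC.remove("sessionId"); // container threads are pooled: always clean up
        }
    }

    public void destroy() {}
}

Add %X{sessionId} to the appender's pattern to include the value in each line.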
A slightly easier-to-configure but significantly inferior option: you could just print the thread ID for each request (via the properties file) and make sure that the first thing you log for each request is a session identifier. It isn't as proper (or useful), but it can work for low-volume applications.
You could set a context message including the identifier of the specific app instance using org.apache.log4j.NDC, like this:
String appInstanceId = "My App Instance 1";
org.apache.log4j.NDC.push(appInstanceId);
try {
    // handle request
} finally {
    org.apache.log4j.NDC.clear(); // clean up even on exceptions: pooled threads keep their NDC
}
You can set up the context during the initialization of your web app instance, or inside the doPost() method of your servlets. As its name implies, you can also nest contexts within contexts with multiple push calls at different levels.
See the section "Nested Diagnostic Contexts" in the Log4J manual.
Here is a page that sets up an MDC filter for a web app: http://rtner.de/software/MDCUserServletFilter.html
Being a servlet filter, it will free you from managing MDC/NDC in each of your servlets.
Of course, you should modify it to save information more pertinent to your web app.
If you want to differentiate sessions in the same application, then MDC is the way to go. But if you want to differentiate the web applications writing to the same file, MDC won't help, because it works on a per-thread basis.

In that case I used to write my own appender which knows which application instance it serves; this can be done through appender configuration properties. Such an appender sticks the application name into each logging event as a property before writing it out, and you can then use a layout to show that property's value in the text file. Using MDC here won't work, because every thread would have to call MDC.put(applicationName), and that is quite ugly; MDC is only good for a single process, not for several processes. If someone knows another way, I'd like to hear it. A sketch of such an appender is below.
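A hedged sketch of that appender idea for log4j 1.2.15+ (the property-carrying LoggingEvent constructor appeared around that release; the appName property and the FileAppender base class are choices of mine, not from the original answer):

import java.util.HashMap;
import java.util.Map;
import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class AppNameFileAppender extends FileAppender {

    private String appName = "unknown";

    // Bean property so it can be set from configuration,
    // e.g. log4j.appender.FILE.appName=MyAppInstance1
    public void setAppName(String appName) { this.appName = appName; }
    public String getAppName() { return appName; }

    @Override
    public void append(LoggingEvent event) {
        Map props = new HashMap(event.getProperties());
        props.put("appName", appName);
        // Rebuild the event with the extra property attached, then let
        // FileAppender write it; %X{appName} in the layout shows the value.
        LoggingEvent tagged = new LoggingEvent(
                event.getFQNOfLoggerClass(), event.getLogger(), event.getTimeStamp(),
                event.getLevel(), event.getMessage(), event.getThreadName(),
                event.getThrowableInformation(), event.getNDC(),
                event.getLocationInformation(), props);
        super.append(tagged);
    }
}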