I want to show the frontend user all the logs. How can I transport all the log statements to a frontend? For example:
void process() {
    ..
    // currently this is shown in a file and in a console
    log.info("process called..");
}
How can I transport this log message to the frontend in an efficient manner? Should I append the logs to a StringBuilder? How can I do this with Log4j2?
Currently, I have no JDBC store, but I can store all my logs in a NoSQL database. I cannot use JDBCAppender (or CassandraAppender). Should I avoid a Logger and do it myself:
Instead of
log.info("process called..");
I could use
user.addLog("process called..");
Would it be better to get the string value of log.info()? If so, how?
The best idea would be to store your logs in a database with a JDBCAppender. When the user requests the logs, you can decide how many of them to load and return in your response.
If you held all your logs in memory, e.g. in a StringBuffer, you could run out of memory and kill your application. Also, on a server restart, all your logs would be lost. Both are prevented by storing the logs in a database and accessing them on demand.
If you really need a StringAppender for custom integration, you have to write it yourself, extending AbstractOutputStreamAppender.
Here is a blog post with code about it.
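If you do go that route, here is a minimal sketch of an in-memory appender using Log4j2's plugin API. The class name, the capacity attribute, and the snapshot() accessor are illustrative assumptions (not from the blog post), and the buffer is capped precisely because of the out-of-memory risk described above:
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.Core;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.Property;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;

// Declared as a Log4j2 plugin so it can be referenced from log4j2.xml.
@Plugin(name = "InMemory", category = Core.CATEGORY_NAME, elementType = Appender.ELEMENT_TYPE)
public final class InMemoryAppender extends AbstractAppender {

    private final int capacity;
    private final LinkedList<String> buffer = new LinkedList<>();

    private InMemoryAppender(String name, int capacity) {
        super(name, null, null, true, Property.EMPTY_ARRAY);
        this.capacity = capacity;
    }

    @PluginFactory
    public static InMemoryAppender createAppender(
            @PluginAttribute("name") String name,
            @PluginAttribute(value = "capacity", defaultInt = 1000) int capacity) {
        return new InMemoryAppender(name, capacity);
    }

    @Override
    public synchronized void append(LogEvent event) {
        buffer.add(event.getMessage().getFormattedMessage());
        if (buffer.size() > capacity) {
            buffer.removeFirst(); // drop the oldest entry to bound memory use
        }
    }

    // A frontend-facing endpoint could call this to fetch the buffered logs.
    public synchronized List<String> snapshot() {
        return Collections.unmodifiableList(new ArrayList<>(buffer));
    }
}
You would then reference it as <InMemory name="frontendLogs" capacity="1000"/> under <Appenders> in log4j2.xml, and have the endpoint that serves the frontend call snapshot().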
Related
I have a Spring Boot app, and an Elastic stack (Elasticsearch + Kibana + Filebeat, no Logstash). I want to log some information whenever a request comes to my Spring app (say, the request url and request user id).
A first naive way is to simply log that (log.info("request comes url={} user={}", url, user);). Then Filebeat will happily collect that into my Elasticsearch, and I can visualize it using Kibana.
However, I do not want these data to be mixed with all the other logs in the filebeat-* index pattern. I want them to go to, say, request-info (or request-info-*), while the other normal log data stays in the filebeat-* index pattern. Is there any way to do so? Thank you very much!
You can use a conditional output in your filebeat.yml based on a string present in your message.
Something like:
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "request-info-%{+yyyy.MM.dd}"
      when.contains:
        message: "request comes"
If your message contains the string request comes, it will be sent to the index request-info-2020.11.28, for example; if it does not, it will be sent to the default index.
You can read more about the options for the elasticsearch output in this documentation link
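On the application side, the only requirement is that the log message keeps a stable literal for the condition to match. A minimal sketch, assuming SLF4J as in the question (the class and method names are illustrative):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RequestInfoLogger {

    private static final Logger log = LoggerFactory.getLogger(RequestInfoLogger.class);

    // Keeps the literal "request comes" in the message so the when.contains
    // condition above routes the event to the request-info-* index.
    public void logRequest(String url, String userId) {
        log.info("request comes url={} user={}", url, userId);
    }
}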
I have a requirement to process a large list of users daily to send them email and SMS notifications based on some scenario. I am using the Java EE batch processing model for this. My job XML is as follows:
<step id="sendNotification">
    <chunk item-count="10" retry-limit="3">
        <reader ref="myItemReader"></reader>
        <processor ref="myItemProcessor"></processor>
        <writer ref="myItemWriter"></writer>
        <retryable-exception-classes>
            <include class="java.lang.IllegalArgumentException"/>
        </retryable-exception-classes>
    </chunk>
</step>
MyItemReader's open() method reads all users from the database, and readItem() returns one user at a time using a list iterator. In myItemProcessor, the actual email notification is sent to the user, and then the users are persisted to the database in the myItemWriter class for that chunk.
@Named
public class MyItemReader extends AbstractItemReader {

    private Iterator<User> iterator = null;
    private User lastUser;

    @Inject
    private MyService service;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        super.open(checkpoint);
        List<User> users = service.getUsers();
        iterator = users.iterator();
        if (checkpoint != null) {
            User checkpointUser = (User) checkpoint;
            System.out.println("Checkpoint Found: " + checkpointUser.getUserId());
            while (iterator.hasNext() && !iterator.next().getUserId().equals(checkpointUser.getUserId())) {
                System.out.println("skipping already read users ... ");
            }
        }
    }

    @Override
    public Object readItem() throws Exception {
        User user = null;
        if (iterator.hasNext()) {
            user = iterator.next();
            lastUser = user;
        }
        return user;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return lastUser;
    }
}
My problem is that the checkpoint stores the last record that was processed in the previous chunk. If I have a chunk with the next 10 users, and an exception is thrown in myItemProcessor for the 5th user, then on retry the whole chunk will be executed and all 10 users will be processed again. I don't want notifications to be sent again to the already processed users.
Is there a way to handle this? How should this be done efficiently?
Any help would be highly appreciated.
Thanks.
I'm going to build on the comments from @cheng. Credit to him here; hopefully my answer provides additional value in organizing and presenting the options usefully.
Answer: Queue up messages for another MDB to get dispatched to send emails
Background:
As @cheng pointed out, a failure means the entire transaction is rolled back, and the checkpoint doesn't advance.
So how to deal with the fact that your chunk has sent emails to some users but not all? (You might say it rolled back but with "side effects".)
So we could restate your question then as: How to send email from a batch chunk step?
Well, assuming you had a way to send emails through a transactional API (implementing XAResource, etc.), you could use that API.
Assuming you don't, I would do a transactional write to a JMS queue, and then send the emails with a separate MDB (as @cheng suggested in one of his comments).
Suggested Alternative: Use ItemWriter to send messages to JMS queue, then use separate MDB to actually send the emails
With this approach you still gain efficiency by batching the processing and the updates to your DB (you were only sending the emails one at a time anyway), and you can benefit from simple checkpointing and restart without having to write complicated error handling.
This is also likely to be reusable as a pattern across batch jobs and outside of batch even.
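A rough sketch of that shape, assuming a JMS 2.0 provider. The class names, the jms/NotificationQueue lookup, the persist() method on MyService, and the EmailService helper are all illustrative assumptions, and the two classes would live in separate files:
import java.util.List;

import javax.annotation.Resource;
import javax.batch.api.chunk.AbstractItemWriter;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.inject.Inject;
import javax.inject.Named;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;

@Named
public class NotificationItemWriter extends AbstractItemWriter {

    @Inject
    private MyService service;

    @Inject
    private JMSContext jmsContext; // container-managed, joins the chunk transaction

    @Resource(lookup = "jms/NotificationQueue") // assumed JNDI name
    private Queue notificationQueue;

    @Override
    public void writeItems(List<Object> items) throws Exception {
        for (Object item : items) {
            User user = (User) item;
            service.persist(user); // DB update inside the chunk transaction
            // The message is only delivered if the whole chunk commits, so a
            // rollback cannot leave half-notified users behind.
            jmsContext.createProducer().send(notificationQueue, String.valueOf(user.getUserId()));
        }
    }
}

// Separate file: consumes the queue and performs the actual send.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/NotificationQueue")
})
public class EmailSenderMDB implements MessageListener {

    @Inject
    private EmailService emailService; // assumed mailing helper

    @Override
    public void onMessage(Message message) {
        try {
            emailService.sendNotification(message.getBody(String.class));
        } catch (JMSException e) {
            throw new RuntimeException(e); // triggers redelivery
        }
    }
}
Because the queue write participates in the chunk transaction, a rollback discards the messages along with the DB updates, so users are only ever enqueued once per successful chunk.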
Other alternatives
Some other ideas that I don't think are as good, listed for the sake of discussion:
Add batch application logic tracking users emailed (with ItemProcessListener)
You could build your own list of either/both successful/failed emails using the ItemProcessListener methods: afterProcess and onProcessError.
On restart, then, you could know which users had already been emailed in the current chunk (to which we are repositioned, since the entire chunk rolled back, even though some emails have already been sent).
This certainly complicates your batch logic, and you also have to persist this success or failure list somehow. Plus this approach is probably highly specific to this job (as opposed to queuing up for an MDB to process).
But it's simpler in that you have a single batch job without the need for a messaging provider and a separate app component.
If you go this route you might want to use a combination of both a skippable and a "no-rollback" retryable exception.
Single-item chunk
If you define your chunk with item-count="1", then you avoid complicated checkpointing and error-handling code. You sacrifice efficiency, though, so this would only make sense if the other aspects of batch were very compelling: e.g., scheduling and management of jobs through a common interface, or the ability to restart at the failing step within a job.
If you were to go this route, you might want to consider defining socket and timeout exceptions as "no-rollback" exceptions (using <no-rollback-exception-classes>), since there's nothing to be gained from rolling back, and you might want to retry on a network timeout issue.
Since you specifically mentioned efficiency, I'm guessing this is a bad fit for you.
Use a Transaction Synchronization
This could work perhaps, but the batch API doesn't especially make this easy, and you still could have a case where the chunk completes but one or more email sends fail.
Your current item processor is doing something outside the chunk transaction's scope, which has caused the application state to get out of sync. If your requirement is to send out emails only after all items in a chunk have completed successfully, then you can move the emailing part to an ItemWriteListener's afterWrite(items).
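A minimal sketch of that suggestion, using JSR 352's AbstractItemWriteListener (the listener class and the EmailService helper are illustrative assumptions):
import java.util.List;

import javax.batch.api.chunk.listener.AbstractItemWriteListener;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class EmailAfterWriteListener extends AbstractItemWriteListener {

    @Inject
    private EmailService emailService; // assumed mailing helper

    @Override
    public void afterWrite(List<Object> items) throws Exception {
        // Called only after the chunk's items have been written successfully.
        for (Object item : items) {
            emailService.sendNotification((User) item);
        }
    }
}
The listener would then be registered on the step through a <listeners> element in the job XML.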
We have a web server and multiple users log in to it. We generally set the log level to ERROR or INFO. But sometimes, for debugging purposes, we need to see more logs. There is a way to change the level at runtime, but this approach is not great under heavy traffic: important logs will be missed, and we also don't know for how long we need to keep it that way. I have written a wrapper in log4j v1.2 which simply skips the level check if the user id belongs to some TestUsersList. So it opens up all logs for a particular user [a thread] only. A snippet is below:
public void trace(Object message) {
    Object diagValue = MDC.get(LoggerConstants.IS_ANALYZER_NUMBER);
    if (valueToMatch.equals(diagValue)) { // Some condition to check test number
        forcedLog(FQCN, Level.TRACE, message, null);
        return;
    }
    if (repository.isDisabled(Level.TRACE_INT))
        return;
    if (Level.TRACE.isGreaterOrEqual(this.getEffectiveLevel()))
        forcedLog(FQCN, Level.TRACE, message, null);
}
But now I have moved to log4j2, and I don't want to write this wrapper again. Is there any inbuilt functionality that log4j2 provides for this?
This can be done with filters. Add a logger to the configuration that logs all the messages you want, then add a ThreadContextMapFilter that has a KeyValuePair for each user you want to log.
Then put the user ids in the Thread Context within the code.
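For instance, something like the following. The "userId" key and the servlet-filter shape are assumptions; the ThreadContextMapFilter in your configuration would carry a KeyValuePair for that same key:
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.apache.logging.log4j.ThreadContext;

// Puts the user id into the Thread Context for the duration of the request,
// so a ThreadContextMapFilter with a matching KeyValuePair can open up all
// logging for the test users only.
public class UserIdContextFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        ThreadContext.put("userId", request.getParameter("userId")); // illustrative lookup
        try {
            chain.doFilter(request, response);
        } finally {
            ThreadContext.remove("userId"); // don't leak ids across pooled threads
        }
    }

    @Override
    public void destroy() {
    }
}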
I have a requirement to read a log file, accepting a date and an optional log level as parameters (Spring web services), and send it back to the customer. Is there any way to accomplish this without manual read/write operations?
I am facing a strange issue:
I have a page with an email field in it. When I submit the page, the control goes to a servlet where I am saving the email value in the session by using
request.getSession().setAttribute("email_Value", request.getParameter("email_Value"));
Now, on the basis of this email value, I look up the database and extract the information for this user. If the information is found, I remove the session attribute with
request.getSession().removeAttribute("email_Value");
If not, I redirect the request to the same page with an error message and a prefilled email value, which I extract from the session using
if (null != request.getSession().getAttribute("email_Value")) {
    String email = (String) request.getSession().getAttribute("email_Value");
    request.getSession().removeAttribute("email_Value");
}
It works fine in our development and UAT environments, but the problem occurs only in PROD, where we have a load balancer.
The issue is that, while coming back to the same page, the email address field changes to some different email value which I have not even entered on my machine, i.e. it is accessing someone else's session.
Could someone provide any pointers to resolve this issue? As this is a production issue, any help would be appreciated.
Thanks
Looks like you need to use sticky sessions. This must be configured in Apache.
HTTP is a stateless protocol, meaning the server doesn't identify a client across requests on its own.
When a client makes a call to the server (load balanced, say server_1 & server_2), it could reach either server_1 or server_2. Assume the request reaches server_1; now your code creates a session and adds the email to it.
When the same client makes another call to the server, this time it hits server_2. The email which is in server_1's session is not available to server_2, and server_2 might have an email from another session; that's why you are seeing another email address.
Hope it's clear.
Solution:
URL Rewriting
Cookies
If your application is deployed on multiple servers, chances are that your sessions may get transferred between servers. In such scenarios, if you are storing any objects in the session, they HAVE TO implement the Serializable interface. If they don't, the data will not be persisted when the session gets migrated.
Also, it seems that the session gets interchanged with another one. Are you storing anything at the application level?
I would also advise you to look into HttpSessionActivationListener for your case.
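A minimal sketch of a session attribute that survives migration, assuming the javax.servlet API (the class name is illustrative):
import java.io.Serializable;

import javax.servlet.http.HttpSessionActivationListener;
import javax.servlet.http.HttpSessionEvent;

// Serializable, so the container can persist and migrate it between servers;
// the listener methods let you observe (and debug) the migration itself.
public class UserSessionData implements Serializable, HttpSessionActivationListener {

    private static final long serialVersionUID = 1L;

    private String email;

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    @Override
    public void sessionWillPassivate(HttpSessionEvent se) {
        // called before the session is serialized and moved to another server
    }

    @Override
    public void sessionDidActivate(HttpSessionEvent se) {
        // called after the session has been deserialized on the new server
    }
}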