I work for an enterprise that has many millions of lines of Java code. Unfortunately, very poor practices were put in place for tracking when one Java EAR calls another EAR on another system. The problem gets even worse: we run DB2, and all the DB2 schemas run over the same data connection, which means there is no standard way to look at a config file or database connection to tell which databases an application accesses. The problem extends to other systems as well, since we have REST data services, MQ, JMS, EJB RMI, etc. Trying to do impact analysis is a nightmare.
Is there a tool, maybe a FindBugs plugin, that I can run on an application to generate a report of the systems the application accesses?
If not, and I set, say, TRACE logging on java.io and java.nio to log everything, would that capture any network connections that Java attempts to make through the app server?
My ultimate goal, if I can't find a static analysis tool that helps with these problems, is to write an AOP application that would sit between the EAR and WebSphere and log all outbound (and possibly inbound) connections to the EAR's resources.
Is this possible?
Tricky one ;-)
FindBugs can help you identify all communication-related places in the Java code, but you have to do some work for that:
Identify all kinds of connections you want to flag (e.g. DB connections, EJB communication, REST client code, ...).
Once you have that list, write your own FindBugs plugin that detects those places. This may sound complicated, but depending on how many kinds of places you want to identify, a versed developer can do it in two to three days, I would guess. As a starting point, look at the source code of the bug patterns that ship with FindBugs, find a similar one, and build on it. There are also plenty of tutorials on the web about writing your own bug pattern.
Configure FindBugs to use only your bug pattern and run it on your code base (otherwise all the other bugs will clutter the result, especially with a code base this large).
FindBugs will then generate a report showing all the "communication" places.
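To make the idea concrete without the FindBugs plugin machinery, here is a minimal standalone sketch of the same "flag communication places" step: it scans source text for references to well-known connection APIs. The marker list is purely an illustrative starting point, not exhaustive, and a real detector would work on bytecode rather than source lines.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: flag source lines that reference well-known
// "communication" APIs. The marker list is illustrative, not exhaustive.
public class ConnectionScanner {
    private static final String[] MARKERS = {
        "java.sql.DriverManager",      // raw JDBC connections
        "javax.sql.DataSource",        // pooled DB connections
        "javax.jms.",                  // JMS / MQ
        "javax.naming.InitialContext", // EJB / RMI lookups
        "java.net.URL",                // raw HTTP calls
        "javax.ws.rs.client."          // JAX-RS REST clients
    };

    // Returns "lineNumber: marker" entries for every flagged line.
    public static List<String> scan(String source) {
        List<String> hits = new ArrayList<>();
        String[] lines = source.split("\n");
        for (int i = 0; i < lines.length; i++) {
            for (String m : MARKERS) {
                if (lines[i].contains(m)) {
                    hits.add((i + 1) + ": " + m);
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        String sample = "import java.sql.DriverManager;\n"
                      + "public class Dao {}\n";
        System.out.println(scan(sample));
    }
}
```

Run over a whole code base, the output of such a pass is essentially the raw material for the report FindBugs would give you.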
I am trying to replace a requirement our dev teams have where they manually fill out a form listing their app's external connections (for example any database connections, calls to other services/applications, backing services, etc.). This is required in order to get approval to deploy to production. Management/Security and our last-mile folks use this information to determine risk level and to make sure that any scheduled dependencies are looked at (e.g., that the deployment is not scheduled for a time when one of the backing services is down, so all the integration tests don't fail). Any suggestions for capturing this automatically by scanning the code in Git? Or can Dynatrace provide this information if we have it monitoring in the lower environments pre-prod? Some other tool?
A little background in case you need it: we are using Jenkins with OpenShift to deploy Docker containers to AWS PaaS. Code is stored in Git; we use Bitbucket. In the pipeline we have SonarQube scanning and a tool that scans the third-party libraries the app is using (e.g., Struts, Cucumber). We have Dynatrace to monitor the app in production (but we can also use it in dev if we want). Mostly Java apps, but we also have Node, Python and .NET.
I can't suggest a way to automate this. I suspect there isn't one.
I would have thought it advisable that the dev teams do this by hand anyway. Surely they should have a handle on which external connections their apps ought to be making. Expecting the production/security team to take care of it all means they would need to develop a deeper understanding of each app's functionality and architecture in order to make a reasoned decision on whether particular access is necessary.
I do have one suggestion though. You could conceivably do your testing on machines with firewalls that block out-going connections for all but a set of white-listed hosts and ports. That white-list could be the starting point for the forms you need to fill in.
Have you looked into tagging? Manual or environment based variable set up looks painful (which is why I have avoided), but might be worthwhile? https://www.dynatrace.com/support/help/how-to-use-dynatrace/tags-and-metadata/
I am used to adding logging to standalone Java applications and writing the logs to files using log4j and slf4j. I am moving some applications to a Java Web Start format and I am not clear on the best way to perform logging to monitor the behaviour of the application. I have thought of two options:
Write the log to the local machine and provide an option to send the information to the central server under some condition (time, error etc..)
Send the output of the log to the server directly
What is best practice?
I've seen 1. implemented by many programs.
But 2. seems bandwidth-intensive, intrusive, and overkill.
Agreed, 2 seems like it's not such a good option: an error with the web services wouldn't be logged in that case. I was wondering if there was any other option, but I can't think of any.
I was thinking of entirely local sources of problems connecting to the server, but good point.
What is best practice?
Stick with the majority and use method 1. Unless you have a marvelous inspiration about how the entire logging/reporting system can be improved, I'd go with "tried and tested". It is likely to be easiest, best supported by existing frameworks, and, should your code falter, has the greatest number of people who have 'been there, done that' to potentially help.
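Method 1 can be sketched in a few lines: always append to a local log file, and only push it to the central server when a trigger condition is met (here, an error-count threshold; a time-based trigger works the same way). The upload itself is stubbed out; in a real app it would be an HTTP POST to your collection endpoint, which is hypothetical here.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Sketch of option 1: always log locally, and only push the log file to a
// central server when a trigger condition is met (here: an error threshold).
public class ConditionalUploadLogger {
    private final Path logFile;
    private final int errorThreshold;
    private int errorCount = 0;
    boolean uploadTriggered = false; // exposed so the demo can inspect it

    public ConditionalUploadLogger(Path logFile, int errorThreshold) {
        this.logFile = logFile;
        this.errorThreshold = errorThreshold;
    }

    public void log(String level, String message) throws IOException {
        Files.write(logFile, List.of(level + " " + message),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        if ("ERROR".equals(level) && ++errorCount >= errorThreshold) {
            uploadTriggered = true;
            upload();
        }
    }

    // Stub: replace with an HTTP upload of logFile to the central server.
    protected void upload() throws IOException {
        System.out.println("would upload " + Files.size(logFile) + " bytes");
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("app", ".log");
        ConditionalUploadLogger log = new ConditionalUploadLogger(tmp, 2);
        log.log("INFO", "started");
        log.log("ERROR", "first failure");  // below threshold: local only
        log.log("ERROR", "second failure"); // threshold hit: upload fires
    }
}
```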
I'm looking for a way to centralise the logging concerns of distributed software (written in Java), which should be quite easy, since the system in question has only one server. But keeping in mind that more instances of this server are very likely to run in the future (and more applications will need this), there would have to be something like a logging server that takes care of incoming logs and makes them accessible to the support team.
The situation right now is that several Java applications use log4j, which writes its data to local files, so if a client experiences problems the support team has to ask for the logs, which isn't always easy and takes a lot of time. In the case of a server fault the diagnosis problem is not as big, since there is remote access anyway, but even so, monitoring everything through a logging server would still make a lot of sense.
While going through the questions regarding "centralised logging" I found another question (actually the only one with a, in this case, usable answer). The problem is that all our applications run in a closed environment (within one network) and security guidelines do not permit anything concerning internal software to leave that network.
I also found a wonderful article about how one would implement such a logging server. Since the article was written in 2001, I would have thought someone might have already solved this particular problem, but my search results came up with nothing.
My question: is there a logging framework that handles logging over the network to a centralised server which can be accessed by the support team?
Specification:
Availability
Server has to be run by us.
Java 1.5 compatibility
Compatibility to a heterogeneous network.
Best-Case: Protocol uses HTTP to send logs (to avoid firewall-issues)
Best-Case: Uses log4j or LogBack or basically anything that implements slf4j
Not necessary, but nice to have
Authentication and security are of course an issue, but could be set aside for at least a while (if it is open software we would extend it to our needs; OT: we always give back to the projects).
Data mining and analysis is something very helpful for making software better, but that could just as well be an external application.
My worst-case scenario is that there is no software like that. In that case we would probably implement this ourselves. But if such a client-server application exists, I would very much appreciate not having to do this particularly problematic bit of work.
Thanks in advance
Update: The solution has to run on several java-enabled platforms. (Mostly Windows, Linux, some HP Unix)
Update: After a lot more research we actually found a solution we were able to acquire. clusterlog.net (offline since at least mid-2015) provided logging services for distributed software and was compatible with log4j and logback (which implements slf4j). It lets us analyze every single user's way through the application, making it very easy to reproduce reported bugs (or even unreported ones). It also notifies us of important events by email and has a report system where logs of the same origin are summarized into an easily accessible format. We deployed it here just a couple of days ago (the deployment was flawless) and it is running great.
Update (2016): this question still gets a lot of traffic, but the site I referred to does not exist anymore.
You can use log4j with the SocketAppender; you then have to write the server part yourself as LogEvent processing.
see http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SocketAppender.html
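Note that SocketAppender ships serialized LoggingEvent objects, and log4j 1.2 includes org.apache.log4j.net.SimpleSocketServer to receive them. Purely to illustrate the server-side shape without the log4j dependency, here is a sketch of a tiny line-based TCP log collector; a real receiver would deserialize LoggingEvents instead of reading text lines.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal line-based TCP log collector, standing in for the LogEvent
// processing a real SocketAppender server would do.
public class TinyLogServer implements AutoCloseable {
    final List<String> received = new CopyOnWriteArrayList<>();
    private final ServerSocket server;
    private final Thread acceptor;

    public TinyLogServer() throws Exception {
        server = new ServerSocket(0); // ephemeral port for the demo
        acceptor = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) received.add(line);
            } catch (Exception ignored) { }
        });
        acceptor.start();
    }

    public int port() { return server.getLocalPort(); }

    @Override public void close() throws Exception {
        acceptor.join(2000); // wait for the acceptor to drain the connection
        server.close();
    }

    public static void main(String[] args) throws Exception {
        TinyLogServer srv = new TinyLogServer();
        // Pretend to be the appender side: one client sending two events.
        try (Socket s = new Socket("localhost", srv.port());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println("INFO app started");
            out.println("ERROR something failed");
        }
        srv.close();
        System.out.println(srv.received);
    }
}
```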
NXLog, Logstash or Graylog2
or
Logstash + Elasticsearch (+ optionally Kibana)
Example:
1) http://logstash.net/docs/1.3.3/tutorials/getting-started-simple
2) http://logstash.net/docs/1.3.3/tutorials/getting-started-centralized
Have a look at logFaces; it looks like your specifications are met.
http://www.moonlit-software.com/
Availability (check)
Server has to be run by us. (check)
Java 1.5 compatibility (check)
Compatibility to a heterogeneous network. (check)
Best-Case: Protocol uses HTTP to send logs (to avoid firewall-issues) (almost: it uses TCP/UDP rather than HTTP)
Best-Case: Uses log4j or LogBack or basically anything that implements slf4j (check)
Authentication (check)
Data mining and analysis (possible through extension api)
There's a ready-to-use solution from Facebook, Scribe, that uses Apache Hadoop under the hood. However, most companies I'm aware of still tend to develop in-house systems for this. I worked at one such company and dealt with logs there about two years ago. We also used Hadoop. In our case we had the following setup:
We had a small dedicated cluster of machines for log aggregation.
Workers mined logs from the production service and parsed individual lines.
Then reducers would aggregate the necessary data and prepare reports.
We had a small and fixed number of reports that we were interested in. In rare cases when we wanted to perform a different kind of analysis we would simply add a specialized reducer code for that and optionally run it against old logs.
If you can't decide what kind of analyses you are interested in in advance then it'll be better to store structured data prepared by workers in HBase or some other NoSQL database (here, for example, people use Mongo DB). That way you won't need to re-aggregate data from the raw logs and will be able to query the datastore instead.
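The worker/reducer split above can be sketched outside Hadoop in plain Java: the "worker" step parses each raw line into a record, and the "reducer" step aggregates counts for the report. The log-line format here is an assumption for illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Plain-Java sketch of the worker (parse) and reducer (aggregate) steps.
public class LogAggregation {
    // Worker: extract the level from a raw line like "2013-01-01 ERROR db timeout".
    static String levelOf(String rawLine) {
        return rawLine.split(" ", 3)[1];
    }

    // Reducer: count events per level for the report.
    static Map<String, Long> countByLevel(List<String> rawLines) {
        return rawLines.stream().collect(Collectors.groupingBy(
                LogAggregation::levelOf, TreeMap::new, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
                "2013-01-01 ERROR db timeout",
                "2013-01-01 INFO request ok",
                "2013-01-02 ERROR db timeout");
        System.out.println(countByLevel(lines)); // {ERROR=2, INFO=1}
    }
}
```

In the Hadoop version the same two functions become the map and reduce phases, distributed over the cluster instead of a single stream.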
There are a number of good articles about such log-aggregation solutions, for example about using Pig to query the aggregated data. Pig lets you query large Hadoop-based datasets with SQL-like queries.
Our development team hosts many different applications, both .NET- and Java-based. Currently, we handle our error logging with log4j and use emails to alert the development team when problems arise. We get thousands of alerts a day, and it's becoming tedious to maintain.
We've been discussing creating a central dashboard for all our apps. The ideal tool would track errors, warnings, info, etc. over the life of an application (it doesn't necessarily need to be DB-driven). The idea is that the data can be viewed on a dashboard, drillable to specific errors, with the capability of alerting via email when triggers and/or thresholds are met.
Elmah is good for .NET, but we need a tool that can also work for Java EE. What is the best way to go about this? Should we:
Just use Elmah for the .NET apps, find something similar for Java, and build our own dashboard to create a unified look & feel?
OR
Is there a tool that already exists that we can leverage to do this cross platform?
I've tried looking on SourceForge, but it's difficult to describe what I'm looking for.
I don't think you have a logging problem; I think you have an integration problem. Whether it is logging or any other area, your root issue is the same: how do I make my completely different components talk to each other?
There are a lot of approaches, but probably the easiest to implement across different technologies is web services or REST. You will probably need a central logger that you implement independently, and then a web-service/REST interface that your components connect to.
A different line of investigation is to see whether there is a logging product on the market that accepts web-service calls. If so, you only need to change your components to make a service call every time.
Something else you need to consider is that your remote logging should never supersede your local logging; that is, do both. The reason is very simple: remote calls can fail, so code as if they will.
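That "local first, remote best-effort" rule can be sketched as a thin wrapper: every event is written locally before the remote forward is attempted, and a failed forward is swallowed (and noted locally) instead of breaking the application. The in-memory list stands in for a local log file, and the remote sink for your web-service call.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: remote logging never replaces local logging. Log locally always;
// forward remotely best-effort, and never let a remote failure propagate.
public class DualLogger {
    interface Sink { void write(String line) throws Exception; }

    final List<String> local = new ArrayList<>(); // stands in for a local file
    private final Sink remote;                    // e.g. a web-service call

    DualLogger(Sink remote) { this.remote = remote; }

    public void log(String line) {
        local.add(line);            // local logging always happens first
        try {
            remote.write(line);     // best-effort remote forward
        } catch (Exception e) {
            local.add("WARN remote log failed: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        // A remote sink that is down: the app keeps working, locally logged.
        DualLogger broken = new DualLogger(line -> {
            throw new Exception("connection refused");
        });
        broken.log("event B");
        System.out.println(broken.local);
    }
}
```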
We have been using http://www.exceptional.io/ for error tracking for some time now; it's cheap and extremely simple.
To report errors you just POST a JSON document to its endpoint.
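For illustration, here is a sketch of building such a JSON error document by hand; the field names are made up for the example and are not Exceptional's actual schema, and a real reporter would then POST the string to the service endpoint.

```java
// Sketch of the kind of JSON error document such a service accepts.
// The field names here are illustrative, not the service's real schema.
public class ErrorReport {
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    static String toJson(Throwable t, String appName) {
        return "{"
             + "\"application\":\"" + escape(appName) + "\","
             + "\"exception\":\"" + escape(t.getClass().getName()) + "\","
             + "\"message\":\"" + escape(String.valueOf(t.getMessage())) + "\""
             + "}";
    }

    public static void main(String[] args) {
        Throwable t = new IllegalStateException("db connection lost");
        // A real reporter would POST this string to the service endpoint.
        System.out.println(toJson(t, "billing-service"));
    }
}
```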
This is more of a question if such a piece of software exists:
The problem right now in our applications is that if there is a "gack" we mail it out. This quickly turns bad if there is a really serious problem that just spams our email overnight.
Is there a tool that populates these errors in some sort of database that we can query against (by component) and use to build a nice little monitoring site for all the exceptions thrown by each component?
I've been searching around and found nothing of the sort; right now I'm looking into plain log-file monitoring, since there seem to be a bunch of tools around for that.
Thanks
If you are using log4j for logging, it has options for sending log events to different destinations, including database tables.
Log4j also has a companion tool called Chainsaw, which could help you with the monitoring aspect. You may want to explore these two to see whether they fit your requirement.
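A minimal log4j 1.2 configuration sketch for the database route might look like the following; the driver, URL, credentials, and table/column names are all placeholders you would replace with your own.

```properties
# Sketch: route ERROR events to a database table via JDBCAppender.
# Driver, URL, credentials and table/column names are placeholders.
log4j.rootLogger=ERROR, DB
log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.DB.driver=com.ibm.db2.jcc.DB2Driver
log4j.appender.DB.URL=jdbc:db2://dbhost:50000/LOGDB
log4j.appender.DB.user=loguser
log4j.appender.DB.password=secret
log4j.appender.DB.sql=INSERT INTO APP_LOG (LEVEL, LOGGER, MESSAGE) VALUES ('%p', '%c', '%m')
log4j.appender.DB.layout=org.apache.log4j.PatternLayout
```

Note that JDBCAppender in log4j 1.2 builds the INSERT statement from the pattern, so messages containing quotes need care; for production use a hardened appender is advisable.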
-RR
I think you would love to know that there is a team out to solve just the same problem in the most beautiful fashion.
https://www.takipi.com/
How it works
Takipi supports all JVM-based languages, and does not require code changes or build configurations to use.
From : https://www.takipi.com/how-it-works
To begin, install the Takipi daemon process on the target machine. You can then monitor a target application by adding a standard -agentlib parameter to its list of JVM arguments. The agent library detects all caught and uncaught exceptions, HTTP and log errors from the JVM, without needing to access log files.
I would prefer Errbit. It is the open-source alternative to Airbrake. It comes from the Rails world, but with a log4j appender you can easily connect it to the Java world. It clusters error cases, supports environment information, supports various error trackers, and sends emails.
Have a look at Ctrlflow Automated Error Reporting. It is optimized for Java and comes with several logging integrations, which send error reports home to a central server.
If you are developing in Eclipse you may already have seen it in action, as Eclipse uses this service to report and track its errors.
The server then automatically filters and aggregates the incoming error reports. If you get the same error multiple times, the server detects that and groups all error reports into a single problem, so you don't have to checkout each individual report. These problems are then assigned to their correct component or project.
The server synchronizes itself with your bug tracker and can automatically open new bugs. It also offers dashboards and digest mails sent at regular intervals to your developers.
And it's free for small projects.
Note that I am one of its developers.
You may also find the tool JSnapshot useful for monitoring exceptions in a Java application in real time.
JSnapshot is an advanced Java exception logging, monitoring and analysis tool. It traces thrown exceptions in real time and logs a snapshot of the call stack, variables and objects for every thrown exception. With this tool you can examine all of the exception details as if the application had been stopped at a breakpoint in the debugger when the exception happened.
And it's integrated with Eclipse IDE.