Suppose I have a big program that consists of hundreds of methods, and the program flow changes according to the input.
Say I want to make a change to the original flow. It is a big hassle to find the call hierarchy/references and understand the flow.
Is there any solution for this within Eclipse, or a plugin? As an example, I just need a log of method names in the order they were called. Then I don't need to worry about the methods that are not relevant to my given input.
Update: Using debug mode in Eclipse or adding print messages is not feasible. The program is sooooo big. :)
You could use AspectJ to log the name of all methods called without changing your original program.
See tracing for instance.
aspect SimpleTracing {
    pointcut tracedCall():
        call(void FigureElement.draw(GraphicsContext));

    before(): tracedCall() {
        System.out.println("Entering: " + thisJoinPoint);
    }
}
If all you want to know is what methods got called, rather than the precise order, you might consider using a test coverage tool. These tools instrument the source code to collect "this got executed" facts at various degrees of granularity (method call only, and/or every code block controlled by a conditional).
The SD Test Coverage Tool will do this, although it won't collect the call graph or even the order of the calls.
If you want more control over the instrumentation, you can consider using the DMS Software Reengineering Toolkit. DMS will parse, transform, and prettyprint Java in arbitrary ways controlled by custom source-to-source program transformation rewrite rules. It would be easy to insert logging transformations into the start and exit of each method (and in fact this is almost exactly how the SD test coverage tool works). Given the raw enter-X and exit-X data, constructing the runtime call tree is a straightforward task.
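Given such raw enter/exit events, the call-tree reconstruction mentioned above can be sketched in plain Java. This is a minimal sketch under the assumption that the instrumentation emits "+name" on entry and "-name" on exit; the event format and class name are made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: rebuild a runtime call tree from a stream of enter/exit events,
// as hypothetical instrumented code could emit them ("+name" = enter,
// "-name" = exit). Indentation depth mirrors the call stack.
public class CallTreeBuilder {

    public static String buildTree(String[] events) {
        Deque<String> stack = new ArrayDeque<String>();
        StringBuilder tree = new StringBuilder();
        for (String e : events) {
            if (e.charAt(0) == '+') {
                // One level of indentation per frame currently on the stack.
                for (int i = 0; i < stack.size(); i++) {
                    tree.append("  ");
                }
                tree.append(e.substring(1)).append('\n');
                stack.push(e.substring(1));
            } else {
                stack.pop();
            }
        }
        return tree.toString();
    }

    public static void main(String[] args) {
        String[] events = {"+main", "+parse", "-parse", "+eval",
                           "+lookup", "-lookup", "-eval", "-main"};
        System.out.print(buildTree(events));
    }
}
```

Running this prints the nested tree (main, then parse and eval indented under it, then lookup under eval).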
I am struggling with the following problem and am asking for help.
My application has a logger module. It takes the trace level and the message (as a string).
Messages often have to be constructed from different sources and/or in different ways (e.g. sometimes using String.format prior to logging, other times using the .toString() methods of different objects). Therefore, the construction of the messages cannot be generalized.
What I want is to make my logger module efficient. That means the trace messages should only be constructed if the current trace level will actually log them, while avoiding copy-pasted code all over my application.
With C/C++, using macros this was very easy to achieve:
#define LOG_IT(level, message) if(level>=App.actLevel_) LOG_MSG(message);
The LOG_MSG and the string construction was done only if the trace level enabled that message.
With Java, I can't find any similar possibility. The goal is that the logging remains one line (no if-else copy-pasted everywhere) and the string construction (an expensive operation) is only done if necessary.
The only solution I know is to surround every logger call with an if statement. But that is exactly what I avoided in the C++ app, and what I want to avoid in my Java implementation.
My problem is that only Java 1.6 is available on the target system, so Supplier is not an option.
What can I do in Java? How can this C/C++ method easily be done?
Firstly, I would encourage you to read this if you're thinking about implementing your own logger.
Then, I'd encourage you to look at a well-established logging API such as SLF4J. Whilst it is possible to create your own, using a pre-existing API will save you time and effort, and above all provide you with more features and flexibility out of the box (e.g. file-based configuration, and customisability such as the Mapped Diagnostic Context).
To your specific question: there isn't a simple way to do what you're trying to do. C/C++ are fundamentally different from Java in that the preprocessor allows for macros like the one you've created above. Java doesn't really have an easy-to-use equivalent, though there are projects that make use of compile-time code generation, which is probably the closest equivalent (e.g. Project Lombok, MapStruct).
The simplest way I know of to avoid expensive string building operations whilst logging is to surround the building of the string with a simple conditional:
if ( logger.isTraceEnabled() )
{
    // Really expensive operation here
}
Or, if you're using Java 8, the standard logging library takes a java.util.function.Supplier<T> argument which will only be executed if the current log level matches that of the logging method being called:
log.fine(()-> "Value is: " + getValue());
There is also currently a ticket open for SLF4j to implement this functionality here.
If you're really really set on implementing your own logger, the two above features are easy enough to implement yourself, but again I'd encourage you not to.
Edit: AspectJ compile-time weaving can be used to achieve something similar to what you're after. It would allow you to wrap all your logging statements in a conditional check and so remove the boilerplate.
Recent logging libraries, including java.util.logging, provide a second form of each logging method that takes a Supplier<String>.
E.g. log.info( ()->"Hello"); instead of log.info("Hello");.
The supplier's get() method is only called if the message actually has to be logged, so your string is only constructed in that case.
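A small self-contained demonstration of that behaviour with java.util.logging; the counter is just instrumentation added for the demo:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Demo: with the Supplier overloads of java.util.logging.Logger, the
// message-building code only runs when the level is actually enabled.
public class DeferredLoggingDemo {
    static int buildCount = 0;

    static String expensiveMessage() {
        buildCount++;                        // count how often the string is built
        return "expensive detail";
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setLevel(Level.INFO);            // FINE messages are filtered out

        log.fine(() -> expensiveMessage());  // supplier never invoked at INFO level
        System.out.println(buildCount);      // prints 0

        log.info(() -> expensiveMessage());  // level matches, supplier runs
        System.out.println(buildCount);      // prints 1
    }
}
```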
I think the most important thing to understand here is that the C/C++ macro solution does not save computational effort either: the logged message is still constructed even when the log level means it will not be logged.
Why is that? Simply because the preprocessor substitutes every usage of the macro:
LOG_IT(level, message)
with the code:
if(level>=App.actLevel_) LOG_MSG(message);
substituting whatever you passed as level and whatever you passed as message into the macro body. The resulting code to be compiled will be exactly the same as if you had copied and pasted the macro code everywhere in your program. The only thing macros help you with is avoiding the actual copying and pasting, and making the code more readable and maintainable.
Sometimes they manage to do that; other times they make the code more cryptic and thus harder to maintain. In any case, macros do not provide the deferred execution that saves you from actually constructing the string, which Java 8's Logger achieves by using lambda expressions. Java defers the execution of the body of a lambda until the last possible moment; in other words, the body of the lambda is executed after the if statement.
To go back to your example in C/C++: you, as a developer, would probably want the code to work regardless of the log level, so you would be forced to construct a valid message string and pass it to the macro; otherwise, at certain log levels, the program would crash! So, since the message string construction must happen before the call to the macro, it will execute every time, regardless of the log level.
So, the equivalent of your code is quite simple to write in Java 6! Just use the built-in Logger class. It supports logging levels out of the box, so you do not need a custom implementation of them.
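With java.util.logging on Java 6, the guard looks like this. A minimal sketch, where buildHugeStateDump is a made-up stand-in for any expensive string construction:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Java 6 sketch: guard the expensive string construction with
// isLoggable(), so it only runs when the level is actually enabled.
public class GuardedLogging {
    private static final Logger LOG =
            Logger.getLogger(GuardedLogging.class.getName());

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);
        if (LOG.isLoggable(Level.FINE)) {
            // Skipped entirely at INFO level, concatenation and all.
            LOG.fine("State dump: " + buildHugeStateDump());
        }
    }

    private static String buildHugeStateDump() {
        return "...";  // stands in for an expensive operation
    }
}
```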
If what you are asking is how to implement deferred execution without lambdas, the options are much clumsier: before Java 8 there are no lambdas or method references, only anonymous classes.
If you wanted real deferred execution in C/C++, you would make the logging code take a function pointer to a function returning the message string, call that function inside the if statement, and pass the macro not a string but a function that creates and returns the string. The actual C/C++ code for this is out of scope for this question. The key concept is that C/C++ give you the tools for deferred execution simply because they support function pointers; Java had no lightweight equivalent until Java 8.
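On Java 6, the closest approximation to that function-pointer trick is an anonymous class. A sketch of a small wrapper that only builds the message when the level is enabled; the names LazyLog and MessageBuilder are made up:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Java 6 sketch of deferred message construction: an anonymous class
// plays the role of Java 8's Supplier / C's function pointer.
public class LazyLog {
    public interface MessageBuilder {
        String build();
    }

    private final Logger logger;

    public LazyLog(Logger logger) { this.logger = logger; }

    // build() only runs when the level is enabled, so callers get a
    // one-line logging statement without an explicit if-guard.
    public void log(Level level, MessageBuilder builder) {
        if (logger.isLoggable(level)) {
            logger.log(level, builder.build());
        }
    }

    public static void main(String[] args) {
        Logger raw = Logger.getLogger("demo");
        raw.setLevel(Level.INFO);
        LazyLog log = new LazyLog(raw);

        log.log(Level.FINE, new MessageBuilder() {  // never built at INFO level
            public String build() { return "expensive: " + System.nanoTime(); }
        });
    }
}
```

The call site is still noisier than a lambda, but the expensive construction is genuinely deferred.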
Whenever I program, I seem to accumulate a lot of "trash" code, code that is not in use anymore. Just to keep my code neat, and to avoid making any expensive and unnecessary computations, Is there an easy way to tell if there is code that is not being used?
One of the basic principles that will help you here is to reduce the visibility of everything as much as possible. If a class can be private, don't make it package-private, protected or public. The same applies to methods and variables. It is much easier to say for sure that something is not used outside a class, and in cases like this even IDEs like Eclipse and IntelliJ IDEA will flag the unused code for you.
Using this practice while developing and refactoring is the best way to remove unused code confidently, without the risk of breaking the application. It helps even in scenarios where reflection is being used.
It's difficult to do in Java since it's a reflective language. (You can't simply hunt for calls to a certain class or function, for example, since reflection can be used to call a function using strings that can only be resolved at runtime.)
So in full generality, you cannot be certain.
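For instance, a reflective call resolves the method name only at runtime, so no textual search for the call site will find it. A minimal sketch; the class and method names are made up:

```java
import java.lang.reflect.Method;

// Sketch of why static "find all callers" searches can miss usages:
// the method name below is produced at runtime, so searching the
// source for a call to greet() finds nothing, yet greet() is used.
public class ReflectiveCall {
    public static String greet() { return "hi"; }

    public static void main(String[] args) throws Exception {
        // Imagine this string came from a config file or user input.
        String name = new StringBuilder("teerg").reverse().toString(); // "greet"
        Method m = ReflectiveCall.class.getMethod(name);
        System.out.println(m.invoke(null)); // prints hi
    }
}
```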
If you have adequate unit tests for your code base then the possibility of redundant code should not be a cause for concern.
I take "unused code" to mean code that is never executed at runtime. I hope I interpreted you correctly.
A simple check is very easy: just use IntelliJ IDEA to write your code. It will point out the parts of your code that can never be executed, as well as the parts that can be simplified. For example,
if (x == 5) {
}
And then it will tell you that this if statement is redundant. Or if you have this:
return;
someMethod();
The IDE will tell you that someMethod() can never be reached. And it also provides a lot of other cool features.
But sometimes this isn't enough. What if you have
if (x == 5) {
someMethod();
}
But in your code, x can actually only be in the range 1 to 4. The IDE won't tell you about this. You can use a tool that measures code coverage by running lots of tests; then you can see which parts of your code are never executed.
If you don't want to use such a tool, you can put breakpoints in your methods. Then run some tests by hand. When the debugger steps through your code, you can see exactly where the code goes and exactly which piece(s) of code is not executed.
Another method to do this is to use the Find/Replace function of the IDE. Check if some of your public/private methods are not being called anywhere. For example, to check whether someMethod() is called, search for someMethod in the whole project and see if there are occurrences other than the declaration.
But the most effective way would be,
Stop writing this kind of code in the first place!
I think the best way to check that is to install a coverage plugin like EclEmma and create unit and integration tests that achieve 100% coverage of the code that accomplishes the use cases/tasks you have.
The code that you don't need to test, or that is never executed once the tests are complete and run, is code that you are not using.
Try to avoid accumulating trash in the first place. Remove stuff you don't need anymore. (You could make a backup or better use a source code management system.)
You should also write unit tests for your functions. So you know if it still works after you remove something.
Aside from that, most IDEs will show you unused local variables and private methods.
I can imagine a situation where you have an app developed over many years, and some of its functions are no longer used even though they still work. Example: let's assume you made some change to an internal system whenever a specific event occurred, but that event does not occur anymore.
I would say you could use AspectJ to obtain such data/logs and then analyze them after some time.
I'm working on a Scala-based script language (internal DSL) that allows users to define multiple data transformations functions in a Scala script file. Since the application of these functions could take several hours I would like to cache the results in a database.
Users are allowed to change the definition of the transformation functions and also to add new functions. However, when the user restarts the application with a slightly modified script, I would like to execute only those functions that have been changed or added. The question is how to detect those changes. For simplicity, let us assume that the user can only adapt the script file, so that anything not defined in this script can be assumed to be unchanged.
In this case what's the best practice for detecting changes to such user-defined functions?
So far I have thought about:
parsing the script file and calculating fingerprints based on the source code of the function definitions
getting the bytecode of each function at runtime and building fingerprints based on this data
applying the functions to some test data and calculating fingerprints on the results
However, all three approaches have their pitfalls.
Writing a parser for Scala to extract the function definitions could be quite some work, especially if you want to detect changes that indirectly affect the behaviour of your functions (e.g. if your function calls another (changed) function defined in the script).
Bytecode analysis could be another option, but I have never worked with those libraries, so I have no idea whether they can solve my problem or how they deal with Java's dynamic binding.
The approach with example data is definitely the simplest one, but has the drawback that different user-defined functions could be accidentally mapped to the same fingerprint if they return the same results for my test data.
Does anyone have experience with one of these "solutions", or can you suggest a better one?
The second option doesn't look difficult. For example, with the Javassist library, obtaining the bytecode of a method is as simple as:
CtClass c = ClassPool.getDefault().get(className);
for (CtMethod m : c.getDeclaredMethods()) {
    CodeAttribute ca = m.getMethodInfo().getCodeAttribute();
    if (ca != null) { // i.e. if the method is not native or abstract
        byte[] byteCode = ca.getCode();
        ...
    }
}
So, as long as you assume that the results of your methods depend only on the code of those methods, it's pretty straightforward.
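Once you have the byte[], turning it into a comparable fingerprint can be done with the JDK's own MessageDigest. A sketch, with nothing Javassist-specific; the sample bytes are made up:

```java
import java.math.BigInteger;
import java.security.MessageDigest;

// Sketch: fingerprint a method's bytecode with SHA-1 so two runs can be
// compared cheaply. The byte[] would come from ca.getCode() above.
public class Fingerprint {
    public static String of(byte[] byteCode) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        // Hex string of the digest; identical bytecode gives an identical hash.
        return new BigInteger(1, md.digest(byteCode)).toString(16);
    }

    public static void main(String[] args) throws Exception {
        byte[] sample = {0x2A, (byte) 0xB0};  // pretend bytecode: aload_0, areturn
        System.out.println(Fingerprint.of(sample));
    }
}
```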
UPDATE:
On the other hand, since your methods are written in Scala, they probably contain closures, so parts of their code reside in anonymous classes, and you may need to trace the usage of those classes somehow.
I'm looking for a way to list all the methods called (a call tree) by another method, during java runtime.
I'm looking for an api or an output that could allow me to use the data with a script.
Any ideas?
Thx
Coverage and profiling tools mostly employ two techniques: periodically polling the JVM for the state of the various threads, or instrumenting the application bytecode to push relevant data out of the JVM.
There is no direct API support in the Java language itself, but there are many tools you can exploit:
You can capture thread dumps using jstack or similar tools, save them to a file, then analyze them (or write a script/program to do so); this is the polling approach.
Use ASM or BCEL to modify the bytecode of your application; this is the push approach, but it is very hard to do.
Use AspectJ; with the load-time weaving agent, enabling it is just a command-line parameter.
Solution 3 is by far the easiest, cleanest and most versatile.
Printing, at runtime, every method called as a consequence of the execution of a given method is a simple tracing aspect, something similar to:
public aspect TraceCalls {
    pointcut aCall() : call(* *(..));
    pointcut inside() : cflow(execution(public * MyClass.myMethod(..)));

    before() : aCall() && inside() {
        System.out.println(thisJoinPoint);
    }
}
Obviously, you can access much more data, print it to a file, format it, etc.
(Please note that I wrote this code here, so it could be full of syntax errors)
Well, it's a good question in the first place. I highly doubt there is any such utility available on the market. However, there are ways to get around this, like using the debugger in one of your favorite IDEs such as Eclipse or NetBeans.
The right way to do this is with a static analyzer that looks at your code and determines the set of possible callers. This would tell you what can call a method under any (conservative) circumstance.
Our DMS Software Reengineering Toolkit with its Java Front End could be configured to do this. It has support for computing points-to analysis; method calls are essentially calls via class-method vectors.
As a poor man's substitute, you could use a profiling tool that captures the call tree.
This will only get you calls exercised by a particular program run, as opposed to all possible calling contexts.
Our Java Timing Profiler will produce a call tree.
I have a Java program whose main method (in the main class) expects command-line arguments. The program is also concurrent (uses threads and such).
I want to do a massive refactoring of the program. Before I start, I would like to create a test suite for the main method. I would like to test the main method with different command-line arguments, and I'll want to run these tests automatically after each refactoring step I make. How do I create a test that passes command-line arguments?
I cannot use JUnit because, as far as I know, it doesn't work well with concurrent programs. I'm also not sure whether you can pass command-line arguments with JUnit.
I'm using eclipse.
Take a look at multithreadedtc. http://code.google.com/p/multithreadedtc/
Consider using JMeter. With the JUnit sampler you can run concurrent JUnit tests easily and see the results. See this question for more details.
I'm not familiar with the various automation tools available specifically for multi-threading, so I won't comment on them. But one simple yet effective option is to log key events from the running program to a CSV file. You could log the final result (if it is a calculation-type program), or log every point in the program where some key state changes or an event occurs. Because it's a multi-threaded app, you have to pay attention when comparing the sequence of logged data: if you cannot guarantee the relative ordering of the key events you expect to see, compare the output as key-value results instead. Either way, the idea is to create test data files which you can use for automated comparison when back-testing.
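A bare-bones version of that idea in plain Java: call main(String[]) with chosen arguments and capture standard output so runs before and after refactoring can be diffed. Main and its output format here are stand-ins for the real program:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Sketch: drive a program's main(String[]) with chosen arguments and
// capture its stdout, so runs can be compared against saved test data.
public class MainHarness {
    // Stand-in for the real entry class under test.
    static class Main {
        public static void main(String[] args) {
            System.out.println("args:" + args.length);
        }
    }

    public static String run(String... args) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer));
        try {
            Main.main(args);
        } finally {
            System.setOut(original);   // always restore stdout
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        System.out.print(run("a", "b"));   // prints args:2
    }
}
```

Captured outputs for each argument set can then be written to files once and compared automatically after every refactoring step.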
Awaitility can also be useful to help you write deterministic unit tests. It allows you to wait until some state somewhere in your system is updated. For example:
await().untilCall( to(myService).myMethod(), greaterThan(3) );
or
await().atMost(5,SECONDS).until(fieldIn(myObject).ofType(int.class), equalTo(1));
It also has Scala and Groovy support.
await until { something() > 4 } // Scala example