Replacing / Swapping a Java class in Java agent instrumentation [closed] - java

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 1 year ago.
I've read this post, which does the bytecode instrumentation in a line-by-line fashion. That approach is clumsy and bug-prone. I wonder whether Javassist supports "replacing" or "swapping" a class with an instrumented class. I see the redefineClasses method, but I'm not sure it is meant for that purpose, and I can't find any examples of it.
I would appreciate it if anyone on SO could give me an example of using redefineClasses with Javassist.
My goal is to use Java instrumentation to extract some meaningful data from multiple Java classes and methods, much more than just printing start/end times as in those examples. That's why I think the "swapping a Java class" approach is more efficient during development.
What do you guys think and recommend? Thank you.

Questions not presenting any of your own code but asking others for complete sample code or other resources are likely to be closed as off-topic. At the time of writing this, your question already attracted 2 of 3 necessary close votes. Please remember what I told you in your other question about how to ask good questions and how an MCVE helps you do that.
Because you are new to Java instrumentation, I want to elaborate a little more on Johannes' correct comments: I recommend that you read not just the Baeldung article but also the related javadocs.
For example, the Java 8 API documentation for Instrumentation.redefineClasses clearly states the limitations when redefining classes:
The redefinition may change method bodies, the constant pool and attributes. The redefinition must not add, remove or rename fields or methods, change the signatures of methods, or change inheritance. These restrictions maybe be lifted in future versions.
Alas, the restrictions have not been lifted as of Java 17. The same method is described there as follows:
The supported class file changes are described in JVM TI RedefineClasses.
The document pointed to basically says the same as the Java 8 documentation, only in some more detail:
The redefinition may change method bodies, the constant pool and attributes (unless explicitly prohibited). The redefinition must not add, remove or rename fields or methods, change the signatures of methods, change modifiers, or change inheritance. The redefinition must not change the NestHost, NestMembers, Record, or PermittedSubclasses attributes. These restrictions may be lifted in future versions.
Besides, the very same restrictions apply to Instrumentation.retransformClasses; the difference is basically that you do not start from scratch there but use the existing class bytes as input, and you can chain multiple transformers in order to instrument your class incrementally. But even with redefinition, the baseline stays the original class, if it was loaded before.
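Within those restrictions, driving redefineClasses from Javassist could look roughly like the sketch below. This is not a drop-in agent: the target class com.example.TaskRunner and its run() method are hypothetical stand-ins, and you would still need the usual agent plumbing (an Agent-Class manifest attribute and the Javassist jar on the class path).

```java
import java.lang.instrument.ClassDefinition;
import java.lang.instrument.Instrumentation;

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

// Hypothetical agent: the target class and method names are placeholders.
public class SwapAgent {

    public static void agentmain(String args, Instrumentation inst) throws Exception {
        ClassPool pool = ClassPool.getDefault();

        // Build an instrumented version of the class with Javassist.
        CtClass ct = pool.get("com.example.TaskRunner");
        CtMethod method = ct.getDeclaredMethod("run");
        method.insertBefore("{ System.out.println(\"entering run()\"); }");
        method.insertAfter("{ System.out.println(\"leaving run()\"); }", true);

        byte[] newBytecode = ct.toBytecode();
        ct.detach(); // allow the CtClass to be garbage-collected

        // Swap the old definition for the new one. This is subject to the
        // restrictions quoted above: same fields, methods, and signatures.
        Class<?> target = Class.forName("com.example.TaskRunner");
        inst.redefineClasses(new ClassDefinition(target, newBytecode));
    }
}
```

Note that insertBefore/insertAfter only change method bodies, which is exactly what redefineClasses permits; adding fields or methods to the CtClass would make the redefinition fail.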

Related

Pros and Cons of Java Reflection? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I was looking at some code today and came across a piece of code using reflection to take a generic object and do different things with it based on its type. I have never seen anything like this before, and I am wondering: what are the pros and cons of using reflection in Java?
There are no inherent pros or cons of reflection in Java. It's a tool that you should use in specific situations. For example:
When you create a library which needs runtime manipulation of code.
When you have a compiled jar without source code and the author of the jar made a mistake and didn't expose a proper API.
So there is really no question of whether you should or should not use reflection; it's a matter of the situation. In 99.99% of cases, you should NOT use reflection if it is possible to do the job without it.
UPD
Couldn't you use it for everything though? Like if you were a really big jerk you could use it to invoke every method you call, so what is stopping you from just doing that?
Mostly slowness, unmaintainable code, loss of compile-time checks, and broken encapsulation.
using reflection to take a generic object and do different things with it based on the type
In general, this is usually a bad idea, for reasons of performance, clarity, and robustness.
It throws away the advantages of a static type system; if you pass in types that the reflection code doesn't handle then you will get runtime errors rather than compile-time errors. If one of the classes changes implementation (e.g. renaming a method) then this will also not be detected at compile time.
If these various types have something in common, then it is usually better to handle this using polymorphism: abstract out the commonality into an interface or abstract class; each subclass can then implement the specific behaviour it needs, without other code needing to poke into the internals using reflection.
If these various types don't have anything in common, then why are they being handled together?
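The polymorphic alternative described above can be sketched as follows; the Shape/Circle/Square names are illustrative, not from the question. Each subtype supplies its own behaviour, so no reflective or instanceof dispatch ladder is needed:

```java
// Sketch: replacing reflective type-dispatch with polymorphism.

interface Shape {
    double area(); // each subtype supplies its own behaviour
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // No reflection: the JVM dispatches to the right area() at runtime.
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());
        }
    }
}
```

Unlike the reflective version, passing a type that lacks area() here is a compile-time error, and renaming the method is caught by the compiler everywhere it is used.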

Why does Lucene's IndexWriter updateDocument method take only one Term in its parameters? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I have noticed that the updateDocument method takes only one Term in its parameters.
I find it strange because the deleteDocument can take multiple terms or even a query to select the document(s) to be deleted...
Why does updateDocument not let us specify more than one Term? Is there a technical reason behind it, or is it just that it hasn't been implemented yet?
Disclaimer: I did not write this code, nor do I know the exact reasons, so I can only guess.
First of all, update in Lucene has always meant a combination of delete + insert, whereas delete has always been a single operation. Yes, update is now atomic, but you still need to .commit() for the changes to take effect.
Secondly, I guess it's hard to design a clean API for updating multiple documents. For each document (which itself is a collection of fields) you would have to pass a collection of terms, so for multiple documents you would need a collection of collections (or a specially designed command object), yuck. And when in doubt, leave it out! What's wrong with asking a customer to use a loop? It's not that complex, really.
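The loop really is trivial. Here is a sketch of a multi-document update, assuming each Document carries a unique "id" field that identifies it in the index (the field name is my assumption, not part of Lucene's API):

```java
import java.io.IOException;
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

// Sketch: updating several documents is just a loop over updateDocument.
class BatchUpdater {
    static void updateAll(IndexWriter writer, List<Document> docs) throws IOException {
        for (Document doc : docs) {
            // delete-then-insert, matched by the single id term
            writer.updateDocument(new Term("id", doc.get("id")), doc);
        }
        writer.commit(); // changes only become visible after commit
    }
}
```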

Should I document the self-explanatory private methods? (Java) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I like properly documented code: it is a no-brainer for me to have properly documented public methods describing the contract, and the same applies to private or package-internal methods that explain the code's internals/implementation.
However, for non-public and non-protected methods, I am not sure whether I should:
adhere to all the formalities, such as descriptions of parameters, return values, and exceptions
document self-explanatory private methods like fireSomeEvent, where it is obvious at first sight what they do, as this only clutters the code
What is the standard approach to this?
Yes.
If anyone is ever going to look at your code, document it. It's worth the extra line or two: your code will appear more consistent and clear.
Even people merely using your code look at its source, even if it is documented; this helps them understand the library better. By adding documentation, you make your code much easier for clients to understand, too.
I personally document anything that might cause ambiguity later. I wouldn't document
def next = counter.incrementAndGet()
as it's self-explanatory. Anyone who thinks you should document methods like that has too much time on their hands.
Also, in private methods, I wouldn't personally worry about adhering to Javadoc standards. Just by writing some comments you're in my good books. I don't need @param or @return; those are for public APIs.
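As a sketch of that middle ground (all names invented): full Javadoc where the contract matters, a plain comment or nothing where the name already says it all:

```java
/** Sketch of the commenting style discussed above; names are invented. */
class OrderService {

    private long lastOrderId = 0;

    /**
     * Places an order and reserves stock.
     *
     * @param itemId   identifier of the item to order
     * @param quantity number of units, must be positive
     * @return the id of the created order
     * @throws IllegalArgumentException if quantity is not positive
     */
    public long placeOrder(String itemId, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        long orderId = nextOrderId();
        fireOrderPlacedEvent(orderId);
        return orderId;
    }

    // Self-explanatory private helper: no Javadoc ceremony needed.
    private long nextOrderId() {
        return ++lastOrderId;
    }

    // Notifies listeners that an order was placed.
    private void fireOrderPlacedEvent(long orderId) {
        // listener plumbing omitted in this sketch
    }
}
```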

What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, Interfaces instead of concrete classes.
This leads to the use of Factories quite often. Since you can't instantiate a Property or Model, you must have something else do it for you -- the Factory design pattern.
My question, then, is: what is the reasoning behind using this pattern, as opposed to a traditional class hierarchy, given the nature of the content the library aims to serve? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate those classes; I don't need to ask a Factory to give me one.
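For reference, the factory-centric style being asked about looks roughly like this in practice; the URIs and the string literal are placeholders of mine:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

// Sketch of Jena's factory-centric style; URIs are placeholders.
class JenaStyleDemo {
    public static void main(String[] args) {
        // Model, Resource and Property are interfaces: the factory decides
        // which implementation backs them (here, an in-memory model).
        Model model = ModelFactory.createDefaultModel();

        Resource tree = model.createResource("http://example.org/tree/1");
        Property title = model.createProperty("http://example.org/ns#title");
        tree.addProperty(title, "My first pearltree");

        model.write(System.out, "TURTLE");
    }
}
```

Swapping the in-memory model for a database-backed one changes only the factory call, not any code that works against the Model interface, which is the flexibility the design is buying.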
As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls.
My question comes from trying to figure out if I should adopt the Jena philosophy in my library which uses Jena, or if I should disregard it, pick my own design philosophy, and stick with it.

SLOC tool for java with man-years estimation [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 5 years ago.
I'm looking for a tool to count source lines of code for Java as well as giving an estimate of the number of man-years invested in the code. Since Java code tends to be more verbose than other languages, with a lot of boilerplate code (anemic beans) generated by the IDE, I want the tool's metric to take this into account.
If someone can just provide the formula to convert a source line count to man-years (for Java), that would also be fine.
This sounds like a really bad idea.
The best way to estimate the number of man-years of work on a piece of code is to look at who worked on it and for how long.
Trying to infer man-years from SLOC is likely to be highly inaccurate and misleading. For example:
At some point in the software lifecycle many lines of code can be added. In some periods of maintenance / refactoring code may be actually taken away.
Code that has had a lot of requirements changes and quick hacks is likely to have more SLOC than equivalent code that was cleanly designed and written in the first place.
The same functionality can be written with 100 lines or 1000 lines depending on the libraries / frameworks used.
Are you going to count SLOC in libraries too? What about the JVM? What about the underlying OS?
In short, any estimate of man years derived from SLOC is likely to be pretty meaningless.
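To make the crudeness of the metric concrete: a naive SLOC count is little more than this toy sketch (not a real tool), and everything it ignores is exactly what the points above are about.

```java
// Toy SLOC counter: counts non-blank lines that are not // comments.
// Real tools (and real codebases) are messier: block comments,
// strings containing "//", generated code, and so on.
public class TinySloc {

    static long countSloc(String source) {
        return source.lines()
                .map(String::trim)
                .filter(line -> !line.isEmpty())
                .filter(line -> !line.startsWith("//"))
                .count();
    }

    public static void main(String[] args) {
        String snippet = String.join("\n",
                "// a comment",
                "int x = 1;",
                "",
                "int y = 2;");
        System.out.println(countSloc(snippet)); // prints 2
    }
}
```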
Although you want the information for bad purposes, SLOC is a nice, easy, but not very useful metric. Make sure you read this older conversation first.
One of my most productive days was throwing away 1000 lines of code. (Kent Beck)
It is not going to be accurate, for various reasons. Some from my experience:
Code gets added, changed, or deleted: if you really want this, query your SCM for the change history and then map it to changed lines.
Architectural changes / introducing a library replacing your code: in our case this reduced the line count.
Coding is only part of the change: design discussions, client interactions, documentation, etc. will not be reflected in the code, even though I consider them development effort.
Finally, developers are of varying productivity (1:40, some have said): how are you going to map lines into developer time?
SLOC is a useful tool to say my code base is 'this large' or 'this small'.
Looks like http://www.dwheeler.com/sloccount/ is the best bet.
At the office I use ProjectCodeMeter to estimate man-years invested in source code. It's kind of a luxury tool at that price, but I did use the free trial version at home on occasion :)
