What are the subphases of the semantic analysis compiler phase? - java

I took an interest in finding out how a compiler really works. I looked through several books, and all of them agree that the compiler phases are roughly as follows (correct me if I'm wrong): lexical analysis, syntax analysis, semantic analysis, intermediate code, code optimization, code generation. The lexical and syntax phases look pretty clear and straightforward as methods (but this does not mean easy, of course). However, I'm still not able to find what the semantic phase really consists of. For one, I know that there should be some subphases like scope checking, declaration checking and type checking, but the question that has been bothering me is: are there other things that have to be done? Can you tell me what are the mandatory steps that have to be taken during this phase? I know this strongly depends on the programming language and the compiler implementation, but could you give me some examples concerning C/C++ and Java? And could you please point me to a book/page/article where I can read those things in depth. Thanks.
Edit:
The books I looked through were "Compilers: Principles, Techniques, and Tools" by Aho et al. and "Modern Compiler Design" by Grune and van Reeuwijk. I haven't been able to answer this question using them. If you find this question too broad, could you please give an answer considering a compiler implementation of your choice, for either C, C++ or Java.

There are typical "semantic analysis" phases that many compilers go through in one form or another. After lexing and parsing, the following actions typically occur in this order:
Name and type resolution. Determines lexical scopes, the identifiers declared in such scopes, the type information for those identifiers, and, for each non-declaration use of an identifier, the declaration to which it refers.
Control flow analysis. The construction of a control flow graph over the computations that are explicit in the code and/or implied by it (e.g., constructors).
Data flow analysis. Determines where variables receive new values and where those values are read by other parts of the program. (This often involves a local analysis done within procedures, possibly followed by one across the procedures.)
Also often done, as part of data flow analysis:
Points-to analysis. Determination, for each pointer at each location in the code, of which entities that pointer might reference.
Call graph. Construction of a call graph across the procedures, often taking into account indirect function pointers, whose estimated targets are determined during the points-to analysis.
As a practical matter, some of these need to be interleaved to produce better results.
Beyond this, there are many analyses used to support various optimizations and code generation passes. If you really want to know more, consult any decent compiler book.
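As a hedged illustration of the first item (all names hypothetical, not taken from any particular compiler), nested lexical scopes and name resolution are often represented roughly like this in Java:

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of name resolution with nested lexical scopes.
class Scope {
    private final Scope parent;                      // enclosing scope; null for the global scope
    private final Map<String, Symbol> symbols = new HashMap<>();

    Scope(Scope parent) { this.parent = parent; }

    void declare(String name, Symbol sym) {
        if (symbols.putIfAbsent(name, sym) != null)
            throw new IllegalStateException("duplicate declaration: " + name);
    }

    // Walk outward through enclosing scopes until the name is found.
    Symbol resolve(String name) {
        for (Scope s = this; s != null; s = s.parent) {
            Symbol sym = s.symbols.get(name);
            if (sym != null) return sym;
        }
        throw new IllegalStateException("undeclared identifier: " + name);
    }
}

class Symbol {
    final String name;
    final String type;   // a real compiler would use a structured Type object here
    Symbol(String name, String type) { this.name = name; this.type = type; }
}

A semantic analyzer pushes a new Scope when it enters a block and pops it when it leaves; each use of an identifier is then linked to the Symbol that resolve() returns.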

As already mentioned by templatetypedef, semantic analysis is language specific. For C++ it would, among other things, involve determining which template instantiations are required (the C++ language tends towards more and more semantic analysis), and for Java there would need to be some checked-exception analysis.
Even for C, the GNU C compiler can be configured to check the arguments given to printf-style format strings. I guess there are hundreds of semantic-analysis-related options for GCC to choose from. If you are doing a paper on the subject, you could spend an afternoon counting them :)
Besides mere availability of features, I find that semantic analysis is what differentiates the statically typed imperative object-oriented languages of today.

You can't necessarily divide it into sub-phases at all. There are a number of things that have to be done, but at least conceptually they are all done while walking the parse tree from top to bottom and back up again. What exactly they are and how exactly it all happens depends on the language, the statement being processed, the specific compiler writer, ...
You could start to make a list:
Build symbol table.
Find the declarations of variables referenced.
Check compatibility of variable datatypes.
Establish subexpression types.
...
You can see that already these must be somewhat intermingled in practice, rather than constitute separable sub-phases.
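As a hedged Java sketch of how these steps intermingle in a single walk (a toy expression language; all names hypothetical), establishing subexpression types and checking compatibility happen in the same visit:

// Self-contained micro type checker: establishes subexpression types
// bottom-up and checks compatibility in the same traversal.
abstract class Expr { abstract String typeOf(); }

class IntLit extends Expr {
    final int value;
    IntLit(int v) { value = v; }
    String typeOf() { return "int"; }
}

class StrLit extends Expr {
    final String value;
    StrLit(String v) { value = v; }
    String typeOf() { return "string"; }
}

class Add extends Expr {
    final Expr left, right;
    Add(Expr l, Expr r) { left = l; right = r; }
    String typeOf() {
        String lt = left.typeOf(), rt = right.typeOf();  // subexpression types first
        if (!lt.equals(rt))                              // then the compatibility check
            throw new IllegalStateException("cannot add " + lt + " and " + rt);
        return lt;                                       // result type of the whole expression
    }
}

// new Add(new IntLit(1), new StrLit("x")).typeOf()  ->  "cannot add int and string"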

Related

Higher-level, semantic search-and-replace in Java code from command-line

Command-line tools like grep, sed, awk, and perl allow one to carry out textual search-and-replace operations.
However, is there any tool that would allow me to carry out semantic search-and-replace operations in a Java codebase, from command-line?
The Eclipse IDE allows me, e.g., to easily rename a variable, a field, a method, or a class. But I would like to be able to do the same from command-line.
The rename operation above is just one example. I would further like to be able to select the replacee text with additional semantic constraints such as:
only the scopes of methods M1, M2 of classes C, D, and E;
only all variables or fields of class C;
all expressions in which a variable of some class occurs;
only the scope of the class definition of a variable;
only the scopes of all overridden versions of method M of class C;
etc.
Having selected the code using such arbitrary semantic constraints, I would like to be able to then carry out arbitrary transformations on it.
So, basically, I would need access to the symbol-table of the code.
Question:
Is there an existing tool for this type of work, or would I have to build one myself?
Even if I have to build one myself, do any tools or libraries exist that would at least provide me the symbol-table of Java code, on top of which I could add my own search-and-replace and other refactoring operations?
The only tool that I know can do this easily is the long-awaited Refaster. However, it is still impossible to use it outside of Google. See [the research paper](http://research.google.com/pubs/pub41876.html) and the status on using Refaster outside of Google.
I am the author of AutoRefactor, and I am very interested in implementing this feature as part of this project. Please follow up on the github issue if you would like to help.
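For the narrower sub-question of getting at the symbol table of Java code without building a whole front end: the JDK's own compiler exposes its attributed syntax trees through the com.sun.source API, and after analysis each identifier can be mapped back to its declaration. A minimal, hedged sketch (the class name ListBindings is hypothetical; error handling omitted):

import com.sun.source.tree.CompilationUnitTree;
import com.sun.source.tree.IdentifierTree;
import com.sun.source.util.JavacTask;
import com.sun.source.util.TreePath;
import com.sun.source.util.TreePathScanner;
import com.sun.source.util.Trees;
import javax.lang.model.element.Element;
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;
import java.util.Arrays;

public class ListBindings {
    public static void main(String[] args) throws Exception {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        StandardJavaFileManager fm = javac.getStandardFileManager(null, null, null);
        JavacTask task = (JavacTask) javac.getTask(null, fm, null, null, null,
                fm.getJavaFileObjectsFromStrings(Arrays.asList(args)));
        Iterable<? extends CompilationUnitTree> units = task.parse();
        task.analyze();                         // name/type resolution: builds the "symbol table"
        Trees trees = Trees.instance(task);
        for (CompilationUnitTree unit : units) {
            new TreePathScanner<Void, Void>() {
                @Override
                public Void visitIdentifier(IdentifierTree id, Void p) {
                    Element decl = trees.getElement(getCurrentPath()); // identifier -> declaration
                    if (decl != null)
                        System.out.println(id.getName() + " -> " + decl.getKind() + " " + decl);
                    return super.visitIdentifier(id, p);
                }
            }.scan(new TreePath(unit), null);
        }
    }
}

A rename would then amount to collecting every identifier whose resolved Element is the target declaration and rewriting those source positions. This only provides the symbol table, not the rewriting machinery itself.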
What you want is the ability to find code according to syntax, constrained by various semantic conditions, and then be able to replace the found code with new syntax.
Access to the symbol table (symbol type/scope/mentions in scope) is just one kind of semantic constraint. You'll probably want others, such as control flow sequencing (this happens after that) and data flow reachability (data produced here is consumed there). In fact, there is an unbounded number of semantic conditions you might consider important, depending on the properties of the language (does this function access data in parallel with that function?) or your application interests (is this matrix an upper triangular matrix?).
In general you can't have a tool that has all possible semantic conditions of interest off the shelf. That means you need to be able to express new semantic conditions when you discover the need for them.
The best you might hope for is a tool that
knows the language syntax
has some standard semantic properties built in (my preference is symbol tables, control and data flow analysis)
can express patterns on the source in terms of the source code
can constrain the patterns based on such semantic properties
can be extended with new semantic analyses to provide additional properties
There is a classic category of tools that do this, called source-to-source program transformation systems.
My company offers the DMS Software Reengineering Toolkit, which is one of these. DMS has been used to carry out production transformations at scale on a wide variety of languages (including OP's target: Java). DMS's rewrite rules are of the form:
rule <rule_name>(syntax_parameters): syntax_category =
    <match_pattern> -> <replacement_pattern>
    if <semantic_condition>;
You can see a lot more detail of what the pattern language and rewrite rules look like at DMS Rewrite Rules.
It is worth noting that the rewrite rules represent operations on trees. This means that while they might look like text string matches, they are not. Consequently, a rewrite rule matches in spite of any whitespace issues (and in DMS's case, even in spite of differences in number radix or character string escapes). This makes DMS pattern matches far more effective than a regex, and a lot easier to write, since you don't have to worry about these issues.
This Software Recommendations link shows how one can define rules with DMS and (as per OP's request) run them from the command line. This isn't as succinct as running sed, but then it is doing much more complex tasks.
DMS has a Java front end with symbol tables and control and data flow analysis. If one wants additional semantic analyses, one codes them in DMS's underlying programming language.

Can a language ever have compile-time checking but the characteristics of dynamic typing?

Upon reading the following:
A lot of people define static typing and dynamic typing with respect to the point at which the variable types are checked. Using this analogy, static typed languages are those in which type checking is done at compile-time, whereas dynamic typed languages are those in which type checking is done at run-time.

This analogy leads to the analogy we used above to define static and dynamic typing. I believe it is simpler to understand static and dynamic typing in terms of the need for the explicit declaration of variables, rather than as compile-time and run-time type checking.
Source
I was thinking that the two ways we define static and dynamic typing (compile-time checking and explicit type declaration) are a bit like apples and oranges. A characteristic of all statically typed languages (to my knowledge) is that reference variables have a defined type. Can there be a language that has the benefits of compile-time checking (like Java) but also the ability to have variables unbound to a specific type (like Python)?
Note: Not exactly type inference in a language like Java, because the variables are still assigned a type, just implicitly. This theoretical language wouldn't have reference types, so there would be no casting. I'm trying to avoid the use of "static typing" vs "dynamic typing" because of the confusion.
There could be, but should there be?
Imagine in hypothetical-pseudo-C++:
class Object
{
public:
    virtual Object invoke(const char *name, std::list<Object> args);
    virtual Object get_attr(const char *name);
    virtual const Object &set_attr(const char *name, const Object &src);
};
And that you have a language that arranges:
to make the Object class the root base class of all classes
syntactic sugar to turn blah.frabjugate() into blah.invoke("frabjugate") and
blah.x = 10 into blah.set_attr("x", 10)
Add to this something combining the attributes of boost::variant and boost::any, and you have a pretty good start. All the dynamism (both the good, and the runtime-bugs bad) of Python, with the eloquence and rigidity (yay!) of C++ or Java. With added run-time bloat, and the efficiency of hash-table lookups vs. call/jmp machine instructions.
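A rough Java rendering of the same idea (DynamicObject and its methods are hypothetical, not a library API):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: Python-style late binding on top of a statically typed language.
// Every attribute and method lives in a hash table, so each access pays
// for a lookup instead of a direct call/jmp.
class DynamicObject {
    private final Map<String, Object> attrs = new HashMap<>();
    private final Map<String, Function<Object[], Object>> methods = new HashMap<>();

    Object getAttr(String name) { return attrs.get(name); }
    void setAttr(String name, Object value) { attrs.put(name, value); }
    void def(String name, Function<Object[], Object> body) { methods.put(name, body); }

    Object invoke(String name, Object... args) {
        Function<Object[], Object> m = methods.get(name);   // hash lookup on every call
        if (m == null) throw new RuntimeException("no such method: " + name);
        return m.apply(args);
    }
}

// Usage: methods can be added or replaced between calls, as in Python.
//   DynamicObject blah = new DynamicObject();
//   blah.def("frabjugate", args -> "hello");
//   blah.invoke("frabjugate");        // late-bound call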
In languages like Python, when you call blah.do_it(), it potentially has to do multiple hash-table lookups of the string "do_it" to find out whether your instance blah or its class has a callable thing called "do_it", every time it is called. This is the most extreme late binding that could be imagined:
flarg.do_it()   # looks up do_it at call time
flarg.do_it()   # looks it up again; may call a different do_it()
You could have your hypothetical language give some control over when the binding occurs. C++-like standard methods are crudely statically bound to the apparent reference type, not the real instance type. C++ virtual methods are late-bound to the object instance type. Python-like attributes and methods are extremely late-bound to the current version of the object instance.
I think you could definitely program in a strongly statically typed language in a dynamic style, just as you could build an interpreter in a language like C++ or Java. Some syntax hooks could make it look a little more seamless. But maybe you could do the same in reverse: maybe a Python decorator that automatically checks argument types, or a MetaClass that does it at compile time? [no, I don't think this is possible...]
I think you should view it as a union of features, but you'd get both the best and the worst of both worlds...
Can there be a language that has the benefits of compile-time checking (like Java) but also the ability to have variables unbound to a specific type (like Python)?
Actually most languages have support for both, so yes. The difference is which form is preferred/easier and generally used. Java prefers static types but also supports dynamic casts and reflection.
This theoretical language wouldn't have reference types, so there would be no casting.
You also have to consider that languages need to perform reasonably well, so you have to consider how they will be implemented. You could have a universal super type, but this makes optimisation very hard, and your code will most likely either run slowly or use many more resources.
The more popular languages tend to make pragmatic implementation choices. They are not purely one type or another and are willing to borrow styles even if they don't handle them as cleanly as a "pure" language.
what exactly do they allow the compiler or programmer to do that dynamic types can't?
It is generally accepted that the quicker you find a bug, the cheaper it is to fix. When you first start programming, the cost of maintenance isn't high in your mind, but once you have more experience you will realise that a successful project costs far more to maintain than it did to develop, and that fixing long-standing bugs can be really costly.
Static languages have two advantages:
you pick up bugs sooner rather than later. The sooner the better. With dynamic languages you might never discover a bug if the code is never run.
the cost of maintenance is lower. Static languages make clearer the assumptions made when the code was first written, and are more likely to detect issues if you don't have enough test coverage (btw, you never have enough test coverage)
No, you cannot. The difference here boils down to early binding versus late binding. Early binding means matching everything up at the binary level upfront, fixing it in code. The result is rigid, type-safe and fast code. Late binding means there is some kind of runtime interpretation involved. This results in flexibility (potentially unsafe) at the cost of performance.
The two approaches are different on a technical level (compilation versus interpretation) and the programmer would have to choose which is desired when, which would defeat the benefit of having both in the first place.
In languages that use a (common) language runtime, however, you do get some of what you are asking for through reflection. But it is organized differently and is still type-safe. It is not the implicit kind of binding you refer to, and it requires a bit of work and awareness from the programmer.
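For instance, in Java (a small sketch; callByName is a hypothetical helper, not a library method):

import java.lang.reflect.Method;

class LateBinding {
    // The method is looked up by name at run time, but the call is still
    // checked by the runtime rather than silently misbehaving.
    static Object callByName(Object target, String methodName) throws Exception {
        Method m = target.getClass().getMethod(methodName);  // resolved at run time
        return m.invoke(target);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callByName("hello", "toUpperCase"));  // prints HELLO
    }
}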
As far as what is possible with static types that is impossible with dynamic types: nothing. They are both Turing complete.
The value of static types is finding bugs early. In Python, something as simple as a misspelled name isn't caught until you run the program, and even then only if the line of code with the misspelling is run.
class NuclearReactor():
    def turn_power_off(self):
        ...

    def shut_down_cleanly(self):
        self.turn_power_of()   # misspelled; nothing complains until this line actually runs

Why don't compilers use asserts to optimize? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
Consider the following pseudo-C++ code:
std::vector<int> v;
// ... fill the vector here and do stuff ...
assert(std::is_sorted(v.begin(), v.end()));
auto x = std::find(v.begin(), v.end(), elementToSearchFor);
find has linear runtime because it's called on a vector, which may be unsorted. But at that line in this specific program we know that either the program is incorrect (as in: it doesn't run to the end if the assertion fails), or the vector being searched is sorted, which would allow a binary-search find with O(log n). Optimizing it into a binary search should be done by a good compiler.
This is only the easiest example of such behavior I have found so far (more complex assertions may allow even more optimization).
Do some compilers do this? If yes, which ones? If not, why don't they?
Appendix: Some higher-level languages may easily do this (especially FP ones), so this is more about C/C++/Java/similar languages.
Rice's Theorem basically states that non-trivial properties of code cannot be computed in general.
The relationship between is_sorted being true and a faster search being possible instead of a linear one is a non-trivial property of the program after is_sorted is asserted.
You can arrange for explicit connections between is_sorted and the ability to use various faster algorithms. The way you communicate this information to the compiler in C++ is via the type system. Maybe something like this:
template<typename C>
struct container_is_sorted {
    C c;
    // forward a bunch of methods to `c`.
};
Then you'd invoke a container-based algorithm that would use either a linear search on most containers, or a sorted search on containers wrapped in container_is_sorted.
This is a bit awkward in C++. In a system where variables could carry different compiler-known type-like information at different points in the same stream of code (types that mutate under operations) this would be easier.
I.e., suppose types in C++ had a sequence of tags, like int{positive, even}, that you could attach to them, and that you could change the tags:
int x;
make_positive(x);
Operations on a type that did not actively preserve a tag would automatically discard it.
Then assert( {is sorted}, foo ) could attach the tag {is sorted} to foo. Later code could then consume foo and have that knowledge. If you inserted something into foo, it would lose the tag.
Such tags might be run-time (that has a cost, however, so it is unlikely in C++) or compile-time (in which case the tag-state of a given variable must be statically determined at a given location in the code).
In C++, due to the awkwardness of such stuff, we instead by habit simply note it in comments and/or use the full type system to tag things (rvalue vs lvalue references are an example that was folded into the language proper).
So the programmer is expected to know it is sorted, and invoke the proper algorithm given that they know it is sorted.
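In Java, the analogous way to "tell the type system" is to wrap the invariant in a type. A hedged sketch (SortedList is hypothetical, not a standard class): the sortedness is checked once at the boundary, and the type then licenses the faster algorithm statically.

import java.util.Collections;
import java.util.List;

// Sketch: encode "this list is sorted" in a type, so binary search is
// chosen statically instead of hoping a compiler exploits an assert.
final class SortedList<T extends Comparable<T>> {
    private final List<T> items;   // invariant: always sorted

    private SortedList(List<T> items) { this.items = items; }

    static <T extends Comparable<T>> SortedList<T> fromSorted(List<T> items) {
        for (int i = 1; i < items.size(); i++)             // checked once, at the boundary
            if (items.get(i - 1).compareTo(items.get(i)) > 0)
                throw new IllegalArgumentException("not sorted");
        return new SortedList<>(items);
    }

    boolean contains(T x) {
        return Collections.binarySearch(items, x) >= 0;    // O(log n), justified by the invariant
    }
}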
Well, there are two parts to the answer.
First, let's look at assert:
7.2 Diagnostics <assert.h>
1 The header defines the assert and static_assert macros and refers to another macro, NDEBUG, which is not defined by <assert.h>. If NDEBUG is defined as a macro name at the point in the source file where <assert.h> is included, the assert macro is defined simply as
#define assert(ignore) ((void)0)
The assert macro is redefined according to the current state of NDEBUG each time that <assert.h> is included.
2 The assert macro shall be implemented as a macro, not as an actual function. If the macro definition is suppressed in order to access an actual function, the behavior is undefined.
Thus, there is nothing left in release-mode to give the compiler any hint that some condition can be assumed to hold.
Still, there is nothing stopping you from redefining assert yourself in release mode with an implementation-defined __assume (take a look at __builtin_unreachable() in clang/gcc).
Let's assume you have done so. Now, the condition tested could be really complicated and expensive. Thus, you really want to annotate it so it does not ever result in any run-time work. Not sure how to do that.
Let's grant that your compiler even allows that, for arbitrary expressions.
The next hurdle is recognizing what the expression actually tests, and how that relates to the code as written and any potentially faster, but under the given assumption equivalent, code.
This last step results in an immense explosion of compiler complexity, by either having to create an explicit list of all those patterns to test, or having to build a hugely complicated automatic analyzer.
That's no fun, and just about as complicated as building SkyNET.
Also, you really do not want to use an asymptotically faster algorithm on a data set which is too small for asymptotic time to matter. That would be a pessimization, and you would just about need precognition to avoid such cases.
Assertions are (usually) compiled out in the final code. Meaning, among other things, that the code could (silently) fail (by retrieving the wrong value) due to such an optimization, if the assertion was not satisfied.
If the programmer (who put the assertion there) knew that the vector was sorted, why didn't he use a different search algorithm? What's the point in having the compiler second-guess the programmer in this way?
How does the compiler know which search algorithm to substitute for which, given that they all are library routines, not a part of the language's semantics?
You said "the compiler". But compilers are not there for the purpose of writing better algorithms for you. They are there to compile what you have written.
What you might have asked is whether the library function std::find should be implemented to detect whether it can use an algorithm other than linear search. In reality it might be possible if the user has passed in std::set iterators, or even std::unordered_set ones, and the STL implementer knows the details of those iterators and can make use of them, but not in general and not for vector.
assert itself only applies in debug mode, and optimisations are normally needed for release mode. Also, a failed assert causes the program to abort, not a switch to a different library routine.
Essentially, there are collections provided for faster lookup, and it is up to the programmer to choose them; it is not for the library writer to second-guess what the programmer really wanted to do. (And in my opinion even less so for the compiler.)
In the narrow sense of your question, the answer is: they do when they can, but mostly they can't, because the language isn't designed for it and assert expressions are too complicated.
If assert() is implemented as a macro (as it is in C++), and it has not been disabled (by setting NDEBUG in C++), and the expression can be evaluated at compile time (or its data can be traced), then the compiler will apply its usual optimisations. That doesn't happen often.
In most cases (and certainly in the example you gave) the relationship between the assert() and the desired optimisation is far beyond what a compiler can do without assistance from the language. Given the very low level of meta-programming capability in C++ (and Java) the ability to do this is quite limited.
In the wider sense, I think what you're really asking for is a language in which the programmer can make assertions about the intention of the code, from which the compiler can choose between different translations (and algorithms). There have been experimental languages attempting to do that, and Eiffel had some features in that direction, but I'm not aware of any mainstream compiled languages that could do it.
Optimizing it into a binary search should be done by a good compiler.
No! A linear search results in a much more predictable branch. If the array is short enough, linear search is the right thing to do.
Apart from that, even if the compiler wanted to, the list of ideas and notions it would have to know about would be immense and it would have to do nontrivial logic on them. This would get very slow. Compilers are engineered to run fast and spit out decent code.
You might spend some time playing with formal verification tools whose job is to figure out everything they can about the code they're fed in, which asserts can trip, and so forth. They're often built without the same speed requirements compilers have and consequently they're much better at figuring things out about programs. You'll probably find that reasoning rigorously about code is rather harder than it looks at first sight.

Prefixing variable names to indicate their respective scope or origin?

In the companies I've been working at, I've seen a lot of use of prefixes to indicate the scope or the origin of variables, for example m for class members, i for methods' internal (local) variables, and a (or p) for method parameters:
public class User {
    private String mUserName;

    public void setUserName(final String aUserName) {
        final String iUserName = "Mr " + aUserName;
        mUserName = iUserName;
    }
}
What do you think about it? Is it recommended (or precisely not)? I found it quite ugly at first, but the more I use it, the more I find it quite convenient, when working on big methods for example.
Please note that I'm not talking about the Hungarian notation, where prefixes indicate the type rather than the scope.
I've also worked in shops that had rigid prefix-notation requirements, but after a while this became a "smell" that the code had grown out of control, with global variables leaking from everywhere, indicating poor code review.
Java's "this." notation is the prefered way to reference a field, over a local. The use of "m" prefix for variables was popularized by that "Micro.." company as a branding gimmick (they even said "don't use that because we do").
The general rule I follow is to name the variable according to what it is used to store. The variable name is simply an alias. If it stores a user name, then userName is valid. If it is a list of user names, then userNames or userNameList is valid. However, I avoid including the "type" in the variable-name now, because the type changes quite often (shouldn't a collection of user names be a set, in practice? and so on...)
At the end of the day, if the variable name is useful to you to remember what the code was doing down the road, it is probably a good idea. Maintainability and Readability trump "perceived" efficiency and terse syntax, especially because modern compilers are rewriting your code according to macro usage patterns.
I hope this helps somewhat, and am happy to supply more details of any claims herein.
ps. I highly recommend the Elements of Java Style for these types of questions. I used to work with the authors and they are geniuses when it comes to style!
Note: yours is a very opinion-based question (generally frowned on in StackOverflow these days), but I still think it's a worthwhile topic.
So here's my perspective, for what it's worth:
Personally, I think an indicator of scope in variable names can be helpful, both when writing and reading code. Some examples:
If I am reading a class method and I don't see any "m_XXX" being used, I can conclude "this function might as well be static — it doesn't use instance data." This can be done with a quick scan of variables, if the names have that information.
Any time I see "g_XXX" (global), I can start being worried, and pay closer attention (: Especially writing to a global is a big red flag, and especially especially if there is any concurrency/threading involved.
Speaking of concurrency, there is a pretty clear ordering of "safeness" for mutable data: locals are okay, members are dangerous, globals are very dangerous. So, when thinking about such code, having variable scope in mind is important. For this reason, in C/C++ I think having a prefix for function-static variables is useful, too (they're essentially "global" across invocations of that function). More an indication of lifetime than scope, in that case.
It can help junior developers think about the above issues more actively.
The popularity of this convention varies by language. I see it in C++ and C most often. Java somewhat frequently. Not very much in Python, Perl, Bash or other "scripting" languages. I wonder if there is some correlation between "high performance" code and benefit from such a scheme. Maybe just historical happenstance, though. Also, some languages have syntax that already includes some of this info (such as Python's self.xxx).
I say disregard any arguments along the lines of "oh, Microsoft invented that for XYZ, ignore it" or "it looks clunky." I don't care who invented it or why or what it looks like, as long as it's useful (:
Side note: Some IDEs can give you scope information (by hovering your mouse, doing special highlighting, or otherwise), and I can understand that people using such systems find putting that info in the variable names redundant. If your whole team uses a standard environment like that, then great; maybe you don't need a naming scheme. Usually there is some variation across people though, or maybe your code review and diff tools don't offer similar features, so there are still often cases where putting the info inside the text itself is useful.
In an ideal world, we would have only small functions that don't use lots of variables, and the problem such naming prefixes try to solve would not exist (or be small enough to not warrant "corrupting" all your code with such a scheme just to improve some corner cases).
But we do not live in an ideal world.
Small functions are great, but sometimes that is not practical. Your algorithm may have inherent complexity that cannot be expressed succinctly in your language, or you may have other constraints (such as performance or available development time) that require you to write "ugly" code. For the above-mentioned reasons, a naming scheme can help in such cases, and others.
Having a variable naming convention works great for teams, and IMO must be used for languages that are not strongly typed, e.g. JScript.
In JScript the only type is a ‘var’, which is basically evaluated at run-time. It becomes more important in JScript to decorate variables with the type of the data that is expected to be in them. Use consistent prefixes that your team agrees on, e.g. s or str or txt for string, i/j/k/l/m/n for integers (like FORTRAN :-) ), q for double, obj or o for non-primitive data, etc., basically using common sense. Do not use variable names without any prefix unless the variable name clearly indicates the data type.
e.g.
The variable name “answer” is a bad name, since an answer could be text or a number, so it must be strAnswer or sAnswer, jAnswer, qAnswer, etc. But “messageText” or "msgTxt" is good enough, because it is clear that the content is some text. Naming a variable "dataResponse" or "context", however, is confusing.
Sometimes one needs to fix or debug something on the server where the only editors are vi or notepad, or in the worst cases nano/sed, with no contextual help from the editor; having a coding convention can be really helpful there.
Reading the code is also faster if the convention is followed. It is like having the prefix Mr. or Ms. to determine the gender: Mr. Pat or Ms. Pat, when Pat by itself does not tell you whether it is Patrick or Patricia...

Java Program Specialization - What is it? I don't understand it

I'm reading about program specialization, specifically for Java, and I don't think I quite understand it, to be honest. So far, what I understand is that it is a method for optimizing the efficiency of programs by constraining their parameters or inputs? How is that actually done? Can someone maybe explain to me how it helps, and maybe give an example of what it actually does and how it's done?
Thanks
I have been reading:
Program Specialization - java
Program specialization is the process of specializing a program when you know in advance what the arguments are going to be.
One example: if you have a test, and you know that with your arguments you're never going to enter the block, you can eliminate the test.
You create a specialized version of the program for a certain kind of input.
Basically, it helps to get rid of computation that is useless given your input. However, with modern architectures and compilers (at least in C), you're not going to win a lot in terms of performance.
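A hedged Java sketch of the test-elimination example above (all names hypothetical):

class Specialize {
    // General version: the flag is tested on every call.
    static int step(boolean debug, int x) {
        if (debug) System.out.println("x = " + x);
        return x + 1;
    }

    // Specialized for debug == false: the test and the dead branch are
    // gone, because that argument is known in advance.
    static int stepNoDebug(int x) {
        return x + 1;
    }
}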
From the same authors, I would recommend the Tempo work.
EDIT
From the TOPLAS paper:
Program specialization is a program transformation technique that optimizes a program fragment with respect to information about a context in which it is used, by generating an implementation dedicated to this usage context. One approach to automatic program specialization is partial evaluation, which performs aggressive inter-procedural constant propagation of values of all data types, and performs constant folding and control-flow simplifications based on this information [Jones et al. 1993]. Partial evaluation thus adapts a program to known (static) information about its execution context, as supplied by the user (the programmer). Only the program parts controlled by unknown (dynamic) data are reconstructed. Partial evaluation has been extensively investigated for functional languages [Bondorf 1990; Consel 1993], logic languages [Lloyd and Shepherdson 1991], and imperative languages [Andersen 1994; Baier et al. 1994; Consel et al. 1996].
Interesting.
It's not a very common term, at least I haven't come across it before.
I don't have time to read the whole paper, but it seems to refer to the potential to optimise a program depending on the context in which it will be run. An example in the paper shows an abstract "power" operation being optimised into a hard-coded "cube" operation. These optimisations can be done automatically, or may require programmer "hints".
It's probably worth pointing out that specialization isn't specific to Java, although the paper you link to describes "JSpec", a Java code specializer.
It looks like Partial Evaluation applied to Java.
The idea is this: if you have a general function F(A,B) taking two parameters A and B, and (just suppose) every time it is called, A is always the same, then you could transform F(A,B) into a new function FA(B) that only takes one parameter, B. This function should be faster because it does not have to process the information in A; it already "knows" it. It can also be smaller, for the same reason.
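A hedged Java sketch of this, mirroring the power/cube example mentioned above (names hypothetical): here F(A,B) is power(exponent, base) with A = exponent, and specializing on A = 3 yields FA = cube.

class Power {
    // General F(A,B): works for any exponent A.
    static int power(int exponent, int base) {
        int result = 1;
        for (int i = 0; i < exponent; i++) result *= base;
        return result;
    }

    // FA(B), specialized for A == 3: the loop is unrolled and the
    // exponent parameter disappears entirely.
    static int cube(int base) {
        return base * base * base;
    }
}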
This is closely related to code generation.
In code generation, you write a code generator G to take input A and write the small, fast specialized function FA. G(A) -> FA.
In specialization, you need three things, the general program F, the specializer S, and the input A: S(F,A) -> FA.
I think it's a case of divide-and-conquer.
In code generation, you only have to write G(A), which is simple because it only has to consider all As, while the generated program considers all the Bs.
In partial evaluation, you have to get an S from somewhere, and you have to write F(A,B), which is more difficult because it has to consider the cross product of all possible As and Bs.
From personal experience: a program F(A,B) had to be written to bridge real-time changes from an older hierarchical database to a newer relational one. A was the meta-description of how to map the old database to the new one, in the form of another database. B was the changes being made to the original database, and F(A,B) computed the corresponding changes to the newer database. Since A changed at low frequency (weekly), F(A,B) did not have to be written. Instead, a generator G(A) was written (in C) to generate FA(B) (also in C). The time saved was roughly an order of magnitude of development time, and two orders of magnitude of run time.
