I'm trying to implement an external DSL using a handcrafted compiler. I'm done with lexing and parsing; however, I'm currently lost when it comes to resolving symbols defined in separate files (e.g. for inheritance).
I've tried searching for this, but nothing comes up about how it is handled at the compiler level. I've stumbled upon object files, linkers and loaders, but upon further research they seem to play a role after compilation rather than during it.
Thank you to anyone who could help.
This is heavily dependent on the nature of your language.
If you're C-like, you have header files defining the shared symbols, and you #include the header every place it is used (recompiling the header as part of that file).
If you're Java-like, you have a standard naming convention and package/directory hierarchy to allow you to locate a symbol.
If you're JavaScript-like, you don't resolve any symbols at compile time; you just throw an error if a symbol is not defined when it is used. (For small scripting languages this is often the simplest answer.)
If the total amount of code in the DSL is small, another option is just to load and parse all of it and then do the symbol-resolving pass on the whole thing at once.
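If you take that last route, a minimal sketch of a two-pass resolver (all class and field names here are invented, not tied to any particular DSL) might look like this:

```java
import java.util.*;

// Hypothetical two-pass resolver: first collect every declaration from all
// parsed files, then resolve references against that global symbol table.
class SymbolResolver {

    // Placeholder AST node types for the DSL; names are illustrative only.
    record Declaration(String name, Object node) {}
    record Reference(String name) {}
    record SourceFile(List<Declaration> declarations, List<Reference> references) {}

    private final Map<String, Declaration> symbols = new HashMap<>();

    // Pass 1: register every declared symbol from every file.
    void collect(List<SourceFile> files) {
        for (SourceFile file : files) {
            for (Declaration decl : file.declarations()) {
                if (symbols.putIfAbsent(decl.name(), decl) != null) {
                    throw new IllegalStateException("Duplicate symbol: " + decl.name());
                }
            }
        }
    }

    // Pass 2: resolve references (e.g. a parent type named in an 'extends' clause).
    void resolve(List<SourceFile> files) {
        for (SourceFile file : files) {
            for (Reference ref : file.references()) {
                if (!symbols.containsKey(ref.name())) {
                    throw new IllegalStateException("Unresolved symbol: " + ref.name());
                }
            }
        }
    }
}
```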
I have spent a lot of time during the last few weeks dealing with PMML files that have bad syntax or logic errors. My current process involves looking at the PMML file and stepping through JPMML code until I can figure out what's wrong.
Common issues I have found:
Variable in PMML not defined anywhere
DerivedField If-Else statement falls through to a missing variable and does not use missingValueReplacement
Attempted multiplication of string values
invalidValueTreatment defined in the MiningSchema of a categorical variable that is not enumerated in the DataDictionary, which means the invalidValueTreatment would never be used
What debugging tools are available for PMML?
Any tools to help with Syntax or Logic debugging would be helpful.
There are probably no such tools available yet. My reasoning is that a debugger would be built on top of an evaluator. Since the JPMML-Evaluator library is the state-of-the-art evaluator, and it does not provide dedicated debugging tools, it's hard to see how some other tool could beat it in this area.
When debugging PMML, you can have two types of issues. First, there are "static errors" that relate to the structure of the PMML document, such as missing, invalid or misplaced XML elements and attributes. They can be discovered by performing XML validation against the PMML XSD file, or by using the JPMML-Model Visitor APIs. Second, there are "dynamic errors" that relate to the evaluation path of some data record. For example, as you just pointed out, it may happen that a categorical field is assigned a value for which there is no handler.
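For the "static error" category, a minimal sketch of XSD validation using the standard javax.xml.validation API (the schema and document file names are just placeholders) could look like this:

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class PmmlXsdCheck {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // Placeholder paths: point these at the PMML XSD for your schema version
        // and at the document you want to check.
        Schema schema = factory.newSchema(new File("pmml-4_2.xsd"));
        Validator validator = schema.newValidator();
        try {
            validator.validate(new StreamSource(new File("model.pmml")));
            System.out.println("Document is structurally valid.");
        } catch (SAXException e) {
            // The exception message usually includes the line/column of the offending element.
            System.out.println("Validation error: " + e.getMessage());
        }
    }
}
```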
The JPMML-Evaluator library should throw an exception whenever it detects an error condition. If you have SAX Locator information enabled, then the exception message provides the line number of the offending PMML content. Of course, the debugging work would be much easier if the JPMML-Evaluator library did proper logging.
You could also try analyzing the source code of your PMML producer application. Why is it producing such PMML in the first place?
In Android applications, resources are specified in XML documents, which are automatically built into the R class and are readily accessible within the source code in a strongly typed way.
Is there any way I could use a similar approach for a regular Java desktop application?
What I'd like to accomplish is both to remove strings from the code (as a separation of "layers", more or less) and to make it easy to add support for localization, by simply telling the program to choose the XML file corresponding to the desired language.
I've googled around a bit, but the things I'm looking for seem to be drowning in results about parsing or outputting XML, rather than tools that use XML to generate code.
Eclipse's message bundle implementation (used by plugins for example) integrates with the Externalize Strings feature and generates both a static class and a resource properties file for your strings:
http://www.eclipse.org/eclipse/platform-core/documents/3.1/message_bundles.html
For this integration to work, Eclipse needs to see org.eclipse.osgi.util.NLS on the class path. From memory, the dependencies of the libraries it ships in were a little tricky for the project I used this approach in, so I just got the source and keep it as a stand-alone class in my core module (see the comments for more on that).
It provides the type safety you're looking for and the IDE features save a lot of time. I've found no downsides to the approach so far.
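For reference, a typical NLS-backed messages class looks roughly like this (the bundle name and message key are made-up examples):

```java
import org.eclipse.osgi.util.NLS;

public class Messages extends NLS {
    // Resolves to a messages.properties file in the same package,
    // e.g. containing: Greeting_hello=Hello, {0}!
    private static final String BUNDLE_NAME = "com.example.app.messages";

    public static String Greeting_hello;

    static {
        // Binds each public static String field to the matching key in the bundle.
        NLS.initializeMessages(BUNDLE_NAME, Messages.class);
    }

    private Messages() {
    }
}
```

Callers then reference Messages.Greeting_hello directly (or pass it through NLS.bind for parameterized messages), which is where the type safety comes from.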
Edit: this is actually what ghostbust555 mentioned in the comments, but it's not clear from that article that this isn't limited to Eclipse plugins, and that you refer to your resources via static members of a messages class.
I haven't seen any mention of others using this approach with their own applications, but to me it makes complete sense given the IDE integration and type safety.
I'm not sure if this is what you mean, but check out internationalization: http://netbeans.org/kb/docs/java/gui-automatic-i18n.html
Are you looking for something that parses XML files and generates Java instances of similar "struct-like" objects, like JAXP and JAXB?
I came across ResGen which, given resource bundle XML files, generates Java files that can be used to access the resources in a type-safe way.
http://eigenbase.sourceforge.net/resgen/
I'm parsing Java source files and I need to extract type information to guess at signatures of called methods.
E.g., I have foo.x(bar) and I need to figure out the types of foo and bar.
I'm using a java parser that gives me a complete AST, but I'm running into problems with scoping. Is there a different parser I could use that resolves this?
This can't be resolved perfectly because of reflection, but I'm hoping a good parser can at least deal with scoping and casting issues.
Edit: I can't assume that other source files will be present, so I can't simply follow the method call to its source and read the signature from the method declaration.
Java's Pluggable Annotation Processing Framework (and its associated APIs) is designed to model Java code in the way you're talking about. You can invoke the Java compiler at runtime, giving you access to the model of the source using the APIs in the javax.lang.model packages. An introductory article is available here.
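For a taste of what that model exposes, here's a minimal sketch of an annotation processor that reports resolved method signatures as it sees them (the processor class name is made up):

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.util.ElementFilter;
import javax.tools.Diagnostic;

// Hypothetical processor that simply reports the resolved signatures of methods
// in every root element handed to it by the compiler.
@SupportedAnnotationTypes("*")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class SignatureReporter extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (Element root : roundEnv.getRootElements()) {
            for (ExecutableElement method : ElementFilter.methodsIn(root.getEnclosedElements())) {
                processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                        method.getSimpleName() + " : " + method.asType());
            }
        }
        return false; // do not claim any annotations
    }
}
```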
Getting the types of identifiers basically requires you to run a big part of a Java compiler yourself. In Java there is a long way from parsing to resolving types, so implementing this is a challenging task even for good programmers.
Perhaps your best way through this would be to take the Java compiler from OpenJDK, run the relevant compiler phases, and extract the types from that.
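If you go that route, a rough sketch using the standard compiler API (the file name is a placeholder, and the cast to JavacTask assumes you're running on javac/OpenJDK) might look like this:

```java
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;
import com.sun.source.tree.CompilationUnitTree;
import com.sun.source.util.JavacTask;
import com.sun.source.util.Trees;

public class ResolveTypes {
    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null);
        Iterable<? extends JavaFileObject> units = fm.getJavaFileObjects("Foo.java"); // placeholder
        JavacTask task = (JavacTask) compiler.getTask(null, fm, null, null, null, units);

        Iterable<? extends CompilationUnitTree> trees = task.parse();
        task.analyze(); // runs attribution: after this, names and expression types are resolved

        // Trees maps AST nodes back to the resolved javax.lang.model elements and types.
        Trees treeUtil = Trees.instance(task);
        for (CompilationUnitTree unit : trees) {
            System.out.println("Analyzed " + unit.getSourceFile().getName());
        }
    }
}
```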
What you need is all the name and type resolution machinery. As another poster observed, one way you can get that is to abuse the Java compiler.
But you likely have some goal in mind other than compiling java; once you have those names, you want to do something with them. The Java compiler is unlikely to help you here.
What you really want is a foundation for building a tool that processes the Java language, including name and type resolution, that will help you do the rest of your task.
Our DMS Software Reengineering Toolkit is generalized program analysis and transformation machinery. It parses code, builds ASTs, manages symbol tables, provides generic flow analysis mechanisms, and supports AST modification (or construction) both procedurally and in terms of surface syntax patterns, including regeneration of compilable text from the ASTs, comments and all.
DMS has a Java Front End that enables DMS to process Java, build Java ASTs, and do all that name and type resolution you want. Yes, that's a lot of machinery, equivalent to what the Java compiler has; go read your latest Java reference manual. You can build whatever custom tool you need on top of that foundation.
What you won't be able to do, as a practical matter, is full, accurate name and type resolution without the rest of the Java source files (or the corresponding class files), no matter how you tackle it. You might be able to produce some heuristic guess, but that's all it would be.
I'm looking for a way to automatically generate source code for new methods within an existing Java source code file, based on the fields defined within the class.
In essence, I'm looking to execute the following steps:
Read and parse SomeClass.java
Iterate through all fields defined in the source code
Add source code method someMethod()
Save SomeClass.java (ideally preserving the formatting of the existing code)
What tools and techniques are best suited to accomplish this?
EDIT
I don't want to generate code at runtime; I want to augment existing Java source code
What you want is a Program Transformation system.
Good ones have parsers for the language you care about, build ASTs representing the program for the parsed code, provide you with access to the AST for analysis and modification, and can regenerate source text from the AST. Your remark about "scanning the fields" is just a kind of traversal of the AST representing the program. For each interesting analysis result you produce, you want to make a change to the AST, perhaps somewhere else, but nonetheless in the AST.
And after all the changes are made, you want to regenerate text with comments (as originally entered, or as constructed in your new code).
There are several tools that do this specifically for Java.
Jackpot provides a parser, builds ASTs, and lets you code Java procedures to do what you want with the trees. Upside: easy conceptually. Downside: you write a lot more Java code to climb around/hack at trees than you'd expect. Jackpot only works with Java.
Stratego and TXL parse your code, build ASTs, and let you write "source-to-source" transformations (using the syntax of the target language, e.g. Java in this case) to express patterns and fixes. Additional good news: you can define any programming language you like as the target language to be processed, and both of these have Java definitions.
But they are weak on analysis: often you need symbol tables and data flow analysis to really make the analyses and changes you need. And they insist that everything is a rewrite rule, whether that helps you or not; this is a little like insisting you only need a hammer in your toolbox; after all, everything can be treated like a nail, right?
Our DMS Software Reengineering Toolkit allows the definition of an arbitrary target language (and has many predefined languages, including Java), includes all the source-to-source transformation capabilities of Stratego and TXL and the procedural capability of Jackpot,
and additionally provides symbol tables, control and data flow analysis information. The compiler guys taught us these things were necessary to build strong compilers (= "analysis + optimizations + refinement") and it is true of code generation systems too, for exactly the same reasons. Using this approach you can generate code and optimize it to the extent you have the knowledge to do so. One example, similar to your serialization ideas, is to generate fast XML readers and writers for specified XML DTDs; we've done that with DMS for Java and COBOL.
DMS has been used to read/modify/write many kinds of source files. A nice example that will make the ideas clear can be found in this technical paper, which shows how to modify code to insert instrumentation probes: Branch Coverage Made Easy.
A simpler but more complete example of defining an arbitrary language and transformations to apply to it can be found at How to transform Algebra using the same ideas.
Have a look at Java Emitter Templates. They allow you to create Java source files by using a markup language. It is similar to how you can use a scripting language to spit out HTML, except you spit out compilable source code. The syntax for JET is very similar to JSP and so isn't too tricky to pick up. However, this may be overkill for what you're trying to accomplish. Here are some resources if you decide to go down that path:
http://www.eclipse.org/articles/Article-JET/jet_tutorial1.html
http://www.ibm.com/developerworks/library/os-ecemf2
http://www.vogella.de/articles/EclipseJET/article.html
Modifying the same Java source file with auto-generated code is a maintenance nightmare. Consider generating a new class that extends your current class and adds the desired method. Use reflection to read the user-defined class and create Velocity templates for the auto-generated classes. Then for each user-defined class, generate its extending class (a sketch of this approach appears below). Integrate the code generation phase into your build lifecycle.
Or you may use 'bytecode enhancement' techniques to enhance the classes without having to modify the source code.
Updates:
Mixing in auto-generated code always poses the risk of someone modifying it in the future just to tweak some small behavior. It's only a matter of the next build before those changes are lost.
You will have to rely solely on comments at the top of the auto-generated source to prevent developers from doing so.
Version control: let's say you update the template of someMethod(); now the version of every source file will change, even though the source update is auto-generated, and you will see redundant history.
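A minimal sketch of the subclass-generation idea above, using reflection plus an inline Velocity template (the class name, template, and method contents are made up):

```java
import java.io.StringWriter;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

public class SubclassGenerator {

    // Hypothetical template: emits a subclass with a someMethod() that lists the fields.
    private static final String TEMPLATE =
            "public class ${className}Generated extends ${className} {\n"
          + "    public String someMethod() {\n"
          + "        return \"fields: ${fieldNames}\";\n"
          + "    }\n"
          + "}\n";

    public static String generate(Class<?> userClass) {
        // Reflect over the user-defined class to collect its field names.
        List<String> names = new ArrayList<>();
        for (Field field : userClass.getDeclaredFields()) {
            names.add(field.getName());
        }

        VelocityEngine engine = new VelocityEngine();
        engine.init();

        VelocityContext context = new VelocityContext();
        context.put("className", userClass.getSimpleName());
        context.put("fieldNames", String.join(", ", names));

        StringWriter out = new StringWriter();
        engine.evaluate(context, out, "subclass-template", TEMPLATE);
        return out.toString(); // write this to ${className}Generated.java in your build step
    }
}
```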
You can use cglib to generate code at runtime.
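For reference, a minimal cglib sketch (the class names are placeholders); note that cglib generates bytecode at runtime rather than source text, so it answers a slightly different need than editing .java files:

```java
import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;

public class CglibSketch {
    // Placeholder class to enhance.
    public static class SomeClass {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(SomeClass.class);
        enhancer.setCallback((MethodInterceptor) (obj, method, methodArgs, proxy) -> {
            // Intercept every call; delegate to the original implementation, then decorate.
            Object result = proxy.invokeSuper(obj, methodArgs);
            return result + " (intercepted)";
        });

        SomeClass proxy = (SomeClass) enhancer.create();
        System.out.println(proxy.greet()); // prints "hello (intercepted)"
    }
}
```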
Iterating through the fields and defining someMethod is a pretty vague problem statement, so it's hard to give you a very useful answer, but Eclipse's refactoring support provides some excellent tools. It'll give you constructors which initialize a selected set of the defined members, and it'll also define a toString method for you.
I don't know what other someMethod()'s you'd want to consider, but there's a start for you.
I'd be very wary of injecting generated code into files containing hand-written code. Hand-written code should be checked into revision control, but generated code should not be; the code generation should be done as part of the build process. You'd have to structure your build process so that for each file you make a temporary copy, inject the generated source code into it, and compile the result, without touching the original source file that the developers work on.
ANTLR is really a great tool that can be used very easily for transforming Java source code to Java source code.
We have a need to generate Java source code. We do this by modeling the abstract syntax tree and having a tree walker that generates the actual source code text. So far, all good.
Since my AST code is a bit old, it does not have support for annotations and generics. So I'm looking around for open projects to use for future projects with code generation needs. And this is where the actual issue comes in. We want to test that the generated code has the correct behavior.
This is where I got the idea to evaluate the AST directly instead of generating the Java source code, compiling it, and running tests against that code. An evaluator would speed up the unit tests, and one could evaluate smaller pieces of generated code, such as a single method, making the "units" more reasonable.
So far I have found the com.sun.codemodel project, which seems quite nice as a modern (supporting Java 5 and 6 features) AST-based code-generating solution.
Anyone know if there is another project that would allow me to evaluate pieces of AST directly (such as a single generated method)?
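For context, codemodel-style generation of the kind described above looks roughly like this (the package, method, and output directory names are made up):

```java
import java.io.File;
import com.sun.codemodel.JCodeModel;
import com.sun.codemodel.JDefinedClass;
import com.sun.codemodel.JExpr;
import com.sun.codemodel.JMethod;
import com.sun.codemodel.JMod;

public class CodeModelSketch {
    public static void main(String[] args) throws Exception {
        JCodeModel model = new JCodeModel();

        // Build the AST for a small class with one method...
        JDefinedClass generated = model._class("com.example.Generated");
        JMethod answer = generated.method(JMod.PUBLIC, model.INT, "answer");
        answer.body()._return(JExpr.lit(42));

        // ...and render it as source text into an output directory.
        File outputDir = new File("target/generated-sources");
        outputDir.mkdirs();
        model.build(outputDir);
    }
}
```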
To evaluate Java, you need all the semantic analysis that goes along with it ("what is the scope of this identifier? What type does it have?") as well as an interpreter.
To get that semantic analysis, you need more than just an AST: you need full name resolution (symbol table building) and type resolution (determination of expression types and validation that expressions are valid in the context in which they are found), as well as class lookup (which actual method does foo refer to?).
With that, you can consider building an interpreter by crawling over the trees in execution order. You'll also need to build a storage manager; you might not need a full garbage collector, but you'll need something. You'll also need an interpreter for .class files if you really want to run something, which means you need a parser (and name/type resolution for the class files, too).
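To make "crawling over the trees in execution order" concrete, here is a toy sketch of a tree-walking evaluator over an invented expression AST; real Java evaluation would additionally need environments, scopes, and the resolution machinery described above:

```java
// Toy expression AST and a tree-walking evaluator; the node types are invented
// purely to illustrate evaluating nodes in execution order.
interface Expr {
    int eval();
}

class Literal implements Expr {
    final int value;
    Literal(int value) { this.value = value; }
    public int eval() { return value; }
}

class Add implements Expr {
    final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
    public int eval() {
        // Evaluate operands in left-to-right order, as the language requires.
        return left.eval() + right.eval();
    }
}

class Demo {
    public static void main(String[] args) {
        Expr tree = new Add(new Literal(40), new Literal(2));
        System.out.println(tree.eval()); // prints 42
    }
}
```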
I don't know if Eclipse has all this (at least the storage manager part you can get for free :). I'd sort of expect it to, given that its original design was to support Java development, but I've been sorely disappointed by lots of tools over the years.
The DMS Software Reengineering Toolkit is a program analysis/transformation tool that handles many languages. It has a full Java front end including parsing, AST building, symbol table construction and name resolution, type resolution, and call graph construction (needed to resolve virtual function calls), and it has a .class file reader with name resolution to boot. So it would be a good foundation for building an interpreter.
DMS can construct arbitrary ASTs, too, and then generate source code from them, so it would handle the code generation end, too, just fine.
[The reason DMS exists is the "sorely disappointed" part].
I'm not sure if this is what you're looking for, but Eclipse's JDT project provides a very good view on the Java AST (including the Java 5 and 6 features). It has a series of utilities and tools for code viewing/rewriting (not necessarily generation). They're all licensed under the Eclipse Public License.
You can get more info at http://eclipse.org/jdt/
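As a small sketch of what the JDT view looks like (the JLS level constant depends on your JDT version, and resolving bindings would require additional classpath/environment setup):

```java
import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.ASTVisitor;
import org.eclipse.jdt.core.dom.CompilationUnit;
import org.eclipse.jdt.core.dom.MethodDeclaration;

public class JdtSketch {
    public static void main(String[] args) {
        String source = "class Foo { int bar() { return 42; } }"; // placeholder source

        ASTParser parser = ASTParser.newParser(AST.JLS8);
        parser.setKind(ASTParser.K_COMPILATION_UNIT);
        parser.setSource(source.toCharArray());

        CompilationUnit unit = (CompilationUnit) parser.createAST(null);
        unit.accept(new ASTVisitor() {
            @Override
            public boolean visit(MethodDeclaration node) {
                System.out.println("Found method: " + node.getName());
                return true;
            }
        });
    }
}
```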