In my current project, I am using the Xtext editor for writing my DSL specifications (i.e., voc.mydsl, arch.mydsl, and network.mydsl). I like the Xtext editor because of its code completion and other features.
However, I have a separate Java program. This program takes text files (i.e., voc.txt, arch.txt, network.txt) as input, parses these files using an ANTLR parser, and generates code using StringTemplate files.
Now, my problem is that I currently have to follow these steps manually:
(1) I write the DSL specifications in the Xtext editor (voc.mydsl, arch.mydsl, and network.mydsl).
(2) I copy and paste these specifications into three text files (i.e., voc.txt, arch.txt, network.txt).
(3) Finally, I run the Java program to parse these .txt files and generate code.
Is there any way I can automate all three steps above (so they are performed in a single click)? Let me know if you need any details.
You could write a "special" generator for your DSL. Xtext will call this generator whenever you edit and save a *.mydsl file. What you actually do inside this generator is of no interest to Xtext. So your MydslGenerator.xtend could look like this:
// wherever Xtext generated the empty version of this file
package mydsl.xtext.generator

import com.google.inject.Singleton
import org.eclipse.emf.ecore.resource.Resource
import org.eclipse.xtext.generator.IFileSystemAccess
import org.eclipse.xtext.generator.IGenerator
import org.eclipse.xtext.nodemodel.util.NodeModelUtils

@Singleton
class MydslGenerator implements IGenerator {

    override void doGenerate(Resource resource, IFileSystemAccess fsa) {
        // calculate the new file name (escape the dot so it is not treated as a regex wildcard)
        val newFilename = resource.URI.lastSegment.replaceAll("\\.mydsl$", ".txt")
        // get the text representation of the parsed model
        val textContent = resource.contents.map[NodeModelUtils::getNode(it).text].join
        // write the text content to the new file
        fsa.generateFile(newFilename, textContent)
        // TODO: call the ANTLR parser on the new file here
    }
}
In the last step you can call your "other" program either by calling its main method directly from Eclipse or by invoking a new JVM. The latter is only advisable if the other generator runs quickly, because it is called whenever you save a *.mydsl file. The former is only advisable when the other program has no memory leaks and not too many jar dependencies.
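If you take the direct route, the call goes where the TODO comment sits in the generator above. Here is a hedged sketch of both options in plain Java; OtherGeneratorMain, the jar name, and the .txt file name are hypothetical placeholders for your existing program:

public class OtherGeneratorLauncher {

    // Option 1: call the other program's main method directly in the same JVM
    public static void runInProcess(String txtFile) {
        OtherGeneratorMain.main(new String[] { txtFile });
    }

    // Option 2: spawn a separate JVM; only advisable if the program finishes quickly,
    // since the generator runs on every save of a *.mydsl file
    public static void runInNewJvm(String txtFile) throws Exception {
        Process process = new ProcessBuilder("java", "-jar", "other-generator.jar", txtFile)
                .inheritIO()
                .start();
        process.waitFor();
    }
}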
We need Java code that automatically converts CSV files into PBIX files, so they can be opened and worked on further in Power BI Desktop. Now, I know Power BI offers a super cool feature that converts CSV files and many other formats into PBIX manually. However, we need a function that automatically converts our reports directly into PBIX, so that no intermediate files need to be created and stored anywhere.
We have already been able to develop a function with three parameters: the first corresponds to the selected report from our database; the second corresponds to the directory in which the converted report should be generated; and the third is the converted output file itself. The first two parameters work well, and the code is able to generate a copy of any report we select into any directory we select. However, it can generate CSV files only. Any other format ends up the same size as the CSV and cannot be opened.
This is what we've tried so far for the conversion part of the code:
Util.writeFile("C:\\" + "test.csv", byteString);
The above piece of code works just fine; however, CSV is not what we want, since the original reports are already in CSV format anyway.
Util.writeFile("C:\\" + "test.pbix", byteString);
Util.writeFile("C:\\" + "test.pdf", byteString);
Util.writeFile("C:\\" + "test.xlsx", byteString);
Each of the three lines above generates one file in the indicated format; however, each generated file is only as large as its corresponding CSV (it should be much larger) and therefore cannot be opened.
File file = new File("C:\\" + "test1.csv");
File file2 = new File("C:\\" + "test1.pbix");
file.renameTo(file2);
The above piece of code does not generate any file at all, but I thought it was worth mentioning, as it doesn't throw any exception either.
P.S. We would also be interested in Java code that converts CSV into the format of any other BI reporting tool besides Power BI, such as Tableau, BIRT, Knowage, etc.
P.S. 2: The first piece of code uses a class (sailpoint.tools.Util) that is apparently only available to those who have access to SailPoint.
I'm trying to create an IntelliJ plugin that iterates over all files in the project folder, parses all the .java files, and then makes some changes in them. The problem is that, after reading the documentation, I don't have a clear idea how to iterate over the files of the whole project folder; I think I may need PSI files, but I am not sure. Does anyone know how to accomplish this?
To iterate over all files in the project content, you can use ProjectFileIndex.SERVICE.getInstance(project).iterateContent.
Then you can get PSI files from them (PsiManager#findFile), check if they're Java (instanceof PsiJavaFile) and do whatever you like.
If you don't need PSI, you can just check the file type (VirtualFile#getFileType == JavaFileType.INSTANCE) and perform the modifications via the document (FileDocumentManager#getDocument(file)) or the VFS (LoadTextUtil#loadText, VfsUtil#saveText).
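Here is a minimal sketch of how those calls fit together, assuming you already have the Project at hand (for example inside an action); the class and method names are made up:

import com.intellij.openapi.project.Project;
import com.intellij.openapi.roots.ProjectFileIndex;
import com.intellij.psi.PsiFile;
import com.intellij.psi.PsiJavaFile;
import com.intellij.psi.PsiManager;

public class JavaFileIterator {

    // Visits every file in the project content and hands the Java ones to you.
    public static void processJavaFiles(Project project) {
        PsiManager psiManager = PsiManager.getInstance(project);
        ProjectFileIndex.SERVICE.getInstance(project).iterateContent(virtualFile -> {
            PsiFile psiFile = psiManager.findFile(virtualFile);
            if (psiFile instanceof PsiJavaFile) {
                PsiJavaFile javaFile = (PsiJavaFile) psiFile;
                // make your changes to javaFile here (PSI changes need a write action)
            }
            return true; // keep iterating
        });
    }
}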
A possible way is to use AllClassesGetter, like this:
import com.intellij.codeInsight.completion.AllClassesGetter;
import com.intellij.codeInsight.completion.PlainPrefixMatcher;
import com.intellij.psi.PsiClass;
import com.intellij.psi.search.GlobalSearchScope;
import com.intellij.util.Processor;

Processor<PsiClass> processor = new Processor<PsiClass>() {
    @Override
    public boolean process(PsiClass psiClass) {
        // do your actual work here
        return true;
    }
};

AllClassesGetter.processJavaClasses(
        new PlainPrefixMatcher(""),
        project,
        GlobalSearchScope.projectScope(project),
        processor
);
processJavaClasses() will look for classes matching a given prefix in a given scope. By using an empty prefix and GlobalSearchScope.projectScope(), you should be able to iterate over all classes declared in your project and process them in the processor. Note that the processor handles instances of PsiClass, which means you won't have to parse files manually. To modify classes, you just have to change the tree represented by these PsiClass instances.
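For the modification part, here is a hedged sketch of one possible change (marking a class final); any other edit to the PSI tree follows the same pattern of wrapping the change in a write command:

import com.intellij.openapi.command.WriteCommandAction;
import com.intellij.openapi.project.Project;
import com.intellij.psi.PsiClass;
import com.intellij.psi.PsiModifier;

public class ClassModifier {

    // Example modification: mark the given class final.
    public static void makeFinal(Project project, PsiClass psiClass) {
        WriteCommandAction.runWriteCommandAction(project, () -> {
            if (psiClass.getModifierList() != null) {
                psiClass.getModifierList().setModifierProperty(PsiModifier.FINAL, true);
            }
        });
    }
}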
I'm trying to populate a Word content control with XML data using docx4j (version 3.2.1). I'm evaluating this in order to use it for invoice generation. The documents we want to generate are not very complicated so this looks like a good approach to me.
I have created the content control through Word 2010 dev tools. This is how I try to inject the XML into the docx (taken from this example):
WordprocessingMLPackage wordMLPackage = Docx4J.load(new File(input_DOCX));
FileInputStream xmlStream = new FileInputStream(new File(input_XML));
Docx4J.bind(wordMLPackage, xmlStream, Docx4J.FLAG_BIND_INSERT_XML & Docx4J.FLAG_BIND_BIND_XML);
I get the following exception:
org.docx4j.openpackaging.exceptions.Docx4JException: Couldn't find CustomXmlDataStoragePart! exiting..
at org.docx4j.Docx4J.bind(Docx4J.java:300)
at org.docx4j.Docx4J.bind(Docx4J.java:271)
How can I add the CustomXmlDataStoragePart with docx4j, if it doesn't exist yet? Or should/can I do this in Word directly?
Note: I decided to prepare templates in Word directly, because later on these templates must be edited by non-technical users and I don't want to burden them with extra tools, if possible.
You say you "created the content control through Word 2010 dev tools". Unless you mean the Content Control Toolkit, you need to use that or, better, one of the OpenDoPE Word add-ins (but not both).
These tools add a custom XML part to the docx and allow you to associate it with your content controls via XPath data bindings.
Then, when you invoke Docx4J.bind at runtime, docx4j finds that existing custom XML part and replaces it with the XML file you provide, which contains your runtime data.
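Once the custom XML part is in the template, the full bind-and-save flow looks roughly like this (a sketch with placeholder file names; the bind flags are bit values, so they are combined with |):

import java.io.File;
import java.io.FileInputStream;

import org.docx4j.Docx4J;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;

public class InvoiceBinder {

    public static void main(String[] args) throws Exception {
        // Load the template that already contains the custom XML part
        WordprocessingMLPackage wordMLPackage = Docx4J.load(new File("invoice-template.docx"));
        // The runtime data to inject
        FileInputStream xmlStream = new FileInputStream(new File("invoice-data.xml"));
        // Insert the XML into the custom XML part and apply the bindings
        Docx4J.bind(wordMLPackage, xmlStream,
                Docx4J.FLAG_BIND_INSERT_XML | Docx4J.FLAG_BIND_BIND_XML);
        // Save the populated document
        Docx4J.save(wordMLPackage, new File("invoice-output.docx"), Docx4J.FLAG_NONE);
    }
}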
I'm using the template engine StringTemplate for some templates (obviously).
What I want is to be able to store the templates I have in separate files. Of course, I can do that with simple .txt files and read them into a String, which looks a bit like this:
ST template = new ST(readTemplateFromFile("template.txt"));

private String readTemplateFromFile(String templateFile) {
    // read template from file
}
But what I was wondering is whether there's functionality in the StringTemplate engine to do that automatically, so that I don't have to write code that already exists.
I've read something about group files, but I don't quite understand them. Are those like template files? Or am I completely missing something?
Yes, there is functionality available that can be used directly without providing your own file loading code.
From the ST JavaDoc:
To use templates, you create one (usually via STGroup) and then inject attributes using add(java.lang.String, java.lang.Object). To render its attacks, use render().
To follow that advice the following code can be used.
First, create a file called exampleTemplate.stg and place it on your classpath.
templateExample(param) ::= <<
This is a template with the following param: (<param>)
>>
Then, render the template by using the following code:
// Load the file
final STGroup stGroup = new STGroupFile("exampleTemplate.stg");
// Pick the correct template
final ST templateExample = stGroup.getInstanceOf("templateExample");
// Pass on values to use when rendering
templateExample.add("param", "Hello World");
// Render
final String render = templateExample.render();
// Print
System.out.println(render);
The output is:
This is a template with the following param: (Hello World)
Some additional notes:
STGroupFile is a subclass of STGroup. There are other subclasses as well, which you can find out more about in the JavaDoc.
In the example above, the template file was placed on the classpath. This is not a requirement; files can be placed in a relative or an absolute folder as well.
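For example, here is a small sketch using one of those other subclasses, STGroupDir, which treats every .st file in a directory as a template of the same name; the directory and template names here are made up:

import org.stringtemplate.v4.ST;
import org.stringtemplate.v4.STGroup;
import org.stringtemplate.v4.STGroupDir;

public class TemplateDirExample {

    public static void main(String[] args) {
        // Loads templates from the "templates" directory; templates/greeting.st
        // would contain: greeting(name) ::= <<Hello, <name>!>>
        STGroup group = new STGroupDir("templates");
        ST greeting = group.getInstanceOf("greeting");
        greeting.add("name", "World");
        System.out.println(greeting.render());
    }
}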
I am using Rhino as part of an Ant build process to bundle and minify JavaScript. In addition to that, I would also like to pre-compile client-side templates, i.e., compile them from markup to JavaScript. At a glance, I thought Rhino's serialize() method would do it, but that does not seem to be the case.
// Load templating engine
load( "engine.js" );
// Read template file
var template = readFile( "foo.template" ),
// Compile template
compiled = engine.compile( template );
// Write compiled template to file
serialize( compiled, "compiledFoo.js" );
This results in a binary file being written. What I want is a text file which contains the compiled template.
If using serialize() is not the answer, then what is? Since it's Rhino, it would be possible to import Java classes as well. Offhand, I can't figure out a way to do it.
I know this can be done in Node but I'm not in a position to migrate the build process from Ant-Rhino to Grunt-Node right now.
In my search for an answer I came across the fact that SpiderMonkey, Rhino's C/C++ sister, has an uneval() function, which, as you can guess, does the opposite of JavaScript's eval() function. One more Google search later, I found that Rhino implemented uneval() in 1.5R5. That may be the only documentation that mentions Rhino has this feature.
That being said, here is the solution:
// Load the templating engine
load( "engine.js" );
// Read the template file
var template = readFile( "foo.template" ),
// Compile the template markup to JavaScript
compiled = engine.compile( template ),
// Un-evaluate the compiled template to text
code = uneval( compiled ),
// Create the file for the code
out = new java.io.FileWriter( "foo.js" );
// Write the code to the file
out.write( code, 0, code.length );
out.flush();
out.close();
quit();
Write a function in the JavaScript that you can call from Java and that returns the value you need as a string.
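For example, assuming the templating engine exposes (or you add) a JavaScript function that returns the compiled template source as a string, it could be called from Java roughly like this; the function name compileToSource is hypothetical:

import java.io.FileReader;

import org.mozilla.javascript.Context;
import org.mozilla.javascript.Function;
import org.mozilla.javascript.Scriptable;

public class TemplateCompiler {

    public static void main(String[] args) throws Exception {
        Context cx = Context.enter();
        try {
            Scriptable scope = cx.initStandardObjects();
            // Load the templating engine into the scope
            cx.evaluateReader(scope, new FileReader("engine.js"), "engine.js", 1, null);
            // Look up the (hypothetical) function that returns the compiled template source
            Function compileToSource = (Function) scope.get("compileToSource", scope);
            // Read the template markup and pass it to the JavaScript function
            String template = new String(java.nio.file.Files.readAllBytes(
                    java.nio.file.Paths.get("foo.template")));
            Object result = compileToSource.call(cx, scope, scope, new Object[] { template });
            // The returned value is the JavaScript source of the compiled template
            System.out.println(Context.toString(result));
        } finally {
            Context.exit();
        }
    }
}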