Creation of AST via ANTLR API in Java

I'm currently working on a project that requires me to generate an ANTLR grammar on the fly, because the generated language depends on user input. Hence I generate the ANTLR grammar in code and produce a lexer and parser from it.
My goal is to take an input program written in the language of the generated grammar (it is actually created through genetic algorithms, but that's not relevant here) and ultimately obtain an AST representing the program. However, currently I'm only able to generate a ParseTree object, and this is not sufficient for my program.
Does anybody know how to use the ANTLR API to generate an object representing the AST (for example an antlr.collections.AST object)? I'll append a piece of code here, but the best way to test it is to run the Eclipse project that resides at https://snowdrop.googlecode.com/svn/trunk/src/ANTLRTest/
public class GEQuorra extends GEModel {

    Grammar grammar;
    private org.antlr.tool.Grammar lexer;
    private org.antlr.tool.Grammar parser;
    private String startRule;
    private String ignoreTokens;

    public GEQuorra(IntegrationTest.Grammar g) {
        grammar = new Grammar(g.getBnfGrammar());
        setGrammar(grammar);
        try {
            ignoreTokens = "WS";
            startRule = "agentProgram";
            parser = new org.antlr.tool.Grammar(g.getAntlrGrammar());
            @SuppressWarnings("rawtypes")
            List leftRecursiveRules = parser.checkAllRulesForLeftRecursion();
            if (leftRecursiveRules.size() > 0) {
                throw new Exception("Grammar is left recursive");
            }
            String lexerGrammarText = parser.getLexerGrammar();
            lexer = new org.antlr.tool.Grammar();
            lexer.importTokenVocabulary(parser);
            lexer.setFileName(parser.getFileName());
            lexer.setGrammarContent(lexerGrammarText);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public double getFitness(CandidateProgram program) {
        try {
            GECandidateProgram gecp = (GECandidateProgram) program;
            System.out.println("Parsing:" + gecp.getSourceCode());
            CharStream input = new ANTLRStringStream(gecp.getSourceCode());
            Interpreter lexEngine = new Interpreter(lexer, input);
            FilteringTokenStream tokens = new FilteringTokenStream(lexEngine);
            StringTokenizer tk = new StringTokenizer(ignoreTokens, " ");
            while (tk.hasMoreTokens()) {
                String tokenName = tk.nextToken();
                tokens.setTokenTypeChannel(lexer.getTokenType(tokenName), 99);
            }
            Interpreter parseEngine = new Interpreter(parser, tokens);
            ParseTree t;
            t = parseEngine.parse(startRule);
            return 1.0 / t.toStringTree().length();
        } catch (Exception e) {
            // Something failed, return very big fitness, making it unfavorable
            return Double.MAX_VALUE;
        }
    }
Where t.toStringTree() contains the ParseTree.

Related

Implementing save/open with RichTextFX?

Here is my code:
private void save(File file) {
    StyledDocument<ParStyle, Either<StyledText<TextStyle>, LinkedImage<TextStyle>>, TextStyle> doc = textarea.getDocument();

    // Use the Codec to save the document in a binary format
    textarea.getStyleCodecs().ifPresent(codecs -> {
        Codec<StyledDocument<ParStyle, Either<StyledText<TextStyle>, LinkedImage<TextStyle>>, TextStyle>> codec
                = ReadOnlyStyledDocument.codec(codecs._1, codecs._2, textarea.getSegOps());
        try {
            FileOutputStream fos = new FileOutputStream(file);
            DataOutputStream dos = new DataOutputStream(fos);
            codec.encode(dos, doc);
            fos.close();
        } catch (IOException fnfe) {
            fnfe.printStackTrace();
        }
    });
}
I am trying to implement the save/load functionality from the demo on the RichTextFX GitHub.
I am getting errors in the following lines:
StyledDocument<ParStyle, Either<StyledText<TextStyle>, LinkedImage<TextStyle>>, TextStyle> doc = textarea.getDocument();
error: incompatible types:
StyledDocument<Collection<String>,StyledText<Collection<String>>,Collection<String>>
cannot be converted to
StyledDocument<ParStyle,Either<StyledText<TextStyle>,LinkedImage<TextStyle>>,TextStyle>
and
= ReadOnlyStyledDocument.codec(codecs._1, codecs._2, textarea.getSegOps());
error: incompatible types: inferred type does not conform to equality
constraint(s) inferred: ParStyle
equality constraints(s): ParStyle,Collection<String>
I have added all the required .java files and imported them into my main code. I thought it would be relatively trivial to implement this demo but it has been nothing but headaches.
If this cannot be resolved, does anyone know an alternative way to save the text with formatting from RichTextFX?
Thank you
This question is quite old, but since I ran into the same problem, I figured a solution might be useful to others as well.
In the demo whose code you are using, ParStyle and TextStyle (custom types) are used to define how information about the style is stored.
The error messages you get pretty much just tell you that your way of storing the style information (in your case, in a String) is not compatible with the way it is done in the demo.
If you want to store the style in a String, which I did as well, you need to implement some way of serializing and deserializing the information yourself.
You can do that, for example (I used an InlineCssTextArea), in the following way:
public class SerializeManager {

    public static final String PAR_REGEX = "#!par!#";
    public static final String PAR_CONTENT_REGEX = "#!pcr!#";
    public static final String SEG_REGEX = "#!seg!#";
    public static final String SEG_CONTENT_REGEX = "#!scr!#";

    public static String serialized(InlineCssTextArea textArea) {
        StringBuilder builder = new StringBuilder();
        textArea.getDocument().getParagraphs().forEach(par -> {
            builder.append(par.getParagraphStyle());
            builder.append(PAR_CONTENT_REGEX);
            par.getStyledSegments().forEach(seg -> builder
                    .append(
                            seg.getSegment()
                                    .replaceAll(PAR_REGEX, "")
                                    .replaceAll(PAR_CONTENT_REGEX, "")
                                    .replaceAll(SEG_REGEX, "")
                                    .replaceAll(SEG_CONTENT_REGEX, "")
                    )
                    .append(SEG_CONTENT_REGEX)
                    .append(seg.getStyle())
                    .append(SEG_REGEX)
            );
            builder.append(PAR_REGEX);
        });
        String textAreaSerialized = builder.toString();
        return textAreaSerialized;
    }

    public static InlineCssTextArea fromSerialized(String string) {
        InlineCssTextArea textArea = new InlineCssTextArea();
        ReadOnlyStyledDocumentBuilder<String, String, String> builder = new ReadOnlyStyledDocumentBuilder<>(
                SegmentOps.styledTextOps(),
                ""
        );
        if (string.contains(PAR_REGEX)) {
            String[] parsSerialized = string.split(PAR_REGEX);
            for (int i = 0; i < parsSerialized.length; i++) {
                String par = parsSerialized[i];
                String[] parContent = par.split(PAR_CONTENT_REGEX);
                String parStyle = parContent[0];
                List<String> segments = new ArrayList<>();
                StyleSpansBuilder<String> spansBuilder = new StyleSpansBuilder<>();
                String styleSegments = parContent[1];
                Arrays.stream(styleSegments.split(SEG_REGEX)).forEach(seg -> {
                    String[] segContent = seg.split(SEG_CONTENT_REGEX);
                    segments.add(segContent[0]);
                    if (segContent.length > 1) {
                        spansBuilder.add(segContent[1], segContent[0].length());
                    } else {
                        spansBuilder.add("", segContent[0].length());
                    }
                });
                StyleSpans<String> spans = spansBuilder.create();
                builder.addParagraph(segments, spans, parStyle);
            }
            textArea.append(builder.build());
        }
        return textArea;
    }
}
You can then take the serialized InlineCssTextArea, write the resulting String to a file, and load and deserialize it.
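A minimal sketch of that round trip, using the SerializeManager above (the file path is arbitrary, UTF-8 is assumed, and textArea is your InlineCssTextArea), might look like this:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Save: serialize the text area and write the resulting String to disk
Path file = Paths.get("document.rtfx");
Files.write(file, SerializeManager.serialized(textArea).getBytes(StandardCharsets.UTF_8));

// Load: read the String back and rebuild the text area from it
String serialized = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
InlineCssTextArea restored = SerializeManager.fromSerialized(serialized);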
As you can see in the code, I made up some Strings as regexes which will be removed in the serialization process (we don't want our serializer to be injectable, do we ;)).
You can change these to whatever you like; just note that they will be removed if they appear in the text of the TextArea, so they should be something users won't miss in their TextArea.
Also note that this solution serializes the style of the text, the text itself, and the paragraph style, BUT not inserted images or parameters of the TextArea (such as width and height), just the text content of the TextArea with its style.
This issue on GitHub really helped me, by the way.

Antlr4 grammar requires me to use setInterpreter

When I set up a grammar with ANTLR4 and generate it, I see the following line throughout the parser:
_errHandler.sync(this);
This, in turn, does
getInterpreter()
and then calls methods on it. By default this returns null, and thus parsing throws NPEs.
I glommed together something that gets around this:
myparser.setInterpreter(new ParserATNSimulator(myparser, myparser.getATN(), mylexer.getInterpreter().decisionToDFA,
new PredictionContextCache()));
But I'm certain that is wrong. The odd thing is I don't see any examples addressing this requirement, so I'm wondering what I have done wrong that this even needs to be done.
Interestingly, TestRig works fine without the setInterpreter line. Here's what I'm doing:
PelLexer pl = new PelLexer(CharStreams.fromString(s));
CommonTokenStream tokens = new CommonTokenStream(pl);

SecureRandom r = new SecureRandom();
String clsName = Parser.class.getPackage().getName() + ".eval.Eval" + Math.abs(r.nextLong());

PelParser pp = new PelParser(tokens, clsName);
pp.setBuildParseTree(false);
// pp.setInterpreter(new ParserATNSimulator(pp, pp.getATN(), pl.getInterpreter().decisionToDFA, new PredictionContextCache()));
pp.addErrorListener(new PELErrorListener());
pp.blockStatements();

byte[] clzData = pp.getClassBytes();
PELClassLoader pcl = AccessController.doPrivileged(new PrivilegedAction<PELClassLoader>() {
    @Override
    public PELClassLoader run() {
        return new PELClassLoader(Thread.currentThread().getContextClassLoader());
    }
});
pcl.addClass(clsName, clzData);
Class<Evaluable> c = (Class<Evaluable>) pcl.loadClass(clsName);
return c.newInstance();
Here's the answer.
When you add a constructor to your parser, you DON'T want to call
super(tokens);
You want to call
this(tokens);
That's because the default constructor generated in your parser does:
public PelParser(TokenStream input) {
    super(input);
    _interp = new ParserATNSimulator(this, _ATN, _decisionToDFA, _sharedContextCache);
}
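So a custom constructor that takes your extra class-name argument should chain to the generated constructor instead of calling super directly; a minimal sketch (the clsName field is an assumption about your parser's members) would be:
public PelParser(TokenStream input, String clsName) {
    this(input);            // chains to the generated constructor, which installs the ATN interpreter
    this.clsName = clsName; // hypothetical field for the extra argument
}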

Get content of a file inside a directory

I want to get the content of a file inside a directory:
/sys/block/sda/device/model
I use this code to get the content:
String content = new String(Files.readAllBytes(Paths.get("/sys/block/sda/device/model")));
But in some scenarios, I have cases like this:
/sys/block/sda/device/model
/sys/block/sdb/device/model
/sys/block/sdc/device/model
How can I iterate over all the directories starting with sd* and print the model file?
Can you show me an example for Java 8 with a filter?
Here is an example of how to do this using Java 8 features:
Function<Path, byte[]> uncheckedRead = p -> {
    try { return Files.readAllBytes(p); }
    catch(IOException ex) { throw new UncheckedIOException(ex); }
};
try(Stream<Path> s = Files.find(Paths.get("/sys/block"), 1,
        (p, a) -> p.getName(p.getNameCount() - 1).toString().startsWith("sd"))) {
    s.map(p -> p.resolve("device/model")).map(uncheckedRead).map(String::new)
     .forEach(System.out::println);
}
This example strives for compactness and works stand-alone. For real applications, you would likely do it a bit differently. The task of using an IO operation as a Function, which doesn't allow checked exceptions, is quite common, so you might have a wrapper function like:
interface IOFunction<T, R> {
    R apply(T in) throws IOException;
}

static <T, R> Function<T, R> wrap(IOFunction<T, R> f) {
    return t -> {
        try { return f.apply(t); }
        catch(IOException ex) { throw new UncheckedIOException(ex); }
    };
}
Then you can use
try(Stream<Path> s = Files.find(Paths.get("/sys/block"), 1,
        (p, a) -> p.getName(p.getNameCount() - 1).toString().startsWith("sd"))) {
    s.map(p -> p.resolve("device/model")).map(wrap(Files::readAllBytes))
     .map(String::new).forEach(System.out::println);
}
But maybe you’d use newDirectoryStream instead even if the returned DirectoryStream is not a Stream and hence requires a manual Stream creation as this method allows passing a glob pattern like "sd*":
try(DirectoryStream<Path> ds
        = Files.newDirectoryStream(Paths.get("/sys/block"), "sd*")) {
    StreamSupport.stream(ds.spliterator(), false)
                 .map(p -> p.resolve("device/model")).map(wrap(Files::readAllBytes))
                 .map(String::new).forEach(System.out::println);
}
Finally, the option to process the files as stream of lines should be mentioned:
try(DirectoryStream<Path> ds
        = Files.newDirectoryStream(Paths.get("/sys/block"), "sd*")) {
    StreamSupport.stream(ds.spliterator(), false)
                 .map(p -> p.resolve("device/model")).flatMap(wrap(Files::lines))
                 .forEach(System.out::println);
}
Rather than using sd* directly, it's better if you first search the existing folders inside the path /sys/block, using the code below.
Here is a working example:
String dirNames[] = new File("E://block").list();
for (String name : dirNames) {
    if (new File("E://block//" + name).isDirectory()) {
        if (name.contains("sd")) {
            String content = new String(Files.readAllBytes(Paths.get("E://block//" + name + "//device//model")));
            System.out.println(content);
        }
    }
}

Java: CSV file read & write

I'm reading 2 csv files: store_inventory & new_acquisitions.
I want to be able to compare the store_inventory csv file with new_acquisitions.
1) If the item names match just update the quantity in store_inventory.
2) If new_acquisitions has a new item that does not exist in store_inventory, then add it to the store_inventory.
Here is what I have done so far, but it's not very good. I added comments where I need to add tasks 1 & 2.
Any advice or code to do the above tasks would be great! Thanks.
File new_acq = new File("/src/test/new_acquisitions.csv");
Scanner acq_scan = null;
try {
    acq_scan = new Scanner(new_acq);
} catch (FileNotFoundException ex) {
    Logger.getLogger(mainpage.class.getName()).log(Level.SEVERE, null, ex);
}
String itemName;
int quantity;
Double cost;
Double price;

File store_inv = new File("/src/test/store_inventory.csv");
Scanner invscan = null;
try {
    invscan = new Scanner(store_inv);
} catch (FileNotFoundException ex) {
    Logger.getLogger(mainpage.class.getName()).log(Level.SEVERE, null, ex);
}
String itemNameInv;
int quantityInv;
Double costInv;
Double priceInv;

while (acq_scan.hasNext()) {
    String line = acq_scan.nextLine();
    if (line.charAt(0) == '#') {
        continue;
    }
    String[] split = line.split(",");
    itemName = split[0];
    quantity = Integer.parseInt(split[1]);
    cost = Double.parseDouble(split[2]);
    price = Double.parseDouble(split[3]);

    while (invscan.hasNext()) {
        String line2 = invscan.nextLine();
        if (line2.charAt(0) == '#') {
            continue;
        }
        String[] split2 = line2.split(",");
        itemNameInv = split2[0];
        quantityInv = Integer.parseInt(split2[1]);
        costInv = Double.parseDouble(split2[2]);
        priceInv = Double.parseDouble(split2[3]);
        if (itemName == itemNameInv) {
            //update quantity
        }
    }
    //add new entry into csv file
}
Thanks again for any help. =]
I suggest you use one of the existing CSV parsers, such as Commons CSV or Super CSV, instead of reinventing the wheel. It should make your life a lot easier.
Your implementation makes the common mistake of breaking the line on commas by using line.split(","). This does not work because the values themselves might have commas in them. If that happens, the value must be quoted, and you need to ignore commas within the quotes. The split method cannot do this -- I see this mistake a lot.
Here is the source of an implementation that does it correctly:
http://agiletribe.purplehillsbooks.com/2012/11/23/the-only-class-you-need-for-csv-files/
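To illustrate the quoting problem with one of the parsers suggested above, here is a small sketch using Commons CSV (the sample line is made up):
import java.io.StringReader;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;

public class QuotedCsvDemo {
    public static void main(String[] args) throws Exception {
        String line = "\"Widget, large\",5,2.50,4.99";   // the first value contains a comma

        // Naive splitting breaks the quoted value into two fields
        System.out.println(line.split(",").length);      // prints 5 (wrong: the quoted value was split)

        // A real CSV parser respects the quotes
        for (CSVRecord record : CSVFormat.DEFAULT.parse(new StringReader(line))) {
            System.out.println(record.get(0));           // Widget, large
            System.out.println(record.size());           // prints 4 (correct)
        }
    }
}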
With the help of the open-source library uniVocity-parsers, you could develop pretty clean code, as follows:
private void processInventory() throws IOException {
    /**
     * ---------------------------------------------
     * Read CSV rows into list of beans you defined
     * ---------------------------------------------
     */
    // 1st, config the CSV reader with a row processor attaching the bean definition
    CsvParserSettings settings = new CsvParserSettings();
    settings.getFormat().setLineSeparator("\n");
    BeanListProcessor<Inventory> rowProcessor = new BeanListProcessor<Inventory>(Inventory.class);
    settings.setRowProcessor(rowProcessor);
    settings.setHeaderExtractionEnabled(true);

    // 2nd, parse all rows from the CSV files into the lists of beans you defined
    CsvParser parser = new CsvParser(settings);
    parser.parse(new FileReader("/src/test/store_inventory.csv"));
    List<Inventory> storeInvList = rowProcessor.getBeans();

    parser.parse(new FileReader("/src/test/new_acquisitions.csv"));
    List<Inventory> newAcqList = rowProcessor.getBeans();

    // 3rd, process the beans with business logic
    for (Inventory newAcq : newAcqList) {
        boolean isItemIncluded = false;
        // look through the full inventory for every acquisition
        for (Inventory storeInv : storeInvList) {
            // 1) If the item names match just update the quantity in store_inventory
            if (storeInv.getItemName().equalsIgnoreCase(newAcq.getItemName())) {
                storeInv.setQuantity(newAcq.getQuantity());
                isItemIncluded = true;
            }
        }
        // 2) If new_acquisitions has a new item that does not exist in store_inventory,
        //    then add it to the store_inventory.
        if (!isItemIncluded) {
            storeInvList.add(newAcq);
        }
    }
}
Just follow this code sample I worked out according to your requirements. Note that the library provides a simplified API and significant performance for parsing CSV files.
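For reference, the sample above assumes an Inventory bean whose fields are mapped to the CSV columns; a minimal sketch (the column names here are assumptions, adjust them to your actual headers) could be:
import com.univocity.parsers.annotations.Parsed;

public class Inventory {

    @Parsed(field = "item_name")   // assumed header name
    private String itemName;

    @Parsed(field = "quantity")    // assumed header name
    private int quantity;

    @Parsed(field = "cost")        // assumed header name
    private double cost;

    @Parsed(field = "price")       // assumed header name
    private double price;

    public String getItemName() { return itemName; }
    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }
    public double getCost() { return cost; }
    public double getPrice() { return price; }
}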
The operation you are performing will require that, for each item in your new acquisitions, you search each item in inventory for a match. This is not only inefficient, but the scanner that you have set up for your inventory file would need to be reset after each item.
I would suggest that you add your new acquisitions and your inventory to collections and then iterate over your new acquisitions, looking up each new item in your inventory collection. If the item exists, update it. If it doesn't, add it to the inventory collection. For this activity, it might be good to write a simple class to contain an inventory item. It could be used for both the new acquisitions and for the inventory. For a fast lookup, I would suggest that you use a HashSet or HashMap for your inventory collection.
At the end of the process, don't forget to persist the changes to your inventory file.
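A minimal sketch of that approach (the Item class and the choice to add quantities rather than overwrite them are assumptions; hook it up to whatever CSV reading you use):
import java.util.LinkedHashMap;
import java.util.Map;

class Item {
    final String name;
    int quantity;
    double cost;
    double price;

    Item(String name, int quantity, double cost, double price) {
        this.name = name;
        this.quantity = quantity;
        this.cost = cost;
        this.price = price;
    }
}

public class InventoryMerge {

    // Merge acquisitions into the inventory map (keyed by item name):
    // known items get their quantity updated, unknown items are added.
    static void merge(Map<String, Item> inventory, Iterable<Item> acquisitions) {
        for (Item acq : acquisitions) {
            Item existing = inventory.get(acq.name);
            if (existing != null) {
                existing.quantity += acq.quantity;   // task 1: update the quantity (change to = if you want to overwrite)
            } else {
                inventory.put(acq.name, acq);        // task 2: add the new item
            }
        }
    }
}
Using a LinkedHashMap for the inventory keeps the original file order, which is handy when you write the merged inventory back out.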
As Java doesn't support parsing of CSV files natively, we have to rely on a third-party library. Opencsv is one of the best libraries available for this purpose. It's open source and is shipped with the Apache 2.0 licence, which makes commercial use possible.
This link should help you and others in similar situations!
For writing to CSV:
public void writeCSV() {
    // Delimiter used in CSV file
    final String NEW_LINE_SEPARATOR = "\n";
    // CSV file header
    final Object[] FILE_HEADER = { "Employee Name", "Employee Code", "In Time", "Out Time", "Duration", "Is Working Day" };

    String fileName = "fileName.csv";
    List<Objects> objects = new ArrayList<Objects>();
    FileWriter fileWriter = null;
    CSVPrinter csvFilePrinter = null;

    // Create the CSVFormat object with "\n" as a record delimiter
    CSVFormat csvFileFormat = CSVFormat.DEFAULT.withRecordSeparator(NEW_LINE_SEPARATOR);

    try {
        fileWriter = new FileWriter(fileName);
        csvFilePrinter = new CSVPrinter(fileWriter, csvFileFormat);
        csvFilePrinter.printRecord(FILE_HEADER);

        // Write a new student object list to the CSV file
        for (Object object : objects) {
            List<String> record = new ArrayList<String>();
            record.add(object.getValue1().toString());
            record.add(object.getValue2().toString());
            record.add(object.getValue3().toString());
            csvFilePrinter.printRecord(record);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            fileWriter.flush();
            fileWriter.close();
            csvFilePrinter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
You can use the Apache Commons CSV API.
FYI, this answer: https://stackoverflow.com/a/42198895/6549532
Read / Write Example

Write XML file (using XStream) to filesystem in Java

I need to be able to serialize a string and then have it saved in a .txt or .xml file. I've never used the APIs for reading/writing files; just remember I am a relative beginner. Also, I need to know how to deserialize the string so it can be printed out in the terminal as a normal string.
XStream has facilities to read from and write to files, see the simple examples (Writer.java and Reader.java) in this article.
If you can serialize it to a txt file, just open an ObjectOutputStream and have it use String's own serialization capability for you.
String str = "serialize me";
String file = "file.txt";
try {
    ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file));
    out.writeObject(str);
    out.close();

    ObjectInputStream in = new ObjectInputStream(new FileInputStream(file));
    String newString = (String) in.readObject();
    in.close();
    assert str.equals(newString);
    System.out.println("Strings are equal");
} catch (IOException ex) {
    ex.printStackTrace();
} catch (ClassNotFoundException ex) {
    ex.printStackTrace();
}
You could also just open a PrintStream and siphon it out that way, then use a BufferedReader and readLine(). If you really want to get fancy (since this is a HW assignment, after all), you could use a for loop and print each character individually. Using XML is more complicated than you need for serializing a String, and using an external library is just overkill.
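For completeness, a plain-text round trip along those lines might look like this (a sketch; the file name is arbitrary):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintStream;

public class PlainTextRoundTrip {
    public static void main(String[] args) throws IOException {
        String str = "serialize me";

        // Write the string out as a plain line of text
        try (PrintStream out = new PrintStream("file.txt")) {
            out.println(str);
        }

        // Read it back line by line
        try (BufferedReader in = new BufferedReader(new FileReader("file.txt"))) {
            System.out.println(in.readLine());
        }
    }
}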
If you are beginning Java, then take some time to look through the Apache Commons project. There are lots of basic extensions to Java that you will make use of many times.
I'm assuming you just want to persist a string so you can read it back later - in which case it doesn't necessarily need to be XML.
To write a string to a file, see org.apache.commons.io.FileUtils:
FileUtils.writeStringToFile(File file,String data)
To read it back:
FileUtils.readFileToString(File file)
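Putting the two calls together, a round trip might look like this (a sketch; it assumes a reasonably recent commons-io with the Charset overloads, and the file name is arbitrary):
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.FileUtils;

public class FileUtilsRoundTrip {
    public static void main(String[] args) throws IOException {
        File file = new File("data.txt");

        // Write the string out...
        FileUtils.writeStringToFile(file, "my string", StandardCharsets.UTF_8);

        // ...and read it back
        String back = FileUtils.readFileToString(file, StandardCharsets.UTF_8);
        System.out.println(back);
    }
}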
References:
http://commons.apache.org/
http://commons.apache.org/io
http://commons.apache.org/io/api-release/org/apache/commons/io/FileUtils.html
Make sure you also look at commons-lang for lots of good basic stuff.
If you need to create a text file containing XML that represents the contents of an object (and make it bidirectional), just use JSON-lib:
class MyBean {
    private String name = "json";
    private int pojoId = 1;
    private char[] options = new char[]{'a', 'f'};
    private String func1 = "function(i){ return this.options[i]; }";
    private JSONFunction func2 = new JSONFunction(new String[]{"i"}, "return this.options[i];");

    // getters & setters
    ...
}

JSONObject jsonObject = JSONObject.fromObject( new MyBean() );
String xmlText = XMLSerializer.write( jsonObject );
From there, just write the String to your file. Much simpler than all those XML APIs. However, if you need to conform to a DTD or XSD, this is a bad way to go, as it's much more free-format and conforms only to the object layout.
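Writing the resulting String out can then be done with plain java.nio (a sketch; the path is arbitrary):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Persist the XML text produced above (xmlText) to a file of your choosing
Files.write(Paths.get("mybean.xml"), xmlText.getBytes(StandardCharsets.UTF_8));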
http://json-lib.sourceforge.net/usage.html
Piko
Is there any particular reason to use XStream? This would be extremely easy to do with something like JDOM if all you are doing is trying to serialize a string or two.
Ie, something like:
Document doc = new Document();
Element rootEl = new Element("root");
rootEl.setText("my string");
doc.setRootElement(rootEl);

XMLOutputter outputter = new XMLOutputter();
outputter.output(doc, System.out);
Some of the details above are probably wrong, but that's the basic flow. Perhaps you should ask a more specific question so that we can understand exactly what problem you are having.
From http://www.xml.com/pub/a/2004/08/18/xstream.html:
import com.thoughtworks.xstream.XStream;

class Date {
    int year;
    int month;
    int day;
}

public class Serialize {
    public static void main(String[] args) {
        XStream xstream = new XStream();
        Date date = new Date();
        date.year = 2004;
        date.month = 8;
        date.day = 15;
        xstream.alias("date", Date.class);
        String decl = "\n";
        String xml = xstream.toXML(date);
        System.out.print(decl + xml);
    }
}

public class Deserialize {
    public static void main(String[] args) {
        XStream xstream = new XStream();
        Date date = new Date();
        xstream.alias("date", Date.class);
        String xml = xstream.toXML(date);
        System.out.print(xml);
        Date newdate = (Date) xstream.fromXML(xml);
        newdate.month = 12;
        newdate.day = 2;
        String newxml = xstream.toXML(newdate);
        System.out.print("\n\n" + newxml);
    }
}
You can then take the xml string and write it to a file.
try something like this:
FileOutputStream fos = null;
try {
    new File(FILE_LOCATION_DIRECTORY).mkdirs();
    File fileLocation = new File(FILE_LOCATION_DIRECTORY + "/" + fileName);
    fos = new FileOutputStream(fileLocation);
    stream.toXML(userAlertSubscription, fos);
} catch (IOException e) {
    Log.error(this, "Error %s in file %s", e.getMessage(), fileName);
} finally {
    IOUtils.closeQuietly(fos);
}
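Reading it back is symmetric (a sketch; UserAlertSubscription is assumed to be the type you serialized, and the same XStream instance configuration is reused):
FileInputStream fis = null;
try {
    fis = new FileInputStream(new File(FILE_LOCATION_DIRECTORY + "/" + fileName));
    // fromXML(InputStream) rebuilds the object graph written by toXML(...)
    UserAlertSubscription restored = (UserAlertSubscription) stream.fromXML(fis);
} catch (IOException e) {
    Log.error(this, "Error %s in file %s", e.getMessage(), fileName);
} finally {
    IOUtils.closeQuietly(fis);
}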
