Xtext - No viable alternative at input - java

I'm trying to create a grammar that combines a scripting language with the ability to define methods.
Grammar
grammar org.example.domainmodel.Domainmodel with org.eclipse.xtext.xbase.Xbase
generate domainmodel "http://www.example.org/domainmodel/Domainmodel"
import "http://www.eclipse.org/xtext/xbase/Xbase" as xbase
Model:
imports = XImportSection
methods += XMethodDeclaration*
body = XBlockScriptLanguage;
XMethodDeclaration:
"def" type=JvmTypeReference name=ValidID
'('(params+=FullJvmFormalParameter (',' params+=FullJvmFormalParameter)*)? ')'
body=XBlockExpression
;
XBlockScriptLanguage returns xbase::XExpression:
{xbase::XBlockExpression}
(expressions+=XExpressionOrVarDeclaration ';'?)*
;
At the moment I have created the following JvmModelInferrer, which defines the main method for the scripting language.
JvmModelInferrer
def dispatch void infer(Model model, IJvmDeclaredTypeAcceptor acceptor, boolean isPreIndexingPhase) {
acceptor.accept(
model.toClass("myclass")
).initializeLater [
members += model.toMethod("main", model.newTypeRef(Void::TYPE)) [
parameters += model.toParameter("args", model.newTypeRef(typeof(String)).addArrayTypeDimension)
setStatic(true)
body = model.body
]
]
}
When I try to use my grammar, I get the following errors after writing a method:
no viable alternative at input 'def'
The method mymethod() is undefined
The problem only occurs with method declarations; without them, myclass.java is created.
Moreover, I get "Warning 200" about an ambiguous grammar. Why?

There are two fixes that appear necessary:
The imports section is not marked as optional. If it is intended to be optional, it should be declared as `imports=XImportSection?`. Otherwise, add the necessary import statements to your JvmModelInferrer example.
The dispatch keyword isn't defined in your grammar. As defined, a method consists of def, followed by a Java type (the return type), then the method's name (then the body, etc.). You could add `(dispatch?='dispatch')?` if you're targeting Xtend and intend to support its multiple-dispatch feature (or your own version of it).
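Putting both suggestions together, the affected rules might look like this sketch (untested; the dispatch flag is optional and only needed if you want that modifier):

```xtext
Model:
    imports=XImportSection?
    methods+=XMethodDeclaration*
    body=XBlockScriptLanguage;

XMethodDeclaration:
    'def' (dispatch?='dispatch')? type=JvmTypeReference name=ValidID
    '(' (params+=FullJvmFormalParameter (',' params+=FullJvmFormalParameter)*)? ')'
    body=XBlockExpression;
```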
HTH

Related

Groovy StreamingTemplateEngine gives error with withCredentials function

I created a Jenkins pipeline which calls the function below. It creates a template with a StreamingTemplateEngine object, but it gives an error.
def call() {
def name = "abc"
def binding = [
firstname: "Grace",
lastname: "Hopper",
]
def text = 'Dear <% out.print firstname %> ${lastname}'
def template = new groovy.text.StreamingTemplateEngine().createTemplate(text)
print template.make(binding)
def response = template.make(binding)
withCredentials([string(credentialsId: 'Token', variable: 'TOKEN')]) {
println("test")
println(response)
}
}
The code above prints the response successfully the first time, but at the end it gives the error below:
an exception which occurred:
in field com.cloudbees.groovy.cps.impl.BlockScopeEnv.locals
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv#3678d955
in field com.cloudbees.groovy.cps.impl.CpsClosureDef.capture
in object com.cloudbees.groovy.cps.impl.CpsClosureDef#23a3d63c
in field com.cloudbees.groovy.cps.impl.CpsClosure.def
in object org.jenkinsci.plugins.workflow.cps.CpsClosure2#6d8ad313
in field org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.closures
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup#76f2b368
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup#76f2b368
Caused: java.io.NotSerializableException: groovy.text.StreamingTemplateEngine$StreamingTemplate
If I remove the withCredentials block, it works fine.
Jenkins Pipeline runs Groovy code in continuation-passing style (groovy-cps). It expects every local variable to be Serializable so it can safely serialize every computation and restore it in case of, e.g., a Jenkins restart.
When using a non-serializable object, Jenkins Pipeline offers the @NonCPS annotation, which can be applied to a method to mark that part of the code as exempt from the CPS transformation.
"Pipeline scripts may mark designated methods with the annotation @NonCPS. These are then compiled normally (except for sandbox security checks), and so behave much like “binary” methods from the Java Platform, Groovy runtime, or Jenkins core or plugin code. @NonCPS methods may safely use non-Serializable objects as local variables, though they should not accept nonserializable parameters or return or store nonserializable values. You may not call regular (CPS-transformed) methods, or Pipeline steps, from a @NonCPS method, so they are best used for performing some calculations before passing a summary back to the main script. Note in particular that @Overrides of methods defined in binary classes, such as Object.toString(), should in general be marked @NonCPS since it will commonly be binary code calling them."
Source: https://github.com/jenkinsci/workflow-cps-plugin#technical-design
You can extract the StreamingTemplateEngine part into a separate @NonCPS method that takes the template text and a map of bindings. Something like this should be safe to use:
import com.cloudbees.groovy.cps.NonCPS
def call() {
def name = "abc"
def binding = [
firstname: "Grace",
lastname: "Hopper",
]
def text = 'Dear <% out.print firstname %> ${lastname}'
def response = parseTemplate(text, binding)
withCredentials([string(credentialsId: 'Token', variable: 'TOKEN')]) {
println(response)
}
}
@NonCPS
String parseTemplate(String text, Map bindings) {
new groovy.text.StreamingTemplateEngine().createTemplate(text)
.make(bindings)
.toString()
}

Import packageDefinition '.' InterfaceDefinition when defining xtext grammar

I am trying to create a simple grammar using Xtext. The grammar should define a language for Java interfaces (only), and currently I am struggling with import declarations. I want to be able to reference interfaces from other packages that I have defined, using their FQNs. Here is what my grammar looks like:
DomainModel:
elements=AbstractElement;
AbstractElement:
'package' packageDeclaration=PackageDeclaration
'import'? importDeclarations+=ImportDeclaration*
typeDeclaration=TypeDeclaration;
PackageDeclaration:
name=QualifiedName ';';
ImportDeclaration:
importedNamespace=[ReferncedType|QualifiedName] ('.*')? ';';
ReferncedType:
PackageDeclaration |InterfaceDeclaration; //need to combine both?? separated by '.'
TypeDeclaration:
'interface' interfaceDeclaration=InterfaceDeclaration;
TypeList:
Type ( ',' type+=Type)*;
Type:
typeDefinition=[ReferncedType|ValidID];
InterfaceDeclaration:
name=ValidID ('extends' superType=TypeList)? body=InterfaceBody;
InterfaceBody:
'{' (declarations+=InterfaceBodyDeclaration)+ '}';
InterfaceBodyDeclaration:
interfaceMemberDelcaration+=InterfaceMemberDeclaration ';';
InterfaceMemberDeclaration:
InterfaceMethodDeclaration;
InterfaceMethodDeclaration:
(Type | 'void') name+=ValidID '(' (formalParameters+=FormalParameters)* ')' ('throws'
....)?;
I have defined both files:
package org.first.sample;
interface Demo {
void getA();
}
....
package org.second.sample;
import org.first.sample.Demo; // this line says that the reference to org.first.sample.Demo is invalid, but I am able to reference org.first.sample
interface AnotherDemo {
Demo getDemo();
}
Do you have any ideas?
As I read the grammar:
ImportDeclaration:
importedNamespace=[ReferncedType|QualifiedName] ('.*')? ';';
ReferncedType:
PackageDeclaration |InterfaceDeclaration; //need to combine both?? separated by '.'
PackageDeclaration:
name=QualifiedName ';';
So import can be followed by a PackageDeclaration, i.e., a QualifiedName followed by a ';', and then, back in rule ImportDeclaration, another ';' must follow.
Also, I don't understand why ReferncedType can also expand to an InterfaceDeclaration, which is the entire thing.
Later
So, perhaps the "import" should be defined as
AbstractElement:
'package' packageDeclaration=PackageDeclaration
importDeclarations+=ImportDeclaration*
...
ImportDeclaration:
'import' importedNamespace=QualifiedName ('.*')? ';';
It doesn't permit static imports, and something must be done to keep track of .* if it occurs.
You can bind a custom QualifiedNameProvider to override the names exported by your interfaces.
Something like this should make the import reference work (import org.first.sample.Demo;):
public class CustomQualifiedNameProvider extends DefaultDeclarativeQualifiedNameProvider {
@Override
public QualifiedName getFullyQualifiedName(EObject obj) {
if (obj instanceof InterfaceDeclaration && obj.eContainer().eContainer() instanceof AbstractElement) {
QualifiedName packageQualifiedName = getFullyQualifiedName(((AbstractElement)obj.eContainer().eContainer()).getPackageDeclaration());
return packageQualifiedName.append(((InterfaceDeclaration) obj).getName());
}
return super.getFullyQualifiedName(obj);
}
}
Also, you can press Ctrl + Shift + F3 to see what names your exported objects have.
Actually @laune is right. Xtext supports this out of the box. As long as the referred types in the model have a feature called 'name', the fully qualified name of the type is built out of the box. What I noticed is wrong in my Xtext grammar definition is that Package should contain Interface, so that when a fully qualified name is formed, Xtext constructs it by combining the 'name' of the Interface with the 'name' of the package (or its parent).
@Fabien, your answer is correct in case the Xtext grammar rules don't contain 'name' features. It is a custom way of building FQNs if, for example, we are using this:
Package:
'package' name=ID ';' imports=Import ';' interface=Interface ';'
;
Interface:
qualifier=Qualifier !id=ID (instead of name=ID)
;
Then we must construct the FQN explicitly, because the built-in support looks for 'name' features only.
So in my case the correct way to use this is:
Package:
'package' name=ID; imports=Import typeDefinition=TypeDefinition;
Import:
'import' importedNamespace=[TypeDefinition|QualifiedName] ';'
;
TypeDefinition:
InterfaceDefinition | EnumDefinition ...
;
InterfaceDefinition:
qualifier=Qualifier !name=ID
;

TestRig in ANTLRworks: how to use own classes?

I'm trying to build an MT940 parser using ANTLR 4. The grammar is simple and works for most cases.
Now I want to return my own classes. This works:
file returns [String myString]
:
Header940? record+ EOF
;
I think this is because String is in the default Java packages.
I want this:
file returns [List<MT940Record> records]
:
Header940? record+ EOF
;
The TestRig complains (logically):
/tmp/TestRigTask-1392235543340/MT940_5aParser.java:50: error: cannot find symbol
public List<MT940Record> records;
^
symbol: class MT940Record
location: class FileContext
How can I set the CLASSPATH / lib directory for the TestRig in ANTLRWorks?
In ANTLRWorks, you can't. You can add an issue for this on the issue tracker:
https://github.com/sharwell/antlrworks2/issues
Note that ANTLR 4 was designed so you no longer need to use user-defined arguments and/or return values in your grammar. Instead of returning a List<MT940Record> as you described above, you should use a listener or visitor after the parse is complete to compute the necessary result.

What does the @ sign do?

I have seen the at (@) sign in Groovy files and I don't know if it's a Groovy or Java thing. I have tried searching Google, Bing, and DuckDuckGo for the mysterious at sign, but I haven't found anything. Can anyone give me a resource to learn more about what this operator does?
It's a Java annotation. Read more at that link.
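For context, here is a minimal, self-contained Java sketch (the names Marker, Annotated, and AnnotationDemo are invented for illustration) showing an annotation being declared, applied, and read back via reflection:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Declare a custom annotation; RUNTIME retention makes it visible to reflection.
@Retention(RetentionPolicy.RUNTIME)
@interface Marker {
    String value();
}

// Apply the annotation to a class.
@Marker("example")
class Annotated {
}

public class AnnotationDemo {
    public static void main(String[] args) {
        // Read the annotation back at runtime.
        Marker m = Annotated.class.getAnnotation(Marker.class);
        System.out.println(m.value()); // prints "example"
    }
}
```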
As well as being the sign for an annotation, it's the Groovy field operator.
In Groovy, calling object.field calls the getField method (if one exists). If you actually want a direct reference to the field itself, you use @, i.e.:
class Test {
String name = 'tim'
String getName() {
"Name: $name"
}
}
def t = new Test()
println t.name // prints "Name: tim"
println t.@name // prints "tim"
'@' is an annotation in Java/Groovy; look at the demo: Example with code
Java 5 and above supports the use of annotations to include metadata within programs. Groovy 1.1 and above also supports such annotations.
Annotations are used to provide information to tools and libraries.
They allow a declarative style of providing metadata information and allow it to be stored directly in the source code.
Such information would need to otherwise be provided using non-declarative means or using external files.
It can also be used to access attributes when parsing XML using Groovy's XmlSlurper:
def xml = '''<results><result index="1"/></results>'''
def results = new XmlSlurper().parseText(xml)
def index = results.result[0].@index.text() // index is "1"
http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper

How can users (safely) program their own filter in Java?

I want my users to be able to write their own filter when requesting a List in Java.
Option 1) I'm thinking about JavaScript with Rhino.
I get my user's filter as a JavaScript string and then call isAccepted(myItem) in this script.
Depending on the reply, I accept the element or not.
Option 2) I'm thinking about Groovy.
My users can write a Groovy script in a text field. When a user searches with this filter, the Groovy script is compiled in Java (on the first call) and the Java method isAccepted() is called.
Depending on the reply, I accept the element or not.
My application relies heavily on this functionality, and it will be called intensively on my server,
so I believe speed is key.
Option 1 thinking:
I think in most people's minds JavaScript is more like a toy. Even if that's not my opinion at all, my customers will probably not trust it that much. Do you think so?
Another bad point I expect is speed, from my reading on the web.
And again, my users could access Java and run unwanted code on my server... (any workaround?)
Option 2 thinking:
Correct me if I'm wrong, but I think in my case the main advantage of Groovy is speed, yet my users could compile and run unwanted code on my server... (any workaround?)
More info:
I'm running my application on Google App Engine for the main web service of my app.
The filter will be applied 20 times per call.
The filter will be (most of the time) simple.
Any ideas on how to make this filter safe for my server?
Any other approach to make it work?
My thoughts:
You'll have to use your own classloader when compiling your script, to prevent other classes from being accessible from the script. Not sure if that is possible on GAE.
You'll have to use Java's SecurityManager features to prevent a script from accessing the file system, network, etc. Not sure if that is possible on GAE.
Looking only at the two items above, it looks incredibly complicated and brittle to me. If you can't find sandboxing features in an existing project, you should stay away from it.
Designing a Domain Specific Language that will allow the expressions you decide are legal is a lot safer, and looking at the above items, you will have to think very hard anyway at what you want to allow. From there to designing the language is not a big step.
Be careful not to implement the DSL with Groovy closures (an internal DSL), because that is just Groovy and you are hackable again. You need to define an external language and parse it. I recommend the parser combinator library jparsec for defining the grammar; no compiler-compiler is needed in that case.
http://jparsec.codehaus.org/
FYI, here's a little parser I wrote with jparsec (Groovy code):
//import some static methods, this will allow more concise code
import static org.codehaus.jparsec.Parsers.*
import static org.codehaus.jparsec.Terminals.*
import static org.codehaus.jparsec.Scanners.*
import org.codehaus.jparsec.functors.Map as FMap
import org.codehaus.jparsec.functors.Map4 as FMap4
import org.codehaus.jparsec.functors.Map3 as FMap3
import org.codehaus.jparsec.functors.Map2 as FMap2
/**
* Uses jparsec combinator parser library to construct an external DSL parser for the following grammar:
* <pre>
* pipeline := routingStep*
* routingStep := IDENTIFIER '(' parameters? ')'
* parameters := parameter (',' parameter)*
* parameter := (IDENTIFIER | QUOTED_STRING) ':' QUOTED_STRING
* </pre>
*/
class PipelineParser {
//=======================================================
//Pass 1: Define which terminals are part of the grammar
//=======================================================
//operators
private static def OPERATORS = operators(',', '(', ')', ':')
private static def LPAREN = OPERATORS.token('(')
private static def RPAREN = OPERATORS.token(')')
private static def COLON = OPERATORS.token(':')
private static def COMMA = OPERATORS.token(',')
//identifiers tokenizer
private static def IDENTIFIER = Identifier.TOKENIZER
//single quoted strings tokenizer
private static def SINGLE_QUOTED_STRING = StringLiteral.SINGLE_QUOTE_TOKENIZER
//=======================================================
//Pass 2: Define the syntax of the grammar
//=======================================================
//PRODUCTION RULE: parameter := (IDENTIFIER | QUOTED_STRING) ':' QUOTED_STRING
@SuppressWarnings("GroovyAssignabilityCheck")
private static def parameter = sequence(or(Identifier.PARSER,StringLiteral.PARSER), COLON, StringLiteral.PARSER, new FMap3() {
def map(paramName, colon, paramValue) {
new Parameter(name: paramName, value: paramValue)
}
})
//PRODUCTION RULE: parameters := parameter (',' parameter)*
@SuppressWarnings("GroovyAssignabilityCheck")
private static def parameters = sequence(parameter, sequence(COMMA, parameter).many(), new FMap2() {
def map(parameter1, otherParameters) {
if (otherParameters != null) {
[parameter1, otherParameters].flatten()
} else {
[parameter1]
}
}
})
//PRODUCTION RULE: routingStep := IDENTIFIER '(' parameters? ')'
@SuppressWarnings("GroovyAssignabilityCheck")
private static def routingStep = sequence(Identifier.PARSER, LPAREN, parameters.optional(), RPAREN, new FMap4() {
def map(routingStepName, lParen, parameters, rParen) {
new RoutingStep(
name: routingStepName,
parameters: parameters ?: []
)
}
})
//PRODUCTION RULE: pipeline := routingStep*
@SuppressWarnings("GroovyAssignabilityCheck")
private static def pipeline = routingStep.many().map(new FMap() {
def map(from) {
new Pipeline(
routingSteps: from
)
}
})
//Combine the above tokenizers to create the tokenizer that will parse the stream and spit out the tokens of the grammar
private static def tokenizer = or(OPERATORS.tokenizer(), SINGLE_QUOTED_STRING, IDENTIFIER)
//This parser will be used to define which input sequences need to be ignored
private static def ignored = or(JAVA_LINE_COMMENT, JAVA_BLOCK_COMMENT, WHITESPACES)
/**
* Parser that is used to parse extender pipelines.
* <pre>
* def parser=PipelineParser.parser
* Pipeline pipeline=parser.parse(pipelineStr)
* </pre>
* Returns an instance of {@link Pipeline} containing the AST representation of the parsed string.
*/
//Create a syntactic pipeline parser that will use the given tokenizer to parse the input into tokens, and will ignore sequences that are matched by the given parser.
static def parser = pipeline.from(tokenizer, ignored.skipMany())
}
Some thoughts:
Whether you use JavaScript or Groovy, the script will run in a context that you provide, so it should not be able to access anything you don't want it to (but of course, you should test extensively to be sure if you go this route).
You'd probably be safer having the filter expression specified as data rather than as executable code, if possible. Of course, this depends on how complex the filter expressions are. Perhaps you can break the representation up into something like field, comparator, and value, which can be treated as data and evaluated in the regular way?
If you're worried about what the user can inject via a scripting language, you're probably safer with JavaScript. I don't think performance should be a problem, but again, I'd suggest extensive testing to be sure.
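To illustrate the data-driven approach suggested above, here is a minimal Java sketch (the Filter class and the operator names are invented for illustration). Because a filter is plain data (field, comparator, value) evaluated against a whitelist of operators, no user-supplied code ever executes:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// A filter expressed as data: field name, comparator, value.
// Only whitelisted operators can run, so there is no arbitrary code execution.
class Filter {
    final String field;
    final String op;
    final Object value;

    Filter(String field, String op, Object value) {
        this.field = field;
        this.op = op;
        this.value = value;
    }

    // Build a Predicate over Map-based items (a stand-in for real domain objects).
    @SuppressWarnings("unchecked")
    Predicate<Map<String, Object>> toPredicate() {
        switch (op) {
            case "eq": return item -> value.equals(item.get(field));
            case "gt": return item -> ((Comparable<Object>) item.get(field)).compareTo(value) > 0;
            case "lt": return item -> ((Comparable<Object>) item.get(field)).compareTo(value) < 0;
            default: throw new IllegalArgumentException("Unknown operator: " + op);
        }
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        List<Map<String, Object>> items = List.of(
            Map.of("name", "a", "size", 5),
            Map.of("name", "b", "size", 12)
        );
        // "size > 10" expressed as data, then applied to the list.
        Filter f = new Filter("size", "gt", 10);
        List<Map<String, Object>> result =
            items.stream().filter(f.toPredicate()).collect(Collectors.toList());
        System.out.println(result.size()); // prints 1
    }
}
```

A user-facing filter builder could produce these (field, op, value) triples directly, which also makes them easy to validate and store.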
I would never let users input arbitrary code. It's brittle, insecure, and a bad user experience. Not knowing anything about your users, my guess is that you will spend a lot of time answering questions. If most of your filters are simple, why not create a little filter builder for them instead?
As far as Groovy vs. JavaScript, I think Groovy is easier to understand and better for scripting, but that's just my opinion.
