I've been using Elastic 1.4.4, but we're now upgrading to 2.2.0. I am having trouble getting my integration tests to run. My integration test extends org.elasticsearch.test.ESIntegTestCase:
@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.SUITE, numDataNodes = 1)
public abstract class AbstractApplicationTest extends ESIntegTestCase {
    ...
}
I can index documents without problems, but when I try searching with a script field, I get an error. I'm running my tests using sbt (I'm using the Play framework).
The error I'm getting is the following:
{
"error": {
"root_cause": [{
"type": "script_exception",
"reason": "failed to compile groovy script"
}],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [{
"shard": 0,
"index": "bokun",
"node": "BNyjts9hTOicRgCAWGdKgQ",
"reason": {
"type": "script_exception",
"reason": "Failed to compile inline script [if(_source.accumulated_availability != null){ for(item in _source.accumulated_availability){ if(start.compareTo(item.day) < 0 && (end == null || end.compareTo(item.day) >= 0)){ return item.day } }} else return null;] using lang [groovy]",
"caused_by": {
"type": "script_exception",
"reason": "failed to compile groovy script",
"caused_by": {
"type": "multiple_compilation_errors_exception",
"reason": "startup failed:\nCould not instantiate global transform class groovy.grape.GrabAnnotationTransformation specified at jar:file:/Users/ogg/.ivy2/cache/org.codehaus.groovy/groovy-all/jars/groovy-all-2.4.4-indy.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation because of exception java.lang.ClassNotFoundException: groovy.grape.GrabAnnotationTransformation\n\nCould not instantiate global transform class org.codehaus.groovy.ast.builder.AstBuilderTransformation specified at jar:file:/Users/ogg/.ivy2/cache/org.codehaus.groovy/groovy-all/jars/groovy-all-2.4.4-indy.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation because of exception java.lang.ClassNotFoundException: org.codehaus.groovy.ast.builder.AstBuilderTransformation\n\n2 errors\n"
}
}
}
}]
},
"status": 500
}
I'll reformat the "reason" message for readability:
startup failed:
Could not instantiate global transform class
groovy.grape.GrabAnnotationTransformation
specified at jar:file:/Users/ogg/.ivy2/cache/org.codehaus.groovy/groovy-all/jars/groovy-all-2.4.4-indy.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation
because of exception
java.lang.ClassNotFoundException: groovy.grape.GrabAnnotationTransformation
Could not instantiate global transform class
org.codehaus.groovy.ast.builder.AstBuilderTransformation
specified at jar:file:/Users/ogg/.ivy2/cache/org.codehaus.groovy/groovy-all/jars/groovy-all-2.4.4-indy.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation
because of exception java.lang.ClassNotFoundException: org.codehaus.groovy.ast.builder.AstBuilderTransformation
What can cause this? As far as I can tell, I have this class in my classpath: org.codehaus.groovy.ast.builder.AstBuilderTransformation.
I have the following dependencies in my build.sbt:
"org.codehaus.groovy" % "groovy-all" % "2.4.4",
"com.carrotsearch.randomizedtesting" % "randomizedtesting-runner" % "2.3.0" % "test",
"org.apache.lucene" % "lucene-test-framework" % "5.4.1",
"org.elasticsearch" % "elasticsearch" % "2.2.0" % "test" classifier "tests" withSources(),
"org.elasticsearch" % "elasticsearch" % "2.2.0" withSources(),
"org.elasticsearch.plugin" % "analysis-icu" % "2.2.0" % "test",
"org.elasticsearch.module" % "lang-groovy" % "2.2.0" % "test"
...and I have the following in my ESIntegTestCase extension class:
@Override
protected Settings nodeSettings(int nodeOrdinal) {
    return Settings.settingsBuilder()
            .put(super.nodeSettings(nodeOrdinal))
            .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)
            .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)
            .put(Node.HTTP_ENABLED, true)
            .put("script.groovy.sandbox.enabled", true)
            .put("script.engine.groovy.inline.search", true)
            .put("script.engine.groovy.inline.update", "true")
            .put("script.inline", true)
            .put("script.update", true)
            .put("script.indexed", true)
            .put("script.default_lang", "groovy")
            .build();
}

@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
    return pluginList(GroovyPlugin.class, AnalysisICUPlugin.class);
}
I'm completely stuck, and Google is unwilling to help! 🙂
Any help or pointers would be greatly appreciated.
Many thanks,
OGG
This is now solved.
The problem was actually a SecurityException rethrown as a ClassNotFoundException.
Using the instructions at https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-scripting-security.html, I created a security policy file and added permissions for the following classes and packages:
grant {
    permission org.elasticsearch.script.ClassPermission "java.lang.Class";
    permission org.elasticsearch.script.ClassPermission "org.codehaus.groovy.*";
    permission org.elasticsearch.script.ClassPermission "groovy.*";
    permission org.elasticsearch.script.ClassPermission "java.lang.*";
    permission org.elasticsearch.script.ClassPermission "java.util.*";
    permission org.elasticsearch.script.ClassPermission "java.math.BigDecimal";
    permission org.elasticsearch.script.ClassPermission "org.joda.time.*";
};
And then I start the tests passing my security policy file on the command line:
-Djava.security.policy=security.policy
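Since the tests run through sbt, the same flag can also be set in build.sbt, roughly like this (a sketch, assuming the policy file sits in the project root; note that sbt only applies javaOptions when the test JVM is forked):
fork in Test := true
javaOptions in Test += "-Djava.security.policy=security.policy"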
You can see the thread on the Elastic discussion forum which helped me reach this solution: https://discuss.elastic.co/t/2-2-0-esintegtestcase-classnotfoundexception-when-executing-groovy-script-in-search/43579
I want to use a WebView in my Flutter project. I load a URL in the WebView and just see a white page. In the log I see this error:
E/chromium( 5310): [ERROR:ssl_client_socket_impl.cc(946)] handshake failed; returned -1, SSL error code 1, net_error -202
I don't get this error with other domains.
Notes:
1. I have this problem on Android only.
2. I tried the same thing in a native Android project with Java and got the same problem.
Code:
import 'package:webview_flutter/webview_flutter.dart' as web;

Scaffold(
  appBar: new AppBar(
    title: InkWell(
      child: isLoading == true
          ? Loading(
              indicator: BallPulseIndicator(),
              size: 100,
              color: Colors.white,
            )
          : Text("اپلیکیشن آرایشگاه"),
      onTap: () {
        _webViewControllerFuture.loadUrl("domain");
      },
    ),
  ),
  body: Builder(builder: (BuildContext context) {
    return SafeArea(
      child: web.WebView(
        key: key,
        onWebViewCreated: (WebViewController webViewController) {
          _webViewControllerFuture = webViewController;
        },
        debuggingEnabled: true,
        initialUrl: 'https://domain',
        javascriptMode: web.JavascriptMode.unrestricted,
        onPageStarted: (String url) {
          if (url == "https://domain")
            scan();
          else if (!_isBack)
            setState(() {
              isLoading = true;
            });
        },
        onPageFinished: (String url) {
          _isBack = false;
          setState(() {
            isLoading = false;
          });
        },
        gestureNavigationEnabled: true,
      ),
    );
  }),
);
I hit this error with webview_flutter plugin version 2.0.4 (beta).
I had used that version for about two years without any issues. When this problem appeared, I upgraded the plugin to version 2.5.0 and the issue was gone.
Maybe the version you're using is too old; I'd suggest upgrading, but staying below 3.0, because 3.0 introduces breaking changes.
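For reference, the upgrade itself is just a version-constraint bump in pubspec.yaml; a minimal sketch (the exact constraint is illustrative, assuming you want to stay below 3.0 as suggested):
dependencies:
  webview_flutter: ">=2.5.0 <3.0.0"
Then run flutter pub get to pull in the new version.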
I have a Spring Boot 2.3.2.RELEASE WebFlux application. In the application.yml I have these settings:
spring:
  jackson:
    property-naming-strategy: SNAKE_CASE
    serialization:
      write-date-timestamps-as-nanoseconds: false
      write-dates-as-timestamps: true
Indeed, almost every JSON response is sent back to user agents correctly, in snake_case format, except for the default ones, i.e. the responses generated by the framework itself.
This is a usual GET:
{
"port_code": "blah",
"firm_code": "foo",
"type": "THE_TYPE",
"status": "BAR",
}
...this is a custom response for a ConstraintViolationException (intercepted within a @RestControllerAdvice):
{
"timestamp": 1597344667156,
"path": "/path/to/resources/null",
"status": 400,
"error": "Bad Request",
"message": [
{
"field": "id",
"code": "field.id.Range",
"message": "Identifier must be a number within the expected range"
}
],
"request_id": "10c4978f-3"
}
...and finally this is how Spring Boot generates an HTTP 404 from the controller:
{
"timestamp": 1597344662823,
"path": "/path/to/resources/312297273",
"status": 404,
"error": "Not Found",
"message": null,
"requestId": "10c4978f-2" <== NOTICE HERE requestId INSTEAD OF request_id ...what the hell?!
}
This is how I'm triggering that in the controller: return service.findById(id).switchIfEmpty(Mono.error(new ResponseStatusException(HttpStatus.NOT_FOUND)));
Is there any way to tell the framework to honor Jackson's settings? Or is there anything else I'm missing configuration-wise?
The following can be used to reproduce the responses:
Create a new Spring Boot 2.3.3 WebFlux project (using start.spring.io)
Make it Gradle / Java 11.x
Update application.properties with spring.jackson.property-naming-strategy = SNAKE_CASE
Replace DemoApplication with the following:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;
import reactor.core.publisher.Mono;
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @RestController
    @RequestMapping("/hello")
    public class Ctrl {

        @GetMapping(value = "/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
        public Mono<DummyResponse> get(@PathVariable("id") final String id) {
            if ("ok".equalsIgnoreCase(id)) {
                return Mono.just(new DummyResponse(id));
            }
            return Mono.error(new ResponseStatusException(HttpStatus.NOT_FOUND));
        }

        final class DummyResponse {
            public final String gotIt;

            DummyResponse(final String gotIt) {
                this.gotIt = gotIt;
            }
        }
    }
}
[x80486@archbook:~]$ curl -H "accept: application/json" -X GET http://localhost:8080/hello/ok | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 154 0 --:--:-- --:--:-- --:--:-- 156
{
"got_it": "ok"
}
[x80486@archbook:~]$ curl -H "accept: application/json" -X GET http://localhost:8080/hello/notok | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 140 100 140 0 0 5600 0 --:--:-- --:--:-- --:--:-- 5600
{
"timestamp": "2020-08-14T12:32:49.096+00:00",
"path": "/hello/notok",
"status": 404,
"error": "Not Found",
"message": null,
"requestId": "207921a8-2" <== Again, no snake case here :/
}
Spring Boot's error controller gives Jackson a Map to serialize, whereas your controller advice is serializing a Java object. When serializing a Java object, Jackson works from its JavaBean-style properties and uses the property naming strategy to decide how each of those properties should appear in the JSON. When serializing a Map, the keys aren't treated as properties, so the property naming strategy has no effect and Jackson writes the keys as-is.
As things stand, if you want to customize the format of map keys as they're serialized, you'll have to configure your ObjectMapper with a custom key serializer for String keys.
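A minimal sketch of that idea (it assumes Spring Boot registers any com.fasterxml.jackson.databind.Module bean with its auto-configured ObjectMapper, and it reuses Jackson's SnakeCaseStrategy to translate the keys; the class and method names here are made up):
import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.PropertyNamingStrategy;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.module.SimpleModule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SnakeCaseMapKeyConfiguration {

    @Bean
    public SimpleModule snakeCaseMapKeyModule() {
        // Reuse Jackson's own snake_case translation for String map keys.
        PropertyNamingStrategy.SnakeCaseStrategy snakeCase =
                new PropertyNamingStrategy.SnakeCaseStrategy();
        SimpleModule module = new SimpleModule();
        module.addKeySerializer(String.class, new JsonSerializer<String>() {
            @Override
            public void serialize(String key, JsonGenerator gen, SerializerProvider serializers)
                    throws IOException {
                gen.writeFieldName(snakeCase.translate(key));
            }
        });
        return module;
    }
}
Be aware that this affects every String-keyed map the application serializes, not just the framework's error responses.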
As part of my promote-artifact task I am building an AQL query to send to Artifactory. Unfortunately, it does not work, and I am trying to debug it step by step. Here is the message I am preparing to send. Somehow println prints "check" but shows nothing for message in the logs. Why is that?
stage('Promote') {
    id = "0.1-2020-01-28-18-08.zip"
    try {
        message = """
        items.find(
            {
                "$and":[
                    { "repo": {"$eq": "generic-dev-local"} },
                    { "path": {"$match": "mh/*"} },
                    { "name": {"$eq": ${id}}}
                ]
            }
        ).include("artifact.module.build.number")
        """
        println "check"
        println message
    } catch (e) {
        return [''] + e.toString().tokenize('\n')
    }
}
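A likely culprit here: inside a double-quoted (triple-double-quoted) Groovy string, $and, $eq and $match are treated as GString placeholders for variables named and, eq and match rather than as literal AQL operators. A sketch of the same query with the literal dollar signs escaped so that only ${id} is interpolated, under that assumption (the id value is also quoted, since name is a string in AQL):
message = """
items.find(
    {
        "\$and": [
            { "repo": { "\$eq": "generic-dev-local" } },
            { "path": { "\$match": "mh/*" } },
            { "name": { "\$eq": "${id}" } }
        ]
    }
).include("artifact.module.build.number")
"""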
I am doing batch ingestion in Druid using the wikiticker-index.json file that comes with the Druid quickstart.
The following is my data schema in the wikiticker-index.json file:
{
  "type": "index_hadoop",
  "spec": {
    "ioConfig": {
      "type": "hadoop",
      "inputSpec": {
        "type": "static",
        "paths": "quickstart/wikiticker-2015-09-12-sampled.json"
      }
    },
    "dataSchema": {
      "dataSource": "wikiticker",
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "day",
        "queryGranularity": "none",
        "intervals": [
          "2015-09-12/2015-09-13"
        ]
      },
      "parser": {
        "type": "hadoopyString",
        "parseSpec": {
          "format": "json",
          "dimensionsSpec": {
            "dimensions": [
              "channel",
              "cityName",
              "comment",
              "countryIsoCode",
              "countryName",
              "isAnonymous",
              "isMinor",
              "isNew",
              "isRobot",
              "isUnpatrolled",
              "metroCode",
              "namespace",
              "page",
              "regionIsoCode",
              "regionName",
              "user"
            ]
          },
          "timestampSpec": {
            "format": "auto",
            "column": "time"
          }
        }
      },
      "metricsSpec": [
        {
          "name": "count",
          "type": "count"
        },
        {
          "name": "added",
          "type": "longSum",
          "fieldName": "added"
        },
        {
          "name": "deleted",
          "type": "longSum",
          "fieldName": "deleted"
        },
        {
          "name": "delta",
          "type": "longSum",
          "fieldName": "delta"
        },
        {
          "name": "user_unique",
          "type": "hyperUnique",
          "fieldName": "user"
        }
      ]
    },
    "tuningConfig": {
      "type": "hadoop",
      "partitionsSpec": {
        "type": "hashed",
        "targetPartitionSize": 5000000
      },
      "jobProperties": {}
    }
  }
}
After ingesting the sample JSON, only some of the metrics show up; I am unable to find the longSum metrics, i.e. added, deleted and delta.
Any particular reason?
Does anybody know about this?
The OP confirmed that this comment from Slim Bougerra solved it:
You need to add the metrics yourself in the Superset UI; Superset doesn't populate them automatically.
Haven't seen a solution to my particular problem so far; at least, nothing has worked. It's driving me pretty crazy, and this particular combination doesn't seem to have much coverage on Google. The error occurs as the job goes into the mapper, from what I can tell. The input to this job is Avro-schema'd output compressed with deflate, though I tried uncompressed as well.
Avro: 1.7.7
Hadoop: 2.4.1
I am getting this error and I'm not sure why. Here are my job, mapper, and reducer. The error happens when the mapper comes in.
Sample uncompressed Avro input file (StockReport.SCHEMA is defined this way)
{"day": 3, "month": 2, "year": 1986, "stocks": [{"symbol": "AAME", "timestamp": 507833213000, "dividend": 10.59}]}
Job
@Override
public int run(String[] strings) throws Exception {
    Job job = Job.getInstance();
    job.setJobName("GenerateGraphsJob");
    job.setJarByClass(GenerateGraphsJob.class);
    configureJob(job);
    int resultCode = job.waitForCompletion(true) ? 0 : 1;
    return resultCode;
}

private void configureJob(Job job) throws IOException {
    try {
        Configuration config = getConf();
        Path inputPath = ConfigHelper.getChartInputPath(config);
        Path outputPath = ConfigHelper.getChartOutputPath(config);

        job.setInputFormatClass(AvroKeyInputFormat.class);
        AvroKeyInputFormat.addInputPath(job, inputPath);
        AvroJob.setInputKeySchema(job, StockReport.SCHEMA$);

        job.setMapperClass(StockAverageMapper.class);
        job.setCombinerClass(StockAverageCombiner.class);
        job.setReducerClass(StockAverageReducer.class);

        FileOutputFormat.setOutputPath(job, outputPath);
    } catch (IOException | ClassCastException e) {
        LOG.error("An job error has occurred.", e);
    }
}
Mapper:
public class StockAverageMapper extends
        Mapper<AvroKey<StockReport>, NullWritable, StockYearSymbolKey, StockReport> {

    private static Logger LOG = LoggerFactory.getLogger(StockAverageMapper.class);

    private final StockReport stockReport = new StockReport();
    private final StockYearSymbolKey stockKey = new StockYearSymbolKey();

    @Override
    protected void map(AvroKey<StockReport> inKey, NullWritable ignore, Context context)
            throws IOException, InterruptedException {
        try {
            StockReport inKeyDatum = inKey.datum();
            for (Stock stock : inKeyDatum.getStocks()) {
                updateKey(inKeyDatum, stock);
                updateValue(inKeyDatum, stock);
                context.write(stockKey, stockReport);
            }
        } catch (Exception ex) {
            LOG.debug(ex.toString());
        }
    }
Schema for map output key:
{
"namespace": "avro.model",
"type": "record",
"name": "StockYearSymbolKey",
"fields": [
{
"name": "year",
"type": "int"
},
{
"name": "symbol",
"type": "string"
}
]
}
Stack trace:
java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
at org.apache.avro.mapreduce.AvroKeyInputFormat.createRecordReader(AvroKeyInputFormat.java:47)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:492)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:735)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Edit: Not that it matters, but I'm working to reduce this to data I can create JFreeChart outputs from. It's not getting through the mapper, so that shouldn't be related.
The problem is that org.apache.hadoop.mapreduce.TaskAttemptContext was a class in Hadoop 1 but became an interface in Hadoop 2.
This is one of the reasons why libraries which depend on the Hadoop libs need to have separately compiled jarfiles for Hadoop 1 and Hadoop 2. Based on your stack trace, it appears that somehow you got a Hadoop1-compiled Avro jarfile, despite running with Hadoop 2.4.1.
The download mirrors for Avro provide nice separate downloadables for avro-mapred-1.7.7-hadoop1.jar vs avro-mapred-1.7.7-hadoop2.jar.
The problem is that Avro 1.7.7 supports two versions of Hadoop and hence depends on both. By default, the Avro 1.7.7 jars depend on the old Hadoop version.
To build with Avro 1.7.7 against Hadoop 2, just add the classifier to your Maven dependency:
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-mapred</artifactId>
    <version>1.7.7</version>
    <classifier>hadoop2</classifier>
</dependency>
This tells Maven to resolve avro-mapred-1.7.7-hadoop2.jar instead of avro-mapred-1.7.7.jar.
The same applies to Avro 1.7.4 and above.
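If you build with sbt instead of Maven, the same classifier trick should apply, roughly:
"org.apache.avro" % "avro-mapred" % "1.7.7" classifier "hadoop2"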