Not sure what I am doing wrong, but with my setup even a basic Cypher query using the Neo4j REST API is not working. I get a java.lang.RuntimeException: Error reading as JSON ''.
My setup:
<dependency>
<groupId>org.neo4j</groupId>
<artifactId>neo4j-rest-graphdb</artifactId>
<version>2.0.0-M06</version>
</dependency>
<dependency>
<groupId>org.neo4j</groupId>
<artifactId>neo4j</artifactId>
<version>2.0.0</version>
</dependency>
<dependency>
<groupId>org.neo4j.app</groupId>
<artifactId>neo4j-server</artifactId>
<version>2.0.0</version>
</dependency>
private GraphDatabaseService graphDb;
private RestCypherQueryEngine queryEngine;
System.setProperty("org.neo4j.rest.batch_transaction", "true");
graphDb = new RestGraphDatabase( "http://localhost:7474/db/data/" );
queryEngine = new RestCypherQueryEngine(((RestGraphDatabase)graphDb).getRestAPI());
StringBuilder query = new StringBuilder();
query.append("match (u { id:'").append(id).append("' }) return u");
QueryResult<Map<String,Object>> result = queryEngine.query(query.toString(), null);
//the above statement throws the runtime exception with message "Error reading as JSON '' "
I just released 2.0.0 of java-rest-binding, so please give that a try.
M06 shouldn't really work with Neo4j 2.0.0 final.
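If it helps, the upgraded dependency should look roughly like this (a sketch, assuming the artifact coordinates stay the same as for the milestone):
<dependency>
<groupId>org.neo4j</groupId>
<artifactId>neo4j-rest-graphdb</artifactId>
<version>2.0.0</version>
</dependency>
Independently of the version, it is also safer to pass the id as a Cypher parameter instead of concatenating it into the query string. A minimal sketch, assuming the same queryEngine field as in the question (MapUtil comes from org.neo4j.helpers.collection):
Map<String, Object> params = MapUtil.map("id", id);
QueryResult<Map<String, Object>> result = queryEngine.query("match (u { id: {id} }) return u", params);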
The exception I am getting --
java.lang.NoClassDefFoundError: org/apache/http/impl/client/DefaultClientConnectionReuseStrategy
at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)
at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)
at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:166)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:393)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:200)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
The code block where the exception is occurring --
// The region value being passed is "us-east-1" and paramKeyPath is the key present in AWS Systems Manager.
public String getParameterUsingAwsSSM(String paramKeyPath, String region) throws Exception {
String paramKeyValue = null;
System.out.println("Key path in AWS systems manager :: " + paramKeyPath);
try {
Region newRegion = Region.of(region);
//The below line gets printed---
System.out.println("Region:: " + newRegion);
//Exception occurs in below line most probably---
SsmClient ssmClient = SsmClient.builder().region(newRegion).build();
//The below line doesn't get printed---
System.out.println("successfully got ssmclient");
GetParameterRequest parameterRequest = GetParameterRequest.builder().name(paramKeyPath)
.withDecryption(Boolean.TRUE).build();
System.out.println("successfully parameterRequest fetched");
GetParameterResponse parameterResponse = ssmClient.getParameter(parameterRequest);
System.out.println("successfully parameterResponse fetched");
paramKeyValue = parameterResponse.parameter().value();
System.out.println("The value of param is ::: "+
parameterResponse.parameter().value());
} catch (Exception exception) {
System.out.println("Exception from getParameterUsingAwsSSM() : "+ exception);
throw exception;
}
return paramKeyValue;
}
The dependencies I have added in my pom.xml --
<dependencies>
<dependency>
<!-- this is an upgrade from 3.0-rc4 -->
<groupId>commons-httpclient</groupId>
<artifactId>commons-httpclient</artifactId>
<version>3.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.30</version>
</dependency>
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>cloudfront</artifactId>
<version>2.19.15</version>
</dependency>
<dependency>
<groupId>org.bouncycastle</groupId>
<artifactId>bcprov-jdk18on</artifactId>
<version>1.72</version>
</dependency>
<dependency>
<groupId>org.bouncycastle</groupId>
<artifactId>bcpkix-jdk18on</artifactId>
<version>1.72</version>
</dependency>
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>regions</artifactId>
<version>2.19.15</version>
</dependency>
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>ssm</artifactId>
<version>2.19.15</version>
</dependency>
</dependencies>
Can someone please help me with this? I have been stuck on this for 3 days.
I was expecting to get the parameter value from the AWS Parameter Store, but I am getting the above-mentioned exception. I also tried adding the org.apache.httpcomponents:httpclient dependency to my project, but it is still not working.
I just tested the code here in the AWS GitHub repo:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/ssm
It works perfectly. Try using the POM/code example located in that repo. Did you take your POM dependencies from that GitHub repo?
This is a change from V1 of the SDK. Add something like:
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>url-connection-client</artifactId>
<version>2.19.15</version>
</dependency>
to your pom.xml.
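With that artifact on the classpath, the client from the question should build. If you want to be explicit about which HTTP implementation the SDK uses, you can also wire it in when building the client. A minimal sketch, reusing the region and key path from the question (the imports shown are the relevant SDK v2 packages):
import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParameterRequest;
SsmClient ssmClient = SsmClient.builder()
.region(Region.of("us-east-1"))
.httpClient(UrlConnectionHttpClient.builder().build()) // explicitly use the url-connection client
.build();
GetParameterRequest parameterRequest = GetParameterRequest.builder()
.name(paramKeyPath) // the key path from the question
.withDecryption(Boolean.TRUE)
.build();
String paramKeyValue = ssmClient.getParameter(parameterRequest).parameter().value();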
I'm trying to issue an Elasticsearch query using the Java API from my application, but for some reason I keep getting the following error:
java.lang.NoClassDefFoundError: org/apache/lucene/search/spans/SpanBoostQuery
at org.elasticsearch.index.query.QueryBuilders.boolQuery(QueryBuilders.java:301)
Below are the current dependencies I have in my pom.xml:
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>5.4.2</version>
</dependency>
<dependency>
<groupId>org.locationtech.spatial4j</groupId>
<artifactId>spatial4j</artifactId>
<version>0.6</version>
</dependency>
<dependency>
<groupId>com.vividsolutions</groupId>
<artifactId>jts</artifactId>
<version>1.13</version>
<exclusions>
<exclusion>
<groupId>xerces</groupId>
<artifactId>xercesImpl</artifactId>
</exclusion>
</exclusions>
</dependency>
The code:
double lon = -115.14029016987968;
double lat = 36.17206351151878;
QueryBuilder fullq = boolQuery()
.must(matchAllQuery())
.filter(geoShapeQuery(
"geometry",
ShapeBuilders.newCircleBuilder().center(lon, lat).radius(10, DistanceUnit.METERS)).relation(ShapeRelation.INTERSECTS));
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));
SearchRequestBuilder finalQuery = client.prepareSearch("speedlimit").setTypes("speedlimit")
.setQuery(fullq);
SearchResponse searchResponse = finalQuery.execute().actionGet();
SearchHits searchHits = searchResponse.getHits();
if (searchHits.getTotalHits() > 0) {
String strSpeed = JsonPath.read(searchResponse.toString(), "$.hits.hits[0]._source.properties.TITLE");
int speed = Integer.parseInt(strSpeed.substring(0, 2));
}
else if (searchHits.getTotalHits() <= 0){
System.out.println("nothing");
}
This is the query I'm trying to run. I've followed the ES docs but can't get any further. Has anyone tried to run a query like this, or am I going the wrong route? I'm tempted to just abandon the Java API and go back to making HTTP calls from Java, but I thought I would give their Java API a try. Any tips appreciated, thanks.
This error was resolved for me after I removed the older dependency related to "org.apache.lucene". We need to make sure all of the org.apache.lucene dependencies are recent enough to be on par with the Lucene version that contains SpanBoostQuery.
I commented out the dependency below and it worked:
<!--<dependency>-->
<!--<groupId>org.apache.lucene</groupId>-->
<!--<artifactId>lucene-spellchecker</artifactId>-->
<!--<version>3.6.2</version>-->
<!--</dependency>-->
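If the stale Lucene classes come in transitively rather than from a dependency you declare yourself, you can exclude them the same way the question already excludes xercesImpl, and let the transport client pull in its own Lucene version. This is only a hypothetical example; the coordinates of the offending dependency are placeholders:
<dependency>
<groupId>some.group</groupId>
<artifactId>library-that-drags-in-old-lucene</artifactId>
<version>x.y.z</version>
<exclusions>
<exclusion>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-core</artifactId>
</exclusion>
</exclusions>
</dependency>
Running mvn dependency:tree is a quick way to see which dependency is bringing in the old org.apache.lucene artifacts.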
I am new to Cassandra and Spark and am trying to fetch data from the database using Spark.
I am using Java for this purpose.
The problem is that no exception is thrown and no error occurs, but I am still not able to get the data. Find my code below -
SparkConf sparkConf = new SparkConf();
sparkConf.setAppName("Spark-Cassandra Integration");
sparkConf.setMaster("local[4]");
sparkConf.set("spark.cassandra.connection.host", "stagingHost22");
sparkConf.set("spark.cassandra.connection.port", "9042");
sparkConf.set("spark.cassandra.connection.timeout_ms", "5000");
sparkConf.set("spark.cassandra.read.timeout_ms", "200000");
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
String keySpaceName = "testKeySpace";
String tableName = "testTable";
CassandraJavaRDD<CassandraRow> cassandraRDD = CassandraJavaUtil.javaFunctions(javaSparkContext).cassandraTable(keySpaceName, tableName);
final ArrayList dataList = new ArrayList();
JavaRDD<String> userRDD = cassandraRDD.map(new Function<CassandraRow, String>() {
private static final long serialVersionUID = -165799649937652815L;
public String call(CassandraRow row) throws Exception {
System.out.println("Inside RDD call");
dataList.add(row);
return "test";
}
});
System.out.println( "data Size -" + dataList.size());
The Cassandra and Spark Maven dependencies are -
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-mapping</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-extras</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.sparkjava</groupId>
<artifactId>spark-core</artifactId>
<version>2.5.4</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>2.0.0-M3</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.4.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.3.0</version>
</dependency>
I am sure that the stagingHost22 host has the Cassandra data, with keyspace testKeySpace and table testTable. Find the query output below -
cqlsh:testKeySpace> select count(*) from testTable;
 count
-------
    34
(1 rows)
Can anybody please suggest what I am missing here?
Thanks in advance.
Your current code does not perform any Spark action. Therefore no data is loaded.
See the Spark documentation to understand the difference between transformations and actions in Spark:
http://spark.apache.org/docs/latest/programming-guide.html#rdd-operations
Furthermore, adding CassandraRows to an ArrayList isn't something that is usually necessary when using the Cassandra connector. I would suggest implementing a simple select first (following the Spark-Cassandra-Connector documentation), as sketched below. If that works, you can extend the code as needed.
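A minimal sketch of such a select, reusing the keyspace, table, and JavaSparkContext from the question (count() and collect() are actions, so unlike map() they actually trigger reading from Cassandra):
// count() is an action, so this forces the rows to be read from Cassandra
long rowCount = CassandraJavaUtil.javaFunctions(javaSparkContext)
.cassandraTable("testKeySpace", "testTable")
.count();
System.out.println("row count - " + rowCount);
// collect() is also an action; it brings the rows back to the driver,
// so only use it for small result sets
List<CassandraRow> rows = CassandraJavaUtil.javaFunctions(javaSparkContext)
.cassandraTable("testKeySpace", "testTable")
.collect();
System.out.println("data size - " + rows.size());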
Check the following links for samples of how to load data using the connector:
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/14_data_frames.md
I'm working on a project which uses Spark Streaming, Apache Kafka and Cassandra.
I use the streaming-kafka integration. In Kafka I have a producer which sends data using this configuration:
props.put("metadata.broker.list", KafkaProperties.ZOOKEEPER);
props.put("bootstrap.servers", KafkaProperties.SERVER);
props.put("client.id", "DemoProducer");
where ZOOKEEPER = localhost:2181, and SERVER = localhost:9092.
Once I send data I can receive it with Spark, and I can consume it too. My Spark configuration is:
SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
sparkConf.set("spark.cassandra.connection.host", "localhost");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
After that I am trying to store this data in the Cassandra database. But when I try to open a session using this:
CassandraConnector connector = CassandraConnector.apply(jssc.sparkContext().getConf());
Session session = connector.openSession();
I get the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:334)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:182)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:70)
at org.kakfa.spark.ConsumerData.main(ConsumerData.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
As for Cassandra, I'm using the default configuration:
start_native_transport: true
native_transport_port: 9042
- seeds: "127.0.0.1"
cluster_name: 'Test Cluster'
rpc_address: localhost
rpc_port: 9160
start_rpc: true
I can connect to Cassandra from the command line using cqlsh localhost and get the following message:
Connected to Test Cluster at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4] Use HELP for help. cqlsh>
I used nodetool status too, which shows me this:
http://pastebin.com/ZQ5YyDyB
To run Cassandra I invoke bin/cassandra -f.
What I am trying to run is this:
try (Session session = connector.openSession()) {
System.out.println("dentro del try");
session.execute("DROP KEYSPACE IF EXISTS test");
System.out.println("dentro del try - 1");
session.execute("CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
System.out.println("dentro del try - 2");
session.execute("CREATE TABLE test.users (id TEXT PRIMARY KEY, name TEXT)");
System.out.println("dentro del try - 3");
}
My pom.xml file looks like this:
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.6.0-M1</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.6.0-M2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20160212</version>
</dependency>
</dependencies>
I have no idea why I can't connect to Cassandra from Spark. Is my configuration bad, or what am I doing wrong?
Thank you!
com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces
That error indicates an old driver with a new Cassandra version. Looking at the POM file, we find the spark-cassandra-connector dependency declared twice.
One uses version 1.6.0-M2 (good) and the other 1.1.0-alpha2 (old).
Remove the references to the old 1.1.0-alpha2 dependencies from your POM:
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
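After removing them, only the 1.6.x connector artifacts should remain. Running mvn dependency:tree afterwards is a quick way to confirm that a single connector version ends up on the classpath.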
I'm trying to build some filters to filter data from Bigtable. I'm using the bigtable-hbase drivers and the HBase drivers. Here are my dependencies from pom.xml:
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-common</artifactId>
<version>${hbase.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-protocol</artifactId>
<version>${hbase.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>${hbase.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-server</artifactId>
<version>${hbase.version}</version>
</dependency>
<dependency>
<groupId>com.google.cloud.bigtable</groupId>
<artifactId>bigtable-hbase</artifactId>
<version>${bigtable.version}</version>
</dependency>
I'm filtering data like this:
Filter filterName = new SingleColumnValueFilter(Bytes.toBytes("FName"), Bytes.toBytes("FName"),
CompareFilter.CompareOp.EQUAL, new RegexStringComparator("JOHN"));
FilterList filters = new FilterList();
filters.addFilter(filterName);
Scan scan1 = new Scan();
scan1.setFilter(filters);
This works OK. But then I add the following to the previous code:
Filter filterSalary = new SingleColumnValueFilter(Bytes.toBytes("Salary"), Bytes.toBytes("Salary"),
CompareFilter.CompareOp.GREATER_OR_EQUAL, new LongComparator(100000));
filters.addFilter(filterSalary);
and it gives me this exception:
Exception in thread "main" com.google.cloud.bigtable.hbase.adapters.filters.UnsupportedFilterException: Unsupported filters encountered: FilterSupportStatus{isSupported=false, reason='ValueFilter must have either a BinaryComparator with any compareOp or a RegexStringComparator with an EQUAL compareOp. Found (LongComparator, GREATER_OR_EQUAL)'}
at com.google.cloud.bigtable.hbase.adapters.filters.FilterAdapter.throwIfUnsupportedFilter(FilterAdapter.java:144)
at com.google.cloud.bigtable.hbase.adapters.ScanAdapter.throwIfUnsupportedScan(ScanAdapter.java:55)
at com.google.cloud.bigtable.hbase.adapters.ScanAdapter.adapt(ScanAdapter.java:91)
at com.google.cloud.bigtable.hbase.adapters.ScanAdapter.adapt(ScanAdapter.java:43)
at com.google.cloud.bigtable.hbase.BigtableTable.getScanner(BigtableTable.java:247)
So my question is: how do I filter on a long data type? Is it an HBase issue or Bigtable-specific?
I found this: How do you use a custom comparator with SingleColumnValueFilter on HBase? But I can't load my own jars onto the server, so it is not applicable in my case.
SingleColumnValueFilter supports the following comparators:
BinaryComparator
BinaryPrefixComparator
RegexStringComparator
See this link for an up-to-date list:
https://cloud.google.com/bigtable/docs/hbase-differences
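A possible workaround (my assumption, not something the Bigtable docs state for this case): if the Salary column is written with Bytes.toBytes(long) and the values are never negative, the big-endian encoding preserves numeric order, so a BinaryComparator, which Bigtable does support, can stand in for the LongComparator:
// Assumes Salary was stored with Bytes.toBytes(long) and is non-negative,
// so lexicographic byte order matches numeric order.
Filter filterSalary = new SingleColumnValueFilter(Bytes.toBytes("Salary"), Bytes.toBytes("Salary"),
CompareFilter.CompareOp.GREATER_OR_EQUAL, new BinaryComparator(Bytes.toBytes(100000L)));
filters.addFilter(filterSalary);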