Logging to an in-memory database using log4j in Spring Boot - java

In our Spring Boot project I am trying to accumulate log messages in an H2 in-memory database and load them into a MySQL database once the message count reaches 50.
Here are the dependencies from the pom.xml file:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-log4j -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-log4j</artifactId>
<version>1.3.8.RELEASE</version>
</dependency>
<!-- H2 DB -->
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>${h2.version}</version>
</dependency>
The log4j.properties file is as follows:
log4j.rootLogger = DEBUG, sql
# Define the file appender
log4j.appender.sql=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.sql.URL=jdbc:h2:mem:liteDatabase
# Set Database Driver
log4j.appender.sql.driver=org.h2.Driver
# Set database user name and password
log4j.appender.sql.user=sa
log4j.appender.sql.password=sa
# Set the SQL statement to be executed.
log4j.appender.sql.sql=INSERT INTO LOGS (LOG_DATE, LOGGER, LOG_LEVEL, MESSAGE) VALUES (now() ,'%C','%p','%m')
# Define the xml layout for file appender
log4j.appender.sql.layout=org.apache.log4j.PatternLayout
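Since the goal is to accumulate messages in batches, it may be worth noting that the stock log4j 1.x JDBCAppender buffers events and only writes them once its buffer fills. A hedged sketch of that setting (my own assumption, not part of the project yet):
# buffer up to 50 events before they are flushed to the LOGS table
log4j.appender.sql.bufferSize=50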
From my createTables-h2.sql file:
CREATE TABLE LOGS
(
id INT auto_increment PRIMARY KEY,
LOG_DATE DATETIME NOT NULL,
LOGGER VARCHAR2(250) NOT NULL,
LOG_LEVEL VARCHAR2(10) NOT NULL,
MESSAGE VARCHAR2(1000) NOT NULL
);
The datasource bean for H2:
@Bean(name = "liteDataSource")
public BasicDataSource liteDataSource() {
BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName(env.getProperty("h2.driver-class-name"));
dataSource.setUrl(env.getProperty("h2.url"));
dataSource.setUsername(env.getProperty("h2.username"));
dataSource.setPassword(env.getProperty("h2.password"));
Resource initData = new ClassPathResource("scripts/createTables-h2.sql");
DatabasePopulator databasePopulator = new ResourceDatabasePopulator(initData);
DatabasePopulatorUtils.execute(databasePopulator, dataSource);
return dataSource;
}
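The h2.* properties this bean reads are not shown in the question; values along the following lines would match the log4j configuration above (assumed values, not the project's actual file):
h2.driver-class-name=org.h2.Driver
h2.url=jdbc:h2:mem:liteDatabase
h2.username=sa
h2.password=sa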
The problem is that log4j tries to log to the database before the LOGS table has been created.
Error message:
log4j:ERROR Failed to excute sql
org.h2.jdbc.JdbcSQLException: Table "LOGS" not found; SQL statement:
INSERT INTO LOGS VALUES ('', now() ,'org.springframework.core.env.MutablePropertySources','DEBUG','Adding [servletConfigInitParams] PropertySource with lowest search precedence') [42102-193]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.command.Parser.readTableOrView(Parser.java:5389)
at org.h2.command.Parser.readTableOrView(Parser.java:5366)
at org.h2.command.Parser.parseInsert(Parser.java:1053)
at org.h2.command.Parser.parsePrepared(Parser.java:413)
at org.h2.command.Parser.parse(Parser.java:317)
at org.h2.command.Parser.parse(Parser.java:289)
at org.h2.command.Parser.prepareCommand(Parser.java:254)
at org.h2.engine.Session.prepareLocal(Session.java:561)
at org.h2.engine.Session.prepareCommand(Session.java:502)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1203)
at org.h2.jdbc.JdbcStatement.executeUpdateInternal(JdbcStatement.java:126)
at org.h2.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:115)
at org.apache.log4j.jdbc.JDBCAppender.execute(JDBCAppender.java:218)
at org.apache.log4j.jdbc.JDBCAppender.flushBuffer(JDBCAppender.java:289)
at org.apache.log4j.jdbc.JDBCAppender.append(JDBCAppender.java:186)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.log(Log4jLoggerAdapter.java:581)
at org.apache.commons.logging.impl.SLF4JLocationAwareLog.debug(SLF4JLocationAwareLog.java:131)
at org.springframework.core.env.MutablePropertySources.addLast(MutablePropertySources.java:109)
at org.springframework.web.context.support.StandardServletEnvironment.customizePropertySources(StandardServletEnvironment.java:84)
at org.springframework.core.env.AbstractEnvironment.<init>(AbstractEnvironment.java:122)
at org.springframework.core.env.StandardEnvironment.<init>(StandardEnvironment.java:54)
at org.springframework.web.context.support.StandardServletEnvironment.<init>(StandardServletEnvironment.java:44)
at org.springframework.boot.SpringApplication.getOrCreateEnvironment(SpringApplication.java:436)
at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:335)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:308)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1186)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1175)
at com.example.DemoApplication.main(DemoApplication.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Please help me out if you can offer any solution!
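One idea I am considering (not yet verified in this project) is to let H2 create the LOGS table itself when the appender's first connection is opened, by putting an INIT script on the JDBC URL in log4j.properties:
log4j.appender.sql.URL=jdbc:h2:mem:liteDatabase;DB_CLOSE_DELAY=-1;INIT=RUNSCRIPT FROM 'classpath:scripts/createTables-h2.sql'
DB_CLOSE_DELAY=-1 keeps the named in-memory database alive between connections, so the table created here would still exist when the liteDataSource bean connects later. Since the INIT script runs on every new connection, the script would need CREATE TABLE IF NOT EXISTS to be safe.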

Related

Gremlin query on Janusgraph through Spark. Error: Provider org.janusgraph.hadoop.serialize.JanusGraphKryoShimService could not be instantiated

Current Architecture
Description
I am using JanusGraph 0.6.2 for graph processing.
GCP Bigtable as the JanusGraph backend/database.
Spark 3.0.0 with Hadoop 2.7 for data processing, set up locally (planning to set up the environment in GCP after the POC).
Gremlin Client and Java 11 as clients to run the Spark job, for queries such as traversals and node lookups through SparkGraphComputer.
Problem
I am able to trigger a query job to do the node count on Spark using the Gremlin Client, but I am facing issues triggering a query job using the Java APIs.
Expectation
Trigger a query job using the Java APIs.
The Apache Spark setup is done.
Configuration that works with the Gremlin Client:
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Hadoop Graph Configuration
#
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
gremlin.hadoop.graphWriter=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output
gremlin.spark.persistContext=true
#
# JanusGraph HBase InputFormat configuration
#
#janusgraphmr.ioformat.conf.storage.backend=hbase
#janusgraphmr.ioformat.conf.storage.hostname=localhost
#janusgraphmr.ioformat.conf.storage.port=8586
#janusgraphmr.ioformat.conf.storage.hbase.table=janusgraph
janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hbase.ext.hbase.client.connection.impl=com.google.cloud.bigtable.hbase2_x.BigtableConnection
janusgraphmr.ioformat.conf.storage.hbase.ext.google.bigtable.project.id= *****
janusgraphmr.ioformat.conf.storage.hbase.ext.google.bigtable.instance.id= *****
janusgraphmr.ioformat.conf.storage.hbase.table= ******
janusgraphmr.ioformat.conf.storage.hbase.ext.hbase.regionsizecalculator.enable=false
# This defines the indexing backend configuration used while writing data to JanusGraph.
janusgraphmr.ioformat.conf.index.search.backend=elasticsearch
janusgraphmr.ioformat.conf.index.search.hostname=localhost
#
# SparkGraphComputer Configuration
#
spark.master=spark://RINMAC1714:7077
spark.executor.memory=1g
spark.executor.extraClassPath=/Users/rohit.pahan/portables/janusgraph-0.6.2/lib/*
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.kryo.registrator=org.janusgraph.hadoop.serialize.JanusGraphKryoRegistrator
The above configuration works and I get the result (the screenshot is not included here).
The Java API configuration below is not working for me.
GraphTraversalProvider.java:
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.commons.configuration.Configuration;
import org.apache.tinkerpop.gremlin.hadoop.Constants;
public class GraphTraversalProvider {
public static Configuration makeLocal() {
return make(true);
}
public static Configuration makeRemote() {
return make(false);
}
private static Configuration make(boolean local) {
final Configuration hadoopConfig = new BaseConfiguration();
hadoopConfig.setProperty("gremlin.graph", "org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph");
hadoopConfig.setProperty(Constants.GREMLIN_HADOOP_GRAPH_READER, "org.janusgraph.hadoop.formats.hbase.HBaseInputFormat");
hadoopConfig.setProperty(Constants.GREMLIN_HADOOP_GRAPH_WRITER, "org.apache.hadoop.mapreduce.lib.output.NullOutputFormat");
hadoopConfig.setProperty(Constants.GREMLIN_HADOOP_JARS_IN_DISTRIBUTED_CACHE, true);
hadoopConfig.setProperty(Constants.GREMLIN_HADOOP_INPUT_LOCATION, "none");
hadoopConfig.setProperty(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION, "output");
hadoopConfig.setProperty(Constants.GREMLIN_SPARK_PERSIST_CONTEXT, true);
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.storage.backend", "hbase");
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.storage.hbase.ext.hbase.client.connection.impl", "com.google.cloud.bigtable.hbase2_x.BigtableConnectio");
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.storage.hbase.ext.google.bigtable.project.id", "******");
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.storage.hbase.ext.google.bigtable.instance.id", "*******");
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.storage.hbase.table", "******");
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.storage.hbase.ext.hbase.regionsizecalculator.enable", false);
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.index.search.backend", "elasticsearch");
hadoopConfig.setProperty("janusgraphmr.ioformat.conf.index.search.hostname", "localhost");
if (local) {
hadoopConfig.setProperty("spark.master", "local[*]"); // Run Spark locally with as many worker threads as logical cores on your machine.
} else {
hadoopConfig.setProperty("spark.master", "spark://MAC1714:7077");
}
hadoopConfig.setProperty("spark.executor.memory", "1g");
hadoopConfig.setProperty(Constants.SPARK_SERIALIZER, "org.apache.spark.serializer.KryoSerializer");
hadoopConfig.setProperty("spark.kryo.registrator", "org.janusgraph.hadoop.serialize.JanusGraphKryoRegistrator");
hadoopConfig.setProperty("spark.kryo.registrationRequired","false");
return hadoopConfig;
}
}
Main class (RunSparkJob.java):
import org.apache.commons.configuration.Configuration;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;
public class RunSparkJob {
public static void main(String[] args) throws Exception {
runSpark();
}
private static void runSpark() throws Exception {
Configuration config = GraphTraversalProvider.makeRemote();
Graph hadoopGraph = GraphFactory.open(config);
Long totalVertices = hadoopGraph.traversal().withComputer(SparkGraphComputer.class).V().count().next();
System.out.println("IT WORKED: " + totalVertices);
hadoopGraph.close();
}
}
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.2.6.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.janus</groupId>
<artifactId>janus-spark</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>janus-spark</name>
<description>Demo project for Spring Boot</description>
<properties>
<janus.version>0.6.2</janus.version>
<spark.version>3.0.0</spark.version>
<gremlin.version>3.4.6</gremlin.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.janusgraph/janusgraph-bigtable -->
<dependency>
<groupId>org.janusgraph</groupId>
<artifactId>janusgraph-bigtable</artifactId>
<version>${janus.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.janusgraph/janusgraph-hadoop -->
<dependency>
<groupId>org.janusgraph</groupId>
<artifactId>janusgraph-hadoop</artifactId>
<version>${janus.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.janusgraph/janusgraph-hbase -->
<dependency>
<groupId>org.janusgraph</groupId>
<artifactId>janusgraph-hbase</artifactId>
<version>${janus.version}</version>
</dependency>
<dependency>
<groupId>org.janusgraph</groupId>
<artifactId>janusgraph-solr</artifactId>
<version>${janus.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.esotericsoftware.kryo/kryo -->
<dependency>
<groupId>com.esotericsoftware.kryo</groupId>
<artifactId>kryo</artifactId>
<version>2.16</version>
</dependency>
<!--
<dependency>
<groupId>com.twitter</groupId>
<artifactId>chill_2.13</artifactId>
<version>0.10.0</version>
</dependency>-->
<!-- GREMLIN -->
<dependency>
<groupId>org.apache.tinkerpop</groupId>
<artifactId>spark-gremlin</artifactId>
<version>${gremlin.version}</version>
<exclusions>
<exclusion>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</exclusion>
<exclusion>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.tinkerpop</groupId>
<artifactId>hadoop-gremlin</artifactId>
<version>${gremlin.version}</version>
</dependency>
<!-- SPARK -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.12</artifactId>
<version>${spark.version}</version>
<exclusions>
<exclusion>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>27.0-jre</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
Error Logs
SLF4J: Found binding in [jar:file:/Users/rohit.pahan/portables/janusgraph-0.6.2/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/rohit.pahan/portables/janusgraph-0.6.2/lib/logback-classic-1.1.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/rohit.pahan/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/rohit.pahan/.m2/repository/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
0 [main] WARN org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer - class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat does not implement PersistResultGraphAware and thus, persistence options are unknown -- assuming all options are possible
Exception in thread "main" java.lang.IllegalStateException: java.util.ServiceConfigurationError: org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.KryoShimService: Provider org.janusgraph.hadoop.serialize.JanusGraphKryoShimService could not be instantiated
at org.apache.tinkerpop.gremlin.process.computer.traversal.step.map.VertexProgramStep.processNextStart(VertexProgramStep.java:88)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:150)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:55)
at org.apache.tinkerpop.gremlin.process.computer.traversal.step.map.ComputerResultStep.processNextStart(ComputerResultStep.java:68)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:135)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:40)
at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:240)
at com.janus.app.services.RunSparkJob.runSpark(RunSparkJob.java:20)
at com.janus.app.services.RunSparkJob.main(RunSparkJob.java:14)
Caused by: java.util.concurrent.ExecutionException: java.util.ServiceConfigurationError: org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.KryoShimService: Provider org.janusgraph.hadoop.serialize.JanusGraphKryoShimService could not be instantiated
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
at org.apache.tinkerpop.gremlin.process.computer.traversal.step.map.VertexProgramStep.processNextStart(VertexProgramStep.java:68)
... 8 more
Caused by: java.util.ServiceConfigurationError: org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.KryoShimService: Provider org.janusgraph.hadoop.serialize.JanusGraphKryoShimService could not be instantiated
at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:582)
at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:804)
at java.base/java.util.ServiceLoader$ProviderImpl.get(ServiceLoader.java:722)
at java.base/java.util.ServiceLoader$3.next(ServiceLoader.java:1393)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.KryoShimServiceLoader.load(KryoShimServiceLoader.java:97)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.KryoShimServiceLoader.applyConfiguration(KryoShimServiceLoader.java:58)
at org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer.lambda$submitWithExecutor$1(SparkGraphComputer.java:248)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:831)
Caused by: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.tinkerpop.shaded.kryo.serializers.FieldSerializer" for class: java.util.concurrent.atomic.AtomicLong
at org.apache.tinkerpop.shaded.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:67)
at org.apache.tinkerpop.shaded.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:45)
at org.apache.tinkerpop.shaded.kryo.Kryo.newDefaultSerializer(Kryo.java:380)
at org.apache.tinkerpop.shaded.kryo.Kryo.getDefaultSerializer(Kryo.java:364)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoTypeReg.registerWith(GryoTypeReg.java:122)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoMapper.createMapper(GryoMapper.java:101)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoMapper.createMapper(GryoMapper.java:75)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoReader.<init>(GryoReader.java:71)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoReader.<init>(GryoReader.java:64)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoReader$Builder.create(GryoReader.java:302)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoPool.createPool(GryoPool.java:126)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoPool.access$100(GryoPool.java:40)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoPool$Builder.create(GryoPool.java:227)
at org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPools.initialize(HadoopPools.java:51)
at org.janusgraph.hadoop.serialize.JanusGraphKryoShimService.<init>(JanusGraphKryoShimService.java:30)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:78)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:780)
... 9 more
Caused by: java.lang.reflect.InvocationTargetException
at jdk.internal.reflect.GeneratedConstructorAccessor3.newInstance(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
at org.apache.tinkerpop.shaded.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:54)
... 29 more
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field private volatile long java.util.concurrent.atomic.AtomicLong.value accessible: module java.base does not "opens java.util.concurrent.atomic" to unnamed module #1d9b7cce
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:177)
at java.base/java.lang.reflect.Field.setAccessible(Field.java:171)
at org.apache.tinkerpop.shaded.kryo.serializers.FieldSerializer.buildValidFields(FieldSerializer.java:306)
at org.apache.tinkerpop.shaded.kryo.serializers.FieldSerializer.rebuildCachedFields(FieldSerializer.java:239)
at org.apache.tinkerpop.shaded.kryo.serializers.FieldSerializer.rebuildCachedFields(FieldSerializer.java:182)
at org.apache.tinkerpop.shaded.kryo.serializers.FieldSerializer.<init>(FieldSerializer.java:155)
... 34 more
Process finished with exit code 1
I am still exploring JanusGraph and its processing capabilities with Spark. I have given all the details here; let me know if any more details are required. It is a very new tech stack for me, and I would be grateful for any help.
<properties>
<janus.version>0.6.2</janus.version>
<spark.version>3.0.0</spark.version>
<gremlin.version>3.4.6</gremlin.version>
</properties>
JanusGraph-0.6.2 depends on TinkerPop-3.5.3.
Mixing in other TinkerPop versions can easily lead to this kind of problem.
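If the rest of the stack stays on JanusGraph 0.6.2, one way to align the versions (a sketch, not a verified build) would be to bump the TinkerPop property in the pom accordingly:
<properties>
    <janus.version>0.6.2</janus.version>
    <spark.version>3.0.0</spark.version>
    <gremlin.version>3.5.3</gremlin.version>
</properties>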

Using the Hive dialect to execute Hive DDL on Flink throws a NullPointerException

I am trying to execute a Hive DDL statement with the Stream Table API on Flink 1.13.2. The code looks like this:
String hiveDDL = ResourceUtil.readClassPathSource("hive-ddl.sql");
EnvironmentSettings settings = EnvironmentSettings.newInstance()
.useBlinkPlanner()
.inStreamingMode().build();
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
String name = "hive";
String defaultDatabase = "stream";
String hiveConfDir = "conf";
HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir);
tableEnv.registerCatalog("hive", hive);
tableEnv.useCatalog("hive");
tableEnv.useDatabase("stream");
tableEnv.executeSql("DROP TABLE IF EXISTS dimension_table");
// set the Hive dialect
tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
tableEnv.executeSql(hiveDDL);
The Hive server is on CDH 5.14.2, and the DDL looks like this:
CREATE TABLE dimension_table (
product_id STRING,
product_name STRING,
unit_price DECIMAL(10, 4),
pv_count BIGINT,
like_count BIGINT,
comment_count BIGINT,
update_time TIMESTAMP(3),
update_user STRING
)
PARTITIONED BY (
pt_year STRING,
pt_month STRING,
pt_day STRING
)
TBLPROPERTIES (
-- using default partition-name order to load the latest partition every 12h (the most recommended and convenient way)
'streaming-source.enable' = 'true',
'streaming-source.partition.include' = 'latest',
'streaming-source.monitor-interval' = '12 h',
'streaming-source.partition-order' = 'partition-name', -- option with default value, can be ignored.
-- using partition file create-time order to load the latest partition every 12h
'streaming-source.enable' = 'true',
'streaming-source.partition.include' = 'latest',
'streaming-source.partition-order' = 'create-time',
'streaming-source.monitor-interval' = '12 h'
-- using partition-time order to load the latest partition every 12h
'streaming-source.enable' = 'true',
'streaming-source.partition.include' = 'latest',
'streaming-source.monitor-interval' = '12 h',
'streaming-source.partition-order' = 'partition-time',
'partition.time-extractor.kind' = 'default',
'partition.time-extractor.timestamp-pattern' = '$pt_year-$pt_month-$pt_day 00:00:00'
)
Then I run it, but it throws a NullPointerException:
2021-11-18 15:33:00,387 INFO [org.apache.flink.table.catalog.hive.HiveCatalog] - Setting hive conf dir as conf
2021-11-18 15:33:00,481 WARN [org.apache.hadoop.util.NativeCodeLoader] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-11-18 15:33:01,345 INFO [org.apache.flink.table.catalog.hive.HiveCatalog] - Created HiveCatalog 'hive'
2021-11-18 15:33:01,371 INFO [hive.metastore] - Trying to connect to metastore with URI thrift://cdh-dev-node-119:9083
2021-11-18 15:33:01,441 INFO [hive.metastore] - Opened a connection to metastore, current connections: 1
2021-11-18 15:33:01,521 INFO [hive.metastore] - Connected to metastore.
2021-11-18 15:33:01,856 INFO [org.apache.flink.table.catalog.hive.HiveCatalog] - Connected to Hive metastore
2021-11-18 15:33:01,899 INFO [org.apache.flink.table.catalog.CatalogManager] - Set the current default catalog as [hive] and the current default database as [stream].
2021-11-18 15:33:03,290 INFO [org.apache.hadoop.hive.ql.session.SessionState] - Created local directory: /var/folders/4m/n1wgh7rd2yqfv301kq00l4q40000gn/T/681dd0aa-ba35-4a0e-b069-3ad48f030774_resources
2021-11-18 15:33:03,298 INFO [org.apache.hadoop.hive.ql.session.SessionState] - Created HDFS directory: /tmp/hive/chenxiaojun/681dd0aa-ba35-4a0e-b069-3ad48f030774
2021-11-18 15:33:03,305 INFO [org.apache.hadoop.hive.ql.session.SessionState] - Created local directory: /var/folders/4m/n1wgh7rd2yqfv301kq00l4q40000gn/T/chenxiaojun/681dd0aa-ba35-4a0e-b069-3ad48f030774
2021-11-18 15:33:03,311 INFO [org.apache.hadoop.hive.ql.session.SessionState] - Created HDFS directory: /tmp/hive/chenxiaojun/681dd0aa-ba35-4a0e-b069-3ad48f030774/_tmp_space.db
2021-11-18 15:33:03,314 INFO [org.apache.hadoop.hive.ql.session.SessionState] - No Tez session required at this point. hive.execution.engine=mr.
Exception in thread "main" java.lang.NullPointerException
at org.apache.flink.table.catalog.hive.client.HiveShimV100.registerTemporaryFunction(HiveShimV100.java:422)
at org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:217)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:724)
at com.hacker.flinksql.hive.HiveSqlTest.main(HiveSqlTest.java:48)
I found the failing code in flink-1.13.2, in org.apache.flink.table.catalog.hive.client.HiveShimV100.java at line 422. The params of this method are null. The code:
@Override
public void registerTemporaryFunction(String funcName, Class funcClass) {
    try {
        registerTemporaryFunction.invoke(null, funcName, funcClass);
    } catch (IllegalAccessException | InvocationTargetException e) {
        throw new FlinkHiveException("Failed to register temp function", e);
    }
}
My Maven dependencies:
<properties>
<hadoop.version>2.6.0-cdh5.14.2</hadoop.version>
<hive.version>1.1.0-cdh5.14.2</hive.version>
</properties>
<!-- flink sql core -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.5</version>
<scope>provided</scope>
</dependency>
<!-- hive catalog -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-hive_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-exec</artifactId>
<version>${hive.version}</version>
<scope>provided</scope>
</dependency>
<!-- catalog hadoop dependency -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.6.0-cdh5.15.2</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-mapreduce-client-core</artifactId>
<version>2.6.0-cdh5.15.2</version>
<scope>provided</scope>
</dependency>
I created an issue at https://issues.apache.org/jira/browse/FLINK-24950
Is this a bug in the Flink source, and what should I do?

Spring - CrudRepository is not created

The CrudRepository bean
@Repository
public interface UserDao extends CrudRepository<User, Long>
{
List<User> findByFirstNameAndLastName(String firstName, String lastName);
}
for the User resource
@Entity
@Table(name="User")
public class User
{
@Id
@GeneratedValue
private long id;
@Column(name="first_name")
private String firstName;
@Column(name="last_name")
private String lastName;
/* --- Getters,setters,default constructor -------*/
}
is not created when I start my Spring Boot app:
Field dao in base.package.service.UserService required a bean
of type 'base.package.dao.UserDao' that could not be found.
but the packages are definitely scanned
@SpringBootApplication(scanBasePackages= {"base.package"})
I have a strong suspicion that it has something to do with the embedded H2 database that I am using. I am trying to create the User table in schema.sql:
CREATE TABLE IF NOT EXISTS User (
id INTEGER NOT NULL AUTO_INCREMENT,
first_name VARCHAR(128) NOT NULL,
last_name VARCHAR(128) NOT NULL,
PRIMARY KEY (id)
);
but as soon as I leave out IF NOT EXISTS it throws an error (table already exists). So, first of all, this means that Spring takes charge of creating the schema. But I get the feeling that the table is not recreated, because in the data.sql script
INSERT INTO User (id,first_name,last_name) VALUES (1,'Vincent', 'Vega');
I always have to manually increment the id on startup, otherwise I get a
org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY KEY ON PUBLIC.USER(ID)"
Here are the JPA properties from application.properties:
# below properties create schema, so schema.sql is redundant
spring.jpa.generate-ddl=true
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.initialization-mode=always
spring.jpa.database=H2
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.show-sql=true
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.url=jdbc:h2:~/testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=TRUE
spring.datasource.name=testdb
spring.h2.console.enabled=true
spring.h2.console.path=/h2-console
The reason I think the table is never recreated on startup is that I get this error when the app closes:
2018-02-02 09:27:47.907 INFO [restartedMain] [StandardService] Stopping service [Tomcat]
2018-02-02 09:27:47.907 WARN [localhost-startStop-1] [WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [MVStore background writer nio:C:/Users/user/testdb.mv.db] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
This is followed by the exception saying that the UserDao bean is not created.
***************************
APPLICATION FAILED TO START
***************************
Description:
Field dao in base.package.user.service.UserService required a bean of type 'base.package.user.dao.UserDao' that could not be found.
Action:
Consider defining a bean of type 'base.package.user.dao.UserDao' in your configuration.
pom.xml:
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.0.M7</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<spring-cloud.version>Finchley.M5</spring-cloud.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
/* .... */
EDIT
Here is the UserService
@Service
public class UserService
{
@Autowired private UserDao dao;
public List<User> findByFirstNameAndLastName(String firstName, String lastName)
{
return dao.findByFirstNameAndLastName(firstName, lastName);
}
public User save(final User user)
{
return dao.save(user);
}
}
EDIT 2
Now I am getting
Caused by: java.lang.IllegalArgumentException: Not a managed type: class com.project.bot.user.User
at org.hibernate.metamodel.internal.MetamodelImpl.managedType(MetamodelImpl.java:472)
at org.springframework.data.jpa.repository.support.JpaMetamodelEntityInformation.<init>(JpaMetamodelEntityInformation.java:72)
at org.springframework.data.jpa.repository.support.JpaEntityInformationSupport.getEntityInformation(JpaEntityInformationSupport.java:66)
at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getEntityInformation(JpaRepositoryFactory.java:169)
at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getTargetRepository(JpaRepositoryFactory.java:107)
at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getTargetRepository(JpaRepositoryFactory.java:90)
at org.springframework.data.repository.core.support.RepositoryFactorySupport.getRepository(RepositoryFactorySupport.java:300)
at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.lambda$afterPropertiesSet$3(RepositoryFactoryBeanSupport.java:287)
at org.springframework.data.util.Lazy.getNullable(Lazy.java:141)
at org.springframework.data.util.Lazy.get(Lazy.java:63)
at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.afterPropertiesSet(RepositoryFactoryBeanSupport.java:290)
at org.springframework.data.jpa.repository.support.JpaRepositoryFactoryBean.afterPropertiesSet(JpaRepositoryFactoryBean.java:102)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1769)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1706)
... 102 common frames omitted
The table not being created should not have any impact on whether the bean is defined or not.
I think the problem you have here is that you are not instantiating your repository beans. Spring Data JPA repository beans are not picked up by component scans, since they are only interfaces; the @Repository annotation actually does nothing here.
Spring Data JPA repository beans are created dynamically, provided you have supplied @EnableJpaRepositories in your configuration.
You may also need to add @EntityScan to make sure all your @Entity classes are recognised by Spring:
@SpringBootApplication(scanBasePackages= {"base.package"})
@EnableJpaRepositories("base.package")
@EntityScan("base.package")
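Putting the three annotations together on the application class would look roughly like this (a sketch; base.package and the class name are placeholders taken from the question):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@SpringBootApplication(scanBasePackages = {"base.package"})
@EnableJpaRepositories("base.package")
@EntityScan("base.package")
public class Application
{
    public static void main(String[] args)
    {
        SpringApplication.run(Application.class, args);
    }
}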

Spring boot - MySQL settings are not working

I am trying to develop an application using Spring Boot and MySQL. As the documentation says, I first created the project with Spring Initializr in IntelliJ IDEA, configured the application.properties file, and wrote the schema-mysql.sql and data-mysql.sql files. After I ran the project, I found there were no tables or data in the MySQL database. What is wrong with my configuration? Please help.
The application.properties file:
spring.datasource.url=jdbc:mysql://localhost:3306/testdb?useSSL=false
spring.datasource.username=root
spring.datasource.password=password
spring.datasource.testWhileIdle=true
spring.datasource.validationQuery=SELECT 1
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql = true
spring.jpa.hibernate.naming-strategy = org.hibernate.cfg.ImprovedNamingStrategy
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
spring.datasource.schema=schema-mysql.sql
spring.datasource.data=data-mysql.sql
Dependencies in the pom.xml file:
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
The schema-mysql.sql file:
CREATE TABLE IF NOT EXISTS `SOL16_USERS` (
`USERNAME` VARCHAR(200) NOT NULL,
`PASSWORD` VARCHAR(200) NOT NULL,
PRIMARY KEY (`USERNAME`)
);
CREATE TABLE IF NOT EXISTS `SOL16_PRIVILEGES` (
`PRIVILEGE_ID` INT(2) NOT NULL,
`PRIVILEGE` VARCHAR(15) NOT NULL,
PRIMARY KEY (`PRIVILEGE_ID`)
);
CREATE TABLE IF NOT EXISTS `SOL16_USER_PRIVILEGES` (
`USERNAME` VARCHAR(200) NOT NULL,
`PRIVILEGE_ID` VARCHAR(2) NOT NULL,
PRIMARY KEY (`USERNAME`)
);
And the file/directory structure is:
src
|----main
| |----java
| |----resources
| | |----static
| | |----templates
| | |----application.properties
| | |----data-mysql.sql
| | |----schema-mysql.sql
As the Spring Boot documentation mentions, what fixed my issue was adding spring.datasource.platform to application.properties. That is what I had been missing in order to initialize the schema using schema-{platform}.sql and data-{platform}.sql.
{platform} = value of spring.datasource.platform
So my final application.properties file is:
spring.datasource.url = jdbc:mysql://localhost:3306/testdb?useSSL=false
spring.datasource.username = root
spring.datasource.password = password
spring.datasource.testWhileIdle = true
spring.datasource.validationQuery = SELECT 1
spring.datasource.driverClassName = com.mysql.jdbc.Driver
spring.jpa.hibernate.ddl-auto = validate
spring.jpa.show-sql = true
spring.jpa.hibernate.naming-strategy = org.hibernate.cfg.ImprovedNamingStrategy
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
spring.datasource.platform=mysql
spring.datasource.schema=schema-mysql.sql
spring.datasource.data=data-mysql.sql
spring.datasource.initialize=true
spring.datasource.continue-on-error=true
I faced the same issue and resolved it by applying the property below in Spring Boot 2.0:
spring.datasource.initialization-mode=always
You need to add the code below to application.properties, then open your XAMPP/WAMP (or other MySQL server) and create your database, for example studentdb; use the same name in the application.properties file:
spring.datasource.url=jdbc:mysql://localhost:3306/yourdatabase
spring.datasource.username=root
spring.datasource.password=
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.database-platform = org.hibernate.dialect.MySQL5Dialect
spring.jpa.generate-ddl=true
spring.jpa.hibernate.ddl-auto = update
Please use one of the property values given below:
spring.jpa.hibernate.ddl-auto=create
or
spring.jpa.hibernate.ddl-auto=update

Error while trying to connect to a Cassandra database using Spark Streaming

I'm working on a project which uses Spark Streaming, Apache Kafka and Cassandra.
I use the streaming-kafka integration. In Kafka I have a producer which sends data using this configuration:
props.put("metadata.broker.list", KafkaProperties.ZOOKEEPER);
props.put("bootstrap.servers", KafkaProperties.SERVER);
props.put("client.id", "DemoProducer");
where ZOOKEEPER = localhost:2181, and SERVER = localhost:9092.
Once I send data I can receive it with Spark, and I can consume it too. My Spark configuration is:
SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
sparkConf.set("spark.cassandra.connection.host", "localhost");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
After that I am trying to store this data into the Cassandra database, but when I try to open a session using this:
CassandraConnector connector = CassandraConnector.apply(jssc.sparkContext().getConf());
Session session = connector.openSession();
I get the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:334)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:182)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:70)
at org.kakfa.spark.ConsumerData.main(ConsumerData.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
As for Cassandra, I'm using the default configuration:
start_native_transport: true
native_transport_port: 9042
- seeds: "127.0.0.1"
cluster_name: 'Test Cluster'
rpc_address: localhost
rpc_port: 9160
start_rpc: true
I can connect to Cassandra from the command line using cqlsh localhost, getting the following message:
Connected to Test Cluster at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4] Use HELP for help. cqlsh>
I used nodetool status too, which shows me this:
http://pastebin.com/ZQ5YyDyB
To run Cassandra I invoke bin/cassandra -f.
What I am trying to run is this:
try (Session session = connector.openSession()) {
System.out.println("dentro del try");
session.execute("DROP KEYSPACE IF EXISTS test");
System.out.println("dentro del try - 1");
session.execute("CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
System.out.println("dentro del try - 2");
session.execute("CREATE TABLE test.users (id TEXT PRIMARY KEY, name TEXT)");
System.out.println("dentro del try - 3");
}
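For context, the kind of write I ultimately want to do once the session opens, using the connector's Java API, looks roughly like the sketch below (the User bean and sample data are placeholders of my own; the test.users table is the one created above):
// Hedged sketch: save a couple of bean instances into test.users via the connector's Java API.
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

import java.io.Serializable;
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SaveToCassandraSketch {
    // Placeholder bean whose properties (id, name) match the test.users columns.
    public static class User implements Serializable {
        private String id;
        private String name;
        public User() {}
        public User(String id, String name) { this.id = id; this.name = name; }
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("SaveToCassandraSketch")
                .setMaster("local[2]")
                .set("spark.cassandra.connection.host", "localhost");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Two sample rows, standing in for the data coming from the Kafka stream.
        JavaRDD<User> users = sc.parallelize(Arrays.asList(
                new User("1", "Alice"), new User("2", "Bob")));

        // Map the bean properties to the table columns and write them to Cassandra.
        javaFunctions(users)
                .writerBuilder("test", "users", mapToRow(User.class))
                .saveToCassandra();

        sc.stop();
    }
}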
My pom.xml file looks like this:
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.6.0-M1</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.6.0-M2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20160212</version>
</dependency>
</dependencies>
I have no idea why I can't connect to Cassandra from Spark. Is my configuration bad, or what am I doing wrong?
Thank you!
com.datastax.driver.core.exceptions.InvalidQueryException:
unconfigured table schema_keyspaces)
That error indicates an old driver being used against a new Cassandra version. Looking at the POM file, we find the spark-cassandra-connector dependency declared twice: one uses version 1.6.0-M2 (good) and the other 1.1.0-alpha2 (old).
Remove the references to the old 1.1.0-alpha2 dependencies from your config:
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
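After removing them, the connector entries that remain (keeping the newer versions already declared in the question) would be just:
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.6.0-M1</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.6.0-M2</version>
</dependency>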
