I am not able to create a connection to JanusGraph to define the schema.
JanusGraph graph = JanusGraphFactory.build()
        .set("storage.backend", "cql")
        // .set("storage.cql.keyspace", "janusgraph")
        .set("storage.hostname", "url")
        .open();
Error:
java.lang.IllegalArgumentException: Could not find implementation class: org.janusgraph.diskstorage.cql.CQLStoreManager
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:75)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:530)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:494)
I am able to run a normal TinkerPop Gremlin query with the following configuration:
@Bean
public Cluster cluster() {
    return Cluster.build()
            .addContactPoint(dbUrl)
            .port(dbPort)
            .serializer(new GraphBinaryMessageSerializerV1())
            .maxConnectionPoolSize(5)
            .maxInProcessPerConnection(1)
            .maxSimultaneousUsagePerConnection(10)
            .create();
}

@Bean
public GraphTraversalSource g(Cluster cluster) throws Exception {
    // return traversal().withRemote(DriverRemoteConnection.using(cluster));
    return traversal().withRemote("conf/remote-graph.properties");
}
I want to define the schema during application startup, and I am trying to use openManagement().
When writing a Java application using JanusGraph, you can choose between embedding JanusGraph in your application or connecting to a JanusGraph Server. Your code suggests you are attempting the embedded option, so you can start from the example in the provided link.
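As a side note, the "Could not find implementation class: org.janusgraph.diskstorage.cql.CQLStoreManager" error usually means the janusgraph-cql module is missing from the classpath, so check your dependencies first. Once the embedded graph opens, the schema can be defined at startup through openManagement(); a minimal sketch (the label and property names are just placeholders):

// import org.janusgraph.core.schema.JanusGraphManagement;
JanusGraphManagement mgmt = graph.openManagement();
mgmt.makeVertexLabel("person").make();                      // example vertex label
mgmt.makePropertyKey("name").dataType(String.class).make(); // example property key
mgmt.commit();                                              // persist the schema changes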
I have a Spring Boot app that uses Hibernate and Spring Batch. I am using PostgreSQL for my backend database.
My project has 2 different data sources configured: one for Hibernate and one for Spring Batch. They are both in the same database but in different schemas.
My Spring Batch connection string is the following:
spring.batch.datasource.url=jdbc:postgresql://localhost:5432/mytestapp?currentSchema=springbatch
I have the DataSource configured in the following way:
#Bean("b_ds_prop")
#ConfigurationProperties("spring.batch.datasource")
public DataSourceProperties dataSourceProperties() {
return new DataSourceProperties();
}
#Bean("b_ds")
#DependsOn({"b_ds_prop"})
public DataSource batchFrameworkDatasource(DataSourceProperties dataSourceProperties) {
System.out.println("start print");
// dataSourceProperties.getSchema().stream().forEach(System.out::println);
// System.out.println("datasrcprop is null : " + (dataSourceProperties==null));
// System.out.println("datasrcprop.schema is null : " + (dataSourceProperties.getSchema()==null));
System.out.println("end print");
// dataSourceProperties.setSchema(Arrays.asList("spring_batch"));
return dataSourceProperties.initializeDataSourceBuilder().build();
}
#Bean
#DependsOn({"b_ds"})
public BatchConfigurer defaultBatchConfigurer(#Qualifier("b_ds") DataSource dataSource) {
return new DefaultBatchConfigurer(dataSource);
}
The currentSchema=springbatch setting is not respected, no matter what I do. All my Spring Batch tables keep ending up in the public schema in Postgres.
The schema named springbatch is already present in my database mytestapp.
I have tried currentSchema=springbatch,public, and I have even tried spring.batch.table-prefix=springbatch..
I have tried everything, but I still cannot understand why my batch tables keep ending up in the public schema.
Note that I have another connection string in my project:
spring.hib.datasource.url=jdbc:postgresql://localhost:5432/mytestapp
I have also tried programmatically checking the value of schema in DataSourceProperties:
System.out.println("datasrcprop.schema is null : " + (dataSourceProperties.getSchema() == null));
This evaluates to true.
How can I set the schema to be different for Spring Batch?
By default, Spring Boot automatically creates the Spring Batch metadata tables at startup. It executes the default script located in the org.springframework.batch.core package; in your situation, this script is schema-postgresql.sql. This script creates tables in the PostgreSQL default schema, which is public if nothing is specified (and since we don't know which datasource Spring Boot uses to create the Spring Batch metadata tables, we don't know what the default schema is).
When executing the creation script, Spring Boot does not take the spring.batch.table-prefix property into account; that property is only used to tell Spring Batch where the metadata tables are.
If you want to control this Spring Boot feature, you should either:
modify the creation script (add the springbatch schema) and give Spring Boot the path to the modified script via the spring.batch.schema property (see the sketch after this list)
OR
create the Spring Batch metadata tables yourself before starting the application and disable Spring Boot's auto-creation (spring.batch.initialize-schema=never)
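For the first option, the wiring might look like this (the script name is illustrative; the file would be a copy of schema-postgresql.sql with the springbatch schema added to each statement):

spring.batch.schema=classpath:schema-postgresql-springbatch.sql
spring.batch.table-prefix=springbatch.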
Good luck
Have you tried this configuration:
spring.datasource.url=jdbc:postgresql://localhost:5432/mytestapp
#spring batch properties
spring.batch.job.enabled=false
spring.batch.initializer.enabled=false
spring.batch.initialize-schema=never
spring.batch.table-prefix=springbatch.
I am trying to create a TCP server and client by reading property files that contain the connection details.
I am using dynamic and runtime integration flows with the help of the following reference document (9.20 Dynamic and Runtime Integration Flows).
The code works fine when creating the client, but when I create the server using the same approach with the following changes:
IntegrationFlow flow = f -> f
        .handle(Tcp.inboundAdapter(Tcp.netServer(2221)
                .serializer(TcpCodecs.crlf())
                .deserializer(TcpCodecs.lengthHeader1())
                .id("server")))
        .transform(Transformers.objectToString());

IntegrationFlowRegistration theFlow = this.flowContext.registration(flow).register();
I am getting the following error:
Caused by: java.lang.IllegalArgumentException: Found ambiguous parameter type [class java.lang.String] for method match: [public java.lang.Class<?> org.springframework.integration.dsl.IntegrationComponentSpec.getObjectType(), public S org.springframework.integration.dsl.MessageProducerSpec.outputChannel(java.lang.String), public S org.springframework.integration.dsl.MessageProducerSpec.outputChannel(org.springframework.messaging.MessageChannel), public org.springframework.integration.ip.dsl.TcpInboundChannelAdapterSpec org.springframework.integration.ip.dsl.TcpInboundChannelAdapterSpec.taskScheduler(org.springframework.scheduling.TaskScheduler), public S org.springframework.integration.dsl.MessageProducerSpec.errorMessageStrategy(org.springframework.integration.support.ErrorMessageStrategy), public S org.springframework.integration.dsl.MessageProducerSpec.phase(int), public S org.springframework.integration.dsl.MessageProducerSpec.autoStartup(boolean), public S org.springframework.integration.dsl.MessageProducerSpec.sendTimeout(long)]
at org.springframework.util.Assert.isNull(Assert.java:155)
at org.springframework.integration.util.MessagingMethodInvokerHelper.findHandlerMethodsForTarget(MessagingMethodInvokerHelper.java:843)
at org.springframework.integration.util.MessagingMethodInvokerHelper.<init>(MessagingMethodInvokerHelper.java:362)
at org.springframework.integration.util.MessagingMethodInvokerHelper.<init>(MessagingMethodInvokerHelper.java:231)
at org.springframework.integration.util.MessagingMethodInvokerHelper.<init>(MessagingMethodInvokerHelper.java:225)
at org.springframework.integration.handler.MethodInvokingMessageProcessor.<init>(MethodInvokingMessageProcessor.java:60)
at org.springframework.integration.handler.ServiceActivatingHandler.<init>(ServiceActivatingHandler.java:38)
at org.springframework.integration.dsl.IntegrationFlowDefinition.handle(IntegrationFlowDefinition.java:924)
at org.springframework.integration.dsl.IntegrationFlowDefinition.handle(IntegrationFlowDefinition.java:904)
at org.springframework.integration.dsl.IntegrationFlowDefinition.handle(IntegrationFlowDefinition.java:891)
at org.springframework.integration.samples.dynamictcp.DynamicTcpClientApplication.lambda$1(DynamicTcpClientApplication.java:194)
at org.springframework.integration.config.dsl.IntegrationFlowBeanPostProcessor.processIntegrationFlowImpl(IntegrationFlowBeanPostProcessor.java:268)
at org.springframework.integration.config.dsl.IntegrationFlowBeanPostProcessor.postProcessBeforeInitialization(IntegrationFlowBeanPostProcessor.java:96)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:423)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1702)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:583)
... 16 common frames omitted
Please help me with the above issue.
Also, I have found sample code for a dynamic TCP client, but no code is present for a dynamic TCP server (any resource or link I can use as a starting point for creating a dynamic server would be appreciated).
You are mixing responsibilities. The Tcp.inboundAdapter() must be the first element in the IntegrationFlow chain. Consider using this instead:
IntegrationFlow flow =
        IntegrationFlows.from(Tcp.inboundAdapter(Tcp.netServer(2221)
                        .serializer(TcpCodecs.crlf())
                        .deserializer(TcpCodecs.lengthHeader1())
                        .id("server")))
                .transform(Transformers.objectToString())
                .get();
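The dynamic registration itself then stays the same as in your client code:

IntegrationFlowRegistration registration = this.flowContext.registration(flow).register();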
All the examples I have read related to ActiveMQ and Spring Boot use a special property to change the broker URL:
spring.activemq.broker-url=<SOME_URL>
By default it uses the default settings: the default URL and the default port.
But I use RabbitMQ, and I want to know how to change the broker URL.
I've read this one.
I've added application.properties to src/main/resources
with the following content (the host is deliberately wrong; I expected to see an error):
spring.rabbitmq.host=olololo
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
But it doesn't affect the application.
It looks like Spring Boot doesn't read these properties.
Spring Boot does not have auto-configuration support for rabbitmq-jms (the link you referenced is about the native RabbitMQ AMQP auto-configuration).
For the JMS connection factory, you will have to do the configuration yourself...
@Bean
public RMQConnectionFactory connectionFactory(@Value("${spring.rabbitmq.host}") String host,
        @Value("${spring.rabbitmq.port}") int port) {
    RMQConnectionFactory cf = new RMQConnectionFactory();
    cf.setHost(host);
    cf.setPort(port);
    return cf;
}
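With the connection factory defined, the rest of Spring's JMS support can be layered on top of it as usual; a minimal sketch:

// org.springframework.jms.core.JmsTemplate; RMQConnectionFactory implements javax.jms.ConnectionFactory
@Bean
public JmsTemplate jmsTemplate(RMQConnectionFactory connectionFactory) {
    return new JmsTemplate(connectionFactory);
}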
I'm using Spring Data JPA with Hibernate as the persistence provider, in conjunction with a remote MySQL 5 server, for a job that periodically replicates a subset of internal data. The job (a Quartz-scheduled Java application) runs once per day and needs approximately 30 seconds to complete the synchronization. For safety reasons we don't want to open the remote server to direct connections from outside (i.e. other than localhost).
I've seen examples with JSch to programmatically set up an SSH tunnel, but could not find any resources on how to integrate JSch with Spring Data. One problem I'm seeing is that certain of my Spring beans (e.g. org.apache.commons.configuration.DatabaseConfiguration) are created at application startup and already need access to the datasource.
I could open the SSH tunnel outside of the application, but then it would be open all the time; I want to avoid that since I only need it open for 30 seconds per day.
EDIT:
After some research I found several ways to get an SSH tunnel:
A) Implementing my own DataSource (I extended org.springframework.jdbc.datasource.DriverManagerDataSource) and then using @PostConstruct and @PreDestroy to set up / close the SSH tunnel with JSch
--> Problem: the SSH tunnel remains open for the lifetime of the application, which is not what I want
B) Implementing my own Driver (I extended com.mysql.jdbc.Driver) and overriding connect() to create the SSH tunnel before the connection
--> Problem: I'm not able to close the SSH tunnel connection
Any more suggestions are welcome.
If you have a DataSource bean in your Spring configuration, you can create your own DataSource implementation that opens an SSH tunnel before attempting to make a connection using the provided JDBC URL. As an example, consider the following configuration that uses a HikariDataSource:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource">
<bean class="com.zaxxer.hikari.HikariDataSource" destroy-method="close">...</bean>
</property>
</bean>
You can extend the HikariDataSource class to provide your own implementation; an example is below:
class TunneledHikariDataSource extends HikariDataSource implements InitializingBean {

    private boolean createTunnel = true;
    private int tunnelPort = 3306;

    public void afterPropertiesSet() {
        if (createTunnel) {
            // 1. Extract remote host name from the JDBC URL.
            // 2. Extract/infer remote tunnel port (e.g. 3306) from the JDBC URL.
            // 3. Create a tunnel using JSch and the sample code at
            //    http://www.jcraft.com/jsch/examples/PortForwardingL.java.html
            ...
        }
    }
}
Then, instantiate a bean instance for the custom class instead of HikariDataSource.
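If you configure the context with Java config instead of XML, the same swap might look like this (a minimal sketch; the JDBC URL and credentials are placeholders):

@Bean(destroyMethod = "close")
public DataSource dataSource() {
    TunneledHikariDataSource ds = new TunneledHikariDataSource();
    ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // local end of the tunnel
    ds.setUsername("dbuser");
    ds.setPassword("secret");
    return ds;
}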
Expanding on @manish's answer, here is my solution that works with Spring Boot in 2022:
@Configuration
class DataSourceInitializer {
    @Bean
    fun dataSource(properties: DataSourceProperties): DataSource {
        return properties.initializeDataSourceBuilder().type(SshTunnelingHikariDataSource::class.java).build()
    }
}
class SshTunnelingHikariDataSource : HikariDataSource(), InitializingBean {

    override fun afterPropertiesSet() {
        val jsch = JSch()
        // Load the private key (id_rsa) from the classpath; skip if you use password auth
        val filePath = javaClass.classLoader.getResource("id_rsa")?.toURI()?.path
        jsch.addIdentity(filePath, "optional_key_file_passphrase")
        val session = jsch.getSession("root", "remote-host.com")
        val config = Properties()
        config["StrictHostKeyChecking"] = "no"
        session.setConfig(config)
        session.connect()
        // Forward local port 3307 to db-host.com:3306 through the SSH session
        session.setPortForwardingL(3307, "db-host.com", 3306)
    }
}
Depending on your connection you may not need the identity file (i.e. a private key); instead you may need username + password, which you can provide at getSession.
remote-host.com is the host where you want your SSH session to terminate, which has access to the database.
db-host.com is a host that your remote host can resolve. It may as well be localhost if the database is running locally on the remote host.
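With the port forwarding above in place, the application's JDBC URL should point at the local end of the tunnel rather than at db-host.com directly, along these lines (database name illustrative):

spring.datasource.url=jdbc:mysql://localhost:3307/mydb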
I need to create a connection pool from a Spring application running in a Tomcat server.
This application has many catalogs; the main catalog (which is static), called 'db', has just one table with all existing catalog names and a boolean flag marking the active one.
When the application starts, I need to pick the active catalog from the main one and select it as the default catalog.
How can I accomplish this?
Until now I used a custom class DataSourceSelector extends DriverManagerDataSource, but now I need to improve the DB connection by using a pool, so I thought about a Tomcat DBCP pool.
I would suggest the following steps:
Extend BasicDataSourceFactory to produce customized BasicDataSources (see the sketch after this list).
Those customized BasicDataSources would already know which catalog is active and have the defaultCatalog property set accordingly.
Use your extended BasicDataSourceFactory in the Tomcat configuration.
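A sketch of such a factory, assuming Commons DBCP 2; the catalog table and column names are placeholders for the single table in your static 'db' catalog, and findActiveCatalog is a hypothetical helper:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.Name;
import org.apache.commons.dbcp2.BasicDataSource;
import org.apache.commons.dbcp2.BasicDataSourceFactory;

public class CatalogAwareDataSourceFactory extends BasicDataSourceFactory {

    @Override
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
            Hashtable<?, ?> environment) throws Exception {
        BasicDataSource ds = (BasicDataSource) super.getObjectInstance(obj, name, nameCtx, environment);
        if (ds != null) {
            // Set the active catalog as the default before the pool is handed out
            ds.setDefaultCatalog(findActiveCatalog(ds));
        }
        return ds;
    }

    // Hypothetical lookup: reads the catalog table in the static 'db' catalog
    private String findActiveCatalog(BasicDataSource ds) throws Exception {
        try (Connection con = ds.getConnection();
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery(
                        "SELECT catalog_name FROM db.catalogs WHERE active = TRUE")) {
            rs.next();
            return rs.getString(1);
        }
    }
}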
@Configuration
public class DataAccessConfiguration {

    @Bean(destroyMethod = "close")
    public javax.sql.DataSource dataSource() {
        org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost/db");
        ds.setUsername("javauser");
        ds.setPassword("");
        ds.setInitialSize(5);
        ds.setMaxActive(10);
        ds.setMaxIdle(5);
        ds.setMinIdle(2);
        return ds;
    }
}