Reactive Hibernate on Quarkus with Flyway

I'm facing a problem when trying to use the Quarkus Flyway extension together with Quarkus Reactive Hibernate & RESTEasy. When starting my application, I get the following error:
[io.qu.ru.Application] (Quarkus Main Thread) Failed to start application (with profile dev): java.lang.IllegalStateException: Booting an Hibernate Reactive serviceregistry on a non-reactive RecordedState!
at io.quarkus.hibernate.reactive.runtime.boot.registry.PreconfiguredReactiveServiceRegistryBuilder.checkIsReactive(PreconfiguredReactiveServiceRegistryBuilder.java:76)
at io.quarkus.hibernate.reactive.runtime.boot.registry.PreconfiguredReactiveServiceRegistryBuilder.<init>(PreconfiguredReactiveServiceRegistryBuilder.java:66)
at io.quarkus.hibernate.reactive.runtime.FastBootHibernateReactivePersistenceProvider.rewireMetadataAndExtractServiceRegistry(FastBootHibernateReactivePersistenceProvider.java:177)
at io.quarkus.hibernate.reactive.runtime.FastBootHibernateReactivePersistenceProvider.getEntityManagerFactoryBuilderOrNull(FastBootHibernateReactivePersistenceProvider.java:156)
at io.quarkus.hibernate.reactive.runtime.FastBootHibernateReactivePersistenceProvider.createEntityManagerFactory(FastBootHibernateReactivePersistenceProvider.java:82)
Here are the relevant Quarkus configurations:
quarkus:
  datasource:
    db-kind: "postgresql"
    username: "sarah"
    password: "connor"
    jdbc:
      ~: true
      url: "jdbc:postgresql://localhost:5432/mybase"
    reactive:
      ~: true
      url: "postgresql://localhost:5432/mybase"
And the relevant dependencies:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-reactive-panache</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-reactive-pg-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-flyway</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>
Disabling the JDBC configuration with ~: false avoids the exception, but then the application does not run the Flyway migration at startup. In that case, I see the following message:
[io.qu.ag.de.AgroalProcessor] (build-39) The Agroal dependency is present but no JDBC datasources have been defined.
I found in some Quarkus issues that it's indeed not possible to run a reactive and a blocking database connection at the same time, but is there a way to make Flyway work with a reactive Quarkus application?

Quarkus indeed does not currently support using a blocking JDBC connection and a reactive SQL client at the same time.
A workaround is to disable JDBC for the Quarkus runtime and write your own wrapper to execute the Flyway migration.
The workaround below is based on the corresponding GitHub issue.
Flyway wrapper to run when the application starts:
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.flywaydb.core.Flyway;

import io.quarkus.runtime.StartupEvent;

@ApplicationScoped
public class RunFlyway {

    @ConfigProperty(name = "myapp.flyway.migrate")
    boolean runMigration;

    @ConfigProperty(name = "quarkus.datasource.reactive.url")
    String datasourceUrl;

    @ConfigProperty(name = "quarkus.datasource.username")
    String datasourceUsername;

    @ConfigProperty(name = "quarkus.datasource.password")
    String datasourcePassword;

    public void runFlywayMigration(@Observes StartupEvent event) {
        if (runMigration) {
            // The reactive URL has no "jdbc:" prefix, so prepend it for Flyway's JDBC data source
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:" + datasourceUrl, datasourceUsername, datasourcePassword)
                    .load();
            flyway.migrate();
        }
    }
}
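Note that the Flyway Java API used here (Flyway.configure()...load()) comes from flyway-core, which the quarkus-flyway extension already pulls in, and that the jdbc: prefix is prepended by hand because the reactive URL format omits it.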
pom.xml:
<!-- DB -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-reactive-pg-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-reactive</artifactId>
</dependency>
<!-- Flyway -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-flyway</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>
application.yml:
myapp:
  flyway:
    migrate: true
quarkus:
  datasource:
    db-kind: postgresql
    username: myuser
    password: mypassword
    jdbc: false
    reactive:
      url: postgresql://localhost:5432/mydb
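With myapp.flyway.migrate set to true, the wrapper runs the migration on every startup; by default Flyway picks up migration scripts from the classpath location db/migration, so the usual src/main/resources/db/migration layout keeps working.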

Related

Google Cloud PostgreSQL instance not able to connect to Spring Boot app

I am using a PostgreSQL instance on Google Cloud Platform in a Spring Boot app with Spring Data JPA, but I am not able to connect to the PostgreSQL instance at deployment time.
I am not really sure which dependency is required for this and what the application properties configuration should be.
Here is the code:
application.properties
spring.datasource.url=jdbc:postgresql://google/postgres?cloudSqlInstance=<instance-name>&socketFactory=com.google.cloud.sql.postgres.SocketFactory
spring.datasource.username=postgres
spring.datasource.password=postgres
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=update
pom.xml
<dependency>
    <groupId>com.google.cloud.sql</groupId>
    <artifactId>postgres-socket-factory</artifactId>
    <version>1.0.12</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>
app.yml for deployment on flex environment:
runtime: java
env: flex
service: payment2
handlers:
- url: /.*
  script: this field is required, but ignored
beta_settings:
  cloud_sql_instances: <instance-name>
Thanks in advance. Please help!

Spring Boot and ActiveMQ: Ignores broker-url and uses default localhost:61616

I'm using Spring Boot and ActiveMQ. In application.properties I set the URL for ActiveMQ like this:
spring.activemq.broker-url=vm://localhost?broker.persistent=false
As you can see, I'm using an embedded broker (dependency added in the pom).
This is my application class:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.data.jpa.convert.threeten.Jsr310JpaConverters;

@SpringBootApplication
@EntityScan(
    basePackageClasses = {ServiceApplication.class, Jsr310JpaConverters.class}
)
@EnableAutoConfiguration
@ServletComponentScan
public class ServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServiceApplication.class, args);
    }
}
These are the ActiveMQ-related dependencies in the pom:
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-camel</artifactId>
    <version>5.14.5</version>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-pool</artifactId>
    <version>5.14.5</version>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-broker</artifactId>
    <version>5.14.5</version>
</dependency>
I have a single application.properties; I don't have different profiles.
But when I run the app, I get this log:
[ActiveMQ Task-1] o.a.a.t.failover.FailoverTransport : Failed to connect to [tcp://localhost:61616] after: 10 attempt(s) continuing to retry.
It's trying to connect to tcp://localhost:61616 even though that's not the URL I defined.
I tried removing @EnableAutoConfiguration, but I still get the same issue.
How can I solve this?
Your ActiveMQ client is not aware of spring.activemq.broker-url, since that property is only used to configure spring-boot-starter-activemq. If you do not have this starter, you configure nothing with this property.
I suggest going through the following resources to get a better understanding of how to set up spring-boot-starter-activemq in your project:
https://spring.io/guides/gs/messaging-jms/
https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-messaging.html
Hope it helps!
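As a starting point, this is roughly the dependency you would add (a sketch assuming your pom inherits from spring-boot-starter-parent, which manages the version; otherwise pin one matching your Spring Boot release):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-activemq</artifactId>
</dependency>
With the starter on the classpath, Spring Boot auto-configures the connection factory from spring.activemq.broker-url.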

Cannot determine embedded database driver class for database type NONE with Redis in Spring Boot

I use Spring Boot and Redis. I added this to pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
And I created a RedisConfig class, which contains the beans JedisConnectionFactory jedisConnectionFactory() and RedisTemplate<String, Object> redisTemplate().
When I run the application, I get this error:
***************************
APPLICATION FAILED TO START
***************************
Description:
Cannot determine embedded database driver class for database type NONE
Action:
If you want an embedded database please put a supported one on the classpath. If you have database settings to be loaded from a particular profile you may need to activate it (no profiles are currently active).
I don't use an embedded Redis; Redis runs on my machine on localhost.
application.properties:
spring.redis.host=localhost
spring.redis.port=6379
Why is there this error?
There are a couple of issues:
spring-boot-starter-redis is deprecated. Use spring-boot-starter-data-redis instead.
Remove the spring-boot-starter-data-jpa dependency. Spring Data Redis does not support JPA and it's not needed here; it is actually what causes your error.
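For reference, the Redis side of your dependencies would then shrink to something like this (a sketch assuming a Spring Boot parent manages the version):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
The starter already pulls in spring-data-redis, so the explicit spring-data-redis dependency can go as well.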

Use Pivotal Cloud Foundry RabbitMQ and MySQL services via VCAP services for a Spring Data JPA application

I am able to use the RabbitMQ and MySQL services on Pivotal. While binding the services I can get the credentials, and I use those credentials in my application.properties for a Spring Data JPA project.
But this configuration is hard-coded in application.properties. To make this configuration dynamic, I learned that we can use the VCAP services provided by Pivotal.
So I want to use runtime credentials for RabbitMQ and MySQL.
My code is below for reference.
File: application.properties
rabbitmq.host=hostname
rabbitmq.virtual-host=vhost
rabbitmq.username=username
rabbitmq.password=password
rabbit.mainqueue=queue name
rabbit.errorqueue=error queue name
spring.datasource.url=jdbc:mysql://hostname:portno
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.username=root
spring.datasource.password=root
server.port=8000
Below is the repository file:
package com.redistomysql.consumer.repo;
import org.springframework.data.jpa.repository.JpaRepository;
public interface tblemployee_personal_infoRepository extends JpaRepository<tblemployee_personal_info, Long> {
}
Any help would be appreciated.
The link for reference: http://www.java-allandsundry.com/2016/05/approaches-to-binding-spring-boot.html
Set this configuration in application-cloud.yml for MySQL:
---
spring:
  datasource:
    url: ${vcap.services.mydb.credentials.jdbcUrl}
    username: ${vcap.services.mydb.credentials.username}
    password: ${vcap.services.mydb.credentials.password}
The config for RabbitMQ can be read from the VCAP_SERVICES environment variable:
System.getenv("VCAP_SERVICES")
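A minimal sketch of pulling the RabbitMQ URI out of that variable with Jackson (the service label p-rabbitmq and the credentials layout are assumptions; inspect your own VCAP_SERVICES JSON to confirm them):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VcapRabbitConfig {

    // Hypothetical helper: extracts the first bound RabbitMQ instance's URI.
    public static String rabbitUri() throws Exception {
        String vcap = System.getenv("VCAP_SERVICES");
        JsonNode credentials = new ObjectMapper()
                .readTree(vcap)
                .path("p-rabbitmq")   // service label: an assumption, check your environment
                .path(0)              // first bound service instance
                .path("credentials");
        return credentials.path("uri").asText();
    }
}

In practice the spring-cloud connectors listed below do this lookup for you, so the sketch is mainly to show what VCAP_SERVICES contains.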
The dependencies in pom.xml:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-spring-service-connector</artifactId>
    <version>1.2.4.RELEASE</version>
</dependency>
<!-- If you intend to deploy the app on Cloud Foundry, add the following -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-cloudfoundry-connector</artifactId>
    <version>1.2.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-heroku-connector</artifactId>
    <version>1.2.4.RELEASE</version>
</dependency>
The manifest.yml:
---
applications:
- name: redistomysql-consumer
  path: target/redistomysql-consumer-0.0.1-SNAPSHOT.jar
  memory: 1024M
  buildpack: https://github.com/cloudfoundry/java-buildpack.git
  env:
    JAVA_OPTS: -Djava.security.egd=file:/dev/./urandom
    SPRING_PROFILES_ACTIVE: cloud
    JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '{enabled: false}'
  services:
    - es-mysql-db
    - es-consumer-rabbitmq-service

Error while trying to connect to Cassandra database using Spark Streaming

I'm working on a project which uses Spark Streaming, Apache Kafka, and Cassandra.
I use the streaming-kafka integration. In Kafka I have a producer which sends data using this configuration:
props.put("metadata.broker.list", KafkaProperties.ZOOKEEPER);
props.put("bootstrap.servers", KafkaProperties.SERVER);
props.put("client.id", "DemoProducer");
where ZOOKEEPER = localhost:2181, and SERVER = localhost:9092.
Once I send data I can receive it with Spark and consume it too. My Spark configuration is:
SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
sparkConf.set("spark.cassandra.connection.host", "localhost");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
After that I try to store this data in the Cassandra database, but when I try to open a session using this:
CassandraConnector connector = CassandraConnector.apply(jssc.sparkContext().getConf());
Session session = connector.openSession();
I get the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:334)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:182)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:70)
at org.kakfa.spark.ConsumerData.main(ConsumerData.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Regarding Cassandra, I'm using the default configuration:
start_native_transport: true
native_transport_port: 9042
- seeds: "127.0.0.1"
cluster_name: 'Test Cluster'
rpc_address: localhost
rpc_port: 9160
start_rpc: true
I can connect to Cassandra from the command line using cqlsh localhost, getting the following message:
Connected to Test Cluster at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4] Use HELP for help. cqlsh>
I used nodetool status too, which shows me this:
http://pastebin.com/ZQ5YyDyB
To run Cassandra I invoke bin/cassandra -f.
What I am trying to run is this:
try (Session session = connector.openSession()) {
    System.out.println("inside the try");
    session.execute("DROP KEYSPACE IF EXISTS test");
    System.out.println("inside the try - 1");
    session.execute("CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
    System.out.println("inside the try - 2");
    session.execute("CREATE TABLE test.users (id TEXT PRIMARY KEY, name TEXT)");
    System.out.println("inside the try - 3");
}
My pom.xml file looks like this:
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.10</artifactId>
        <version>1.6.0-M1</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector_2.10</artifactId>
        <version>1.6.0-M2</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector_2.10</artifactId>
        <version>1.1.0-alpha2</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.10</artifactId>
        <version>1.1.0-alpha2</version>
    </dependency>
    <dependency>
        <groupId>org.json</groupId>
        <artifactId>json</artifactId>
        <version>20160212</version>
    </dependency>
</dependencies>
I have no idea why I can't connect to Cassandra using Spark. Is my configuration bad, or what am I doing wrong?
Thank you!
com.datastax.driver.core.exceptions.InvalidQueryException:
unconfigured table schema_keyspaces
That error indicates an old driver talking to a new Cassandra version. Looking at the POM file, we find the spark-cassandra-connector dependency declared twice:
one declaration uses version 1.6.0-M2 (good) and the other 1.1.0-alpha2 (old).
Remove the references to the old 1.1.0-alpha2 dependencies from your POM:
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.1.0-alpha2</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.1.0-alpha2</version>
</dependency>
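After removing them, running mvn dependency:tree is a quick way to verify that only the 1.6.0-era connector artifacts remain on the classpath. With a single, recent driver version the schema_keyspaces lookup goes away; that system table was dropped in Cassandra 3.x, which is why the old 1.1.0-alpha2 driver fails against Cassandra 3.0.5.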
