I have a server that talks to a database, and I need to test it. I connect to the database using Hibernate and manage the dependencies using Gradle. I want to use separate MySQL databases for production and testing. So I currently have this line in hibernate.cfg.xml:
<property name="hibernate.connection.url">jdbc:mysql://127.0.0.1:3306/production_database</property>
But what I really want is for it to be something like:
<property name="hibernate.connection.url">jdbc:mysql://127.0.0.1:3306/${DATABASE_NAME}</property>
and then when I run gradle test, DATABASE_NAME can be set to "test_database_name", and when I run gradle jettyRun it'll still be "production_database". This seems like something that should be possible, but when I google for "hibernate templating" I get references to this other thing called HibernateTemplate that has nothing to do with what I want, as far as I can tell. What's the syntax that'll make this happen for me?
You should move that property out of hibernate.cfg.xml and into a database.properties file.
Then you can use Gradle to modify this file depending upon the argument.
Please refer to Gradle Tasks for this.
ant.propertyfile(file: "database.properties") {
    entry(key: "connectionurl", value: "somevalue")
}
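For instance, a minimal sketch of how the two tasks could be wired up (the task names test and jettyRun come from the question; the file path is an assumption you'd adapt to your project layout):

// Rewrite database.properties before tests run (path is an assumption).
task useTestDb {
    doLast {
        ant.propertyfile(file: "src/main/resources/database.properties") {
            entry(key: "connectionurl",
                  value: "jdbc:mysql://127.0.0.1:3306/test_database_name")
        }
    }
}

// Rewrite it back to production before jettyRun.
task useProdDb {
    doLast {
        ant.propertyfile(file: "src/main/resources/database.properties") {
            entry(key: "connectionurl",
                  value: "jdbc:mysql://127.0.0.1:3306/production_database")
        }
    }
}

test.dependsOn useTestDb
jettyRun.dependsOn useProdDb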
I am trying to build a custom Liquibase docker image (based on the official liquibase/liquibase:4.3.5 image) for running database migrations in Kubernetes.
I am using some custom types for the database which are implemented using the @DataTypeInfo annotation and extending existing Liquibase data types like liquibase.datatype.core.VarcharType (class discovery is implemented using the META-INF/services/liquibase.datatype.LiquibaseDataType mechanism introduced in Liquibase 4+).
These extensions are implemented inside their own Maven module called "schema-impl", which generates a schema-impl.jar. Everything was working fine when the migrations ran as part of the app startup process, but now we want this to be done by a dedicated Docker image.
The only information in the Liquibase documentation regarding this topic is the "Drivers and extensions" section from this document. According to this, I added the schema-impl.jar into the /liquibase/classpath directory during the image building process and also modified the liquibase.docker.properties in order to add this jar file explicitly inside the classpath property:
classpath: /liquibase/changelog:/liquibase/classpath:/liquibase/classpath/schema-impl.jar
liquibase.headless: true
However, when I try to run my changesets with the docker image, I am always getting an error because it cannot find the custom type definition:
liquibase.exception.DatabaseException: ERROR: type "my-string" does not exist
Any help would be really appreciated. Thanks in advance.
OK, I found it. Basically the problem was that I needed to include the classpath in the entrypoint command, not in the liquibase.docker.properties file (which seems to be useless for this use case), like this:
--classpath=/liquibase/changelog:/liquibase/classpath/schema-impl.jar
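For anyone hitting the same thing, a minimal sketch of the image build (the changelog file name and the update command are placeholders; this assumes the official image's entrypoint runs liquibase with whatever arguments CMD provides):

FROM liquibase/liquibase:4.3.5

# Make the extension jar available inside the image
COPY schema-impl.jar /liquibase/classpath/schema-impl.jar
COPY changelog/ /liquibase/changelog/

# Pass --classpath as a command-line argument instead of via liquibase.docker.properties
CMD ["--classpath=/liquibase/changelog:/liquibase/classpath/schema-impl.jar", "--changeLogFile=changelog.xml", "update"]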
I am using Spring with Liquibase to update my database. Until now I have not needed the rollback functionality, but the time has come where I would like to make it work.
But I can't seem to fire it from my application.
I know that Maven has a plugin which helps with that, but until now I was not using it, and when I add it I need to provide the source and credentials for my database.
At the moment Liquibase is configured in XML:
<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
<property name="dataSource" ref="p6spyDataSource"/>
<property name="changeLog" value="classpath:db.changelog-master.yaml"/>
.
.
</bean>
And in Maven I have only a dependency on liquibase-core.
The place where I set liquibase.shouldRun is in application.properties.
The DataSource is taken from the TomEE server.xml configuration file.
So the question is whether I can somehow add the Maven plugin without adding credentials (they should be taken from the dataSource), or is there another way to run a rollback from my changelog?
There are several related questions that have been posted previously about using Liquibase rollback with Spring Boot. This one seems the most similar to your post: Perform a liquibase:rollback from the command line when properties are in Spring-boot files (application.properties) and not liquibase.properties
Here is the answer as provided by Robert Kleinschmager:
The property names within Spring's application.properties and liquibase.properties are not compatible. You have three options:
#1 Just create a separate liquibase.properties file with the content you need - see the Liquibase docs, as you only need to fix your current setup (a sketch of such a file follows this list)
#2 give the database parameters via command-line arguments
mvn liquibase:rollback -Dliquibase.rollbackCount=1 -Dliquibase.url=jdbc:postgresql://localhost:5432/comptesfrance -Dliquibase.username
see rollback goal for all arguments
#3 if you need a permanent solution, then you may add the liquibase properties into your application.properties and reuse them in the same file. i.e.
liquibase.url=jdbc:postgresql://localhost:5432/comptesfrance
spring.datasource.url=${liquibase.url}
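For option #1, a minimal liquibase.properties sketch (the URL is the one from the example above; the changelog path and credentials are placeholders to replace with your own):

changeLogFile=db.changelog-master.yaml
url=jdbc:postgresql://localhost:5432/comptesfrance
username=my_user
password=my_password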
I would like to have my:
spring.datasource.url = jdbc:mysql://666.666.666.666/prod_very_wow
Change into:
spring.datasource.url = jdbc:mysql://666.666.666.666/dev_very_wow
According to the branch I am currently on. I think I should also have it specified within the Dockerfile - I should have a property added next to Docker's RUN which determines which data source ought to be activated.
Namely, I would like my app to be connected to prod_very_wow when I am on the master branch, and to dev_very_wow every time I check out dev or create a new feature branch, and have it all determined by a property added to RUN mvn package within the Dockerfile.
I apologise if the question makes no sense, but, frankly, I am a little bit clueless about how to ask it, so I have trouble googling for answers.
I just found a couple of leads about "environment variables", but I can't find any connection between the datasource being connected to and the branch I am currently on.
The best way to handle different configuration per environment is to decouple your code from your configuration; that is one of the twelve-factor app principles. In that case you would have an external config server, such as Spring Cloud Config Server, that hosts the configuration files for the different environments, and the application asks this config server for the proper config file depending on the environment where it is deployed.
However, if you don't want to follow this approach, you can create the different configuration files in the application and use an environment variable that tells Spring which file to use. For example, in your case you can have an application-local.yaml and an application-prod.yaml, and then, if you want to specify it in the Dockerfile in the mvn package command, you can use:
RUN mvn -Dspring.profiles.active=local package
RUN mvn -Dspring.profiles.active=prod package
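As an illustration, the two profile-specific files might look like this (URLs taken from the question; the file names follow Spring's application-<profile>.yaml convention):

# application-prod.yaml
spring:
  datasource:
    url: jdbc:mysql://666.666.666.666/prod_very_wow

# application-local.yaml
spring:
  datasource:
    url: jdbc:mysql://666.666.666.666/dev_very_wow

Note that the active profile can also be supplied at runtime, for example via the SPRING_PROFILES_ACTIVE environment variable, which lets you build a single image and pick the datasource when the container starts.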
I am trying to index my entities on an AWS Elasticsearch cluster. I am currently using Hibernate Search with a local file index; therefore, the Hibernate integration with Elasticsearch is the only option I have. I've followed the Hibernate Search doc, but it ends up with:
Caused by: java.util.ServiceConfigurationError: org.hibernate.search.bridge.spi.IndexManagerTypeSpecificBridgeProvider: Provider org.hibernate.search.elasticsearch.bridge.impl.ElasticsearchBridgeProvider not a subtype
I tried removing all the jars and doing a clean Maven install again; it didn't change anything.
I've tried adding hibernate-search-elasticsearch as a module in WildFly, but that ends up with many issues as well, like the Lucene query parser not being found in the class loader (maybe I messed up something while adding the jar as a module).
As I understand it, I don't need server provisioning since I am using the version which is supported by WildFly (correct me if I am wrong).
I am using:
Wildfly server 14.0.1
Hibernate core 5.3.6.Final
Hibernate search orm 5.10.3.Final
Hibernate search elasticsearch 5.10.3.Final
Any ideas what could be wrong? And the better question: am I adding the correct dependencies for WildFly?
P.S. I know a similar question was asked before, but the answer didn't help at all.
All that needed to be done was to add the hibernate-search-elasticsearch dependency in compile scope and add these properties in persistence.xml:
<property name="jboss.as.jpa.providerModule" value="org.hibernate" />
<property name="wildfly.jpa.hibernate.search.module" value="org.hibernate.search.orm" />
Is it possible for new Flyway migrations to be generated by JPA/Hibernate's automatic schema generation when a new model, field, etc. is added via Java code?
It would be useful to capture the auto-generated SQL and save it directly to a new Flyway migration, for review / editing / committing to a project repository.
Thank you in advance for any assistance or enlightenment you can offer.
If your IDE of choice is IntelliJ IDEA, I'd recommend using the JPA Buddy plugin to do this. It can generate Flyway migrations by comparing your Java model to the target DB.
You can use it to keep your evolving model and your SQL scripts in sync.
Also, it can create the init script if your DB is empty.
Once you have it installed and have Flyway as your Maven/Gradle dependency, you can generate a migration from the plugin's diff action inside the IDE.
Flyway doesn't have built-in support for diff. I use Liquibase within a Maven Spring Boot project, and changelogs can be created from JPA/Hibernate changes by using:
mvn liquibase:diff
All of the options for liquibase diff are located here:
http://www.liquibase.org/documentation/maven/maven_diff.html
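If it helps, a rough sketch of the plugin configuration the diff goal needs (all URLs, credentials, the package name, and the extension version here are placeholders/assumptions; the hibernate: reference URL requires the liquibase-hibernate extension):

<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <configuration>
        <!-- the actual database to compare against -->
        <url>jdbc:mysql://localhost:3306/my_database</url>
        <username>my_user</username>
        <password>my_password</password>
        <!-- the JPA entities as the reference side of the diff -->
        <referenceUrl>hibernate:spring:com.example.model?dialect=org.hibernate.dialect.MySQL5Dialect</referenceUrl>
        <diffChangeLogFile>src/main/resources/db/changelog-diff.xml</diffChangeLogFile>
    </configuration>
    <dependencies>
        <!-- needed for the hibernate: reference URL -->
        <dependency>
            <groupId>org.liquibase.ext</groupId>
            <artifactId>liquibase-hibernate5</artifactId>
            <version>3.6</version>
        </dependency>
    </dependencies>
</plugin>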
If you want to generate the update SQL automatically, you can ask Hibernate to do so; just add the lines below to your Spring Boot configuration:
spring.jpa.properties.javax.persistence.schema-generation.create-source=metadata
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=update
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=update.sql
When you execute the application, this will generate a file named update.sql in the root of your project. Now you can just copy and paste its contents into your Flyway migration.
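For example, the generated statements could be saved as a versioned migration like src/main/resources/db/migration/V2__update_schema.sql (the version number and description here are placeholders; by default Flyway picks up files matching V<version>__<description>.sql on that path).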
This was adapted from this other answer: https://stackoverflow.com/a/36966419/679240 ; it is basically the same logic, except that that answer generates a database creation script, while I needed an update script instead.
BTW, if you want to replace the names of the foreign keys in the script with more readable ones, you could use this regex: ^(alter table .*?)(\w+)(\s+add constraint )\w+( foreign key \()(.*?)(\).*) with this replacement: $1$2$3fk_$2__$5$4$5$6; this will change the names of the FKs in the script to fk_name_of_the_table__name_of_the_field.
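For instance, on a hypothetical statement such as alter table orders add constraint FKq7lhpew3qxv5gkxholjqs8ee foreign key (customer_id) references customers, that replacement would produce alter table orders add constraint fk_orders__customer_id foreign key (customer_id) references customers.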