Flyway (Spring Boot) migration with files in multiple directories skips version - java

I am working on a Spring Boot application with Flyway and have to update a database that already has migrations applied.
The migrations under common must be executed in every environment (whatever Spring profiles are loaded), while local and qa insert different data into an H2 database.
I need to alter the table (adding and modifying columns) and then update the data inserted in V1_1 and V1_2. I tried many different approaches to avoid putting the ALTER TABLE SQL commands in the local and qa migration files: I would like to leave the ALTER TABLE commands in the common folder and have only the UPDATE commands in the local and qa folders. But all of them were in vain; the new migration I add in the local directory always gets executed before the one I add in the common directory.
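Roughly, the layout in question looks like this (file names are illustrative, reconstructed from the log below):

```text
db/migration/
├── common/
│   ├── V1_0__schema.sql
│   └── V1_3__alter_table.sql     <- new ALTER TABLE migration
├── local/
│   ├── V1_1__institutions.sql
│   ├── V1_2__data.sql
│   └── V1_4__update_data.sql     <- new UPDATE migration
└── qa/
    └── (same versions as local, with different data)
```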
Even with the naming scheme above, V1_4 gets executed before V1_3, causing an error because the new columns have not been added yet. I know this is not the perfect naming scheme; I used it mostly for testing and to illustrate my point. But even while testing manually, Flyway does not behave as I would expect (surely because of a misunderstanding on my part). The app log clearly shows V1_3 not being executed:
2019-08-13 13:31:04.025 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.0 - schema
2019-08-13 13:31:04.076 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.1 - institutions
2019-08-13 13:31:04.092 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.2 - data
2019-08-13 13:31:04.476 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.4 - update data
2019-08-13 13:31:04.482 ERROR 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migration of schema "PUBLIC" to version 1.4 - update data failed! Please restore backups and roll back database and code!
I am using this property: spring.flyway.locations=classpath:db/migration/common,classpath:db/migration/local
in the environment where the exception occurs.
What am I doing wrong? I can't seem to find much documentation on Flyway migrations with files in multiple directories. Unfortunately, this is the structure I am stuck with; I cannot change it since these decisions are out of my hands.
Thanks in advance !

When you provide a location for your migrations, Flyway looks for .sql files under that folder and its subfolders. So if you have V1.1 and V1.4 under local and V1.3 under common, Flyway will treat local as the migration folder and execute V1.1 and V1.4 in that order; it won't go into common unless you provide the root dir as your Flyway location. In your case you should give db/migration as your location.
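Expressed as a Spring Boot property, that suggestion would look something like this (assuming the db/migration layout from the question):

```properties
# Point Flyway at the parent directory; common, local and qa are
# scanned together and all versions are ordered globally.
spring.flyway.locations=classpath:db/migration
```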


How to access HBase on S3 from a non-EMR node

I am trying to access HBase on EMR for read and write from a Java application that is running outside the EMR cluster nodes, i.e. from a Docker application running on an ECS cluster/EC2 instance. The HBase root folder is like s3://<bucketname>/. I need to get Hadoop and HBase configuration objects for reading and writing the HBase data using the core-site.xml and hbase-site.xml files. I am able to do this when the HBase data is stored in HDFS.
But when HBase is on S3 and I try to achieve the same, I get the below exception.
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.amazon.ws.emr.hadoop.fs.EmrFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638)
The core-site.xml file contains the below properties.
<property>
  <name>fs.s3.impl</name>
  <value>com.amazon.ws.emr.hadoop.fs.EmrFileSystem</value>
</property>
<property>
  <name>fs.s3n.impl</name>
  <value>com.amazon.ws.emr.hadoop.fs.EmrFileSystem</value>
</property>
Below is the jar containing the “com.amazon.ws.emr.hadoop.fs.EmrFileSystem” class:
/usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.44.0.jar
This jar is present only on EMR nodes and cannot be included as a Maven dependency in a Java project from the public Maven repository. For Map/Reduce and Spark jobs, adding the jar location to the classpath serves the purpose. For a Java application running outside the EMR cluster nodes, adding the jar to the classpath won't work, as the jar is not available on the ECS instances. Manually adding the jar to the classpath leads to the below error.
2021-03-26 10:02:39.420 INFO 1 --- [ main] c.a.ws.emr.hadoop.fs.util.PlatformInfo : Unable to read clusterId from http://localhost:8321/configuration , trying extra instance data file: /var/lib/instance-controller/extraInstanceData.json
2021-03-26 10:02:39.421 INFO 1 --- [ main] c.a.ws.emr.hadoop.fs.util.PlatformInfo : Unable to read clusterId from /var/lib/instance-controller/extraInstanceData.json, trying EMR job-flow data file: /var/lib/info/job-flow.json
2021-03-26 10:02:39.421 INFO 1 --- [ main] c.a.ws.emr.hadoop.fs.util.PlatformInfo : Unable to read clusterId from /var/lib/info/job-flow.json, out of places to look
2021-03-26 10:02:45.578 WARN 1 --- [ main] c.a.w.e.h.fs.util.ConfigurationUtils : Cannot create temp dir with proper permission: /mnt/s3
We are using EMR version 5.29. Is there any workaround to solve the issue?
S3 isn't a "real" filesystem; it doesn't have two things HBase needs:
atomic renames, needed for compaction
hsync(), to flush/sync the write-ahead log.
To use S3 as the HBase back end:
there's a filesystem wrapper around S3A, "HBoss", which does the locking needed for compaction;
you MUST still use HDFS or some other real FS for the WAL.
Further reading [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/outputstream.md]
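A minimal sketch of that split in hbase-site.xml, assuming an HBase version that supports a separate hbase.wal.dir (the bucket name and namenode address are placeholders):

```xml
<!-- table data on S3 via the S3A connector -->
<property>
  <name>hbase.rootdir</name>
  <value>s3a://my-bucket/hbase</value>
</property>
<!-- WAL on HDFS, which supports hsync() -->
<property>
  <name>hbase.wal.dir</name>
  <value>hdfs://namenode:8020/hbase-wal</value>
</property>
```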
I was able to solve the issue by using S3A. The EMRFS libs used in EMR are not public and cannot be used outside EMR, hence I used S3AFileSystem to access HBase on S3 from my ECS cluster. Add the hadoop-aws and aws-java-sdk-bundle Maven dependencies corresponding to your Hadoop version,
and add the below property to your core-site.xml:
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The implementation class of the S3A Filesystem</description>
</property>
Then change the HBase root directory URL in hbase-site.xml as follows:
<property>
  <name>hbase.rootdir</name>
  <value>s3a://bucketname/</value>
</property>
You can also set the other S3A-related properties. Please refer to the below link for more details on S3A.
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html
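For reference, the Maven dependencies mentioned above would look roughly like this; the version numbers are illustrative and must be matched to your Hadoop distribution (aws-java-sdk-bundle should be the version your hadoop-aws release was built against):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>3.2.1</version> <!-- match your Hadoop version -->
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>1.11.375</version> <!-- match your hadoop-aws release -->
</dependency>
```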

Unsupported partitioner with Amazon Keyspaces (for Apache Cassandra)

I have a Java Spring app and I'm using Amazon Keyspaces (for Apache Cassandra). I'm using the sigv4 plugin (version 4.0.2) and the Cassandra java-driver-core (version 4.4.0), and I have followed the official documentation on how to connect my Java app with MCS. The app connects just fine, but I'm getting a weird warning at startup:
WARN 1 --- [ s0-admin-0] .o.d.i.c.m.t.DefaultTokenFactoryRegistry : [s0] Unsupported partitioner 'com.amazonaws.cassandra.DefaultPartitioner', token map will be empty.
Everything looks good, but after a few minutes the warning comes back and my queries start to fail. This is how the logs look after a few minutes:
WARN 1 --- [ s0-admin-0] .o.d.i.c.m.t.DefaultTokenFactoryRegistry : [s0] Unsupported partitioner 'com.amazonaws.cassandra.DefaultPartitioner', token map will be empty.
WARN 1 --- [ s0-io-1] c.d.o.d.i.c.m.SchemaAgreementChecker : [s0] Unknown peer xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, excluding from schema agreement check
WARN 1 --- [ s0-io-0] c.d.o.d.i.c.control.ControlConnection : [s0] Unexpected error while refreshing schema after a successful reconnection, keeping previous version (CompletionException: com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Channel was force-closed)
WARN 1 --- [ s0-io-1] c.d.o.d.i.c.m.DefaultTopologyMonitor : [s0] Control node ec2-x-xx-xxx-xx.us-east-2.compute.amazonaws.com/x.xx.xxx.xxx:xxxx has an entry for itself in system.peers: this entry will be ignored. This is likely due to a misconfiguration; please verify your rpc_address configuration in cassandra.yaml on all nodes in your cluster.
I have debugged a little and it looks like that partitioner comes from the actual node metadata, so I don't really know if there's an actual way to fix it.
I've seen there's a similar question asked recently here, but no solution has been posted yet. Any ideas? Thanks so much in advance
These are all warnings, not errors; your connection should work just fine. They are logged because of how Amazon Keyspaces differs slightly from an actual Cassandra cluster. Try setting these to get rid of the noise:
datastax-java-driver.advanced {
  metadata {
    schema.enabled = false
    token-map.enabled = false
  }
  connection.warn-on-init-error = false
}
I had the same problem. It occurred when using Spring Boot 2.3.x, because Spring Boot 2.3.x brings in one of these starters:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-cassandra-reactive</artifactId>
</dependency>
OR
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-cassandra</artifactId>
</dependency>
With Maven/Gradle these resolve to datastax-java-driver-core 4.6.1, and I think this is another reason Amazon Keyspaces is not supported.
Back to the AWS library aws-sigv4-auth-cassandra-java-driver-plugin 4.0.2:
with Maven/Gradle it resolves to datastax-java-driver-core 4.4.0.
Now I am starting to see that Amazon Keyspaces may not support datastax-java-driver-core versions greater than 4.4.0.
If you want your Spring Boot 2 application to work, try the following solution.
Look at pom.xml:
remove aws-sigv4-auth-cassandra-java-driver-plugin
downgrade Spring Boot from 2.3.x to 2.2.9
add one of the dependencies below:
spring-boot-starter-data-cassandra-reactive
OR
spring-boot-starter-data-cassandra
Create an Amazon digital certificate and download it.
If you use IntelliJ IDEA, go to Edit Configurations, then under VM Options add:
-Djavax.net.ssl.trustStore=path_to_file/cassandra_truststore.jks
-Djavax.net.ssl.trustStorePassword=my_password
Reference: https://docs.aws.amazon.com/keyspaces/latest/devguide/using_java_driver.html
Then add the config below to application-dev.yml:
spring:
  data:
    cassandra:
      contact-points:
        - "cassandra.ap-southeast-1.amazonaws.com"
      port: 9142
      ssl: true
      username: "cassandra-username"
      password: "cassandra-password"
      keyspace-name: keyspace-name
      request:
        consistency: local_quorum
Run the test program. It passes; this works for me.
Tech Stack
Spring Boot WebFlux 2.2.9.RELEASE
Cassandra Reactive
JDK 13
Cassandra Database with Amazon Keyspaces
Maven 3.6.3
Have fun with programming.

spring boot not loading default profile while using spring.profiles.default in application.properties

FYI: I swear there is no profile-activating configuration anywhere, such as a -D flag or a run configuration.
Goal
When the application is booted up without any profile explicitly activated, the dev profile should be activated as the default.
Problem
I've set spring.profiles.default=dev, and I would expect the dev profile to be activated. But it is not.
What I did
Run Environment
Spring Boot version: 2.1.2.RELEASE
What I referred to:
1) How to use profiles in Spring Boot Application -
https://www.javacodegeeks.com/2019/07/profiles-spring-boot-application.html#respond
Here's the code for what I did:
/resources/application.properties
spring.profiles.default= dev
application.environment=This is a "Default" Environment
/resources/application-dev.properties
application.environment=This is a "dev" Environment
server.port= 8082
/ProfileController.java
@RestController
@RequestMapping("/v1")
public class ProfileController {

    @Value("${application.environment}")
    private String applicationEnv;

    @GetMapping
    public String getApplicationEnv() {
        return applicationEnv;
    }
}
Result
localhost/v1 => This is a "Default" Environment
And
I've found out the default profile is set up as dev correctly.
This is my spring-boot log.
2019-10-16 23:17:02.926 INFO 64099 --- [ main] c.e.s.SpringbootdemoappApplication : No active profile set, falling back to default profiles: dev
Adding another log line for the server port:
2019-10-17 00:25:03.837 INFO 68318 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8083 (http)
The reason I add it is that this doesn't seem to be a problem related to injection.
Time to close this question.
The first goal I wanted to achieve was changing the default profile.
According to the Spring docs (https://docs.spring.io/spring/docs/4.2.0.RELEASE/spring-framework-reference/htmlsingle/#beans-definition-profiles-default), the default profile can be changed by setting spring.profiles.default in application.properties.
But it seems to be kind of a bug (thanks @Antoniossss): even though I set that in application.properties, and the console showed No active profile set, falling back to default profiles: dev,
the dev profile still wasn't activated.
What I've found out is that changing the default profile must be done before application.properties is loaded.
That means that if the default-profile change is described in application.properties, it's too late. (I don't know why; I can't figure it out because there are so many layers in Spring...)
If the default profile is set with -Dspring.profiles.default=dev, it works properly.
From https://github.com/spring-projects/spring-boot/issues/1219:
You can't change the default profile by declaring it in a config file. It has to be in place before the config files are read.
The problem is that you are setting this property after the application has loaded; you need to provide it while the application is booting up.
Pass it as a JVM arg:
java -jar -Dspring.profiles.default=dev myproject.jar
Pass it as a program argument:
java -jar myproject.jar --spring.profiles.default=dev
Or set it as an environment variable:
SPRING_PROFILES_DEFAULT=dev
You set your default profile fallback to dev with spring.profiles.default=dev. If you had spring.profiles.active=someOtherProfile, someOtherProfile would be activated instead.
2019-10-16 23:17:02.926 INFO 64099 --- [ main]
c.e.s.SpringbootdemoappApplication : No active profile set, falling
back to default profiles: dev
This log says that the dev profile is the one in effect, since you don't have any explicit profile activated.
You have only configured the default profile.
You need to set an active profile, e.g. spring.profiles.active=dev
in your app properties. For injecting the current active profile:
@Value("${spring.profiles.active}")
private String currentActiveEnv;
Please go through the following link -
https://medium.com/#khandkesoham7/profiling-with-spring-boot-408b33f8a25f
where the author has explained it well.
And to generate the build with a given Maven profile, use the following command:
mvn -Ppreprod package -DskipTests clean verify

How do I make FlyWay run my migrations? "Schema is up to date. No migration necessary."

I have an existing database. I created two migrations
$ ls src/main/resources/db/migration/
V1__create_stats.sql V2__create_sources.sql
I set the following in application.properties
# Prevent complaints when starting migrations with existing tables.
flyway.baselineOnMigrate = true
Otherwise it would give the error org.flywaydb.core.api.FlywayException: Found non-empty schema `galaxybadge` without metadata table! Use baseline() or set baselineOnMigrate to true to initialize the metadata table.
When I try to start the app, it skips the migrations and doesn't execute them! I use show tables; in MySQL and see they are not there!
>mvn spring-boot:run
...
2018-05-09 18:43:03.671 INFO 24520 --- [ restartedMain] o.f.core.internal.util.VersionPrinter : Flyway 3.2.1 by Boxfuse
2018-05-09 18:43:04.420 INFO 24520 --- [ restartedMain] o.f.c.i.dbsupport.DbSupportFactory : Database: jdbc:mysql://localhost:3306/galaxybadge (MySQL 5.5)
2018-05-09 18:43:04.486 INFO 24520 --- [ restartedMain] o.f.core.internal.command.DbValidate : Validated 0 migrations (execution time 00:00.030s)
2018-05-09 18:43:04.704 INFO 24520 --- [ restartedMain] o.f.c.i.metadatatable.MetaDataTableImpl : Creating Metadata table: `galaxybadge`.`schema_version`
2018-05-09 18:43:05.116 INFO 24520 --- [ restartedMain] o.f.core.internal.command.DbBaseline : Schema baselined with version: 1
2018-05-09 18:43:05.145 INFO 24520 --- [ restartedMain] o.f.core.internal.command.DbMigrate : Current version of schema `galaxybadge`: 1
2018-05-09 18:43:05.146 INFO 24520 --- [ restartedMain] o.f.core.internal.command.DbMigrate : Schema `galaxybadge` is up to date. No migration necessary.
I looked at this answer but it didn't help and seemed to give the wrong property name. Here is the schema_version table it created.
> select * from schema_version;
+--------------+----------------+---------+-----------------------+----------+-----------------------+----------+--------------+---------------------+----------------+---------+
| version_rank | installed_rank | version | description | type | script | checksum | installed_by | installed_on | execution_time | success |
+--------------+----------------+---------+-----------------------+----------+-----------------------+----------+--------------+---------------------+----------------+---------+
| 1 | 1 | 1 | << Flyway Baseline >> | BASELINE | << Flyway Baseline >> | NULL | root | 2018-05-09 18:43:05 | 0 | 1 |
+--------------+----------------+---------+-----------------------+----------+-----------------------+----------+--------------+---------------------+----------------+---------+
Spring Boot 1.5.6, FlyWay Core 3.2.1
Spring docs -
FlyWay docs
OK, I found this: https://flywaydb.org/documentation/existing
But I didn't follow it. Instead, I moved my migrations from V1__* and V2__* to V2... and V3..., and dumped the production schema into V1__initialize.sql.
mysqldump -h project.us-east-1.rds.amazonaws.com -u username -p --no-data --skip-add-drop-table --compact --skip-set-charset databasename > V1__initialize.sql
Then when I ran mvn spring-boot:run, it ran the migrations.
(Well, actually there was a lot of debugging of the SQL, and I had to drop the tables several times, delete rows out of schema_version, and delete old file names from target/.../migration/, but that's another story.)
I believe it may be possible to set
flyway.baselineVersion=0
and skip the SQL dump (initialization) based on the info here: https://flywaydb.org/documentation/configfiles. However, having the schema available for future devs seems like the right approach.
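In Spring Boot 1.5's application.properties, that combination would presumably look like:

```properties
flyway.baselineOnMigrate = true
# baseline below version 1, so V1__... and V2__... still count as pending
flyway.baselineVersion = 0
```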
I still don't understand why it didn't run the V2__... migration from the original question. If the baseline starts at 1, then migration 2 should still be available to run. Had it worked as expected, I might have understood the problem sooner.
With a Spring Boot application that already has an existing database, configure this in your
application.yml:
flyway:
  baseline-on-migrate: true
  baseline-version: 0
And start your migration scripts at 1, like this: V1__script_description.sql, V2__script_description.sql, ...

Sonar-runner execution failure causing cast exception

After configuring the Sonar tools (SonarQube, MySQL database and Sonar-runner) I performed an analysis of an Android project without any problem. But after installing the Android plugin for Sonar and repeating the analysis, it fails with the following error:
INFO - Preview mode
Load batch settings
User cache: /home/user/.sonar/cache
INFO - Install plugins
INFO - Exclude plugins: devcockpit, jira, pdfreport, views, report, buildstability, scmactivity, buildbreaker
INFO - Create JDBC datasource for jdbc:h2:/home/user/workspace/myAndroidProject/.sonar/.sonartmp/preview1394469024394-0
INFO - Initializing Hibernate
INFO - Load project settings
INFO - Apply project exclusions
INFO - ------------- Scan myAndroidProject
INFO - Load module settings
INFO - Language is forced to java
INFO - Loading technical debt model...
INFO - Loading technical debt model done: 424 ms
INFO - Configure Maven plugins
INFO - Base dir: /home/user/workspace/myAndroidProject
INFO - Working dir: /home/user/workspace/myAndroidProject/.sonar
INFO - Source dirs: /home/user/workspace/myAdnroidProject/src
INFO - Source encoding: UTF-8, default locale: en_EN
INFO - Index files
INFO - Included sources:
INFO - src/**
INFO - 116 files indexed
WARN - Accessing the filesystem before the Sensor phase is deprecated and will not be supported in the future. Please update your plugin.
INFO - Index files
INFO - Included sources:
INFO - src/**
INFO - 116 files indexed
WARN - Accessing the filesystem before the Sensor phase is deprecated and will not be supported in the future. Please update your plugin.
INFO - Index files
INFO - Included sources:
INFO - src/**
INFO - 116 files indexed
INFO - Quality profile for java: Sonar way
INFO - Sensor JavaSourceImporter...
INFO - Sensor JavaSourceImporter done: 49 ms
INFO - Sensor JavaSquidSensor...
INFO - Java AST scan...
INFO - 116 source files to be analyzed
INFO - 116/116 source files analyzed
INFO - Java AST scan done: 6693 ms
WARN - Java bytecode has not been made available to the analyzer. The Depth of Inheritance Tree (DIT) metric, Response for Class (RFC) metric, Number of Children (NOC) metric, Lack of Cohesion (LCOM4) metric, deperecated dependencies metrics, UnusedPrivateMethod rule, RedundantThrowsDeclarationCheck rule, S1160 rule, S1217 rule are disabled.
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
Total time: 18.440s
Final Memory: 12M/357M
INFO: ------------------------------------------------------------------------
ERROR: Error during Sonar runner execution
ERROR: Unable to execute Sonar
ERROR: Caused by: org.sonar.api.resources.Directory cannot be cast to org.sonar.api.resources.JavaPackage
My sonar-project.properties file is the following:
#Required metadata
sonar.projectKey=mKey
sonar.projectName=myAndroidProject
sonar.projectVersion=1.0
# Paths to source directories.
# Paths are relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
# Do not put the "sonar-project.properties" file in the same directory with the source code.
# (i.e. never set the "sonar.sources" property to ".")
sonar.sources=src
# The value of the property must be the key of the language.
sonar.language=java
# Encoding of the source code
sonar.sourceEncoding=UTF-8
# Analysis mode
sonar.analysis.mode=preview
#Enables the Lint profile to analyze the code using the Lint rules.
#sonar.profile=Android Lint
I'm using the following environment:
SonarQube 4.2 RC1
Sonar-runner 2.3
Database: MySQL
Ubuntu 12.04 LTS
Java 1.7
I tried uninstalling the Android plugin but the problem persists. The only way I've found to solve it is deleting the database and the user and creating them again.
As stated on http://docs.codehaus.org/pages/viewpage.action?pageId=236224987, the Android plugin is not yet compatible with SonarQube 4.2-RC1. See also http://jira.codehaus.org/browse/SONARPLUGINS-3483.
You need to provide the binaries (bytecode .class files) to the sonar executor.
Add the following line to your sonar-project.properties
# Path to the class files
sonar.binaries=build\\classes\\main
If the above line doesn't work, check the actual path to your binaries and put that in the sonar.binaries property.
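The example above uses escaped Windows path separators; on Linux/macOS the equivalent (assuming the same Gradle-style layout) would be:

```properties
# path to the compiled .class files, relative to the project base dir
sonar.binaries=build/classes/main
```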
