My problem is that I cannot perform a Flyway migration from Java/Spring, even though the migration files are detected, and the same migration files work from the command line.
I have already tried setting every potentially relevant parameter I found online to configure the schema, but it still sticks to "PUBLIC".
The problem is shown in the logs below (from the Java/Spring run):
"2019-07-01 15:06:04.296 INFO 296 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "PUBLIC": << Empty Schema >>
2019-07-01 15:06:04.297 INFO 296 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1 - Create person table
2019-07-01 15:06:04.324 INFO 296 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 2 - Add people
2019-07-01 15:06:04.339 INFO 296 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 3 - Add people2
2019-07-01 15:06:04.356 INFO 296 --- [ main] o.f.core.internal.command.DbMigrate : Successfully applied 3 migrations to schema "PUBLIC" (execution time 00:00.094s)
The schema is called "PUBLIC", and I also cannot see it from MySQL Workbench.
But if I run flyway migrate from the command line, it alters the schema called td, which is my intention:
"Migrating schema `td` to version 1 - Create person table
Migrating schema `td` to version 2 - Add people
Successfully applied 2 migrations to schema `td` (execution time 00:00.207s)
The Flyway config for Java:
public static void main(String[] args) {
    Flyway flyway = new Flyway();
    flyway.setBaselineOnMigrate(true);
    flyway.migrate();
    SpringApplication.run(TimeReportApplication.class, args);
}
application.properties:
flyway.user=root
flyway.password=root
flyway.url=jdbc:mysql://localhost:3306/td
flyway.schemas=TD
The working Flyway config for the command line:
flyway.url=jdbc:mysql://localhost:3306/td
flyway.user=root
flyway.password=root
Do you have any suggestions as to what could be going wrong?
So after a day of trying, I found a solution:
You have to add a DataSource to your application class. It will be auto-configured by Spring from your application.properties file, which you have to place in src/main/resources:
public class TimeReportApplication {

    @Autowired
    static DataSource dataSource;

    public static void main(String[] args) {
        PrintLog.print("Server started");
        System.out.println("Server started");
        Flyway flyway = new Flyway();
        flyway.setDataSource(dataSource);   // point Flyway at the Spring-configured DataSource before running any command
        flyway.clean();                     // drops all objects in the configured schemas
        flyway.setSqlMigrationPrefix("V");
        flyway.setBaselineOnMigrate(true);
        flyway.migrate();
        SpringApplication.run(TimeReportApplication.class, args);
    }
}
In your application.properties file, prefix each parameter with "spring.", e.g.:
spring.flyway.user=root
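For reference, here is a fuller application.properties sketch using the Spring Boot Flyway property names. The values simply mirror this question's setup, and spring.flyway.locations is shown with its default:

spring.flyway.url=jdbc:mysql://localhost:3306/td
spring.flyway.user=root
spring.flyway.password=root
spring.flyway.schemas=td
spring.flyway.baseline-on-migrate=true
spring.flyway.locations=classpath:db/migration

With these in place, Spring Boot's Flyway auto-configuration runs the migrations at startup, so the manual Flyway object above is only needed if you want to control the timing yourself.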
I have a very simple Spring Boot project with a KTable, and I want to customize my configuration in application.yml, but the config does not seem to be applied. This is my application.yml configuration file:
spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
    streams:
      application-id: ${APPLICATION_ID:train-builder-processor}
      buffered-records-per-partition: 50
    consumer:
      auto-offset-reset: earliest
      max-poll-records: ${MAX_POLL_RECORDS:50}
      max-poll-interval-ms: ${KAFKA_CONSUMER_MAX_POLL_INTERVAL_MS:1000}
      properties:
        spring:
          json:
            trusted:
              packages:
                - com.example.kafkastream
However, when starting the application the log outputs the following:
2022-03-03 08:20:06.992 INFO 32989 --- [ main] s.r.s.m.t.TrainBuilderApplication : Starting TrainBuilderApplication using Java 16.0.2 on MAPFVFG90ZQQ05P with PID 32989 (/Users/xxx/dev/train-builder-processor/target/classes started by xxx in /Users/xxx/dev/train-builder-processor)
2022-03-03 08:20:06.995 DEBUG 32989 --- [ main] s.r.s.m.t.TrainBuilderApplication : Running with Spring Boot v2.6.3, Spring v5.3.15
2022-03-03 08:20:06.995 INFO 32989 --- [ main] s.r.s.m.t.TrainBuilderApplication : No active profile set, falling back to default profiles: default
2022-03-03 08:20:08.856 INFO 32989 --- [ main] org.apache.kafka.streams.StreamsConfig : StreamsConfig values:
acceptable.recovery.lag = 10000
application.id = test.train-builder-processor
application.server =
bootstrap.servers = [localhost:9092]
buffered.records.per.partition = 1000
... (a bunch of other configs)
ConsumerConfig:
...
max.poll.interval.ms = 300000
max.poll.records = 1000
...
Below is the simple application class I'm using:
@EnableKafka
@EnableKafkaStreams
@SpringBootApplication
public class TrainBuilderApplication {

    ...

    @Autowired
    private TrainIdMapper trainIdMapper;

    @Autowired
    private TrainBuilder trainBuilder;

    public static void main(String[] args) {
        SpringApplication.run(TrainBuilderApplication.class, args);
    }

    @Bean
    public KTable<String, Train> trainTable(StreamsBuilder kStreamBuilder) {
        return kStreamBuilder
                .stream(Pattern.compile(sourceTopicsPattern), Consumed.with(Serdes.String(), myJsonSerde))
                .map(trainIdMapper)
                .filter((key, value) -> key != null)
                .groupByKey(Grouped.with(Serdes.String(), mySerde))
                .aggregate(() -> null, trainBuilder, trainStore);
    }
}
The values from my application.yml seem to be ignored. What could be the cause of this? What am I missing? Thanks in advance!
So I figured it out with the help of "How do I properly externalize spring-boot kafka-streams configuration in a properties file?".
Apparently, consumer and producer configs are completely separate from the streams config when using a KStream. To set consumer-level properties for the Kafka stream, you must use "additional properties", like so:
spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
    streams:
      application-id: ${APPLICATION_ID:train-builder-processor}
      cache-max-size-buffering: 1048576
      cleanup.on-shutdown: ${CLEANUP_ON_SHUTDOWN:false}
      properties:
        max:
          poll:
            records: 50
which was a bit unintuitive, but it works. Hope this can help someone in the future!
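For comparison, the same thing can be expressed programmatically by defining the Streams configuration bean yourself. This is a minimal sketch assuming spring-kafka is on the classpath; the class name TrainStreamsConfig and the hard-coded broker address and application id are stand-ins for this question's values:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaStreamsDefaultConfiguration;
import org.springframework.kafka.config.KafkaStreamsConfiguration;

@Configuration
public class TrainStreamsConfig {

    // Replaces the auto-configured Streams settings; every entry in this map,
    // including plain consumer keys such as max.poll.records, ends up in StreamsConfig.
    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public KafkaStreamsConfiguration kStreamsConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "train-builder-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
        return new KafkaStreamsConfiguration(props);
    }
}

The YAML route above is usually preferable because it keeps the environment-variable defaults, but the explicit bean makes it obvious which properties actually reach the Streams consumer.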
This is strange, but my Spring Boot API takes much longer than expected when deployed on AWS Lambda.
In the CloudWatch log, I see Spring Boot starting up twice: first with the default profile and then with the profile I set.
Why should it boot twice? That is costing a significant amount of time.
Source Code:
LambdaHandler.java
public class LambdaHandler implements RequestStreamHandler {

    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
            handler.activateSpringProfiles("lambda");
        } catch (ContainerInitializationException e) {
            // Re-throw the exception to force another cold start
            e.printStackTrace();
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }
Application.java
@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Both of these files are in the same package.
Config.java
@Configuration
@EnableWebMvc
@Profile("lambda")
public class Config {

    /**
     * Create required HandlerMapping, to avoid several default HandlerMapping instances being created
     */
    @Bean
    public HandlerMapping handlerMapping() {
        return new RequestMappingHandlerMapping();
    }

    /**
     * Create required HandlerAdapter, to avoid several default HandlerAdapter instances being created
     */
    @Bean
    public HandlerAdapter handlerAdapter() {
        return new RequestMappingHandlerAdapter();
    }

    ..
    ..
}
pom.xml
<dependency>
    <groupId>com.amazonaws.serverless</groupId>
    <artifactId>aws-serverless-java-container-spring</artifactId>
    <version>[0.1,)</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-events</artifactId>
    <version>3.1.0</version>
</dependency>
CloudWatch log
07:16:51.546 [main] INFO com.amazonaws.serverless.proxy.internal.LambdaContainerHandler - Starting Lambda Container Handler
:: Spring Boot ::
2020-09-05 07:16:52.724 INFO 1 --- [ main] lambdainternal.LambdaRTEntry : Starting LambdaRTEntry on 169.254.184.173 with PID 1 (/var/runtime/lib/LambdaJavaRTEntry-1.0.jar started by sbx_user1051 in /)
2020-09-05 07:16:52.726 INFO 1 --- [ main] lambdainternal.LambdaRTEntry : No active profile set, falling back to default profiles: default
2020-09-05 07:16:52.906 INFO 1 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#1e81f4dc: startup date [Sat Sep 05 07:16:52 UTC 2020]; root of context hierarchy
..
..
2020-09-05 07:16:57.222 INFO 1 --- [ main] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 40 ms
:: Spring Boot ::
2020-09-05 07:16:57.442 INFO 1 --- [ main] lambdainternal.LambdaRTEntry : Starting LambdaRTEntry on 169.254.184.173 with PID 1 (/var/runtime/lib/LambdaJavaRTEntry-1.0.jar started by sbx_user1051 in /)
2020-09-05 07:16:57.442 INFO 1 --- [ main] lambdainternal.LambdaRTEntry : The following profiles are active: lambda
2020-09-05 07:16:57.445 INFO 1 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#5ef60048: startup date [Sat Sep 05 07:16:57 UTC 2020]; root of context hierarchy
Why does it boot twice?
I suspect your call to activateSpringProfiles forces the reinitialisation.
handler.activateSpringProfiles("lambda");
https://github.com/awslabs/aws-serverless-java-container/blob/master/aws-serverless-java-container-spring/src/main/java/com/amazonaws/serverless/proxy/spring/SpringBootLambdaContainerHandler.java#L149
Try setting the active profile with the environment variable SPRING_PROFILES_ACTIVE as part of the Lambda configuration instead.
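For illustration, this is how that environment variable could be set in an AWS SAM template; the logical function name, handler path, runtime and CodeUri below are placeholders rather than values from the original post:

Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.LambdaHandler::handleRequest
      Runtime: java11
      CodeUri: target/app.jar
      Environment:
        Variables:
          SPRING_PROFILES_ACTIVE: lambda

The same variable can also be set on an existing function in the Lambda console, or with the aws lambda update-function-configuration CLI command.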
Java and serverless
If you use Java for serverless applications like AWS Lambdas, I would recommend looking for a framework that supports ahead-of-time compilation, which will greatly speed up your application's start.
For instance, have a look at Micronaut or Quarkus used with GraalVM.
Spring Boot is not the best option for use directly with AWS Lambdas.
I'm using an H2 embedded database for testing, and after the tests complete, I see the system trying to close the database twice; it then hangs after the last log line shown here:
...
2019-07-14 07:58:47.115 INFO 44844 --- [ Thread-2] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2019-07-14 07:58:47.115 INFO 44844 --- [ Thread-4] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2019-07-14 07:58:47.116 INFO 44844 --- [ Thread-2] .SchemaDropperImpl$DelayedDropActionImpl : HHH000477: Starting delayed evictData of schema as part of SessionFactory shut-down'
2019-07-14 07:58:47.116 INFO 44844 --- [ Thread-4] .SchemaDropperImpl$DelayedDropActionImpl : HHH000477: Starting delayed evictData of schema as part of SessionFactory shut-down'
2019-07-14 07:58:47.117 INFO 44844 --- [ Thread-4] o.s.j.d.e.EmbeddedDatabaseFactory : Shutting down embedded database: url='jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=false'
2019-07-14 07:58:47.117 INFO 44844 --- [ Thread-2] o.s.j.d.e.EmbeddedDatabaseFactory : Shutting down embedded database: url='jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=false'
This is happening with Spring Boot 2.1.5 and 2.1.6
In the test class, I set up the database this way:
@RunWith(SpringRunner.class)
@SpringBootTest
@TestPropertySource(locations = "classpath:application.yml")
@Slf4j
public class DBTest {

    ...

    static EmbeddedDatabase informixDB;
    static JdbcTemplate informixJDBCTemplate;

    @BeforeClass
    public static void initDb() {
        informixDB = new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
        informixJDBCTemplate = new JdbcTemplate(informixDB);
        ClassPathResource initSchema = new ClassPathResource("data/informix/InformixUp.sql");
        DatabasePopulator databasePopulator = new ResourceDatabasePopulator(initSchema);
        DatabasePopulatorUtils.execute(databasePopulator, informixDB);
    }

    @AfterClass
    public static void dropDb() {
        ClassPathResource drop = new ClassPathResource("data/informix/InformixDown.sql");
        DatabasePopulator databasePopulator = new ResourceDatabasePopulator(drop);
        DatabasePopulatorUtils.execute(databasePopulator, informixDB);
    }
I have this in my test/application.yml, though it seems to be ignored:
spring:
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
  h2:
    console:
      path: /h2-console
      enabled: true
      settings:
        web-allow-others: true
        # trace: true
  datasource:
    url: jdbc:h2:mem:informixDB;AUTO_SERVER=TRUE
    username: sa
    password:
I have made a Spring Boot test for testing JMS consumption.
The test looks like this:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = Application.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD)
public class UpdateThingByJmsIntegrationTest {

    @Test
    @Rollback(false)
    public void updateThingByJmsUpdatesDatabase() throws InterruptedException {
        final Thing thing = new ThingBuilder().withId(null).build();

        final TransactionTemplate transactionTemplate = new TransactionTemplate(transactionManager);
        transactionTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
        transactionTemplate.execute(transactionStatus -> {
            thingRepository.save(thing);
            return thing;
        });

        final String xml = String.format(
                "<thingDto><id>%s</id><name>something else</name><location>somewhere</location></thingDto>",
                thing.getId());
        jmsMessagingTemplate.convertAndSend(thingUpdateQueue, xml);

        Thread.sleep(1500L);

        final Thing updatedThing = thingRepository.getOne(thing.getId());
        assertNotNull(updatedThing);
        assertEquals("something else", updatedThing.getName());
        assertEquals("somewhere", updatedThing.getLocation());
    }
So, I save a Thing in the database, then send a JMS message to update the Thing. Since JMS consumption happens in a separate thread from the test itself, I wait, and then try to verify that the Thing has been updated.
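As an aside, the fixed Thread.sleep in that verification step could be replaced with polling. A minimal sketch assuming Awaitility (org.awaitility.Awaitility.await) is on the test classpath; this is an illustration, not code from the original test:

// Replacement for the Thread.sleep(1500L) plus assertions: poll until the
// JMS listener has applied the update instead of sleeping for a fixed time.
await().atMost(5, TimeUnit.SECONDS).untilAsserted(() -> {
    final Thing updatedThing = thingRepository.getOne(thing.getId());
    assertEquals("something else", updatedThing.getName());
    assertEquals("somewhere", updatedThing.getLocation());
});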
The test works just fine in IntelliJ, but when running it with Maven it fails, because the thread consuming the JMS message cannot find the Thing in the database.
I have tried printing the object hashcode (identity) of the ThingRepository in both the test and the code consuming the JMS message, and they come out different; with IntelliJ they are the same. I suspect this might be part of the problem, but I'm not sure how to avoid it.
I also compared the log output in IntelliJ vs Maven, and I find that Maven prints these lines before the test is even run, which IntelliJ does not. I don't know if it is relevant.
2019-05-13 09:48:53.983 INFO 9271 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2019-05-13 09:48:53.995 INFO 9271 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2019-05-13 09:48:53.996 INFO 9271 --- [ main] .SchemaDropperImpl$DelayedDropActionImpl : HHH000477: Starting delayed evictData of schema as part of SessionFactory shut-down'
2019-05-13 09:48:54.000 INFO 9271 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-3 - Shutdown initiated...
2019-05-13 09:48:54.001 INFO 9271 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-3 - Shutdown completed.
But why would I get a different repository object in the test and in the class under test?
Update:
It turns out this only happens when running the test in question in the same run as another test. In that other test, I have:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = Application.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD)
public class OtherIntegrationTest {

    @MockBean
    private ThingRepository thingRepository;
It seems this "bleeds" through to my other test, making the context use a mock while my test uses the real deal. Is there any way to avoid this, or do I have to find an alternative to using @MockBean?
This could be caused by a lack of proper test isolation, i.e. the updateThingByJmsUpdatesDatabase test works by itself but fails when run as part of the test suite during the build, e.g. when tests are run with mvn clean install.
You should verify this by running that single test on its own with Maven:
mvn test -Dtest=ClassName.updateThingByJmsUpdatesDatabase
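If the single test passes on its own, one way to rule out cross-test contamination at the build level is to stop Surefire from reusing forked JVMs between test classes. A sketch for the pom.xml, with the plugin version omitted; note that this slows the build down:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- run each test class in its own JVM so Spring contexts and mocks cannot leak between classes -->
        <forkCount>1</forkCount>
        <reuseForks>false</reuseForks>
    </configuration>
</plugin>

A fresh JVM per test class means each class builds its own application context, so a @MockBean declared in one test class cannot bleed into another.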
I have an environment variable on an Ubuntu server, SDB_DOMAIN, that I'm trying to pass to this Gradle properties file:
https://github.com/Netflix/SimianArmy/blob/master/src/main/resources/janitor.properties#L20
What's the syntax to pull environment variables into a properties file like this? I've tried a couple of different ways, one example being simianarmy.janitor.snapshots.ownerId = System.getenv("SIMIAN_OWNER_ID"), but that just returns the literal value when I start the Jetty server with gradlew jettyRun and watch the logs.
19:55:53.957 [main] INFO c.n.s.basic.BasicSimianArmyContext - using standard class for simianarmy.client.recorder.class
19:55:54.060 [main] INFO c.n.simianarmy.aws.SimpleDBRecorder - Creating SimpleDB domain: "System.getenv(SDB_DOMAIN)"
19:55:54.122 [main] WARN c.n.simianarmy.aws.SimpleDBRecorder - Error while trying to auto-create SimpleDB domain
com.amazonaws.services.simpledb.model.InvalidParameterValueException: Value ("System.getenv(SDB_DOMAIN)") for parameter DomainName is invalid. (Service: AmazonSimpleDB; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 4aabdeb2-68a5-0f49-dacd-17c96f375793)
Here is what I did. I wanted my Spring Boot application to show me the $HOME variable.
My application.properties file:
variable.home = #{ systemEnvironment['HOME'] }
Class that is using it:
@Component
public class SomeName implements CommandLineRunner {

    @Value("${variable.home}")
    String home;

    @Override
    public void run(String... args) throws Exception {
        System.out.println(home);
    }

    public String getHome() {
        return home;
    }

    public void setHome(String home) {
        this.home = home;
    }
}
Spring Boot startup log:
2015-12-10 17:46:07.622 INFO 5710 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2015-12-10 17:46:07.652 INFO 5710 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
/home/dogbert
2015-12-10 17:46:07.655 INFO 5710 --- [ main] com.example.DemoApplication : Started DemoApplication in 1.431 seconds (JVM running for 1.614)
and echo $HOME:
dogbert@borsuk:~$ echo $HOME
/home/dogbert
dogbert@borsuk:~$
I hope this helps.
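As a side note, the SpEL indirection is not strictly required in Spring Boot: environment variables are part of the Environment, so a plain placeholder in application.properties also resolves. A minimal sketch using the same $HOME example:

variable.home=${HOME}

With that entry, the same @Value("${variable.home}") injection still prints /home/dogbert, because Spring Boot registers the system environment as a property source.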