Spring Boot's test framework has an option to select a random port for a test run with 'server.port=0'. The documentation suggests grabbing the port with a Spring @Value, but I want to use it to set the baseUrl in my GebConfig.groovy. Is there a way to access the dynamic port number from within the ConfigSlurper?
Simply override GebSpec.createConf() in a base spec:
#Value("${local.server.port}")
int port
Configuration createConf() {
def configuration = super.createConf()
configuration.baseUrl = "http://localhost:$port"
configuration
}
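For context: local.server.port is only published by Spring Boot once the embedded container has started, so by the time the spec's fields are injected the random port is already known. This assumes the spec is set up to start a real server on a random port in the first place (for example via server.port=0, as in the question).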
I'm using Spring Boot v2.7.2 and the latest version of Spring Kafka provided by spring-boot-dependencies:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
I want the app to load all configuration from file, hence I created the beans with this bare-minimum configuration:
@Configuration
public class KafkaConfig {

    @Bean
    public ProducerFactory<Integer, FileUploadEvent> producerFactory() {
        return new DefaultKafkaProducerFactory<>(Collections.emptyMap());
    }

    @Bean
    public KafkaTemplate<Integer, FileUploadEvent> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
It works and loads the configuration from the application.yaml below as expected.
spring:
  application:
    name: my-app
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      client-id: ${spring.application.name}
      # transaction-id-prefix: "tx-"
    template:
      default-topic: my-topic
However, if I uncomment the transaction-id-prefix line, the application fails to start with the exception
java.lang.IllegalArgumentException: The 'ProducerFactory' must support transactions
The documentation here reads:
If you provide a custom producer factory, it must support
transactions. See ProducerFactory.transactionCapable().
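In other words, setting the property makes Boot auto-configure a KafkaTransactionManager against the ProducerFactory, and DefaultKafkaProducerFactory only reports transactionCapable() == true once a transaction-id-prefix has been set on the factory itself; that never happens for a hand-built factory that ignores the property.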
The only way I managed to make it work was to remove the transaction prefix from application.yaml and configure it in code, as below:
@Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
    pf.setTransactionIdPrefix("tx-");
    return pf;
}
Any thoughts on how I can configure everything using the application properties file? Is this a bug?
The only solution at the moment is to set the transaction-id-prefix in code while creating the ProducerFactory, even though it is already defined in application.yaml.
The Spring Boot team replied as follows:
The intent is that transactions should be used and that the ProducerFactory should support them. The transaction-id-prefix property can be set and this results in the auto-configuration of the kafkaTransactionManager bean. However, if you define your own ProducerFactory (to constrain the types, for example) there's no built-in way to have the transaction-id-prefix applied to that ProducerFactory.
It's a fundamental principle of auto-configuration that it backs off when a user defines a bean of their own. If we post-processed the user's bean to change its configuration, it would no longer be possible for your code to take complete control of how things are configured. Unfortunately, this flexibility does sometimes require you to write a little bit more code. This is one such time.
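A sketch of that "little bit more code", assuming Boot's auto-configured KafkaProperties bean is available (it is whenever spring-kafka is on the classpath): build the factory from Boot's own properties object, so every producer setting still comes from application.yaml:

@Bean
public ProducerFactory<Integer, FileUploadEvent> producerFactory(KafkaProperties kafkaProperties) {
    // buildProducerProperties() yields the same map Boot would use for its own factory
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(
            kafkaProperties.buildProducerProperties());
    // the prefix is not part of the producer config map, so apply it explicitly
    pf.setTransactionIdPrefix(kafkaProperties.getProducer().getTransactionIdPrefix());
    return pf;
}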
If we want to keep the prefix as a property in the application.yaml file, we can inject it to avoid config duplication:
#Value("${spring.kafka.producer.transaction-id-prefix}")
private String transactionPrefix;
#Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
pf.setTransactionIdPrefix(transactionIdPrefix);
return pf;
}
I use this library to replace links to AWS parameters with their actual values: https://github.com/NitorCreations/spring-property-aws-ssm-resolver/blob/master/src/main/java/com/nitorcreations/spring/aws/SpringPropertySSMParameterResolver.java
The library uses an EnvironmentPostProcessor to substitute the parameters. I then use the following component to read the properties:
@Data
@Component
@ConfigurationProperties(prefix = "activemq.connection")
public class ConnectionProperties {
    // fields bound from activemq.connection.* properties omitted here
}
The library replaces all Spring properties except those bound via @ConfigurationProperties, and I can't understand why. Is there some ordering to how these beans are processed?
UPD: From the library description:
spring-property-aws-ssm-resolver is a small Spring Boot plugin for resolving AWS SSM Parameters during startup simply by using prefixed regular Spring Boot properties.
Set up your Spring Properties with the {ssmParameter} prefix.
Example application.yml:
my.regular.property: 'Foo'
my.secret.property: '{ssmParameter}/myproject/myapp/mysecret'
During startup, the plugin would look for properties with this prefix and replace the value by looking for a property called /myproject/myapp/mysecret on AWS SSM.
I expected the library to replace all the values in @ConfigurationProperties classes with values from AWS SSM, but it actually doesn't; it leaves them unchanged. I think the reason lies in the initialization order of @ConfigurationProperties binding and the EnvironmentPostProcessor.
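For reference, here is a minimal, hypothetical sketch of how such an EnvironmentPostProcessor-based resolver hooks in (illustrative names, not the library's actual code). It runs very early, before the context refresh in which @ConfigurationProperties binding takes place:

import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.EnumerablePropertySource;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.PropertySource;

// Must be registered in META-INF/spring.factories under
// org.springframework.boot.env.EnvironmentPostProcessor
public class SsmResolvingPostProcessor implements EnvironmentPostProcessor {

    private static final String PREFIX = "{ssmParameter}";

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment env, SpringApplication app) {
        Map<String, Object> resolved = new HashMap<>();
        for (PropertySource<?> source : env.getPropertySources()) {
            if (source instanceof EnumerablePropertySource) {
                for (String name : ((EnumerablePropertySource<?>) source).getPropertyNames()) {
                    Object value = source.getProperty(name);
                    if (value instanceof String && ((String) value).startsWith(PREFIX)) {
                        resolved.put(name, fetchFromSsm(((String) value).substring(PREFIX.length())));
                    }
                }
            }
        }
        // addFirst gives the resolved values precedence over the raw '{ssmParameter}...' ones
        env.getPropertySources().addFirst(new MapPropertySource("ssmResolved", resolved));
    }

    private String fetchFromSsm(String parameterName) {
        // call AWS SSM GetParameter here; stubbed for the sketch
        return parameterName;
    }
}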
I am using Spring Cloud Stream along with the Kafka binder to connect to a Kafka cluster using SASL. The SASL config looks as follows:
spring.cloud.stream.kafka.binder.configuration.sasl.mechanism=SCRAM-SHA-512
spring.cloud.stream.kafka.binder.configuration.sasl.jaas.config= .... required username="..." password="..."
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
I want to update the username and password programmatically, at runtime. How can I do that in Spring Cloud Stream with the Kafka binder?
Side note:
Using BinderFactory I can get a reference to the KafkaMessageChannelBinder, which holds KafkaBinderConfigurationProperties; in its configuration hash map I can see those settings. How can I update that configuration at runtime so that the changes are reflected in the connections as well?
@Autowired
BinderFactory binderFactory;
....

public void foo() {
    KafkaMessageChannelBinder k = (KafkaMessageChannelBinder) binderFactory.getBinder(null, MessageChannel.class);
    // Using the debugger I inspected k.configurationProperties.configuration,
    // which has the SASL properties I need to update
}
The JAAS username and password can be provided via configuration, which also means they can be overridden through the same properties at runtime.
Here is an example: https://github.com/spring-cloud/spring-cloud-stream-samples/blob/master/multi-binder-samples/kafka-multi-binder-jaas/src/main/resources/application.yml#L26
At runtime, you can override the values set in application.properties. For example, if you are running the app using java -jar, you can simply pass the property along with it: spring.cloud.stream.kafka.binder.jaas.options.username. The new value will then take effect for the duration of that run.
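Concretely, assuming the property structure from the linked sample: java -jar app.jar --spring.cloud.stream.kafka.binder.jaas.options.username=newUser --spring.cloud.stream.kafka.binder.jaas.options.password=newPassword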
I came across this problem yesterday and spent about 3-4 hours figuring out how to programmatically update the username and password in Spring Cloud Stream with the Kafka binder, since one cannot (and should not) store passwords in Git (Spring Boot version 2.5.2).
Overriding the bean KafkaBinderConfigurationProperties works.
@Bean
@Primary
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(KafkaBinderConfigurationProperties properties) {
    // ${...} placeholders stand for values fetched from an external system such as Vault;
    // note the quotes and trailing semicolon that the JAAS config syntax requires
    String saslJaasConfigString = "org.apache.kafka.common.security.scram.ScramLoginModule required "
            + "username=\"${USERNAME_FROM_EXTERNAL_SYSTEM_LIKE_VAULT}\" "
            + "password=\"${PASSWORD_FROM_EXTERNAL_SYSTEM_LIKE_VAULT}\";";
    Map<String, String> configMap = properties.getConfiguration();
    configMap.put(SaslConfigs.SASL_JAAS_CONFIG, saslJaasConfigString);
    return properties;
}
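SaslConfigs.SASL_JAAS_CONFIG is the sasl.jaas.config constant from org.apache.kafka.common.config.SaslConfigs. The method mutates the configuration map of the injected binder properties, so the binder picks up the new JAAS entry when it creates its Kafka clients.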
To explain why I need this: the application's main purpose is to wrap another web server, with the intent of adding restrictions and certain per-call calculations. This is achieved using a Zuul proxy and its filters. The application itself also has to call that other web server, often in the context of a certain user. Because of this, such calls are not made against the other web server directly but are looped back through the application itself (i.e. a RestTemplate is used that points at the application itself), so that any security filters in place for the request are applied as well.
The definition for rest template looks like this:
@Bean
@Primary
public RestTemplate zuulRestTemplate(
        @Value("${server.port}") String port
) {
    RestTemplate restTemplate = new RestTemplateBuilder()
            .rootUri("http://localhost:" + port)
            .build();
    MappingJackson2HttpMessageConverter messageConverter = restTemplate.getMessageConverters().stream()
            .filter(MappingJackson2HttpMessageConverter.class::isInstance)
            .map(MappingJackson2HttpMessageConverter.class::cast)
            .findFirst()
            .orElseThrow(() -> new RuntimeException("MappingJackson2HttpMessageConverter not found"));
    messageConverter.getObjectMapper().registerModule(new VavrModule());
    return restTemplate;
}
and this works for normal execution. The problem is with tests: running the whole test suite, a lot of tests failed with "port already in use", because Spring doesn't seem to wait for one test's server to shut down before launching the next. I found that the solution is to specify WebEnvironment.RANDOM_PORT in the @SpringBootTest annotation, but then my RestTemplate no longer works, because the server.port property is 0. I tried to solve this by adding a @Value("${local.server.port}") parameter to my RestTemplate bean method (so as to use server.port when it's non-zero and local.server.port otherwise), but Spring says it cannot resolve it.
I was in the same situation: one REST client calls another Spring Boot app, and that one calls itself; its port was defined as 8090, and I was running everything from an integration test with a random port. With

@LocalServerPort
int port;

I got Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'local.server.port' in value "${local.server.port}". I tried the Environment and a couple of other methods, but all failed. The solution for me was ServletWebServerApplicationContext; example code is below:
@RestController
public class MyController {

    @Autowired
    ServletWebServerApplicationContext ctx;

    @GetMapping("/port")
    public int printPort() {
        return ctx.getWebServer().getPort();
    }
}
This also helps if you have multiple Spring Boot applications in one test run: each of them can discover its own actual port this way.
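Applying the same idea back to the RestTemplate bean from the question, a hedged sketch (my own combination, not part of the answer above; @Lazy defers bean creation until first use, by which time the web server has started and its real port is known):

@Bean
@Primary
@Lazy
public RestTemplate zuulRestTemplate(ServletWebServerApplicationContext ctx) {
    // getPort() returns the actual port, whether fixed or randomly assigned
    return new RestTemplateBuilder()
            .rootUri("http://localhost:" + ctx.getWebServer().getPort())
            .build();
}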
How can I set the name of an embedded database started in a Spring Boot app running in a test?
I'm starting two instances of the same Spring Boot app as part of a test, as they collaborate. They are both correctly starting an HSQL database, but defaulting to a database name of testdb despite being provided different values for spring.datasource.name.
How can I provide different database names, or some other means of isolating the two databases? What is going to be the 'lightest touch'? If I can avoid it, I'd rather control this with properties than adding beans to my test config - the test config classes shouldn't be cluttered up because of this one coarse-grained collaboration test.
Gah - setting spring.datasource.name changes the name of the datasource, but not the name of the database.
Setting spring.datasource.url=jdbc:hsqldb:mem:mydbname does exactly what I need. It's a bit crap that I have to hardcode the embedded database implementation, but Spring Boot uses an enum for the default values, which would mean a bigger rewrite if it were to take the name from a property.
You can try it like this:
spring.datasource1.name=testdb
spring.datasource2.name=otherdb
And then declare the datasources in your ApplicationConfig like this:
@Bean
@ConfigurationProperties(prefix = "spring.datasource1")
public DataSource dataSource1() {
    ...
}

@Bean
@ConfigurationProperties(prefix = "spring.datasource2")
public DataSource dataSource2() {
    ...
}
See official docs for more details: https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html#howto-configure-a-datasource
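For completeness, a hedged way to fill in those elided bodies (a sketch using Spring Boot's DataSourceProperties; bean names are illustrative):

@Bean
@ConfigurationProperties(prefix = "spring.datasource1")
public DataSourceProperties dataSource1Properties() {
    return new DataSourceProperties();
}

@Bean
public DataSource dataSource1() {
    // initializeDataSourceBuilder() applies the bound url, username, password and driver
    return dataSource1Properties().initializeDataSourceBuilder().build();
}

and likewise for spring.datasource2.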