I have an Elasticsearch instance running locally.
I have a Spring Boot application.
In my application I have a service, ServiceX, which contains an Elasticsearch repository that extends ElasticsearchRepository. So ServiceX contains
YRepository extends ElasticsearchRepository
My Elasticsearch settings are:
ELASTICSEARCH (ElasticsearchProperties)
spring.data.elasticsearch.properties.http.enabled=true
spring.data.elasticsearch.properties.host=localhost
spring.data.elasticsearch.properties.port=9300
When the application is started, an Elasticsearch template is created.
The client that is used is a NodeClient.
The settings for the NodeClient are:
"http.enabled" -> "true"
"port" -> "9300"
"host" -> "localhost"
"cluster.name" -> "elasticsearch"
"node.local" -> "true"
"name" -> "Human Robot"
"path.logs" -> "C:/dev/git/xxx/logs"
The node name of the Elasticsearch client ("Human Robot" in this case) does not match the locally running Elasticsearch instance ("Nikki" in this case).
It looks like it
1. creates a new instance of Elasticsearch, and
2. runs it as an embedded instance
rather than connecting to the instance that is already running.
I have searched through a lot of information but cannot find any documentation to help.
Could people please advise about what settings to use?
Thanks.
I believe you do not want to use the NodeClient but the TransportClient, unless you want your application to become part of the cluster.
I believe you have the following dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
Then you need to create a configuration class as follows:
@Configuration
@PropertySource(value = "classpath:config/elasticsearch.properties")
public class ElasticsearchConfiguration {

    @Resource
    private Environment environment;

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(
                environment.getProperty("elasticsearch.host"),
                Integer.parseInt(environment.getProperty("elasticsearch.port"))
        );
        client.addTransportAddress(address);
        return client;
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchTemplate(client());
    }
}
Also check the Elasticsearch section of the Spring Boot guide, especially the part about spring.data.elasticsearch.cluster-nodes: if you supply a comma-separated list of host:port pairs, a TransportClient will be created instead. Your choice.
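For example, in application.properties (the second host is only a placeholder):
spring.data.elasticsearch.cluster-nodes=localhost:9300,otherhost:9300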
Try it, hope it helps
Thanks. Would you believe I actually just started trying to use a configuration file before I saw your post. I added a configuration class
@Configuration
public class ElasticSearchConfig {

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress("localhost", 9300);
        client.addTransportAddress(address);
        return client;
    }
}
And the client is now being injected into the Elasticsearch template (so I don't need the elasticsearchTemplate bean).
I had an error when I tried to connect, but that turned out to be due to Elasticsearch 2.2.0; I tried it with Elasticsearch 1.7.3 and it worked, so now on to the next problem!
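(For what it's worth, the transport client API changed in Elasticsearch 2.x, so with a matching Spring Data Elasticsearch version the bean would be built roughly like this; a sketch only, not verified against 2.2.0:)
TransportClient client = TransportClient.builder().build()
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));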
I created a simple Spring Boot app to retrieve secrets from Key Vault.
I added the following dependency to work with it:
<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>azure-spring-boot-starter-keyvault-secrets</artifactId>
    <version>3.5.0</version>
</dependency>
and added the following in application.properties
azure.keyvault.enabled=true
azure.keyvault.uri=<URL>
#keys
mySecretProperty=secret
and my main application,
@SpringBootApplication
public class KeyVaultSample implements CommandLineRunner {

    @Value("${mySecretProperty}")
    private String mySecretProperty;

    public static void main(String[] args) {
        SpringApplication.run(KeyVaultSample.class, args);
    }

    @Override
    public void run(String... args) {
        System.out.println("property your-property-name value is: " + mySecretProperty);
    }
}
But every time I try to run the above app locally, it tries to use ManagedIdentityCredential to connect. So I added a configuration class that creates a SecretClient bean with AzureCliCredential, but the result is the same.
My configuration class:
@Configuration
public class AppConfiguration {

    @Bean
    public SecretClient secretClient() {
        AzureCliCredential az = new AzureCliCredentialBuilder().build();
        SecretClient sec = new SecretClientBuilder()
                .vaultUrl("<url>")
                .credential(az)
                .buildClient();
        return sec;
    }
}
I'm looking for ways I could use/test this keyvault on my local.
Is there any configuration I could put in the properties file which would make it use AzureCliCredential instead of ManagedIdentityCredential?
azure-spring-boot-starter-keyvault-secrets uses MSI / Managed identities.
If you would like to authenticate with Azure CLI, just use azure-identity and azure-security-keyvault-secrets.
public void getSecretWithAzureCliCredential() {
    AzureCliCredential cliCredential = new AzureCliCredentialBuilder().build();

    // Azure SDK client builders accept the credential as a parameter
    SecretClient client = new SecretClientBuilder()
            .vaultUrl("https://{YOUR_VAULT_NAME}.vault.azure.net")
            .credential(cliCredential)
            .buildClient();

    KeyVaultSecret secret = client.getSecret("<secret-name>");
    System.out.printf("Retrieved secret with name \"%s\" and value \"%s\"%n", secret.getName(), secret.getValue());
}
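For that you would swap the starter for the two client libraries; roughly like this (the version numbers below are placeholders, check Maven Central for current ones):
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-identity</artifactId>
    <version>1.x.x</version>
</dependency>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-security-keyvault-secrets</artifactId>
    <version>4.x.x</version>
</dependency>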
If you don't necessarily need the real thing in local (a test double can be fine instead of Azure Key Vault) you could try Lowkey Vault too! It supports keys and secrets using a local container and you can fake the authentication using a simple basic auth.
Project home: https://github.com/nagyesta/lowkey-vault
Java POC (although not using the Spring Boot starter): https://github.com/nagyesta/lowkey-vault-example
I'm making an application that uses Spring Boot, MySQL and Redis on the Back End and Angular on the Front End. I want to deploy it to Heroku so I could use my front end with it but I just can't seem to configure the remote URL for Redis. I have the Redis To Go Add-on on Heroku for this with the remote URL ready. I just don't know how to configure the environment variables to access that instead of localhost and the default port 6379.
I added the following lines to my application.properties but it still did not work:
spring.redis.url= #URL
spring.redis.host= #HOSTNAME
spring.redis.password= #PASSWORD
spring.redis.port= #PORT
I keep getting the following error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'enableRedisKeyspaceNotificationsInitializer' defined in class path resource [org/springframework/session/data/redis/config/annotation/web/http/RedisHttpSessionConfiguration.class]: Invocation of init method failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis on localhost:6379; nested exception is com.lambdaworks.redis.RedisConnectionException: Unable to connect to localhost/127.0.0.1:6379
Is there any way I could configure this to access the remote url instead of localhost?
I'm using Lettuce and not Jedis, and my HttpSessionConfig file is:
@EnableRedisHttpSession
public class HttpSessionConfig {

    @Bean
    public LettuceConnectionFactory connectionFactory() {
        return new LettuceConnectionFactory();
    }
}
I was having a similar issue, this is how I solved it:
If you define the bean this way:
@Bean
public LettuceConnectionFactory connectionFactory() {
    return new LettuceConnectionFactory();
}
You are not allowing Spring to take the Redis values from the application.properties.
To make it work, please do the following:
Remove this bean definition:
@Bean
public LettuceConnectionFactory connectionFactory() {
    return new LettuceConnectionFactory();
}
Define the RedisTemplate bean this way:
@Bean
public RedisTemplate<String, Object> deliveryRedisTemplate(RedisConnectionFactory connectionFactory) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    return template;
}
Define the following values in application.properties:
spring.redis.database=<replace-me>
spring.redis.host=<replace-me>
spring.redis.port=<replace-me>
spring.redis.timeout=<replace-me>
I was frustrated with the exact same issue you are facing, and the Spring Boot documentation and examples do nothing to address it. The reason your config entries are not being used is that you are creating a new instance of LettuceConnectionFactory with a parameter-less constructor. Digging into the source/byte code, you can see that this constructor completely ignores the spring.redis.host and spring.redis.port values and hardcodes them to localhost:6379.
What you should be doing is either:
1. Use the LettuceConnectionFactory(hostname, port) constructor (see the sketch after this list), or
2. Don't define the connection factory at all. Get rid of the entire @Bean entry for connectionFactory() and Spring will automatically wire everything up and use your config entries.
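A minimal sketch of option 1, assuming the standard spring.redis.host and spring.redis.port properties injected via @Value:
@Bean
public LettuceConnectionFactory connectionFactory(
        @Value("${spring.redis.host}") String host,
        @Value("${spring.redis.port}") int port) {
    // Pass the host and port explicitly instead of relying on the no-arg constructor
    return new LettuceConnectionFactory(host, port);
}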
Also, as a side note: to get this working with AWS ElastiCache (locally via VPN) I had to add this to that config class:
@Bean
public static ConfigureRedisAction configureRedisAction() {
    return ConfigureRedisAction.NO_OP;
}
I found that the Heroku documentation for connecting to their Redis add-on (https://devcenter.heroku.com/articles/heroku-redis#connecting-in-java) contains an example using Jedis and therefore needed a little adaptation. The content of the REDIS_URL added by Heroku to the environment of your running app resembles
redis://h:credentials#host:port
I parsed this using RedisURI.create and then set the host, port and password parameters of RedisStandaloneConfiguration
val uri = RedisURI.create(configuration.redisUrl)
val config = RedisStandaloneConfiguration(uri.host, uri.port)
config.setPassword(uri.password)
return LettuceConnectionFactory(config)
The code above is Kotlin rather than Java but perhaps it will help? You can find the full code at https://github.com/DangerousDarlow/springboot-redis
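If you prefer Java, a rough equivalent sketch (assuming the REDIS_URL environment variable set by Heroku and Lettuce's RedisURI class):
@Bean
public LettuceConnectionFactory connectionFactory() {
    // REDIS_URL resembles redis://h:credentials@host:port
    RedisURI uri = RedisURI.create(System.getenv("REDIS_URL"));
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(uri.getHost(), uri.getPort());
    config.setPassword(RedisPassword.of(uri.getPassword()));
    return new LettuceConnectionFactory(config);
}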
You need to choose between depending on autoconfiguration or defining your custom connection template.
The first way is to remove HttpSessionConfig; then your Redis properties from the application.properties file will be applied. And as you have the spring-session-data-redis dependency on your classpath, your Lettuce connection will be created implicitly.
The second solution is defining your connection properties (host, port, password) inside the LettuceConnectionFactory.
However, it is recommended to use autoconfiguration.
Set the configuration in application.properties and RedisStandaloneConfiguration.
@Configuration
@PropertySource("application.properties")
@EnableRedisHttpSession
public class SpringRedisConfig {

    @Autowired
    private Environment env;

    @Bean
    public LettuceConnectionFactory connectionFactory() {
        RedisStandaloneConfiguration redisConf = new RedisStandaloneConfiguration();
        redisConf.setHostName(env.getProperty("spring.redis.host"));
        redisConf.setPort(Integer.parseInt(env.getProperty("spring.redis.port")));
        redisConf.setPassword(RedisPassword.of(env.getProperty("spring.redis.password")));
        return new LettuceConnectionFactory(redisConf);
    }
}
I have an ElastiCache setup with one master and two slaves. I am still not sure how to pass in a list of master/slave RedisURIs to construct a StatefulRedisMasterSlaveConnection for LettuceConnectionFactory. I only see support for standaloneConfiguration with a single host and port.
LettuceClientConfiguration configuration = LettuceTestClientConfiguration.builder().readFrom(ReadFrom.SLAVE).build();
LettuceConnectionFactory factory = new LettuceConnectionFactory(SettingsUtils.standaloneConfiguration(),configuration);
I know there is a similar question Configuring Spring Data Redis with Lettuce for Redis master/slave
But I don't think it works for an ElastiCache master/slave setup, as currently the above code would try to use MasterSlaveTopologyProvider to discover slave IPs. However, the slave IP addresses are not reachable. So what's the right way to configure Spring Data Redis to make it compatible with master/slave ElastiCache? It seems to me LettuceConnectionFactory needs to take in a list of endpoints and use StaticMasterSlaveTopologyProvider in order to work.
There have been further improvements in AWS and Lettuce making it easier to support Master/Slave.
One improvement that has happened recently in AWS is that it has launched reader endpoints for Redis, which distribute load among replicas: Amazon ElastiCache launches reader endpoints for Redis.
Hence the best way to connect to Redis using Spring Data Redis will be to use the primary endpoint (master) and reader endpoint (for replicas) of the Redis cluster.
You can get both of them from the AWS console. Here is a sample code:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.SLAVE_PREFERRED)
            .build();

    RedisStaticMasterReplicaConfiguration redisStaticMasterReplicaConfiguration =
            new RedisStaticMasterReplicaConfiguration(REDIS_CLUSTER_PRIMARY_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.addNode(REDIS_CLUSTER_READER_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.setPassword(redisPassword);

    return new LettuceConnectionFactory(redisStaticMasterReplicaConfiguration, clientConfig);
}
Right now, static Master/Slave with provided endpoints is not supported by Spring Data Redis. I filed a ticket to add support for that.
You can implement this functionality yourself by subclassing LettuceConnectionFactory and creating your own configuration and LettuceConnectionFactory.
You would start with something like:
public static class MyLettuceConnectionFactory extends LettuceConnectionFactory {

    private final MyMasterSlaveConfiguration configuration;

    public MyLettuceConnectionFactory(MyMasterSlaveConfiguration standaloneConfig,
            LettuceClientConfiguration clientConfig) {
        super(standaloneConfig, clientConfig);
        this.configuration = standaloneConfig;
    }

    @Override
    protected LettuceConnectionProvider doCreateConnectionProvider(AbstractRedisClient client, RedisCodec<?, ?> codec) {
        return new ElasticacheConnectionProvider((RedisClient) client, codec, getClientConfiguration().getReadFrom(),
                this.configuration);
    }
}
static class MyMasterSlaveConfiguration extends RedisStandaloneConfiguration {

    private final List<RedisURI> endpoints;

    public MyMasterSlaveConfiguration(List<RedisURI> endpoints) {
        this.endpoints = endpoints;
    }

    public List<RedisURI> getEndpoints() {
        return endpoints;
    }
}
You can find all code in this gist, not posting all code here as it would be a wall of code.
I've got a working Spring Boot Elasticsearch application which uses one of two profiles: application.dev.properties or application.prod.properties. That part works fine. I am having an issue with getting the external Elasticsearch configuration to be read from application.xxx.properties.
This works:
@Configuration
@PropertySource(value = "classpath:config/elasticsearch.properties")
public class ElasticsearchConfiguration {

    @Resource
    private Environment environment;

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(
                environment.getProperty("elasticsearch.host"),
                Integer.parseInt(environment.getProperty("elasticsearch.port"))
        );
        client.addTransportAddress(address);
        return client;
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchTemplate(client());
    }
}
but obviously doesn't solve my multi-environment issue.
I've also tried @Value annotations for the host and port variables without success.
How can I convert the above to read its values from the application properties file, or choose a different @PropertySource file based on whichever profile I want to run?
spring.data.elasticsearch.properties.host=10.10.1.10
spring.data.elasticsearch.properties.port=9300
Thanks
Remove your configuration class and properties.
Add the following dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
Just add the spring.data.elasticsearch properties to an application-prod.properties and an application-dev.properties and set the value for the desired environment. This is described in the Elasticsearch section of the Spring Boot guide.
spring.data.elasticsearch.cluster-nodes=10.10.1.10:9300
The value in either file will of course differ (or put the default in application.properties and simply override it in application-dev.properties).
Spring Boot will load the desired properties file based on spring.profiles.active.
There is no need to hack around yourself.
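To make the profile part concrete, a small sketch (the dev value below is only a placeholder):
# application-dev.properties
spring.data.elasticsearch.cluster-nodes=localhost:9300

# application-prod.properties
spring.data.elasticsearch.cluster-nodes=10.10.1.10:9300
The active profile can then be selected at startup, for example with --spring.profiles.active=prod or the SPRING_PROFILES_ACTIVE environment variable.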
I agree with Deinum: if you are using Spring Boot, it will get the properties from the active profile.
I have different profiles in my project and this is my elasticsearch configuration:
@Configuration
public class ElasticSearchConfiguration {

    @Value("${spring.data.elasticsearch.cluster-name}")
    private String clusterName;

    @Value("${spring.data.elasticsearch.cluster-nodes}")
    private String clusterNodes;

    @Bean
    public ElasticsearchTemplate elasticsearchTemplate() throws UnknownHostException {
        String server = clusterNodes.split(":")[0];
        Integer port = Integer.parseInt(clusterNodes.split(":")[1]);
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", clusterName).build();
        TransportClient client = TransportClient.builder().settings(settings).build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(server), port));
        return new ElasticsearchTemplate(client);
    }
}
I'm starting up a Spring Boot application with mvn spring-boot:run.
One of my @Controllers needs information about the host and port the application is listening on, i.e. localhost:8080 (or 127.x.y.z:8080). Following the Spring Boot documentation, I use the server.address and server.port properties:
@Controller
public class MyController {

    @Value("${server.address}")
    private String serverAddress;

    @Value("${server.port}")
    private String serverPort;

    //...
}
When starting up the application with mvn spring-boot:run, I get the following exception:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'myController': Injection of autowired dependencies failed; nested exception is
org.springframework.beans.factory.BeanCreationException: Could not autowire field: ... String ... serverAddress; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'server.address' in string value "${server.address}"
Neither server.address nor server.port can be autowired.
How can I find out the (local) host/address/NIC and port that a Spring Boot application is binding on?
IP Address
You can get network interfaces with NetworkInterface.getNetworkInterfaces(), then the IP addresses off the NetworkInterface objects returned with .getInetAddresses(), then the string representation of those addresses with .getHostAddress().
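A minimal sketch of that enumeration, just printing every address it finds:
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

public static void printLocalAddresses() throws SocketException {
    // Walk every network interface and print each of its addresses
    for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
        for (InetAddress address : Collections.list(nic.getInetAddresses())) {
            System.out.println(nic.getName() + " -> " + address.getHostAddress());
        }
    }
}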
Port
If you make a #Configuration class which implements ApplicationListener<EmbeddedServletContainerInitializedEvent>, you can override onApplicationEvent to get the port number once it's set.
@Override
public void onApplicationEvent(EmbeddedServletContainerInitializedEvent event) {
    int port = event.getEmbeddedServletContainer().getPort();
}
You can get port info via
@Value("${local.server.port}")
private String serverPort;
An easy workaround, at least to get the running port, is to add a javax.servlet.HttpServletRequest parameter to the signature of one of the controller's methods. Once you have the HttpServletRequest instance, it is straightforward to get the base URL with request.getRequestURL().toString().
Have a look at this code:
@PostMapping(value = "/registration", produces = "application/json")
public String register(@RequestBody RequestUserDTO userDTO,
        HttpServletRequest request) {
    request.getRequestURL().toString();
    // value: http://localhost:8080/registration
    // ...
    return "";
}
I have just found a way to get the server IP and port easily by using the Eureka client library. As I am using it anyway for service registration, it is not an additional lib for me just for this purpose.
You need to add the maven dependency first:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    <version>2.2.2.RELEASE</version>
</dependency>
Then you can use the ApplicationInfoManager service in any of your Spring beans.
@Autowired
private ApplicationInfoManager applicationInfoManager;

...

InstanceInfo applicationInfo = applicationInfoManager.getInfo();
The InstanceInfo object contains all important information about your service, like IP address, port, hostname, etc.
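A small sketch of reading those values (getter names from Netflix's InstanceInfo):
InstanceInfo applicationInfo = applicationInfoManager.getInfo();
System.out.println("IP:       " + applicationInfo.getIPAddr());
System.out.println("Port:     " + applicationInfo.getPort());
System.out.println("Hostname: " + applicationInfo.getHostName());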
One solution mentioned in a reply by @M. Deinum is one that I've used in a number of Akka apps:
object Localhost {

    /**
     * @return String for the local hostname
     */
    def hostname(): String = InetAddress.getLocalHost.getHostName

    /**
     * @return String for the host IP address
     */
    def ip(): String = InetAddress.getLocalHost.getHostAddress
}
I've used this method when building a callback URL for Oozie REST so that Oozie could call back to my REST service, and it worked like a charm.
To get the port number in your code you can use the following:
@Autowired
Environment environment;

@GetMapping("/test")
String testConnection() {
    return "Your server is up and running at port: " + environment.getProperty("local.server.port");
}
To understand the Environment property you can go through this:
Spring Boot Environment
You can get the hostname from the Spring Cloud properties in spring-cloud-commons-2.1.0.RC2.jar:
environment.getProperty("spring.cloud.client.ip-address");
environment.getProperty("spring.cloud.client.hostname");
spring.factories of spring-cloud-commons-2.1.0.RC2.jar
org.springframework.boot.env.EnvironmentPostProcessor=\
org.springframework.cloud.client.HostInfoEnvironmentPostProcessor
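The same values can also be injected directly; a sketch, assuming spring-cloud-commons is on the classpath:
@Value("${spring.cloud.client.ip-address}")
private String ipAddress;

@Value("${spring.cloud.client.hostname}")
private String hostname;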
For Spring Boot 2:
val hostName = InetAddress.getLocalHost().hostName
var webServerPort: Int = 0

@Configuration
class ApplicationListenerWebServerInitialized : ApplicationListener<WebServerInitializedEvent> {
    override fun onApplicationEvent(event: WebServerInitializedEvent) {
        webServerPort = event.webServer.port
    }
}
then you can also use webServerPort from anywhere...
I used to declare the configuration in application.properties like this (you can use your own property file):
server.host=localhost
server.port=8081
and in the application you can get them easily with @Value("${server.host}") and @Value("${server.port}") as field-level annotations.
Or, if in your case it is dynamic, then you can get it from the system properties.
Here is the example:
@Value("#{systemProperties['server.host']}")
@Value("#{systemProperties['server.port']}")
For a better understanding of this annotation, see this example: Multiple uses of @Value annotation
#Value("${hostname}")
private String hostname;
At Spring-boot 2.3, The hostname was add as the System Environment Property, and you can check it on /actuator/env
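Note that the env actuator endpoint is not exposed over HTTP by default; assuming spring-boot-starter-actuator is on the classpath, you would also need something like:
management.endpoints.web.exposure.include=env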