I am working on Elasticsearch with MongoDB. When importing the transport client it shows:
The constructor TransportClient() is undefined.
I configured it as follows:
package com.appointment.api.mobile.config;
import javax.annotation.Resource;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.TransportAddress;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.repository.config.EnableElasticsearchRepositories;
@Configuration
@PropertySource(value = "classpath:elasticsearch.properties")
@EnableElasticsearchRepositories(basePackages = "com.appointment.api")
public class ElasticsearchConfiguration {
@Resource
private Environment environment;
@Bean
public Client client() {
TransportClient client = new TransportClient();
/* TransportAddress address = new InetSocketTransportAddress(environment.getProperty("elasticsearch.host"), Integer.parseInt(environment.getProperty("elasticsearch.port")));
client.addTransportAddress(address); */
return client;
}
@Bean
public ElasticsearchOperations elasticsearchTemplate() {
return new ElasticsearchTemplate(client());
}
}
dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
Project:
Spring Boot Maven project.
According to the documentation, there is no public default constructor for TransportClient; you need to do something like:
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY);
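For completeness, a minimal sketch of how the client() bean above could look with the newer API, assuming Elasticsearch 5.x where PreBuiltTransportClient lives in the separate org.elasticsearch.client:transport artifact; the host/port properties are the ones from the commented-out code in the question:
import java.net.InetAddress;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.springframework.context.annotation.Bean;

// Sketch only: a drop-in replacement for the client() bean in ElasticsearchConfiguration above.
@Bean
public Client client() throws Exception {
    TransportClient client = new PreBuiltTransportClient(Settings.EMPTY);
    // Re-use the host/port properties from the commented-out code.
    client.addTransportAddress(new InetSocketTransportAddress(
            InetAddress.getByName(environment.getProperty("elasticsearch.host")),
            Integer.parseInt(environment.getProperty("elasticsearch.port"))));
    return client;
}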
I am using Spring Boot 3.0.0 and Spring Cloud 2021.0.5. I have BillingServiceApplication.java:
package org.sid.billingservice;
import org.sid.billingservice.entities.Bill;
import org.sid.billingservice.entities.ProductItem;
import org.sid.billingservice.model.Customer;
import org.sid.billingservice.model.Product;
import org.sid.billingservice.repository.BillRepository;
import org.sid.billingservice.repository.ProductItemRepository;
import org.sid.billingservice.services.CustomerRestClient;
import org.sid.billingservice.services.ProductRestClient;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.ImportAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.cloud.openfeign.FeignAutoConfiguration;
import org.springframework.context.annotation.Bean;
import java.util.Collection;
import java.util.Date;
import java.util.Random;
@SpringBootApplication
@EnableFeignClients
@ImportAutoConfiguration({FeignAutoConfiguration.class})
public class BillingServiceApplication {
public static void main(String[] args) {
SpringApplication.run(BillingServiceApplication.class, args);
}
@Bean
CommandLineRunner start(BillRepository billRepository, ProductItemRepository productItemRepository, CustomerRestClient customerRestClient, ProductRestClient productRestClient) {
return args -> {
Collection<Product> products = productRestClient.allProducts().getContent();
Long customerId = 1L;
Customer customer = customerRestClient.findCustomerById(customerId);
if (customer == null) throw new RuntimeException("Customer not found");
Bill bill = new Bill();
bill.setBillDate(new Date());
bill.setCustomerId(customerId);
Bill savedBill = billRepository.save(bill);
products.forEach(product -> {
ProductItem productItem = new ProductItem();
productItem.setBill(savedBill);
productItem.setProductId(product.getId());
productItem.setQuantity(1 + new Random().nextInt(10));
productItem.setPrice(product.getPrice());
productItem.setDiscount(Math.random());
productItemRepository.save(productItem);
});
};
}
}
My error:
No Feign Client for loadBalancing defined. Did you forget to include spring-cloud-starter-loadbalancer?
Full error log: https://gist.github.com/donhuvy/348aa7b096cde63a7129ad0f009c7507
How can I fix it?
Please share your pom.xml. It looks like you don't have the spring-cloud-starter-loadbalancer dependency in your project.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-loadbalancer</artifactId>
<version>2.4.4</version>
</dependency>
Replacing path with url in the Feign client class, along with @ImportAutoConfiguration({FeignAutoConfiguration.class}) in the main class, worked for me.
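For illustration, a rough sketch of that approach on one of the clients from the question; the port and request mapping are made up here, so adjust them to the real customer-service endpoints:
import org.sid.billingservice.model.Customer;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Illustrative sketch: a fixed url bypasses client-side load balancing,
// so the loadbalancer starter is not strictly required for this client.
@FeignClient(name = "customer-service", url = "http://localhost:8081")
public interface CustomerRestClient {
    // Hypothetical endpoint; match it to the actual customer-service controller.
    @GetMapping("/customers/{id}")
    Customer findCustomerById(@PathVariable("id") Long id);
}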
I couldn't find any reference on the internet for implementing Kafka in Liferay.
Below are the requirements that need to be implemented:
Push a message to a Kafka topic.
Poll and receive messages from a Kafka topic.
I tried the code below, but I am unable to receive a message from Kafka after pushing it from the Kafka terminal.
Dependencies:
compileInclude "org.springframework.kafka:spring-kafka:2.9.2"
compileInclude "org.apache.kafka:kafka-streams:3.3.1"
Code:
import org.apache.kafka.common.serialization.Serdes;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.EnableKafkaStreams;
import org.springframework.kafka.annotation.KafkaStreamsDefaultConfiguration;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import java.util.HashMap;
import java.util.Map;
import static org.apache.kafka.streams.StreamsConfig.APPLICATION_ID_CONFIG;
import static org.apache.kafka.streams.StreamsConfig.BOOTSTRAP_SERVERS_CONFIG;
import static org.apache.kafka.streams.StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG;
import static org.apache.kafka.streams.StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG;
@Configuration
@EnableKafka
@EnableKafkaStreams
public class KafkaConfig {
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
Map<String, Object> props = new HashMap<>();
props.put(APPLICATION_ID_CONFIG, "streams-app");
props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
props.put(DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
return new KafkaStreamsConfiguration(props);
}
}
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import org.osgi.service.component.annotations.Component;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.scheduling.annotation.Async;
import org.springframework.transaction.annotation.Transactional;
@Component(immediate = true)
public class KafkaMessageReceiver {
public static Log _log = LogFactoryUtil.getLog(KafkaMessageReceiver.class);
@Async
@KafkaListener(
topics = "liferay-topic",
concurrency = "2"
)
@Transactional
public void handleMessage(String payload) {
_log.info(payload);
}
}
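Two things worth noting on the requirements above, sketched below under the assumption of plain String keys/values on localhost:9092: @EnableKafkaStreams only wires the Streams side, so the @KafkaListener still needs a consumer factory and listener container factory in the context, and pushing to the topic can be done with a KafkaTemplate. The group id and bean placement are assumptions, not taken from the original code.
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

// Sketch only: these beans would sit next to kStreamsConfig() in KafkaConfig above.
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "liferay-group"); // hypothetical group id
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    // Required for @KafkaListener methods such as handleMessage() above.
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(props);
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    // Pushing a message: kafkaTemplate().send("liferay-topic", "hello from liferay");
    return new KafkaTemplate<>(producerFactory());
}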
I am trying to build a Spring Boot application integrated with S3 and SQS. I have an AppConfig, S3Config, and SqsConfig, and I autowire the S3Config and SqsConfig inside the AppConfig.
S3Config declares a bean s3client which I use to perform AWS S3 operations. I am getting an error where the autowiring fails due to multiple bean definitions being available, but I'm not sure what part is causing the duplication. Attaching the snippets.
Note that I am creating a different definition of the s3client bean depending on the profile (local development vs. server deployment).
AppConfig
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
@Configuration
@Slf4j
@Component
@RefreshScope
@Getter
public class AppConfig {
@Autowired
S3Config s3Config;
@Autowired
SqsConfig sqsConfig;
}
S3Config
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;
@Configuration
@Slf4j
@Component
@RefreshScope
@Getter
public class S3Config {
#Value("${aws.s3.roleArn}")
private String s3RoleArn;
#Value("${aws.s3.roleSessionName}")
private String s3RoleSessionName;
#Value("${aws.s3.region}")
private String s3region;
#Value("${aws.s3.inputBucketName}")
private String inputBucketName;
#Value("${aws.s3.outputBucketName}")
private String outputBucketName;
#Bean(name = "s3client")
#Profile("!local")
public AmazonS3 AmazonS3Client() {
log.info(String.format("validating_config_repo_values: s3RoleArn=%s s3RoleSessionName=%s s3RoleSessionName=%s", s3RoleArn, s3RoleSessionName,s3region));
AmazonS3 s3client = null;
try {
STSAssumeRoleSessionCredentialsProvider roleCredentialsProvider = new STSAssumeRoleSessionCredentialsProvider.Builder(
s3RoleArn, s3RoleSessionName).build();
AmazonS3ClientBuilder amazonS3ClientBuilder = AmazonS3ClientBuilder.standard()
.withCredentials(roleCredentialsProvider);
s3client = amazonS3ClientBuilder.withRegion(s3region).build();
} catch (Exception e) {
log.error(String.format("exception_while_creating_AmazonS3Client : %s", e));
}
return s3client;
}
#Bean(name = "s3client")
#Profile("local")
public AmazonS3 localhostAmazonS3Client() {
AmazonS3 s3client = null;
try {
s3client = AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).withCredentials(new DefaultAWSCredentialsProviderChain()).build();
} catch (Exception e) {
log.error(String.format("exception_while_creating_AmazonS3Client : %s", e));
}
return s3client;
}
}
UploadServiceImpl
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.io.*;
import java.net.URI;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
@Service
@Slf4j
public class S3DatasetDownloadServiceImpl implements DatasetDownloadService {
@Autowired
AmazonS3 s3Client;
@Autowired
AppConfig appConfig;
public String downloadDataset(URI uri) throws IOException {
String key = uri.getPath();
String outputBucketName = appConfig.getS3Config().getOutputBucketName();
log.info(String.format("client_downloading_file : key=%s from bucket=%s",key,outputBucketName));
try {
} catch (Exception e) {
}
}
}
Kill me. Found the error: I had the bean name as s3client (lowercase c in client), whereas on autowiring I am using AmazonS3 s3Client.
Fixing the casing worked!
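For what it's worth, a small sketch of binding by bean name at the injection point in S3DatasetDownloadServiceImpl, which sidesteps the field-name/bean-name casing mismatch entirely:
import com.amazonaws.services.s3.AmazonS3;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;

// The qualifier matches the @Bean(name = "s3client") definitions regardless of the field name.
@Autowired
@Qualifier("s3client")
private AmazonS3 s3Client;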
I have the Feign load balancer implementation below, which works with Spring Cloud Hoxton.SR6 dependencies.
import feign.Client;
import feign.auth.BasicAuthRequestInterceptor;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.netflix.ribbon.SpringClientFactory;
import org.springframework.cloud.openfeign.ribbon.CachingSpringLoadBalancerFactory;
import org.springframework.cloud.openfeign.ribbon.LoadBalancerFeignClient;
import org.springframework.context.annotation.Bean;
public class ClientConfig {
@Bean
public BasicAuthRequestInterceptor basicAuthRequestInterceptor(
@Value("${username}") String username,
@Value("${password}") String password) {
return new BasicAuthRequestInterceptor(username, password);
}
@Autowired
private CachingSpringLoadBalancerFactory cachingFactory;
@Autowired
private SpringClientFactory clientFactory;
@Value("${keystore.location}")
private String keyStoreLocation;
@Value("${keystore.secPhase}")
private String keyPassword;
@Bean
public Client feignClient() {
SslUtils.KeystoreConfig truststoreConfig = SslUtils.KeystoreConfig.builder().type("JKS").location(keyStoreLocation).password(keyPassword).build();
SocketFactory factory = new SocketFactory(() -> SslUtils.newContext(null, truststoreConfig));
NoopHostnameVerifier verifier = new NoopHostnameVerifier();
Client.Default client = new Client.Default(factory, verifier);
return new LoadBalancerFeignClient(client, cachingFactory, clientFactory);
}
}
I tried to upgrade the Spring Cloud version to 2020.0.0 and noticed that the packages below are no longer available.
import org.springframework.cloud.openfeign.ribbon.CachingSpringLoadBalancerFactory;
import org.springframework.cloud.openfeign.ribbon.LoadBalancerFeignClient;
How can I change the current implementation, or what dependency will provide these packages?
I was experiencing the same issue.
I finally solved the error by adding the dependency below:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-openfeign-core</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
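If you want to stay on Spring Cloud 2020.x instead of pinning the older openfeign-core, the Ribbon-based LoadBalancerFeignClient maps onto Spring Cloud LoadBalancer's FeignBlockingLoadBalancerClient (with spring-cloud-starter-loadbalancer on the classpath). This is a rough sketch only; the FeignBlockingLoadBalancerClient constructor arguments have changed across releases, so check them against your exact version:
import feign.Client;
import org.springframework.cloud.client.loadbalancer.LoadBalancerClient;
import org.springframework.cloud.loadbalancer.support.LoadBalancerClientFactory;
import org.springframework.cloud.openfeign.loadbalancer.FeignBlockingLoadBalancerClient;
import org.springframework.context.annotation.Bean;

public class ClientConfig {
    @Bean
    public Client feignClient(LoadBalancerClient loadBalancerClient,
                              LoadBalancerClientFactory loadBalancerClientFactory) {
        // Build the delegate exactly as in the original bean (SSL socket factory
        // from SslUtils plus NoopHostnameVerifier); the nulls here are placeholders.
        Client.Default delegate = new Client.Default(null, null);
        return new FeignBlockingLoadBalancerClient(delegate, loadBalancerClient, loadBalancerClientFactory);
    }
}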
I want to confirm rollback behavior, but I'm having trouble getting it to work.
I have a postgres DB with the following tables:
select * from cat;
pkid | name
--------------------------------------+-------
c75d6e8b-6aff-4214-ad45-d17db254857b | Abbey
select * from toy;
pkid | name | description
--------------------------------------+---------------+-------------------------------------------
dda72782-a1aa-4c0e-9cf6-a408db58a1ae | Laser pointer | Red laser.
f4d7e67d-1b26-4d8d-bb98-1a5c69f3cb49 | String | Colored string attached to a plastic rod.
select * from cattoy;
pkid | fkcat | fktoy
------+-------+-------
I have created a CatService implementation with the idea being that you can create a cat, create a toy, and associate that toy with that cat. If any one of the three operations fails, I want them all to roll back.
DefaultCatService.java:
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.dao.CatDao;
import com.bonkeybee.dao.CatToyDao;
import com.bonkeybee.dao.ToyDao;
import com.bonkeybee.querydsl.Cat;
import com.bonkeybee.querydsl.Cattoy;
import com.bonkeybee.querydsl.Toy;
@Inject
private CatDao catDao;
@Inject
private CatToyDao catToyDao;
@Inject
private ToyDao toyDao;
@Override
@Transactional
public UUID createCat(final Cat cat){
LOG.debug("Creating cat");
UUID catPkid = catDao.createCat(cat);
Toy toy = new Toy();
toy.setName("Box");
toy.setDescription("Cardboard box.");
toy.setPkid(toyDao.createToy(toy));
Cattoy catToy = new Cattoy();
catToy.setFkcat(catPkid);
catToy.setFktoy(toy.getPkid());
catToyDao.createCatToy(catToy);
return catPkid;
}
I have created DAO's and their implementations for each table with basic CRUD operations.
CatDaoJdbc.java:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.Cat;
import com.bonkeybee.querydsl.QCat;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.Predicate;
import com.querydsl.core.types.dsl.SimplePath;
import com.querydsl.core.types.dsl.StringPath;
import com.querydsl.sql.SQLQueryFactory;
private static final QCat CAT = QCat.cat;
private static final SimplePath<Object> CAT_PKID = CAT.pkid;
private static final StringPath CAT_NAME = CAT.name;
@Inject
private SQLQueryFactory sqlQueryFactory;
@Override
public UUID createCat(final Cat cat) {
UUID catPkid = UUID.randomUUID();
sqlQueryFactory.insert(CAT)
.columns(CAT_PKID, CAT_NAME)
.values(catPkid, cat.getName())
.execute();
return catPkid;
}
ToyDaoJdbc.java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.QToy;
import com.bonkeybee.querydsl.Toy;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.Predicate;
import com.querydsl.core.types.dsl.SimplePath;
import com.querydsl.core.types.dsl.StringPath;
import com.querydsl.sql.SQLQueryFactory;
private static final QToy TOY = QToy.toy;
private static final SimplePath<Object> TOY_PKID = TOY.pkid;
private static final StringPath TOY_NAME = TOY.name;
private static final StringPath TOY_DESCRIPTION = TOY.description;
@Inject
private SQLQueryFactory sqlQueryFactory;
@Override
public UUID createToy(Toy toy) {
UUID toyPkid = UUID.randomUUID();
sqlQueryFactory.insert(TOY)
.columns(TOY_PKID, TOY_NAME, TOY_DESCRIPTION)
.values(toyPkid, toy.getName(), toy.getDescription())
.execute();
return toyPkid;
}
CatToyDaoJdbc.java:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.Cattoy;
import com.bonkeybee.querydsl.QCat;
import com.bonkeybee.querydsl.QCattoy;
import com.bonkeybee.querydsl.QToy;
import com.bonkeybee.querydsl.Toy;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.Predicate;
import com.querydsl.core.types.dsl.SimplePath;
import com.querydsl.sql.SQLQueryFactory;
@Override
public UUID createCatToy(Cattoy catToy) {
throw new RuntimeException("Simulating exception");
}
Main.java:
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.Cat;
import com.bonkeybee.querydsl.Cattoy;
import com.bonkeybee.querydsl.Toy;
import com.bonkeybee.service.CatService;
import com.bonkeybee.service.CatToyService;
import com.bonkeybee.service.ToyService;
public static void main(String[] args) {
try (ConfigurableApplicationContext applicationContext = new AnnotationConfigApplicationContext(ApplicationConfiguration.class)) {
CatService catService = applicationContext.getBean(CatService.class);
Cat newCat = new Cat();
newCat.setName(DORA);
newCat.setPkid(catService.createCat(newCat));
}
}
ApplicationConfiguration.java:
import java.beans.PropertyVetoException;
import javax.inject.Inject;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import com.mchange.v2.c3p0.ComboPooledDataSource;
import com.querydsl.sql.PostgreSQLTemplates;
import com.querydsl.sql.SQLQueryFactory;
@Configuration
@EnableTransactionManagement
@ComponentScan("com.bonkeybee")
public class ApplicationConfiguration {
@Bean
public DataSource getDataSource() throws PropertyVetoException {
ComboPooledDataSource comboPooledDataSource = new ComboPooledDataSource();
comboPooledDataSource.setDriverClass(JDBC_DRIVER);
comboPooledDataSource.setJdbcUrl(JDBC_URL);
comboPooledDataSource.setUser(USER);
comboPooledDataSource.setPassword(PASSWORD);
comboPooledDataSource.setMinPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setInitialPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setMaxPoolSize(MAX_POOL_SIZE);
return comboPooledDataSource;
}
@Bean
@Inject
public PlatformTransactionManager getPlatformTransactionManager(final DataSource dataSource) {
return new DataSourceTransactionManager(dataSource);
}
@Bean
public com.querydsl.sql.Configuration getQueryDslConfiguration() {
return new com.querydsl.sql.Configuration(PostgreSQLTemplates.builder().build());
}
@Bean
@Inject
public SQLQueryFactory getSQLQueryFactory(final com.querydsl.sql.Configuration configuration, final DataSource dataSource) {
return new SQLQueryFactory(configuration, dataSource);
}
}
When Main calls catService.createCat(), the cat and toy are created and a RuntimeException is thrown as expected; however, inspecting the tables afterward shows the new cat and new toy were created instead of being rolled back. Please, SO, help me ensure no cat goes toyless >:3
EDIT: Added the imports as requested.
Solved it after more searching; there were two configuration issues.
First: the transaction manager bean Spring looks for by default should be named "transactionManager"; otherwise you have to set the name explicitly.
Second: I added a dependency on the "querydsl-sql-spring" artifact and changed my SQLQueryFactory to use a SpringConnectionProvider instead of the DataSource bean (found in an example from the querydsl people). Below is the final configuration:
@Configuration
@EnableTransactionManagement
@ComponentScan("com.bonkeybee")
public class ApplicationConfiguration {
@Bean
public DataSource getDataSource() throws PropertyVetoException {
ComboPooledDataSource comboPooledDataSource = new ComboPooledDataSource();
comboPooledDataSource.setDriverClass(JDBC_DRIVER);
comboPooledDataSource.setJdbcUrl(JDBC_URL);
comboPooledDataSource.setUser(USER);
comboPooledDataSource.setPassword(PASSWORD);
comboPooledDataSource.setMinPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setInitialPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setMaxPoolSize(MAX_POOL_SIZE);
return comboPooledDataSource;
}
@Inject
@Bean(name = "transactionManager")
public PlatformTransactionManager getPlatformTransactionManager(final DataSource dataSource) {
return new DataSourceTransactionManager(dataSource);
}
@Bean
public com.querydsl.sql.Configuration getQueryDslConfiguration() {
return new com.querydsl.sql.Configuration(PostgreSQLTemplates.builder().build());
}
@Inject
@Bean
public SQLQueryFactory getSQLQueryFactory(final com.querydsl.sql.Configuration configuration, final DataSource dataSource) {
Provider<Connection> provider = new SpringConnectionProvider(dataSource);
return new SQLQueryFactory(configuration, provider);
}
}
Thanks querydsl people for such a cool lib.
Since there is more than one database request involved in your transaction, you have to specify
the persistence context as PersistenceContextType.EXTENDED, which means it can survive multiple requests.
You have to have an entity manager and then get the transaction object from it.
After getting the transaction, begin it, do all your database operations, and then commit. Here is a sample below:
EntityManagerFactory emf = ...
EntityManager em = emf.createEntityManager(PersistenceContextType.EXTENDED);
Magazine mag1 = em.find(Magazine.class, magId);
Magazine mag2 = em.find(Magazine.class, magId);
em.getTransaction().begin();
// ... database operations ...
em.getTransaction().commit();