I am using the Apache Camel FTP and AWS modules (v2.18) to create a route between an SFTP server and AWS S3. The connection to the SFTP location is established via an SSH jump host.
I am able to connect via the following Unix command:
sftp -o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no
-i /path/to/host/private-key-file
-o 'ProxyCommand=ssh
-o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no
-i /path/to/jumphost/private-key-file
-l jumphostuser jump.host.com nc sftp.host.com 22' sftp-user@sftp.host.com
However, I am getting the following error while connecting using Apache Camel:
Cannot connect/login to: sftp://sftp-user@sftp.host.com:22
For testing purposes I tried connecting to the SFTP server using Spring Integration, and it worked using the same proxy implementation (JumpHostProxyCommand) shown below.
Below is the Spring Boot + Apache Camel code I have been using:
JSch proxy:
import com.jcraft.jsch.*;
import lombok.extern.slf4j.Slf4j;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
@Slf4j
class JumpHostProxyCommand implements Proxy {
String command;
Process p = null;
InputStream in = null;
OutputStream out = null;
public JumpHostProxyCommand(String command) {
this.command = command;
}
public void connect(SocketFactory socket_factory, String host, int port, int timeout) throws Exception {
String cmd = command.replace("%h", host);
cmd = cmd.replace("%p", new Integer(port).toString());
p = Runtime.getRuntime().exec(cmd);
log.debug("Process returned by proxy command {} , {}", command, p);
in = p.getInputStream();
log.debug("Input stream returned by proxy {}", in);
out = p.getOutputStream();
log.debug("Output stream returned by proxy {}", out);
}
public Socket getSocket() {
return null;
}
public InputStream getInputStream() {
return in;
}
public OutputStream getOutputStream() {
return out;
}
public void close() {
try {
if (p != null) {
p.getErrorStream().close();
p.getOutputStream().close();
p.getInputStream().close();
p.destroy();
p = null;
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
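For reference, here is a minimal JSch-only sketch (not part of the original code; it reuses the same placeholder hosts and key paths) that exercises the proxy class above directly, which is roughly what the successful Spring Integration test does:
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class JumpHostProxySmokeTest {
    public static void main(String[] args) throws Exception {
        // Same proxy command string as in the Camel configuration below
        String proxyCommand = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
                + " -i /path/to/jumphost/private-key-file -l jumphostuser jump.host.com nc %h %p";
        JSch jsch = new JSch();
        jsch.addIdentity("/path/to/host/private-key-file"); // key for the target SFTP host
        Session session = jsch.getSession("sftp-user", "sftp.host.com", 22);
        session.setConfig("StrictHostKeyChecking", "no");
        session.setProxy(new JumpHostProxyCommand(proxyCommand));
        session.connect(30000); // 30s timeout
        System.out.println("Connected: " + session.isConnected());
        session.disconnect();
    }
}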
Spring Boot Camel configuration:
@Slf4j
@Configuration
public class CamelConfig {
@Autowired
DataSource dataSource;
@Bean(name = "jdbcMsgIdRepo")
public JdbcMessageIdRepository JdbcMessageIdRepository() {
return new JdbcMessageIdRepository(dataSource,"jdbc-repo");
}
#Bean(name = "s3Client")
public AmazonS3 s3Client() {
return new AmazonS3Client();
}
#Bean(name="jumpHostProxyCommand")
JumpHostProxyCommand jumpHostProxyCommand()
{
String proxyKeyFilePath = "/path/to/jumphost/private-key-file";
String command = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i " + proxyKeyFilePath + " -l jumphostuser jump.host.com nc %h %p";
log.debug("JumpHostProxyCommand : " + command);
return new JumpHostProxyCommand(command);
}
}
Camel route builder:
@Component
public class FtpRouteInitializer extends RouteBuilder {
@Value("${s3.bucket.name}")
private String s3Bucket;
@Autowired
private JdbcMessageIdRepository repo;
@Override
public void configure() throws Exception {
String ftpRoute = "sftp://sftp-user@sftp.host.com:22/?"
+ "delay=300s"
+ "&noop=true"
+ "&idempotentRepository=#jdbcMsgIdRepo"
+ "&idempotentKey=${file:name}-${file:modified}"
+ "&proxy=#jumpHostProxyCommand"
+ "&privateKeyUri=file:/path/to/host/private-key-file"
+ "&jschLoggingLevel=DEBUG"
+ "&knownHostsFile=/dev/null"
+ "&initialDelay=60s"
+ "&autoCreate=false"
+ "&preferredAuthentications=publickey";
from(ftpRoute)
.routeId("FTP-S3")
.setHeader(S3Constants.KEY, simple("${file:name}"))
.to("aws-s3://" + s3ucket + "?amazonS3Client=#s3Client")
.log("Uploaded ${file:name} complete.");
}
}
build.gradle file:
task wrapper(type: Wrapper) {
gradleVersion = '2.5'
}
ext {
springBootVersion = "1.4.1.RELEASE"
awsJavaSdkVersion = "1.10.36"
postgresVersion = "11.2.0.3.0"
jacksonVersion = "2.8.4"
sl4jVersion = "1.7.21"
junitVersion = "4.12"
camelVersion ="2.18.0"
}
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.1.RELEASE")
}
}
repositories {
mavenCentral()
}
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'spring-boot'
sourceCompatibility = 1.8
targetCompatibility = 1.8
springBoot {
executable = true
}
dependencies {
//logging
compile("ch.qos.logback:logback-classic:1.1.3")
compile("ch.qos.logback:logback-core:1.1.3")
compile("org.slf4j:slf4j-api:$sl4jVersion")
//Spring boot
compile("org.springframework.boot:spring-boot-starter-web:$springBootVersion")
compile("org.springframework.boot:spring-boot-starter-jdbc:$springBootVersion")
compile("org.apache.camel:camel-spring-boot-starter:$camelVersion")
//Jdbc
compile("postgresql:postgresql:9.0-801.jdbc4")
//Camel
compile("org.apache.camel:camel-ftp:$camelVersion")
compile("org.apache.camel:camel-aws:$camelVersion")
compile("org.apache.camel:camel-core:$camelVersion")
compile("org.apache.camel:camel-spring-boot:$camelVersion")
compile("org.apache.camel:camel-sql:$camelVersion")
//Aws sdk
compile("com.amazonaws:aws-java-sdk:$awsJavaSdkVersion")
//Json
compile("com.fasterxml.jackson.core:jackson-core:$jacksonVersion")
compile("com.fasterxml.jackson.core:jackson-annotations:$jacksonVersion")
compile("com.fasterxml.jackson.core:jackson-databind:$jacksonVersion")
compile("com.fasterxml.jackson.datatype:jackson-datatype-jsr310:$jacksonVersion")
//Swagger
compile("io.springfox:springfox-swagger2:2.0.2")
compile("io.springfox:springfox-swagger-ui:2.0.2")
//utilities
compile('org.projectlombok:lombok:1.16.6')
compile("org.apache.commons:commons-collections4:4.1")
compile("org.apache.commons:commons-lang3:3.4")
//Junit
testCompile("junit:junit:$junitVersion")
testCompile("org.springframework.boot:spring-boot-starter-test:$springBootVersion")
testCompile("org.mockito:mockito-all:1.10.19")
}
I have been struggling for the last two days to find the root cause of this error; any help is really appreciated. Thanks!
Try adding the jump-host configuration to the SSH config file on the machine where you are running this code. You will then be able to connect transparently through the jump host for the host(s) specified in the config file, without needing to specify any proxy or jump host in the sftp command.
An example config to set up a dynamic jump host is as follows:
Host sftp.host.com
user sftp-user
IdentityFile /home/sftp-user/.ssh/id_rsa
ProxyCommand ssh sftp-user@jump.host.com nc %h %p 2> /dev/null
ForwardAgent yes
You can add multiple hosts or a wildcard pattern in the Host line. This entry goes in the ~/.ssh/config file (create the file if it is not already present).
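With that entry in ~/.ssh/config, the long ProxyCommand invocation from the question reduces to a plain call, for example:
sftp sftp-user@sftp.host.com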
Related
I need to test a REST client. For that purpose I'm using org.mockserver.integration.ClientAndServer.
I start the server and create an expectation. After that I mock my client and run it. But when the server receives the request, I see the following in the logs:
14:00:13.511 [MockServer-EventLog0] INFO org.mockserver.log.MockServerEventLog - received binary request:
505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00000604000000000000040100000000000408000000000000ff0001
14:00:13.511 [MockServer-EventLog0] INFO org.mockserver.log.MockServerEventLog - unknown message format
505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00000604000000000000040100000000000408000000000000ff0001
This is my test:
@RunWith(PowerMockRunner.class)
@PrepareForTest({NrfClient.class, SnefProperties.class})
@PowerMockIgnore({"javax.net.ssl.*"})
@TestPropertySource(locations = "classpath:test.properties")
public class NrfConnectionTest {
private String finalPath;
private UUID uuid = UUID.fromString("8d92d4ac-be0e-4016-8b2c-eff2607798e4");
private ClientAndServer mockServer;
@Before
public void startMockServer() {
mockServer = startClientAndServer(8888);
}
@After
public void stopServer() {
mockServer.stop();
}
@Test
public void NrfRegisterTest() throws Exception {
//create some expectation
new MockServerClient("127.0.0.1", 8888)
.when(HttpRequest.request()
.withMethod("PUT")
.withPath("/nnrf-nfm/v1/nf-instances/8d92d4ac-be0e-4016-8b2c-eff2607798e4"))
.respond(HttpResponse.response().withStatusCode(201));
//long preparations and mocking the NrfClient (client that actually make request)
//NrfClient is singleton, so had to mock a lot of methods.
PropertiesConfiguration config = new PropertiesConfiguration();
config.setAutoSave(false);
File file = new File("test.properties");
if (!file.exists()) {
String absolutePath = file.getAbsolutePath();
finalPath = absolutePath.substring(0, absolutePath.length() - "test.properties".length()) + "src\\test\\resources\\test.properties";
file = new File(finalPath);
}
try {
config.load(file);
config.setFile(file);
} catch(ConfigurationException e) {
LogUtils.warn(NrfConnectionTest.class, "Failed to load properties from file " + "classpath:test.properties", e);
}
SnefProperties spyProperties = PowerMockito.spy(SnefProperties.getInstance());
PowerMockito.doReturn(finalPath).when(spyProperties, "getPropertiesFilePath");
PowerMockito.doReturn(config).when(spyProperties, "getProperties");
PowerMockito.doReturn(config).when(spyProperties, "getLastUpdatedProperties");
NrfConfig nrfConfig = getNrfConfig();
NrfClient nrfClient = PowerMockito.spy(NrfClient.getInstance());
SnefAddressInfo snefAddressInfo = new SnefAddressInfo("127.0.0.1", "8080");
PowerMockito.doReturn(nrfConfig).when(nrfClient, "loadConfiguration", snefAddressInfo);
PowerMockito.doReturn(uuid).when(nrfClient, "getUuid");
Whitebox.setInternalState(SnefProperties.class, "instance", spyProperties);
nrfClient.initialize(snefAddressInfo);
//here the client makes request
nrfClient.run();
}
private NrfConfig getNrfConfig() {
NrfConfig nrfConfig = new NrfConfig();
nrfConfig.setNrfDirectConnection(true);
nrfConfig.setNrfAddress("127.0.0.1:8888");
nrfConfig.setSnefNrfService(State.ENABLED);
nrfConfig.setSmpIp("127.0.0.1");
nrfConfig.setSmpPort("8080");
return nrfConfig;
}
}
It looks like I'm missing some server configuration, or using it in the wrong way.
Or maybe the reason is PowerMock: could it be that MockServer is incompatible with PowerMock or PowerMockRunner?
TL;DR: When running tests with different @ResourceArgs, the configurations of different tests get thrown around and override others, breaking tests meant to run with specific configurations.
So, I have a service that has tests that run in different configuration setups. The main difference at the moment is that the service can either manage its own authentication or get it from an external source (Keycloak).
I initially control this using test profiles, which seem to work fine. Unfortunately, in order to support both cases, the ResourceLifecycleManager I have set up supports starting a Keycloak instance and returns config values that break the config for self authentication. (This is primarily because I have not found out how to get the lifecycle manager to determine on its own what profile or config is currently running. If I could do this, I think I would be much better off than using @ResourceArg, so I would love to know if I missed something here.)
To remedy this shortcoming, I have attempted to use @ResourceArgs to tell the lifecycle manager when to set up for external auth. However, I have noticed some really odd execution timings, and the config that ends up at my test/service isn't what I intend based on the test class's annotations; it is obvious the lifecycle manager has set up for external auth.
Additionally, it should be noted that my tests are ordered such that the profiles and configs shouldn't run out of order: all the tests that don't care run first, then the 'normal' tests with self auth, then the tests with the external auth profile. I can see this working appropriately when I run in IntelliJ, and I can tell that time is being taken to start up the new service instance between the test profiles.
Looking at the logs when I set breakpoints in a few places, some odd things are obvious:
With a breakpoint on a failing test (before the external-configured tests run):
The start() method of my TestResourceLifecycleManager has been called twice
The first run ran with Keycloak starting, which would override/break the config
though the time I would expect to be needed to start up Keycloak doesn't seem to elapse, so I am a little confused here
The second run is correct, not starting Keycloak
The profile config is what is expected, except for what the Keycloak setup would override
With a breakpoint on an external-configured test (after all self-configured tests run):
The start() method has now been called 4 times; it appears that things were started in the same order as before, again for the new run of the app
There could be some weirdness in how IntelliJ/Gradle shows logs, but I am interpreting this as:
Quarkus is initializing the two instances of the LifecycleManager when starting the app for some reason, and one's config overrides the other's, causing my woes.
The lifecycle manager is working as expected; it appropriately starts/doesn't start Keycloak when configured either way
At this point I can't tell if I'm doing something wrong, or if there's a bug.
Test class example for self-auth test (same annotations for all tests in this (test) profile):
@Slf4j
@QuarkusTest
@QuarkusTestResource(TestResourceLifecycleManager.class)
@TestHTTPEndpoint(Auth.class)
class AuthTest extends RunningServerTest {
Test class example for external auth test (same annotations for all tests in this (externalAuth) profile):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class, initArgs = @ResourceArg(name=TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value="true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
ExternalAuthTestProfile extends the following class, providing the appropriate profile name (see the sketch after this class):
public class NonDefaultTestProfile implements QuarkusTestProfile {
private final String testProfile;
private final Map<String, String> overrides = new HashMap<>();
protected NonDefaultTestProfile(String testProfile) {
this.testProfile = testProfile;
}
protected NonDefaultTestProfile(String testProfile, Map<String, String> configOverrides) {
this(testProfile);
this.overrides.putAll(configOverrides);
}
@Override
public Map<String, String> getConfigOverrides() {
return new HashMap<>(this.overrides);
}
@Override
public String getConfigProfile() {
return testProfile;
}
@Override
public List<TestResourceEntry> testResources() {
return QuarkusTestProfile.super.testResources();
}
}
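ExternalAuthTestProfile itself is not shown, but based on the description it is presumably just a thin subclass that supplies the profile name, along the lines of:
public class ExternalAuthTestProfile extends NonDefaultTestProfile {
    public ExternalAuthTestProfile() {
        // "externalAuth" is the profile name referenced by the external-auth test classes above
        super("externalAuth");
    }
}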
Lifecycle manager:
@Slf4j
public class TestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {
public static final String EXTERNAL_AUTH_ARG = "externalAuth";
private static volatile MongodExecutable MONGO_EXE = null;
private static volatile KeycloakContainer KEYCLOAK_CONTAINER = null;
private boolean externalAuth = false;
public synchronized Map<String, String> startKeycloakTestServer() {
if(!this.externalAuth){
log.info("No need for keycloak.");
return Map.of();
}
if (KEYCLOAK_CONTAINER != null) {
log.info("Keycloak already started.");
} else {
KEYCLOAK_CONTAINER = new KeycloakContainer()
// .withEnv("hello","world")
.withRealmImportFile("keycloak-realm.json");
KEYCLOAK_CONTAINER.start();
log.info(
"Test keycloak started at endpoint: {}\tAdmin creds: {}:{}",
KEYCLOAK_CONTAINER.getAuthServerUrl(),
KEYCLOAK_CONTAINER.getAdminUsername(),
KEYCLOAK_CONTAINER.getAdminPassword()
);
}
String clientId;
String clientSecret;
String publicKey = "";
try (
Keycloak keycloak = KeycloakBuilder.builder()
.serverUrl(KEYCLOAK_CONTAINER.getAuthServerUrl())
.realm("master")
.grantType(OAuth2Constants.PASSWORD)
.clientId("admin-cli")
.username(KEYCLOAK_CONTAINER.getAdminUsername())
.password(KEYCLOAK_CONTAINER.getAdminPassword())
.build();
) {
RealmResource appsRealmResource = keycloak.realms().realm("apps");
ClientRepresentation qmClientResource = appsRealmResource.clients().findByClientId("quartermaster").get(0);
clientSecret = qmClientResource.getSecret();
log.info("Got client id \"{}\" with secret: {}", "quartermaster", clientSecret);
//get private key
for (KeysMetadataRepresentation.KeyMetadataRepresentation curKey : appsRealmResource.keys().getKeyMetadata().getKeys()) {
if (!SIG.equals(curKey.getUse())) {
continue;
}
if (!"RSA".equals(curKey.getType())) {
continue;
}
String publicKeyTemp = curKey.getPublicKey();
if (publicKeyTemp == null || publicKeyTemp.isBlank()) {
continue;
}
publicKey = publicKeyTemp;
log.info("Found a relevant key for public key use: {} / {}", curKey.getKid(), publicKey);
}
}
// write public key
// = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString() + "/security/testKeycloakPublicKey.pem");
File publicKeyFile;
try {
publicKeyFile = File.createTempFile("oqmTestKeycloakPublicKey",".pem");
// publicKeyFile = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString().replace("/classes/java/", "/resources/") + "/security/testKeycloakPublicKey.pem");
log.info("path of public key: {}", publicKeyFile);
// if(publicKeyFile.createNewFile()){
// log.info("created new public key file");
//
// } else {
// log.info("Public file already exists");
// }
try (
FileOutputStream os = new FileOutputStream(
publicKeyFile
);
) {
IOUtils.write(publicKey, os, UTF_8);
} catch (IOException e) {
log.error("Failed to write out public key of keycloak: ", e);
throw new IllegalStateException("Failed to write out public key of keycloak.", e);
}
} catch (IOException e) {
log.error("Failed to create public key file: ", e);
throw new IllegalStateException("Failed to create public key file", e);
}
String keycloakUrl = KEYCLOAK_CONTAINER.getAuthServerUrl().replace("/auth", "");
return Map.of(
"test.keycloak.url", keycloakUrl,
"test.keycloak.authUrl", KEYCLOAK_CONTAINER.getAuthServerUrl(),
"test.keycloak.adminName", KEYCLOAK_CONTAINER.getAdminUsername(),
"test.keycloak.adminPass", KEYCLOAK_CONTAINER.getAdminPassword(),
//TODO:: add config for server to talk to
"service.externalAuth.url", keycloakUrl,
"mp.jwt.verify.publickey.location", publicKeyFile.getAbsolutePath()
);
}
public static synchronized void startMongoTestServer() throws IOException {
if (MONGO_EXE != null) {
log.info("Flapdoodle Mongo already started.");
return;
}
Version.Main version = Version.Main.V4_0;
int port = 27018;
log.info("Starting Flapdoodle Test Mongo {} on port {}", version, port);
IMongodConfig config = new MongodConfigBuilder()
.version(version)
.net(new Net(port, Network.localhostIsIPv6()))
.build();
try {
MONGO_EXE = MongodStarter.getDefaultInstance().prepare(config);
MongodProcess process = MONGO_EXE.start();
if (!process.isProcessRunning()) {
throw new IOException();
}
} catch (Throwable e) {
log.error("FAILED to start test mongo server: ", e);
MONGO_EXE = null;
throw e;
}
}
public static synchronized void stopMongoTestServer() {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
MONGO_EXE.stop();
MONGO_EXE = null;
}
public synchronized static void cleanMongo() throws IOException {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
log.info("Cleaning Mongo of all entries.");
}
@Override
public void init(Map<String, String> initArgs) {
this.externalAuth = Boolean.parseBoolean(initArgs.getOrDefault(EXTERNAL_AUTH_ARG, Boolean.toString(this.externalAuth)));
}
@Override
public Map<String, String> start() {
log.info("STARTING test lifecycle resources.");
Map<String, String> configOverride = new HashMap<>();
try {
startMongoTestServer();
} catch (IOException e) {
log.error("Unable to start Flapdoodle Mongo server");
}
configOverride.putAll(startKeycloakTestServer());
return configOverride;
}
@Override
public void stop() {
log.info("STOPPING test lifecycle resources.");
stopMongoTestServer();
}
}
The app can be found here: https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/open-qm-base-station
The tests are currently failing in the ways I am describing, so feel free to look around.
Note that to run this, you will need to run ./gradlew build publishToMavenLocal in https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/libs/open-qm-core to install a dependency locally.
A GitHub issue is also tracking this: https://github.com/quarkusio/quarkus/issues/22025
Any use of @QuarkusTestResource() without restrictToAnnotatedClass set to true means that the QuarkusTestResourceLifecycleManager will be applied to all tests, no matter where the annotation is placed.
Hopefully restrictToAnnotatedClass will solve the problem.
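As a minimal sketch, the external-auth test class from the question would then look something like this (assuming a Quarkus version where restrictToAnnotatedClass is available):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class,
        restrictToAnnotatedClass = true,
        initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
    // tests unchanged; the lifecycle manager now only applies to classes annotated with it
}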
I have a Eureka cluster with multiple servers:
#ENV
EUREKA_URL=http://host1:8761/eureka/,http://host2:8761/eureka/,http://host3:8761/eureka/
# bootstrap.yml
eureka:
client:
registryFetchIntervalSeconds: 5
serviceUrl:
defaultZone: ${EUREKA_URL:http://127.0.0.1:8761/eureka/}
And when host1 is down, my app doesn't start, failing with this exception:
13410 2021-01-14 11:28:34,994 ERROR [ main ] o.s.b.SpringApplication | Application run failed
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://host1:8761/eureka/apps/": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:748)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:674)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:583)
at org.springframework.cloud.netflix.eureka.http.RestTemplateEurekaHttpClient.getApplicationsInternal(RestTemplateEurekaHttpClient.java:154)
at org.springframework.cloud.netflix.eureka.http.RestTemplateEurekaHttpClient.getApplications(RestTemplateEurekaHttpClient.java:142)
at org.springframework.cloud.netflix.eureka.config.EurekaConfigServerBootstrapConfiguration.lambda$eurekaConfigServerInstanceProvider$0(EurekaConfigServerBootstrapConfiguration.java:112)
at org.springframework.cloud.config.client.ConfigServerInstanceProvider.getConfigServerInstances(ConfigServerInstanceProvider.java:50)
at org.springframework.cloud.config.client.ConfigServerInstanceProvider$$FastClassBySpringCGLIB$$facbf882.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
at
This is a strange situation for me, because I expected Eureka to try to connect to the other servers, but it doesn't. I found this in spring-cloud-netflix-eureka-client-2.2.3.RELEASE.jar:
// EurekaConfigServerBootstrapConfiguration.java
private String getEurekaUrl(EurekaClientConfigBean config) {
List<String> urls = EndpointUtils.getServiceUrlsFromConfig(config,
EurekaClientConfigBean.DEFAULT_ZONE, true);
return urls.get(0);
}
and
// DefaultEurekaClientConfig.java
@Override
public List<String> getEurekaServerServiceUrls(String myZone) {
String serviceUrls = configInstance.getStringProperty(
namespace + CONFIG_EUREKA_SERVER_SERVICE_URL_PREFIX + "." + myZone, null).get();
if (serviceUrls == null || serviceUrls.isEmpty()) {
serviceUrls = configInstance.getStringProperty(
namespace + CONFIG_EUREKA_SERVER_SERVICE_URL_PREFIX + ".default", null).get();
}
if (serviceUrls != null) {
return Arrays.asList(serviceUrls.split(URL_SEPARATOR));
}
return new ArrayList<String>();
}
and finally:
// RestTemplateEurekaHttpClient.java
private EurekaHttpResponse<Applications> getApplicationsInternal(String urlPath,
String[] regions) {
String url = serviceUrl + urlPath;
if (regions != null && regions.length > 0) {
url = url + (urlPath.contains("?") ? "&" : "?") + "regions="
+ StringUtil.join(regions);
}
// There is no exception handling here and above!
ResponseEntity<EurekaApplications> response = restTemplate.exchange(url,
HttpMethod.GET, null, EurekaApplications.class);
return anEurekaHttpResponse(response.getStatusCodeValue(),
response.getStatusCode().value() == HttpStatus.OK.value()
&& response.hasBody() ? (Applications) response.getBody() : null)
.headers(headersOf(response)).build();
}
I'm on Java 11, springBootVersion = '2.3.4.RELEASE', spring-cloud-netflix-eureka-client 2.2.3.RELEASE, eureka-client 1.9.21.
How can I make it so that Eureka, in case of an unsuccessful registration on the first server, continues to try the following servers? Any ideas?
I found a workaround.
I just override the bean:
@Slf4j
@Configuration
public class EurekaMultiClientBoostrapConfig {
@Bean
@Primary
public RestTemplateEurekaHttpClient configDiscoveryRestTemplateEurekaHttpClient(EurekaClientConfigBean config) {
List<String> urls = EndpointUtils.getServiceUrlsFromConfig(config,
EurekaClientConfigBean.DEFAULT_ZONE, true);
for (String url : urls) {
try {
RestTemplateEurekaHttpClient client = (RestTemplateEurekaHttpClient) new RestTemplateTransportClientFactory()
.newClient(new DefaultEndpoint(url));
client.getApplications(config.getRegion());
log.info("Registered on Eureka host '{}' is successful.", url);
return client;
} catch (Exception e) {
log.warn("Eureka host '{}' is unavailable(reason: {})", url, e.getMessage());
}
}
throw new IllegalStateException("Failed to register on any eureka host.");
}
}
P.S. Since this configuration belongs to the bootstrap configuration, don't forget to add the following entry to resources/META-INF/spring.factories:
org.springframework.cloud.bootstrap.BootstrapConfiguration=your.app.package.EurekaMultiClientBoostrapConfig
I'm trying to deploy a Java-based chaincode in the "first-network" sample.
The code was generated with the IBM Blockchain Platform plugin for VS Code.
It works in the local environment (using the VS Code plugin to install, invoke, ...), but when I try to test the chaincode in the "first-network" sample, it crashes.
Local Environment:
peer0.org1.example.com
ca.org1.example.com
orderer.example.com
First Network Environment:
cli
peer0.org2.example.com
peer1.org2.example.com
peer0.org1.example.com
peer1.org1.example.com
orderer.example.com
couchdb2
couchdb1
couchdb3
couchdb0
ca.example.com
I have two classes:
SimpleAsset.java
/*
* SPDX-License-Identifier: Apache-2.0
*/
package org.example;
import org.hyperledger.fabric.contract.annotation.DataType;
import org.hyperledger.fabric.contract.annotation.Property;
import org.json.JSONObject;
@DataType()
public class SimpleAsset {
@Property()
private String value;
public SimpleAsset(){
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
public String toJSONString() {
return new JSONObject(this).toString();
}
public static SimpleAsset fromJSONString(String json) {
String value = new JSONObject(json).getString("value");
SimpleAsset asset = new SimpleAsset();
asset.setValue(value);
return asset;
}
}
SimpleAssetContract.java
/*
* SPDX-License-Identifier: Apache-2.0
*/
package org.example;
import org.hyperledger.fabric.contract.Context;
import org.hyperledger.fabric.contract.ContractInterface;
import org.hyperledger.fabric.contract.annotation.Contract;
import org.hyperledger.fabric.contract.annotation.Default;
import org.hyperledger.fabric.contract.annotation.Transaction;
import org.hyperledger.fabric.contract.annotation.Contact;
import org.hyperledger.fabric.contract.annotation.Info;
import org.hyperledger.fabric.contract.annotation.License;
import static java.nio.charset.StandardCharsets.UTF_8;
@Contract(name = "SimpleAssetContract",
info = @Info(title = "SimpleAsset contract",
description = "My Smart Contract",
version = "0.0.1",
license =
@License(name = "Apache-2.0",
url = ""),
contact = @Contact(email = "SimpleAsset@example.com",
name = "SimpleAsset",
url = "http://SimpleAsset.me")))
@Default
public class SimpleAssetContract implements ContractInterface {
public SimpleAssetContract() {
}
@Transaction()
public boolean simpleAssetExists(Context ctx, String simpleAssetId) {
byte[] buffer = ctx.getStub().getState(simpleAssetId);
return (buffer != null && buffer.length > 0);
}
@Transaction()
public void createSimpleAsset(Context ctx, String simpleAssetId, String value) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (exists) {
throw new RuntimeException("The asset "+simpleAssetId+" already exists");
}
SimpleAsset asset = new SimpleAsset();
asset.setValue(value);
ctx.getStub().putState(simpleAssetId, asset.toJSONString().getBytes(UTF_8));
}
@Transaction()
public SimpleAsset readSimpleAsset(Context ctx, String simpleAssetId) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (!exists) {
throw new RuntimeException("The asset "+simpleAssetId+" does not exist");
}
SimpleAsset newAsset = SimpleAsset.fromJSONString(new String(ctx.getStub().getState(simpleAssetId),UTF_8));
return newAsset;
}
@Transaction()
public void updateSimpleAsset(Context ctx, String simpleAssetId, String newValue) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (!exists) {
throw new RuntimeException("The asset "+simpleAssetId+" does not exist");
}
SimpleAsset asset = new SimpleAsset();
asset.setValue(newValue);
ctx.getStub().putState(simpleAssetId, asset.toJSONString().getBytes(UTF_8));
}
@Transaction()
public void deleteSimpleAsset(Context ctx, String simpleAssetId) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (!exists) {
throw new RuntimeException("The asset "+simpleAssetId+" does not exist");
}
ctx.getStub().delState(simpleAssetId);
}
}
I don't know if I'm doing it right. The steps I'm following are:
$ ./byfn.sh up -s couchdb -l java # Deploy the network with Couchdb and Java
$ cp -r SimpleAsset fabric-samples/chaincode/ # This is the chaincodes path in the docker
$ docker exec -it cli bash # Go inside the cli container
$ /opt/gopath/src/github.com/hyperledger/fabric/peer# peer chaincode install -n sa01 -v 1.0 -l java -p /opt/gopath/src/github.com/chaincode/SimpleAsset/ # Install the SimpleAsset chaincode -> OK!
$ /opt/gopath/src/github.com/hyperledger/fabric/peer# peer chaincode instantiate -o orderer.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n sa01 -l java -v 1.0 -c '{"Args":[]}' -P 'AND ('\''Org1MSP.peer'\'','\''Org2MSP.peer'\'')'
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg chaincode registration failed: container exited with 1
What am I doing wrong? How could I solve this?
There is a problem with the Java fabric-shim version 1.4.2 which means that if you declare a dependency on that version, the chaincode will fail to instantiate. Check your pom.xml or build.gradle file to see which version is being used, and use version 1.4.4 or later (currently only 1.4.4 is available, but there are plans for further releases).
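For example, with Gradle the dependency would then look something like this (the coordinates below are the standard fabric-chaincode-shim ones; adjust if your generated project differs):
dependencies {
    compile 'org.hyperledger.fabric-chaincode-java:fabric-chaincode-shim:1.4.4' // instead of 1.4.2
}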
I would like to write some integration tests with Elasticsearch. For testing I would like to run an in-memory ES instance.
I found some information in the documentation, but without an example of how to write this kind of test: Elasticsearch Reference [1.6] » Testing » Java Testing Framework » integration tests
I also found the following article, but it's out of date: Easy JUnit testing with Elastic Search
I'm looking for an example of how to start and run ES in-memory and access it over the REST API.
Based on the second link you provided, I created this abstract test class:
@RunWith(SpringJUnit4ClassRunner.class)
public abstract class AbstractElasticsearchTest {
private static final String HTTP_PORT = "9205";
private static final String HTTP_TRANSPORT_PORT = "9305";
private static final String ES_WORKING_DIR = "target/es";
private static final String CLUSTER_NAME = "monkeys.elasticsearch";
private static Node node;
@BeforeClass
public static void startElasticsearch() throws Exception {
removeOldDataDir(ES_WORKING_DIR + "/" + CLUSTER_NAME);
Settings settings = Settings.builder()
.put("path.home", ES_WORKING_DIR)
.put("path.conf", ES_WORKING_DIR)
.put("path.data", ES_WORKING_DIR)
.put("path.work", ES_WORKING_DIR)
.put("path.logs", ES_WORKING_DIR)
.put("http.port", HTTP_PORT)
.put("transport.tcp.port", HTTP_TRANSPORT_PORT)
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("discovery.zen.ping.multicast.enabled", "false")
.build();
node = nodeBuilder().settings(settings).clusterName(CLUSTER_NAME).client(false).node();
node.start();
}
@AfterClass
public static void stopElasticsearch() {
node.close();
}
private static void removeOldDataDir(String datadir) throws Exception {
File dataDir = new File(datadir);
if (dataDir.exists()) {
FileSystemUtils.deleteRecursively(dataDir);
}
}
}
In the production code, I configured an Elasticsearch client as follows. The integration test extends the abstract class defined above and configures the property elasticsearch.port as 9305 and elasticsearch.host as localhost.
@Configuration
public class ElasticsearchConfiguration {
@Bean(destroyMethod = "close")
public Client elasticsearchClient(@Value("${elasticsearch.clusterName}") String clusterName,
@Value("${elasticsearch.host}") String elasticsearchClusterHost,
@Value("${elasticsearch.port}") Integer elasticsearchClusterPort) throws UnknownHostException {
Settings settings = Settings.settingsBuilder().put("cluster.name", clusterName).build();
InetSocketTransportAddress transportAddress = new InetSocketTransportAddress(InetAddress.getByName(elasticsearchClusterHost), elasticsearchClusterPort);
return TransportClient.builder().settings(settings).build().addTransportAddress(transportAddress);
}
}
That's it. The integration test will run the production code, which is configured to connect to the node started in AbstractElasticsearchTest.startElasticsearch().
In case you want to use the Elasticsearch REST API, use port 9205, e.g. with Apache HttpComponents:
HttpClient httpClient = HttpClients.createDefault();
HttpPut httpPut = new HttpPut("http://localhost:9205/_template/" + templateName);
httpPut.setEntity(new FileEntity(new File("template.json")));
httpClient.execute(httpPut);
Here is my implementation:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.UUID;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
/**
*
* @author Raghu Nair
*/
public final class ElasticSearchInMemory {
private static Client client = null;
private static File tempDir = null;
private static Node elasticSearchNode = null;
public static Client getClient() {
return client;
}
public static void setUp() throws Exception {
tempDir = File.createTempFile("elasticsearch-temp", Long.toString(System.nanoTime()));
tempDir.delete();
tempDir.mkdir();
System.out.println("writing to: " + tempDir);
String clusterName = UUID.randomUUID().toString();
elasticSearchNode = NodeBuilder
.nodeBuilder()
.local(false)
.clusterName(clusterName)
.settings(
ImmutableSettings.settingsBuilder()
.put("script.disable_dynamic", "false")
.put("gateway.type", "local")
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("path.data", new File(tempDir, "data").getAbsolutePath())
.put("path.logs", new File(tempDir, "logs").getAbsolutePath())
.put("path.work", new File(tempDir, "work").getAbsolutePath())
).node();
elasticSearchNode.start();
client = elasticSearchNode.client();
}
public static void tearDown() throws Exception {
if (client != null) {
client.close();
}
if (elasticSearchNode != null) {
elasticSearchNode.stop();
elasticSearchNode.close();
}
if (tempDir != null) {
removeDirectory(tempDir);
}
}
public static void removeDirectory(File dir) throws IOException {
if (dir.isDirectory()) {
File[] files = dir.listFiles();
if (files != null && files.length > 0) {
for (File aFile : files) {
removeDirectory(aFile);
}
}
}
Files.delete(dir.toPath());
}
}
You can start ES locally with:
Settings settings = Settings.settingsBuilder()
.put("path.home", ".")
.build();
NodeBuilder.nodeBuilder().settings(settings).node();
Once ES has started, access it over REST, e.g.:
http://localhost:9200/_cat/health?v
As of 2016, embedded Elasticsearch is no longer supported.
As per a response from one of the developers in 2017, you can use the following approaches:
Use the Gradle tools Elasticsearch already has. You can read some information about this here: https://github.com/elastic/elasticsearch/issues/21119
Use the Maven plugin: https://github.com/alexcojocaru/elasticsearch-maven-plugin
Use Ant scripts like http://david.pilato.fr/blog/2016/10/18/elasticsearch-real-integration-tests-updated-for-ga
Using Docker via Testcontainers: https://www.testcontainers.org/modules/elasticsearch (see the sketch after this list)
Using Docker from maven: https://github.com/dadoonet/fscrawler/blob/e15dddf72b1ed094dad279d492e4e0314f73683f/pom.xml#L241-L289
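As a rough sketch of the Testcontainers option (the image tag and surrounding code are illustrative, not prescriptive):
import org.testcontainers.elasticsearch.ElasticsearchContainer;

public class ElasticsearchContainerExample {
    public static void main(String[] args) {
        // Starts a throwaway Elasticsearch instance in Docker and tears it down afterwards
        try (ElasticsearchContainer es =
                     new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.17.10")) {
            es.start();
            // HTTP endpoint to point a REST client at, e.g. "localhost:32768"
            System.out.println("Elasticsearch REST endpoint: http://" + es.getHttpHostAddress());
        }
    }
}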