Cannot instantiate this Java chaincode - java

I'm trying to deploy a Java-based chaincode in the "first-network" sample.
The code was generated with the IBM Blockchain Platform plugin for VSCode.
It works in the local environment (using the VSCode plugin to install, invoke, ...), but when I try to test the chaincode in the "first-network" sample, it crashes.
Local Environment:
peer0.org1.example.com
ca.org1.example.com
orderer.example.com
First Network Environment:
cli
peer0.org2.example.com
peer1.org2.example.com
peer0.org1.example.com
peer1.org1.example.com
orderer.example.com
couchdb2
couchdb1
couchdb3
couchdb0
ca.example.com
I have two classes:
SimpleAsset.java
/*
* SPDX-License-Identifier: Apache-2.0
*/
package org.example;
import org.hyperledger.fabric.contract.annotation.DataType;
import org.hyperledger.fabric.contract.annotation.Property;
import org.json.JSONObject;
@DataType()
public class SimpleAsset {
@Property()
private String value;
public SimpleAsset(){
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
public String toJSONString() {
return new JSONObject(this).toString();
}
public static SimpleAsset fromJSONString(String json) {
String value = new JSONObject(json).getString("value");
SimpleAsset asset = new SimpleAsset();
asset.setValue(value);
return asset;
}
}
SimpleAssetContract.java
/*
* SPDX-License-Identifier: Apache-2.0
*/
package org.example;
import org.hyperledger.fabric.contract.Context;
import org.hyperledger.fabric.contract.ContractInterface;
import org.hyperledger.fabric.contract.annotation.Contract;
import org.hyperledger.fabric.contract.annotation.Default;
import org.hyperledger.fabric.contract.annotation.Transaction;
import org.hyperledger.fabric.contract.annotation.Contact;
import org.hyperledger.fabric.contract.annotation.Info;
import org.hyperledger.fabric.contract.annotation.License;
import static java.nio.charset.StandardCharsets.UTF_8;
@Contract(name = "SimpleAssetContract",
info = @Info(title = "SimpleAsset contract",
description = "My Smart Contract",
version = "0.0.1",
license =
@License(name = "Apache-2.0",
url = ""),
contact = @Contact(email = "SimpleAsset@example.com",
name = "SimpleAsset",
url = "http://SimpleAsset.me")))
@Default
public class SimpleAssetContract implements ContractInterface {
public SimpleAssetContract() {
}
@Transaction()
public boolean simpleAssetExists(Context ctx, String simpleAssetId) {
byte[] buffer = ctx.getStub().getState(simpleAssetId);
return (buffer != null && buffer.length > 0);
}
@Transaction()
public void createSimpleAsset(Context ctx, String simpleAssetId, String value) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (exists) {
throw new RuntimeException("The asset "+simpleAssetId+" already exists");
}
SimpleAsset asset = new SimpleAsset();
asset.setValue(value);
ctx.getStub().putState(simpleAssetId, asset.toJSONString().getBytes(UTF_8));
}
@Transaction()
public SimpleAsset readSimpleAsset(Context ctx, String simpleAssetId) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (!exists) {
throw new RuntimeException("The asset "+simpleAssetId+" does not exist");
}
SimpleAsset newAsset = SimpleAsset.fromJSONString(new String(ctx.getStub().getState(simpleAssetId),UTF_8));
return newAsset;
}
@Transaction()
public void updateSimpleAsset(Context ctx, String simpleAssetId, String newValue) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (!exists) {
throw new RuntimeException("The asset "+simpleAssetId+" does not exist");
}
SimpleAsset asset = new SimpleAsset();
asset.setValue(newValue);
ctx.getStub().putState(simpleAssetId, asset.toJSONString().getBytes(UTF_8));
}
@Transaction()
public void deleteSimpleAsset(Context ctx, String simpleAssetId) {
boolean exists = simpleAssetExists(ctx,simpleAssetId);
if (!exists) {
throw new RuntimeException("The asset "+simpleAssetId+" does not exist");
}
ctx.getStub().delState(simpleAssetId);
}
}
I don't know if I'm doing it right. The steps I'm following are:
$ ./byfn.sh up -s couchdb -l java # Deploy the network with Couchdb and Java
$ cp -r SimpleAsset fabric-samples/chaincode/ # This is the chaincode path mounted into the containers
$ docker exec -it cli bash # Go inside the cli container
$ /opt/gopath/src/github.com/hyperledger/fabric/peer# peer chaincode install -n sa01 -v 1.0 -l java -p /opt/gopath/src/github.com/chaincode/SimpleAsset/ # Install the SimpleAsset chaincode -> OK!
$ /opt/gopath/src/github.com/hyperledger/fabric/peer# peer chaincode instantiate -o orderer.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n sa01 -l java -v 1.0 -c '{"Args":[]}' -P 'AND ('\''Org1MSP.peer'\'','\''Org2MSP.peer'\'')'
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg chaincode registration failed: container exited with 1
What am I doing wrong? How could I solve this?

There is a problem with the Java fabric-shim version 1.4.2: if you declare a dependency on that version, the chaincode will fail to instantiate. Check your pom.xml or build.gradle file to see which version is being used, and use version 1.4.4 or later (currently only 1.4.4 is available, but there are plans for further releases).
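For example, in a Gradle build generated by the VSCode extension, the dependency block would look something like this (a sketch; the coordinates are the standard fabric-chaincode-shim artifact, but the exact layout of your generated build file may differ):
dependencies {
    // fabric-chaincode-shim 1.4.2 fails to register; use 1.4.4 or later
    compile 'org.hyperledger.fabric-chaincode-java:fabric-chaincode-shim:1.4.4'
    // needed by the generated SimpleAsset code for org.json.JSONObject
    compile 'org.json:json:20180813'
}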

Related

PicoCLI: How to use @ArgGroup for a CommandLine.Command method

I have two options (-n and -t) under a command, where if -n is used then -t is required, but both are not required. However, I keep getting an error about a missing required parameter.
I am trying to send the options to another method (with the business logic) as a parameter.
Valid Usage:
agent.bat install -n -t <blahblah>
agent.bat install -t <blahblah> -n
agent.bat install -t <blah blah>
agent.bat install -t <---- This is interactive, so it would ask for the parameter later
Invalid Usage:
agent.bat install -n
agent.bat install -n -t
Current output with valid usage:
agent.bat install -t
Missing required parameter: '<arg0>'
Usage: agent install [-hV] <arg0>
Setup or update the agent service program by install token.
<arg0>
public class Agent implements Callable<Integer> {
static class InstallArgs {
@Option(names = {"-t", "--token"},
order = 0,
arity = "0..1",
interactive = true,
description = "The agent install token.",
required = true) String installToken ;
@Option(names = {"-n", "--noninteractive"},
order = 1,
description = "Sets installation to non-interactive",
required = false) boolean nonInteractive ;
public String toString() {
return String.format("%s,%s", installToken, nonInteractive);
}
}
private static String[] programArgs;
@ArgGroup(exclusive = false, multiplicity = "1")
@CommandLine.Command(name = AgentCommand.INSTALL_COMMAND, mixinStandardHelpOptions = true,
description = "Setup or update the agent service program by install token.")
void install(InstallArgs installArgs) {
String[] installArgsValues = installArgs.toString().split(",");
String installToken = installArgsValues[0];
boolean nonInteractive = Boolean.parseBoolean(installArgsValues[1]);
IcbProgram.initProgramMode(ProgramMode.INSTALL);
MainService mainService = MainService.createInstallInstance(configFile, agentUserFile, backupAgentUserFile, installToken, nonInteractive);
}
public static void main(String... args) {
if (ArgumentValidator.validateArgument(args)) {
programArgs = args;
int exitCode = new CommandLine(new Agent()).execute(args);
System.exit(exitCode);
} else
//Exit with usage error
System.exit(ExitCode.USAGE);
}
}
Can you try using arity = "1" for installToken?
static class InstallArgs {
@Option(names = {"-t", "--token"},
order = 0,
arity = "1",
interactive = true,
description = "The agent install token.",
required = true) String installToken;
}
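For reference, here is a minimal, self-contained sketch of the suggested change (assuming picocli 4.x; the class and printed output are illustrative, not taken from the original project):
import picocli.CommandLine;
import picocli.CommandLine.ArgGroup;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;
import java.util.concurrent.Callable;

@Command(name = "install", mixinStandardHelpOptions = true,
        description = "Setup or update the agent service program by install token.")
class Install implements Callable<Integer> {
    static class InstallArgs {
        @Option(names = {"-t", "--token"}, arity = "1", interactive = true,
                description = "The agent install token.", required = true)
        String installToken;

        @Option(names = {"-n", "--noninteractive"},
                description = "Sets installation to non-interactive")
        boolean nonInteractive;
    }

    // multiplicity = "1" keeps the group (and therefore -t) mandatory
    @ArgGroup(exclusive = false, multiplicity = "1")
    InstallArgs installArgs;

    @Override
    public Integer call() {
        System.out.printf("token=%s, nonInteractive=%s%n",
                installArgs.installToken, installArgs.nonInteractive);
        return 0;
    }

    public static void main(String... args) {
        System.exit(new CommandLine(new Install()).execute(args));
    }
}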

How to use nested NamedDomainObjectContainer in Java

I have been trying to create a custom plugin with an extension that has nested NamedDomainObjectContainers. I keep getting a strange error if I implement it in Java using Action, compared to the same thing in Groovy using Closure.
Here is the Groovy one:
package com.example.gradle
import org.gradle.api.NamedDomainObjectContainer
import org.gradle.api.Project
import org.gradle.api.Plugin
class DeploymentPlugin implements Plugin<Project> {
void apply(final Project project) {
def servers = project.container(Server)
servers.all {
nodes = project.container(Node)
}
project.extensions.add('deployments', servers)
}
static class Server {
NamedDomainObjectContainer<Node> nodes
String url
String name
Server(String name) {
this.name = name
}
def nodes(final Closure configureClosure) {
nodes.configure(configureClosure)
}
}
static class Node {
String name
Integer port
Node(String name) {
this.name = name
}
}
}
And the Java one:
package com.example.gradle;
import org.gradle.api.Action;
import org.gradle.api.NamedDomainObjectContainer;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
public class DeploymentPlugin2 implements Plugin<Project> {
public void apply(final Project project) {
final NamedDomainObjectContainer<Server2> servers = project.container(Server2.class);
servers.all(it ->
it.nodes = project.container(Node2.class)
);
project.getExtensions().add("deployments2", servers);
}
public static class Server2 {
public NamedDomainObjectContainer<Node2> nodes;
public String url;
public String name;
public Server2(String name) {
this.name = name;
}
public void nodes(final Action<? super NamedDomainObjectContainer<Node2>> action) {
action.execute(nodes);
}
}
public static class Node2 {
public String name;
public Integer port;
public Node2(String name) {
this.name = name;
}
}
}
And the build.gradle file:
apply plugin: com.example.gradle.DeploymentPlugin
apply plugin: com.example.gradle.DeploymentPlugin2
wrapper {
gradleVersion = '5.4.1'
distributionType = Wrapper.DistributionType.ALL
}
deployments {
aws {
url = 'http://aws.address'
nodes {
node1 {
port = 9000
}
node2 {
port = 80
}
}
}
cf {
url = 'http://cf.address'
nodes {
test {
port = 10001
}
acceptanceTest {
port = 10002
}
}
}
}
deployments2 {
aws {
url = 'http://aws.address'
nodes {
node1 {
port = 9000
}
node2 {
port = 80
}
}
}
cf2 {
url = 'http://cf.address'
nodes {
test {
port = 10001
}
acceptanceTest {
port = 10002
}
}
}
}
Which fails with:
PS C:\source\gradle-nested-doc-bug> ./gradlew tasks
FAILURE: Build failed with an exception.
* Where:
Build file 'C:\source\gradle-nested-doc-bug\build.gradle' line: 42
* What went wrong:
A problem occurred evaluating root project 'gradle-nested-doc-bug'.
> Could not find method node1() for arguments [build_afudfj5pxfy9w4tkoowa6djon$_run_closure3$_closure12$_closure14$_closure15@4724dfaa] on object of type com.example.gradle.DeploymentPlugin2$Server2.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 2s
There is something funky going on with nested NamedDomainObjectContainers when using Action.
Any idea what is wrong with this?
Something like this, where the Project is passed into Server2 so the nested container can be created eagerly:
import org.gradle.api.Project;
public static class Server2 {
private final Project project;
public NamedDomainObjectContainer<Node2> nodes;
public String url;
public String name;
public Server2(String name, Project project) {
this.name = name;
this.project = project;
// Create the nested container while the Project is at hand
this.nodes = project.container(Node2.class);
}
public void nodes(final Action<? super NamedDomainObjectContainer<Node2>> action) {
action.execute(nodes);
}
}
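For the container to construct Server2 with that extra argument, the plugin's apply method would have to register a factory; a sketch under that assumption:
public void apply(final Project project) {
    // Register a factory so each Server2 receives the Project it needs
    // to build its nested Node2 container.
    final NamedDomainObjectContainer<Server2> servers =
            project.container(Server2.class, name -> new Server2(name, project));
    project.getExtensions().add("deployments2", servers);
}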

Not able to connect to SFTP (via jump host) using Apache Camel

I am using the Apache Camel FTP and AWS modules (v2.18) to create a route between SFTP and AWS S3. The connection to the SFTP location is established via an ssh jump host.
I am able to connect via the Unix command:
sftp -o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no
-i /path/to/host/private-key-file
-o 'ProxyCommand=ssh
-o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no
-i /path/to/jumphost/private-key-file
-l jumphostuser jump.host.com nc sftp.host.com 22' sftp-user@sftp.host.com
However, I am getting an error when connecting using Apache Camel:
Cannot connect/login to: sftp://sftp-user@sftp.host.com:22
For testing purposes I tried connecting to the SFTP server using Spring Integration, and I was able to do so successfully using the same proxy implementation (JumpHostProxyCommand) mentioned below.
Below is the Spring Boot + Apache Camel code that I have been using:
JSch proxy:
import com.jcraft.jsch.*;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
// Lombok (already among the dependencies) provides the log field used below
import lombok.extern.slf4j.Slf4j;
@Slf4j
class JumpHostProxyCommand implements Proxy {
String command;
Process p = null;
InputStream in = null;
OutputStream out = null;
public JumpHostProxyCommand(String command) {
this.command = command;
}
public void connect(SocketFactory socket_factory, String host, int port, int timeout) throws Exception {
String cmd = command.replace("%h", host);
cmd = cmd.replace("%p", Integer.toString(port));
p = Runtime.getRuntime().exec(cmd);
log.debug("Process returned by proxy command {} , {}", command, p);
in = p.getInputStream();
log.debug("Input stream returned by proxy {}", in);
out = p.getOutputStream();
log.debug("Output stream returned by proxy {}", out);
}
public Socket getSocket() {
return null;
}
public InputStream getInputStream() {
return in;
}
public OutputStream getOutputStream() {
return out;
}
public void close() {
try {
if (p != null) {
p.getErrorStream().close();
p.getOutputStream().close();
p.getInputStream().close();
p.destroy();
p = null;
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Spring Boot Camel configuration:
@Slf4j
@Configuration
public class CamelConfig {
@Autowired
DataSource dataSource;
@Bean(name = "jdbcMsgIdRepo")
public JdbcMessageIdRepository jdbcMessageIdRepository() {
return new JdbcMessageIdRepository(dataSource, "jdbc-repo");
}
@Bean(name = "s3Client")
public AmazonS3 s3Client() {
return new AmazonS3Client();
}
@Bean(name = "jumpHostProxyCommand")
JumpHostProxyCommand jumpHostProxyCommand() {
String proxyKeyFilePath = "/path/to/jumphost/private-key-file";
String command = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i " + proxyKeyFilePath + " -l jumphostuser jump.host.com nc %h %p";
log.debug("JumpHostProxyCommand : " + command);
return new JumpHostProxyCommand(command);
}
}
}
Camel route builder:
@Component
public class FtpRouteInitializer extends RouteBuilder {
@Value("${s3.bucket.name}")
private String s3Bucket;
@Autowired
private JdbcMessageIdRepository repo;
@Override
public void configure() throws Exception {
String ftpRoute = "sftp://sftp-user@sftp.host.com:22/?"
+ "delay=300s"
+ "&noop=true"
+ "&idempotentRepository=#jdbcMsgIdRepo"
+ "&idempotentKey=${file:name}-${file:modified}"
+ "&proxy=#jumpHostProxyCommand"
+ "&privateKeyUri=file:/path/to/host/private-key-file"
+ "&jschLoggingLevel=DEBUG"
+ "&knownHostsFile=/dev/null"
+ "&initialDelay=60s"
+ "&autoCreate=false"
+ "&preferredAuthentications=publickey";
from(ftpRoute)
.routeId("FTP-S3")
.setHeader(S3Constants.KEY, simple("${file:name}"))
.to("aws-s3://" + s3ucket + "?amazonS3Client=#s3Client")
.log("Uploaded ${file:name} complete.");
}
}
build.gradle file:
task wrapper(type: Wrapper) {
gradleVersion = '2.5'
}
ext {
springBootVersion = "1.4.1.RELEASE"
awsJavaSdkVersion = "1.10.36"
postgresVersion = "11.2.0.3.0"
jacksonVersion = "2.8.4"
sl4jVersion = "1.7.21"
junitVersion = "4.12"
camelVersion ="2.18.0"
}
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.1.RELEASE")
}
}
repositories {
mavenCentral()
}
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'spring-boot'
sourceCompatibility = 1.8
targetCompatibility = 1.8
springBoot {
executable = true
}
dependencies {
//logging
compile("ch.qos.logback:logback-classic:1.1.3")
compile("ch.qos.logback:logback-core:1.1.3")
compile("org.slf4j:slf4j-api:$sl4jVersion")
//Spring boot
compile("org.springframework.boot:spring-boot-starter-web:$springBootVersion")
compile("org.springframework.boot:spring-boot-starter-jdbc:$springBootVersion")
compile("org.apache.camel:camel-spring-boot-starter:$camelVersion")
//Jdbc
compile("postgresql:postgresql:9.0-801.jdbc4")
//Camel
compile("org.apache.camel:camel-ftp:$camelVersion")
compile("org.apache.camel:camel-aws:$camelVersion")
compile("org.apache.camel:camel-core:$camelVersion")
compile("org.apache.camel:camel-spring-boot:$camelVersion")
compile("org.apache.camel:camel-sql:$camelVersion")
//Aws sdk
compile("com.amazonaws:aws-java-sdk:$awsJavaSdkVersion")
//Json
compile("com.fasterxml.jackson.core:jackson-core:$jacksonVersion")
compile("com.fasterxml.jackson.core:jackson-annotations:$jacksonVersion")
compile("com.fasterxml.jackson.core:jackson-databind:$jacksonVersion")
compile("com.fasterxml.jackson.datatype:jackson-datatype-jsr310:$jacksonVersion")
//Swagger
compile("io.springfox:springfox-swagger2:2.0.2")
compile("io.springfox:springfox-swagger-ui:2.0.2")
//utilities
compile('org.projectlombok:lombok:1.16.6')
compile("org.apache.commons:commons-collections4:4.1")
compile("org.apache.commons:commons-lang3:3.4")
//Junit
testCompile("junit:junit:$junitVersion")
testCompile("org.springframework.boot:spring-boot-starter-test:$springBootVersion")
testCompile("org.mockito:mockito-all:1.10.19")
}
I have been struggling for the last 2 days to find the root cause of this error; any help is really appreciated. Thanks!
Try adding the jump host configuration to the ssh config file on the machine where you are running this code. You will then be able to connect transparently through the jump host for the host(s) specified in the config file, without needing to specify any proxy or jump host in the sftp command.
An example config to setup a dynamic jump host is as follows:
Host sftp.host.com
user sftp-user
IdentityFile /home/sftp-user/.ssh/id_rsa
ProxyCommand ssh sftp-user@jump.host.com nc %h %p 2> /dev/null
ForwardAgent yes
You can add multiple hosts or a regex pattern to the Host line. This entry goes in the ~/.ssh/config file (create the file if it is not already present).

Read xml from an external jar not included in classpath

I created a JavaFX project using NetBeans; the project itself works just fine.
I'm now trying to implement a custom lightweight plugin system. The plugins are external JARs located inside the plugins/ directory of the main project, and I'm using the javax.security package to sandbox them.
Here's the main project's structure:
MainProject
|
|---plugins/
| |---MyPlugin.jar
|
|---src/
| |---main.app.plugin
| |---Plugin.java
| |---PluginSecurityPolicy.java
| |---PluginClassLoader.java
| |---PluginContainer.java
....
And the plugin's one:
Plugin
|
|---src/
| |---my.plugin
| | |---MyPlugin.java
| |--settings.xml
|
|---dist/
|---MyPlugin.jar
|---META-INF/
| |---MANIFEST.MF
|---my.plugin
| |---MyPlugin.class
|---settings.xml
To load the plugins into the program I've made a PluginContainer class that gets all the jar files from the plugins directory, lists all files inside each jar, and looks for the plugin class and the settings file.
I can load and make an instance of the plugin class, but when it comes to the XML there's no way I can even list it among the jar contents.
Here's the code; maybe someone can see where I went wrong.
PluginSecurityPolicy.java
import java.security.AllPermission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.security.ProtectionDomain;
public class PluginSecurityPolicy extends Policy {
@Override
public PermissionCollection getPermissions(ProtectionDomain domain) {
if (isPlugin(domain)) {
return pluginPermissions();
} else {
return applicationPermissions();
}
}
private boolean isPlugin(ProtectionDomain domain) {
return domain.getClassLoader() instanceof PluginClassLoader;
}
private PermissionCollection pluginPermissions() {
Permissions permissions = new Permissions();
//
return permissions;
}
private PermissionCollection applicationPermissions() {
Permissions permissions = new Permissions();
permissions.add(new AllPermission());
return permissions;
}
}
PluginClassLoader.java
import java.net.URL;
import java.net.URLClassLoader;
public class PluginClassLoader extends URLClassLoader {
public PluginClassLoader(URL jarFileUrl) {
super(new URL[] {jarFileUrl});
}
}
PluginContainer.java (the #load method is the relevant one)
import main.app.plugin.PluginClassLoader;
import main.app.plugin.PluginSecurityPolicy;
import java.io.File;
import java.net.URL;
import java.security.Policy;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
public class PluginContainer {
private ArrayList<Plugin> plugins;
private ManifestParser parser;
public PluginContainer() {
Policy.setPolicy(new PluginSecurityPolicy());
System.setSecurityManager(new SecurityManager());
plugins = new ArrayList<>();
parser = new ManifestParser();
}
public void init() {
File[] dir = new File(System.getProperty("user.dir") + "/plugins").listFiles();
for (File pluginJarFile : dir) {
try {
Plugin plugin = load(pluginJarFile.getCanonicalPath());
plugins.add(plugin);
} catch (Exception e) {
throw new RuntimeException(e.getMessage(), e);
}
}
}
public <T extends Plugin> T getPlugin(Class<T> plugin) {
for (Plugin p : plugins) {
if (p.getClass().equals(plugin)) {
return (T)p;
}
}
return null;
}
private Plugin load(String pluginJarFile) throws Exception {
PluginManifest manifest = null;
Plugin plugin = null;
// Load the jar file
ZipFile jarFile = new ZipFile(pluginJarFile);
// Get all jar entries
Enumeration<? extends ZipEntry> allEntries = jarFile.entries();
String pluginClassName = null;
while (allEntries.hasMoreElements()) {
// Get single file
ZipEntry entry = allEntries.nextElement();
String file = entry.getName();
// Look for classfiles
if (file.endsWith(".class")) {
// Set class name
String classname = file.replace('/', '.').substring(0, file.length() - 6);
// Look for plugin class
if (classname.endsWith("Plugin")) {
// Set the class name and exit loop
pluginClassName = classname;
break;
}
}
}
// Load the class
ClassLoader pluginLoader = new PluginClassLoader(new URL("file:///" + pluginJarFile));
Class<?> pluginClass = pluginLoader.loadClass(pluginClassName);
// Edit as suggested by KDM, still null
URL settingsUrl = pluginClass.getResource("/settings.xml");
manifest = parser.load(settingsUrl);
// Check if manifest has been created
if (null == manifest) {
throw new RuntimeException("Manifest file not found in " + pluginJarFile);
}
// Create the plugin
plugin = (Plugin) pluginClass.newInstance();
plugin.load(manifest);
return plugin;
}
}
And the autogenerated MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.9.4
Created-By: 1.8.0_25-b18 (Oracle Corporation)
The Class-Path directive is missing, but if I force it to . or ./settings.xml or settings.xml (by manually editing the MANIFEST.MF file) it doesn't work either.
This is all I can think of. Thanks in advance for any help.
[EDIT] I've added an images/monitor-16.png file to the plugin jar root and added the #load2 method to the PluginContainer.
Since the method is called within a loop, I left the Policy.setPolicy(new PluginSecurityPolicy()); and System.setSecurityManager(new SecurityManager()); calls inside the constructor.
Here's the new plugin jar structure:
TestPlugin.jar
|
|---META-INF/
| |---MANIFEST.MF
|
|---dev.jimbo
| |---TestPlugin.class
|
|---images
| |---monitor-16.png
|
|---settings.xml
The new method code:
private Plugin load2(String pluginJarFile) throws MalformedURLException, ClassNotFoundException {
PluginClassLoader urlCL = new PluginClassLoader(new File(pluginJarFile).toURL());
Class<?> loadClass = urlCL.loadClass("dev.jimbo.TestPlugin");
System.out.println(loadClass);
System.out.println("Loading the class using the class loader object. Resource = " + urlCL.getResource("images/monitor-16.png"));
System.out.println("Loading the class using the class loader object with absolute path. Resource = " + urlCL.getResource("/images/monitor-16.png"));
System.out.println("Loading the class using the class object. Resource = " + loadClass.getResource("images/monitor-16.png"));
System.out.println();
return null;
}
Here's the output
class dev.jimbo.TestPlugin
Loading the class using the class loader object. Resource = null
Loading the class using the class loader object with absolute path. Resource = null
Loading the class using the class object. Resource = null
The following program:
public static void main(String[] args) throws MalformedURLException, ClassNotFoundException {
Policy.setPolicy(new PluginSecurityPolicy());
System.setSecurityManager(new SecurityManager());
PluginClassLoader urlCL = new PluginClassLoader(new File(
"A Jar containing images/load.gif and SampleApp class").toURL());
Class<?> loadClass = urlCL.loadClass("net.sourceforge.marathon.examples.SampleApp");
System.out.println(loadClass);
System.out.println("Loading the class using the class loader object. Resource = " + urlCL.getResource("images/load.gif"));
System.out.println("Loading the class using the class loader object with absolute path. Resource = " + urlCL.getResource("/images/load.gif"));
System.out.println("Loading the class using the class object. Resource = " + loadClass.getResource("images/load.gif"));
}
Produces the following output:
class net.sourceforge.marathon.examples.SampleApp
Loading the class using the class loader object. Resource = jar:file:/Users/dakshinamurthykarra/Projects/install/marathon/sampleapp.jar!/images/load.gif
Loading the class using the class loader object with absolute path. Resource = null
Loading the class using the class object. Resource = null
So I do not think there is any problem with your class loader. Putting this as an answer so that the code can be formatted properly.
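The output illustrates the standard Java lookup rules: ClassLoader.getResource treats every name as absolute (and does not strip a leading slash), while Class.getResource resolves relative names against the class's package. A short sketch of the distinction, reusing the names from the program above:
// ClassLoader.getResource: names are already absolute, no leading slash
urlCL.getResource("images/load.gif");      // found
urlCL.getResource("/images/load.gif");     // null: the leading slash is not stripped
// Class.getResource: prefix with '/' to address the jar root
loadClass.getResource("/images/load.gif"); // found
loadClass.getResource("images/load.gif");  // null: resolved against the class's package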
Nailed it! It seems that my previous NetBeans (8.0) was deleting the plugin directory from the added Jar/Folder Libraries references on the Clean and Build action. I downloaded and installed NetBeans 8.0.2 and the problem was solved. I couldn't find any related bug for that version on their tracker, though.
Anyway, thanks for the help :)

ElasticSearch in-memory for testing

I would like to write some integration with Elasticsearch, and for testing I would like to run an in-memory ES.
I found some information in the documentation (Elasticsearch Reference [1.6] » Testing » Java Testing Framework » Integration tests), but without an example of how to write that kind of test.
I also found the following article, but it's out of date: Easy JUnit testing with Elastic Search.
I'm looking for an example of how to start and run ES in-memory and access it over the REST API.
Based on the second link you provided, I created this abstract test class:
@RunWith(SpringJUnit4ClassRunner.class)
public abstract class AbstractElasticsearchTest {
private static final String HTTP_PORT = "9205";
private static final String HTTP_TRANSPORT_PORT = "9305";
private static final String ES_WORKING_DIR = "target/es";
private static final String CLUSTER_NAME = "monkeys.elasticsearch";
private static Node node;
@BeforeClass
public static void startElasticsearch() throws Exception {
removeOldDataDir(ES_WORKING_DIR + "/" + CLUSTER_NAME);
Settings settings = Settings.builder()
.put("path.home", ES_WORKING_DIR)
.put("path.conf", ES_WORKING_DIR)
.put("path.data", ES_WORKING_DIR)
.put("path.work", ES_WORKING_DIR)
.put("path.logs", ES_WORKING_DIR)
.put("http.port", HTTP_PORT)
.put("transport.tcp.port", HTTP_TRANSPORT_PORT)
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("discovery.zen.ping.multicast.enabled", "false")
.build();
node = nodeBuilder().settings(settings).clusterName(CLUSTER_NAME).client(false).node();
node.start();
}
@AfterClass
public static void stopElasticsearch() {
node.close();
}
private static void removeOldDataDir(String datadir) throws Exception {
File dataDir = new File(datadir);
if (dataDir.exists()) {
FileSystemUtils.deleteRecursively(dataDir);
}
}
}
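The integration test then extends the abstract class above and sets elasticsearch.port to 9305 and elasticsearch.host to localhost. A sketch of such a test (the class name and the @SpringBootTest/@TestPropertySource wiring are illustrative; your context configuration may differ):
// Assumes the application context is loaded via Spring Boot's test support
@SpringBootTest
@TestPropertySource(properties = {
"elasticsearch.clusterName=monkeys.elasticsearch",
"elasticsearch.host=localhost",
"elasticsearch.port=9305" })
public class ElasticsearchClientIT extends AbstractElasticsearchTest {
// test methods talk to the embedded node started in @BeforeClass
}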
In the production code, I configured an Elasticsearch client as follows:
@Configuration
public class ElasticsearchConfiguration {
@Bean(destroyMethod = "close")
public Client elasticsearchClient(@Value("${elasticsearch.clusterName}") String clusterName,
@Value("${elasticsearch.host}") String elasticsearchClusterHost,
@Value("${elasticsearch.port}") Integer elasticsearchClusterPort) throws UnknownHostException {
Settings settings = Settings.settingsBuilder().put("cluster.name", clusterName).build();
InetSocketTransportAddress transportAddress = new InetSocketTransportAddress(InetAddress.getByName(elasticsearchClusterHost), elasticsearchClusterPort);
return TransportClient.builder().settings(settings).build().addTransportAddress(transportAddress);
}
}
That's it. The integration test will run the production code which is configured to connect to the node started in the AbstractElasticsearchTest.startElasticsearch().
In case you want to use the Elasticsearch REST API, use port 9205, e.g. with Apache HttpComponents:
HttpClient httpClient = HttpClients.createDefault();
HttpPut httpPut = new HttpPut("http://localhost:9205/_template/" + templateName);
httpPut.setEntity(new FileEntity(new File("template.json")));
httpClient.execute(httpPut);
Here is my implementation
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.UUID;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
/**
*
* @author Raghu Nair
*/
public final class ElasticSearchInMemory {
private static Client client = null;
private static File tempDir = null;
private static Node elasticSearchNode = null;
public static Client getClient() {
return client;
}
public static void setUp() throws Exception {
tempDir = File.createTempFile("elasticsearch-temp", Long.toString(System.nanoTime()));
tempDir.delete();
tempDir.mkdir();
System.out.println("writing to: " + tempDir);
String clusterName = UUID.randomUUID().toString();
elasticSearchNode = NodeBuilder
.nodeBuilder()
.local(false)
.clusterName(clusterName)
.settings(
ImmutableSettings.settingsBuilder()
.put("script.disable_dynamic", "false")
.put("gateway.type", "local")
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("path.data", new File(tempDir, "data").getAbsolutePath())
.put("path.logs", new File(tempDir, "logs").getAbsolutePath())
.put("path.work", new File(tempDir, "work").getAbsolutePath())
).node();
elasticSearchNode.start();
client = elasticSearchNode.client();
}
public static void tearDown() throws Exception {
if (client != null) {
client.close();
}
if (elasticSearchNode != null) {
elasticSearchNode.stop();
elasticSearchNode.close();
}
if (tempDir != null) {
removeDirectory(tempDir);
}
}
public static void removeDirectory(File dir) throws IOException {
if (dir.isDirectory()) {
File[] files = dir.listFiles();
if (files != null && files.length > 0) {
for (File aFile : files) {
removeDirectory(aFile);
}
}
}
Files.delete(dir.toPath());
}
}
You can start ES locally with:
Settings settings = Settings.settingsBuilder()
.put("path.home", ".")
.build();
NodeBuilder.nodeBuilder().settings(settings).node();
Once ES has started, access it over REST, e.g.:
http://localhost:9200/_cat/health?v
As of 2016, embedded Elasticsearch is no longer supported.
As per a response from one of the developers in 2017, you can use the following approaches:
Use the Gradle tools elasticsearch already has. You can read some information about this here: https://github.com/elastic/elasticsearch/issues/21119
Use the Maven plugin: https://github.com/alexcojocaru/elasticsearch-maven-plugin
Use Ant scripts like http://david.pilato.fr/blog/2016/10/18/elasticsearch-real-integration-tests-updated-for-ga
Use Docker: https://www.testcontainers.org/modules/elasticsearch (see the sketch after this list)
Use Docker from Maven: https://github.com/dadoonet/fscrawler/blob/e15dddf72b1ed094dad279d492e4e0314f73683f/pom.xml#L241-L289
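For the Docker route, a minimal Testcontainers sketch (assuming the org.testcontainers:elasticsearch dependency and a local Docker daemon; the image tag is illustrative):
import org.testcontainers.elasticsearch.ElasticsearchContainer;

public class EsContainerExample {
    public static void main(String[] args) {
        // Starts a throwaway Elasticsearch in Docker; try-with-resources tears it down
        try (ElasticsearchContainer es = new ElasticsearchContainer(
                "docker.elastic.co/elasticsearch/elasticsearch:7.9.2")) {
            es.start();
            // Point any REST client at this mapped address, e.g. localhost:32768
            System.out.println("ES at http://" + es.getHttpHostAddress());
        }
    }
}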
