Docker -- How To Connect Services Together Using Docker Compose With ZeroMQ - java

I have a docker compose file:
version: '3.3'
services:
  bifrost:
    image: ivorytoast3853/bifrost
    container_name: bifrost-app
    ports:
      - "8084:8084"
  thor:
    image: ivorytoast3853/thor
    container_name: thor-app
    ports:
      - "8085:8084"
  loki:
    image: ivorytoast3853/loki
    container_name: loki-app
    ports:
      - "8086:8084"
Which is meant to test a ZeroMQ app.
Bifrost: Broker
Thor: Server
Loki: Client
I am using the exact code from ZeroMQ's getting-started guide, and when I run it locally -- without Docker -- it works (Loki sends messages to Thor through the Bifrost).
For reference, the 3 files are:
Loki
try (ZContext context = new ZContext()) {
    ZMQ.Socket requester = context.createSocket(SocketType.REQ);
    boolean didConnect = requester.connect("tcp://0.0.0.0:5559");
    log.info("Loki connected to the bifrost: " + didConnect);
    for (int request_nbr = 0; request_nbr < 10; request_nbr++) {
        requester.send("One", 0);
        String reply = requester.recvStr(0);
        System.out.println("Received reply " + request_nbr + " [" + reply + "]");
    }
}
Thor
try (ZContext context = new ZContext()) {
    ZMQ.Socket responder = context.createSocket(SocketType.REP);
    boolean didConnect = responder.connect("tcp://0.0.0.0:5560");
    log.info("Thor connected to the bifrost: " + didConnect);
    while (!Thread.currentThread().isInterrupted()) {
        String string = responder.recvStr(0);
        System.out.printf("Received request: [%s]\n", string);
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        responder.send("You sent me: " + string);
    }
}
Bifrost
while (true) {
    try (ZContext context = new ZContext()) {
        ZMQ.Socket frontend = context.createSocket(SocketType.ROUTER);
        ZMQ.Socket backend = context.createSocket(SocketType.DEALER);
        frontend.bind("tcp://*:5559");
        backend.bind("tcp://*:5560");
        log.info("Started Bifrost to connect Loki and Thor");
        ZMQ.Poller items = context.createPoller(2);
        items.register(frontend, ZMQ.Poller.POLLIN);
        items.register(backend, ZMQ.Poller.POLLIN);
        boolean more = false;
        byte[] message;
        while (!Thread.currentThread().isInterrupted()) {
            items.poll();
            if (items.pollin(0)) {
                while (true) {
                    message = frontend.recv(0);
                    more = frontend.hasReceiveMore();
                    backend.send(message, more ? ZMQ.SNDMORE : 0);
                    if (!more) {
                        break;
                    }
                }
            }
            if (items.pollin(1)) {
                while (true) {
                    message = backend.recv(0);
                    more = backend.hasReceiveMore();
                    frontend.send(message, more ? ZMQ.SNDMORE : 0);
                    if (!more) {
                        break;
                    }
                }
            }
        }
    }
}
Am I doing something wrong with the Docker compose file? I know Docker compose creates a network automatically...
Thanks!

It turns out I had not internalized some fundamental ideas about Docker and containers as a whole.
The Problem
I was trying to connect to: "tcp://0.0.0.0:5560" from Loki/Thor to Bifrost.
Why is that a problem?
It is a problem because, unlike starting all 3 Spring Boot applications on the same computer (with the same IP), I am starting each application in its OWN Docker container -- which has its OWN UNIQUE IP. Therefore, I cannot tell Loki/Thor "connect to Bifrost on this computer (IP)" -- Bifrost lives at a completely separate IP address.
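(You can see each container's own address from the host; the command below is standard Docker CLI, with the container name taken from the compose file above:)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' bifrost-app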
How I fixed it:
I changed the docker-compose file for Bifrost to contain a network alias:
image: ivorytoast3853/bifrost
container_name: bifrost-app
networks:
  my-net:
    aliases:
      - queue
All this does is let me say: "if I give you the hostname 'queue', connect to the IP address of the container the Bifrost application runs in."
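Note that for the alias to resolve, the user-defined network also has to be declared at the top level, and Loki/Thor have to join the same network. A minimal sketch of the whole file under that assumption (my reconstruction -- not the exact file, ports omitted for brevity):
version: '3.3'
services:
  bifrost:
    image: ivorytoast3853/bifrost
    container_name: bifrost-app
    networks:
      my-net:
        aliases:
          - queue
  thor:
    image: ivorytoast3853/thor
    container_name: thor-app
    networks:
      - my-net
  loki:
    image: ivorytoast3853/loki
    container_name: loki-app
    networks:
      - my-net
networks:
  my-net: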
Then, all I had to do was change the host:port string in Loki and Thor to the following:
responder.connect("tcp://queue:5560");
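(That is Thor's REP socket; for Loki, the matching change on its REQ socket would presumably be the line below -- my inference from the original connect call, since only Thor's line is shown:)
requester.connect("tcp://queue:5559");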
Hope this helps anyone who comes across a similar issue (or, as in my case, a gap in understanding).

Related

how to solve “instantiate chaincode” error in fabric-sdk-java?

I use fabric-sdk-java to operate the e2e_cli network. The e2e network uses a CA, and TLS is disabled.
I successfully create the channel and install the chaincode.
create channel:
Channel newChannel = client.newChannel(myChannel.getChannelName(), orderer, channelConfiguration, channelConfigurationSignatures.toArray(new byte[myPeerOrgs.size()][]));
channelConfigurationSignatures contains signatures from two organizations.
install chaincode:
Every organization has to send an install proposal once, using its own peer admin identity.
Reference: https://github.com/IBM/blockchain-application-using-fabric-java-sdk
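A minimal sketch of that per-organization install loop (MyOrg is a placeholder for whatever type myPeerOrgs holds, and chaincodeSourcePath is a placeholder; InstallProposalRequest and sendInstallProposal are standard fabric-sdk-java API):
for (MyOrg org : myPeerOrgs) {
    // each install proposal is signed by that org's own peer admin
    client.setUserContext(org.getPeerAdmin());
    InstallProposalRequest installRequest = client.newInstallProposalRequest();
    installRequest.setChaincodeID(chaincodeID);
    installRequest.setChaincodeSourceLocation(new File(chaincodeSourcePath));
    installRequest.setChaincodeVersion(chaincodeVersion);
    // send the proposal only to this org's peers
    Collection<ProposalResponse> installResponses =
            client.sendInstallProposal(installRequest, org.getPeers());
}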
But when I prepare to instantiate the chaincode, I get this error:
0endorser failed with Sending proposal to peer0.org1.example.com failed because of: gRPC failure=Status{code=UNKNOWN, description=Failed to deserialize creator identity, err MSP Org1 is unknown, cause=null}. Was verified:false
These are related codes:
client.setUserContext(myPeerOrgs.get(0).getPeerAdmin());
InstantiateProposalRequest instantiateProposalRequest = client.newInstantiationProposalRequest();
instantiateProposalRequest.setProposalWaitTime(fabricConfig.getProposalWaitTime());
instantiateProposalRequest.setChaincodeID(chaincodeID);
instantiateProposalRequest.setFcn(ininFun);
instantiateProposalRequest.setArgs(args);
Map<String, byte[]> tm = new HashMap<>();
tm.put("HyperLedgerFabric", "InstantiateProposalRequest:JavaSDK".getBytes(UTF_8));
tm.put("method", "InstantiateProposalRequest".getBytes(UTF_8));
instantiateProposalRequest.setTransientMap(tm);
ChaincodeEndorsementPolicy chaincodeEndorsementPolicy = new ChaincodeEndorsementPolicy();
chaincodeEndorsementPolicy.fromYamlFile(new File(myChaincode.getChaincodeEndorsementPolicyPath()));
instantiateProposalRequest.setChaincodeEndorsementPolicy(chaincodeEndorsementPolicy);
logger.trace("Sending instantiateProposalRequest to all peers with arguments: " + Arrays.toString(args));
Collection<ProposalResponse> successful = new LinkedList<>();
Collection<ProposalResponse> failed = new LinkedList<>();
Collection<ProposalResponse> responses = channel.sendInstantiationProposal(instantiateProposalRequest);
for (ProposalResponse response : responses) {
    if (response.isVerified() && response.getStatus() == ProposalResponse.Status.SUCCESS) {
        successful.add(response);
        logger.trace(String.format("Succesful instantiate proposal response Txid: %s from peer %s", response.getTransactionID(), response.getPeer().getName()));
    } else {
        failed.add(response);
    }
}
logger.trace(String.format("Received %d instantiate proposal responses. Successful+verified: %d . Failed: %d", responses.size(), successful.size(), failed.size()));
if (failed.size() > 0) {
    ProposalResponse first = failed.iterator().next();
    logger.error("Not enough endorsers for instantiate :" + successful.size() + "endorser failed with " + first.getMessage() + ". Was verified:" + first.isVerified());
    System.exit(1);
}
I thought it was a serialization problem, but the MyUser class and the MyEnrollement class both implement the Serializable interface, and both define a serialVersionUID.
I have compared my code with blockchain-application-using-fabric-java-sdk and have not identified the problem.
I finally solved this problem. The problem was in the following code:
Channel newChannel = client.newChannel(myChannel.getChannelName(), orderer, channelConfiguration, channelConfigurationSignatures.toArray(new byte[myPeerOrgs.size()][]));
I wrote the above code with reference to End2endIT:
//Create channel that has only one signer that is this orgs peer admin. If channel creation policy needed more signature they would need to be added too.
Channel newChannel = client.newChannel(name, anOrderer, channelConfiguration, client.getChannelConfigurationSignature(channelConfiguration, sampleOrg.getPeerAdmin()));
I don't know whether my usage is wrong, but with my code the error originates on this line; the failure is reported later, when joining the peers.
I referenced https://github.com/IBM/blockchain-application-using-fabric-java-sdk/blob/master/java/src/main/java/org/app/network/CreateChannel.java and found the correct way to write it.
public Channel createChannel() {
    logger.info("Begin create channel: " + myChannel.getChannelName());
    ChannelConfiguration channelConfiguration = new ChannelConfiguration(new File(fabricConfig.getChannelArtifactsPath() + "/" + myChannel.getChannelName() + ".tx"));
    logger.trace("Read channel " + myChannel.getChannelName() + " configuration file:" + fabricConfig.getChannelArtifactsPath() + "/" + myChannel.getChannelName() + ".tx");
    byte[] channelConfigurationSignatures = client.getChannelConfigurationSignature(channelConfiguration, myPeerOrgs.get(0).getPeerAdmin());
    Channel newChannel = client.newChannel(myChannel.getChannelName(), orderer, channelConfiguration, channelConfigurationSignatures);
    for (Peer peer : myPeerOrgs.get(0).getPeers()) {
        // creating the channel for the first time: only `joinPeer` here, not `addPeer`
        newChannel.joinPeer(peer);
    }
    for (EventHub eventHub : myPeerOrgs.get(0).getEventHubs()) {
        newChannel.addEventHub(eventHub);
    }
    if (!newChannel.isInitialized()) {
        newChannel.initialize();
    }
    // I have only tested two organizations;
    // I don't know whether three organizations would hit any errors.
    for (int i = 1; i < myPeerOrgs.size(); i++) {
        client.setUserContext(myPeerOrgs.get(i).getPeerAdmin());
        newChannel = client.getChannel(myChannel.getChannelName());
        for (Peer peer : myPeerOrgs.get(i).getPeers()) {
            newChannel.joinPeer(peer);
        }
        for (EventHub eventHub : myPeerOrgs.get(i).getEventHubs()) {
            newChannel.addEventHub(eventHub);
        }
    }
    logger.trace("Nodes that have joined the channel:");
    Collection<Peer> peers = newChannel.getPeers();
    for (Peer peer : peers) {
        logger.trace(peer.getName() + " at " + peer.getUrl());
    }
    logger.info("Success, end create channel: " + myChannel.getChannelName() + "\n");
    return newChannel;
}
For the related code that comes later, such as installing and instantiating the chaincode, also refer to https://github.com/IBM/blockchain-application-using-fabric-java-sdk. It is an excellent example.
If anyone knows how to use the fourth (varargs) parameter of newChannel, please let me know. Thanks.
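(Judging from the call in my own code above, that fourth parameter is a byte[]... varargs of channel-configuration signatures, so syntactically multiple signers would be passed like this -- org1Admin and org2Admin are placeholder names. Whether this is the correct usage is exactly my open question, since passing two signatures is what failed for me:)
byte[] sigOrg1 = client.getChannelConfigurationSignature(channelConfiguration, org1Admin);
byte[] sigOrg2 = client.getChannelConfigurationSignature(channelConfiguration, org2Admin);
Channel channel = client.newChannel(myChannel.getChannelName(), orderer,
        channelConfiguration, sigOrg1, sigOrg2);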
Finally, I don't know how to dynamically join nodes, organizations, and channels. I am still searching and testing; there are only Node.js examples online, none for Java. If anyone knows, please tell me -- I really need it. Thanks.

Jenkins pipeline get JPPF node connected

I am working in a project using Jenkins and JPPF.
How do I find out which nodes are connected to the JPPF server? If possible, please give me detailed guidelines.
Thanks,
Disclaimer: JPPF developer here.
You can monitor the nodes connected to a JPPF server using the JMX-based server management APIs. There are many things you can monitor, and a lot of different information you can obtain from the server and the nodes. Hopefully, the following example will give you a good starting point:
// connect using a JMX remote connection wrapper
try (JMXDriverConnectionWrapper serverJmx = new JMXDriverConnectionWrapper("jppf_server_host", 11111)) {
    serverJmx.connectAndWait(5_000L);
    if (serverJmx.isConnected()) {
        // get summary information on all the connected nodes
        Collection<JPPFManagementInfo> nodeInfos = serverJmx.nodesInformation();
        System.out.println("there are " + nodeInfos.size() + " connected nodes:");
        for (JPPFManagementInfo info: nodeInfos) {
            System.out.println("node uuid: " + info.getUuid() + ", host is " + info.getHost());
        }
        // get detailed information on the nodes
        // the node forwarder will send the same request to all selected nodes
        // and group the results in a map where each key is a node uuid
        JPPFNodeForwardingMBean forwarder = serverJmx.getNodeForwarder();
        Map<String, Object> responses = forwarder.systemInformation(NodeSelector.ALL_NODES);
        for (Map.Entry<String, Object> response: responses.entrySet()) {
            String nodeUuid = response.getKey();
            if (response.getValue() instanceof Exception) {
                System.out.println("node with uuid = " + nodeUuid + " raised an exception:");
                ((Exception) response.getValue()).printStackTrace(System.out);
            } else {
                JPPFSystemInformation systemInfo = (JPPFSystemInformation) response.getValue();
                System.out.println("system properties for node uuid " + nodeUuid + " :");
                System.out.println(systemInfo.getSystem());
            }
        }
    } else {
        System.out.println("could not connect to jppf_server_host:11111");
    }
} catch (Exception e) {
    e.printStackTrace();
}
Note that the web and standalone administration consoles, which are built on top of the same management APIs, will also provide this information.

Accumulo scan/write not running in standalone Java main program in AWS EC2 master using Cloudera CDH 5.8.2

We are trying to run a simple write/scan against Accumulo (client jar 1.5.0) from a standalone Java main program (a Maven Shade executable), shown below, on the AWS EC2 master (environment described below) using PuTTY:
public class AccumuloQueryApp {
    private static final Logger logger = LoggerFactory.getLogger(AccumuloQueryApp.class);

    public static final String INSTANCE = "accumulo"; // miniInstance
    public static final String ZOOKEEPERS = "ip-x-x-x-100:2181"; // localhost:28076

    private static Connector conn;

    static {
        // Accumulo
        Instance instance = new ZooKeeperInstance(INSTANCE, ZOOKEEPERS);
        try {
            conn = instance.getConnector("root", new PasswordToken("xxx"));
        } catch (Exception e) {
            logger.error("Connection", e);
        }
    }

    public static void main(String[] args) throws TableNotFoundException, AccumuloException, AccumuloSecurityException, TableExistsException {
        System.out.println("connection with : " + conn.whoami());
        BatchWriter writer = conn.createBatchWriter("test", ofBatchWriter());
        for (int i = 0; i < 10; i++) {
            Mutation m1 = new Mutation(String.valueOf(i));
            m1.put("personal_info", "first_name", String.valueOf(i));
            m1.put("personal_info", "last_name", String.valueOf(i));
            m1.put("personal_info", "phone", "983065281" + i % 2);
            m1.put("personal_info", "email", String.valueOf(i));
            m1.put("personal_info", "date_of_birth", String.valueOf(i));
            m1.put("department_info", "id", String.valueOf(i));
            m1.put("department_info", "short_name", String.valueOf(i));
            m1.put("department_info", "full_name", String.valueOf(i));
            m1.put("organization_info", "id", String.valueOf(i));
            m1.put("organization_info", "short_name", String.valueOf(i));
            m1.put("organization_info", "full_name", String.valueOf(i));
            writer.addMutation(m1);
        }
        writer.close();
        System.out.println("Writing complete ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`");
        Scanner scanner = conn.createScanner("test", new Authorizations());
        System.out.println("Step 1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`");
        scanner.setRange(new Range("3", "7"));
        System.out.println("Step 2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`");
        scanner.forEach(e -> System.out.println("Key: " + e.getKey() + ", Value: " + e.getValue()));
        System.out.println("Step 3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`");
        scanner.close();
    }

    public static BatchWriterConfig ofBatchWriter() {
        // Batch writer properties
        final int MAX_LATENCY = 1;
        final int MAX_MEMORY = 10000000;
        final int MAX_WRITE_THREADS = 10;
        final int TIMEOUT = 10;
        BatchWriterConfig config = new BatchWriterConfig();
        config.setMaxLatency(MAX_LATENCY, TimeUnit.MINUTES);
        config.setMaxMemory(MAX_MEMORY);
        config.setMaxWriteThreads(MAX_WRITE_THREADS);
        config.setTimeout(TIMEOUT, TimeUnit.MINUTES);
        return config;
    }
}
The connection is established correctly, but creating the BatchWriter fails, and it keeps retrying in a loop with the same error:
[impl.ThriftScanner] DEBUG: Error getting transport to ip-x-x-x-100:10011 : NotServingTabletException(extent:TKeyExtent(table:21 30, endRow:21 30 3C, prevEndRow:null))
When we run the same code (writing to Accumulo and reading from Accumulo) inside a Spark job submitted to the YARN cluster, it runs perfectly. We are struggling to figure this out but have no clue so far. Please see the environment described below.
Cloudera CDH 5.8.2 on AWS environments (4 EC2 instances: one master and 3 children).
Consider the private IPs to be:
Master: x.x.x.100
Child1: x.x.x.101
Child2: x.x.x.102
Child3: x.x.x.103
We have the following installed in CDH:
Cluster (CDH 5.8.2)
Accumulo 1.6 (Tracer not installed, Garbage Collector in Child2, Master in Master, Monitor in child3, Tablet Server in Master)
HBase
HDFS (master as name node, all 3 child as datanode)
Kafka
Spark
YARN (MR2 Included)
ZooKeeper
Hrm, it's very curious that it runs with Spark-on-YARN but not as a regular Java application. Usually, it's the other way around :)
I would verify that the JARs on the classpath of the standalone Java app match the JARs used by the Spark-on-YARN job, as well as the Accumulo server's classpath.
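(A quick, generic way to dump the standalone app's effective classpath for that comparison -- nothing Accumulo-specific, just a fragment for main(); java.io.File supplies the platform's path separator:)
// print each classpath entry on its own line
System.out.println(System.getProperty("java.class.path")
        .replace(File.pathSeparatorChar, '\n'));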
If that doesn't help, try increasing the log4j level to DEBUG or TRACE and see if anything jumps out at you. If you have a hard time understanding what the logging says, feel free to send an email to user@accumulo.apache.org and you'll definitely have more eyes on the problem.
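(For example, assuming the stock Log4j 1.x setup that Accumulo 1.x clients use, a log4j.properties on the app's classpath along these lines would raise the client logging:)
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n
# turn the Accumulo client internals up to TRACE
log4j.logger.org.apache.accumulo=TRACE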

VPN connect using Java

Is there a way to connect and disconnect VPN in Forticlient programmatically?
I see that with the Cisco VPN Client there are options, such as using the APIs they provide or executing connectivity commands from my Java code. Views and opinions on these ways of connecting to a VPN are also most welcome.
I am looking for such options, or any other possible approach, with the Forticlient software.
Any directions from here would be of great help.
My attempt so far:
private static final String COMMAND = "C:/Program Files/Cisco/Cisco AnyConnect Secure Mobility Client/vpncli";
private ExpectJ exp = new ExpectJ(10);

public void connectToVPNViaCLI(String server, String uname, String pwd) {
    try {
        String command = COMMAND + " connect " + server;
        Spawn sp = exp.spawn(command);
        sp.expect("Username: ");
        sp.send(uname + "\n");
        sp.expect("Password: ");
        sp.send(pwd + "\n");
        sp.expect("accept? [y/n]: ");
        sp.send("y" + "\n");
    } catch (Exception e) {
        LOGGER.severe(e.getMessage());
    }
}
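An alternative sketch without ExpectJ, driving an arbitrary VPN command-line client with plain java.lang.ProcessBuilder. The command path and prompt answers below are placeholders, not Forticlient-specific -- Forticlient's CLI options vary by version, so no particular flags are asserted here:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class VpnCliRunner {
    public static void main(String[] args) throws Exception {
        // Placeholder command: substitute your VPN client's real CLI invocation.
        ProcessBuilder pb = new ProcessBuilder("path/to/vpn-cli", "connect", "vpn.example.com");
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process process = pb.start();

        // Answer the client's interactive prompts over stdin.
        try (Writer stdin = new OutputStreamWriter(process.getOutputStream())) {
            stdin.write("myUser\n");
            stdin.write("myPassword\n");
            stdin.write("y\n"); // e.g. accept a certificate/banner prompt
            stdin.flush();
        }

        // Echo the client's output so failures are visible.
        try (BufferedReader stdout = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = stdout.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.out.println("exit code: " + process.waitFor());
    }
}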

Multiple akka system in a single pc

How can we run multiple akka nodes on a single PC? Currently, I have the following in my application.conf file. For each system I added a different port number, but I can't start more than one instance. The error says "Address already in use", failed to bind.
application.conf file
remotelookup {
  include "common"
  akka {
    remote.server.port = 2500
    cluster.nodename = "n1"
  }
}
Update: by multiple akka nodes I mean several different standalone server applications, which communicate with a remote master node using akka.
The approach we are using is:
Create different settings in your application.conf for each of the systems:
systemOne {
  akka {
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = ${public-hostname}
        port = 2552
      }
    }
  }
}

systemTwo {
  akka {
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = ${public-hostname}
        port = 2553
      }
    }
  }
}
application.conf is the default config file, so in your settings module add configs for your systems:
object Configs {
  private val root = ConfigFactory.load()
  val one = root.getConfig("systemOne")
  val two = root.getConfig("systemTwo")
}
and then create the systems with these configs (using val names that don't shadow the configs):
val sysOne = ActorSystem("SystemName", Configs.one)
val sysTwo = ActorSystem("AnotherSystemName", Configs.two)
Don't forget that the system names must differ.
If you don't want to hardcode the info into your application.conf, you can do this:
def remoteConfig(hostname: String, port: Int, commonConfig: Config): Config = {
  val configStr = s"""
    |akka.remote.netty.hostname = $hostname
    |akka.remote.netty.port = $port
  """.stripMargin
  ConfigFactory.parseString(configStr).withFallback(commonConfig)
}
Then use it like:
val appConfig = ConfigFactory.load
val sys1 = ActorSystem("sys1", remoteConfig(args(0), args(1).toInt, appConfig))
val sys2 = ActorSystem("sys2", remoteConfig(args(0), args(2).toInt, appConfig))
If you use 0 for the port, Akka will assign a random port to that ActorSystem.
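(For example, extending the snippet above -- a sketch; Akka then binds to whichever free port the OS hands out:)
val sys3 = ActorSystem("sys3", remoteConfig(args(0), 0, appConfig)) // port 0 -> random free port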
The problem was in the port definition. It should be:
remotelookup {
  include "common"
  akka {
    remote.netty.port = 2500
    cluster.nodename = "n1"
  }
}
Otherwise, akka will take the default port.
