IllegalStateException: Gremlin Server must be configured to use the JanusGraphManager - java

Set<String> graphNames = JanusGraphFactory.getGraphNames();
for (String name : graphNames) {
    System.out.println(name);
}
The above snippet produces the following exception:
java.lang.IllegalStateException: Gremlin Server must be configured to use the JanusGraphManager.
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at org.janusgraph.core.JanusGraphFactory.getGraphNames(JanusGraphFactory.java:175)
at com.JanusTest.controllers.JanusController.getPersonDetail(JanusController.java:66)
my.properties
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cql
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
gremlin-server.yaml
host: 0.0.0.0
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
  ConfigurationManagementGraph: conf/my.properties
}
plugins:
  - janusgraph.imports
scriptEngines: {
  gremlin-groovy: {
    imports: [java.lang.Math],
    staticImports: [java.lang.Math.PI],
    scripts: [scripts/empty-sample.groovy]}}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
  consoleReporter: {enabled: true, interval: 180000},
  csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
  jmxReporter: {enabled: true},
  slf4jReporter: {enabled: true, interval: 180000},
  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
  graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536

The answer to this is similar to that of another question. The call to JanusGraphFactory.getGraphNames() needs to be sent to the remote server. If you're working in the Gremlin Console, first establish a sessioned remote connection, then switch to remote console mode.
gremlin> :remote connect tinkerpop.server conf/remote.yaml session
==>Configured localhost/127.0.0.1:8182
gremlin> :remote console
==>All scripts will now be sent to Gremlin Server - [localhost:8182]-[5206cdde-b231-41fa-9e6c-69feac0fe2b2] - type ':remote console' to return to local mode
Then as described in the JanusGraph docs for "Listing the Graphs":
ConfiguredGraphFactory.getGraphNames() will return a set of graph names for which you have created configurations using the ConfigurationManagementGraph APIs.
JanusGraphFactory.getGraphNames(), on the other hand, returns a set of graph names for the graphs you have instantiated, whose references are stored inside the JanusGraphManager.
If you are not using the Gremlin Console, then you should be using a remote client, such as the TinkerPop gremlin-driver (Java), to send your requests to the Gremlin Server.
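For example, with the TinkerPop gremlin-driver the same call can be submitted to the server as a script. This is a minimal sketch, assuming a Gremlin Server reachable at localhost:8182 and the gremlin-driver dependency on the classpath (the address, port, and class name are assumptions):

```java
import java.util.List;

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

public class ListGraphNames {
    public static void main(String[] args) throws Exception {
        // Connect to the Gremlin Server that hosts the JanusGraphManager
        Cluster cluster = Cluster.build("localhost").port(8182).create();
        Client client = cluster.connect();
        try {
            // The script executes server-side, where the JanusGraphManager exists
            List<Result> results =
                    client.submit("JanusGraphFactory.getGraphNames()").all().get();
            results.forEach(r -> System.out.println(r.getString()));
        } finally {
            client.close();
            cluster.close();
        }
    }
}
```

The key point is that the factory call runs inside the server's JVM, not the client's, so the JanusGraphManager configured in gremlin-server.yaml is visible to it.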

Related

Filebeat is not transferring data to Logstash

Filebeat input is not being transferred to Logstash. I have provided the Filebeat and Logstash configuration files below.
Input Test.csv file
Date,Open,High,Low,Close,Volume,Adj Close
2015-04-02,125.03,125.56,124.19,125.32,32120700,125.32
2015-04-01,124.82,125.12,123.10,124.25,40359200,124.25
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    -C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv
output.logstash:
  hosts: ["localhost:5044"]
logstash.conf
input {
  beats {
    port => 5044
  }
}
filter {
  csv {
    separator => ","
    columns => ["Date","Open","High","Low","Close","Volume","Adj Close"]
  }
  mutate {convert => ["High", "float"]}
  mutate {convert => ["Open", "float"]}
  mutate {convert => ["Low", "float"]}
  mutate {convert => ["Close", "float"]}
  mutate {convert => ["Volume", "float"]}
}
output {
  stdout {}
}
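With this filter, each CSV row becomes an event with the named columns, and the five converted fields become floats ("Adj Close" is not in any mutate, so it stays a string). A hand-written sketch of roughly what the stdout output would show for the first data row (not captured output; standard fields such as @timestamp and message are elided):

```
{
      "Date" => "2015-04-02",
      "Open" => 125.03,
      "High" => 125.56,
       "Low" => 124.19,
     "Close" => 125.32,
    "Volume" => 32120700.0,
 "Adj Close" => "125.32",
    ...
}
```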
Kindly check the filebeat.yml file, as there is an issue with the indentation.
filebeat documentation:
filebeat.inputs:
- type: log
  paths:
    - /var/log/messages
    - /var/log/*.log
your filebeat:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    -C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv
output.logstash:
  hosts: ["localhost:5044"]
And for your information, the log input is deprecated; use the filestream input instead.
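For reference, a corrected sketch of the filebeat.yml (the missing space after the dash and the indentation fixed, and the deprecated log input swapped for filestream; the `id` value here is an arbitrary placeholder, any unique string works):

```yaml
filebeat.inputs:
  - type: filestream
    id: test-csv
    enabled: true
    paths:
      - "C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv"

output.logstash:
  hosts: ["localhost:5044"]
```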

Spring on Fargate can't connect to AWS SES

I'm trying to send emails through my Java application, which I'm running in a container on Fargate. My containers run in a VPC behind an API gateway, and connections to external services are made through VPC endpoints.
All that infra is deployed using Terraform. The Java app runs fine locally, but not when deployed to AWS, so I'm thinking there is a missing config.
The Java app follows the AWS guidelines found here:
https://docs.aws.amazon.com/ses/latest/dg/send-email-raw.html
Following are some snippets of the Terraform code:
# SECURITY GROUPS
resource "aws_security_group" "security_group_containers" {
  name   = "security_group_containers_${var.project_name}_${var.environment}"
  vpc_id = var.vpc_id
  ingress {
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
    self             = true
  }
  egress {
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  tags = {
    Name = "security_group_containers_${var.project_name}_${var.environment}"
  }
}
resource "aws_security_group" "security_group_ses" {
  name   = "security_group_ses_${var.project_name}_${var.environment}"
  vpc_id = var.vpc_id
  ingress {
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "security_group_ses_${var.project_name}_${var.environment}"
  }
}
# VPC
resource "aws_vpc" "main" {
  cidr_block           = var.cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
}
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[0]
  availability_zone = "us-east-1b"
  tags = {
    Name = "private_subnet_${var.project_name}_${var.environment}"
  }
}
# VPC ENDPOINT
resource "aws_vpc_endpoint" "ses_endpoint" {
  security_group_ids  = [aws_security_group.security_group_ses]
  service_name        = "com.amazonaws.${var.aws_region}.email-smtp"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
  tags = {
    "Name" = "vpc_endpoint_ses_${var.project_name}_${var.environment}"
  }
  vpc_id = aws_vpc.main.id
}
If there is any important service missing, tell me so I can add it.
As you can see, I'm keeping all traffic open, so the solution found here doesn't work for me. When the app tries to send an email I get the following error:
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Connect to email.us-east-1.amazonaws.com:443 [email.us-east-1.amazonaws.com/52.0.170.238, email.us-east-1.amazonaws.com/54.234.96.52, email.us-east-1.amazonaws.com/34.239.37.81, email.us-east-1.amazonaws.com/18.208.125.60, email.us-east-1.amazonaws.com/52.204.223.71, email.us-east-1.amazonaws.com/18.235.72.5, email.us-east-1.amazonaws.com/18.234.10.182, email.us-east-1.amazonaws.com/44.194.249.132] failed: connect timed out
I think I'm missing some config to make the Java AWS SDK use the VPC endpoint.
Edit 01 - adding execution policies:
arn:aws:iam::aws:policy/AmazonSESFullAccess
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ses:*"
      ],
      "Resource": "*"
    }
  ]
}
arn:aws:iam::aws:policy/AmazonECS_FullAccess (too large)
arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
Edit 02 - changed to use an SMTP library:
The code used can be found here.
Everything worked fine with SMTP.
You've created a VPC endpoint for the SES SMTP API, but the error message you are getting (email.us-east-1.amazonaws.com:443) is for the AWS SES service API. You can see the two sets of APIs here. If you are using the AWS SDK to interact with SES in your Java application, then you need to change the VPC endpoint to be service_name = "com.amazonaws.${var.aws_region}.email"
Your current endpoint configuration would work if you were configuring your Java application to use SMTP (such as with the JavaMail API).
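In Terraform, that change would look roughly like this (a sketch of the endpoint resource from the question; note that the security_group_ids entry presumably also needs the .id attribute, which the original snippet omits):

```hcl
resource "aws_vpc_endpoint" "ses_endpoint" {
  security_group_ids  = [aws_security_group.security_group_ses.id]
  # The SES *service* API endpoint (email), not the SMTP endpoint (email-smtp)
  service_name        = "com.amazonaws.${var.aws_region}.email"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
  vpc_id              = aws_vpc.main.id
}
```

With private_dns_enabled = true, the SDK's default hostname email.us-east-1.amazonaws.com should then resolve to the endpoint's private IPs inside the VPC.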

Docker container service getting 'connection refused' when trying to call a REST API on another container, even on the same custom bridge network

I have two Spring Boot services and a Postgres DB; service 1 calls (HTTP) service 2, and service 2 talks to Postgres. Each service runs in its own container. The problem is that when service 1 calls service 2, it receives a connection refused, and I have no clue why, since all the services seem to be on the same network (which is not a default bridge network). It works when I ping container 2 from container 1, but when I curl service 2's endpoint, I get the error 'can't resolve the hostname'. Any help would be very welcome, because I'm tired of Docker!
Docker-compose
version: '3.7'
networks:
  valkyre-network:
    driver: bridge
services:
  db:
    image: 'postgres:latest'
    container_name: db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    expose:
      - 5432
    networks:
      - valkyre-network
  valreim-player-register:
    image: 'reiiissamuel/valreim-player-register:latest'
    build:
      context: .
    container_name: valreim-player-register
    depends_on:
      - db
    environment:
      - DATABASE_URL=jdbc:postgresql://db:5432/postgres
      - DATABASE_USERNAME=postgres
      - DATABASE_PASSWORD=postgres
      - FTP_REMOTE_WL_PATH=/files/valheim/valheim/server/BepInEx/config/aac_whitelistedDLLs.txt
      - FTP_REMOTE_HOST=http://104.248.238.7:49153
      - FTP_REMOTE_HOST_PORT:2022
      - FTP_REMOTE_HOST_USER:reiiissamuel
      - FTP_REMOTE_HOST_PASSWORD:Sbmvflrfjs23
    ports:
      - "8087:8087"
    expose:
      - 8087
    networks:
      - valkyre-network
  valkyre-bot:
    container_name: valkyre-bot
    build:
      context: .
    image: 'reiiissamuel/valkyre:latest'
    depends_on:
      - valreim-player-register
    environment:
      - REGISTER_API_URL=http://valreim-player-register:8087/register/players
    networks:
      - valkyre-network
Docker network inspect
$ docker network inspect valkyre_valkyre-network
[
    {
        "Name": "valkyre_valkyre-network",
        "Id": "862793d31abb81b777a9dd363fdf40cf1f867779a81857ea8b814d308f7b8903",
        "Created": "2022-02-15T05:51:30.9692795Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.30.0.0/16",
                    "Gateway": "172.30.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "452659d8b9c4523fe55dcce2caeef374cc2838f88d2fb4502e90e7adfa71d2bd": {
                "Name": "db",
                "EndpointID": "5c60f27249b478a3e3b388f97f729d8e6363139a179cdbfa179341adfa5420fd",
                "MacAddress": "02:42:ac:1e:00:02",
                "IPv4Address": "172.30.0.2/16",
                "IPv6Address": ""
            },
            "9bd52710f8239182fe083397a6b022bd774f8982efa5a0495849853a67eaa63d": {
                "Name": "valreim-player-register",
                "EndpointID": "e205076c01be599aab1c6537e0d2f80d2704d44882bbe190b7266f1c546d7fab",
                "MacAddress": "02:42:ac:1e:00:03",
                "IPv4Address": "172.30.0.3/16",
                "IPv6Address": ""
            },
            "ac326acd43477f72c1e288ff7fd91d8cb0f2c26472071a3daa81dd82b8ccf886": {
                "Name": "valkyre-bot",
                "EndpointID": "bd33dded4fb9d2bb05a89073c8acc3981588c9362799f5f3568f3d4831333274",
                "MacAddress": "02:42:ac:1e:00:04",
                "IPv4Address": "172.30.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "valkyre-network",
            "com.docker.compose.project": "valkyre",
            "com.docker.compose.version": "1.29.2"
        }
    }
]
Successful ping from service 1 to service 2:
D:\java-projects\valkyre>docker container exec -it valkyre-bot ping valreim-player-register -p 8087
PATTERN: 0x8087
PING valreim-player-register (172.29.0.3) 56(84) bytes of data.
64 bytes from valreim-player-register.valkyre_valkyre-network (172.29.0.3): icmp_seq=1 ttl=64 time=0.067 ms
...
--- valreim-player-register ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9226ms
Unsuccessful curl of service 2's endpoint from service 1:
# curl -v -L http://valreim-player-register:8087/register/players/steamid/111
* Trying 172.30.0.3:8087...
* connect to 172.30.0.3 port 8087 failed: Connection refused
* Failed to connect to valreim-player-register port 8087: Connection refused
* Closing connection 0
curl: (7) Failed to connect to valreim-player-register port 8087: Connection refused
Service 1's Java method where service 2 is called (guessing it won't be useful, but extra info never hurts):
public void sendRegisterUser(PlayerReg player) throws IOException, InterruptedException {
    logger.info("Cadastrando player na base..."); // "Registering player in the database..."
    String payload = player.toString();
    StringEntity entity = new StringEntity(payload, ContentType.APPLICATION_JSON);
    HttpPost request = new HttpPost(PLAYER_REGISTER_API_URI);
    request.setEntity(entity);
    HttpResponse response = httpClient.execute(request);
    if (response.getStatusLine().getStatusCode() != 201)
        throw new IOException(String.valueOf(response.getStatusLine().getStatusCode()));
}

HLF network closed for unknown reason causes Gateway to fail

This is the exception I see:
Exception in thread "main" org.hyperledger.fabric.gateway.GatewayRuntimeException: org.hyperledger.fabric.sdk.exception.ProposalException: org.hyperledger.fabric.sdk.exception.TransactionException: org.hyperledger.fabric.sdk.exception.ProposalException: getConfigBlock for channel isprintchannel failed with peer peer1.org1.isprint.com. Status FAILURE, details: Channel Channel{id: 1, name: isprintchannel} Sending proposal with transaction: 31101a32ee94cdb3ec65abaca86f0cf828d6b48cd4453257cd7270f94d192b93 to Peer{ id: 2, name: peer1.org1.isprint.com, channelName: isprintchannel, url: grpc://127.0.0.1:7051, mspid: Org1MSP} failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}
at org.hyperledger.fabric.gateway.impl.TransactionImpl.submit(TransactionImpl.java:121)
at org.hyperledger.fabric.gateway.impl.ContractImpl.submitTransaction(ContractImpl.java:50)
at com.isprint.axr.ext.hyperledger.isprint_fabric.isprint_chaincode.ChaincodeEventTester.main(ChaincodeEventTester.java:39)
Caused by: org.hyperledger.fabric.sdk.exception.ProposalException: org.hyperledger.fabric.sdk.exception.TransactionException: org.hyperledger.fabric.sdk.exception.ProposalException: getConfigBlock for channel isprintchannel failed with peer peer1.org1.isprint.com. Status FAILURE, details: Channel Channel{id: 1, name: isprintchannel} Sending proposal with transaction: 31101a32ee94cdb3ec65abaca86f0cf828d6b48cd4453257cd7270f94d192b93 to Peer{ id: 2, name: peer1.org1.isprint.com, channelName: isprintchannel, url: grpc://127.0.0.1:7051, mspid: Org1MSP} failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}
at org.hyperledger.fabric.sdk.Channel.sendProposalToPeers(Channel.java:4387)
at org.hyperledger.fabric.sdk.Channel.sendProposal(Channel.java:4358)
at org.hyperledger.fabric.sdk.Channel.sendTransactionProposal(Channel.java:3908)
at org.hyperledger.fabric.gateway.impl.TransactionImpl.sendTransactionProposal(TransactionImpl.java:161)
at org.hyperledger.fabric.gateway.impl.TransactionImpl.submit(TransactionImpl.java:94)
... 2 more
Caused by: org.hyperledger.fabric.sdk.exception.TransactionException: org.hyperledger.fabric.sdk.exception.ProposalException: getConfigBlock for channel isprintchannel failed with peer peer1.org1.isprint.com. Status FAILURE, details: Channel Channel{id: 1, name: isprintchannel} Sending proposal with transaction: 31101a32ee94cdb3ec65abaca86f0cf828d6b48cd4453257cd7270f94d192b93 to Peer{ id: 2, name: peer1.org1.isprint.com, channelName: isprintchannel, url: grpc://127.0.0.1:7051, mspid: Org1MSP} failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}
at org.hyperledger.fabric.sdk.Channel.parseConfigBlock(Channel.java:2023)
at org.hyperledger.fabric.sdk.Channel.loadCACertificates(Channel.java:1843)
at org.hyperledger.fabric.sdk.Channel.sendProposalToPeers(Channel.java:4385)
... 6 more
Caused by: org.hyperledger.fabric.sdk.exception.ProposalException: getConfigBlock for channel isprintchannel failed with peer peer1.org1.isprint.com. Status FAILURE, details: Channel Channel{id: 1, name: isprintchannel} Sending proposal with transaction: 31101a32ee94cdb3ec65abaca86f0cf828d6b48cd4453257cd7270f94d192b93 to Peer{ id: 2, name: peer1.org1.isprint.com, channelName: isprintchannel, url: grpc://127.0.0.1:7051, mspid: Org1MSP} failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}
at org.hyperledger.fabric.sdk.Channel.getConfigBlock(Channel.java:962)
at org.hyperledger.fabric.sdk.Channel.getConfigBlock(Channel.java:917)
at org.hyperledger.fabric.sdk.Channel.parseConfigBlock(Channel.java:2006)
... 8 more
This is my Gateway code (pretty much unchanged from the boilerplate):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.TimeoutException;

import org.hyperledger.fabric.gateway.Contract;
import org.hyperledger.fabric.gateway.ContractException;
import org.hyperledger.fabric.gateway.Gateway;
import org.hyperledger.fabric.gateway.Network;
import org.hyperledger.fabric.gateway.Wallet;

public class Tester {
    public static void main(String[] args) throws IOException {
        // Load an existing wallet holding identities used to access the network.
        Path wdir = Paths.get("wallet");
        Wallet wallet = Wallet.createFileSystemWallet(wdir);
        // Path to a common connection profile describing the network.
        Path cfg = Paths.get("config", "local", "connection.json");
        // Configure the gateway connection used to access the network.
        Gateway.Builder builder = Gateway.createBuilder()
                .identity(wallet, "myteareserve_app")
                .networkConfig(cfg);
        // Create a gateway connection
        try (Gateway gateway = builder.connect()) {
            Network network = gateway.getNetwork("isprintchannel");
            Contract contract = network.getContract("myteacc");
            // this next line throws the above exception
            byte[] createProductResult = contract.submitTransaction("createProduct", "tea001", "red");
            System.out.println(new String(createProductResult, StandardCharsets.UTF_8));
        } catch (ContractException | TimeoutException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}
connection.json is defined as follows:
{
  "name": "myteareserve",
  "x-type": "hlfv1",
  "x-commitTimeout": 1000,
  "version": "1.0.0",
  "client": {
    "organization": "Org1",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "1000",
          "eventHub": "1000",
          "eventReg": "1000"
        },
        "orderer": "1000"
      }
    }
  },
  "channels": {
    "myteachannel": {
      "orderers": [
        "orderer1.isprint.com",
        "orderer2.isprint.com",
        "orderer3.isprint.com"
      ],
      "peers": {
        "peer1.org1.isprint.com": {
          "endorsingPeer": true,
          "chaincodeQuery": true,
          "ledgerQuery": true,
          "eventSource": true
        },
        "peer2.org1.isprint.com": {
          "endorsingPeer": true,
          "chaincodeQuery": true,
          "ledgerQuery": true,
          "eventSource": true
        }
      }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": [
        "peer1.org1.isprint.com",
        "peer2.org1.isprint.com"
      ],
      "certificateAuthorities": [
        "ca.org1.isprint.com"
      ]
    }
  },
  "orderers": {
    "orderer1.isprint.com": {
      "url": "grpc://127.0.0.1:7050"
    },
    "orderer2.isprint.com": {
      "url": "grpc://127.0.0.1:8050"
    },
    "orderer3.isprint.com": {
      "url": "grpc://127.0.0.1:9050"
    }
  },
  "peers": {
    "peer1.org1.isprint.com": {
      "url": "grpc://127.0.0.1:7051"
    },
    "peer2.org1.isprint.com": {
      "url": "grpc://127.0.0.1:8051"
    }
  },
  "certificateAuthorities": {
    "ca.org1.isprint.com": {
      "url": "http://127.0.0.1:7054",
      "caName": "ca.org1.isprint.com"
    }
  }
}
This is what I see when I follow my peer logs (I had to set the log level to INFO, otherwise there's too much gossip DEBUG output):
iamuser#isprintdev:~/shared$ docker logs --tail 0 -f eeb970c52a36
2020-05-05 20:38:59.841 UTC [core.comm] ServerHandshake -> ERRO 080 TLS handshake failed with error tls: first record does not look like a TLS handshake server=PeerServer remoteaddress=10.0.2.2:61852
2020-05-05 20:38:59.926 UTC [core.comm] ServerHandshake -> ERRO 081 TLS handshake failed with error tls: first record does not look like a TLS handshake server=PeerServer remoteaddress=10.0.2.2:61853
For completeness here is my Docker compose yaml for peers:
version: '3.4'
volumes:
  peer1.org1.isprint.com:
  peer2.org1.isprint.com:
  couchdb1.org1.isprint.com:
  couchdb2.org1.isprint.com:
networks:
  isprint:
    external:
      name: fabric
services:
  org1couchdb1:
    image: hyperledger/fabric-couchdb
    environment:
      - COUCHDB_USER= couchdb
      - COUCHDB_PASSWORD=couchdb123
    volumes:
      - couchdb1.org1.isprint.com:/opt/couchdb/data
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == isprintdev
    ports:
      - published: 5984
        target: 5984
        mode: host
    networks:
      isprint:
        aliases:
          - couchdb1.org1.isprint.com
  org1couchdb2:
    image: hyperledger/fabric-couchdb
    environment:
      - COUCHDB_USER= couchdb
      - COUCHDB_PASSWORD=couchdb123
    volumes:
      - couchdb2.org1.isprint.com:/opt/couchdb/data
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == isprintdev
    ports:
      - published: 6984
        target: 5984
        mode: host
    networks:
      isprint:
        aliases:
          - couchdb2.org1.isprint.com
  org1peer1:
    image: hyperledger/fabric-peer:latest
    environment:
      # couchdb params
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1.org1.isprint.com:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=couchdb
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=couchdb123
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      #- CORE_LOGGING_LEVEL=INFO
      - FABRIC_LOGGING_SPEC=INFO
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
      - CORE_PEER_ID=peer1.org1.isprint.com
      - CORE_PEER_ADDRESS=peer1.org1.isprint.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer2.org1.isprint.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.isprint.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_VM_DOCKER_ATTACHSTDOUT=true
      - CORE_CHAINCODE_STARTUPTIMEOUT=1200s
      - CORE_CHAINCODE_EXECUTETIMEOUT=800s
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.isprint.com/peers/peer1.org1.isprint.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.isprint.com/peers/peer1.org1.isprint.com/tls:/etc/hyperledger/fabric/tls
      - peer1.org1.isprint.com:/var/hyperledger/production
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == isprintdev
    ports:
      - published: 7051
        target: 7051
        mode: host
      - published: 7053
        target: 7053
        mode: host
    networks:
      isprint:
        aliases:
          - peer1.org1.isprint.com
  org1peer2:
    image: hyperledger/fabric-peer:latest
    environment:
      # couchdb params
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb2.org1.isprint.com:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=couchdb
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=couchdb123
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      #- CORE_LOGGING_LEVEL=INFO
      - FABRIC_LOGGING_SPEC=INFO
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
      - CORE_PEER_ID=peer1.org1.isprint.com
      - CORE_PEER_ADDRESS=peer2.org1.isprint.com:8051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.isprint.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer2.org1.isprint.com:8051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_VM_DOCKER_ATTACHSTDOUT=true
      - CORE_CHAINCODE_STARTUPTIMEOUT=1200s
      - CORE_CHAINCODE_EXECUTETIMEOUT=800s
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.isprint.com/peers/peer2.org1.isprint.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.isprint.com/peers/peer2.org1.isprint.com/tls:/etc/hyperledger/fabric/tls
      - peer2.org1.isprint.com:/var/hyperledger/production
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == isprintdev
    ports:
      - published: 8051
        target: 7051
        mode: host
      - published: 8053
        target: 7053
        mode: host
    networks:
      isprint:
        aliases:
          - peer2.org1.isprint.com
Do let me know if I should provide more info.
connection.json seems to be missing certificates. Please refer to "first-network/connection-org1.json" under "fabric-samples" for how the certificates are used when connecting to the gateway.
One problem I can see is that you're using grpc:// instead of grpcs://. If you're using TLS, you need to access the orderers/peers using grpcs://, much like you need https:// (rather than http://) when using TLS over HTTP. Once you update it, you should get a more meaningful error message.
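Putting both suggestions together, the peer entries would look roughly like this (a sketch; the tlsCACerts path is inferred from the TLS volume mounts in the compose file and may differ in your layout, and the ssl-target-name-override option is an assumption for when the connection address doesn't match the certificate's hostname):

```json
"peers": {
  "peer1.org1.isprint.com": {
    "url": "grpcs://127.0.0.1:7051",
    "tlsCACerts": {
      "path": "crypto-config/peerOrganizations/org1.isprint.com/peers/peer1.org1.isprint.com/tls/ca.crt"
    },
    "grpcOptions": {
      "ssl-target-name-override": "peer1.org1.isprint.com"
    }
  }
}
```

The orderer entries would need the same grpcs:// and TLS CA treatment.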

index_not_found_exception: no such index found in Elasticsearch using PowerShell

I have created two files:
jdbc_sqlserver.json:
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:sqlserver://localhost:1433;databaseName=merchant2merchant;integratedSecurity=true;",
    "user": "",
    "password": "",
    "sql": "select * from planets",
    "treat_binary_as_string": true,
    "elasticsearch": {
      "cluster": "elasticsearch",
      "host": "localhost",
      "port": 9200
    },
    "index": "testing"
  }
}
jdb_sqlserver.ps1:
function Get-PSVersion {
    if (test-path variable:psversiontable) {
        $psversiontable.psversion
    } else {
        [version]"1.0.0.0"
    }
}
$powershell = Get-PSVersion
if ($powershell.Major -le 2) {
    Write-Error "Oh, so sorry, this script requires Powershell 3 (due to convertto-json)"
    exit
}
if ((Test-Path env:\JAVA_HOME) -eq $false) {
    Write-Error "Environment variable JAVA_HOME must be set to your java home"
    exit
}
curl -XDELETE "http://localhost:9200/users/"
$DIR = "C:\Program Files\elasticsearch\plugins\elasticsearch-jdbc-2.3.4.0-dist\elasticsearch-jdbc-2.3.4.0\"
$FEEDER_CLASSPATH = "$DIR\lib"
$FEEDER_LOGGER = "file://$DIR\bin\log4j2.xml"
java -cp "$FEEDER_CLASSPATH\*" "-Dlog4j.configurationFile=$FEEDER_LOGGER" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter" jdbc_sqlserver.json
and running the second one in PowerShell using the command .\jdb_sqlserver.ps1 in the "C:\servers\elasticsearch\bin\feeder" path, but I got an error like index_not_found_exception: no such index found, in PowerShell.
