I want to connect to Google Cloud Bigtable, which is running in Docker:
docker run --rm -it -p 8086:8086 -v ~/.config/:/root/.config \
bigtruedata/gcloud-bigtable-emulator
It starts without any problems:
[bigtable] Cloud Bigtable emulator running on 127.0.0.1:8086
~/.config contains my application default credentials, which I configured this way:
gcloud auth application-default login
I used the Java code from the official HelloWorld sample.
Also, I changed the connection configuration like this:
Configuration conf = BigtableConfiguration.configure("projectId", "instanceId");
conf.set(BigtableOptionsFactory.BIGTABLE_HOST_KEY, "127.0.0.1");
conf.set(BigtableOptionsFactory.BIGTABLE_PORT_KEY, "8086");
conf.set(BigtableOptionsFactory.BIGTABLE_USE_PLAINTEXT_NEGOTIATION, "true");
try (Connection connection = BigtableConfiguration.connect(conf)) {
...
And I set the BIGTABLE_EMULATOR_HOST=127.0.0.1:8086 environment variable in the run configuration for my app in IntelliJ IDEA.
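With that variable set, the explicit host/port/plaintext keys should in principle be redundant, since the bigtable-hbase client checks BIGTABLE_EMULATOR_HOST itself; a minimal sketch of that variant (same project/instance placeholders as above):

// Assumes BIGTABLE_EMULATOR_HOST=127.0.0.1:8086 is exported for the JVM;
// the client should then target the emulator with plaintext negotiation on its own.
Configuration conf = BigtableConfiguration.configure("projectId", "instanceId");
try (Connection connection = BigtableConfiguration.connect(conf)) {
    // same HelloWorld admin/table calls as in the sample
}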
But when I run my Java app, it gets stuck on admin.createTable(descriptor); and shows this log:
...
16:42:44.697 [grpc-default-executor-0] DEBUG
com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.util.Recycler
- -Dio.netty.recycler.ratio: 8
After some time it logs some BigtableClientMetrics messages and then throws an exception:
java.net.NoRouteToHostException: No route to host
I get the same problem when trying to run the Bigtable emulator from my own Dockerfile.
When I run Google Cloud Bigtable with this command:
gcloud beta emulators bigtable start
my app completes successfully.
So, how can I solve this problem?
UPDATE:
Now I have this exception:
io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
and before it, another exception is thrown:
java.io.IOException: Connection reset by peer
Env: macOS Catalina, iOS 12.4.1 (same for 13.3.1), Xcode 11.4, Appium 17.0.0
Issue:
When trying to run method 'AltUnityDriver create' - Failed to execute command 'mobiledevice tunnel -u (device udid) 13000 13000'. Cause 'java.io.IOException: error=2, No such file or directory'.
Script fails when trying to run the command new Socket("127.0.0.1", 13000);
Method threw 'java.net.ConnectException' exception.
Connection refused (Connection refused)
The test is created for an app with AltUnityDriver inside, and it needs to create a connection.
Make sure your application has AltUnityServer inside the app, listening on port 13000.
Then you have to forward the phone's port 13000 to localhost.
Use the iproxy command for this. The syntax is as follows:
iproxy LOCAL_PORT:DEVICE_PORT
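For this setup that would be (assuming iproxy from libimobiledevice is installed and the device is connected):

iproxy 13000:13000

After that, new Socket("127.0.0.1", 13000) on the host machine is forwarded to AltUnityServer listening on the device's port 13000.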
I am new to Docker and have a simple DW (Dropwizard) application that connects to Elasticsearch, which is already running in Docker via the docker-compose.yml with the following content.
docker-compose.yml for Elasticsearch:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ports:
      - 8200:9200
      - 8300:9300
    volumes:
      # mount path assumed; the snippet declares the named volume below
      # but the service-level mount line was not shown
      - elasticsearch-data:/usr/share/elasticsearch/data

volumes:
  elasticsearch-data:
    driver: local
Note: I am exposing 8200 and 8300 as the ES ports on my host (local Mac system).
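A quick way to verify that mapping from the host (assuming Elasticsearch's standard HTTP endpoint) is:

curl http://localhost:8200

which should return the cluster info JSON if the container is reachable.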
Now everything works fine when I simply run my DW application, which connects to ES on localhost:8200, but now I am trying to dockerize my DW application and am facing a few issues.
Below is my Dockerfile for the DW application:
# Base image is an assumption; the FROM line was missing from the snippet as posted.
FROM openjdk:8-jre
COPY target/my.jar my.jar
COPY config.yml config.yml
ENTRYPOINT ["java", "-jar", "my.jar", "server", "config.yml"]
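For reference, a build-and-run sequence for this image could look like the following (the tag and the 8080 port mapping are assumptions, 8080 being Dropwizard's default application port):

docker build -t my-dw-app .
docker run -p 8080:8080 my-dw-app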
When I run my DW Docker image, it stops immediately; docker logs <my-container-id> shows the exception below:
java.io.IOException: elasticsearch: Name does not resolve
org.elasticsearch.client.IndicesClient.exists(IndicesClient.java:827)
Caused by: java.net.UnknownHostException: elasticsearch: Name does not resolve
Things I have tried
The error message clearly shows that my DW app's container is not able to resolve elasticsearch, which I verified is running fine.
I also checked the network of the Elasticsearch container: it has the network alias elasticsearch, as shown below, and the network is docker-files_default.
"Aliases": [
"elasticsearch",
"de78c684ae60"
],
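For reference, that alias information comes from inspecting the Compose network, and the DW container can be checked the same way (the container id is a placeholder):

docker network inspect docker-files_default
docker inspect --format '{{json .NetworkSettings.Networks}}' <dw-container-id>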
I checked the network of my DW app's container: it uses the default bridge network and doesn't have any network alias.
Now, how can I make my app's container and the Elasticsearch container use the same network so that they can connect to each other? I guess this would solve the issue.
There are two ways to solve this. The first is to check which network Docker Compose created for your Elasticsearch setup (docker network ls) and then run your DW app with:
docker run --network=<name of network> ...
The second way is to create a network (docker network create elastic) and use it as an external network in your Docker Compose file as well as in the docker run command for the DW app.
The Docker Compose file could then look like this:
...
services:
  elasticsearch:
    networks:
      elastic:
...
networks:
  elastic:
    external: true
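A concrete sequence for the second way could look like this (the image name and port mapping are placeholders). Note that inside the shared network the app must address Elasticsearch as elasticsearch:9200 (the container port), not localhost:8200:

docker network create elastic
docker-compose up -d
docker run --network=elastic -p 8080:8080 my-dw-app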
I am working with Docker.
I was provided a Docker image which compiles and runs fine.
The application uses the Amazon client to interact with services like S3, SNS, and SQS.
The moment the application tries to load the client, it fails with this error:
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider@17bf085: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@24117d53: Unable to load credentials from service endpoint
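That message is the v1 SDK's default provider chain being exhausted; a minimal sketch of the same lookup the SDK performs (assuming the com.amazonaws SDK seen in the stack trace):

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CredentialsCheck {
    public static void main(String[] args) {
        // Tries env vars, Java system properties, the shared profile file,
        // and the container/instance metadata endpoint, in that order.
        AWSCredentials creds = new DefaultAWSCredentialsProviderChain().getCredentials();
        System.out.println("Loaded access key id: " + creds.getAWSAccessKeyId());
    }
}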
I have verified on the CLI that the application's local IAM configuration is correct.
Get caller identity on the console:
aws sts get-caller-identity
Output:
{
"Account": "xxxxxxxxxxxx",
"UserId": "XXXXXXXXXXXXXXXXXXXXX:xxxxxxxx-session-1562266255",
"Arn": "arn:aws:sts::342484191705:assumed-role/abc-abc-abc-abc/xxxxxxxx-session-1562266255"
}
So the IAM role is assumed correctly on the local machine;
running unit tests and integration tests on the local machine also assumes the IAM role perfectly.
I am running the Docker image with this command:
docker run -it --rm -e "JPDA_ADDRESS=*:8000" -e "JPDA_TRANSPORT=dt_socket" -p 5033:8000 -p 6060:6033 --memory 1300M --log-driver json-file --log-opt "max-size=1g" docker-image-arn dev
The image runs, but every operation where it has to assume the IAM role and interact with AWS services fails.
What is missing?
How do I make the application within the container use the IAM role?
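For context, a common way to satisfy that provider chain inside a container is to forward the host's credentials into it, either as environment variables or by mounting the credentials directory (the mount path and profile name below are assumptions, not from the original setup):

docker run -it --rm \
  -v "$HOME/.aws:/root/.aws:ro" \
  -e AWS_PROFILE=my-profile \
  docker-image-arn dev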
I have a client application from which I need to remotely execute queries on Spark using spark-sql. I am able to do it from spark-shell, but how can I execute them remotely from my Scala-based client application?
I have tried the following code:
val conf = new SparkConf()
  .set("spark.shuffle.blockTransferService", "nio")
  .setMaster("spark://master:port")
  .setAppName("Query Fire")
  .set("spark.hadoop.validateOutputSpecs", "true")
  .set("spark.local.dir", "/tmp/spark-temp")
  .set("spark.driver.memory", "4G")
  .set("spark.executor.memory", "4G")

val spark = SparkContext.getOrCreate(conf)
I tried the default port 7077, but it is not open. I have a Cloudera-based Spark installation, which, it seems, is not running Spark standalone.
The error I get when I point the code at the YARN ResourceManager port 8042 is the following:
16/09/16 20:14:36 WARN TransportChannelHandler: Exception in
connection from /192.168.0.171:8042 java.io.IOException: Connection
reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
Is there any way to get around this and call spark-sql remotely via a JDBC client, the way we can for Hive queries?
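For what it's worth, the JDBC route the question alludes to does exist: Spark ships a Thrift JDBC server (started with sbin/start-thriftserver.sh, listening on port 10000 by default) that accepts HiveServer2-style connections. A sketch, assuming that server is running on the cluster and hive-jdbc is on the classpath (host, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkSqlJdbcClient {
    public static void main(String[] args) throws Exception {
        // The Spark Thrift server speaks the HiveServer2 protocol,
        // so the plain Hive JDBC driver can talk to it.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://master:10000/default", "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}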
I am facing a problem connecting from my machine to MongoDB on a public-IP server, where MongoDB is installed as a Windows service with --auth.
When I remove authentication with the command below, I am able to access the database collections:
mongod --install --noauth --dbpath "c:\mongodb\data" --logpath
"c:\mongodb\logs\log.txt" --bind_ip "0.0.0.0"
And when I use --auth in place of --noauth, I get the following error:
errmsg : "auth failed" code :18 login failed
And I am giving the correct login details to connect to MongoDB.
What is causing this, and how can I fix it?
What command are you using to connect to your database?
If you use mongo like mongo -u login -p password --host xxx.yyy.zzz.aaa, try adding --authenticationDatabase admin (note: the mongo shell's host option is --host; -h prints the help text).
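For example, with the placeholders above, the full connect command would be:

mongo --host xxx.yyy.zzz.aaa -u login -p password --authenticationDatabase admin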