I cannot connect to DynamoDB running locally using the CLI.
aws dynamodb list-tables --endpoint-url http://localhost:8000
Could not connect to the endpoint URL: "http://localhost:8000/"
This doesn't work either:
aws dynamodb list-tables --region local
Could not connect to the endpoint URL: "http://localhost:8000/"
I tried using a different port and that didn't help. I disabled all proxies too.
I am able to connect to the DynamoDB web service like this, so I know it's not an issue with the CLI itself:
aws dynamodb list-tables --endpoint-url http://dynamodb.us-west-2.amazonaws.com --region us-west-2
{
"TableNames": [
"Music"
]
}
When you run
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
command from your terminal, make sure no other service is already running on port 8000 and that the output looks like this:
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
This output means DynamoDB Local is running successfully on port 8000.
DynamoDB Local still requires credentials to be configured, but any fake values will do, for example:
AWS Access Key ID: "fakeMyKeyId"
AWS Secret Access Key: "fakeSecretAccessKey"
Then try the command below to list tables:
aws dynamodb list-tables --endpoint-url http://localhost:8000
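If you hit the same thing from Java code rather than the CLI, the idea is the same. Here is a minimal sketch (my own, not from the original answer) using the AWS SDK for Java v1, assuming the aws-java-sdk-dynamodb dependency is available:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class LocalDynamoDbListTables {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                // Any region name works for DynamoDB Local; the real service is never contacted.
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
                // The credentials only need to be non-empty; fake values are fine.
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("fakeMyKeyId", "fakeSecretAccessKey")))
                .build();

        System.out.println(client.listTables().getTableNames());
    }
}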
The error in your logs is the key here: Caused by: java.lang.UnsatisfiedLinkError: no sqlite4java-osx-x86_64 in java.library.path: [.]
This means that the sqlite4java native library cannot be located.
The link that Saranjeet provided has a few solutions. I prefer this solution for testing:
First, you need to download the zip file from the official website. Unzip the file and copy all the *.dll, *.dylib, and *.so files to a folder under your project root, say src/test/resources/libs.
Then, add the code
System.setProperty("sqlite4java.library.path", "src/test/resources/libs/");
before you initialize a local instance of AmazonDynamoDB.
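Putting the pieces together, a minimal test helper might look like this (a sketch, assuming the com.amazonaws:DynamoDBLocal test artifact and its sqlite4java dependency are on the classpath and the native libraries were copied as described above):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.local.embedded.DynamoDBEmbedded;

public class EmbeddedDynamoDbFactory {
    public static AmazonDynamoDB create() {
        // Must be set before the embedded instance is created, otherwise
        // sqlite4java throws the UnsatisfiedLinkError shown above.
        System.setProperty("sqlite4java.library.path", "src/test/resources/libs/");
        return DynamoDBEmbedded.create().amazonDynamoDB();
    }
}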
I want to fetch configuration properties from Parameter Store when the application bootstraps. For that I am using io.awspring.cloud:spring-cloud-starter-aws-parameter-store-config:2.3.3. The same configuration works on Windows, but on a Linux EC2 instance or WSL2 I receive the error message below:
AwsParameterPropertySourceNotFoundException: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
But before running it I do:
export AWS_ACCESS_KEY_ID="xxxx"
export AWS_SECRET_ACCESS_KEY="xxxxx"
I also verified with printenv, and yes, they are there. So the question is: why am I receiving ...Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY)...? Does anyone know?
EDIT:
Forgot to mention but I use
io.awspring.cloud:spring-cloud-starter-aws-parameter-store-config
with configuration:
cloud:
  aws:
    credentials:
      access-key: ${AWS_ACCESS_KEY_ID}
      secret-key: ${AWS_SECRET_ACCESS_KEY}
    region:
      static: eu-west-1
      auto: false
      use-default-aws-region-chain: true
    stack:
      auto: false
But under the hood it uses DefaultAWSCredentialsProviderChain, so it should work like the AWS SDK for Java.
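As a standalone sanity check (my own sketch, not part of the starter), the same default chain can be exercised directly; if this fails on the EC2 instance or WSL2, the exported variables are simply not visible to the JVM process (for example because the application is started from a different shell, via sudo, or by a service manager that does not inherit the environment):

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CredentialsChainCheck {
    public static void main(String[] args) {
        // Resolves credentials through the same default chain the starter relies on.
        AWSCredentials credentials = DefaultAWSCredentialsProviderChain.getInstance().getCredentials();
        System.out.println("Resolved access key id: " + credentials.getAWSAccessKeyId());
    }
}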
I am working with Docker. I have been provided a Docker image which compiles and runs fine.
The application uses the Amazon client to interact with services like S3, SNS, and SQS.
The moment the application tries to load the client, it fails with this error:
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider#17bf085: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#24117d53: Unable to load credentials from service endpoint
I have tested on the CLI that the application's local IAM configuration is correct by getting the caller identity in the console:
aws sts get-caller-identity
output
{
"Account": "xxxxxxxxxxxx",
"UserId": "XXXXXXXXXXXXXXXXXXXXX:xxxxxxxx-session-1562266255",
"Arn": "arn:aws:sts::342484191705:assumed-role/abc-abc-abc-abc/xxxxxxxx-session-1562266255"
}
So the IAM role is assumed correctly on the local machine; running unit tests and integration tests on the local machine also assumes the IAM role perfectly.
I am running the Docker image with this command:
docker run -it --rm -e "JPDA_ADDRESS=*:8000" -e "JPDA_TRANSPORT=dt_socket" -p 5033:8000 -p 6060:6033 --memory 1300M --log-driver json-file --log-opt "max-size=1g" docker-image-arn dev
The image runs, but every operation where it has to assume the IAM role and interact with AWS services fails.
What is missing?
How do I make the application within the container use the IAM role?
MacOS + Docker (Version 17.12.0-ce-mac49 (21995)) here. I am trying to Dockerize an existing Spring Boot app. Here's my Dockerfile:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
Here's my Spring Boot application.yml config file. As you can see it expects Docker to inject environment variables from an env file:
logging:
  config: 'logback.groovy'
server:
  port: 9200
  error:
    whitelabel:
      enabled: true
spring:
  cache:
    type: none
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://${DB_HOST}:3306/myapp_db?useSSL=false&nullNamePatternMatchesAll=true
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    testWhileIdle: true
    validationQuery: SELECT 1
  jpa:
    show-sql: false
    hibernate:
      ddl-auto: none
      naming:
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
    properties:
      hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: false
      hibernate.hbm2ddl.auto: validate
myapp:
  detailsMode: ${DETAILS_MODE}
  tokenExpiryDays:
    alert: 5
  jwtInfo:
    secret: ${JWT_SECRET}
    expiry: ${JWT_EXPIRY}
  topics:
    adminAlerts: admin-alerts
Here's my myapp-local.env file:
DB_HOST=localhost
DB_USERNAME=root
DB_PASSWORD=
DETAILS_MODE=Terse
JWT_SECRET=12345==
JWT_EXPIRY=86400000
It's worth noting that in the env file above I have tried localhost, 127.0.0.1, and 172.17.0.1, and all of them produce the identical errors below.
Then I build the container:
docker build -t myapp .
Success! Then I run the container:
docker run -it -p 9200:9200 --net="host" --env-file myapp-local.env --name myapp myapp
...and I watch as the container quickly dies with MySQL connection-related exceptions (can't connect to the MySQL machine running locally). I can confirm that the Spring Boot app has no problem connecting to MySQL when it runs as an executable ("fat") jar outside of Docker, and I can confirm that the local MySQL instance is up and running and is perfectly healthy.
Unable to connect to database. }com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:590)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:57)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1606)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
When I turn TRACE-level logging on, I see it is trying to connect to:
url=jdbc:mysql://localhost:3306/myapp?useSSL=false&nullNamePatternMatchesAll=true
So it does look like Docker is properly injecting the env file's vars into the Spring YAML-based config. So this doesn't feel like a config issue, but rather an issue with the container reaching the MySQL port on the Docker host.
Can anybody see where I'm going awry?
Accessing the host machine from within a container is not recommended. Usually it can be solved by wrapping the service you need in a container and accessing it via the container name.
There is no real solution to this, only workarounds; you can use one of them:
On Mac you can access the host services using the docker.for.mac.host.internal DNS name.
You need to set the environment variable like this:
DB_HOST=docker.for.mac.host.internal
And refer to the DB_HOST from your connection string.
For more details see the documentation:
From 17.12 onwards our recommendation is to connect to the special Mac-only DNS name docker.for.mac.host.internal, which resolves to the internal IP address used by the host.
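As a quick sanity check (my own sketch, not part of the quoted docs), something like the following run inside the container should be able to reach MySQL on the host; the database name and the root user with an empty password are taken from the env file above and are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;

public class HostMysqlCheck {
    public static void main(String[] args) throws Exception {
        // Requires MySQL Connector/J on the classpath.
        String url = "jdbc:mysql://docker.for.mac.host.internal:3306/myapp_db?useSSL=false";
        try (Connection connection = DriverManager.getConnection(url, "root", "")) {
            System.out.println("Connected to host MySQL: " + connection.isValid(2));
        }
    }
}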
Note: Having --net="host" doesn't let you reach the host machine via localhost. localhost always points to the local machine, but when it is used from within a container it points to the container itself.
So basically the Docker app is not in the same network as the host you're running it from, and that's why you can't access MySQL by pointing to localhost (it is another network from Docker's point of view).
What you could try is running Docker with the --net="host" option; then it will share the network with its host.
You can find a better explanation of this in the topic "From inside of a Docker container, how do I connect to the localhost of the machine?"
Here is what I have successfully done so far with SCDF Local Server.
I have successfully deployed the SCDF server locally and used the Kafka and Zookeeper config parameters with it, i.e.:
mymac$ java -jar spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar
--spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=localhost:9092
--spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
I was able to create my streams:
ingest = producer-app > :broker1
filter = :broker1 > filter-app > :broker2
Now I need help to do the exact same thing on PCF Dev.
I have my PCF Dev running.
I have to deploy the SCDF Cloud Foundry jar with my local Kafka and Zookeeper parameters to PCF Dev, but when I do the following steps it gives me an error:
1.1) cf push -f manifest-scdf.yml --no-start -p /XXX/XXX/XXX/spring-cloud-dataflow-server-cloudfoundry-1.3.0.BUILD-SNAPSHOT.jar -k 1500M
This runs fine, no problem. But 1.2:
1.2) cf start dataflow-server --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=host.pcfdev.io:9092 --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=host.pcfdev.io:2181
gives me this error:
Incorrect Usage: unknown flag `spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers'
Below is my manifest-scdf.yml file:
---
instances: 1
memory: 2048M
applications:
- name: dataflow-server
  host: dataflow-server
  services:
  - redis
  - rabbit
  env:
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.local.pcfdev.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: pcfdev-org
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: pcfdev-space
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: local.pcfdev.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: admin
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: admin
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION: true
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: rabbit
    MAVEN_REMOTE_REPOSITORIES_REPO1_URL: https://repo.spring.io/libs-snapshot
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DISK: 512
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack
    spring.cloud.deployer.cloudfoundry.stream.memory: 400
    spring.cloud.dataflow.features.tasks-enabled: true
    spring.cloud.dataflow.features.streams-enabled: true
Please help me. Thank you.
There are a few options to supply Kafka credentials to stream-apps in PCF.
1. Kafka CUPs
This option allows you to create CUPs for an external Kafka-service. While deploying the stream, you can then supply the coordinates to each application either individually as described in the docs or you can supply them as global properties for all the stream-apps deployed by the SCDF-server.
2. Inline properties
Instead of extracting from CUPs, you can also directly supply the HOST/PORT while deploying the stream. Again, this can be applied globally, too.
stream deploy myTest --properties "app.*.spring.cloud.stream.kafka.binder.brokers=<HOST>:9092,app.*.spring.cloud.stream.kafka.binder.zkNodes=<HOST>:2181"
Note: The HOST must be reachable by the stream-apps; otherwise, they will continue to connect to localhost and potentially fail, since the apps are running inside a VM.
The error you're seeing is coming from the CF CLI: it's interpreting those (I'm assuming environment) variables you're providing as flags to the cf start command and failing.
You could either provide them in your manifest.yml or set their values manually using the CLI's cf set-env command by doing something like this:
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers host.pcfdev.io:9092
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes host.pcfdev.io:2181
After you've set them they should be picked up when you run cf start dataflow-server.
Relevant CLI docs:
http://cli.cloudfoundry.org/en-US/cf/set-env.html
I have set up an H2 cluster but cannot connect via the console or using a datasource; all I get is this:
IO Exception: "java.io.IOException: The filename, directory name, or volume label syntax is incorrect"; "E:/baseDirDefinedInServerConnection/myDB,localhost:1112/myDB" [90031-176] 90031/90031 (Help)
I have configured 2 servers thus:
java -cp h2-1.3.167.jar org.h2.tools.Server -tcp -tcpPort 1111 -tcpAllowOthers -baseDir E:\myBaseDir
at tcp://myIp:1111 (others can connect)
java -cp h2-1.3.167.jar org.h2.tools.Server -tcp -tcpPort 1112 -tcpAllowOthers -baseDir E:\myBaseDir\server
at tcp://myIp:1112 (others can connect)
So you see I have one database in a directory (this has been created) and another database in another directory. Both are up and running.
I have run the cluster tool thus:
java -cp h2-1.3.167.jar org.h2.tools.CreateCluster -urlSource jdbc:h2:tcp://localhost:1111/myDB -urlTarget jdbc:h2:tcp://localhost:1112/myDB -user username -password pass -serverList localhost:1111,localhost:1112
And it all looks good. If I try to connect through the console without the cluster list I get this message, which proves we are in clustered mode, which is good:
Clustering error - database currently runs in cluster mode, server list: 'localhost:1111,localhost:1112'" [
I have checked the permissions on the directories and all have read/write access.
Yes, this is a Windows machine.
Using H2 version:
Bundle-Vendor: H2 Group
Bundle-Version: 1.3.167
Any ideas what I might have done wrong?
Thanks for reading.
Guess you already found out that one should connect like this
jdbc:h2:tcp://localhost:1111,localhost:1112/myDB
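From JDBC that would look roughly like this (a minimal sketch; the user name and password are the ones passed to CreateCluster above, and the H2 driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;

public class H2ClusterConnect {
    public static void main(String[] args) throws Exception {
        // Both cluster nodes must be listed in the URL, in the same order as
        // the -serverList given to CreateCluster.
        String url = "jdbc:h2:tcp://localhost:1111,localhost:1112/myDB";
        try (Connection connection = DriverManager.getConnection(url, "username", "pass")) {
            System.out.println("Connected to cluster: " + connection.isValid(2));
        }
    }
}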