How to run a Docker image assuming an IAM role on a local machine? - java

I am working with Docker.
I have been provided a Docker image which compiles and runs fine.
The application uses the Amazon client to interact with services like S3, SNS, and SQS.
The moment the application tries to load the client, it fails with this error:
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider#17bf085: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#24117d53: Unable to load credentials from service endpoint
I have verified on the CLI that the application's local IAM configuration is correct.
Calling get-caller-identity on the console:
aws sts get-caller-identity
Output:
{
    "Account": "xxxxxxxxxxxx",
    "UserId": "XXXXXXXXXXXXXXXXXXXXX:xxxxxxxx-session-1562266255",
    "Arn": "arn:aws:sts::342484191705:assumed-role/abc-abc-abc-abc/xxxxxxxx-session-1562266255"
}
So the IAM role is assumed correctly on the local machine.
Running unit tests and integration tests on the local machine also assumes the IAM role perfectly.
I am running the Docker image with this command:
docker run -it --rm -e "JPDA_ADDRESS=*:8000" -e "JPDA_TRANSPORT=dt_socket" -p 5033:8000 -p 6060:6033 --memory 1300M --log-driver json-file --log-opt "max-size=1g" docker-image-arn dev
The image runs, but every operation where it has to assume the IAM role and interact with AWS services fails.
What is missing?
How can the application inside the container use the IAM role?
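No answer is included here, but a common approach (an illustrative sketch, not a verified fix for this particular setup) is to hand the host's credentials to the container, either by mounting ~/.aws or by forwarding the already-assumed role's temporary keys as environment variables, so the SDK's default provider chain can find them:

```shell
# Option 1: mount the host's AWS config/credentials into the container
# (assumes the app runs as root inside the image, hence /root/.aws)
docker run -it --rm \
  -v "$HOME/.aws:/root/.aws:ro" \
  -e AWS_PROFILE=default \
  docker-image-arn dev

# Option 2: forward the assumed role's temporary keys as env vars so the
# SDK's EnvironmentVariableCredentialsProvider picks them up (the three
# variables must already be exported in the host shell)
docker run -it --rm \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_SESSION_TOKEN \
  docker-image-arn dev
```

With temporary STS credentials the session token is mandatory; passing only the key id and secret will produce signature errors once the SDK signs a request.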

Docker image using Jib plugin: Can't hit REST endpoint locally when container running?

I have a very simple spring boot web app that I created using spring initializer.
I have added the following controller:
@Controller
@RequestMapping("hello")
public class TestController {
    @GetMapping(value = "", produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<String> sayHello() {
        return ResponseEntity.ok("Hello!");
    }
}
When I run it locally in IntelliJ, I can hit the following endpoint successfully and get a response:
http://localhost:8080/hello
I have pushed the image created using jib to my docker hub registry and can pull the image successfully.
However, when I run the container via Docker as follows, I get "This site can't be reached" at the same URL.
docker run --name localJibExample123 -d -p 8080:80 bc2cbf3b85d1
I am able to run other containers fine and can hit their endpoints, so what could the issue be here?
Running docker ps returns this for my running container:
"java -cp /app/resou…" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp localJibExample222
So it seems that my app should be accessible on:
http://localhost:8080/hello
Spring Boot, by default, runs on port 8080. The port mapping in the docker run ... command maps the host's port 8080 to the container's port 80 (... -p 8080:80 ..., see the relevant docker run documentation on docs.docker.com). Since nothing is listening on the container's port 80, we get no response when accessing http://localhost:8080.
The fix is straightforward: replace docker run ... -p 8080:80 ... with docker run ... -p 8080:8080 .... This maps to the container's port 8080, where the Spring Boot application is listening. Accessing http://localhost:8080 will now return a response.
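Applied to the command from the question (same container name and image id as above), the corrected invocation would be:

```shell
# map host port 8080 to container port 8080, where Spring Boot listens
docker run --name localJibExample123 -d -p 8080:8080 bc2cbf3b85d1

# the endpoint should now respond
curl http://localhost:8080/hello
```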

Azure CI/CD Tomcat error: 2021-04-22T04:58:28.6149523Z ##[error]Error: /usr/bin/curl failed with return code: 22

I am trying to deploy my web application on a Tomcat server using an Azure CI/CD pipeline. I have built the code and generated a WAR file in Azure Artifacts. I am using a self-hosted agent for the CD pipeline and have installed Tomcat on it. For the CD pipeline I added a "deploy to Tomcat server" task; however, I am getting the error below.
As part of troubleshooting, I checked curl availability, added curl to the agent's capabilities, and added the manager-gui role in the tomcat-users.xml file.
Can someone please provide a solution?
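No accepted answer appears here, but curl exits with code 22 when --fail is set and the server responds with an HTTP error (4xx/5xx), commonly a 401/403 from the Tomcat manager. One thing worth checking (a hedged suggestion, not a confirmed fix; user, password, and paths below are made up for illustration): scripted deployments go through the /manager/text API, which requires the manager-script role rather than manager-gui:

```shell
# tomcat-users.xml needs a user with the manager-script role for
# scripted deploys, e.g.:
#   <role rolename="manager-script"/>
#   <user username="deployer" password="secret" roles="manager-script"/>

# a manual deploy equivalent to what the pipeline task does; --fail is
# what makes curl exit with code 22 on any HTTP error response
curl --fail -u deployer:secret \
  -T target/app.war \
  "http://localhost:8080/manager/text/deploy?path=/app&update=true"
```

Running the equivalent curl by hand on the agent, without --fail, will show the actual HTTP status and the manager's error message.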

Cannot connect to local DynamoDB

I cannot connect to a DynamoDB instance that is running locally, using the CLI.
aws dynamodb list-tables --endpoint-url http://localhost:8000
Could not connect to the endpoint URL: "http://localhost:8000/"
This doesn't work either:
aws dynamodb list-tables --region local
Could not connect to the endpoint URL: "http://localhost:8000/"
I tried using a different port and that didn't help. I disabled all proxies too.
I am able to connect to the real DynamoDB service like this, so I know it's not a general DynamoDB issue:
aws dynamodb list-tables --endpoint-url http://dynamodb.us-west-2.amazonaws.com --region us-west-2
{
"TableNames": [
"Music"
]
}
😭😭😭
When you run
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
from your terminal, make sure no other service is already running on port 8000 and that the output looks like this:
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
This means the service is running successfully on port 8000.
DynamoDB Local requires some (fake) credentials to work:
AWS Access Key ID: "fakeMyKeyId"
AWS Secret Access Key: "fakeSecretAccessKey"
Then try the command below to list the tables:
aws dynamodb list-tables --endpoint-url http://localhost:8000
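The fake credentials above can be wired in through the CLI's own config (a sketch; DynamoDB Local accepts any non-empty values, and any region will do):

```shell
# DynamoDB Local ignores credentials, but the AWS CLI refuses to sign a
# request without them, so set dummy values once:
aws configure set aws_access_key_id fakeMyKeyId
aws configure set aws_secret_access_key fakeSecretAccessKey
aws configure set region us-west-2

# now the local endpoint should answer
aws dynamodb list-tables --endpoint-url http://localhost:8000
```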
The key part of your logs is this error: Caused by: java.lang.UnsatisfiedLinkError: no sqlite4java-osx-x86_64 in java.library.path: [.]
This means that the specific dependency cannot be located.
The link that Saranjeet provided has a few solutions. I prefer this one for testing:
First, download the zip file from the official website. Unzip it and copy all the *.dll, *.dylib, and *.so files to a folder under your project root, say src/test/resources/libs.
Then, add the code
System.setProperty("sqlite4java.library.path", "src/test/resources/libs/");
before you initialize a local instance of AmazonDynamoDB.

Spring boot running as container issues

I am running a Spring Boot app with a REST API inside a Docker container. Everything works fine when I run it from Eclipse or as a jar, but when I dockerize it and run it, I face the issues below.
First issue:
I am not able to access a REST endpoint within the container.
http://localhost:9000/ works, but
http://localhost:9000/api/v1/test is not recognized.
However, I can call it from Swagger.
Second issue: org.postgresql.util.PSQLException: ERROR: permission denied for schema < schema_name >
However, I have granted all permissions on the schema:
GRANT ALL ON SCHEMA < schemaname> TO < username>;
GRANT USAGE ON SCHEMA < schemaname> TO < username>;
These issues occur only when I run from a container.
Command used for Docker:
docker run -p 9000:9000 < image name >
I am using Spring Boot 2.1.9.
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD run-app.sh run-app.sh
RUN chmod +x run-app.sh
EXPOSE 9000
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} dummy.jar
ENTRYPOINT ./run-app.sh
run-app.sh
java $JAVA_OPTS -jar /dummy.jar
My PostgreSQL DB is running in AWS.
My Spring Boot app starts fine; the exception occurs only when my API runs a query.
Can you share your Dockerfile content if possible? I would like to see what commands you have given for the endpoints. And for the PostgreSQL you are using: is it embedded in Docker with the Spring Boot app, or on another server?
I have already set up Spring Boot with nginx and PostgreSQL dockerized, in different servers/containers, which works pretty smoothly in production.
For the first issue I will need more details.
For the second issue, the problem may be that the two containers are not on the same network, so the service container can't communicate with the psql container. You can create a docker-compose.yml file to run them on the same network, or create a network and join the containers to it.
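The "create a network and join the containers" suggestion can be sketched like this (container and network names are made up for illustration; note that if the database actually lives in AWS rather than in a container, as the question states, container networking is not the culprit):

```shell
# create a user-defined bridge network
docker network create app-net

# run postgres and the app on that network; containers can then reach
# each other by container name (e.g. jdbc:postgresql://pg:5432/mydb)
docker run -d --name pg --network app-net \
  -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app --network app-net \
  -p 9000:9000 <image name>
```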

Connect from a Java app to Google Cloud Bigtable running on Docker

I want to connect to a Google Cloud Bigtable emulator running on Docker:
docker run --rm -it -p 8086:8086 -v ~/.config/:/root/.config \
bigtruedata/gcloud-bigtable-emulator
It starts without any problems:
[bigtable] Cloud Bigtable emulator running on 127.0.0.1:8086
~/.config contains my default credentials, which I configured like this:
gcloud auth application-default login
I used the Java code from the official HelloWorld sample.
I also changed the connection configuration like this:
Configuration conf = BigtableConfiguration.configure("projectId", "instanceId");
conf.set(BigtableOptionsFactory.BIGTABLE_HOST_KEY, "127.0.0.1");
conf.set(BigtableOptionsFactory.BIGTABLE_PORT_KEY, "8086");
conf.set(BigtableOptionsFactory.BIGTABLE_USE_PLAINTEXT_NEGOTIATION, "true");
try (Connection connection = BigtableConfiguration.connect(conf)) {
...
I also set the BIGTABLE_EMULATOR_HOST=127.0.0.1:8086 environment variable in my app's run configuration in IntelliJ IDEA.
But when I run my Java app, it gets stuck on admin.createTable(descriptor); and shows this log:
...
16:42:44.697 [grpc-default-executor-0] DEBUG
com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.util.Recycler
- -Dio.netty.recycler.ratio: 8
After some time it logs some BigtableClientMetrics output and then throws an exception:
java.net.NoRouteToHostException: No route to host
I get the same problem when running the Bigtable emulator from my own Dockerfile.
When I instead run the emulator outside Docker with this command:
gcloud beta emulators bigtable start
my app completes successfully.
So, how to solve this problem?
UPDATE:
Now I have this exception:
io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
and before it, another exception is thrown:
java.io.IOException: Connection reset by peer
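No answer is included for this question, but one likely culprit (a guess based on the emulator log, not a confirmed fix): inside the container the emulator reports it is listening on 127.0.0.1:8086, i.e. the container's own loopback interface, so Docker's published port 8086 has nothing to forward to from outside the container. If the emulator can be told to bind to all interfaces, the published port should start working; with the gcloud emulator this looks like:

```shell
# bind the emulator to all interfaces so Docker's -p 8086:8086 mapping
# can actually reach it from the host
gcloud beta emulators bigtable start --host-port=0.0.0.0:8086
```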
