Amazon MySQL RDS refusing to accept credentials after upgrade - Java

I have an old RDS database that was on 5.6_MySql_1.23.0, used by a Java application with this dependency:
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.47</version>
    <scope>compile</scope>
</dependency>
I've been refactoring old code, and part of that is upgrading from Java 8 to 11. According to this post, Java 11 no longer enables TLSv1.0 and TLSv1.1.
So I upgraded the cluster instance to 5.6_MySql_1.23.1, which does support TLSv1.2, and I upgraded the MySQL connector to:
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.49</version>
    <scope>compile</scope>
</dependency>
Running SHOW GLOBAL VARIABLES LIKE 'tls_version'; confirms the cluster has TLSv1.2 enabled:
TLSv1,TLSv1.1,TLSv1.2
However, since the upgrade my username and password are constantly getting rejected:
Caused by: java.sql.SQLSyntaxErrorException: Access denied for user 'user'@'%' to database 'dba'
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:828)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:448)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:241)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:251)
at medispan.foundation.dataaccess.providers.sql.SQLProvider.createProviderConnection(SQLProvider.java:227)
at medispan.foundation.dataaccess.providers.sql.SQLProvider.createConnection(SQLProvider.java:205)
at medispan.foundation.dataaccess.providers.sql.SQLProvider.openConnection(SQLProvider.java:841)
at medispan.foundation.dataaccess.providers.sql.SQLProvider.executeForResults(SQLProvider.java:1489)
at medispan.foundation.dataaccess.providers.sql.SQLDataAccessProvider.innerExecuteForCollection(SQLDataAccessProvider.java:515)
... 120 common frames omitted
Here's my JDBC URL, which worked in my Java 8 service:
jdbc:mysql://test-aurora-sdt-c1-0.cpdk4xuooxvm.us-east-1.rds.amazonaws.com:3306?user=[user]&password=[password]&verifyServerCertificate=false&useSSL=true&sslca=rds-combined-ca-bundle.pem&serverTimezone=PST
Here's my updated URL, after fixing all the errors introduced by the connector changes between the two versions:
jdbc:mysql://test-aurora-sdt-c1-0.cpdk4xuooxvm.us-east-1.rds.amazonaws.com:3306/dba?user=[user]&password=[password]&verifyServerCertificate=false&useSSL=true&enabledTLSProtocols=TLSv1.2&sslca=rds-combined-ca-bundle.pem&serverTimezone=America/Los_Angeles
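For what it's worth, the same options can be passed via a java.util.Properties object instead of the query string, which makes a mistyped key or an un-encoded value easier to spot. A minimal sketch — the class and helper names are hypothetical, and the user/password values are placeholders:

```java
import java.util.Properties;

public class JdbcProps {
    // Build connection properties equivalent to the query-string parameters,
    // so each key/value pair can be inspected before connecting.
    public static Properties rdsProperties(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("useSSL", "true");
        props.setProperty("verifyServerCertificate", "false");
        props.setProperty("enabledTLSProtocols", "TLSv1.2");
        props.setProperty("serverTimezone", "America/Los_Angeles");
        return props;
    }

    public static void main(String[] args) {
        Properties props = rdsProperties("user", "secret");
        // DriverManager.getConnection("jdbc:mysql://<host>:3306/dba", props)
        // would then use these properties; printed here for inspection.
        System.out.println(props.getProperty("enabledTLSProtocols"));
    }
}
```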
Did I miss a step in the database version migration to enable TLS? Do I have to do something with my cert bundle that I'm just not aware of, coming from a DynamoDB background?

Related

DB driver is not found when running in docker swarm

I have a Spring Boot application built with Maven. I can run my app locally without problems, but when I deploy an image to the local Docker swarm with docker stack deploy --compose-file docker-compose.yml compose, I get the following error: Caused by: java.lang.IllegalStateException: Cannot load driver class: org.postgresql.Driver
I've checked env.getPropertySources():
compose_service#debian| spring.datasource.driver-class-name=org.postgresql.Driver
compose_service#debian| spring.datasource.url=jdbc:postgresql://localhost:5432/service
compose_service#debian| spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
These props work fine when running locally.
I've also checked that the built jar contains the Postgres lib; the Maven dependency in my project:
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.3.1</version>
</dependency>
I recently ran the app with docker-compose up and it also worked, so it seems to be a problem specific to running in swarm. Any ideas?
I shouldn't add secrets to Docker swarm with echo,
which appends a trailing \n to each string (that's why my driver class name wasn't valid).
Instead, I should use printf:
printf "org.postgresql.Driver" | docker secret create db-driver -
Hope it saves someone some time.
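The difference is easy to see by counting bytes: echo appends a trailing newline, while printf writes exactly the bytes it is given:

```shell
# echo adds a trailing newline: 21 characters + '\n' = 22 bytes
echo "org.postgresql.Driver" | wc -c

# printf emits exactly the given bytes: 21
printf "org.postgresql.Driver" | wc -c
```

That extra byte ends up inside the Docker secret, so the driver class name read from it no longer matches org.postgresql.Driver.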

FINE logging org.postgresql.jdbc.PgConnection setAutoCommit = false

I was facing an issue with the Postgres driver, which was 9.1-901.jdbc4, while my database server was Postgres 10. I was getting errors on bulk updates, so I changed the driver to version 42.2.5. Here is the dependency:
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.5</version>
</dependency>
It's working fine now, but I am getting endless logs (I also have schedulers in my code):
2019-06-04 06:48:33,358 FINE [org.postgresql.jdbc.PgConnection] (DefaultQuartzScheduler_Worker-9) setAutoCommit = false
2019-06-04 06:48:33,359 FINE [org.postgresql.jdbc.PgConnection] (DefaultQuartzScheduler_Worker-9) setAutoCommit = true
How do I disable these logs? I am using WildFly 10 as the application server.
It looks like you have debug logging turned on. The simplest way to change it is with the web console or CLI. An example CLI command would look like:
/subsystem=logging/logger=org.postgresql.jdbc.PgConnection:remove
Note that you can use tab completion, as you may not have that specific logger added.
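If the logger entry doesn't exist (so the remove fails), another option — a sketch assuming the standard WildFly logging subsystem — is to add the category with a quieter level so FINE messages are filtered out:

```
# jboss-cli: cap the org.postgresql category at INFO so FINE/DEBUG output is dropped
/subsystem=logging/logger=org.postgresql:add(level=INFO)
```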

Use instance metadata to configure Spring Cloud Config so the IAM role can be used to clone from CodeCommit

I am trying to run a Spring Cloud Config application inside a Docker container spawned by ECS. I am having trouble setting this up correctly so that the instance metadata is used to clone the Git repo from CodeCommit.
I have the following settings.
pom.xml dependencies
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-aws</artifactId>
</dependency>
application.yml
# some other non-related settings such as port
spring:
  cloud:
    config:
      server:
        git:
          uri: https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/<repo name>
          skip-ssl-validation: true
cloud:
  aws:
    credentials:
      instance-profile: true
    stack:
      auto: false
In the Docker logs I can find the following:
2019-04-25 16:37:54.209 WARN 1 --- [nio-5000-exec-1] .c.s.e.MultipleJGitEnvironmentRepository : Error occured cloning to base directory.
org.eclipse.jgit.api.errors.TransportException: https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/<repo name>: git-upload-pack not permitted on 'https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/<repo name>/'
If I am understanding this correctly, then according to the Spring Cloud Config documentation, when you use a CodeCommit Git URL and don't specify a username and password, it should automatically fall back to the AWS default credentials chain, which has instance-profile credentials as the final option:
If you provide a username and password with an AWS CodeCommit URI, they must be the AWS accessKeyId and secretAccessKey that provide access to the repository.
If you do not specify a username and password, the accessKeyId and secretAccessKey are retrieved by using the AWS Default Credential Provider Chain.
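As a way to rule out the instance-profile step, the documentation quoted above implies you can temporarily pass the keys explicitly: for a CodeCommit URI the username/password are interpreted as an accessKeyId/secretAccessKey pair. A sketch with placeholder values:

```yaml
spring:
  cloud:
    config:
      server:
        git:
          uri: https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/<repo name>
          # For CodeCommit URIs these are treated as AWS keys, per the docs above
          username: <accessKeyId>
          password: <secretAccessKey>
```

If cloning works with explicit keys, the repository permissions are fine and the problem is in how the instance-profile credentials are resolved inside the container.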

Spring Boot Application Shutdown

I have a Spring Boot application which listens to JCAPS. The connection is durable.
When I shut down the application using
curl -X POST ip:port/shutdown
the application does not shut down completely; I can still see the PID when I grep the processes. So I tried to kill it using
kill -15 PID
or
kill -SIGTERM PID
The PID is gone, but the subscription to the JCAPS topic is still active. Hence, when I restart the application, I am unable to connect to the same topic using the same subscriber name.
How do I properly shut down the Spring Boot application?
Add the following dependencies to your pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
Set the following properties in application.properties (endpoints.shutdown.enabled is the Spring Boot 1.x property name; the management.* properties apply to Boot 2.x):
management.endpoints.web.exposure.include=*
management.endpoint.shutdown.enabled=true
endpoints.shutdown.enabled=true
Then start the Spring Boot app.
When you want to shut the app down, run the following curl command:
curl -X POST localhost:port/actuator/shutdown
Reference: https://www.baeldung.com/spring-boot-shutdown
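Note that the actuator endpoint only stops the Spring context; the durable JCAPS subscription also has to be closed explicitly, otherwise the broker keeps it active. A minimal sketch of the idea — the class and method names here are hypothetical, and closeSubscription() stands in for unsubscribing/closing the real JMS session:

```java
public class SubscriptionCleanup {
    private volatile boolean unsubscribed = false;

    // Stand-in for closing the durable subscriber and its JMS session/connection.
    // In a Spring bean this would typically be a @PreDestroy method, which runs
    // as the context closes during an actuator shutdown.
    public void closeSubscription() {
        unsubscribed = true;
    }

    // Also register a JVM shutdown hook so a kill -15 (SIGTERM) runs the same
    // cleanup even when the actuator endpoint is bypassed.
    public Thread registerHook() {
        Thread hook = new Thread(this::closeSubscription, "jcaps-cleanup");
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public boolean isUnsubscribed() {
        return unsubscribed;
    }
}
```

With this in place, kill -15 triggers the hook, and the actuator shutdown path runs the same cleanup, so the subscriber name is free again on restart.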

Unable to initialize Spark context using Java

I am trying a simple word count program using Spark, but it fails when I try to initialize the Spark context.
Below is my code:
conf = new SparkConf(true)
        .setAppName("WordCount")
        .setMaster("spark://192.168.0.104:7077");
sc = new JavaSparkContext(conf);
A few things to clarify: I am using Spark version 2.1.1, my Java code is on Windows 10, and my server is running in a VM.
I have disabled the firewall in the VM and can access the URL http://192.168.0.104:8080/ from Windows.
However, I am getting the stack trace below when running the code:
17/08/06 18:44:15 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.103:4040
17/08/06 18:44:15 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://192.168.0.104:7077...
17/08/06 18:44:15 INFO TransportClientFactory: Successfully created connection to /192.168.0.104:7077 after 41 ms (0 ms spent in bootstraps)
17/08/06 18:44:15 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master 192.168.0.104:7077
org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
Can someone help?
A bit late, but for those running into this now: this can be caused by the Maven version used for Spark Core or Spark SQL not being compatible with the Spark version used on the server. At the moment, Spark 2.4.4 appears to be compatible with the following Maven setup:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.4</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.3.4</version>
</dependency>
Incompatibility issues can be diagnosed by viewing the Spark master node logs; they should mention something along the lines of local class incompatible: stream classdesc serialVersionUID.
I hope this is still of some use to someone!
You need to import some Spark classes into your program. Add the following lines:
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.SparkConf;
SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local");
JavaSparkContext sc = new JavaSparkContext(conf);