I have a Java Spring app and I'm using Amazon Keyspaces (for Apache Cassandra). I'm using the SigV4 plugin (version 4.0.2) and the cassandra java-driver-core (version 4.4.0), and I have followed the official documentation on how to connect my Java app to MCS. The app connects just fine, but I'm getting a weird warning at startup:
WARN 1 --- [ s0-admin-0] .o.d.i.c.m.t.DefaultTokenFactoryRegistry : [s0] Unsupported partitioner 'com.amazonaws.cassandra.DefaultPartitioner', token map will be empty.
Everything looks good at first, but after a few minutes that warning comes back and my queries start to fail. This is how the logs look after a few minutes:
WARN 1 --- [ s0-admin-0] .o.d.i.c.m.t.DefaultTokenFactoryRegistry : [s0] Unsupported partitioner 'com.amazonaws.cassandra.DefaultPartitioner', token map will be empty.
WARN 1 --- [ s0-io-1] c.d.o.d.i.c.m.SchemaAgreementChecker : [s0] Unknown peer xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, excluding from schema agreement check
WARN 1 --- [ s0-io-0] c.d.o.d.i.c.control.ControlConnection : [s0] Unexpected error while refreshing schema after a successful reconnection, keeping previous version (CompletionException: com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Channel was force-closed)
WARN 1 --- [ s0-io-1] c.d.o.d.i.c.m.DefaultTopologyMonitor : [s0] Control node ec2-x-xx-xxx-xx.us-east-2.compute.amazonaws.com/x.xx.xxx.xxx:xxxx has an entry for itself in system.peers: this entry will be ignored. This is likely due to a misconfiguration; please verify your rpc_address configuration in cassandra.yaml on all nodes in your cluster.
I have debugged a little, and it looks like that partitioner comes from the actual node metadata, so I don't know whether there's a way to fix it on my side.
I've seen a similar question asked recently here, but no solution has been posted yet. Any ideas? Thanks so much in advance.
These are all warnings, not errors, and your connection should work just fine. They are logged because Amazon Keyspaces is slightly different from an actual Cassandra cluster. Try setting these to get rid of the noise:
datastax-java-driver.advanced {
  metadata {
    schema.enabled = false
    token-map.enabled = false
  }
  connection.warn-on-init-error = false
}
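For completeness, those switches can sit alongside the usual Keyspaces connection settings in the driver's application.conf. A minimal sketch, assuming region us-east-2 and placeholder truststore path and password (adjust all three to your setup):

```hocon
datastax-java-driver {
    basic.contact-points = ["cassandra.us-east-2.amazonaws.com:9142"]
    basic.load-balancing-policy.local-datacenter = "us-east-2"
    advanced {
        auth-provider = {
            class = software.aws.mcs.auth.SigV4AuthProvider
            aws-region = "us-east-2"
        }
        ssl-engine-factory {
            class = DefaultSslEngineFactory
            truststore-path = "./src/main/resources/cassandra_truststore.jks"
            truststore-password = "my_password"
        }
        metadata {
            # Keyspaces reports an unsupported partitioner, so skip
            # schema and token-map refreshes to silence the warnings.
            schema.enabled = false
            token-map.enabled = false
        }
        connection.warn-on-init-error = false
    }
}
```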
I have the same problem. I ran into the issues above when using Spring Boot 2.3.x with either of these starters:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-cassandra-reactive</artifactId>
</dependency>
OR
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-cassandra</artifactId>
</dependency>
With Spring Boot 2.3.x, a Maven/Gradle build pulls in datastax-java-driver-core 4.6.1, and I think this is another reason Amazon Keyspaces is not supported.
Back to the subject of the AWS library aws-sigv4-auth-cassandra-java-driver-plugin 4.0.2: a Maven/Gradle build with it pulls in datastax-java-driver-core 4.4.0. So I am starting to suspect that Amazon Keyspaces may not support datastax-java-driver-core versions greater than 4.4.0.
In short: if you want a Spring Boot 2 application to work, try the following steps.
In pom.xml:
remove aws-sigv4-auth-cassandra-java-driver-plugin
downgrade Spring Boot from 2.3.x to 2.2.9
add one of the dependencies below:
spring-boot-starter-data-cassandra-reactive
OR
spring-boot-starter-data-cassandra
Create and download the Amazon digital certificate, and add it to a truststore.
If you use IntelliJ IDEA, go to Edit Configurations -> VM Options and add:
-Djavax.net.ssl.trustStore=path_to_file/cassandra_truststore.jks
-Djavax.net.ssl.trustStorePassword=my_password
Reference: https://docs.aws.amazon.com/keyspaces/latest/devguide/using_java_driver.html
In application-dev.yml, add the config below:
spring:
  data:
    cassandra:
      contact-points:
        - "cassandra.ap-southeast-1.amazonaws.com"
      port: 9142
      ssl: true
      username: "cassandra-username"
      password: "cassandra-password"
      keyspace-name: keyspace-name
      request:
        consistency: local_quorum
Run the test program. It passes. This works for me.
Tech Stack
Spring Boot WebFlux 2.2.9.RELEASE
Cassandra Reactive
JDK 13
Cassandra Database with Amazon Keyspaces
Maven 3.6.3
Have fun programming!
Related
I've upgraded Spring Boot from version 2.3.3 to version 2.5.1, and it causes a test with this configuration to fail with 404 instead of 200:
@SpringBootTest(
    webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT
)
@AutoConfigureMockMvc
After comparing the logs, I found that before the upgrade the log contained this mapping:
2021-07-19 21:00:48.881 INFO 15114 --- [ Test worker] o.s.w.s.f.support.RouterFunctionMapping : Mapped /api => {
(GET && /ping) -> org.springframework.web.servlet.function.RouterFunctionDslKt$sam$org_springframework_web_servlet_function_HandlerFunction$0#22cf59c1
}
and after the upgrade I couldn't find any RouterFunctionMapping log, so I guess it's probably related.
If it helps, I defined the router using Kotlin's RouterFunctionDsl.
I've tried looking at other questions and even searching for breaking changes, but I couldn't find any hints.
OK, it seems the problem is that, for some reason, the endpoint can't be found unless I add a trailing slash to the mockMvc.perform call. Very weird.
A small question regarding Spring Boot Admin and a rather strange log entry that I don't know how to fix.
My current setup is:
SpringBoot Admin Server 2.3.1
SpringBoot 2.4.0 (with actuator)
Spring Cloud Ilford (with Spring Cloud Kubernetes)
On a very simple SBA app:
@EnableScheduling
@EnableAdminServer
@EnableDiscoveryClient
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class);
    }
}
I am getting the following when deployed on Kubernetes (I tried Minikube, EKS, and GKE; all have this issue). However, it is not reproducible when running on localhost.
2020-11-xx WARN [,,,] 47 --- [or-http-epoll-1] d.c.b.a.s.s.e.ProbeEndpointsStrategy : Duplicate endpoints for id 'httptrace' detected. Omitting: [ProbeEndpointsStrategy.DetectedEndpoint(definition=ProbeEndpointsStrategy.EndpointDefinition(id=httptrace, path=httptrace), endpoint=Endpoint(id=httptrace, url=http://{some wrong ip here}:8000/actuator/httptrace))]
2020-11-xx WARN [,,,] 47 --- [or-http-epoll-1] d.c.b.a.s.s.e.ProbeEndpointsStrategy : Duplicate endpoints for id 'threaddump' detected. Omitting: [ProbeEndpointsStrategy.DetectedEndpoint(definition=ProbeEndpointsStrategy.EndpointDefinition(id=threaddump, path=threaddump), endpoint=Endpoint(id=threaddump, url=http://{some wrong ip here}:8000/actuator/threaddump))]
The issue:
the IPs are incorrect
the port is incorrect; there is nothing on port 8000
my httptrace and threaddump endpoints are not under actuator/xxx
My actuator endpoints are all under /, configured with:
management.endpoints.web.base-path=/
management.endpoints.web.exposure.include=*
What is the root cause of this, and is there a property I need to configure in order to fix it?
This issue has been fixed with the latest Spring Boot 2.5.2 and Spring Boot Admin 2.5.2.
I have a spring boot application running on Google AppEngine Standard environment. My app.yaml is something like this:
runtime: java11
instance_class: F4
service: appinapp
handlers:
- url: /(.*)
  script: auto
  secure: always
automatic_scaling:
  min_instances: 0
  max_instances: 2
env_variables:
  VAR1: "var1"
  VAR2: "var2"
  VAR3: "var3"
My application.yml is something like this:
var1: ${VAR1:myvar1}
var2: ${VAR2:myvar2}
var3: ${VAR3:myvar3}
Whenever I deploy the application, the variables in application.yml always take the default values, i.e. myvar1, myvar2, myvar3, instead of var1, var2, var3. I can't figure out why this happens. Note that I have two different app.yaml files, one for staging and one for production: staging_app.yaml and prod_app.yaml.
Any help here is much appreciated.
UPDATE:
I checked the uploaded files in the App Engine debug view and found that my YAML files are not updated when I deploy a new build; only the jar file gets updated.
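For what it's worth, Spring's ${VAR1:myvar1} placeholder only falls back to the default when the variable is genuinely absent from the environment, which is consistent with the stale app.yaml never exporting it. A minimal plain-Java sketch of that fallback rule (the variable name below is hypothetical):

```java
public class EnvDefault {

    // Mirrors Spring's ${NAME:default} placeholder rule: use the
    // environment variable when it is set, otherwise the default.
    static String resolve(String name, String defaultValue) {
        String value = System.getenv(name);
        return value != null ? value : defaultValue;
    }

    public static void main(String[] args) {
        // SOME_UNSET_EXAMPLE_VAR is assumed to be unset, so the
        // default wins, just like myvar1 did after each deploy.
        System.out.println(resolve("SOME_UNSET_EXAMPLE_VAR", "myvar1"));
    }
}
```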
I am working on a Spring Boot application with Flyway. I have to update a database that already contains these migrations:
The migrations under common must be executed in every environment (whatever Spring profile is loaded), while local and qa insert different data into an H2 database.
I need to alter the table (adding and modifying columns) and then update the data inserted in V1_1 and V1_2. I tried many different approaches to avoid putting the ALTER TABLE command in the local and qa migration files: I would like to leave the ALTER TABLE commands in the common folder and keep only the update commands in the local and qa folders. But all my attempts were in vain; the new migration I add in the local directory always gets executed before the one I add in the common directory:
Even with the naming scheme above, V1_4 gets executed before V1_3, causing an error because the new columns have not been added yet. I know this is not the perfect naming scheme; I used it mostly for testing and to illustrate my point. But even when testing manually, Flyway does not behave as I would expect (surely because of a misunderstanding on my part). The app log clearly shows V1_3 not being executed:
2019-08-13 13:31:04.025 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.0 - schema
2019-08-13 13:31:04.076 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.1 - institutions
2019-08-13 13:31:04.092 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.2 - data
2019-08-13 13:31:04.476 INFO 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1.4 - update data
2019-08-13 13:31:04.482 ERROR 26508 --- [ main] o.f.core.internal.command.DbMigrate : Migration of schema "PUBLIC" to version 1.4 - update data failed! Please restore backups and roll back database and code!
I am using this property in the environment where the exception occurs:
spring.flyway.locations=classpath:db/migration/common,classpath:db/migration/local
What am I doing wrong? I can't find much documentation on Flyway migrations with files in multiple directories. Unfortunately, this is the structure I am stuck with; I cannot change it, since those decisions are out of my hands.
Thanks in advance!
When you provide a location for your migrations, Flyway looks for .sql files under that folder and its subfolders. So if you have V1.1 and V1.4 in local and V1.3 in common, Flyway treats local as the migration folder and executes V1.1 and V1.4 in that order; it won't go to common unless you provide the root directory as your Flyway location. In your case you should give db/migration as your location.
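Following that advice, the property would look like this (a sketch; it assumes the files live under src/main/resources/db/migration/ and that scanning every subfolder is acceptable in the environment in question, since the root location picks up common, local, and qa alike):

```properties
# Flyway scans this folder recursively, so V1_3 (common) and
# V1_4 (local) end up in a single version-ordered migration run.
spring.flyway.locations=classpath:db/migration
```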
I am having an issue executing the gradle clean test command.
My application uses Activiti for workflow.
Git URL: https://github.com/sanelib/eBOSS/tree/merge-before-dev
The "merge-before-dev" branch has more tests for the Activiti workflow process, but only 6 of the 12 integration tests from the core module are executed. If I put @Ignore on any random 6 tests, the remaining 6 succeed. I added some console output to debug and found that it hangs when starting the Activiti process.
The source also includes the database schema in the /scripts folder. Let me know if any file required for testing in your environment is missing.
Can anybody look into this and suggest a solution?
I also got this result: 23 tests completed, 14 failed (:core:test FAILED).
Then I randomly picked one of your tests and it failed in isolation as well, so it doesn't seem to be a concurrency problem.
The root cause seems to be this:
2016-02-05 20:56:16.556 WARN 16072 --- [ main] o.h.e.jdbc.internal.JdbcServicesImpl : HHH000342: Could not obtain connection to query metadata : Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
Place a break-point on this line in Hibernate.
So it does seem to be a connection problem.