I'm debugging a Spring application which "loses" its entities/repositories after ~5 minutes.
application.properties:
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.jpa.database=SYBASE
spring.datasource.driver-class-name=com.sybase.jdbc4.jdbc.SybDriver
spring.datasource.url=jdbc:sybase:Tds:database-host:2638/R4Sybase
spring.datasource.username=user
spring.datasource.password=pass
spring.datasource.maxLifeTime=60000
logging.level.org.springframework.web=DEBUG
logging.level.org.hibernate=DEBUG
spring.jpa.open-in-view=true
spring.jpa.show-sql=false
Example repository:
public interface EstateRepository extends CrudRepository<Estate, Integer> {
/* ... nothing special ... */
}
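Since the entity links shown below come from Spring Data REST, here is a minimal self-contained sketch of how such an entity/repository pair is typically exposed. The Estate fields and the explicit @RepositoryRestResource mapping are assumptions for illustration, not the actual classes from this project:
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// Hypothetical minimal entity; the real Estate class will have more fields
@Entity
public class Estate {
    @Id
    private Integer id;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
}

// Same repository as above, with an explicit (optional) REST mapping for illustration;
// Spring Data REST exposes it under /estates, matching the "_links" listing below
@RepositoryRestResource(collectionResourceRel = "estates", path = "estates")
interface EstateRepository extends CrudRepository<Estate, Integer> {
}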
After startup/initialization, available entities are listed under /:
{
"_links" : {
"estates" : {
"href" : "http://localhost:8080/choices/estates"
},
/* ... more entities ... */
"profile" : {
"href" : "http://localhost:8080/choices/profile"
}
}
However, after ~5 minutes, the response for / changes to
{
"_links" : {
"profile" : {
"href" : "http://localhost:8080/choices/profile"
}
}
}
It seems as if all entities/repositories are unloaded; there are no exceptions in the Catalina log, no timeouts, and no database errors at all. Reloading the app in Tomcat fixes things for another ~5 minutes.
I've tried fiddling with connection pool settings and switching to a Tomcat connection pool, but to no avail.
Is there some kind of keep-alive setting? Is this a garbage collection issue?
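For reference, HikariCP does expose lifetime/keep-alive style settings. Here is a minimal sketch of configuring them programmatically; the values are placeholders mirroring the properties above, and keepaliveTime only exists in newer HikariCP releases. Note that in Spring Boot these pool settings are normally bound under the spring.datasource.hikari.* prefix rather than directly under spring.datasource.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSketch {
    public static HikariDataSource buildDataSource() {
        HikariConfig config = new HikariConfig();
        // Connection details mirror application.properties above
        config.setDriverClassName("com.sybase.jdbc4.jdbc.SybDriver");
        config.setJdbcUrl("jdbc:sybase:Tds:database-host:2638/R4Sybase");
        config.setUsername("user");
        config.setPassword("pass");
        // Lifetime / keep-alive knobs exposed by the pool (placeholder values)
        config.setMaxLifetime(600_000);   // retire connections after 10 minutes
        config.setIdleTimeout(300_000);   // close idle connections after 5 minutes
        config.setKeepaliveTime(120_000); // ping idle connections every 2 minutes (HikariCP 4.0+)
        return new HikariDataSource(config);
    }
}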
Related
I have written a Spring Batch job with a Reader and a Writer (no Processor). Based on the input records from the reader, it loads data from a few source tables into archive tables. In the writer I call a stored procedure which loads the records into the archive tables and deletes them from the source tables. However, I am running into strange behavior: every time I start the batch it performs the archiving process but stops abruptly after 2 hours and 10 minutes of execution time. Even if I rerun it, it again runs for 2 hours and 10 minutes and then stops, although the archival is not fully completed. There are no errors in the log except the lines below.
Error log:
2022-08-29 17:12:09.157 INFO 9109612 --- [SpringApplicationShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-08-29 17:12:09.158 TRACE 9109612 --- [SpringApplicationShutdownHook] o.h.type.spi.TypeConfiguration$Scope : Handling #sessionFactoryClosed from [org.hibernate.internal.SessionFactoryImpl#c647c96f] for TypeConfiguration
2022-08-29 17:12:09.158 DEBUG 9109612 --- [SpringApplicationShutdownHook] o.h.type.spi.TypeConfiguration$Scope : Un-scoping TypeConfiguration [org.hibernate.type.spi.TypeConfiguration$Scope#9239dddd] from SessionFactory [org.hibernate.internal.SessionFactoryImpl#c647c96f]
Below is my batch writer implementation. Can someone please shed some light on this?
@Override
public void write(List<? extends Account> items) throws Exception {
    // Look up the DataSource from the application context and build a JdbcTemplate for this chunk
    DataSource dataSource = (DataSource) ctx.getBean("dataSource");
    this.jdbcTemplate = new JdbcTemplate(dataSource);
    for (Account cvObj : items) {
        logger.info("Going to archive for batch ");
        jdbcTemplate.update(QUERY_INSERT_ARCHIVE, cvObj.getRunner());
        // Stored procedure that archives the records and deletes them from the source tables
        jdbcTemplate.update("{ call pArchiveRecords (?) }", cvObj.getRunner());
        logger.info("Archival completed for ::: " + cvObj.getRunner());
        jdbcTemplate.update(QUERY_UPDATE_ARCHIVE, cvObj.getRunner());
    }
}
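As a side note, unrelated to the early stop: the JdbcTemplate is usually wired once via the DataSource rather than rebuilt from the application context on every write() call. A minimal sketch under that assumption; the class name and the placeholder SQL are invented for illustration, while Account, getRunner() and the procedure call come from the snippet above:
import java.util.List;
import javax.sql.DataSource;
import org.springframework.batch.item.ItemWriter;
import org.springframework.jdbc.core.JdbcTemplate;

// Hypothetical writer; Account is the domain type from the original reader/writer
public class ArchiveItemWriter implements ItemWriter<Account> {

    // Placeholder SQL standing in for the real statements referenced above
    private static final String QUERY_INSERT_ARCHIVE =
            "INSERT INTO archive_table SELECT * FROM source_table WHERE runner = ?";
    private static final String QUERY_UPDATE_ARCHIVE =
            "UPDATE batch_control SET status = 'ARCHIVED' WHERE runner = ?";

    private final JdbcTemplate jdbcTemplate;

    public ArchiveItemWriter(DataSource dataSource) {
        // DataSource injected once instead of being looked up from the context per chunk
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    public void write(List<? extends Account> items) throws Exception {
        for (Account cvObj : items) {
            jdbcTemplate.update(QUERY_INSERT_ARCHIVE, cvObj.getRunner());
            // Stored procedure that archives the records and deletes them from the source tables
            jdbcTemplate.update("{ call pArchiveRecords (?) }", cvObj.getRunner());
            jdbcTemplate.update(QUERY_UPDATE_ARCHIVE, cvObj.getRunner());
        }
    }
}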
I have set up a CAS server and configured a service for my client application:
{
"#class" : "org.apereo.cas.support.oauth.services.OAuthRegisteredService",
"clientId": "*********",
"clientSecret": "*****************",
"serviceId" : "^(http|https|imaps)://.*coll.mydomain.org.*",
"name" : "Angular OAuth2 App",
"id" : 10000013,
"supportedGrantTypes": [ "java.util.HashSet", [ "implicit", "refresh_token", "client_credentials" ] ],
"supportedResponseTypes": [ "java.util.HashSet", [ "token" ] ],
"bypassApprovalPrompt" : true,
"jsonFormat" : true,
"attributeReleasePolicy" : {
"#class" : "org.apereo.cas.services.ReturnAllAttributeReleasePolicy"
},
"logoutUrl" : "https://sso.coll.mydomain.org/cas/logout"
}
After the user has submitted the login form with his (correct) credentials, the OAuth2 flow continues and produces this HTTP request:
GET https://sso.coll.mydomain.org/cas/oauth2.0/callbackAuthorize?client_id=*************&redirect_uri=https://myclientapp.coll.mydomain.org&response_type=token&client_name=CasOAuthClient&ticket=ST-*********************************************
which gets this (wrong) Location response header, in which the protocol has switched from HTTPS to HTTP:
location: http://sso.coll.mydomain.org/cas/oauth2.0/authorize?response_type=token&client_id=*************&redirect_uri=https%3A%2F%2Fmyclientapp.coll.mydomain.org
This is a problem because Chrome (correctly) interrupts the user's web flow with this warning:
"The information you are sending isn't protected"
In the cas.properties:
cas.server.name: https://sso.coll.mydomain.org
cas.server.prefix: https://sso.coll.mydomain.org/cas
Can anyone suggest which configuration makes CAS change the protocol in the Location response header?
Any suggestions will be appreciated.
Thanks to all.
OK guys, a simple CAS restart resolved the issue, but the reason remains a mystery.
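For context, when a Location header flips from https to http behind a TLS-terminating reverse proxy, a common cause is the forwarded scheme not being honored by the servlet layer. That was not the fix here (a restart was), but as a minimal sketch, assuming a Spring Boot based deployment, Spring's ForwardedHeaderFilter can be registered so that X-Forwarded-Proto is respected when redirects are built:
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.filter.ForwardedHeaderFilter;

@Configuration
public class ForwardedHeaderConfig {

    // Makes the servlet layer honor X-Forwarded-Proto/Host set by the TLS-terminating proxy,
    // so generated Location headers keep the https scheme
    @Bean
    public FilterRegistrationBean<ForwardedHeaderFilter> forwardedHeaderFilter() {
        return new FilterRegistrationBean<>(new ForwardedHeaderFilter());
    }
}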
I am trying to implement a reactive application with Spring WebFlux and MongoDB.
I have the following configuration in the application.properties file:
spring.data.mongodb.database=my-db
spring.data.mongodb.uri=mongodb://user:pass@host:port/my-db
However, when I try to save a document to MongoDB I get the following error:
backend_1 | Caused by: com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'not authorized on test to execute command { insert: "user", ordered: true, $db: "test" }' on server database:27017. The full response is { "ok" : 0.0, "errmsg" : "not authorized on test to execute command { insert: \"user\", ordered: true, $db: \"test\" }", "code" : 13, "codeName" : "Unauthorized" }
backend_1 | at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:179) ~[mongodb-driver-core-3.8.2.jar:na]
backend_1 | at com.mongodb.internal.connection.InternalStreamConnection$2$1.onResult(InternalStreamConnection.java:370) ~[mongodb-driver-core-3.8.2.jar:na]
I simply cannot understand why the driver does not respect the configured database name and tries to insert into the test database (and thus fails).
Am I missing something?
One more thing: I am running the Java backend and MongoDB in separate containers with Docker Compose.
I have found a solution for the authentication issue.
It was the authSource that I was missing, so I added it to the database URI and it worked:
spring.data.mongodb.uri=mongodb://user:pass@host:port/my-db?authSource=admin
However, I still do not understand why the documents that my Java backend creates are being inserted into the test database and not the database that I configured it to work with.
Put this in the application.properties file:
# MongoDB
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=my-db
spring.data.mongodb.username=username
spring.data.mongodb.password=password
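Alternatively, the database name can be pinned programmatically so that writes cannot fall back to the driver's default test database. A minimal sketch, assuming the reactive driver; the configuration class and bean names are made up, and the URI placeholders mirror the properties above:
import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.ReactiveMongoTemplate;

@Configuration
public class MongoConfig {

    // Placeholder connection string mirroring the properties above, including authSource=admin
    private static final String URI = "mongodb://user:pass@host:port/my-db?authSource=admin";

    @Bean
    public MongoClient reactiveMongoClient() {
        return MongoClients.create(URI);
    }

    @Bean
    public ReactiveMongoTemplate reactiveMongoTemplate(MongoClient client) {
        // The database name is passed explicitly, so it does not depend on what the URI resolves to
        return new ReactiveMongoTemplate(client, "my-db");
    }
}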
I'm using Google Cloud Dataflow, and when I execute this code:
public static void main(String[] args) {
    String query = "SELECT * FROM [*****.*****]";
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());
    PCollection<TableRow> lines = p.apply(BigQueryIO.read().fromQuery(query));
    p.run();
}
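For reference, the zone/region flags mentioned further below can also be set on the runner options programmatically. A minimal sketch using the Beam-style DataflowPipelineOptions API; the region and the temp bucket are placeholders, and the exact API may differ in the SDK version used here:
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class ReadFromBigQuerySketch {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        // Keep the job, its temporary resources, and the EU dataset in the same location
        options.setRegion("europe-west1");                 // placeholder region
        options.setTempLocation("gs://my-eu-bucket/temp"); // hypothetical EU bucket

        Pipeline p = Pipeline.create(options);
        p.apply(BigQueryIO.readTableRows().fromQuery("SELECT * FROM [*****.*****]"));
        p.run();
    }
}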
I get this error:
(332b4f3b83bd3397): java.io.IOException: Query job beam_job_d1772eb4136d4982be55be20d173f63d_testradiateurmodegfcvsoasc07281159145481871-query failed, status: {
"errorResult" : {
"message" : "Cannot read and write in different locations: source: EU, destination: US",
"reason" : "invalid"
},
"errors" : [ {
"message" : "Cannot read and write in different locations: source: EU, destination: US",
"reason" : "invalid"
}],
"state" : "DONE"
}.
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.executeQuery(BigQueryQuerySource.java:173)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.getTableToExtract(BigQueryQuerySource.java:120)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.split(BigQuerySourceBase.java:87)
at com.google.cloud.dataflow.worker.runners.worker.WorkerCustomSources.splitAndValidate(WorkerCustomSources.java:261)
at com.google.cloud.dataflow.worker.runners.worker.WorkerCustomSources.performSplitTyped(WorkerCustomSources.java:209)
at com.google.cloud.dataflow.worker.runners.worker.WorkerCustomSources.performSplitWithApiLimit(WorkerCustomSources.java:184)
at com.google.cloud.dataflow.worker.runners.worker.WorkerCustomSources.performSplit(WorkerCustomSources.java:161)
at com.google.cloud.dataflow.worker.runners.worker.WorkerCustomSourceOperationExecutor.execute(WorkerCustomSourceOperationExecutor.java:47)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:341)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:297)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:125)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:105)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:92)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I have already read these posts, 37298504 and 42135002,
and https://github.com/GoogleCloudPlatform/DataflowJavaSDK/issues/405, but no solution works for me.
For more information:
The BigQuery table is located in the EU.
I tried starting the job with --zone=europe-west1-b and region=europe-west1-b.
I am using the DataflowRunner.
When I go to the BigQuery web UI, I see
these temporary datasets.
EDIT: I solved my problem by using version 1.9.0 of the Dataflow SDK.
I am using jmxtrans for remote monitoring of a Tomcat JVM. My request JSON query is below:
{
"servers" : [ {
"alias" : "MY_TOMCAT",
"local" : false,
"host" : "myhost",
"port" : "myport",
"queries" : [ {
"obj" : "Catalina:type=GlobalRequestProcessor,name=\"http-nio-*\"",
"attr" : [ "requestCount", "requestProcessingTime" ],
"resultAlias" : "tomcat.global-request-processor.http-nio",
"outputWriters" : [ {
"#class" : "com.googlecode.jmxtrans.model.output.StdOutWriter",
"settings" : {
"debug" : true
}
} ]
} ],
"url" : "service:jmx:rmi:///jndi/rmi://myhost:myport/jmxrmi"
} ]
}
I have successfully configured jmxtrans to monitor ActiveMQ, but for Tomcat it's not working.
I am using tomcat-7.40 running on JDK 7.
Please review and let me know whether any changes are required to the JSON request.
I got this resolved. There is nothing wrong with or missing from the request JSON.
It was a firewall issue, as the JMX port was blocked. After fixing the firewall issue, it started pulling data from Tomcat.
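For anyone hitting the same symptom, here is a minimal connectivity sketch using the standard javax.management.remote API (the host/port placeholders mirror the config above); run from the jmxtrans host, it fails fast when the JMX/RMI port is blocked by a firewall:
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Same service URL as in the jmxtrans config; host/port are placeholders
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://myhost:myport/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // If this prints, the JMX/RMI ports are reachable between the two hosts
            System.out.println("Connected; MBean count = " + connection.getMBeanCount());
        }
    }
}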