Spring Boot MySQL connection timeout when inserting a new entry - java

I'm getting a timeout error on the insert query when I receive 16 requests simultaneously.
I increased the timeout from 30 to 60 seconds, but without any success:
2022-10-21 20:52:15.671 WARN 2375 --- [nio-8080-exec-6] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: null
2022-10-21 20:52:15.672 ERROR 2375 --- [nio-8080-exec-6] o.h.engine.jdbc.spi.SqlExceptionHelper : HikariPool-1 - Connection is not available, request timed out after 60000ms.
2022-10-21 20:52:15.681 ERROR 2375 --- [nio-8080-exec-6] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: unable to obtain isolated JDBC connection; nested exception is org.hibernate.exception.JDBCConnectionException: unable to obtain isolated JDBC connection] with root cause
java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 60000ms.
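For context: HikariCP's default maximumPoolSize is 10, so 16 simultaneous requests can queue on the pool and exhaust connectionTimeout if each insert holds its connection for a while. A minimal sketch of the relevant Spring Boot properties follows; the sizing values are illustrative, not a recommendation:

# application.properties -- illustrative values, tune to your workload
spring.datasource.hikari.maximum-pool-size=20
# how long a caller waits for a free connection before timing out (ms)
spring.datasource.hikari.connection-timeout=30000

Raising the timeout alone does not help if every pooled connection is held by a slow or never-closed transaction; the pool just waits longer before failing.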

Related

Getting connection timeout error on first hit if application is idle for a long period

I am using the default configuration of the Hikari pool and have set maxLifetime to 5 minutes.
I am getting the following errors:
org.postgresql.util.PSQLException: This connection has been closed.
at org.postgresql.jdbc.PgConnection.checkClosed(PgConnection.java:877) ~[postgresql-42.2.23.jar!/:42.2.23]
at org.postgresql.jdbc.PgConnection.setNetworkTimeout(PgConnection.java:1610) ~[postgresql-42.2.23.jar!/:42.2.23]
at com.zaxxer.hikari.pool.PoolBase.setNetworkTimeout(PoolBase.java:566) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:173) ~[HikariCP-4.0.3.jar!/:na]
[nio-1025-exec-8] o.h.engine.jdbc.spi.SqlExceptionHelper : This connection has been closed.
o.h.engine.jdbc.spi.SqlExceptionHelper : HikariPool-1 - Connection is not available, request timed out after 30036ms.
o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 08003
We are using an EDB Postgres database.
Please check the issue.
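This symptom usually means something between the application and the database (a firewall, a proxy, or the server's own idle timeout) closes idle connections before the pool retires them. A sketch of the common mitigation in Spring Boot properties, with illustrative values; the point is to keep max-lifetime below the server-side idle timeout:

# application.properties -- illustrative values
# retire connections before any server/firewall idle timeout can kill them
spring.datasource.hikari.max-lifetime=240000
# ping idle connections so intermediaries do not silently drop them
# (keepalive-time is available from HikariCP 4.0.1; the log above shows 4.0.3)
spring.datasource.hikari.keepalive-time=120000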

MongoDB Atlas "Got Socket exception on Connection To Cluster"

I'm using Java & Spring Boot with MongoDB Atlas, and I created a database that serves many objects' CRUD operations.
When I do the POST on uploadingImage, I get this error: Got Socket exception on Connection [connectionId{localValue:4, serverValue:114406}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017
However, when I call the other objects' CRUD operations, it works totally fine, and I don't know why it raises this exception. All my CRUD operations for all objects work well on localhost when not connecting to MongoDB Atlas, so my ImageDAO should be fine; I just used mongoTemplate.insert(Image).
I searched online, and answers said it might be the Atlas IP whitelist, so I opened my cluster to any IP address.
I also set my timeout and socket configuration like this in my .properties file:
spring.data.mongodb.uri=mongodb+srv://username:password@cluster0.1c6kg.mongodb.net/database?retryWrites=true&w=majority&keepAlive=true&pooSize=30&autoReconnect=true&socketTimeoutMS=361000000&connectTimeoutMS=3600000
It still does not work. I think the problem is definitely related to the socket timeout, but I don't know where else I can configure it.
The error log is here:
2020-11-01 12:25:34.275 WARN 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Got socket exception on connection [connectionId{localValue:4, serverValue:114406}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017. All connections to cluster0-shard-00-02.1c6kg.mongodb.net:27017 will be closed.
2020-11-01 12:25:34.283 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Closed connection [connectionId{localValue:4, serverValue:114406}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017 because there was a socket exception raised by this connection.
2020-11-01 12:25:34.295 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.cluster : No server chosen by WritableServerSelector from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=cluster0-shard-00-00.1c6kg.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=46076648, setName='atlas-d9ovwb-shard-0', canonicalAddress=cluster0-shard-00-00.1c6kg.mongodb.net:27017, hosts=[cluster0-shard-00-00.1c6kg.mongodb.net:27017, cluster0-shard-00-01.1c6kg.mongodb.net:27017, cluster0-shard-00-02.1c6kg.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-02.1c6kg.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_WEST_2'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=1, lastWriteDate=Sun Nov 01 12:25:29 PST 2020, lastUpdateTimeNanos=104428017411386}, ServerDescription{address=cluster0-shard-00-02.1c6kg.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=cluster0-shard-00-01.1c6kg.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=41202444, setName='atlas-d9ovwb-shard-0', canonicalAddress=cluster0-shard-00-01.1c6kg.mongodb.net:27017, hosts=[cluster0-shard-00-00.1c6kg.mongodb.net:27017, cluster0-shard-00-01.1c6kg.mongodb.net:27017, cluster0-shard-00-02.1c6kg.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-02.1c6kg.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_WEST_2'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=1, lastWriteDate=Sun Nov 01 12:25:29 PST 2020, lastUpdateTimeNanos=104428010234368}]}. Waiting for 30000 ms before timing out
2020-11-01 12:25:34.316 INFO 20242 --- [ngodb.net:27017] org.mongodb.driver.cluster : Discovered replica set primary cluster0-shard-00-02.1c6kg.mongodb.net:27017
2020-11-01 12:25:34.612 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Opened connection [connectionId{localValue:5, serverValue:108547}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017
2020-11-01 12:25:34.838 WARN 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Got socket exception on connection [connectionId{localValue:5, serverValue:108547}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017. All connections to cluster0-shard-00-02.1c6kg.mongodb.net:27017 will be closed.
2020-11-01 12:25:34.838 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Closed connection [connectionId{localValue:5, serverValue:108547}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017 because there was a socket exception raised by this connection.
2020-11-01 12:25:34.876 INFO 20242 --- [ngodb.net:27017] org.mongodb.driver.cluster : Discovered replica set primary cluster0-shard-00-02.1c6kg.mongodb.net:27017
2020-11-01 12:25:34.878 ERROR 20242 --- [nio-8088-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.mongodb.UncategorizedMongoDbException: Exception sending message; nested exception is com.mongodb.MongoSocketWriteException: Exception sending message] with root cause
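As an aside: in the connection string above, pooSize is not a documented option (the documented pool-size key is maxPoolSize), and keepAlive/autoReconnect do not appear to be recognized by recent Java drivers. Socket timeouts can also be set in code rather than in the URI; a minimal sketch assuming the MongoDB Java driver 4.x, with illustrative timeout values:

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import java.util.concurrent.TimeUnit;

public class MongoClientSketch {
    // Build a client with explicit socket timeouts instead of URI parameters.
    public static MongoClient create(String uri) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(uri))
                .applyToSocketSettings(socket -> socket
                        .connectTimeout(10, TimeUnit.SECONDS)   // illustrative
                        .readTimeout(120, TimeUnit.SECONDS))    // illustrative
                .build();
        return MongoClients.create(settings);
    }
}

In recent Spring Boot versions the same settings can instead be contributed through a MongoClientSettingsBuilderCustomizer bean, so the auto-configured MongoTemplate picks them up.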

SQL exceptions when trying to insert duplicate records into the database

I have the method below, which checks whether a user exists and adds it to the database if it does not.
@Transactional(isolation = Isolation.READ_COMMITTED)
public Optional<User> create(User user) {
    Optional<User> userOptional = userRepository.findByEmail(user.getEmail());
    if (userOptional.isPresent()) {
        return Optional.empty();
    }
    User savedUser = userRepository.save(user);
    return Optional.of(savedUser);
}
Two threads, t1 and t2, try to save the same user with the same email (example@example.com). The following occurs:
t1 can't find the user
t2 also can't find the user
t1 adds the user to the database
t2 tries to add the user to the database, but fails because the record already exists, throwing the following error:
2019-10-02 18:28:43 WARN UKPC000029 --- [nio-8090-exec-2] o.h.e.j.s.SqlExceptionHelper : SQL Error: 1062, SQLState: 23000
2019-10-02 18:28:43 ERROR UKPC000029 --- [nio-8090-exec-2] o.h.e.j.s.SqlExceptionHelper : Duplicate entry 'example@example.com' for key 'UK_tcks72p02h4dp13cbhxne17ad'
2019-10-02 18:28:43 ERROR UKPC000029 --- [nio-8090-exec-2] o.h.i.ExceptionMapperStandardImpl : HHH000346: Error during managed flush [org.hibernate.exception.ConstraintViolationException: could not execute statement]
2019-10-02 18:28:43 DEBUG UKPC000029 --- [nio-8090-exec-2] o.s.o.j.JpaTransactionManager : Initiating transaction rollback after commit exception
org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint [UK_tcks72p02h4dp13cbhxne17ad]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:296)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:253)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:536)
To get around this error, I tried to use the SERIALIZABLE isolation level:
@Transactional(isolation = Isolation.SERIALIZABLE)
However, in the same scenario I got the following error:
2019-10-02 18:52:56 WARN UKPC000029 --- [nio-8090-exec-2] o.h.e.j.s.SqlExceptionHelper : SQL Error: 1213, SQLState: 40001
2019-10-02 18:52:56 ERROR UKPC000029 --- [nio-8090-exec-2] o.h.e.j.s.SqlExceptionHelper : Deadlock found when trying to get lock; try restarting transaction
2019-10-02 18:52:56 ERROR UKPC000029 --- [nio-8090-exec-2] o.h.i.ExceptionMapperStandardImpl : HHH000346: Error during managed flush [org.hibernate.exception.LockAcquisitionException: could not execute statement]
2019-10-02 18:52:56 DEBUG UKPC000029 --- [nio-8090-exec-1] o.s.o.j.JpaTransactionManager : Not closing pre-bound JPA EntityManager after transaction
2019-10-02 18:52:56 DEBUG UKPC000029 --- [nio-8090-exec-2] o.s.o.j.JpaTransactionManager : Initiating transaction rollback after commit exception
org.springframework.dao.CannotAcquireLockException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.LockAcquisitionException: could not execute statement
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:287)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:253)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:536)
I was under the impression that both transactions would be executed one after the other?
The following link (isolation explained) says:
SERIALIZABLE isolation level is the most restrictive of all isolation levels. Transactions are executed with locking at all levels (read, range and write locking) so they appear as if they were executed in a serialized way. This leads to a scenario where none of the issues mentioned above may occur, but in the other way we don't allow transaction concurrency and consequently introduce a performance penalty.
However, both threads are still checking for the user's email even though the first transaction hasn't finished? What is the best way to deal with this problem? Is using Java locks acceptable in this case?
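One common approach, not from the original post: drop the find-then-save pre-check and let the unique constraint on the email column arbitrate the race. A minimal sketch assuming Spring Data JPA and a unique index on email:

import java.util.Optional;
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // No pre-check: save() runs in its own transaction (Spring Data's default),
    // so a unique-key violation rolls that transaction back cleanly and is
    // translated into DataIntegrityViolationException, which we catch here.
    public Optional<User> create(User user) {
        try {
            return Optional.of(userRepository.save(user));
        } catch (DataIntegrityViolationException e) {
            // A concurrent request inserted the same email first.
            return Optional.empty();
        }
    }
}

This makes the database the single arbiter of uniqueness, so neither SERIALIZABLE isolation nor Java-level locks (which would not help across multiple JVM instances anyway) are needed for correctness.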

hibernate_sequence creates duplicate key 'PRIMARY'

A previously working Spring Boot application has come up with a bug. It last worked 100%, and I made zero changes to the code. It seems duplicate primary keys are being entered into the hibernate_sequence table.
I worked for three hours today with my mentor developer, and we are both stumped. We've tried using a different database, and renaming and launching a backup of the app. We have tried different ways to generate IDs on the entities, and we updated Spring Boot to the most current version. Each time we drop/delete the hibernate_sequence table, you can see in the console, when it is generated on initial app start-up, that you get Hibernate: insert into hibernate_sequence values (1) twice. Since the code has not changed and it worked fine last Wednesday, my mentor feels it might be an update somewhere we are not aware of.
Github Repo of working code : https://github.com/chrisyoung0101/DrinkWithWineApp
IMG 1: database before hibernate_sequence is generated
IMG 2: console on app start-up
IMG 3: database before hibernate_sequence is generated
Errors after trying to save to the Pairing table in MySQL:
2019-05-19 18:33:23.698 WARN 4405 --- [nio-8080-exec-7] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1062, SQLState: 23000
2019-05-19 18:33:23.698 ERROR 4405 --- [nio-8080-exec-7] o.h.engine.jdbc.spi.SqlExceptionHelper : Duplicate entry '1' for key 'PRIMARY'
2019-05-19 18:33:23.702 ERROR 4405 --- [nio-8080-exec-7] o.h.i.ExceptionMapperStandardImpl : HHH000346: Error during managed flush [org.hibernate.exception.ConstraintViolationException: could not execute statement]
2019-05-19 18:33:23.717 ERROR 4405 --- [nio-8080-exec-7] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint [PRIMARY]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement] with root cause
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
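A frequently suggested way to take the shared hibernate_sequence table out of the picture on MySQL, offered here as a sketch rather than a diagnosis of the regression above, is to switch the entities to IDENTITY generation:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Pairing {

    // IDENTITY delegates key generation to the column's AUTO_INCREMENT,
    // so hibernate_sequence is never consulted for this entity.
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // ... other fields ...
}

This assumes the id column is declared AUTO_INCREMENT in MySQL; GenerationType.AUTO is what typically falls back to the shared hibernate_sequence table on MySQL under Hibernate 5.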

Apache NiFi PutHiveStreaming Issue

I am using the NiFi PutHiveStreaming processor to write records into HDFS, and I keep running into this issue. I am not able to make much of it, because it seems as if I have adhered to all the configuration requirements. Any pointers from someone who has successfully resolved this issue? (NiFi 1.4.0 and Hive 2.3.3)
2018-04-18 09:03:49,997 INFO [Timer-Driven Process Thread-5] hive.metastore Trying to connect to metastore with URI thrift://hive-metastore:9083
2018-04-18 09:03:49,999 INFO [Timer-Driven Process Thread-5] hive.metastore Connected to metastore.
2018-04-18 09:03:50,486 WARN [Timer-Driven Process Thread-5] o.a.h.h.m.RetryingMetaStoreClient MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
2018-04-18 09:03:51,487 INFO [Timer-Driven Process Thread-5] hive.metastore Trying to connect to metastore with URI thrift://hive-metastore:9083
2018-04-18 09:03:51,491 INFO [Timer-Driven Process Thread-5] hive.metastore Connected to metastore.
2018-04-18 09:03:51,505 ERROR [Timer-Driven Process Thread-5] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=1b8f6a4a-e456-3c2b-74be-bc9a0927a43b] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://hive-metastore:9083', database='default', table='bi_events_identification_carrier', partitionVals=[2018, 3, 28] }: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://hive-metastore:9083', database='default', table='bi_events_identification_carrier', partitionVals=[2018, 3, 28] }
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://hive-metastore:9083', database='default', table='bi_events_identification_carrier', partitionVals=[2018, 3, 28] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://hive-metastore:9083', database='default', table='bi_events_identification_carrier', partitionVals=[2018, 3, 28] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://hive-metastore:9083', database='default', table='bi_events_identification_carrier', partitionVals=[2018, 3, 28] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
... 25 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
I have set the concurrency and txn parameters in hive-site.xml as well.
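For reference, the transactional-ingest settings that Hive streaming usually requires in hive-site.xml look like the sketch below; this is something to check against, not a confirmed fix, and exact values depend on the cluster:

<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>

The target table itself must also be transactional (stored as ORC, bucketed, with TBLPROPERTIES ('transactional'='true')) for the processor to acquire locks on it.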
