Connection pool exhausted in Spring Boot with JdbcTemplate - java

I have an issue with the connection pool being exhausted when querying a database using Spring Boot and NamedParameterJdbcTemplate.
This is how it is supposed to work:
Get a request from outside, with some parameters about how the house is supposed to be built.
Based on the parameters received, use some REST endpoints and two DAOs to gather data.
Use data to create House object and return it.
Receive more requests...
How it works as of now:
Request is good.
Data from the REST endpoints: OK. Data from the DAOs: OK only for the first 50 requests when both DAOs are enabled; after that, NOT OK. When one of the DAOs is disabled, no connections are blocked.
After 50 houses are built OK, the rest take forever to finish and end up with no windows.
This makes the app unusable for further requests, as they simply time out.
I get this exception when I call the endpoint more than 50 times (the max pool size):
com.atomikos.jdbc.AtomikosSQLException: Connection pool exhausted - try increasing 'maxPoolSize' and/or 'borrowConnectionTimeout' on the DataSourceBean.
And it will stay like this until I restart the app. It seems like there is something off about my DAO or configuration, but I haven't been able to figure out what, despite searching all day. If anyone can help, I will be thankful.
Extra info:
No other exceptions are thrown that I am aware of.
All the other data is retrieved correctly.
Send help.
UPDATE:
I did some more experiments:
This app uses another DAO that I didn't mention before because I forgot.
It works almost the same, only it connects to a different database, so it has a separate configuration. It also uses NamedParameterJdbcTemplate, and @Qualifier is used to select the correct one.
Now, what I discovered is that disabling one or the other DAO stops the connections from being eaten. So the question is: why can't they coexist in peace?
This is the dao.
@Component
public class WindowsDao {

    private static final Logger LOG = LoggerFactory.getLogger(WindowsDao.class);

    private static final String PARAM_1 = "param";

    private static final String SELECT_ALL = ""
            + " SELECT "
            + " STUFF "
            + " FROM TABLE "
            + " WHERE "
            + " THING = :" + PARAM_1
            + " WITH UR";

    @Autowired
    private NamedParameterJdbcTemplate myTemplate;

    public Optional<List<Window>> getWindows(
            final WindowCode windowCode) {
        final MapSqlParameterSource queryParameters = new MapSqlParameterSource()
                .addValue(PARAM_1, windowCode.getValue());
        final Optional<List<Window>> windows;
        try {
            windows = Optional.of(myTemplate.query(
                    SELECT_ALL,
                    queryParameters,
                    new WindowsRowMapper()));
        } catch (final EmptyResultDataAccessException e) {
            LOG.warn("No results were found.");
            return Optional.empty();
        }
        return windows;
    }
}
The DAO is called from this service:
@Service
@Transactional
public class WindowsService {

    @Autowired
    private WindowsDao windowsDao;

    public Optional<List<Window>> getWindows(
            final WindowCode windowCode) {
        return windowsDao.getWindows(windowCode);
    }
}
Which is called from this service:
@Service
@Transactional
public class AssembleHouseService {

    // some things

    @Autowired
    private WindowsService windowsService;

    public House buildHouse(final SomeParams params) {
        // This service will fetch parts of the house
        HouseBuilder builder = House.builder();
        // call other services and then...
        builder.windows(windowsService.getWindows(args).orElse(/*something*/));
        // and then some more things...
        return builder.build();
    }
}
This is what I use to configure the datasource:
myDb:
driver: db2
schema: STUFF
unique-resource-name: STUFF
database-name: STUFF1
server-name: myServer
port: 12312
username: hello
password: world
driver-type: 4
min-pool-size: 2
max-pool-size: 50
RowMapper:
public class WindowsRowMapper implements RowMapper<Window> {

    @Override
    public Window mapRow(final ResultSet rs, final int rowNum)
            throws SQLException {
        return new Window(
                rs.getString("WSIZE"),
                rs.getString("DESCRIPTION"),
                rs.getString("COLOR"));
    }
}

If you have two read-only DAOs in the same transaction, then you may have hit a known bug in the Atomikos open-source edition that manifests itself only in this particular scenario.
It's been fixed in the commercial edition but not (yet) in the open-source one.
Hope that helps

Just posting here for those who are looking for a workaround:
If you can't change to a different version of Atomikos (or just ditch it), what worked for me was adding
Propagation.REQUIRES_NEW
to services that used those different data sources, so it would be:
@Service
@Transactional(propagation = Propagation.REQUIRES_NEW)
It seems like putting these two read operations into separate transactions makes Atomikos close each transaction and release its connection properly.
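For example, applied to the service from the question, the sketch below shows the annotation in place; the service wrapping the other DAO gets the same annotation (the method body is unchanged from the question):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional(propagation = Propagation.REQUIRES_NEW)
public class WindowsService {

    @Autowired
    private WindowsDao windowsDao;

    public Optional<List<Window>> getWindows(final WindowCode windowCode) {
        // runs in its own transaction, which Atomikos commits (releasing
        // the connection) as soon as this method returns
        return windowsDao.getWindows(windowCode);
    }
}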

Related

Is there a way to drop all tables or truncate inside a postgres testcontainer

I'm looking for a way to keep my component tests self-contained.
To achieve this behavior, some of the tests need a 'clean database', or at least a 'clean table'. I still haven't found a way to do this inside a Testcontainers container.
So here is what I've tried so far:
My container setup class:
public class PostgreSqlTestContainer implements QuarkusTestResourceLifecycleManager {

    public static final PostgreSQLContainer<?> POSTGRES = new PostgreSQLContainer<>("postgres:alpine");

    @Override
    public Map<String, String> start() {
        POSTGRES.start();
        return some_db_config_as_per_doc;
    }

    @Override
    public void stop() {
        POSTGRES.stop();
    }
}
Here is the test class:
@QuarkusTest
@QuarkusTestResource(PostgreSqlTestContainer.class)
class UserResourcesTest {

    @Test
    void scenario_one() {
        // create a new user
        // do some stuff (@POST.. check HTTP == 201)
    }

    @Test
    void scenario_two() {
        // create new user
        // do some stuff (@POST.. check HTTP == 201) (pass)
        // look for all users on database
        // do more stuff (@GET.. check HTTP == 200) (pass)
        // assert that only 1 user was found
        // since scenario_one should not interfere with scenario_two (fail)
    }
}
The second scenario fails because some 'dirt' from the first test is still in the DB container.
I've tried stopping/starting the container for each test (a very, very slow process, and sometimes I get an error, and then it's very slow again):
@BeforeEach
void setup(){
PostgreSqlTestContainer.POSTGRES.stop();
PostgreSqlTestContainer.POSTGRES.start();
}
Also tried to truncate the table / drop the whole db:
@Inject
EntityManager entityManager;

@BeforeEach
void rollBack(){
truncate();
}
void truncate(){
Query nativeQuery = entityManager.createNativeQuery("DROP DATABASE IF EXISTS db_name");
nativeQuery.executeUpdate();
}
I'm looking for any workaround for this problem; I just want to somehow use a @BeforeEach to clean up the DB before each test. All I want is a clean environment for each test.
Create a template test database with the name test_template.
After each test:
1. Disconnect all sessions from the test database (not required with PostgreSQL v13):
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'test';
2. Drop the database with
DROP DATABASE test;
(in v13, use the additional FORCE option instead of step 1)
3. Create a new test database with
CREATE DATABASE test TEMPLATE test_template;
Note: you have to enable autocommit in the JDBC driver for CREATE DATABASE and DROP DATABASE to work.
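If you drive this recycle step from Java (e.g. in a @BeforeEach), a minimal JDBC sketch could look like the following - the URL, credentials, and database names are assumptions, and the connection must go to a maintenance database (e.g. postgres), never to the database being dropped:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

void recreateTestDatabase() throws Exception {
    // connect to the maintenance DB, not to 'test' itself
    try (Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost:5432/postgres", "postgres", "postgres")) {
        conn.setAutoCommit(true); // CREATE/DROP DATABASE refuse to run inside a transaction
        try (Statement st = conn.createStatement()) {
            // step 1: kick out all sessions still connected to 'test'
            st.execute("SELECT pg_terminate_backend(pid) FROM pg_stat_activity"
                    + " WHERE datname = 'test'");
            // steps 2 and 3: drop and recreate from the template
            st.execute("DROP DATABASE IF EXISTS test");
            st.execute("CREATE DATABASE test TEMPLATE test_template");
        }
    }
}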

accessing child constant in parent class in java

OK, so I have an interesting problem. I am using java/maven/spring-boot/cassandra... and I am trying to create a dynamic instantiation of the Mapper setup they use.
E.g.:
//Users.java
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "mykeyspace", name = "users")
public class Users {
    @PartitionKey
    public UUID id;
    //...
}
Now, in order to use this I would have to explicitly say...
Mapper<Users> userMapper = (DB).mapper(Users.class);
obviously replacing (DB) with my db class.
Which is a great model, but I am running into the problem of code repetition. My Cassandra database has 2 keyspaces, both keyspaces have the exact same tables with the exact same columns in the tables, (this is not my choice, this is an absolute must have according to my company). So when I need to access one or the other based on a form submission it becomes a mess of duplicated code, example:
//myWebController.java
import ...;

@RestController
public class MyRestController {

    @RequestMapping(value = "/orders", method = RequestMethod.POST)
    public String getOrders(...) {
        if (Objects.equals(client, "first_client_name")) {
            //do all the things to get first keyspace objects like....
            Mapper<FirstClientUsers> usersMapper = (db).mapper(FirstClientUsers.class);
            //...
        } else if (Objects.equals(client, "second_client_name")) {
            Mapper<SecondClientUsers> usersMapper = (db).mapper(SecondClientUsers.class);
            //....
        }
        return "";
    }
}
I have been trying to use methods like...
Class<?> cls = Class.forName(STRING_INPUT_VARIABLE_HERE);
and that works OK for plain classes, but it no longer works for the Accessor stuff, because Accessors have to be interfaces: once you only have a Class object, you have lost the interface type.
I am trying to find any other solution for how to make this dynamic and not have to duplicate code for every possible client. Each client will have its own keyspace in Cassandra, with the exact same tables as all the other ones.
I cannot change the database model, this is a must according to the company.
With PHP this is extremely simple, since it doesn't care about typecasting as much; I can easily do...
function getData($name) {
$className = $name . 'Accessor';
$class = new $className();
}
and poof, I have a dynamic class; but the problem I am running into is the type specification, where I have to explicitly say...
FirstClientUsers users = new FirstClientUsers();
//or even
FirstClientUsers users = Class.forName("FirstClientUsers");
I hope this is making sense; I can't imagine that I am the first person to have this problem, but I can't find any solutions online. So I am really hoping that someone knows how I can get this accomplished without duplicating the exact same logic for every single keyspace we have. It makes the code unmaintainable and unnecessarily long.
Thank you in advance for any help you can offer.
Do not specify the keyspace in your model classes, and instead, use the so-called "session per keyspace" pattern.
Your model class would look like this (note that the keyspace is left undefined):
@Table(name = "users")
public class Users {
    @PartitionKey
    public UUID id;
    //...
}
Your initialization code would have something like this:
Map<String, Mapper<Users>> mappers = new ConcurrentHashMap<String, Mapper<Users>>();
Cluster cluster = ...;
Session firstClientSession = cluster.connect("keyspace_first_client");
Session secondClientSession = cluster.connect("keyspace_second_client");
MappingManager firstClientManager = new MappingManager(firstClientSession);
MappingManager secondClientManager = new MappingManager(secondClientSession);
mappers.put("first_client", firstClientManager.mapper(Users.class));
mappers.put("second_client", secondClientManager.mapper(Users.class));
// etc. for all clients
You would then store the mappers object and make it available through dependency injection to other components in your application.
Finally, your REST service would look like this:
import ...
@RestController
public class MyRestController {

    @javax.inject.Inject
    private Map<String, Mapper<Users>> mappers;

    @RequestMapping(value = "/orders", method = RequestMethod.POST)
    public String getOrders(...) {
Mapper<Users> usersMapper = getUsersMapperForClient(client);
// process the request with the right client's mapper
}
private Mapper<Users> getUsersMapperForClient(String client) {
if (mappers.containsKey(client))
return mappers.get(client);
throw new RuntimeException("Unknown client: " + client);
}
}
Note how the mappers object is injected.
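In case it helps, here is a sketch of exposing that map as a Spring bean so it can be injected as above; the configuration class and bean names are my own, and it assumes the DataStax driver mapping API used in this answer:

@Configuration
public class MapperConfig {

    @Bean
    public Map<String, Mapper<Users>> mappers(Cluster cluster) {
        Map<String, Mapper<Users>> mappers = new ConcurrentHashMap<>();
        // one session (and hence one mapper) per client keyspace
        mappers.put("first_client",
                new MappingManager(cluster.connect("keyspace_first_client")).mapper(Users.class));
        mappers.put("second_client",
                new MappingManager(cluster.connect("keyspace_second_client")).mapper(Users.class));
        return mappers;
    }
}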
Small nit: I would name your class User in the singular instead of Users (in the plural).

Apache Camel listener

I have created a router class and marked it as a @Bean in a @Configuration class. One thing I am not very sure about is how frequently Camel will query the database for the select result. I want Camel to retrieve and process a new entry as soon as it appears in the database.
public class SQLRouteBuilderForNewUserProcessing extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sql:" +
                "select id from users where status=" + Status.NEW.ordinal() +
                "?" +
                "consumer.onConsume=update users set status = " + Status.PROCESSING.ordinal() +
                " where id = :#id")
            .bean(UserDataTranslator.class, "transformToUserData")
            .to("log:uk.co.infogen.users?level=INFO");
    }
}
By default, the SQL consumer polls the database every 500 ms. You can configure this with consumer.delay:
from("sql:select ... &consumer.delay=5000")
.to(...)
See the documentation of the SQL component:
consumer.delay (long, default: 500) - Camel 2.11: SQL consumer only: delay in milliseconds between each poll.
From http://camel.apache.org/sql-component.html
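Applied to the route from the question, the endpoint URI would look something like this sketch (5000 ms is an arbitrary choice):

from("sql:" +
        "select id from users where status=" + Status.NEW.ordinal() +
        "?consumer.onConsume=update users set status = " + Status.PROCESSING.ordinal() +
        " where id = :#id" +
        "&consumer.delay=5000") // poll every 5 seconds instead of the default 500 ms
    .bean(UserDataTranslator.class, "transformToUserData")
    .to("log:uk.co.infogen.users?level=INFO");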

After a committed & shut-down transaction which added a new class to a graph, a new Tx doesn't see the class in the schema, though it is persisted

We persist a graph in one piece of code and then have another piece that tries to retrieve it. We open our transactions with the Spring bean below. Anyone who wants to access the database always calls the getGraph() method of this bean.
public class OrientDatabaseConnectionManager {
private OrientGraphFactory factory;
public OrientDatabaseConnectionManager(String path, String name, String pass) {
factory = new OrientGraphFactory(path, name, pass).setupPool(1,10);
}
public OrientGraphFactory getFactory() {
return factory;
}
public void setFactory(OrientGraphFactory factory) {
this.factory = factory;
}
/**
 * Method returns a graph instance from the factory's pool.
 * @return
 */
public OrientGraph getGraph(){
OrientGraph resultGraph = factory.getTx();
resultGraph.setThreadMode(OrientBaseGraph.THREAD_MODE.ALWAYS_AUTOSET);
return resultGraph;
}
}
(I was unable to fully understand the thread mode, but I think it is not related to the problem.)
The code, that persists the graph commits and shuts down, as you can see here:
OrientDatabaseConnectionManager connMan; //this is an injected bean from above.
public boolean saveGraphToOrientDB(
SparseMultigraph<SocialVertex, SocialEdge> graph, String label) {
boolean isSavedCorrectly = false;
OrientGraph graphO = connMan.getGraph();
try {
graphDBinput.saveGraph(graph, label, graphO);
// LOG System.out.println("Graph was saved with label "+label);
isSavedCorrectly = true;
} catch (AlreadyUsedGraphLabelException ex) {
Logger.getLogger(GraphDBFacade.class.getName()).log(Level.SEVERE, null, ex);
} finally {
graphO.shutdown(); //calls .commit() automatically normally, but commit already happens inside.
}
return isSavedCorrectly;
}
This commit works well - the data are always persisted; I checked every time in the OrientDB admin interface, and the first persisted graph is always viewable OK. It might be important to note that during saving, the label used defines a new class (thus modifying the schema, as I understand it), which is then used for the persisted graph.
The retrieval of the graph looks something like this:
#Override
public SocialGraph getSocialGraph(String label) {
OrientGraph graph = connMan.getGraph();
SocialGraph socialGraph = null;
try {
socialGraph = new SocialGraph(getAllSocialNodes(label, graph), getAllSocialEdges(label, graph));
} catch (Exception e) {
logger.error(e);
} finally {
graph.shutdown();
}
return socialGraph;
}
public List<Node> getAllSocialNodes(String label, OrientGraph graph) {
return constructNodes(graphFilterMan.getAllNodesFromGraph(label, graph));
}
public Set<Vertex> getAllNodesFromGraph(String graphLabel, OrientGraph graph) {
Set<Vertex> labelledGraph = new HashSet<>();
try{
Iterable<Vertex> configGraph = graph.getVerticesOfClass(graphLabel);
for(Vertex v : configGraph){ //THE CODE CRASHES HERE WITH "CLASS WITH NAME graphLabel DOES NOT EXIST"
labelledGraph.add(v);
}
} catch(Exception ex){
logger.error(ex);
graph.rollback();
}
return labelledGraph;
}
So the problem is that when we persist a new graph with a new class, say "graph01", and then want to retrieve it, it works. Later we create a "graph02" and want to retrieve it, but it crashes as commented above - OrientDB tells you that the class with the name "graph02" does not exist.
It does exist in the admin interface at that time; however, when I debug, the class is actually not in the schema right after the call to factory.getTx().
Right at the beginning, when we get a transactional graph instance from the factory, the rawGraph's underlying database metadata (schema proxy -> delegate schema shared -> classes) is WITHOUT the new class, which I can see committed in the database.
[Debugger screenshot: there should be one more class in the schema - the one that was persisted (and committed) a while ago, which can be seen in the OrientDB admin interface but is not present in the variable.]
What I presume is happening is that the pool from which the factory gets the transaction has a cached schema of some sort, which does not get refreshed when we add a new class.
Why does the schema not show the new class when we are trying to get the new graph out? Does the schema not get refreshed?
I found this in the schema documentation:
NOTE: Changes to the schema are not transactional, so execute them outside a transaction.
So should we create the new class outside a transaction, and would the schema in the context then get updated?
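For illustration, that could look something like this sketch (assuming the 1.7.x Graph API, where the factory can hand out a non-transactional graph; the class name is just an example):

// obtain a NON-transactional graph from the same factory for the schema change
OrientGraphNoTx noTxGraph = factory.getNoTx();
try {
    if (noTxGraph.getVertexType("graph02") == null) {
        noTxGraph.createVertexType("graph02"); // schema change, outside any transaction
    }
} finally {
    noTxGraph.shutdown();
}
// only afterwards open the transactional graph and persist the vertices
OrientGraph graph = factory.getTx();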
(Maybe I am understanding the concepts wrong - I first got in contact with OrientDB just yesterday, and I am trying to find the problem in already-written code.)
The DB we use is remote:localhost/socialGraph
OrientDB version 1.7.4
We noticed roughly the same issue in our code: schema changes aren't visible in pooled connections.
We also have a sort of factory that gets a connection. What we do is keep a schema version number; each time an operation changes the schema, we bump the number, and when a new connection is opened, we check whether the schema version has changed.
When the schema has changed, we reload the schema, close the pool, and recreate it. This method has proven to work for us (we are currently on version 2.0.15).
Here's the relevant code:
private static volatile int schemaVersion = -1;
private OPartitionedDatabasePool pool;
protected void createPool() {
pool = new OPartitionedDatabasePool(getUrl(), getUsername(), getPassword());
}
@Override
public synchronized ODatabaseDocumentTx openDatabase() {
ODatabaseDocumentTx db = pool.acquire();
//DatabaseInfo is a simple class, kept in a static context, that holds the schema version.
DatabaseInfo databaseInfo = CurrentDatabaseInfo.getDatabaseInfo();
ODocument document = db.load((ORID) databaseInfo.getId(), "schemaVersion:0", true);
Integer version = document.field("schemaVersion");
if (schemaVersion == -1) {
schemaVersion = version;
} else if (schemaVersion < version) {
db.getMetadata().getSchema().reload();
schemaVersion = version;
pool.close();
createPool();
db = pool.acquire();
}
return db;
}
In the end the problem was that we had two Liferay projects, each with its own Spring application context in its WAR file, and when we deployed these projects as portlets within Liferay, they created two contexts, each holding one OrientDatabaseConnectionManager.
In one context the schema was being changed, and even though I reset the connection and reloaded the schema, that only happened for the connection manager/factory in that one context. The retrieval of the graph, however, was happening in the portlet of the other project, which therefore used an outdated schema (it was not reloaded, because the reloading happened in the other Spring context) - thus the error.
So you have to be careful - either share one Spring application context with beans across all your portlets (which is possible by having a parent application context; you can read more about it here),
OR
check for changes in the schema from within the same project which you will also use to retrieve the data later.

Spring Hibernate transaction propagations aren't working properly

I have a transactional class in my project with the following 2 methods:
@Repository(value = "usersDao")
@Transactional(propagation = Propagation.REQUIRED)
public class UsersDaoImpl implements UsersDao {
@Autowired
SessionFactory sessionFactory;
/* some methods here... */
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW, readOnly = false, rollbackFor = {Exception.class})
public void pay(Users payer, int money) throws Exception {
if( payer.getMoney() < money ) {
throw new Exception("");
}
payer.setMoney(payer.getMoney()-money);
this.sessionFactory.getCurrentSession().update(payer);
}
@Override
@Transactional(readOnly = false, rollbackFor = {Exception.class})
public void makeTransfer(Users from, Users to, int money) throws Exception {
System.out.println("Attempting to make a transfer from " + from.getName() + " to " + to.getName() + "... sending "+ money +"$");
to.setMoney(to.getMoney()+money);
if( from.getMoney() < 10 ) {
throw new Exception("");
}
pay(from, 10);
if( from.getMoney() < money ) {
throw new Exception("");
}
from.setMoney(from.getMoney()-money);
this.sessionFactory.getCurrentSession().update(from);
this.sessionFactory.getCurrentSession().update(to);
}
}
The assumption is that when somebody makes a transfer, they must pay a 10$ tax. Let's say there are 2 users who both have 100$, and I want to make a transfer (User1 -> User2) of 95$. First, in makeTransfer, I check whether User1 is able to pay the tax. He is, so I move forward and check whether he has 95$ left for the transfer. He doesn't, so the transaction is rolled back. The problem is that in the end they both have 100$. Why? For the method pay I set Propagation.REQUIRES_NEW, which means it should execute in a separate transaction. So why is it also rolled back? The tax payment should actually be saved to the database and only the transfer should be rolled back. The whole point of doing this, for me, is understanding propagations. I understand them theoretically but can't manage to build a real example (of how a propagation change affects my project). If this example is ill-suited, I'd love to see another one.
What M. Deinum said.
Furthermore, according to the Spring documentation:
Consider the use of AspectJ mode (see mode attribute in table below) if you expect self-invocations to be wrapped with transactions as well. In this case, there will not be a proxy in the first place; instead, the target class will be weaved (that is, its byte code will be modified) in order to turn @Transactional into runtime behavior on any kind of method.
To use AspectJ, write
<tx:annotation-driven transaction-manager="transactionManager" mode="aspectj"/>
instead of
<tx:annotation-driven transaction-manager="transactionManager" />
Source:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html
I solved my problem. Thanks, guys, for pointing out AspectJ, but that wasn't the end of it. I did deeper research, and it turned out that for @Transactional Spring beans there is an aspect proxy which is responsible (as far as I get it) for managing transactions (creating them, rolling them back, etc.) for @Transactional methods. But it fails on self-invocation, because calling through this bypasses the proxy, so no new transaction is created. A simple way I used to get this working is calling pay(..) not through this, but through the bean from the Spring container, like this:
UsersDao self = (UsersDao) this.context.getBean("usersDao");
self.pay(from, 10);
Now, as the call goes through the bean, it passes through the proxy, so a new transaction is created.
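For completeness, a minimal sketch of the adjusted call site (this assumes the ApplicationContext is autowired into the DAO as context):

@Autowired
private ApplicationContext context;

// inside makeTransfer(..), instead of calling pay(from, 10) directly:
UsersDao self = (UsersDao) context.getBean("usersDao");
self.pay(from, 10); // goes through the proxy, so REQUIRES_NEW takes effect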
Just for learning purposes, I used these two lines to check whether the two transactions are the same object or not:
TransactionStatus status = TransactionAspectSupport.currentTransactionStatus();
System.out.println("Current transaction: " + status.toString());
