I'm using the following JDBC-based Java code to insert new rows into my PostgreSQL database.
public void initialize(DataSource dataSource) {
setDataSource(dataSource);
DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager();
dataSourceTransactionManager.setDataSource(dataSource);
transactionTemplate = new TransactionTemplate(dataSourceTransactionManager);
}
transactionTemplate.execute(transactionStatus -> {
getJdbcTemplate().batchUpdate(insertGroupSql, groupsArgs);
return true;
});
If the batchUpdate fails, the entity that failed to insert is still kept in the cache, and any new entities that arrive afterwards fail as well, because the batch first tries to insert the invalid entity again.
How can I clear the cached entities from getJdbcTemplate() before calling execute?
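For context, JdbcTemplate itself keeps no entity cache between calls, so the pending rows must live in a caller-owned collection. A minimal sketch of one way to drop them after a failed batch, assuming groupsArgs is a mutable List<Object[]> held by the caller (an assumption, since its type isn't shown):
try {
    transactionTemplate.execute(transactionStatus -> {
        getJdbcTemplate().batchUpdate(insertGroupSql, groupsArgs);
        return true;
    });
} catch (DataAccessException e) {
    // the transaction has rolled back; discard the pending rows so the next
    // batch does not retry the invalid entity (assumes groupsArgs is mutable)
    groupsArgs.clear();
    throw e;
}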
I am having a hard time changing the Oracle datasource schema for my Spring Boot app, which will eventually be used by my Camel routes. I log in as the user readonly, but all the data is in the schema mydata; readonly has read rights to the mydata schema.
I have tried running ALTER SESSION SET CURRENT_SCHEMA=mydata against the datasource (by autowiring it and getting the Connection object from it), and it doesn't work. I have no issue running selects from Statement objects I create off that connection (see code below).
If I create a REST endpoint that executes ALTER SESSION SET CURRENT_SCHEMA=mydata and call it from Postman or a browser, that changes my schema and my other endpoints work, but I would prefer not to do it that way, since I would have to call that endpoint first. I guess I could call it when my Spring Boot app starts up, but it just seems like the wrong way to do it.
I also do not want to hardcode/prefix all my tables with the schema name, since different regions have different schema names; I'd like to configure the schema name in the properties file.
Here is my application.properties. I have tried various ways to set the schema in the properties file based on other Stack Overflow posts, and so far none of them work.
spring.datasource.first.url=jdbc:oracle:thin:@myserver:10100:db9
spring.datasource.first.username=readonly
spring.datasource.first.password=readonlypass
## DOESN'T WORK -> spring.datasource.hikari.schema=mydata
## DOESN'T WORK -> spring.datasource.hikari.first.schema=mydata
#sync database
spring.datasource.second.driverClassName=oracle.jdbc.OracleDriver
spring.datasource.second.url = jdbc:oracle:thin:@myserver2:10100:db15
spring.datasource.second.username = eam
spring.datasource.second.password = eampass
Here is the code from my Spring Boot application:
/**
* A spring-boot application that includes a Camel route builder to set up the Camel routes
*/
@SpringBootApplication
@ImportResource({"classpath:spring/camel-context.xml"})
public class Application extends RouteBuilder {
int workorderSyncFrequency = 5000;
// Autowired the first datasource in an attempt to alter the session and set my schema name.
@Autowired
DataSource firstDataSource;
// must have a main method spring-boot can run
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
//setup first datasource
@Bean
@Primary
@ConfigurationProperties("spring.datasource.first")
public DataSourceProperties firstDataSourceProperties() {
return new DataSourceProperties();
}
@Bean
@Primary
@ConfigurationProperties("spring.datasource.first.configuration")
public DataSource firstDataSource() {
return firstDataSourceProperties().initializeDataSourceBuilder()
.type(HikariDataSource.class).build();
}
//setup second data source
@Bean
@ConfigurationProperties("spring.datasource.second")
public DataSourceProperties secondDataSourceProperties() {
return new DataSourceProperties();
}
@Bean
@ConfigurationProperties("spring.datasource.second.configuration")
public DataSource secondDataSource() {
return secondDataSourceProperties().initializeDataSourceBuilder()
.type(HikariDataSource.class).build();
}
@Override
public void configure() throws Exception {
Connection con = DataSourceUtils.getConnection(firstDataSource);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select count(*) from mydata.ASSET");
rs.next();
//simply testing I am using the correct datasource and I can query from the second schema and this works.
System.out.println("++++++++++++++++++++++ASSET COUNT+++++++++++++++++++"+rs.getInt(1));
//Tried both of these statements, neither works.
//stmt.executeQuery("ALTER SESSION SET CURRENT_SCHEMA=mydata");
//stmt.executeUpdate("ALTER SESSION SET CURRENT_SCHEMA=mydata");
//Connection defaults to autocommit; tried this just in case.
con.commit();
//ASSET table doesn't exist on the readonly schema, only on the mydata schema
//if I call test3 I will get a table or view does not exist, unless I first call the "schema"
//endpoint below.
rest()
.get("test3")
.produces(MediaType.APPLICATION_JSON_VALUE)
.route()
.to("sql:SELECT * FROM ASSET where rownum < 10"
+ "?dataSource=firstDataSource&outputType=SelectList");
//This works if I call this route, but it's a weird way to make this work.
rest()
.get("schema")
.produces(MediaType.APPLICATION_JSON_VALUE)
.route()
.to("sql:ALTER SESSION SET CURRENT_SCHEMA=mydata"
+ "?dataSource=firstDataSource&outputType=SelectList");
    }
}
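One avenue worth noting, hedged because it isn't verified against this setup: HikariCP has its own schema and connectionInitSql pool settings, and since firstDataSource() binds spring.datasource.first.configuration onto HikariDataSource, they may be reachable from the properties file along these lines:
## Hypothetical: binds to HikariDataSource.setSchema(...)
spring.datasource.first.configuration.schema=mydata
## Hypothetical alternative: run the ALTER SESSION on every pooled connection
spring.datasource.first.configuration.connection-init-sql=ALTER SESSION SET CURRENT_SCHEMA=mydata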
My project is on spring-boot-starter-parent "1.5.9.RELEASE" and I'm migrating it to spring-boot-starter-parent "2.3.1.RELEASE".
This is a multi-tenant environment application, where one database has multiple schemas and, based on the tenant id, execution switches between schemas.
I had achieved this schema switching using SimpleNativeJdbcExtractor, but in the latest Spring Boot version NativeJdbcExtractor is no longer available.
Code snippet for the existing implementation:
@Bean
@Scope(
value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
proxyMode = ScopedProxyMode.TARGET_CLASS)
public JdbcTemplate jdbcTemplate() {
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
SimpleNativeJdbcExtractor simpleNativeJdbcExtractor = new SimpleNativeJdbcExtractor() {
@Override
public Connection getNativeConnection(Connection con) throws SQLException {
LOGGER.debug("Set schema for getNativeConnection "+Utilities.getTenantId());
con.setSchema(Utilities.getTenantId());
return super.getNativeConnection(con);
}
@Override
public Connection getNativeConnectionFromStatement(Statement stmt) throws SQLException {
LOGGER.debug("Set schema for getNativeConnectionFromStatement "+Utilities.getTenantId());
Connection nativeConnectionFromStatement = super.getNativeConnectionFromStatement(stmt);
nativeConnectionFromStatement.setSchema(Utilities.getTenantId());
return nativeConnectionFromStatement;
}
};
simpleNativeJdbcExtractor.setNativeConnectionNecessaryForNativeStatements(true);
simpleNativeJdbcExtractor.setNativeConnectionNecessaryForNativePreparedStatements(true);
jdbcTemplate.setNativeJdbcExtractor(simpleNativeJdbcExtractor);
return jdbcTemplate;
}
Here Utilities.getTenantId() (a value stored in a ThreadLocal) gives the schema name based on the REST request.
Questions:
What are the alternatives to NativeJdbcExtractor, so that the schema can be changed dynamically for JdbcTemplate?
Is there any other way to set the schema based on the request while creating the JdbcTemplate bean?
Any help, code snippet, or guidance to solve this issue is deeply appreciated.
Thanks.
When I was running the application in debug mode I saw that Spring was selecting the Hikari DataSource.
I had to intercept the getConnection call and update the schema.
So I did something like below.
I created a custom class which extends HikariDataSource:
public class CustomHikariDataSource extends HikariDataSource {
@Override
public Connection getConnection() throws SQLException {
Connection connection = super.getConnection();
connection.setSchema(Utilities.getTenantId());
return connection;
}
}
Then, in the config class, I created a bean for my CustomHikariDataSource class.
@Bean
public DataSource customDataSource(DataSourceProperties properties) {
final CustomHikariDataSource dataSource = (CustomHikariDataSource) properties
.initializeDataSourceBuilder().type(CustomHikariDataSource.class).build();
if (properties.getName() != null) {
dataSource.setPoolName(properties.getName());
}
return dataSource;
}
This will be used by the JdbcTemplate bean.
@Bean
@Scope(
value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
proxyMode = ScopedProxyMode.TARGET_CLASS)
public JdbcTemplate jdbcTemplate() throws SQLException {
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
return jdbcTemplate;
}
With this approach the DataSource bean is created only once, and for every JdbcTemplate access the proper schema is set at runtime.
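For completeness, a minimal sketch of the ThreadLocal holder that Utilities.getTenantId() is assumed to read from; the setter and clear method are illustrative additions, not from the original code:
public final class Utilities {
    private static final ThreadLocal<String> TENANT_ID = new ThreadLocal<>();

    // typically called by a request filter/interceptor once the tenant is known
    public static void setTenantId(String tenantId) {
        TENANT_ID.set(tenantId);
    }

    public static String getTenantId() {
        return TENANT_ID.get();
    }

    // call when the request completes, to avoid leaking the tenant across pooled threads
    public static void clearTenantId() {
        TENANT_ID.remove();
    }
}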
There's no need to get rid of JdbcTemplate. NativeJdbcExtractor was removed in Spring Framework 5 as it isn't needed with JDBC 4.
You should replace your usage of NativeJdbcExtractor with calls to connection.unwrap(Class). The method is inherited by Connection from JDBC's Wrapper.
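For illustration, a hedged sketch of that pattern; oracle.jdbc.OracleConnection is only an example unwrap target, assuming an Oracle driver on the classpath:
try (Connection con = dataSource.getConnection()) {
    // isWrapperFor/unwrap come from java.sql.Wrapper, implemented by pooled connections
    if (con.isWrapperFor(OracleConnection.class)) {
        OracleConnection oracleCon = con.unwrap(OracleConnection.class);
        // vendor-specific API is available here
    }
}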
You may also want to consider using AbstractRoutingDataSource which is designed to route connection requests to different underlying data sources based on a lookup key.
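A minimal sketch of that approach, assuming Utilities.getTenantId() from the question supplies the lookup key and the per-tenant DataSources are registered elsewhere via setTargetDataSources(...):
public class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // Spring calls this on every getConnection() and routes to the
        // target DataSource registered under the returned key
        return Utilities.getTenantId();
    }
}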
I am facing an issue with the transactions configured in my code.
Below is the code with transactions, which writes data to the DB.
Writer.java
class Writer {
@Inject
private SomeDAO someDAO;
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void write(){
this.batchWrite();
}
private void batchWrite() {
try {
someDAO.writeToTable1(true);
} catch(Exception ex) {
someDAO.writeToTable1(false);
}
someDAO.writeToTable2();
}
}
SomeDAO.java
class SomeDAO {
@Inject
private JdbcTemplate jdbcTemplate;
public void writeToTable1(boolean flag) {
// Writes data to table 1 using jdbcTemplate
jdbcTemplate.update();
}
public void writeToTable2() {
// Writes data to table 2 using jdbcTemplate
jdbcTemplate.update();
}
}
Here the data is stored into table 1 properly, but sometimes table 2 gets skipped.
I am not sure how this is happening, as both tables are written within the same transaction.
Either the transaction is partially committing the data or partially rolling it back.
I suspect that the JdbcTemplate object I am injecting in the SomeDAO class is creating a new connection instead of using the existing connection of the transaction.
Can anyone please help me here?
Try defining a transaction manager bean backed by your jdbcTemplate's DataSource and referencing it in @Transactional:
// Create a bean
@Bean
public PlatformTransactionManager txnManager() throws Exception {
return new DataSourceTransactionManager(jdbcTemplate().getDataSource());
}
And then use this transaction manager in @Transactional("txnManager").
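A hedged usage sketch, pointing the write() method from the question at that bean by name:
@Transactional(value = "txnManager", propagation = Propagation.REQUIRES_NEW)
public void write() {
    this.batchWrite();
}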
I'm implementing a JDBC database access API (basically a wrapper) and I'm using Spring's JdbcTemplate with PlatformTransactionManager to handle transactional operations. Everything looks OK, but I cannot understand how JdbcTemplate manages concurrent transactions.
I'll give you a simplified example based on the creation of students to make my point. Let's create two students, John and Jack: the first without errors and the second with one error. The steps and the code are below.
John starts a transaction
Execute John's insert without committing
Wait for Jack's insert
Jack starts a transaction
Execute Jack's insert with an error (age is null, but the database requires NOT NULL)
Rollback Jack's transaction
Commit John's transaction
StudentDAO
public class StudentJDBCTemplate implements StudentDAO {
private DataSource dataSource;
private JdbcTemplate jdbcTemplateObject;
private PlatformTransactionManager transactionManager;
// constructor, getters and setters
public TransactionStatus startTransaction() throws TransactionException {
TransactionDefinition def = new DefaultTransactionDefinition();
return transactionManager.getTransaction(def);
}
public void commitTransaction(TransactionStatus status) throws TransactionException {
transactionManager.commit(status);
}
public void rollbackTransaction(TransactionStatus status) throws TransactionException {
transactionManager.rollback(status);
}
public void create(String name, Integer age){
String SQL1 = "insert into Student (name, age) values (?, ?)";
jdbcTemplateObject.update( SQL1, name, age);
return;
}
}
MainApp
public class MainApp {
public static void main(String[] args){
// setup db connection etc
StudentJDBCTemplate studentDao = new StudentJDBCTemplate();
TransactionStatus txJohn = studentDao.startTransaction();
TransactionStatus txJack = studentDao.startTransaction();
studentDao.create("John", 20);
try {
studentDao.create("Jack", null); // **FORCE EXCEPTION**
} catch(Exception e){
studentDao.rollbackTransaction(txJack);
}
studentDao.commitTransaction(txJohn);
}
}
How does JdbcTemplate know that one transaction is OK but the other is not? From my understanding, despite our having created two transactions, JdbcTemplate will roll back Jack's AND John's transactions, because the query, execute, and update methods do not take a TransactionStatus parameter. Does that mean Spring's JdbcTemplate only supports one transaction at a time?!
All the operations in a single transaction are always executed as a single unit, so either all of them are committed or none.
If John starts a transaction that inserts and then updates, then either both operations (insert and update) succeed or neither does, and the transaction is not impacted by the one started by Jack.
How concurrent transactions interfere with each other is controlled by the isolation level, i.e. how a transaction sees data modified by another concurrent transaction.
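As a hedged illustration, reusing the transactionManager from the DAO above, the isolation level is chosen on the TransactionDefinition before the transaction starts:
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
// READ_COMMITTED: each transaction only sees data already committed by others
def.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
TransactionStatus status = transactionManager.getTransaction(def);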
I use the following technologies:
TestNG(6.9.10)
Spring(4.3.2.RELEASE)
Hibernate(5.1.0.Final)
Java 8
I test some functionality with integration tests, and I need to check that the entity is correctly saved/updated/deleted or otherwise changed. Here is the sessionFactory configuration in my .xml:
<bean id="sessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean"
p:dataSource-ref="dataSource" p:hibernateProperties-ref="jdbcProperties">
<property name="packagesToScan" value="my.package"/>
</bean>
and test class example:
@ContextConfiguration(locations = {"classpath:/applicationContext-test.xml",
"classpath:/applicationContext-dao.xml",
"classpath:/applicationContext-orm.xml"})
public class AccountServiceTest extends AbstractTransactionalTestNGSpringContextTests {
@Autowired
private SomeService someService;
@Autowired
private SessionFactory sessionFactory;
@Test
public void updateEntity() {
//given
Long entityId = 1L;
SomeClass expected = someService.get(entityId);
String newPropertyValue = "new value";
//when
someService.changeEntity(expected, newPropertyValue);
// Manual flush is required to avoid false positive in test
sessionFactory.getCurrentSession().flush();
//then
expected = someService.get(entityId);
Assert.assertEquals(expected.getChangedProperty() , newPropertyValue);
}
service method:
@Transactional
@Override
public int changeEntity(SomeClass entity, String newPropertyValue) {
return dao().executeNamedQuery(REFRESH_ACCESS_TIME_QUERY,
CollectionUtils.arrayToMap("id", entity.getId(), "myColumn", newPropertyValue));
}
dao:
@Override
public int executeNamedQuery(final String query, final Map<String, Object> parameters) {
Query queryObject = sessionFactory.getCurrentSession().getNamedQuery(query);
if (parameters != null) {
for (Map.Entry<String, Object> entry : parameters.entrySet()) {
NamedQueryUtils.applyNamedParameterToQuery(queryObject, entry.getKey(), entry.getValue());
}
}
return queryObject.executeUpdate();
}
But my entity property didn't change after flush().
As described here, replacing the @Autowired SessionFactory with a @PersistenceContext EntityManager, I should use an EntityManager to flush(). But I can't do this: I can't transform the sessionFactory into an EntityManager, and I don't want to create an EntityManager for my application, because I would need to change my .xml config file and other files.
Are there any other solutions to this problem?
Your code is actually working as expected.
Your test method is transactional, and thus your Session is alive during the whole execution of the test method. The Session is also the first-level cache for Hibernate, and when an entity is loaded from the database it is put into the Session.
So the line SomeClass expected = someService.get(entityId); will load the entity from the database and, with it, also put it in the Session.
Now the line expected = someService.get(entityId); first checks (well, actually the dao method underneath checks) whether an entity of the requested type with that id is already present in the Session; if so, it simply returns it. It will not query the database!
The main problem is that you are using Hibernate in the wrong way: you are basically bypassing Hibernate with the way you are updating your database. You should update your entity and persist it; you should not write queries to update the database!
Annotated test method
@Test
public void updateEntity() {
//given
Long entityId = 1L;
SomeClass expected = someService.get(entityId); // load from db and put in Session
String newPropertyValue = "new value";
//when
someService.changeEntity(expected, newPropertyValue); // updates directly in the database, bypassing the Session and the entity
// Manual flush is required to avoid false positive in test
sessionFactory.getCurrentSession().flush();
//then
expected = someService.get(entityId); // return entity from Session
Assert.assertEquals(expected.getChangedProperty() , newPropertyValue);
}
To only fix the test, add a call to clear() after the flush().
sessionFactory.getCurrentSession().clear();
However, what you actually should do is stop writing code like that and use Hibernate and persistent entities in the correct way.
@Test
public void updateEntity() {
//given
Long entityId = 1L;
String newPropertyValue = "new value";
SomeClass expected = someService.get(entityId);
expected.setMyColumn(newPropertyValue);
//when
someService.changeEntity(expected);
sessionFactory.getCurrentSession().flush();
// now you should use a SQL query to verify the state in the DB.
Map<String, Object> dbValues = jdbcTemplate.queryForMap("select * from someClass where id=?", entityId);
//then
Assert.assertEquals(dbValues.get("myColumn"), newPropertyValue);
}
Your dao method should look something like this.
public void changeEntity(SomeClass entity) {
sessionFactory.getCurrentSession().saveOrUpdate(entity);
}