In Spring, if we define a method in a repository like findByName(String name), we can call it to retrieve data. What I want to know is: is there a way to call two or more such methods and have Spring send the queries to the database in one round instead of two? I would like to optimize performance in the case where I am certain that some SQL queries will be sent together.
Update: "one round" means that we send multiple SQL queries over one connection. The goal is to avoid more than one round-trip time when more than one SQL query is about to be sent.
e.g., query 1 is select * from table where xx=bb
query 2 is select * from another_table where zz=cc
In the trivial way, we would send the 2 queries like this:
1. send query 1 by calling the repository's findByXx method
2. send query 2 by calling the repository's findByZz method
In the above case, query 2 is sent only after query 1's response has come back. This is a waste, IMHO. I am looking for a way to send both queries at once and get both answers at once.
If you want to keep the database connection between these two queries, you must set up a transaction manager in your JPA configuration:
<bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="yourEntityManagerFactory" />
</bean>
<tx:annotation-driven transaction-manager="txManager" />
This means that when you annotate a @Service's method (or the whole class) with @Transactional, the same session/connection will be kept across your queries.
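For example, a minimal sketch (the service and the two repositories are illustrative, echoing the question's findByXx/findByZz methods):

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LookupService {

    private final FooRepository fooRepository;
    private final BarRepository barRepository;

    public LookupService(FooRepository fooRepository, BarRepository barRepository) {
        this.fooRepository = fooRepository;
        this.barRepository = barRepository;
    }

    // Both queries run inside one transaction and therefore on one borrowed
    // connection. Note they are still two sequential round trips; the
    // transaction only avoids re-acquiring a connection between them.
    @Transactional(readOnly = true)
    public void loadBoth(String bb, String cc) {
        List<Foo> foos = fooRepository.findByXx(bb);
        List<Bar> bars = barRepository.findByZz(cc);
        // ... combine results ...
    }
}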
For more info: https://www.baeldung.com/transaction-configuration-with-jpa-and-spring
Related
I have a page for which I need to run several queries from different DAO classes.
I want to use the same connection for all of the queries. I get the connection from the data source for this particular user request.
The idea is to use this connection for all DAO methods that are needed for this user request.
I see only one way at the moment: pass the connection as a parameter to each DAO method, something like this:
productDAO.get(connection, param);
objectDAO.get(connection, param);
The question is: is there another way to pass the connection to the different DAO methods for a single user request?
The technologies in the project: JDBC, Java EE
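One common alternative, sketched hypothetically (all names here are mine): bind the connection to the request thread so DAOs can look it up instead of receiving it as a parameter.

import java.sql.Connection;

// Hypothetical holder: a servlet filter obtains the connection at the start of
// the request, binds it here, and unbinds/closes it in a finally block.
public final class ConnectionHolder {

    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    private ConnectionHolder() { }

    public static void bind(Connection connection) { CURRENT.set(connection); }

    public static Connection get() { return CURRENT.get(); }

    public static void unbind() { CURRENT.remove(); }
}

Each DAO method then calls ConnectionHolder.get() instead of taking a Connection parameter. This is essentially what Spring's DataSourceUtils does for transaction-bound connections.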
How should we handle the situation where we have a Postgres database with many database roles (representing the users), and we want to use Hibernate so that every database statement is executed on a connection fetched as the specific user?
To get a Session/EntityManager we need to fetch it from an EntityManagerFactory, which requires a DB user/password, usually specified in persistence.xml like this:
<property name="javax.persistence.jdbc.user" value="SYSDBA"/>
<property name="javax.persistence.jdbc.password" value="masterkey"/>
Of course I can create a Session/EntityManager for every user using a separate EntityManagerFactory, but that is a costly operation. How can this problem be solved?
If the RDBMS is PostgreSQL, I think the best way to accomplish this is with the SET ROLE command. It changes the current role and permissions to whatever role is specified, and all SQL commands for the rest of the session are carried out as if you had logged in as that role from the beginning.
Here is a link to the Postgres documentation.
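For example, a minimal sketch (class and method names are mine), assuming the Hibernate Session can be unwrapped from the EntityManager:

import java.sql.Statement;

import javax.persistence.EntityManager;

import org.hibernate.Session;

public class RoleSwitcher {

    // Switches the effective role of the connection backing this EntityManager.
    // SET ROLE does not support bind parameters, so the role name is concatenated
    // and must therefore come from a trusted source.
    public static void setRole(EntityManager em, String role) {
        Session session = em.unwrap(Session.class);
        session.doWork(connection -> {
            try (Statement st = connection.createStatement()) {
                st.execute("SET ROLE " + role);
            }
        });
    }
}

Remember to issue RESET ROLE (or discard the connection) before the connection goes back to the pool, or the role will leak into the next request.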
Using Spring Integration, I am trying to build a simple message-producing component. Basically something like this:
<jdbc:inbound-channel-adapter
        channel="from.database"
        data-source="dataSource"
        query="SELECT * FROM my_table"
        update="DELETE FROM my_table WHERE id IN (:id)"
        row-mapper="someRowMapper">
    <int:poller fixed-rate="5000">
        <int:transactional/>
    </int:poller>
</jdbc:inbound-channel-adapter>
<int:splitter
        id="messageProducer"
        input-channel="from.database"
        output-channel="to.mq" />
<jms:outbound-channel-adapter
        channel="to.mq"
        destination="myMqQueue"
        connection-factory="jmsConnectionFactory"
        extract-payload="true" />
<beans:bean id="myMqQueue" class="com.ibm.mq.jms.MQQueue">
<!-- properties omitted --!>
</beans:bean>
The "messageProducer" may produce several messages per poll but not necessarily one per row.
My concern is that I want to make sure that rows are not deleted from my_table unless the messages produced have been committed to the MQ channel.
On the other hand, I will accept that, in case of a DB or network failure, rows are not deleted, causing duplicate messages to be produced. In other words, I will settle for a non-XA, one-phase commit with possible duplicates.
When trying to figure out what I need to put in my Spring configuration, I quickly get lost in endless discussions about transaction managers, AOP, and transaction advice chains, which I find difficult to understand (I know I ought to understand them, though).
I fear I will spend a lot of time cooking up a configuration that is not really necessary for the problem at hand.
So, my question is: can it be that simple, or do I need to provide explicit configuration for transaction synchronization?
But can I do something similar with a JDBC/JMS mix?
I'd say "Yes".
Please read Dave Syer's article about Best Effort 1PC, which is where the ChainedTransactionManager came from.
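For illustration, a minimal sketch (bean names are mine; shown as Java config, although equivalent XML bean definitions work the same way):

import javax.jms.ConnectionFactory;
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class TxConfig {

    // Transactions start in the order given and commit in reverse order, so the
    // JMS send commits before the JDBC delete. If the JMS commit fails, the
    // delete rolls back and the rows are re-polled: possible duplicates, no loss.
    @Bean
    public PlatformTransactionManager chainedTm(DataSource dataSource,
            ConnectionFactory jmsConnectionFactory) {
        return new ChainedTransactionManager(
                new DataSourceTransactionManager(dataSource),
                new JmsTransactionManager(jmsConnectionFactory));
    }
}

Reference it from the poller, e.g. <int:transactional transaction-manager="chainedTm"/>, so that both the delete and the JMS send take part in the chained transaction.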
In a Java application, I am using Spring-Data to access a Neo4j database via the REST binding.
The spring.xml used as a context contains the following lines:
<neo4j:config graphDatabaseService="graphDatabaseService" />
<neo4j:repositories base-package="org.example.graph.repositories"/>
<bean id="graphDatabaseService"
class="org.springframework.data.neo4j.rest.SpringRestGraphDatabase">
<constructor-arg index="0" value="http://example.org:1234/db/data" />
</bean>
My repository is very simple:
public interface FooRepository extends GraphRepository<Foo> {
}
Now, I would like to loop through some Foos:
for (Foo foo : fooRepository.findAll(new PageRequest(0, 5))) //...
However, the performance of this request is awful: It takes over 400 seconds (!) to complete.
After a bit of debugging, I found out that Spring-data generates the following query:
START `foo`=node:__types__(className="org.example.Foo") RETURN `foo`
It then looks as if paging is done on the client, and all Foos (more than 100,000) are transferred to the client. When issuing the above query to the Neo4j server using the web interface, it takes around 60 seconds. However, if I manually append "LIMIT 5", the execution time drops to around 0.5 seconds.
What am I doing wrong that prevents Spring Data from using server-side Cypher pagination?
According to the Programming Model documentation, "the expensive operations like traversals and querying are executed efficiently on the server side by using the REST API to forward those calls."
Or does this exclude pagination?
What other options do I have in this case?
You can do the following to handle this on the server side:
1. Provide your own query method in the repository.
2. The Cypher query should use order by, skip, and limit, and parameterize skip and limit so that you can pass in the values on a per-page basis.
E.g.
start john=node:users("name:pangea")
match john-[:HAS_SEEN]-(movie)
return movie
order by movie.name?
skip 20
limit 10
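Applied to the FooRepository above, a hedged sketch (the method name and the ordering property are illustrative; {0} and {1} are Spring Data Neo4j's positional query parameters):

import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.GraphRepository;

public interface FooRepository extends GraphRepository<Foo> {

    // SKIP and LIMIT are bound as parameters, so paging runs in Cypher on the
    // server and only the requested rows cross the wire.
    @Query("START foo=node:__types__(className=\"org.example.Foo\") "
         + "RETURN foo ORDER BY foo.name SKIP {0} LIMIT {1}")
    Iterable<Foo> findPage(int skip, int limit);
}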
I am using a Spring JDBC inbound channel adapter in my web application. If I deploy this application in a clustered environment, two or more instances pick up the same job and run it.
Can anybody help me overcome this issue by changing the Spring configuration?
I have attached my Spring configuration.
<int-jdbc:inbound-channel-adapter
query=" SELECT JOBID,
JOBKEY,
JOBPARAM
FROM BATCHJOB
WHERE JOBSTATUS = 'A' "
max-rows-per-poll="1" channel="inboundAdhocJobTable" data-source="dataSource"
row-mapper="adhocJobMapper"
update=" delete from BATCHJOB where JOBKEY in (:jobKey)"
>
<int:poller fixed-rate="1000" >
<int:advice-chain>
</int:advice-chain>
</int:poller>
</int-jdbc:inbound-channel-adapter>
Unfortunately this will not be possible without some sort of synchronization. Additionally, using the database as a message queue is not a good idea (http://mikehadlow.blogspot.de/2012/04/database-as-queue-anti-pattern.html). I'd try one of these approaches instead:
1. Use some sort of message bus plus a message store to hold the job objects rather than executing SQL directly. In that case you'll have to change the way jobs are stored: either use a message-store-backed channel (Spring Integration only) or push the jobs to a message queue like RabbitMQ to store them (a sketch of the first variant follows after this list).
2. I'm not 100% sure, but I remember that Spring Batch offers something similar, like master/slave job splitting and synchronization. Maybe have a look there.
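For the first approach, a minimal sketch (assuming Spring Integration's JDBC channel message store; the Postgres query provider and all bean names are my choices): a queue channel backed by a JdbcChannelMessageStore hands each job to exactly one consumer, even across cluster nodes.

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.jdbc.store.JdbcChannelMessageStore;
import org.springframework.integration.jdbc.store.channel.PostgresChannelMessageStoreQueryProvider;
import org.springframework.integration.store.MessageGroupQueue;

@Configuration
public class JobChannelConfig {

    @Bean
    public JdbcChannelMessageStore messageStore(DataSource dataSource) {
        JdbcChannelMessageStore store = new JdbcChannelMessageStore(dataSource);
        // Query provider matching your database; Postgres is assumed here.
        store.setChannelMessageStoreQueryProvider(new PostgresChannelMessageStoreQueryProvider());
        return store;
    }

    // A message is removed from the backing table as part of the receiving
    // transaction, so two cluster nodes cannot both pick up the same job.
    @Bean
    public QueueChannel inboundAdhocJobTable(JdbcChannelMessageStore messageStore) {
        return new QueueChannel(new MessageGroupQueue(messageStore, "adhocJobs"));
    }
}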