IBM WebSphere Command Cache Invalidation - java

My business flow is the following:
1) Invalidate a command.
2) Fetch data from the command (database operations, a little slow).
Step 2 would be accessed by many concurrent users. Now, when a command is invalidated and a user tries to fetch the data, multiple database queries start executing, because the execution is a little slow.
Is there any way to stop these multiple executions of the queries?
In other words: can the execution of the command and the fetching of data from it be made synchronized?

Yes, you can do something like this:
import java.util.Date;

public class Fetcher {
    private static final long MAX_AGE_MS = 100000L;

    private String data;
    private long timestamp;

    // Synchronizing the method means that when the data has been invalidated,
    // only one caller runs the query; the others wait and then receive the
    // freshly cached value. This assumes a single shared Fetcher instance.
    public synchronized String fetchData() {
        // Let's invalidate data that is too old
        if (data != null && new Date().getTime() - timestamp > MAX_AGE_MS) {
            data = null;
        }
        if (data == null) {
            DAO db = DAO.getConnection();
            data = db.performQuery();
            timestamp = new Date().getTime();
        }
        return data;
    }
}

If you are using a Dynacache cacheable command and the queries are the same for users, then the command should get cached after the first execution.
Only the first execution should hit the database, after that the data should be fetched from cache until the cache is invalidated.
I usually use Dynacache as part of the IBM WebSphere Commerce suite.
WebSphere Commerce uses a scheduled command to check a table called CACHEIVL.
You would set up triggers which insert an invalidation ID into CACHEIVL when the target table is changed.
Since you don't have the scheduled Dynacache command, you can implement something specific to your use case using WebSphere schedulers.
Here is an example of a cacheable command using Dynacache.
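For illustration, a minimal sketch of what such a command can look like, extending com.ibm.websphere.command.CacheableCommandImpl. The key/result fields and the DAO helper are assumptions carried over from the snippet above, and the cache policy itself would still need to be declared in cachespec.xml:

import com.ibm.websphere.command.CacheableCommandImpl;

public class FetchDataCommand extends CacheableCommandImpl {
    private String key;     // input: identifies the data to fetch (assumed)
    private String result;  // output: populated by performExecute()

    public void setKey(String key) { this.key = key; }
    public String getResult() { return result; }

    // Called by the command framework; return true once all inputs are set.
    public boolean isReadyToCallExecute() {
        return key != null;
    }

    // Runs only on a cache miss; afterwards Dynacache serves the cached
    // command instance (including the result field) until it is invalidated.
    public void performExecute() throws Exception {
        DAO db = DAO.getConnection();   // DAO is the assumed data-access helper
        result = db.performQuery();
    }
}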

Related

Faster way of updating database table using Hibernate (Java 8 reduction?)

I am working on a monitoring tool developed in Spring Boot using Hibernate as the ORM.
I need to compare each row (already persisted rows of sent messages) in my table and see whether a MailId (unique) has received feedback (status: OPENED, BOUNCED, DELIVERED...) or not.
I get the feedback by reading CSV files from a network folder. The parsing and reading of the CSV files goes very fast, but the update of my database is very slow. My algorithm is not very efficient, because I loop through a list that can have hundreds of thousands of objects and look each one up in my table.
This is the method that performs the update in my table by updating the "target" object (a row in the database table):
@Override
public void updateTargetObjectFoo() throws CSVProcessingException, FileNotFoundException {
    // performProcessing reads the files in a folder, parses them into Java
    // objects, and maps them into a feedback list of type Foo
    List<Foo> feedBackList = performProcessing(env.getProperty("foo_in"),
            EXPECTED_HEADER_FIELDS_STATUS, Foo.class, ".LETTERS.STATUS.");
    for (Foo foo : feedBackList) {
        // findByKey does a simple SELECT in MySQL where MailId = foo.getMailId()
        Foo persistedFoo = fooDao.findByKey(foo.getMailId());
        if (persistedFoo != null) {
            persistedFoo.setStatus(foo.getStatus());
            persistedFoo.setDnsCode(foo.getDnsCode());
            persistedFoo.setReturnDate(foo.getReturnDate());
            persistedFoo.setReturnTime(foo.getReturnTime());
            // saveAccount does a MySQL UPDATE on the table
            fooDao.saveAccount(foo);
        }
    }
}
What if I did this selection/comparison and update on the Java side and then re-updated the whole list in the database?
Would that be faster?
Thanks to all for your help.
Hibernate is not particularly well-suited for batch processing.
You may be better off using Spring's JdbcTemplate to do JDBC batch processing.
However, if you must do this via Hibernate, this may help: https://docs.jboss.org/hibernate/orm/5.2/userguide/html_single/chapters/batch/Batching.html
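If you go the JdbcTemplate route, here is a rough sketch of the update loop rewritten as one JDBC batch. The table and column names are assumptions, and the setter types may need adjusting to the real Foo mapping:

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

public class FooBatchUpdater {
    private final JdbcTemplate jdbcTemplate;

    public FooBatchUpdater(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // One UPDATE statement, sent to the database in a single JDBC batch,
    // instead of one SELECT plus one UPDATE per row.
    public void updateFeedback(final List<Foo> feedBackList) {
        String sql = "UPDATE foo SET status = ?, dns_code = ?, return_date = ?, "
                + "return_time = ? WHERE mail_id = ?";
        jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                Foo foo = feedBackList.get(i);
                ps.setString(1, foo.getStatus());
                ps.setString(2, foo.getDnsCode());
                ps.setObject(3, foo.getReturnDate());   // column type assumed
                ps.setObject(4, foo.getReturnTime());   // column type assumed
                ps.setString(5, foo.getMailId());
            }

            @Override
            public int getBatchSize() {
                return feedBackList.size();
            }
        });
    }
}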

hibernate read only queries cache mechanism

This question is related to my other question.
I am building a Spring web application which reads data from the DB using Hibernate. My app will not be aware of any changes (updates/inserts) made to the DB. Is there a way to use the query cache in such a scenario?
I configured the query cache, and it does not invalidate the cache when I update the DB from a different app. I think that is the expected behavior.
I need the queries to be cached but invalidated when there is an update in the DB. How can I achieve this?
I am not sure whether there is any automatic way of refreshing the cache, but I solved this problem in my last project. Expose a method like the one below and give an admin access to it. Once any modification is done in the DB externally, call this method to refresh your cache.
public void refreshCache() {
    try {
        // Evict every mapped entity from the second-level cache
        Map<String, ClassMetadata> classesMetadata = sessionFactory.getAllClassMetadata();
        for (String entityName : classesMetadata.keySet()) {
            sessionFactory.evictEntity(entityName);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Well, if you are using Oracle, the following command will give you the last updated SCN (system change number) on the table:
select max(ora_rowscn) from TableName;
output
10772982279880
Further, you can convert this to a timestamp if you want:
select scn_to_timestamp(10772982279880) from dual
But I don't think you need to convert it into a time; just cache the rowscn alone and periodically check the table. If there is a change, you can evict the cache regions.
Please note that this requires Oracle version 10g or later.
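A rough sketch of that periodic check, assuming a Spring scheduler, an injected DataSource and SessionFactory, and a table named FooTable (all names are placeholders):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.scheduling.annotation.Scheduled;

public class ScnCacheWatcher {
    private final DataSource dataSource;
    private final SessionFactory sessionFactory;
    private long lastScn = -1;

    public ScnCacheWatcher(DataSource dataSource, SessionFactory sessionFactory) {
        this.dataSource = dataSource;
        this.sessionFactory = sessionFactory;
    }

    @Scheduled(fixedDelay = 60000) // poll once a minute
    public void checkForChanges() throws Exception {
        try (Connection con = dataSource.getConnection();
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery("select max(ora_rowscn) from FooTable")) {
            if (rs.next()) {
                long scn = rs.getLong(1);
                if (lastScn != -1 && scn != lastScn) {
                    // The table changed behind our back: drop the cached data
                    sessionFactory.getCache().evictEntityRegions();
                    sessionFactory.getCache().evictQueryRegions();
                }
                lastScn = scn;
            }
        }
    }
}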

Activiti BPM get Variables within Task

Is it possible to get all process or task variables using TaskService?
processEngine.getTaskService().createTaskQuery().list();
I know there is an opportunity to get variables via
processEngine.getTaskService().getVariable()
or
processEngine.getRuntimeService().getVariable()
but each of the operations above goes to the database. If I have a list of 100 tasks, I'll make 100 queries to the DB. I don't want to use this approach.
Is there any other way to get task or process related variables?
Unfortunately, there is no way to do that via the "official" query API! However, what you could do is write a custom MyBatis query as described here:
https://app.camunda.com/confluence/display/foxUserGuide/Performance+Tuning+with+custom+Queries
(Note: Everything described in the article also works for bare Activiti; you do not need the fox engine for that!)
This way you could write a query which selects tasks along with the variables in one step. At my company we used this solution as we had the exact same performance problem.
A drawback of this solution is that custom queries need to be maintained. For instance, if you upgrade your Activiti version, you will need to ensure that your custom query still fits the database schema (e.g., via integration tests).
If it is not possible to use the API, as elsvene says, you can query the database yourself. Activiti has several tables in the database.
You have act_ru_variable, where the currently running processes store their variables. For already finished processes you have act_hi_procvariable. You can probably find a detailed explanation of what is in each table in the Activiti user guide.
So you just need to make queries like:
SELECT *
FROM act_ru_variable
WHERE *Something*
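For example, a sketch of a query that fetches open tasks together with their variables in one round trip (column names follow the Activiti 5 schema; verify them against your version):

SELECT t.ID_, t.NAME_, v.NAME_ AS VAR_NAME, v.TEXT_ AS VAR_VALUE
FROM act_ru_task t
LEFT JOIN act_ru_variable v ON v.TASK_ID_ = t.ID_;
-- task-local variables join on TASK_ID_; for process variables you would
-- join on v.PROC_INST_ID_ = t.PROC_INST_ID_ instead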
The following test sends a value object (Person) to a process which just adds a few tracking infos for demonstration.
I had the same problem: getting the value object back after executing the service, to do some validation in my test.
The following piece of code shows the execution and the gathering of the task variables after the execution has finished.
@Test
public void justATest() {
    Map<String, Object> inVariables = new HashMap<String, Object>();
    Person person = new Person();
    person.setName("Jens");
    inVariables.put("person", person);

    ProcessInstance processInstance =
            runtimeService.startProcessInstanceByKey("event01", inVariables);
    String processDefinitionId = processInstance.getProcessDefinitionId();
    String id = processInstance.getId();
    System.out.println("id " + id + " " + processDefinitionId);

    List<HistoricVariableInstance> outVariables =
            historyService.createHistoricVariableInstanceQuery().processInstanceId(id).list();
    for (HistoricVariableInstance historicVariableInstance : outVariables) {
        String variableName = historicVariableInstance.getVariableName();
        System.out.println(variableName);
        Person person1 = (Person) historicVariableInstance.getValue();
        System.out.println(person1.toString());
    }
}

Hibernate Batch Processing Using Native SQL

I have an application using Hibernate. One of its modules calls native SQL (a stored procedure) in a batch process. Roughly, what it does is that every time it writes a file it updates a field in the database. Right now I am not sure how many files will need to be written, as it depends on the number of transactions per day, so it could be anywhere from zero to a million.
If I use this code snippet in a while loop, will I have any problems?
@Transactional
public void test() {
    // The for loop represents a list of records that needs to be processed.
    for (int i = 0; i < 1000000; i++) {
        // Process the records and write the information into a file.
        ...
        // Update a field(s) in the database using a stored procedure
        // based on the processed information.
        updateField(String.valueOf(i));
    }
}

@Transactional(propagation = Propagation.MANDATORY)
public void updateField(String value) {
    Session session = getSession();
    SQLQuery sqlQuery = session.createSQLQuery("exec spUpdate :value");
    sqlQuery.setParameter("value", value);
    sqlQuery.executeUpdate();
}
Will I need any other configuration for my data source and transaction manager?
Will I need to set hibernate.jdbc.batch_size and hibernate.cache.use_second_level_cache?
Will I need to use session flush and clear for this? The samples in the Hibernate tutorial use POJOs and not native SQL, so I am not sure whether they also apply here.
Please note that another part of the application is already using Hibernate, so as much as possible I would like to stick with Hibernate.
Thank you for your time, and I am hoping for your quick response. If it is at all possible, a code snippet would be really useful for me.
Application Work Flow
1) Query Database for the transaction information. (Transaction date, Type of account, currency, etc..)
2) For each account process transaction information. (Discounts, Current Balance, etc..)
3) Write the transaction information and processed information to a file.
4) Update a database field based on the process information
5) Go back to step 2 while there are still accounts (assuming that no exceptions are thrown).
The code snippet will open and close the session for each iteration, which is definitely not a good practice.
Would it be possible to have a job which checks how many new files have been added to the folder?
The job could run, say, every 15-25 minutes, check which files were changed or added in the last 15-25 minutes, and update the database in one batch.
Something like that will lower the number of opened and closed sessions and connections. It should be much faster than the current approach.
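If you do keep the loop in Hibernate, a minimal sketch of the flush/clear pattern the question asks about, assuming a placeholder Record type and a batch size of 50 (pair it with hibernate.jdbc.batch_size=50). Note that the native executeUpdate() runs immediately; flush() and clear() mainly help when entities are also loaded inside the loop:

@Transactional
public void processRecords(List<Record> records) {   // Record is a placeholder type
    Session session = getSession();
    int count = 0;
    for (Record record : records) {
        // ... process the record and write the file ...
        SQLQuery sqlQuery = session.createSQLQuery("exec spUpdate :value");
        sqlQuery.setParameter("value", record.getValue());
        sqlQuery.executeUpdate();
        if (++count % 50 == 0) {
            // Flush pending work and clear the first-level cache so the
            // session does not keep a million objects alive.
            session.flush();
            session.clear();
        }
    }
}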

How to set up concurrent calls in Oracle 10g Java VM

Can somebody explain to me how to properly configure a PL/SQL Java wrapper when different database users invoke the same procedure, so that concurrent resource access is handled correctly?
DBMS and Java: Oracle 10g, internal Java VM 1.4.2
I have MyDatabase with one schema owner and 10 DB users granted to connect to it:
DBOWNER
DBUSER01
DBUSER02
...
DBUSER10
I have a PL/SQL wrapper procedure:
my_package.getUser(), which wraps UserHandler.getUser()
I have a Java class UserHandler uploaded to MyDatabase with loadjava:
public class UserHandler {
    private static final int MAX_USER_COUNT = 10;
    private static final String USERNAME_TEMPLATE = "EIS_ORA_20";
    private static int currentUserSeed = 0;

    /**
     * Generates an EIS user according to the pattern agreed with the EIS
     * developers. It circles the user pool with a round-robin method,
     * ensuring concurrent calls get distinct users.
     *
     * @return valid EIS USERNAME
     */
    synchronized public static String getUser() {
        String newUser = USERNAME_TEMPLATE + currentUserSeed;
        currentUserSeed++;
        currentUserSeed = currentUserSeed % MAX_USER_COUNT;
        return newUser;
    }
}
The idea of the wrapper is to ensure proper distribution of external information system usernames to the DBUSERs connected to MyDatabase with the Oracle Forms client application.
My problem is that when 5 users concurrently call the procedure my_package.getUser(), I get:
DBUSER01 - call to my_package.getUser() returned EIS_ORA_200
DBUSER02 - call to my_package.getUser() returned EIS_ORA_200
DBUSER03 - call to my_package.getUser() returned EIS_ORA_200
DBUSER04 - call to my_package.getUser() returned EIS_ORA_200
DBUSER05 - call to my_package.getUser() returned EIS_ORA_200
I expected that each DBUSER would get a different user (as I confirmed in my JUnit tests, where multiple concurrent threads invoke UserHandler.getUser()).
Later I read that PL/SQL wrapper calls can be set up in two manners:
to share the Java memory space between DBUSERS, or
to separate the memory space for each DBUSER.
My conclusion is that the UserHandler class is loaded for each DBUSER separately, and that is why the static counter and the synchronized method are of no use.
How can I configure MyDatabase to force calls to my_package.getUser() to use the same Java space for all DBUSERS?
Thank you very much!
I don't believe there is any way to configure Oracle to share a JVM between multiple user sessions. The Java Developer's Guide for 10g states:
Oracle JVM model: Even when thousands of users connect to the server and run the same Java code, each user experiences it as if he is running his own Java code on his own JVM...
Generally the appropriate way to share data between sessions in an RDBMS is with database objects. In this case the simplest thing would be to use an Oracle sequence, with minvalue 1, maxvalue 10, and cycling enabled. You could just select from the sequence directly in the Java code.
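A sketch of what the sequence-backed version could look like, targeting the 1.4.2 server-side JVM. The sequence name eis_user_seq is an assumption; matching the 0-9 seed of the original code, it would be created with CREATE SEQUENCE eis_user_seq MINVALUE 0 MAXVALUE 9 CYCLE NOCACHE:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserHandler {
    private static final String USERNAME_TEMPLATE = "EIS_ORA_20";

    public static String getUser() throws SQLException {
        // Inside the Oracle JVM this URL returns the session's own connection.
        Connection con = DriverManager.getConnection("jdbc:default:connection:");
        PreparedStatement ps = con.prepareStatement(
                "SELECT eis_user_seq.NEXTVAL FROM dual");
        ResultSet rs = ps.executeQuery();
        try {
            rs.next();
            // The sequence cycles 0..9 across ALL sessions, unlike the
            // per-session static counter.
            return USERNAME_TEMPLATE + rs.getInt(1);
        } finally {
            rs.close();
            ps.close();
        }
    }
}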
Another approach would be to simply generate a uniformly distributed random number between 1 and 10. If there are enough sessions, then over time this should distribute the sessions evenly.
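A sketch of that variant; since each session runs its own JVM, no shared state is needed:

import java.util.Random;

public class RandomUserHandler {
    private static final String USERNAME_TEMPLATE = "EIS_ORA_20";
    private static final Random RANDOM = new Random();

    public static String getUser() {
        // nextInt(10) yields 0..9, matching the ten-user pool
        return USERNAME_TEMPLATE + RANDOM.nextInt(10);
    }
}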
