I am using Spring MVC + Hibernate in my application. The application server is IBM WebSphere v7. While restarting the application, and for a while afterwards, the db2syscs process drives CPU usage to 99% and consumes about 1,034,352 K of memory. This goes on for about 10-15 minutes. I tried increasing the heap size allocated to DB2, which made no difference.
EDIT
These are my Hibernate properties in the Spring configuration file. Will adding cache or pool properties have any effect?
<prop key="hibernate.dialect">org.hibernate.dialect.DB2Dialect</prop>
<prop key="hibernate.generate_statistics">true</prop>
<prop key="hibernate.show_sql">false</prop>
<prop key="hibernate.connection.datasource">jdbc/logincfg</prop>
<prop key="hibernate.transaction.factory_class">org.hibernate.transaction.CMTTransactionFactory</prop>
<prop key="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.WebSphereExtendedJTATransactionLookup</prop>
If you're running on Linux, try a monitor such as db2top to determine which query is churning your CPU.
db2top -d <your database name>
WebSphere itself handles all connection pooling and caching for its data sources; all JDBC-related configuration can be performed through the WebSphere administrative console. Try using a connection pool datasource instead of an XA datasource, and also check the heuristic hazard setting in the server configuration. These should decrease the load on the CPU.
Currently we are aiming to load-balance two active servers 50/50.
The Java application uses Hibernate Search locally; I have centralized the index directory so both servers use the same one.
I want to share the Hibernate Search index between multiple servers.
I have set the following so there is no locking between reads/writes from the servers:
<property name="hibernate.search.default.locking_strategy" value="none"/>
Does anyone know if this will be an issue?
I can't really answer your question, but I'd like to share some considerations.
We used this kind of configuration in production for years (no custom lock strategy) and we ran into many problems (stale NFS file handles, deadlocks, and index corruption).
We tried deferring all index update operations to a single server using JMS, but even in this mode we had some problems (far fewer than when update operations occur on many servers, however).
Note also that putting the index files on NFS is strongly discouraged.
We finally gave up on Hibernate Search; for distributed indexes I'd personally advise Elasticsearch.
That said, it is theoretically possible, as stated here: https://docs.jboss.org/hibernate/search/5.1/reference/en-US/html/search-architecture.html#_lucene
"This mode targets non clustered applications, or clustered applications where the Directory is taking care of the locking strategy."
I don't really know how the "Directory" is expected to handle the locking strategy, though.
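For reference, Hibernate Search 5.x ships several built-in locking strategies besides none. A sketch of the alternatives, as `<prop>` entries (the comments summarize my understanding of the 5.x docs, so verify against your version):

```xml
<!-- Sketch: built-in values for hibernate.search.default.locking_strategy -->
<!-- "simple": a marker file on the filesystem -->
<prop key="hibernate.search.default.locking_strategy">simple</prop>
<!-- "native": OS-native file locks; known to be unreliable over NFS -->
<!-- "single": an in-JVM lock only, suitable when a single node writes the index -->
<!-- "none": no locking at all, as in the question -->
```

With a shared directory and more than one writer, "none" means concurrent writers can corrupt the index, which matches the problems described above.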
Our previously used Hibernate Search configuration:
<!-- hibernate search configuration -->
<!-- index root folder location -->
<prop key="hibernate.search.default.indexBase">${env.ftindex.location.root}/${datasource.database}/${ftindex.folder}</prop>
<!-- global analyzer -->
<prop key="hibernate.search.analyzer">custom_analyzer</prop>
<!-- asynchronous indexing for performance considerations -->
<prop key="hibernate.search.worker.execution">async</prop>
<!-- max number of indexing operation to be processed asynchronously (before session flush) to avoid OutOfMemoryException -->
<prop key="hibernate.search.worker.buffer_queue.max">100</prop>
I lost my project's database from my PC, but I still have my JSP project, which uses Hibernate. Whenever I run the project it fails with 'org.hibernate.exception.SQLGrammarException: Cannot open connection' because the database does not exist on the server. Can I recreate my database with the help of the POJO files? I am using NetBeans and MySQL Server 5.1.
Yes, you can recreate your database using Hibernate.
In your session factory configuration you need to set the following property:
<property name="jpaProperties">
<props>
<prop key="hibernate.hbm2ddl.auto">create</prop>
</props>
</property>
hibernate.hbm2ddl.auto automatically validates or exports schema DDL to the database when the SessionFactory is created.
There are a couple of caveats you need to understand:
This will only recreate objects which you had mapped; it will not recreate any database object which you may have had but didn't map.
Certain object names may not be named what they were previously. One classic example would be the names of your foreign-key constraints.
Still, this will get you back up with the majority of what you lost.
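Regarding the constraint-name caveat: you can make regenerated names stable by naming them explicitly in your mappings. A sketch using the `foreign-key` attribute of an hbm.xml `many-to-one` (the entity, table, and constraint names here are made up for illustration):

```xml
<!-- Hypothetical mapping fragment: pin the FK constraint name so a
     regenerated schema matches the original -->
<class name="Order" table="ORDERS">
    <id name="id" column="ORDER_ID"/>
    <many-to-one name="customer" column="CUSTOMER_ID"
                 foreign-key="FK_ORDERS_CUSTOMER"/>
</class>
```

If you rely on auto-generated names instead, expect them to differ from whatever the lost database had.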
I have an Eclipse workspace with many projects. One project, "shareddata", contains all my JPA entities, services, persistence.xml (using Spring Data) and such. In my other projects I have included "shareddata" as a dependency in my Maven pom.xml.
When I start one of the other projects, JPA/Hibernate validates and updates my database tables (hbm2ddl.auto = update). This works nicely.
But to test my entire project I need to start several projects that all include the "shareddata" project, so every single project validates and updates my database tables. This takes quite a bit of time.
Is it possible to enable "hbm2ddl.auto" for only one single project? Or is it possible to dynamically disable "hbm2ddl.auto" at application startup?
If that is possible, then I could start up my JMS server project and let it do the database validation. Next I start up my other projects (Tomcat and several server apps) and they won't do the database validation.
Saves me a lot of time :-)
I did such things via system properties. Unfortunately, I don't know how you initialize the Hibernate context. I personally did it via Spring, which supports system properties using the ${propName} syntax. If you can use this notation, just use it in your configuration files and set the appropriate property at the beginning of your unit test.
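A sketch of what I mean, assuming a placeholder configurer with system-property resolution is registered, and using a made-up property name hbm2ddl.mode (the `${name:default}` default-value syntax requires Spring 3.0+):

```xml
<!-- Sketch: take the hbm2ddl mode from a JVM system property, defaulting to "none" -->
<prop key="hibernate.hbm2ddl.auto">${hbm2ddl.mode:none}</prop>
```

Then start only the project that should manage the schema with -Dhbm2ddl.mode=update, and leave the others at the default.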
It took a little while to figure out, but JPA is configured in my applicationContext.xml (or variations like root-context.xml); LocalContainerEntityManagerFactoryBean does the initialization.
Luckily, LocalContainerEntityManagerFactoryBean accepts properties that override the values set in persistence.xml. So I set hibernate.hbm2ddl.auto to none in persistence.xml and use the following XML to enable it for a specific project:
<!-- Add JPA support -->
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="loadTimeWeaver">
<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />
</property>
<property name="jpaProperties">
<props>
<prop key="hibernate.hbm2ddl.auto">update</prop>
</props>
</property>
</bean>
Hope this may help someone with the same problem.
After setting up all the relevant PgPool settings on CentOS, I tested it with my Java application and found that it is not working.
After reading the manual online (you can find it here), I found that it will not work for JDBC statements if auto-commit has been set to false.
Since I am using Hibernate, I am quite sure that it is using transactions to set the values.
My question is: if this is true, which method is useful to replicate my databases? I have heard about parallel mode, but I am not sure whether it will work for a Java application. Can anybody guide me and provide samples?
Modification transactions at the end of business methods work as you describe: a BEGIN/END block is created containing all modification queries, which are either all committed or all rolled back.
This is done by setting autocommit to false, but it does not mean that all queries made by Hibernate run in this mode. The same query, depending on the required isolation level, might be executed in either auto-commit or non-auto-commit mode.
For the usual case of a transaction in READ_COMMITTED mode, queries like find-by-id or named queries will each run in their own database transaction with auto-commit true (and so without a BEGIN/END block).
Find-by-id and other read queries will only trigger a BEGIN block if they run in at least REPEATABLE_READ isolation mode.
This means that if you use the default READ_COMMITTED isolation mode, the load balancing will work fine, because most select queries will run with auto-commit = true.
You can confirm this by logging all SQL queries sent to the database, for example with log4jdbc, which prints all the SQL actually sent to the database.
If by parallel mode you mean the transaction isolation level: from this page you can see that PostgreSQL supports 4 isolation levels, and it is configurable from Hibernate by setting the property hibernate.connection.isolation to 1, 2, 4, or 8, from the lowest level to the highest.
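The values 1, 2, 4, and 8 are not arbitrary: they are the standard java.sql.Connection isolation constants, which is what Hibernate passes through to the driver. A quick check:

```java
import java.sql.Connection;

// hibernate.connection.isolation accepts the numeric values of the
// java.sql.Connection transaction-isolation constants.
public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println("READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println("READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println("REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println("SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```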
Read committed is the default isolation level in PostgreSQL, one level above read uncommitted (dirty read).
Serializable is the highest level and it is very expensive: if 2 transactions touch the same table there will be a lock, and if the lock is held longer than the timeout configured on the database/Hibernate side, a timeout exception is thrown.
Not sure if you have heard of them, but the following frameworks can be used with Hibernate to improve performance:
C3P0, for more advanced connection pooling
Ehcache, for boosting performance by enabling caching
They are easy to configure and do not depend on the OS. I have no experience with PgPool, so I can't comment on the performance comparison.
Following are the sample hibernate settings that you might want to try:
<prop key="hibernate.show_sql">false</prop>
<prop key="hibernate.format_sql">false</prop>
<prop key="hibernate.connection.isolation">4</prop>
<prop key="hibernate.connection.autocommit">false</prop>
<prop key="hibernate.c3p0.min_size">5</prop>
<prop key="hibernate.c3p0.max_size">20</prop>
<prop key="hibernate.c3p0.timeout">1800</prop>
<prop key="hibernate.c3p0.max_statements">50</prop>
<prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
<prop key="net.sf.ehcache.configurationResourceName">WEB-INF/ehcache.xml</prop>
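To go with the net.sf.ehcache.configurationResourceName setting above, a minimal WEB-INF/ehcache.xml might look like this (the sizes and timeouts are illustrative assumptions, not tuned recommendations):

```xml
<ehcache>
    <!-- where Ehcache spills entries if disk overflow is enabled -->
    <diskStore path="java.io.tmpdir"/>
    <!-- fallback settings for entities without an explicit cache region -->
    <defaultCache maxElementsInMemory="1000"
                  eternal="false"
                  timeToIdleSeconds="300"
                  timeToLiveSeconds="600"
                  overflowToDisk="false"/>
</ehcache>
```

You would still need to mark the entities you want cached (e.g. with @Cache or `<cache usage="read-write"/>` in the mapping) for the second-level cache to do anything.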
I hope this helps you optimize your application's database transactions. There is a lot more you can check, e.g. table indexing, or using a profiler to find out which transactions cost the most.
I am running Quartz jobs in clustered mode; here is my config. Is it possible to change the node a job runs on at runtime (via JMX/RMI)?
For example, my setup has 2 nodes. The first is too busy, so I need to move the job to the second one.
<property name="quartzProperties">
<props>
<prop key="org.quartz.scheduler.instanceName">myApp</prop>
<prop key="org.quartz.scheduler.instanceId">AUTO</prop>
<prop key="org.quartz.jobStore.misfireThreshold">60000</prop>
<prop key="org.quartz.jobStore.class">org.quartz.impl.jdbcjobstore.JobStoreTX</prop>
<prop key="org.quartz.jobStore.driverDelegateClass">org.quartz.impl.jdbcjobstore.StdJDBCDelegate</prop>
<prop key="org.quartz.jobStore.tablePrefix">q</prop>
<prop key="org.quartz.jobStore.isClustered">true</prop>
<prop key="org.quartz.threadPool.class">org.quartz.simpl.SimpleThreadPool</prop>
<prop key="org.quartz.threadPool.threadCount">5</prop>
<prop key="org.quartz.threadPool.threadPriority">5</prop>
<prop key="org.quartz.scheduler.skipUpdateCheck">true</prop>
<prop key="org.quartz.scheduler.jmx.export">true</prop>
<prop key="org.quartz.scheduler.jmx.objectName">quartz:type=QuartzScheduler,name=JmxScheduler,instanceId=NONE_CLUSTER</prop>
</props>
</property>
Not directly. I don't think choosing the server that a job runs on is part of the standard version of Quartz. It is available in Quartz Scheduler Where.
If you want to proceed with RMI, you could probably write a program that turns off one of the schedulers in the cluster based on conditional logic (if you disable the job, it would prevent future execution on all servers). From the manual:
When using Quartz via RMI, you need to start an instance of Quartz with it configured to "export" its services via RMI. You then create clients to the server by configuring a Quartz scheduler to "proxy" its work to the server.
To turn on RMI:
<prop key="org.quartz.scheduler.rmi.export">true</prop>
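A slightly fuller sketch using the standard Quartz RMI properties (the host and port values here are assumptions for your environment):

```xml
<prop key="org.quartz.scheduler.rmi.export">true</prop>
<!-- where the RMI registry lives; "localhost"/1099 are illustrative -->
<prop key="org.quartz.scheduler.rmi.registryHost">localhost</prop>
<prop key="org.quartz.scheduler.rmi.registryPort">1099</prop>
<!-- let Quartz create the registry if one is not already running -->
<prop key="org.quartz.scheduler.rmi.createRegistry">true</prop>
```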
This page from O'Reilly describes the entire process in detail and shows an example of managing a remote instance from a client. Modify their example to turn off the scheduler.
If you're open to an off-the-shelf solution, the MySchedule project is a web-based UI for managing Quartz. It's capable of managing remote instances.
An alternate approach is to manage the synchronization outside of Quartz. Allow the jobs to fire on all of the nodes, but use your own logic within the job to determine whether the current node should actually do any processing. You could use JGroups or a similar library to communicate load information between nodes.
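A minimal sketch of that guard, assuming each node is started with a made-up node.role system property; in a real setup the decision would come from JGroups membership or a load metric rather than a static property:

```java
// Hypothetical guard: the job fires on every node in the cluster, but only
// the node whose role matches actually does the work. "node.role" is an
// assumed, illustrative property, not a Quartz feature.
public class GuardedJob {

    static boolean shouldProcess(String nodeRole) {
        // placeholder decision logic; replace with cluster/load awareness
        return "primary".equals(nodeRole);
    }

    public void execute() {
        String role = System.getProperty("node.role", "secondary");
        if (!shouldProcess(role)) {
            return; // let another node handle this firing
        }
        // ... actual job processing ...
    }
}
```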
Lastly, have you considered that Quartz might not be the right tool for the job? It sounds like a distributed queue might be appropriate. For example, a set of competing clients pull work items from a queue as quickly as they can process them.