I'm searching for a fast (really fast) way to test changes to Hibernate queries. I have a huge application with thousands of different HQL queries (in XML files) and 100+ mapped classes, and I don't want to redeploy the whole application just to test one tiny change to a query.
What would a good setup look like to free me from redeployment and enable a fast query check?
With IntelliJ IDEA 8.1.3 the mechanism of choice is called a 'Facet'. To instantly test HQL queries:
create a data source: Tools -> Data Source, Add Data Source, define the driver, username and password of your development DB
in case you don't already have a hibernate.cfg.xml, or you configure your session factory some other way than via XML: create a hibernate.cfg.xml file referencing all XML mappings (define a name for the session factory, just for easier handling; a minimal sketch follows these steps)
in 'Project Structure' add Facet to your module of choice and assign the recently defined data source to the new facet
switch to Java EE View
Open Hibernate Facets - Node
Right click Session factory and choose "Open HQL Console"
enter HQL query in console
...and you're done.
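In case you need to write that hibernate.cfg.xml from scratch, a minimal sketch could look like the following; the driver, URL, dialect and mapping resource are placeholders for your own values, and the session factory name is only there so the facet can refer to it:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory name="querySessionFactory">
        <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/devdb</property>
        <property name="hibernate.connection.username">dev</property>
        <property name="hibernate.connection.password">secret</property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</property>
        <mapping resource="com/example/SomeEntity.hbm.xml"/>
    </session-factory>
</hibernate-configuration>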
sorry for this RTFM question.
You can use hibernate tools in eclipse to run queries. This will allow you to run HQL whenever you want to try something.
If you're using IntelliJ, there is Hibero.
There is a standalone editor from Sun, but I haven't tried it.
I wrote a simple tool to test and preview HQL; it is just one Java class with a main method.
you can find the code here: https://github.com/maheskrishnan/HQLRunner
I test my HQL queries in unit-tests with the HSQLDB database. Just create an entity manager, cast it to a hibernate session and query away.
// Boot a JPA persistence unit backed by the in-memory test database (props carry the JDBC settings)
final EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("tacs-test", props);
final EntityManager entityManager = entityManagerFactory.createEntityManager();
// Unwrap the underlying Hibernate Session so HQL can be run directly
return (Session) entityManager.getDelegate();
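Assuming the snippet above is wrapped in a small helper (here hypothetically called createTestSession()), a test method can then fire HQL straight at the in-memory schema; the entity, query and expected count below are made up for illustration:
// Illustrative only: assumes a Product entity mapped in the test persistence unit
Session session = createTestSession();
List<?> expensive = session
        .createQuery("from Product p where p.price > :minPrice")
        .setParameter("minPrice", 110)
        .list();
assertEquals(2, expensive.size());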
Best
Anders
You said the quickest way; I'm not sure if you meant the quickest way to get going, or the quickest way to perform ongoing tests with some initial investment to get the tests implemented. This answer is more about the latter.
The way I've done this before was to implement some simple integration testing with JUnit and DBUnit.
In essence, you'll be using DBUnit to set up your test database with a known and representative set of data, and then plain JUnit to exercise the methods containing your HQL queries, and verify the results.
For instance,
Set up your database first to contain only a fixed set of data e.g.,
Product Name, Price
Acme 100 Series Dynamite, $100
Acme 200 Series Dynamite, $120
Acme Rocket, $500
This is something you'd do in your JUnit test case's setUp() method.
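For illustration, a DBUnit-based setUp() might look roughly like this; the JDBC URL and the dataset file name are assumptions, and products-dataset.xml would hold the three rows above in DBUnit's flat XML format:
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

@Override
protected void setUp() throws Exception {
    // Hypothetical in-memory test database; point this at whatever your tests use
    IDatabaseConnection connection = new DatabaseConnection(
            DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", ""));
    IDataSet dataSet = new FlatXmlDataSetBuilder()
            .build(getClass().getResourceAsStream("/products-dataset.xml"));
    // CLEAN_INSERT wipes the table and inserts exactly the rows from the dataset
    DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
}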
Now let's assume you have a DAO for this entity, and there's a "findProductWithPriceGreaterThan(int)" method. In your test, you'd do something like:
public void testFindProductWithPriceGreaterThanInt() {
    ProductDAO dao = new HibernateProductDAO();
    //... initialize Hibernate, or perhaps do this in setUp()
    List<Product> products = dao.findProductWithPriceGreaterThan(110);
    assertEquals(2, products.size());
    //... additional assertions to verify the content of the list.
}
In the Eclipse Marketplace, you can search for JBoss Tools and choose only the Hibernate Tools from the given list.
In Eclipse:
Install Hibernate Tools (JBoss)
Switch to the Hibernate perspective
Open/click Hibernate Configuration window
Right-click in the window and Add Configuration
Right-click in the window and click/open the HQL editor
Type and execute your HQL queries and get your result in the Hibernate Query result window
Follow this link for more info http://docs.jboss.org/tools/OLD/2.0.0.GA/hibernatetools/en/html/plugins.html
I am new to Java and started with Spring Boot and Spring Data JPA, so I know two ways to fetch data:
by the repository layer, with literal method naming: findOneByCity(String city);
by a custom repo, with the @Query annotation: @Query("select * from table where city like ?");
Both approaches are statically defined.
How should I do to get data of a query that I have to build at run time?
What I am trying to achieve is the possibility to create dynamic reports without touching the code. A table would have records of reports with names and SQL queries with default parameters like begin_date, end_date etc., but with a variety of bodies. Example:
"Sales report by payment method" | select * from sales where met_pay = %pay_method% and date is between %begin_date% and %end_date%;
The Criteria API is mainly designed for that.
It provides an alternative way to define JPA queries.
With it you could build dynamic queries according to data provided at runtime.
To use it, you will need to create a custom repository implementation and not only an interface.
You will indeed need to inject an EntityManager to create needed objects to create and execute the CriteriaQuery.
You will of course have to write boilerplate code to build the query and execute it.
This section explains how to create a custom repository with Spring Boot.
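As a rough sketch of such an implementation (the Sale entity, its field names and the repository name are invented for illustration, not taken from the question), a custom repository fragment building the query at runtime could look like:
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

// SaleReportRepository.java
public interface SaleReportRepository {
    List<Sale> findSales(String payMethod, LocalDate begin, LocalDate end);
}

// SaleReportRepositoryImpl.java
public class SaleReportRepositoryImpl implements SaleReportRepository {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    public List<Sale> findSales(String payMethod, LocalDate begin, LocalDate end) {
        CriteriaBuilder cb = entityManager.getCriteriaBuilder();
        CriteriaQuery<Sale> query = cb.createQuery(Sale.class);
        Root<Sale> sale = query.from(Sale.class);

        // Only add the predicates for parameters the caller actually supplied
        List<Predicate> predicates = new ArrayList<>();
        if (payMethod != null) {
            predicates.add(cb.equal(sale.get("metPay"), payMethod));
        }
        if (begin != null && end != null) {
            predicates.add(cb.between(sale.<LocalDate>get("saleDate"), begin, end));
        }
        query.select(sale).where(predicates.toArray(new Predicate[0]));
        return entityManager.createQuery(query).getResultList();
    }
}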
About your edit:
"What I am trying to achieve is the possibility to create dynamic reports without touching the code. A table would have records of reports with names and SQL queries with default parameters like begin_date, end_date etc, but with a variety of bodies."
If the queries are written by hand in a plain text file, Criteria will not be the best choice, as JPQL/SQL queries and Criteria queries are really not written in the same way.
In the Java code, mapping the JPQL/SQL queries defined in a plain text file to a Map<String, String> structure would be more adapted.
But I have some doubts on the feasibility of what you want to do.
Queries may have specific parameters; in some cases you would have no choice other than modifying the code. Specificities in parameters will make query maintainability very hard and error prone. Personally, I would implement the need by allowing the client to select, for each field, whether a condition should be applied.
Then from the implementation side, I would use this user information to build my CriteriaQuery.
And there Criteria will do an excellent job: less code duplication, more adaptability in query building and, in addition, more type checks at compile time.
Spring Data repositories use the EntityManager beneath. Repository classes are just another layer so that the user doesn't have to worry about the details. But if a user wants to get his hands dirty, then of course Spring wouldn't mind.
That is when you can use EntityManager directly.
Let us assume you have a Repository Class like AbcRepository
interface AbcRepository extends JpaRepository<Abc, String> {
}
You can create a custom repository like
interface CustomizedAbcRepository {
    void someCustomMethod(User user);
}
The implementation class looks like
class CustomizedAbcRepositoryImpl implements CustomizedAbcRepository {

    @Autowired
    EntityManager entityManager;

    public void someCustomMethod(User user) {
        // You can build your custom query using Criteria or CriteriaBuilder
        // and then use that in EntityManager methods
    }
}
Just a word of caution: the naming of the customized interface and the customized implementing class is very important.
In recent versions of Spring Data, the ability to use the JPA Criteria API was added. For more information, see the blog post https://jverhoelen.github.io/spring-data-queries-jpa-criteria-api/.
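For example (the User entity and its city field are assumed names), a repository that extends JpaSpecificationExecutor accepts ad-hoc Specification objects assembled at runtime:
import java.util.List;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;

public interface UserRepository extends JpaRepository<User, Long>, JpaSpecificationExecutor<User> {
}

// At the call site (userRepository is the injected UserRepository),
// the condition can be built from runtime input
Specification<User> inCity = (root, query, cb) -> cb.equal(root.get("city"), "Berlin");
List<User> users = userRepository.findAll(inCity);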
I have a Java application using Hibernate 4.3.6. We use two different databases (one for regular deploy, other for unit/integration tests). There's a common db-function we'd like to use, but it's called different in each db dialect and Hibernate has no support for it. We've fixed this by simply creating subclasses for each Dialect and using:
this.registerFunction("normalizedFunctionName",
        new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "arbitraryFunctionName(?1, ?2)"));
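A sketch of such a dialect subclass (the H2 base dialect and class name below are placeholders; as described above, there is one subclass per target database):
import org.hibernate.dialect.H2Dialect;
import org.hibernate.dialect.function.SQLFunctionTemplate;
import org.hibernate.type.StandardBasicTypes;

public class CustomH2Dialect extends H2Dialect {
    public CustomH2Dialect() {
        // registerFunction is protected on Dialect, so it can be called from a subclass constructor
        registerFunction("normalizedFunctionName",
                new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "arbitraryFunctionName(?1, ?2)"));
    }
}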
I can now use normalizedFunctionName(?, ?) in HQL. However I'd like to use it when using the Criteria API, something like:
sessionFactory.getCurrentSession()
        .createCriteria(SomeClass.class)
        .add(
            Restrictions.lt("normalizedFunctionName(value, 'bla')", 3)
        );
But this doesn't work. Now I've discovered there's:
Restrictions.sqlRestriction("arbitraryFunctionName(value, 'bla') < 3");
But since that's native SQL and not HQL, I can't use it.
So, my questions are:
Is there an HQL version of the Restrictions.sqlRestriction() feature?
Or is there another alternative to accomplish what I'm trying to do?
Named parameters, just like JdbcTemplate from Spring
XML configuration for JDBC connection settings
XML configuration for queries. Something like Hibernate <sql-query>. See Named SQL queries for an example
I'm thinking of trying to build my own, but I thought I'd ask here, maybe it's already been done.
Obviously I don't want to use either an ORM or JdbcTemplate.
What about MyBatis?
I'm looking for the same thing; in the meantime, try out the Commons DbUtils utility:
http://commons.apache.org/dbutils/
Lightweight, open source and no dependencies.
Try JdbcSession from jcabi-jdbc. It's simple (as you want) and requires you to create a java.sql.DataSource beforehand, for example (using BoneCP and an H2 database):
BoneCPDataSource source = new BoneCPDataSource();
source.setDriverClass("org.h2.Driver");
source.setJdbcUrl("jdbc:h2:mem:x");

String name = new JdbcSession(source)
        .sql("SELECT name FROM user WHERE id = ?")
        .set(555)
        .select(new SingleHandler<String>(String.class));
I have a Spring/Hibernate webapp that has some integration tests that run on an in-memory HSQL database. Hibernate takes this blank database and creates all of my test tables and constraints thanks to hbm2ddl=create. However, I have a new bean that checks for a particular config value from the database during its afterPropertiesSet() method, and so when this bean is initialized, such a row needs to exist in the database.
Is there any good way to set up a Java/Spring/Hibernate equivalent of Rail's test fixtures? I'm trying to find a way to tell Hibernate "whenever you create this table, insert these rows immediately afterwards". I couldn't find a callback or a hook I could add, but maybe there's another way.
I'm trying to find a way to tell Hibernate "whenever you create this table, insert these rows immediately afterwards"
Since Hibernate 3.1, you can include a file called import.sql in the runtime classpath of Hibernate and at the time of schema export, Hibernate will execute the SQL statements contained in that file after the schema has been exported.
This feature has been announced in the Rotterdam JBug and Hibernate's import.sql blog post:
import.sql: easily import data in your unit tests
Hibernate has a neat little feature that is heavily under-documented and unknown. You can execute an SQL script during the SessionFactory creation, right after the database schema generation, to import data in a fresh database. You just need to add a file named import.sql in your classpath root and set either create or create-drop as your hibernate.hbm2ddl.auto property.
I use it for Hibernate Search in Action now that I have started the query chapter. It initializes my database with a fresh set of data for my unit tests. JBoss Seam also uses it a lot in the various examples.
import.sql is a very simple feature but is quite useful at times. Remember that the SQL might be dependent on your database (ah, portability!).
#import.sql file
delete from PRODUCTS
insert into PRODUCTS (PROD_ID, ASIN, TITLE, PRICE, IMAGE_URL, DESCRIPTION) values ('1', '630522577X', 'My Fair Lady', 19.98, '630522577X.jpg', 'My Fair blah blah...');
insert into PRODUCTS (PROD_ID, ASIN, TITLE, PRICE, IMAGE_URL, DESCRIPTION) values ('2', 'B00003CXCD', 'Roman Holiday ', 12.98, 'B00003CXCD.jpg', 'We could argue that blah blah');
For more information about this feature, check Eyal's blog, he wrote a nice little entry about it. Remember if you want to add additional database objects (indexes, tables and so on), you can also use the auxiliary database objects feature.
It is still not really documented.
In Hibernate 3.6, the configuration property that allows running arbitrary SQL commands is:
hibernate.hbm2ddl.import_files
See http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html_single/, and note that there is an error in the documentation: the property is import_files, with an s at the end.
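For example (the file name is an assumption), the relevant settings in hibernate.properties would look like:
hibernate.hbm2ddl.auto=create
hibernate.hbm2ddl.import_files=/import-reference-data.sql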
If you're talking about JUnit tests and using AbstractTransactionalDataSourceSpringContextTests, there are methods you can override, like onSetUpBeforeTransaction(), that provide a hook to pre-populate test table data etc.
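A rough sketch of that hook (the table, columns and values are invented; getConfigLocations() and the rest of the test are omitted):
public class ConfigValueIntegrationTest extends AbstractTransactionalDataSourceSpringContextTests {

    @Override
    protected void onSetUpBeforeTransaction() throws Exception {
        // jdbcTemplate is provided by the base class; insert the row the bean expects to find
        jdbcTemplate.update("insert into app_config (config_key, config_value) values (?, ?)",
                new Object[] {"required.key", "42"});
    }
}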
I just started integrating Hibernate Search with my Hibernate application. The data is indexed using the Hibernate Session every time I start the server.
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();

List<Book> books = session.createQuery("from Book as book").list();
for (Book book : books) {
    fullTextSession.index(book);
}

tx.commit(); //index is written at commit time
It is very awkward and the server takes 10 minutes to start.
Am I doing this the right way?
I wrote a scheduler which will update the indexes periodically. Will this update the existing index entries automatically, or create duplicate indices?
As detailed in the Hibernate Search guide, section 3.6.1, if you are using annotations (by now the default), the listeners which launch indexing on store are registered by default:
Hibernate Search is enabled out of the box when using Hibernate Annotations or Hibernate EntityManager. If, for some reason, you need to disable it, set hibernate.search.autoregister_listeners to false.
An example of how to turn them on by hand:
hibConfiguration.setListener("post-update", new FullTextIndexEventListener());
hibConfiguration.setListener("post-insert", new FullTextIndexEventListener());
hibConfiguration.setListener("post-delete", new FullTextIndexEventListener());
All you need to do is annotate the entities which you want to be indexed with the @Indexed(index = "fulltext") annotation, and then do the fine-grained annotation on the fields, as detailed in the user guide.
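A minimal sketch of such an entity (the class and field names are illustrative):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Entity
@Indexed(index = "fulltext")
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    // Field-level annotation: this property is what ends up in the Lucene index
    @Field
    private String title;
}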
So you should neither launch indexing by hand when storing, nor re-launch indexing when the application starts, unless you have entities which were stored before indexing was enabled.
You may get performance problems when you are storing an object which, say, has an "attachment", and so you are indexing it in the same scope as the transaction which is storing the entity. See here:
Hibernate Search and offline text extraction
for a solution that solves this problem.
Provided you are using an FSDirectoryProvider (which is the default), the Lucene index is persisted on disk. This means there is no need to index on every startup. If you have an existing database, you will of course want to create an initial index using the fullTextSession.index() functionality. However, this should not happen on application startup. Consider exposing some sort of trigger URL, or an admin interface.
Once you have the initial index, I would recommend using automatic indexing. This means the Lucene index gets automatically updated when a book gets created/updated/deleted. Automatic indexing should also be enabled by default.
I recommend you refer to the automatic and manual indexing sections in the online manual - http://docs.jboss.org/hibernate/stable/search/reference/en/html_single
--Hardy
I currently use Hibernate Search's automatic indexing with JPA and it works really well. To create your indexes initially you can just call the following:
FullTextEntityManager fullTextEntityManager =
        Search.getFullTextEntityManager(entityManager);
try {
    fullTextEntityManager.createIndexer().startAndWait();
} catch (InterruptedException e) {
    // Exception handling
}
where "entityManager" is just a javax.persistence.EntityManager. The above will index all fields marked with #Field for all entities marked as #Indexed.
Then as long as you do all your updates, etc, through the entity manager the indexes are automatically updated. You can then search as per usual but be sure to recreate your EntityManager on each search (you can use the EntityManagerFactory to do so).
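As an illustration of the search side (entity and field names are assumptions, so treat this as a sketch rather than the exact API for your Hibernate Search version):
FullTextEntityManager fullTextEntityManager =
        Search.getFullTextEntityManager(entityManagerFactory.createEntityManager());
QueryBuilder queryBuilder = fullTextEntityManager.getSearchFactory()
        .buildQueryBuilder().forEntity(Book.class).get();
org.apache.lucene.search.Query luceneQuery =
        queryBuilder.keyword().onField("title").matching("holiday").createQuery();
List<Book> hits = fullTextEntityManager
        .createFullTextQuery(luceneQuery, Book.class).getResultList();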