I'm making a Spring Roo application. I'm using Hibernate with a reverse-engineered MSSQL DB, and I want to create a different version of list.jspx (similar, but featuring a WHERE clause) called listterminated. What steps do I need to take to create a new view that is populated with the same info as list.jspx, just narrowed down by the SQL?
It is not a good idea to modify the Roo-generated .aj file; the Roo documentation states that Roo can in fact delete this file. Sadly, I don't have a solution for you.
I was able to figure this out, if anyone ever needs to do this. Create whatever methods with the proper SQL in the respective JPA active record file. In the Controller_Roo_Controller.aj file create a method (I just copied list, renamed it, and referenced the methods I created in the .java file). Then be sure to add the reference in views.xml. Finally, create the actual .jspx file under the views folder.
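For illustration, a rough sketch of those two Java pieces, assuming a hypothetical Employee entity with a boolean terminated column (the names are made up, and the imports and surrounding class bodies are omitted):

// In Employee.java (the entity's .java file): finder methods that mirror
// Roo's generated findEmployeeEntries/countEmployees, narrowed by the WHERE clause.
public static List<Employee> findTerminatedEmployeeEntries(int firstResult, int maxResults) {
    return entityManager()
            .createQuery("SELECT e FROM Employee e WHERE e.terminated = true", Employee.class)
            .setFirstResult(firstResult)
            .setMaxResults(maxResults)
            .getResultList();
}

public static long countTerminatedEmployees() {
    return entityManager()
            .createQuery("SELECT COUNT(e) FROM Employee e WHERE e.terminated = true", Long.class)
            .getSingleResult();
}

// In the controller: a copy of the generated list() method, renamed, that
// calls the new finders and renders the new view.
@RequestMapping(value = "/terminated", method = RequestMethod.GET)
public String listTerminated(@RequestParam(value = "page", required = false) Integer page,
                             @RequestParam(value = "size", required = false) Integer size,
                             Model uiModel) {
    int sizeNo = size == null ? 10 : size;
    int firstResult = page == null ? 0 : (page - 1) * sizeNo;
    uiModel.addAttribute("employees", Employee.findTerminatedEmployeeEntries(firstResult, sizeNo));
    float nrOfPages = (float) Employee.countTerminatedEmployees() / sizeNo;
    uiModel.addAttribute("maxPages",
            (int) ((nrOfPages > (int) nrOfPages || nrOfPages == 0.0) ? nrOfPages + 1 : nrOfPages));
    return "employees/listterminated";
}

The views.xml entry and the listterminated.jspx file under the views folder are what make the new page reachable, as described above.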
In my Spring project I use the @Entity annotation and let Hibernate create the database tables automatically. Now I need a trigger on my database. Since you cannot combine letting Hibernate create the schema with initializing the schema via schema.sql, I don't know how to create the database trigger. I cannot create the trigger in schema.sql because the table is created afterwards by Hibernate.
Since I don't want to rewrite the whole project, I was wondering how to create a database trigger at the Java level.
What would be the professional way? It would be nice if you could provide a simple code example.
I tried @EntityListeners, but you cannot inject CrudRepository beans into entity listeners, so this option might not be the best. That's why I'm looking for other solutions.
There are several options if you are absolutely sure that triggers are what you need.
First of all, there is no "Java way" of creating triggers, at least none that I am aware of. You could implement something like this yourself, but it would be much more complicated than simply maintaining the trigger definitions in SQL files and applying them whenever needed.
Second, why the need to create triggers with Spring? Triggers are nothing more than code that encapsulates some business logic. As such, it might be a good idea to maintain them in separate SQL files and apply them whenever a trigger's code is updated. If what you are looking for is to apply them automatically, you should look at tools like Liquibase that enable this kind of automated task.
If you insist on applying triggers with Spring, then you might consider the automatic database initialization provided by Spring, which can automatically run SQL files containing DDL/DML statements. For example, if you use MySQL you might have a file called schema-mysql.sql under src/main/resources with your trigger definitions. Note that this will execute the SQL files every time the application starts, so you will have to handle that with statements specific to your database, like DROP TRIGGER IF EXISTS my_trigger; in the case of MySQL.
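If you would rather keep the trigger definition at the Java level, as the question asks, the same idempotent statements can also be executed from application code at startup. A minimal sketch, assuming a Spring Boot style setup with a JdbcTemplate and MySQL syntax (the trigger, table, and column names are all made up):

import org.springframework.boot.CommandLineRunner;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class TriggerInitializer implements CommandLineRunner {

    private final JdbcTemplate jdbcTemplate;

    public TriggerInitializer(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void run(String... args) {
        // Runs after Hibernate has created/updated the schema, so the table already exists.
        jdbcTemplate.execute("DROP TRIGGER IF EXISTS trg_orders_audit");
        jdbcTemplate.execute(
                "CREATE TRIGGER trg_orders_audit AFTER INSERT ON orders "
                + "FOR EACH ROW INSERT INTO orders_audit (order_id, created_at) "
                + "VALUES (NEW.id, NOW())");
    }
}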
In my past experience, whenever we needed triggers we simply maintained them in separate SQL files and applied them automatically using Liquibase, but these were rare occasions, given that using triggers couples you tightly to the database vendor, which brings other kinds of problems to the table.
Disclaimer: I am a noob in Spring. What I am asking may be very "odd" as I don't even know what I don't know.
I am trying to create a batch data movement/manipulation tool (may I say, an ETL tool) using Java. Someone suggested I check out Spring Batch, which I really liked, as it has many libraries for data reading/writing and processing.
But my trouble is that my data sources (flat files or tables) are not fixed. There is a frontend where the user will select which flat file or database table(s) they want to load, and the program will load that automatically. This means the usual things like:
Source/target entity structures
Source or target database URL/DSN
Job parameters, etc.
are not pre-determined in my case; they are determined at runtime. But, so far, all of the Spring Batch examples I have seen configure this information in XML. I can't do that, as that would make the information static.
My question is: if I do not want to use the Spring container (and all its XML-based bean configuration) but still want to use Spring Batch to take advantage of its batch processing libraries, will that be possible/viable?
No, you need the Spring container to use Spring Batch, along with its XML- or annotation-based bean configuration. However, what you are trying to do is achievable; you just need to find a way to make it configurable by using parameters in Spring Batch. You can take any example from the internet and start working on it to make it configurable.
For instance, you can use Spring's flat-file reader by simply writing a custom mapper, which saves you the effort of creating and maintaining file-reading logic.
You can have a writer whose query you build dynamically at runtime, based on your table and file.
The examples show everything in XML to keep them simple to understand; however, if you explore a little, almost everything can be done at runtime.
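To give an idea of what "configurable at runtime" can look like, here is a minimal sketch of a step-scoped flat-file reader whose file path and column list arrive as job parameters instead of being fixed in XML (the parameter names and the generic Map item type are just one possible choice):

import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class DynamicReaderConfig {

    @Bean
    @StepScope
    public FlatFileItemReader<Map<String, Object>> reader(
            @Value("#{jobParameters['inputFile']}") String inputFile,
            @Value("#{jobParameters['columnNames']}") String columnNames) {

        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
        tokenizer.setNames(columnNames.split(","));   // column list chosen by the user at runtime

        DefaultLineMapper<Map<String, Object>> lineMapper = new DefaultLineMapper<>();
        lineMapper.setLineTokenizer(tokenizer);
        // Custom mapper: each row becomes a generic Map instead of a fixed bean class.
        lineMapper.setFieldSetMapper(fieldSet -> {
            Map<String, Object> row = new HashMap<>();
            for (String name : fieldSet.getNames()) {
                row.put(name, fieldSet.readString(name));
            }
            return row;
        });

        FlatFileItemReader<Map<String, Object>> reader = new FlatFileItemReader<>();
        reader.setResource(new FileSystemResource(inputFile));
        reader.setLineMapper(lineMapper);
        return reader;
    }
}

A JdbcBatchItemWriter (or a plain JdbcTemplate-based writer) can be built the same way, with the INSERT statement assembled at runtime from the table the user selected.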
I have a much-used project that I am currently updating. There are several places where this project can be installed, and in the future it is not certain which version is used where, nor which version it might be updated to. Right now they are all the same, though.
My problem stems from the fact that there may be many changes to the Hibernate entity classes, and it must be easy to update to a newer version without any hassle and without loss of database content. Just replace the WAR, start it, and it should migrate itself.
To my knowledge, Hibernate does no altering of tables unless hibernate.hbm2ddl.auto=create, which actually throws away all the data.
So right now, when the Spring context has fully loaded, it executes a bean that migrates the database to the current version by going through all the changes from versionX to versionY (the previous version is saved in the database) and manually altering the tables.
It's not much hassle doing a few hard-coded ALTER TABLE statements to add some columns, but when it comes to adding complete new tables, it feels silly to have to write all that...
So my question(s) is this:
Is there any way to send an entity class and a dialect to Hibernate code somewhere, and get back a valid SQL query for creating a table?
And even better, somehow create an SQL string for adding a column to a table, dialect-safe?
I hope this is not a silly question, and I have not missed something obvious when it comes to Hibernate...
Have you tried hibernate.hbm2ddl.auto=update? It keeps the database and its data, and appends only the columns and tables you have changed in your entities.
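For reference, a minimal sketch of setting that property programmatically when bootstrapping JPA ("myUnit" is a made-up persistence-unit name; normally the property would live in persistence.xml or your Spring configuration):

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Bootstrap {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Let Hibernate add missing tables and columns without dropping existing data.
        props.put("hibernate.hbm2ddl.auto", "update");

        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit", props);
        emf.close();
    }
}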
I don't think you'll be able to fully automate this. Hibernate has the hbm2ddl tool (available as an Ant task or a Maven plugin) to generate the DDL statements required to create an empty database from your Hibernate configuration, but I'm not aware of any tools that can do an automatic "diff" between two versions. In any case, you're probably better off doing the diff carefully by hand, as only you know your object model well enough to pick the right defaults for new properties of existing entities, etc.
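For what it's worth, that tooling sits on top of Hibernate's schema classes, so you can also ask for the CREATE statements programmatically. A rough sketch against the Hibernate 3.6/4.x Configuration API (the method is gone in Hibernate 5, and MyEntity and the MySQL dialect are only placeholders):

import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.MySQL5Dialect;

public class DdlDump {
    public static void main(String[] args) {
        Configuration cfg = new Configuration();
        cfg.addAnnotatedClass(MyEntity.class); // hypothetical mapped entity

        // Ask Hibernate for the CREATE statements for the chosen dialect.
        for (String statement : cfg.generateSchemaCreationScript(new MySQL5Dialect())) {
            System.out.println(statement + ";");
        }
    }
}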
Once you have worked out your diffs, you can use a tool like Liquibase to manage them and to actually apply the updates to a database at application start time.
Maybe you should try a different approach: instead of generating a schema update at runtime, make one 'by hand' (it could be based on a Hibernate-generated script, though).
Store a version number in the database and create an update script for each next version. The only thing you have to do then is determine which version the database is currently at and sequentially run the necessary update scripts to bring it up to the current version.
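A minimal sketch of that pattern (the schema_version table, the script naming scheme, and the use of Spring's JdbcTemplate/ScriptUtils are just one possible way to set it up):

import java.sql.Connection;

import javax.sql.DataSource;

import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.init.ScriptUtils;

public class SchemaUpgrader {

    private static final int CURRENT_VERSION = 5; // the version this build of the application expects

    private final DataSource dataSource;
    private final JdbcTemplate jdbc;

    public SchemaUpgrader(DataSource dataSource) {
        this.dataSource = dataSource;
        this.jdbc = new JdbcTemplate(dataSource);
    }

    public void upgrade() throws Exception {
        int installed = jdbc.queryForObject("SELECT version FROM schema_version", Integer.class);

        // Run every script between the installed version and the current one, in order.
        for (int v = installed + 1; v <= CURRENT_VERSION; v++) {
            try (Connection conn = dataSource.getConnection()) {
                ScriptUtils.executeSqlScript(conn,
                        new ClassPathResource("db/upgrade-to-" + v + ".sql"));
            }
            jdbc.update("UPDATE schema_version SET version = ?", v);
        }
    }
}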
To make it extra robust, you can write a unit/integration test which runs every possible database update path and checks the integrity of the resulting database.
I used this method for an application I built, and it works flawlessly. Another example of an implementation of this pattern is Android: they have an upgrade method in their API:
http://developer.android.com/reference/android/database/sqlite/SQLiteOpenHelper.html#onUpgrade(android.database.sqlite.SQLiteDatabase, int, int)
Don't use Hibernate's DDL generation; it throws away your data if you want to migrate. I suggest you take a look at Liquibase. Liquibase is database version control: it works with changesets. Each changeset can be created manually, or you can let Liquibase read your Hibernate config and generate one.
Liquibase can be started via Spring, so it should fit right in with your project ;-)
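A minimal sketch of wiring it in via Java config (the changelog path is made up; Spring Boot can also configure this automatically when a changelog is on the classpath):

import javax.sql.DataSource;

import liquibase.integration.spring.SpringLiquibase;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LiquibaseConfig {

    // Runs all pending changesets against the DataSource at application start.
    @Bean
    public SpringLiquibase liquibase(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:db/changelog/db.changelog-master.xml");
        return liquibase;
    }
}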
In Spring Roo I used this tutorial with my custom XSD to generate objects.
After that I used the command controller all ~.web; the controller is generated, but without CRUD functions.
If I create the objects manually in Roo, a controller with CRUD functions is generated. Any idea what the problem is?
The XSD schema file is important in my case for REST data exchange.
The tutorial you're referring to indeed explains how you can create a Java (domain) model based on a provided XML Schema, but the controller all ~.web command currently (version 1.1.0) only creates controllers and corresponding CRUD functions for actual Roo (database) entities. As the generated Java classes are not marked as Roo entities, the controller command will not create the CRUD functions you expect, which it will do, as you stated, for manually created entities, since these are marked as Roo entities (see the @RooEntity annotation on them).
As the tutorial also states, you will need to manually update your controller and view (*.jspx) files to implement the CRUD functionality when you use the Spring Roo JAXB addon. I know from checking the forum and the Jira issues that there are some ideas about having Spring Roo create basic CRUD functionality for normal (non-entity) beans as well (see issue ROO-344 and its related issue ROO-277), but these are currently only ideas and most likely won't be implemented in the near future. So, when using an XML schema as the base for your Spring Roo domain model, you'll still need to do quite a bit of manual coding to get a basic CRUD application, as opposed to using a database as the base for your domain model, where it basically comes down to executing a couple of commands in the Roo shell and you're done.
If your XML schema is merely a definition of your domain model, and you actually do want your data to be stored in and retrieved from the database by your application, as opposed to calling a REST web service for retrieving and storing changes, you might try using the JAXB addon to generate the model and then annotating the generated classes. I haven't done that myself before, so I'm not sure whether it will work, but it might be worth trying.
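To illustrate that last suggestion, a sketch of what annotating one of the generated classes might look like (Customer and its fields are made up; the annotation packages are those of Roo 1.1.x, so double-check them against your Roo version):

import javax.persistence.Entity;

import org.springframework.roo.addon.entity.RooEntity;
import org.springframework.roo.addon.javabean.RooJavaBean;
import org.springframework.roo.addon.tostring.RooToString;

// Marking the JAXB-generated class as a Roo entity so the controller
// scaffolding picks it up and generates the CRUD functions for it.
@Entity
@RooEntity
@RooJavaBean
@RooToString
public class Customer {

    private String name;

    private String email;
    // Roo's ITDs then add the id/version fields, accessors and the
    // active-record persistence methods.
}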
Spring Roo generates AspectJ (.aj) files next to the .java source files, so you won't see the methods in your source files. They live in the .aj files, but after compilation they are present in the generated .class files.
I'm currently working on a desktop application that uses JPA/Hibernate to persist data in an H2 database. I'm curious what my options are if I need to make changes to the database schema in the future for some reason. Maybe I'll have to introduce new entities, remove them, or just change the types of properties in an entity.
Is there support in JPA/Hibernate to do this?
Would I have to manually script a solution?
I usually let Hibernate generate the DDL during development and then create a manual SQL migration script when deploying to the test server (which I later use for UAT and live servers as well).
The DDL generation in Hibernate does not offer any support for data migration; if you do as little as add a non-null field, DDL generation cannot help you.
I have yet to find any truly useful migration abstraction to help with this.
There are a number of libraries (have a look at this SO question for examples), but when you're doing something like splitting an existing entity into a hierarchy using joined inheritance, you're always back to plain SQL.
Maybe I'll have to introduce new entities, remove them or just change the types of properties in an entity.
I don't have any experience with it, but Liquibase provides some Hibernate integration and can compare your mappings against a database and generate the appropriate change log:
The LiquiBase-Hibernate integration records the database changes required by your current Hibernate mapping to a change log file which you can then inspect and modify as needed before executing.
Still looking for an opportunity to play with it and find some answers to my pending questions:
Does it work when using annotations?
Does it require a hibernate.cfg.xml file (although this wouldn't be a big impediment)?
Update: OK, both questions are covered by Nathan Voxland in this response, and the answers are:
Yes, it works when using annotations.
Yes, it requires a hibernate.cfg.xml (for now).
There are two options:
db-to-hibernate - mirror DB changes to your entities manually. This means your DB is "leading"
hibernate-to-db - either use hibernate.hbm2ddl.auto=update, or manually change the DB after changing your entity - here your object model is "leading"