I can't be the only person who has this problem so I am looking for suggestions.
We run our apps on Oracle, but our integration tests use H2 for speedy in-memory testing; the database is built from DDL scripts at the start of the test run.
The problem is that the use/syntax of some DDL commands differs between Oracle and H2/HSQLDB. For example, today I lost some time before realising that 'grant select on ...' works on sequences in Oracle but only on tables in H2.
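For illustration, here's a minimal sketch of the kind of statement that trips this up (assuming the H2 driver is on the classpath; the sequence and user names are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class GrantOnSequenceDemo {
    public static void main(String[] args) throws SQLException {
        // in-memory H2 database, like the one our integration tests build from DDL scripts
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:ddltest", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE SEQUENCE order_seq");
            st.execute("CREATE USER app_reader PASSWORD 'secret'");
            try {
                // perfectly valid DDL on Oracle, but H2 only grants object privileges on tables/views
                st.execute("GRANT SELECT ON order_seq TO app_reader");
            } catch (SQLException e) {
                System.out.println("H2 rejected the Oracle-style grant: " + e.getMessage());
            }
        }
    }
}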
In a previous project we had an adapter to remove or translate such problematic commands, which meant that our test database ran quite different code from what we deploy to production. While everything is very thoroughly acceptance tested, it means that certain problems might not be spotted until quite late in the dev cycle.
On my latest project I sense I am going down the same path - so surely others must also have trodden it.
Any suggestions? We're using Java/Maven, so appropriate solutions welcome!
There isn't such an adapter to my knowledge.
Anyway, I'd say that you're not going to achieve your goals with such an adapter. For one, the feature set of Oracle is not easily found in any other solution (not that that's necessarily an advantage for Oracle).
I've searched for best practices, tools and libraries to test Oracle DB views, but did not find much. Some SQL editors have their own built-in ways, there are some libraries for SQL Server, and there's http://utplsql.org/ for PL/SQL, which seems to be the closest thing, but I'm not sure it fits my needs.
Problem statement: we have tons of business logic written as SQL views. Over time this has become very hard to maintain, and small changes can cause surprising regressions (for all the usual reasons).
I would like something that integrates with a standard Java build pipeline so that when we deploy any changes to the DB objects (which we currently do with Liquibase) we can run a full test suite and reduce the risk of regressions.
If a java solution is not possible, then a SQL or PL/SQL one might also be OK.
The first naïve approach I could think of would be something that does the following (rough sketch after the list):
creates some test tables with test data
mocks the view that needs to be tested, by replacing the source tables with the test ones
runs a "select *" from the view and compares it with the desired output
drops the test tables and the mocked view
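To make that concrete, here's a rough, self-contained sketch of the idea (using an in-memory H2 database purely for illustration; against Oracle the table/view DDL would come from our real scripts, and all names here are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Rough sketch of the "create test tables, build the view, compare output" idea.
public class ViewTestSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:viewtest", "sa", "");
             Statement st = conn.createStatement()) {
            // 1. create the source table(s) the view depends on, with known test data
            st.execute("CREATE TABLE orders (id INT, status VARCHAR(10), amount DECIMAL(10,2))");
            st.execute("INSERT INTO orders VALUES (1, 'OPEN', 10.00), (2, 'CLOSED', 99.99)");

            // 2. create the view under test (in a real test this would be the production DDL)
            st.execute("CREATE VIEW open_orders AS SELECT id, amount FROM orders WHERE status = 'OPEN'");

            // 3. query the view and compare with the desired output
            try (ResultSet rs = st.executeQuery("SELECT id, amount FROM open_orders ORDER BY id")) {
                if (!rs.next() || rs.getInt("id") != 1) {
                    throw new AssertionError("expected exactly order 1 to be open");
                }
                if (rs.next()) {
                    throw new AssertionError("unexpected extra rows in open_orders");
                }
            }

            // 4. drop everything again (trivial with an in-memory DB, explicit here for clarity)
            st.execute("DROP VIEW open_orders");
            st.execute("DROP TABLE orders");
        }
    }
}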
Is there any existing tool/library that does the above?
If not, is there any particular reason why the above approach would not work?
Thanks
I'm trying to do some crawling with Nutch and I'd like to test out Cassandra as a backend; however, using the latest version of Nutch and its dependencies, Cassandra throws a variety of errors as you move through the inject, generate, fetch, etc. process.
The errors are all related to actual problems in the code, not out-of-memory or configuration issues. I've fixed some of them by modifying code within gora-cassandra, but it's still not functional.
My question is: does a working version of these two projects exist? By working I mean you can run through inject, generate, fetch, parse, and updatedb on at least a small set of URLs without error.
Here's an example of one of the classes giving an error during fetch:
java.lang.NullPointerException
at org.apache.gora.cassandra.query.CassandraSuperColumn.getUnionIndex
I have used HBase as the backend and that just works, although HBase itself is a monster to manage, so that's why I'd like to test out Cassandra. However, I'm about to give up on this as I don't think I should have to modify gora-cassandra code just to get a basic example to run.
Thanks
According to this link (about 3 months old), it's just broken: http://lucene.472066.n3.nabble.com/Re-user-Digest-3-Jun-2017-19-27-20-0000-Issue-2758-td4339060.html
It's unclear why backends that do not work are even documented.
HBase is most widely used, followed by MongoDB... on the other end of the spectrum, Cassandra is least used and broken. It has not been maintained for quite some time... and yes this is reflected by use of Super Columns. We are currently re-writing the backend as part of a GSoC project.
I would agree with the person who made the original statement: it's unclear why backends that do not work are even documented.
Really tired of this project and its lack of usable documentation.
I am managing a web-based project based on Java and Subversion (SVN) with 8 developers. Unfortunately, managing DB changes is a big problem for the project. In our case, any developer may update the tables and forget to put the change scripts in SVN, so it takes a lot of our time to track down and debug an issue caused by an out-of-date table or view.
So, I wonder, is there any method, tool or plug-in for Oracle 11g to keep all DB changes as scripts for us somewhere, e.g. in SVN?
Edit 1: Getting a dump of the whole DB does not solve my problem, because in the real environment I cannot discard customer data and go back to a new dump.
I think this is just what you need: an open source database change management system, Liquibase.
http://www.liquibase.org/
Do not store change scripts, only scripts that drop and recreate all your objects. Developers should change and run those scripts on a local instance, run automated unit tests, and then check in their changes.
Rebuilding from scratch is so much better than constantly running alter scripts. You'll never be in control of your application until everyone can easily rebuild the entire system from scratch.
(I assume you're asking about development on trunk, where you have lots of little changes. For major upgrades, like moving from version 1.1 to version 1.2, you'll still need to use change scripts to help preserve data.)
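A rough sketch of what wiring those drop-and-recreate scripts into a test or build step could look like (the db/ddl directory, numeric script prefixes and JDBC details are assumptions for illustration, not a convention any tool mandates):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: rebuild the whole schema from checked-in DDL scripts before the test suite runs.
public class SchemaRebuilder {
    public static void rebuild(String jdbcUrl, String user, String password) throws Exception {
        List<Path> scripts;
        try (Stream<Path> files = Files.list(Paths.get("db/ddl"))) {
            scripts = files.filter(p -> p.toString().endsWith(".sql"))
                           .sorted() // rely on a numeric prefix: 001_tables.sql, 002_views.sql, ...
                           .collect(Collectors.toList());
        }
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             Statement st = conn.createStatement()) {
            for (Path script : scripts) {
                // naive split on ';' -- good enough for plain DDL, not for PL/SQL blocks
                for (String sql : Files.readString(script).split(";")) {
                    if (!sql.isBlank()) {
                        st.execute(sql);
                    }
                }
            }
        }
    }
}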
A cheaper (and worse) solution than Liquibase, relying on Oracle's ability to export only the schema metadata, could be a post-commit hook which:
runs expdp ... DUMPFILE=file.dmp CONTENT=METADATA_ONLY into a directory that is a working copy (or a special location in the repository)
commits this file.dmp
There are two aspects to maintaining database changes. One, as you mentioned, is in the form of scripts that can be applied to an older schema to upgrade it. However, this is only part of the answer, as it is really hard for a developer to look at scripts, parse them, and figure out how recent schema changes may affect their work.
So, in addition to change scripts, I would suggest that you also check in a human-readable version of the database metadata, in a text file. SchemaCrawler is one such free tool that is designed for this purpose, and produces rich metadata information in a format that is designed to be diffed. I have found that database metadata changes over time become traceable if you make it a nightly process to automate check-ins of schema metadata.
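If you want to see what such a snapshot buys you before adopting a tool, even a bare-bones dump via plain JDBC DatabaseMetaData gives you a diffable text file to commit each night (the connection arguments and output file name below are placeholders, and SchemaCrawler produces far richer output than this):

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Dump table/column metadata to a stable, diff-friendly text file for nightly check-in.
// args: jdbcUrl user password schemaPattern
public class SchemaSnapshot {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2]);
             PrintWriter out = new PrintWriter("schema-snapshot.txt", "UTF-8")) {
            DatabaseMetaData meta = conn.getMetaData();
            try (ResultSet cols = meta.getColumns(null, args[3], "%", "%")) {
                while (cols.next()) {
                    out.printf("%s.%s %s(%d) nullable=%s%n",
                            cols.getString("TABLE_NAME"),
                            cols.getString("COLUMN_NAME"),
                            cols.getString("TYPE_NAME"),
                            cols.getInt("COLUMN_SIZE"),
                            cols.getString("IS_NULLABLE"));
                }
            }
        }
    }
}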
Please try this tool: www.dbapply.com
It has both a GUI for manual deployment of scripts from a Subversion repository and a command-line interface for continuous integration.
It supports Subversion branches.
It works on Windows and Linux (you need JRE 8).
My application is always evolving, so occasionally - when the version upgrades - some tables need to be created/altered/deleted, some data modified, etc. Generally, some SQL code needs to be executed.
Is there a Java library that can be used to keep my database structure up to date (by analyzing something like "db structure version" information and executing the custom SQL code needed to update from one version to another)?
Also it would be great to have some basic actions (like add/remove column) ready to use with minimal configuration, i.e. name/type and no SQL code.
Try DBDeploy. Although I haven't used it in the past, it sounds like this project would help in your case. DBDeploy is a database refactoring manager that:
"Automates the process of establishing which database refactorings need to be run against a specific database in order to migrate it to a particular build."
It is known to integrate with both Ant and Maven.
Try Liquibase.
Liquibase is an open source (Apache 2.0 Licensed), database-independent library for tracking, managing and applying database changes. It is built on a simple premise: All database changes are stored in a human readable yet trackable form and checked into source control.
Supported features:
Extensibility
Merging changes from multiple developers
Code branches
Multiple Databases
Managing production data as well as various test datasets
Cluster-safe database upgrades
Automated updates or generation of SQL scripts that can be approved and applied by a DBA
Update rollbacks
Database "diff"s
Generating starting change logs from existing databases
Generating database change documentation
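For reference, driving Liquibase from Java looks roughly like this (the API has shifted a bit between Liquibase versions, so treat it as a sketch; the changelog path and connection arguments are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class RunMigrations {
    public static void main(String[] args) throws Exception {
        // args: jdbcUrl user password; the changelog lives on the classpath
        try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2])) {
            Database db = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(conn));
            Liquibase liquibase = new Liquibase("db/changelog.xml",
                    new ClassLoaderResourceAccessor(), db);
            liquibase.update(""); // apply all pending change sets (no contexts)
        }
    }
}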
We use a piece of software called Liquibase for this. It's very flexible and you can set it up pretty much however you want it. We have it integrated with Maven so our database is always up to date.
You can also check Flyway (400 questions tagged on SO) or MyBatis (1049 questions tagged). To add to the comparison, the other options mentioned: Liquibase (663 questions tagged) and DBDeploy (24 questions tagged).
Another resource that you may find useful is the feature comparison on the Flyway website (there are other related projects mentioned there).
You should take a look at OR mapping libraries, e.g. Hibernate.
Most ORM mappers have logic to do schema upgrades for you. I have successfully used Hibernate, which gets at least the basic stuff right automatically.
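For the basic stuff, the switch that does this in Hibernate is hibernate.hbm2ddl.auto; a minimal sketch, with a made-up entity and H2 connection settings for illustration (note that "update" only ever adds tables/columns, it never drops or alters existing ones):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateSchemaUpdateDemo {

    @Entity
    public static class Customer {  // made-up entity, just so there is something to map
        @Id
        Long id;
        String name;
    }

    public static void main(String[] args) {
        Configuration cfg = new Configuration()
                .setProperty("hibernate.connection.url", "jdbc:h2:mem:app")
                .setProperty("hibernate.connection.username", "sa")
                .setProperty("hibernate.connection.password", "")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect")
                .setProperty("hibernate.hbm2ddl.auto", "update") // add missing tables/columns at startup
                .addAnnotatedClass(Customer.class);

        SessionFactory sessionFactory = cfg.buildSessionFactory();
        // building the factory has brought the schema up to date as a side effect
        sessionFactory.close();
    }
}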
I've been working on a Django project using South to track and manage database schema changes. I'm starting a new Java project using Google Web Toolkit and wonder if there is an equivalent tool. For those who don't know, here's what South does:
Automatically recognize changes to my Python database models (add/delete columns, tables etc.)
Automatically create SQL statements to apply those changes to my database
Track the applied schema migrations and apply them in order
Allow data migrations using Python code. For example, splitting a name field into a first-name and last-name field using the Python split() function
I haven't decided on my Java ORM yet, but Hibernate looks like the most popular. For me, the ability to easily make database schema changes will be an important factor.
Wow, South sounds pretty awesome! I'm not sure of anything out-of-the-box that will help you nearly as much as it does; however, if you choose Hibernate as your ORM solution you can build your own incremental data migration suite without a lot of trouble.
Here is the approach I used in my own project; it worked fairly well for me over a couple of years and several updates/schema changes (a rough sketch of the version-gated part follows the list):
Maintain a schema_version table in the database that simply defines a number that represents the version of your database schema. This table can be handled outside of the scope of Hibernate if you wish.
Maintain the "current" version number for your schema inside your code.
When the version number in code is newer than what's in the database, you can use Hibernate's SchemaUpdate utility, which will detect any schema additions (note: just additions) such as new tables, columns, and constraints.
Finally, I maintained a "script", if you will, of migration steps that were more than just schema changes, identified by the schema version number they were required for. For instance, new columns needed default values applied, or something of that nature.
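To give a feel for the shape of this, here's a rough sketch of the version-gated part (the table name, step contents and numbering are made up; the schema-addition step is wherever you invoke Hibernate's SchemaUpdate, which I've left out since its API varies by Hibernate version):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Map;
import java.util.TreeMap;

// Compare the schema_version table against the version the code expects,
// then run any outstanding migration steps in order.
public class IncrementalMigrator {

    interface MigrationStep {
        void apply(Connection conn) throws Exception;
    }

    // Steps keyed by the schema version they bring the database up to.
    private final Map<Integer, MigrationStep> steps = new TreeMap<>();

    public IncrementalMigrator() {
        steps.put(2, conn -> {
            try (Statement st = conn.createStatement()) {
                // e.g. backfill a default after SchemaUpdate added the column
                st.execute("UPDATE customer SET status = 'ACTIVE' WHERE status IS NULL");
            }
        });
        // steps.put(3, ...), and so on as the schema evolves
    }

    public void migrate(Connection conn, int codeVersion) throws Exception {
        int dbVersion = currentVersion(conn);
        for (Map.Entry<Integer, MigrationStep> e : steps.entrySet()) {
            if (e.getKey() > dbVersion && e.getKey() <= codeVersion) {
                e.getValue().apply(conn);
                try (Statement st = conn.createStatement()) {
                    st.execute("UPDATE schema_version SET version = " + e.getKey());
                }
            }
        }
    }

    private int currentVersion(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version FROM schema_version")) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}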
This may sound like a lot of work, especially when coming from an environment that took care of a lot of it for you, but you can get a setup like this rolling pretty quickly with Hibernate, and it is pretty easy to add onto as you go. I never ended up making any changes to my incremental update framework over that time except to add new migration tasks.
Hopefully someone will come along with a good answer for a more "hands-off" approach, but I thought I'd share an approach that worked pretty well for me.
Good luck to you!
As I'm looking for the same thing, here's what I've found so far.
We first used DBDeploy. It manages most of the stuff for you, but you will have to write all the transition scripts yourself! That means every change you make has to be in its own script, which you will have to write from scratch. Not very handy, but it works very reliably.
The second thing I encountered is Liquibase. It stores the configuration in one single XML file. Not very intuitive to read, but manageable. Plus there is an IntelliJ IDEA plugin for it. At the time of writing it still has some minor issues, but as the author assured me, they will be fixed soon.
The perfect solution would be to get south working with your java environment. That really would be a tool to marry :D
Maybe try Flyway. Seems like a good alternative.
I've been thinking about using django-jython just for db migrations in our legacy Java application. The latest Jython version is 2.5.4rc1, but I think I can mitigate the risk by just using it for South migrations.
Especially since I can use inspectdb to generate the models for me. And then replace parts of the Java with Python "seamlessly".
If you're using Hibernate, then check out Liquibase:
http://www.liquibase.org/databases.html
It's been around for 10 years, so it's pretty solid. It may support other ORMs; just have a dig around on their website. Check out the Liquibase + Hibernate extension here:
https://github.com/liquibase/liquibase-hibernate