Until now I have been fetching data from the database into my Java program using JDBC: I create a connection class and in it call DriverManager.getConnection(). As I move towards JPA, I have learnt that a persistence.xml file is needed to connect to MySQL. In a project that uses JPA, am I supposed to create this connection class again, and should I place the connector jar file in that project as well?
Excuse my unawareness of the concept; I am still in the learning phase. Any help is appreciated as I am new to this.
A few things need to be cleared up here.
JPA is a specification. Different ORM technologies implement it; for example, Hibernate implements the JPA specification. The specification defines how things should work.
Hibernate is an ORM technology. It binds your plain Java objects (entities) to your database tables, and a table's columns become the entity's fields. For example, if a table has an ID column of a numeric type, the entity will have a Long id field; the table name defaults to the entity name, or can be overridden as described in the Hibernate docs.
The third piece is the database connector (the JDBC driver). Yes, there is a different connector for each database; for MySQL it is the Connector/J jar, which connects your code to the database. You can think of it as a communication layer between your code and the database: your code talks to the database through this connector, so the jar has to be on the classpath of every project that talks to MySQL.
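To make that concrete, here is a minimal sketch (the persistence-unit name "my-unit" and the User entity are made up for illustration): with JPA you no longer write a DriverManager-based connection class; the driver class, JDBC URL and credentials go into persistence.xml, you bootstrap an EntityManagerFactory from it, and the MySQL connector jar simply sits on the project's classpath:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JpaBootstrapExample {
    public static void main(String[] args) {
        // "my-unit" is a made-up persistence-unit name; it must match the
        // <persistence-unit name="..."> entry in META-INF/persistence.xml, which is
        // where the MySQL JDBC URL, user, password and driver class are declared.
        // The MySQL Connector/J jar still has to be on the project's classpath.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit");
        EntityManager em = emf.createEntityManager();
        try {
            // JPA opens and manages the JDBC connection itself; no hand-written
            // DriverManager.getConnection() class is needed any more.
            Long count = em.createQuery("select count(u) from User u", Long.class)
                           .getSingleResult();
            System.out.println("users: " + count);
        } finally {
            em.close();
            emf.close();
        }
    }
}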
Hope you now get the concept of why the MySQL connector is needed. Happy coding :)
I need suggestions/help on the issue below:
I am working on an Oracle migration for a Java web application. I want to move it from Oracle 9i to 11g.
The environment is:
JDK – 1.4.2
WebLogic 8.1 (SP6)
Database to connect to – Oracle 11g
weblogic.db.url=jdbc:oracle:thin:#${weblogic.db.host}:${weblogic.db.port}/
weblogic.db.driver=oracle.jdbc.OracleDriver
Oracle JDBC Driver version - "10.2.0.2.0"
When I query any table that has a CLOB datatype, the query fails to execute with the following error:
“Cannot assign value of type 'weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB' to property 'description_en' of type 'oracle.sql.CLOB'”.
I have read in the Oracle docs that WebLogic 8.1 (SP6) supports Oracle 11g.
Any query that returns something other than a CLOB works fine; the issue is only with the CLOB datatype, and only with Oracle 11g :(
The same code works fine when connected to Oracle 9i; the only problem is with Oracle 11g.
My assumption is that I may be missing some extra wrappers/extensions needed to map the CLOB datatype, since I think there is no direct support from WebLogic 8.1.
I am also thinking along these lines:
Perhaps the application includes its own Oracle jar file and so is not using the data source provided by WebLogic. But I do not know how to ascertain this.
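I'm thinking of checking it along these lines (a rough sketch; the JNDI name jdbc/MyAppDS is just a placeholder for whatever the application actually binds to), by printing the class and driver version of the connection the application gets:
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DriverCheck {

    // Meant to be called from inside the web app (e.g. a test servlet), where the
    // container's JNDI context is available. "jdbc/MyAppDS" is a placeholder name.
    public static void logDriverInfo() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyAppDS");
        Connection con = ds.getConnection();
        try {
            DatabaseMetaData md = con.getMetaData();
            // A WebLogic-managed connection reports a weblogic.jdbc.wrapper.* class,
            // while a driver bundled inside the application reports oracle.jdbc.* directly.
            System.out.println("connection class : " + con.getClass().getName());
            System.out.println("driver           : " + md.getDriverName()
                    + " " + md.getDriverVersion());
        } finally {
            con.close();
        }
    }
}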
Please help!
You need to upgrade your Java version. Java 1.4 is not supported by modern Oracle drivers.
Also, it is best practice to add the Oracle driver jars to the container classpath, and not include them in your application. Then the application needs to reference a datasource provided by the container. If you plan some Oracle-specific fireworks, you may need the driver jars at compile time. You need to mark them as "provided" in your Maven pom.xml.
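A rough sketch of what "reference a datasource provided by the container" looks like in code (the JNDI name jdbc/OracleDS is only an example; use whatever name you configure in WebLogic):
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ContainerDataSourceExample {

    // "jdbc/OracleDS" is only an example JNDI name. The ojdbc jar itself lives on the
    // container classpath (or is marked "provided" in the pom), not inside the application.
    public Connection openConnection() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/OracleDS");
        // The application only compiles against the standard java.sql / javax.sql
        // interfaces; the container supplies the actual driver at runtime.
        return ds.getConnection();
    }
}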
I use JDBC and created an H2 database called usaDB from an SQL script. Then I filled all the tables via JDBC.
The problem is that after I connect to usaDB at localhost:8082 I cannot see my tables in the left-hand tree. There is only the INFORMATION_SCHEMA database and the rootUser I specified when creating usaDB.
How can I view the contents of the tables in my H2 database?
I tried the query SELECT * FROM INFORMATION_SCHEMA.TABLES, but it returned many table names, just not the ones I created.
I had the same issue and the answer seems to be really stupid: when you type your database name you shouldn't add the ".h2.db" suffix. For example, if you have the db file "D:\somebase.h2.db", your connection string should be "jdbc:h2:file:/D:/somebase". Otherwise JDBC creates a new empty database file named "somebase.h2.db.h2.db" and you see what you see: only system tables.
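A quick way to sanity-check the URL from plain JDBC (a small sketch; the path and credentials are only the example values, adjust to your own database):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2FileCheck {
    public static void main(String[] args) throws Exception {
        // Note: no ".h2.db" suffix in the URL; H2 appends it to the file name itself.
        // "D:/somebase" is the example path from above; the credentials are whatever
        // you created the database with ("sa" / empty password is only a placeholder).
        try (Connection con = DriverManager.getConnection("jdbc:h2:file:/D:/somebase", "sa", "");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_NAME"));
            }
        }
    }
}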
You can use the SHOW command:
With it you can list the schemas, tables, or the columns of a table, e.g.:
SHOW TABLES
This problem drove me around the twist and besides this page I read many (many!) others until I solved it.
My use case was to see how a Spring Batch project created in STS using :: Spring Boot :: (v1.3.1.RELEASE) would behave with the H2 database; for that, I needed to get the H2 console running as well so I could query the DB results of the batch run.
This is what I did and found out:
Created a Web project in STS using Spring Boot:
Added the following to the pom.xml of the latter:
Added a Spring configuration file as follows to the project:
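The code itself isn't shown above, but assuming the pom.xml additions were the com.h2database:h2 dependency and spring-boot-starter-web, a minimal sketch of such a configuration class is a bean that registers H2's web console servlet:
import org.h2.server.web.WebServlet;
import org.springframework.boot.context.embedded.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WebConfiguration {

    // Exposes H2's web console servlet at /console/*, which is why
    // http://localhost:8080/console works once the application is running.
    // (In later Boot versions ServletRegistrationBean lives in
    // org.springframework.boot.web.servlet instead.)
    @Bean
    public ServletRegistrationBean h2ConsoleServlet() {
        ServletRegistrationBean registration = new ServletRegistrationBean(new WebServlet());
        registration.addUrlMappings("/console/*");
        return registration;
    }
}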
This solves the Web project deficiencies in STS. If you run the project now, you can access the H2 console as follows: http://localhost:8080/console
Now create a Spring Batch project in STS as follows (the alternative method creates a different template that is missing most of the classes for persisting data; this method creates two projects, one Complete and one initial; use the Complete one in the following):
The Spring Batch project created by STS uses an in-memory embedded database that it CLOSES once the application run ends; once you run it, you can see this in the logging output.
So what we need is to create a new DataSource that overrides the default that ships with the project. (If you are interested, just have a look at the log messages and you will see that it uses a default datasource created by o.s.j.d.e.EmbeddedDatabaseFactory with the following parameters: Starting embedded database: url='jdbc:hsqldb:mem:testdb', username='sa'.)
So it starts an in-memory database and then closes it; you have no chance of seeing the data with the H2 console, as it has come and gone.
So, create a DataSource as follows:
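The original snippet is not included here; a minimal sketch of such an override, assuming Spring's DriverManagerDataSource and a file-based H2 URL (the path and credentials are placeholders), looks like this:
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class BatchDataSourceConfiguration {

    // Declaring a DataSource bean of our own makes Spring Boot back off from the
    // default in-memory embedded database, so the batch data survives the run.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.h2.Driver");
        // File-based URL: this path is the "location on your computer where a file
        // can be persisted" mentioned below; change it to suit your machine.
        ds.setUrl("jdbc:h2:file:~/springbatch/batchdb");
        ds.setUsername("sa");
        ds.setPassword("");
        return ds;
    }
}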
You can of course use a properties file to map the parameters, and profiles for different DataSource instances...but I digress.
Now, make sure you set the database file location in the JDBC URL to a location on your computer where a file can be persisted.
Run the Spring Batch (Complete) project and you should now have a db file in that location after it runs (persisting the Person data).
Run the Web project you configured previously in these steps, and you WILL :=) see your data, and all the batch job and step run data (et voila!).
Painful but rewarding. Hope it helps you to really BOOTSTRAP :=)
I ran into exactly this problem.
From what you describe, I suppose that your JDBC code talks to the "real" H2 server, but the web console is connecting to the database in the wrong mode (embedded in-memory mode, aka h2mem). That means H2 creates a new, empty database in memory instead of using your real database stored elsewhere.
Please make sure that when you connect to this database you use the mode Generic H2 (Server), NOT Generic H2 (Embedded).
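To make the two modes concrete, the difference is visible in the JDBC URL itself (the paths below are only examples):
public class H2ConnectionModes {
    // Server mode: goes through a running H2 TCP server, so the web console and the
    // application that created/filled usaDB see the same database (the path after the
    // host is only an example of where the server keeps it).
    static final String SERVER_URL = "jdbc:h2:tcp://localhost/~/usaDB";

    // Embedded in-memory mode (h2mem): every process gets its own fresh, empty
    // database, which is why the console then shows nothing but INFORMATION_SCHEMA.
    static final String EMBEDDED_MEM_URL = "jdbc:h2:mem:usaDB";
}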
The version of the H2 jar file and the installed H2 database should be the same.
If you have created and populated the H2 database tables via the Maven dependency in Spring Boot, then change the JDBC URL to jdbc:h2:mem:testdb when connecting to H2 with the web console.
It is an old question, but I came across the same problem. Eventually I found out that the default JDBC URL was pointing at a test server rather than at my application. After correcting it, I could access the right DB.
I tried both the Generic H2 (Embedded) and Generic H2 (Server) options; both worked as long as the JDBC URL is provided correctly.
In Grails 4.0.1 the JDBC URL for development is jdbc:h2:mem:devDb. Check your application.yml file for the exact URL.
For people using H2 in embedded (persistent) mode who want to "connect" to it from IntelliJ (other IDEs probably apply too):
Use, for example, a JDBC URL like: jdbc:h2:./database.h2
Note that H2 does not allow implicit relative paths and requires an explicit ./
Relative paths are relative to the current working directory.
When you run your application, the working directory is most likely your project's root dir.
On the other hand, the IDE's working directory is most likely not your project's root.
Hence, when "connecting" to your database in the IDE, you need to use an absolute path like: jdbc:h2:/Users/me/projects/MyAwesomeProject/database.h2
For some reason IntelliJ by default also adds ;MV_STORE=false, which disables the MVStore engine that current H2 versions use by default.
So make sure that both your application and your IDE use the same store engine, as MVStore and PageStore have different file layouts.
Note that you cannot "connect" to your database while your application is using it, because of file locking; the other way around applies too.
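Putting the above together, the pair of URLs ends up looking like this (the paths are the example ones from above):
public class H2Urls {
    // Application side: relative path with the explicit "./", resolved against the
    // application's working directory (usually the project root when run from there).
    static final String APP_URL = "jdbc:h2:./database.h2";

    // IDE side: absolute path to the very same file, so both point at one database.
    // If the IDE appended ";MV_STORE=false", either remove it or add it on both sides,
    // since the MVStore and PageStore engines write incompatible file layouts.
    static final String IDE_URL = "jdbc:h2:/Users/me/projects/MyAwesomeProject/database.h2";
}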
In my case the issue was caused by the fact that I didn't set the H2 username and password in Java. Unfortunately, Spring didn't display any errors, so it was not easy to figure out. Adding these lines to the dataSource method fixed the issue for me:
dataSource.setUsername("sa");
dataSource.setPassword("");
Also, I should have specified the schema when creating the tables in schema.sql.
Selecting Generic H2 (Server) solved it for me. We were tempted to use the default Generic H2 (Embedded), which is wrong.
I'm developing a Spring MVC web application on Windows 7 with Eclipse Juno, EclipseLink JPA as the ORM, and GlassFish as the application server, against Oracle 11g. While working with EclipseLink I noticed that when I update a table manually by executing an UPDATE query, it has no effect on entities already retrieved by EclipseLink until I restart the server, even though I disabled the EclipseLink cache with <shared-cache-mode>NONE</shared-cache-mode> in persistence.xml and used EntityManager.clear(), EntityManager.close() and @Cacheable(false).
Then I noticed that when I update tables using the Oracle SQL Developer table designer, it works fine and the entities show the updated information. So I checked the SQL Developer log to see what query it uses to update rows and saw that it uses ORA_ROWSCN and ROWID in the WHERE clause. After that, I used exactly the same WHERE clause that SQL Developer used to update the tables, but the entities were still showing old information.
I'm wondering what factors are involved here that keep EclipseLink from fetching real-time data from the database, yet after updating the table with the SQL Developer designer EclipseLink shows real-time data. It seems that modifying a table's data with the SQL Developer table designer also marks the table as changed via some database feature, and that EclipseLink reads this mark before hitting the table.
Also, for more clarification: does anyone know what steps EclipseLink goes through before it decides to hit the database when executing a TypedQuery? I'm also curious where it stores cached entities, since the cache only resets when I restart the computer; I tried restarting GlassFish, killing the Java process and logging off the current user, but none of them worked. Why is EclipseLink still caching entities when I configured it not to use any caching? Is it possible to completely turn off the cache in EclipseLink?
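For completeness, a stripped-down version of the kind of entity and read path described above (names are illustrative, not the real model):
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

// Illustrative entity only; caching is switched off on it as described above.
@Entity
@Cacheable(false)
class Article {
    @Id
    Long id;
    String descriptionEn;
}

public class ArticleReader {

    // Clear the persistence context, then run the TypedQuery; the result should come
    // from the database, yet it still shows the pre-update values.
    public Article load(EntityManager em, long id) {
        em.clear();
        return em.createQuery("select a from Article a where a.id = :id", Article.class)
                 .setParameter("id", id)
                 .getSingleResult();
    }
}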
I used to have a database called database, and everything was working well using Hibernate and its models.
I removed <property name="hibernate.hbm2ddl.auto"> to avoid update or create, as it's a production server and we want to do that manually.
We recently switched to database2, so we updated the Hibernate configuration file and all the Hibernate XML models.
`<class name="com.api.models.database.MmApplications" table="mm_applications" catalog="database2">`
but it keeps looking for database even though we migrated the database, the models and the connection.
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'database.mm_applications' doesn't exist
Can someone help me?
UPDATE ----
Hibernate is connecting to the right database (database2), but it prefixes the table names with database, making the queries hit database instead of database2, and when I try to force the default_schema my queries become:
`... from database.database2.mm_applications ....`
Any idea?
My database is specified in the hibernate.connection.url property. Have you changed that as well? An example would be: jdbc:mysql://localhost/mydatabase
Also, instead of removing hibernate.hbm2ddl.auto, perhaps you should set its value to validate. That way Hibernate will ensure that the data model matches the database schema.
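The same two settings expressed programmatically, for illustration (the URL is only an example; normally they live as <property> entries in the Hibernate configuration file):
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {

    public SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // Point the connection at the new database (URL here is only an example)...
        cfg.setProperty("hibernate.connection.url", "jdbc:mysql://localhost/database2");
        // ...and have Hibernate check the mappings against the schema instead of changing it.
        cfg.setProperty("hibernate.hbm2ddl.auto", "validate");
        return cfg.buildSessionFactory();
    }
}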
I found the problem. It was another application, deployed on the same Tomcat server and also using Hibernate but with another database (database), conflicting with the new application ...
There is still something weird: whichever database it connects to, Hibernate uses the catalog specified in the Hibernate models and so constructs the query as catalog.table_name.
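To illustrate the catalog behaviour (shown here with annotations; an hbm.xml catalog attribute behaves the same way): a catalog hard-codes the database prefix into the generated SQL, while leaving it out lets the table resolve against whatever database the connection URL selects.
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// With an explicit catalog, Hibernate generates "... from database2.mm_applications ..."
// no matter which database the connection URL points at.
@Entity
@Table(name = "mm_applications", catalog = "database2")
class MmApplications {
    @Id
    Long id;
}

// Without the catalog the SQL is just "... from mm_applications ...", and the table
// is resolved against the database chosen by hibernate.connection.url, e.g.:
//
// @Entity
// @Table(name = "mm_applications")
// class MmApplicationsUnqualified { @Id Long id; }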
Hope this helps someone someday.