What's the easiest way to create a new Postgres schema inside the database at runtime, and also create the tables defined in a SQL file?
This is a Spring Boot application, and the method receives the schema name that needs to be created for the database.
Although it sounds like this would be a case for using Liquibase or Flyway or any other tool, here is a simple (but very hacky) solution/starting point:
(rough) Steps:
create the whole DDL query, which consists of the "create and use schema" part and the content of your SQL file
inject the EntityManager
run the whole DDL query as a native query
Example/(hacky) Code:
Here is a simple controller class defining a GET method that takes a parameter called "schema":
@Controller
public class FooController {

    private static final String SCHEMA_FORMAT = "create schema %s; set schema %s; ";

    @PersistenceContext
    EntityManager entityManager;

    @Value("classpath:foo.sql")
    Resource fooResource;

    @GetMapping("foo")
    @Transactional
    public ResponseEntity<?> foo(@RequestParam("schema") String schema)
            throws IOException {
        // read the DDL from the injected classpath resource
        // (getInputStream() also works when the app is packaged as a jar)
        String ddl = new String(FileCopyUtils.copyToByteArray(fooResource.getInputStream()));
        String schemaQuery = String.format(SCHEMA_FORMAT, schema, schema);
        String query = String.format("%s %s", schemaQuery, ddl);
        entityManager.createNativeQuery(query).executeUpdate();
        return ResponseEntity.noContent().build();
    }
}
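One caveat worth adding: because the schema name is formatted straight into DDL, this endpoint is wide open to SQL injection. A minimal guard (a sketch; the pattern and helper name are my own, not from the original code) would whitelist the parameter before formatting:

import java.util.regex.Pattern;

// Hypothetical helper: only accept simple lowercase identifiers as schema
// names before they are interpolated into the DDL string.
private static final Pattern SCHEMA_NAME = Pattern.compile("[a-z][a-z0-9_]*");

private static void validateSchemaName(String schema) {
    if (schema == null || !SCHEMA_NAME.matcher(schema).matches()) {
        throw new IllegalArgumentException("Invalid schema name: " + schema);
    }
}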
While I have Java Batch jobs that read data, process it and store it in other places in the database, now I need a step to actually remove data from the database. All I need to run is a delete query via JPA.
The chunk-based Reader/Processor/Writer pattern does not make sense here, but the Batchlet alternative is giving me a headache, too. What did I do?
I created a Batchlet that gets invoked via CDI. At that moment it is easy to inject my JPA EntityManager. What is not easy is to run the update query. The code looks like this:
package ...;

import javax.batch.api.AbstractBatchlet;
import javax.batch.api.BatchProperty;
import javax.inject.Inject;
import javax.inject.Named;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

@Named("CleanerBatchlet")
public class CleanerBatchlet extends AbstractBatchlet {

    public static final Logger log = LogManager.getLogger(CleanerBatchlet.class);

    @PersistenceContext(unitName = "...")
    private EntityManager entityManager;

    @Inject
    @BatchProperty(name = "technologyIds")
    private String technologyIds;

    private void clearQueue(long technologyId) {
        //EntityManager entityManager = ...getEntityManager();
        //entityManager.getTransaction().begin();
        Query q = entityManager.createQuery("delete from Record r where r.technologyId=:technologyId");
        q.setParameter("technologyId", technologyId);
        int count = q.executeUpdate();
        //entityManager.getTransaction().commit();
        log.debug("Deleted {} entries from queue {}", count, technologyId);
        //entityManager.close();
    }

    @Override
    public String doProcess() throws Exception {
        log.debug("doProcess()");
        System.out.println("technologyIds=" + technologyIds);
        log.info("technologyIds=" + technologyIds);
        try {
            String[] parts = technologyIds.split(",");
            for (String part : parts) {
                long technologyId = Long.parseLong(part);
                clearQueue(technologyId);
            }
        } catch (NullPointerException | NumberFormatException e) {
            throw new IllegalStateException("technologyIds must be set to a string of comma-separated numbers.", e);
        }
        return "COMPLETED";
    }
}
As you can see some lines are commented out - these are the ones I am experimenting with.
So if I run the code as-is, I get an exception telling me that the update query requires a transaction. This is regardless of which of the two persistence units in my project I use (one is configured for JTA, the other is not).
javax.persistence.TransactionRequiredException: Executing an update/delete query
It also does not matter whether I uncomment the transaction-handling code (begin/commit); I still get the same error that a transaction is required to run the update query.
Even when I try to circumvent CDI and JTA completely by creating my own EntityManager via the Persistence API (and closing it afterwards, respectively), I get the very same exception.
So how can I run this delete query, or other update queries, from within the batch job?
I'd suggest using plain JDBC to run this delete query, with either auto-commit or manual transaction commit.
During the batchlet processing, the incoming transaction is suspended, so the entity manager does not have a transaction context.
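For illustration, a plain-JDBC version of the delete might look like the sketch below. The DataSource injection, the JNDI name, and the RECORD/TECHNOLOGY_ID table and column names are assumptions, not taken from the original code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.sql.DataSource;

// Inject a container-managed DataSource (JNDI name assumed).
@Resource(lookup = "jdbc/myDataSource")
private DataSource dataSource;

private void clearQueue(long technologyId) throws SQLException {
    // Plain JDBC delete; with a non-JTA connection the driver's default
    // auto-commit applies, or you can manage commit/rollback manually.
    String sql = "DELETE FROM RECORD WHERE TECHNOLOGY_ID = ?";
    try (Connection con = dataSource.getConnection();
         PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setLong(1, technologyId);
        int count = ps.executeUpdate();
        log.debug("Deleted {} entries from queue {}", count, technologyId);
    }
}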
Ultimately I made it work by following this tutorial: https://dzone.com/articles/resource-local-vs-jta-transaction-types-and-payara
and going for the Classic RESOURCE_LOCAL Application pattern.
It involves injecting the non-JTA EntityManagerFactory, using that to create the EntityManager, and closing it after use. Of course the transaction has to be managed manually, but after all, it now works.
The essential excerpt of my code looks like this:
@PersistenceUnit(unitName = "...")
private EntityManagerFactory emf;

@Inject
@BatchProperty(name = "technologyIds")
private String technologyIds;

private void clearQueue(long technologyId) {
    EntityManager entityManager = emf.createEntityManager();
    try {
        entityManager.getTransaction().begin();
        Query q = entityManager.createQuery("delete from Record r where r.technologyId=:technologyId");
        q.setParameter("technologyId", technologyId);
        q.executeUpdate();
        entityManager.getTransaction().commit();
    } finally {
        // close the manually created EntityManager even if the query fails
        entityManager.close();
    }
}
I have started converting an existing Spring Boot (1.5.4.RELEASE) application to support multi-tenant capabilities. I am using MySQL as the database and Spring Data JPA as the data access mechanism, with the schema-based multi-tenant approach, as the Hibernate documentation suggests:
https://docs.jboss.org/hibernate/orm/4.2/devguide/en-US/html/ch16.html
I have implemented the MultiTenantConnectionProvider and CurrentTenantIdentifierResolver interfaces, and I am using a ThreadLocal variable to maintain the current tenant for the incoming request.
public class TenantContext {

    public static final String DEFAULT_TENANT = "master";

    private static ThreadLocal<Tenant> tenantConfig = new ThreadLocal<Tenant>() {
        @Override
        protected Tenant initialValue() {
            Tenant tenant = new Tenant();
            tenant.setSchemaName(DEFAULT_TENANT);
            return tenant;
        }
    };

    public static Tenant getTenant() {
        return tenantConfig.get();
    }

    public static void setTenant(Tenant tenant) {
        tenantConfig.set(tenant);
    }

    public static String getTenantSchema() {
        return tenantConfig.get().getSchemaName();
    }

    public static void clear() {
        tenantConfig.remove();
    }
}
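For reference, the CurrentTenantIdentifierResolver mentioned above would typically just read this context. A minimal sketch (the class name is mine; the two methods are Hibernate's contract):

// Sketch of a resolver that delegates to the ThreadLocal-backed TenantContext.
public class SchemaTenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        return TenantContext.getTenantSchema();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // Let Hibernate verify that an existing session still matches the resolved tenant.
        return true;
    }
}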
Then I implemented a filter, where I set the tenant dynamically by looking at a request header, as below:
String targetTenantName = request.getHeader(TENANT_HTTP_HEADER);
Tenant tenant = new Tenant();
tenant.setSchemaName(targetTenantName);
TenantContext.setTenant(tenant);
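A fuller filter sketch (the class and header names are assumptions) would also clear the ThreadLocal after the request, so that pooled server threads do not carry a stale tenant into later requests:

// Hypothetical filter (extends Spring's OncePerRequestFilter): set the tenant
// from the request header, then always clear the ThreadLocal so reused worker
// threads start from a clean state.
public class TenantFilter extends OncePerRequestFilter {

    private static final String TENANT_HTTP_HEADER = "X-TenantID"; // assumed header name

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        try {
            Tenant tenant = new Tenant();
            tenant.setSchemaName(request.getHeader(TENANT_HTTP_HEADER));
            TenantContext.setTenant(tenant);
            filterChain.doFilter(request, response);
        } finally {
            TenantContext.clear();
        }
    }
}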
This works fine, and my application now points to a different schema based on the request header value.
However, there is a master schema where I store some global settings, and I need to access that schema in the middle of a request for a tenant. Therefore I tried to hard-code the ThreadLocal variable just before that database call, as below:
Tenant tenant = new Tenant();
tenant.setSchemaName("master");
TenantContext.setTenant(tenant);
However, this does not point to the master schema; instead it tries to access the original schema set during the filter. What is the reason for this?
As per my understanding, Hibernate invokes openSession() during the first database call for a tenant, and when I then invoke another database call for "master", it still uses the previous tenant, since CurrentTenantIdentifierResolver is invoked only during openSession(). However, these different database calls do not run within a transaction.
Can you please help me understand the issue with my approach, and suggest how to fix it?
Thanks
Keth
@JonathanJohx, actually I am trying to override the TenantContext set by the filter in one of the controllers. First I log in to a tenant, where TenantContext is set to that particular tenant. While the request is in that tenant, I request data from master. In order to do that, I simply hard-code the tenant as below:
@RestController
@RequestMapping("/jobTemplates")
public class JobTemplateController {

    @Autowired
    JobTemplateService jobTemplateService;

    @GetMapping
    public JobTemplateList list(Pageable pageable) {
        Tenant tenant = new Tenant();
        tenant.setSchemaName(multitenantMasterDb);
        TenantContext.setTenant(tenant);
        return jobTemplateService.list(pageable);
    }
}
I am using AWS ECS to host my application and DynamoDB for all database operations. So I'll have the same database with different table names for different environments, such as "dev_users" (for the dev env), "test_users" (for the test env), etc. (This is how our company uses the same DynamoDB account for different environments.)
So I would like to change the "tableName" of the model class using the environment variable passed through the "AWS ECS task definition" environment parameters.
For example, my model class is:
@DynamoDBTable(tableName = "dev_users")
public class User {
Now I need to replace the "dev" with "test" when I deploy my container in the test environment. I know I can use
@Value("${DOCKER_ENV:dev}")
to access environment variables, but I'm not sure how to use variables outside the class. Is there any way that I can use the Docker env variable to select my table prefix?
My intent is to use it like this:
I know this is not possible as such, but is there any other way or workaround for this?
Edit 1:
I am working on Rahul's answer and facing some issues. Before writing the issues, I'll explain the process I followed.
Process:
I have created the beans in my config class (com.myapp.users.config).
As I don't have repositories, I have given my model class package name as the "basePackage" path. (Please check the image.)
For 1) I have replaced the "table name over-rider bean injection" to avoid the error.
For 2) I printed the name that is passed to this method, but it is null. So I am checking all the possible ways to pass the value here.
Check the image for error:
I haven't changed anything in my user model class, since the beans should replace the name of the DynamoDBTable when they are executed. But the table name overriding is not happening: data is still being pulled from the table name given at the model class level only.
What am I missing here?
The table names can be altered via a customized DynamoDBMapperConfig bean.
For your case, where you have to prefix each table with a literal, you can add the bean as shown below. Here the prefix can be the environment name in your case.
@Bean
public TableNameOverride tableNameOverrider() {
    String prefix = ... // use @Value to inject values via Spring, or any logic to define the table prefix
    return TableNameOverride.withTableNamePrefix(prefix);
}
For more details check out the complete details here:
https://github.com/derjust/spring-data-dynamodb/wiki/Alter-table-name-during-runtime
I was able to achieve table names prefixed with the active profile name.
First I added a TableNameResolver class as below:
@Component
public class TableNameResolver extends DynamoDBMapperConfig.DefaultTableNameResolver {

    private String envProfile;

    public TableNameResolver() {}

    public TableNameResolver(String envProfile) {
        this.envProfile = envProfile;
    }

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        String stageName = envProfile.concat("_");
        String rawTableName = super.getTableName(clazz, config);
        return stageName.concat(rawTableName);
    }
}
Then I set up the DynamoDBMapper bean as below:
@Bean
@Primary
public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
    DynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB,
            new DynamoDBMapperConfig.Builder().withTableNameResolver(new TableNameResolver(envProfile)).build());
    return mapper;
}
The envProfile variable holds the active profile value, accessed from the application.properties file:
@Value("${spring.profiles.active}")
private String envProfile;
We have the same issue with regard to the need to change table names at runtime. We are using Spring-data-dynamodb 5.0.2, and the following configuration seems to provide the solution that we need.
First I annotated my bean accessor:
@EnableDynamoDBRepositories(dynamoDBMapperConfigRef = "getDynamoDBMapperConfig", basePackages = "my.company.base.package")
I also set up an environment variable called ENV_PREFIX, which is wired by Spring via SpEL.
@Value("#{systemProperties['ENV_PREFIX']}")
private String envPrefix;
Then I set up a TableNameOverride bean:
@Bean
public DynamoDBMapperConfig.TableNameOverride getTableNameOverride() {
    return DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix(envPrefix);
}
Finally, I set up the DynamoDBMapperConfig bean using TableNameOverride injection. In 5.0.2, we had to set up a standard DynamoDBTypeConverterFactory in the DynamoDBMapperConfig builder to avoid an NPE:
@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride) {
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(DynamoDBTypeConverterFactory.standard());
    return builder.build();
}
In hindsight, I could have set up a DynamoDBTypeConverterFactory bean that returns a standard DynamoDBTypeConverterFactory and injected that into the getDynamoDBMapperConfig() method using the DynamoDBMapperConfig builder. But this will also do the job.
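For completeness, that alternative would look roughly like this (a sketch of the variant described above, untested):

@Bean
public DynamoDBTypeConverterFactory typeConverterFactory() {
    return DynamoDBTypeConverterFactory.standard();
}

@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride,
        DynamoDBTypeConverterFactory typeConverterFactory) {
    // Same builder as above, but the converter factory is now injected as a bean.
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(typeConverterFactory);
    return builder.build();
}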
I upvoted the other answer, but here is an idea:
Create a base class with all your user details:
@MappedSuperclass
public abstract class AbstractUser {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String firstName;

    private String lastName;
}
Create two implementations with different table names and Spring profiles:
@Profile(value = {"dev", "default"})
@Entity(name = "dev_user")
public class DevUser extends AbstractUser {
}

@Profile(value = {"prod"})
@Entity(name = "prod_user")
public class ProdUser extends AbstractUser {
}
Create a single JPA repository that uses the mapped superclass:
public interface UserRepository extends CrudRepository<AbstractUser, Long> {
}
Then switch the implementation with the Spring profile:
@RunWith(SpringJUnit4ClassRunner.class)
@DataJpaTest
@Transactional
public class UserRepositoryTest {

    @Autowired
    protected DataSource dataSource;

    @BeforeClass
    public static void setUp() {
        System.setProperty("spring.profiles.active", "prod");
    }

    @Test
    public void test1() throws Exception {
        DatabaseMetaData metaData = dataSource.getConnection().getMetaData();
        ResultSet tables = metaData.getTables(null, null, "PROD_USER", new String[] { "TABLE" });
        tables.next();
        assertEquals("PROD_USER", tables.getString("TABLE_NAME"));
    }
}
I want to execute an SQL statement inside my Spring Boot controller class without defining any method in the JPA repository. The statement I want to use is:
SELECT UUID();
This statement is database related and is not associated with a particular entity.
It would be nice if anyone could provide a solution for executing the above statement via:
a Spring controller class
a JPA repository (if recommended)
Update:
Controller:
@Autowired
JdbcTemplate jdbcTemplate;

@RequestMapping(value = "/UUID", method = RequestMethod.GET)
public ResponseEntity<String> getUUID() {
    String uuid = fetchUUID(); // renamed: two getUUID() methods with the same parameter list would not compile
    return buildGuestResponse(uuid);
}

public String fetchUUID() {
    UUID uuid = (UUID) jdbcTemplate.queryForObject("select UUID()", UUID.class);
    return uuid.toString();
}
You can use JdbcTemplate in your code.
The bean you will need in your configuration class is:
@Bean
public JdbcTemplate jdbcTemplate(DataSource dataSource) {
    return new JdbcTemplate(dataSource);
}
And the code to run the query is:
@Autowired
private JdbcTemplate jdbcTemplate;

public String getUUID() {
    UUID uuid = (UUID) jdbcTemplate.queryForObject("select UUID()", UUID.class);
    return uuid.toString();
}
Or maybe like this:
public UUID getUUID(){
UUID uuid = (UUID)jdbcTemplate.queryForObject("select UUID()", UUID.class);
return uuid;
}
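One caveat worth verifying in your setup: MySQL's UUID() returns a 36-character string, and whether JdbcTemplate can map that column directly to java.util.UUID depends on the Spring version and JDBC driver. Fetching a String and parsing it is the more defensive variant (a sketch):

public UUID getUUID() {
    // Fetch as String and parse; avoids relying on driver/Spring conversion
    // of a VARCHAR column to java.util.UUID.
    String uuid = jdbcTemplate.queryForObject("select UUID()", String.class);
    return UUID.fromString(uuid);
}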
It is generally bad architectural design to execute SQL (or do any persistence) in the presentation layer (controller or view) of a JEE application.
The best option is to have the controller use the service layer, with the service layer calling the persistence layer to obtain, save, or update data.
In any case, you can use Spring's JDBC support. Something like:
import org.springframework.jdbc.core.JdbcTemplate;
....
UUID uuid = (UUID) jdbcTemplate.queryForObject("SELECT UUID()", UUID.class);
....
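To sketch the layering suggested above (all class names here are hypothetical), the controller would delegate to a small service that owns the JdbcTemplate:

@Service
public class UuidService {

    private final JdbcTemplate jdbcTemplate;

    public UuidService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public String nextUuid() {
        // The service layer owns persistence-related calls.
        return jdbcTemplate.queryForObject("select UUID()", String.class);
    }
}

@RestController
public class UuidController {

    private final UuidService uuidService;

    public UuidController(UuidService uuidService) {
        this.uuidService = uuidService;
    }

    @GetMapping("/UUID")
    public ResponseEntity<String> getUUID() {
        return ResponseEntity.ok(uuidService.nextUuid());
    }
}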
I am developing a document management application in Spring, using JPA and MySQL. The application currently accepts a document and its metadata from a user web form (createOrUpdateDocumentForm.jsp) into the controller (DocumentController.java). However, the data is not making its way into the MySQL database. Can someone show me how to alter my code so that the document and its metadata get stored in the underlying database?
The flow of data (including the PDF document) seems to go through the following objects:
createOrUpdateDocumentForm.jsp //omitted for brevity, since it is sending data to controller (see below)
Document.java
DocumentController.java
ClinicService.java
JpaDocumentRepository.java
The MySQL database
I will summarize relevant parts of each of these objects as follows:
The jsp triggers the following method in DocumentController.java:
@RequestMapping(value = "/patients/{patientId}/documents/new", headers = "content-type=multipart/*", method = RequestMethod.POST)
public String processCreationForm(@ModelAttribute("document") Document document, BindingResult result, SessionStatus status, @RequestParam("file") final MultipartFile file) {
    document.setCreated();
    byte[] contents;
    Blob blob = null;
    try {
        contents = file.getBytes();
        blob = new SerialBlob(contents);
    } catch (IOException e) {e.printStackTrace();}
    catch (SerialException e) {e.printStackTrace();}
    catch (SQLException e) {e.printStackTrace();}
    document.setContent(blob);
    document.setContentType(file.getContentType());
    document.setFileName(file.getOriginalFilename());
    System.out.println("----------- document.getContentType() is: " + document.getContentType());
    System.out.println("----------- document.getCreated() is: " + document.getCreated());
    System.out.println("----------- document.getDescription() is: " + document.getDescription());
    System.out.println("----------- document.getFileName() is: " + document.getFileName());
    System.out.println("----------- document.getId() is: " + document.getId());
    System.out.println("----------- document.getName() is: " + document.getName());
    System.out.println("----------- document.getPatient() is: " + document.getPatient());
    System.out.println("----------- document.getType() is: " + document.getType());
    try {System.out.println("[[[[BLOB LENGTH IS: " + document.getContent().length() + "]]]]");}
    catch (SQLException e) {e.printStackTrace();}
    new DocumentValidator().validate(document, result);
    if (result.hasErrors()) {
        System.out.println("result.getFieldErrors() is: " + result.getFieldErrors());
        return "documents/createOrUpdateDocumentForm";
    }
    else {
        this.clinicService.saveDocument(document);
        status.setComplete();
        return "redirect:/patients?patientID={patientId}";
    }
}
When I submit a document through the web form in the jsp to the controller, the System.out.println() commands in the controller code output the following, which indicate that the data is in fact getting sent to the server:
----------- document.getContentType() is: application/pdf
----------- document.getCreated() is: 2013-12-16
----------- document.getDescription() is: paper
----------- document.getFileName() is: apaper.pdf
----------- document.getId() is: null
----------- document.getName() is: apaper
----------- document.getPatient() is: [Patient#564434f7 id = 1, new = false, lastName = 'Frank', firstName = 'George', middleinitial = 'B', sex = 'Male', dateofbirth = 2000-11-28T16:00:00.000-08:00, race = 'caucasian']
----------- document.getType() is: ScannedPatientForms
[[[[BLOB LENGTH IS: 712238]]]] //This indicates the file content was converted to blob
The Document.java model is:
@Entity
@Table(name = "documents")
public class Document {

    @Id
    @GeneratedValue
    @Column(name = "id")
    private Integer id;

    @ManyToOne
    @JoinColumn(name = "client_id")
    private Patient patient;

    @ManyToOne
    @JoinColumn(name = "type_id")
    private DocumentType type;

    @Column(name = "name")
    private String name;

    @Column(name = "description")
    private String description;

    @Column(name = "filename")
    private String filename;

    @Column(name = "content")
    @Lob
    private Blob content;

    @Column(name = "content_type")
    private String contentType;

    @Column(name = "created")
    private Date created;

    public Integer getId() {return id;}
    public void setId(Integer i) {id = i;}
    protected void setPatient(Patient patient) {this.patient = patient;}
    public Patient getPatient() {return this.patient;}
    public void setType(DocumentType type) {this.type = type;}
    public DocumentType getType() {return this.type;}
    public String getName() {return name;}
    public void setName(String nm) {name = nm;}
    public String getDescription() {return description;}
    public void setDescription(String desc) {description = desc;}
    public String getFileName() {return filename;}
    public void setFileName(String fn) {filename = fn;}
    public Blob getContent() {return content;}
    public void setContent(Blob ct) {content = ct;}
    public String getContentType() {return contentType;}
    public void setContentType(String ctype) {contentType = ctype;}
    public void setCreated() {created = new java.sql.Date(System.currentTimeMillis());}
    public Date getCreated() {return this.created;}

    @Override
    public String toString() {return this.getName();}

    public boolean isNew() {return (this.id == null);}
}
The ClinicService.java code that is called from the DocumentController is:
private DocumentRepository documentRepository;
private PatientRepository patientRepository;

@Autowired
public ClinicServiceImpl(DocumentRepository documentRepository, PatientRepository patientRepository) {
    this.documentRepository = documentRepository;
    this.patientRepository = patientRepository;
}

@Override
@Transactional
public void saveDocument(Document doc) throws DataAccessException {
    documentRepository.save(doc);
}
The relevant code in JpaDocumentRepository.java is:
@PersistenceContext
private EntityManager em;

@Override
public void save(Document document) {
    if (document.getId() == null) {this.em.persist(document);}
    else {this.em.merge(document);}
}
Finally, the relevant parts of the SQL code that creates the database include:
CREATE TABLE IF NOT EXISTS documenttypes (
id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(80),
INDEX(name)
);
CREATE TABLE IF NOT EXISTS patients (
id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
first_name VARCHAR(30),
middle_initial VARCHAR(5),
last_name VARCHAR(30),
sex VARCHAR(20),
date_of_birth DATE,
race VARCHAR(30),
INDEX(last_name)
);
CREATE TABLE IF NOT EXISTS documents (
id int(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
client_id int(4) UNSIGNED NOT NULL,
type_id INT(4) UNSIGNED,
name varchar(200) NOT NULL,
description text NOT NULL,
filename varchar(200) NOT NULL,
content mediumblob NOT NULL,
content_type varchar(255) NOT NULL,
created timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (client_id) REFERENCES patients(id),
FOREIGN KEY (type_id) REFERENCES documenttypes(id)
);
What changes do I need to make to this code so that it saves the document in the documents table of the MySQL database using JPA?
@CodeMed, it took me a while, but I was able to reproduce the issue. It might be a configuration issue: @PersistenceContext might be scanned twice, by both your root-context and your web-context. This causes the @PersistenceContext to be shared, and therefore it is not saving your data (Spring doesn't allow that). I found it weird that no messages or logs were displayed. If you try the snippet below in your save(Document document) method, you will see the actual error:
Session session = this.em.unwrap(Session.class);
session.persist(document);
To solve the problem, you can do the following (prevent the @PersistenceContext from being scanned twice):
1- Make sure that all your controllers are in a separate package, like com.mycompany.myapp.controller, and in your web-context use the component-scan as <context:component-scan annotation-config="true" base-package="com.mycompany.myapp.controller" />
2- Make sure that other components are in different packages than the controller package, for example: com.mycompany.myapp.dao, com.mycompany.myapp.service ....
and then in your root-context use the component-scan as
<context:component-scan annotation-config="true" base-package="com.mycompany.myapp.service, com.mycompany.myapp.dao" />
Or show me your Spring XML configurations and your web.xml, and I will point you in the right direction.
Your JPA mappings seem good. Obviously, @Lob requires the data type to be byte[], Byte[], or java.sql.Blob. Based on that, plus your symptoms and debugging printout, it seems your code is doing the correct data manipulation (the JPA annotations are good), but the combination of Spring + MySQL isn't committing. This suggests a minor problem with your Spring transactional config or with your MySQL data type.
1. Transactional Behaviour
The relevant code in JpaDocumentRepository.java is:
@PersistenceContext
private EntityManager em;

@Override
public void save(Document document) {
    if (document.getId() == null) {this.em.persist(document);}
    else {this.em.merge(document);}
}
You're not using EJBs (hence no 'automatic' container-managed transactions).
You're using JPA within servlets/plain Java classes (hence you require 'manual' transaction demarcation: outside the servlet container, in your code, or via Spring config).
You are injecting the entity manager via @PersistenceContext (i.e. a container-managed entity manager backed by JTA, not an EntityManager resource-local transaction, em.getTransaction()).
You have marked your 'parent' method as @Transactional (i.e. Spring proprietary transactions; the annotation was later standardised in Java EE 7).
The annotations and code should give transactional behaviour. Do you have Spring correctly configured for JTA transactions? (Using JtaTransactionManager, not DataSourceTransactionManager, which gives JDBC-driver-local transactions.) Your Spring XML should contain something very similar to:
<!-- JTA requires a container-managed datasource -->
<jee:jndi-lookup id="jeedataSource" jndi-name="jdbc/mydbname"/>
<!-- enable the configuration of transactional behavior based on annotations -->
<tx:annotation-driven transaction-manager="txManager"/>
<!-- a PlatformTransactionManager is still required -->
<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager" >
<!-- (this dependency "jeedataSource" must be defined somewhere else) -->
<property name="dataSource" ref="jeedataSource"/>
</bean>
Be suspicious of additional parameters / settings.
This is the manually coded version of what Spring must do (for understanding only - don't code this). Uses UserTransaction (JTA), not em.getTransaction() of type EntityTransaction (JDBC local):
// inject a reference to the servlet container JTA tx
@Resource UserTransaction jtaTx;

// servlet container-managed EM
@PersistenceContext private EntityManager em;

public void save(Document document) {
    try {
        jtaTx.begin();
        try {
            if (document.getId() == null) {this.em.persist(document);}
            else {this.em.merge(document);}
            jtaTx.commit();
        } catch (Exception e) {
            jtaTx.rollback();
            // do some error reporting / throw exception ...
        }
    } catch (Exception e) {
        // system error - handle exceptions from UserTransaction methods
        // ...
    }
}
2. MySQL Data Type
As shown here (at the bottom), MySQL Blobs are a bit special compared to other databases. The various Blobs and their maximum storage capacities are:
TINYBLOB - 255 bytes
BLOB - 65535 bytes
MEDIUMBLOB - 16,777,215 bytes (2^24 - 1)
LONGBLOB - 4G bytes (2^32 - 1)
If (2) turns out to be your problem:
increase the MySQL type to MEDIUMBLOB or LONGBLOB
investigate why you didn't see an error message (very important). Was your logging properly configured? Did you check the logs?
I'm not a Hibernate-with-annotations expert (I've been using it since 2004, but with XML config). Anyway, I'm thinking that you're mixing annotations incorrectly. You've indicated that you don't want the file field persisted with @Transient, but you've also said it's a @Lob, which implies you do want it persisted. It looks like @Lob is winning, and Hibernate is trying to resolve the field to a column by using the field name.
Take off the @Lob and I think you'll be set.
This is not a direct answer to your question (sorry, but I'm not a fan of Hibernate, so I can't really help you there), but you should consider using a NoSQL database such as MongoDB rather than MySQL for a job like this. I've tried both, and the NoSQL databases are a much better fit for this sort of requirement.
You will find that in situations like this it performs much better than MySQL does, and Spring Data MongoDB allows you to very easily save and load Java objects that automatically get mapped to MongoDB ones.