I can delete a specific record using Liferay Service Builder, but what should I do when I want to delete all the records from a table?
I am new to Liferay, so any help would be appreciated!
As your entity name is Location, add the following method in your LocationLocalServiceImpl.java and build the service:
public void deleteAllLocations() {
    try {
        LocationUtil.removeAll();
    } catch (Exception ex) {
        // Log the exception here.
    }
}
On a successful build, deleteAllLocations will be copied to LocationLocalServiceUtil.java, from where you can use it in your action class:
LocationLocalServiceUtil.deleteAllLocations();
The question already has an answer that the asker is satisfied with, but I thought I'd add another just the same. Since you're writing a custom method in your service implementation (in your case LocationLocalServiceImpl), you have direct access to the persistence bean, so there is no need to use LocationUtil.
The accepted answer suggests catching any Exception and logging it. I disagree with this because it fails silently and, depending on the application logic, could cause problems later on. For example, if your removeAll is called within a transaction whose success depends on the correct removal of all entities and the accepted approach fails, the transaction won't be rolled back, since you don't throw a SystemException.
With this in mind, consider the following (within your implementation, as above):
public void deleteAllLocations() throws SystemException {
    locationPersistence.removeAll();
}
Then, wherever you're calling it from (for example in a controller), you have control over what happens in the case of a failure:
try {
    LocationLocalServiceUtil.deleteAllLocations();
} catch (SystemException e) {
    // here whatever you've removed has been rolled back;
    // instead of just logging it, warn the user that an error occurred
    SessionErrors.add(portletRequest, "your-error-key");
    log.error("An error occurred while removing all locations", e);
}
Having said that, your LocationUtil class is available outside of the service, so you can call it from a controller. If your goal is only to remove all Location entities without doing anything else within the context of that transaction, you can just use the LocationUtil in your controller. This would save you from having to rebuild the service layer.
Related
I have a test method which sometimes fails during deployment and sometimes does not. I have never seen it fail locally. You can see my code below.
I have the following retry mechanism which is asynchronously called from another service:
@Transactional
public boolean retry(NotificationOrder order) {
    notificationService.send(order);
    return true;
}
public void resolveOnFailedAttempt(Long orderId) { // automatically called if `retry` fails
    notificationOrderCommonTransactionsService.updateNotificationOrderRetryCount(orderId);
}
The notification service looks like this:
@Service
@RequiredArgsConstructor
public class NotificationServiceImpl implements NotificationService {

    private final NotificationOrderCommonTransactionsService notificationOrderCommonTransactionsService;

    @Override
    @Transactional
    public NotificationResponse send(NotificationOrder order) {
        NotificationRequest request;
        try {
            request = prepareNotificationRequest(order);
        } catch (Exception e) {
            notificationOrderCommonTransactionsService.saveNotificationOrderErrorMessage(order.getId(),
                    e.getMessage());
            throw e;
        }
        ...
        return response;
    }

    private NotificationRequest prepareNotificationRequest(NotificationOrder order) {
        ...
        throw new RuntimeException("ERROR");
    }
}
And the common transactions service looks like this:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public NotificationOrder saveNotificationOrderErrorMessage(Long orderId, String errorMessage) {
    NotificationOrder order = notificationRepository.findOne(orderId);
    order.setErrorDescription(errorMessage);
    notificationRepository.save(order);
    return order;
}

public NotificationOrder updateNotificationOrderRetryCount(Long orderId) {
    NotificationOrder order = notificationRepository.findOne(orderId);
    order.setRetryCount(order.getRetryCount() + 1);
    order.setOrderStatus(NotificationOrderStatus.IN_PROGRESS);
    notificationRepository.save(order);
    return order;
}
Here is my integration test:
@Test
public void test() {
    NotificationOrderRequest invalidRequest = invalidRequest();
    ResponseEntity<NotificationOrderResponse> responseEntity = send(invalidRequest);

    NotificationOrder notificationOrder = notificationOrderRepository.findOne(1L);
    softly.assertThat(notificationOrder.getOrderStatus())
            .isEqualTo(NotificationOrderStatus.IN_PROGRESS);
    softly.assertThat(notificationOrder.getErrorDescription())
            .isEqualTo("ERROR"); // This is the line that fails.
    softly.assertThat(responseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
}
In the test method it is confirmed that updateNotificationOrderRetryCount is called and the order status is updated as IN_PROGRESS. However, the error message is null and I get the following assertion error :
-- failure 1 --
Expecting:
<null>
to be equal to:
<"ERROR">
but was not.
I expect the saveNotificationOrderErrorMessage transaction to be completed and the changes to be committed before the updateNotificationOrderRetryCount method is called, but it seems it does not work that way. Could anyone help me figure out why my code behaves like this?
How can I reproduce this error locally? And what can I do to fix it?
Thanks.
Try enabling SQL logging and parameter bind logging and look through the statements. I don't know all of your code, but maybe you are setting the message to null somewhere? It could also be that the actions are interleaving somehow, such that updateNotificationOrderRetryCount is called before or while saveNotificationOrderErrorMessage runs, in a way that causes this. If both run right before commit, but saveNotificationOrderErrorMessage commits before updateNotificationOrderRetryCount, you could see the error message being overwritten with null.
If the code snippet of the question is accurate, pay attention to the fact that you are rethrowing the exception raised in the prepareNotificationRequest method, I assume in order to enable the retry mechanism:
NotificationRequest request;
try {
    request = prepareNotificationRequest(order);
} catch (Exception e) {
    notificationOrderCommonTransactionsService.saveNotificationOrderErrorMessage(order.getId(),
            e.getMessage());
    throw e; // You are rethrowing the exception
}
Per your comment, the exception thrown extends RuntimeException.
As the Spring documentation indicates:
In its default configuration, the Spring Framework's transaction infrastructure code marks a transaction for rollback only in the case of runtime, unchecked exceptions. That is, when the thrown exception is an instance or subclass of RuntimeException. (Error instances also, by default, result in a rollback.) Checked exceptions that are thrown from a transactional method do not result in rollback in the default configuration.
Probably Spring is performing a rollback of the initial transaction, the one associated with saveNotificationOrderErrorMessage. I realize that this method is annotated as @Transactional(propagation = Propagation.REQUIRES_NEW) and that it initiates a new transaction, but perhaps the problem is related to it.
When the retry mechanism takes place, another transaction, associated with the invocation of the method updateNotificationOrderRetryCount is performed, and this transaction is successfully committed. This is the reason why the changes performed in this second method are properly committed.
The solution will depend on how your retry mechanism is implemented, but you can, for example, raise the original exception and, as a first step in the retry mechanism, record the problem in the database; or raise a checked exception (Spring by default will not perform a rollback for it) and handle it as appropriate.
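Spring's default rule (roll back on unchecked exceptions, commit on checked ones) can be illustrated with a small self-contained sketch. This is a toy "transaction manager" written for this answer only, not Spring code; all names here are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

public class RollbackRuleDemo {
    // Toy "database"; a real transaction manager is far more involved.
    static Map<String, String> database = new HashMap<>();

    // Mimics Spring's default: only unchecked exceptions trigger a rollback.
    static void runInTransaction(Callable<Void> work) throws Exception {
        Map<String, String> snapshot = new HashMap<>(database);
        try {
            work.call();
        } catch (RuntimeException e) {
            database = snapshot; // unchecked -> roll the writes back
            throw e;
        }
        // A checked exception propagates out of the try block without
        // restoring the snapshot, i.e. the writes stay "committed".
    }

    public static void main(String[] args) throws Exception {
        try {
            runInTransaction(() -> {
                database.put("order-1", "ERROR");
                throw new IllegalStateException("boom"); // unchecked
            });
        } catch (RuntimeException expected) {
        }
        System.out.println(database.containsKey("order-1")); // false: rolled back

        try {
            runInTransaction(() -> {
                database.put("order-2", "ERROR");
                throw new Exception("boom"); // checked
            });
        } catch (Exception expected) {
        }
        System.out.println(database.containsKey("order-2")); // true: committed
    }
}
```

The same asymmetry is what makes the rethrown RuntimeException in send() undo work that a checked exception would have left in place.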
Update
Another possible reason for the problem could be the transaction demarcation in the send method.
This method is annotated as @Transactional. As a consequence, Spring will initiate a new transaction for it.
The error occurs, and you record it in the database in a new transaction, but please be aware that the initial transaction is still there.
Although not described in your code, in some way the retry mechanism takes place and updates the retry count. If this operation is performed within the initial transaction (or a higher-level one), then due to transaction boundaries, database isolation levels, and related factors, it is possible that this initial transaction fetches an actually outdated (but, from the transaction boundary point of view, current) NotificationOrder. And this information is what finally gets committed, overwriting the error information. I hope you get the idea.
One simple solution, maybe for both possibilities, could be to include the error message in the updateNotificationOrderRetryCount method itself, reducing the problem to a single transaction:
/* If appropriate, mark it as @Transactional */
@Transactional
public NotificationOrder updateNotificationOrderRetryCount(Long orderId, String errorMessage) {
    NotificationOrder order = notificationRepository.findOne(orderId);
    order.setRetryCount(order.getRetryCount() + 1);
    order.setOrderStatus(NotificationOrderStatus.IN_PROGRESS);
    order.setErrorDescription(errorMessage);
    // It is unnecessary: all the changes performed on the entity within the transaction will be committed
    // notificationRepository.save(order);
    return order;
}
OK, what I'm trying to accomplish is the following:
In a Java enterprise bean I want to move a file to a different directory unless a database operation fails (namely, I want to store the correct location of the file in the database; if something goes wrong within the transaction and it is rolled back, the database would point to the wrong location if I had already moved the file. Not good.).
I tried to fire an event with an observer method using the transaction phase AFTER_SUCCESS to move the file. So far so good. But the file move could also fail (maybe I don't have access to the target directory, or something like that), and I want to write that failure into the database as well. Unfortunately, it seems the observer method does not provide me with a transaction, and my method call fails.
Is the idea of calling a service method from the observing method a bad one? Or am I doing it wrong?
Generally, you should work with the transactional resource first and then with the non-transactional one. The reason is that you can roll back a transactional resource, but you cannot roll back a non-transactional one.
I mean: if you were able to update the row in the database and the file move then fails, you can safely roll back the database update. But if moving the file succeeds and you then cannot update the database for some reason, you cannot roll back the moved file.
In your particular case I would suggest not actually moving the file, but copying it instead, so that the database always holds the actual location of the new copy. In a different thread you can then delete the old copies somehow. You need to use copies because an IOException can be thrown after the actual file has been moved, and when you roll back the database transaction you would end up with the wrong, old location. Try this approach (using EJB container-managed transactions; you can easily find the Spring variant):
@TransactionAttribute(REQUIRED)
void move(String newLocation, int fileId) throws CouldNotMoveException, DatabaseException {
    try {
        database.updateFileLocation(fileId, newLocation);
    } catch (Exception exc) {
        throw new DatabaseException(exc);
    }
    try {
        file.copyFile(fileId, newLocation);
    } catch (IOException exc) {
        throw new CouldNotMoveException(exc);
    }
}
You will need to declare your exceptions like this in order to roll back your transactions (or just use RuntimeException; check your container's documentation about reacting to exceptions and rollback policies):
@ApplicationException(rollback = true)
public class DatabaseException extends Exception {
    // omitted
}

@ApplicationException(rollback = true)
public class CouldNotMoveException extends Exception {
    // omitted
}
Here your client code can react to CouldNotMoveException and record the failed move in the database, so you will fulfill your requirements.
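The copy-instead-of-move idea can be sketched with plain java.nio.file. The "database failure" below is just a simulated stand-in for this answer (there is no real EJB or database involved):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyNotMoveDemo {
    public static void main(String[] args) throws IOException {
        Path source = Files.createTempFile("doc", ".txt");
        Files.write(source, "payload".getBytes());
        Path target = Files.createTempDirectory("new-location").resolve("doc.txt");

        boolean databaseUpdateFails = true; // simulate the transactional step failing
        try {
            // Transactional resource first: update the DB row to the new location.
            if (databaseUpdateFails) {
                throw new IOException("simulated database failure");
            }
            // Only then touch the non-transactional resource: copy, don't move.
            Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
            // Nothing on disk needs undoing: the original file was never moved,
            // so after the DB rollback the old row still points at a valid path.
        }
        System.out.println(Files.exists(source)); // true: original intact
        System.out.println(Files.exists(target)); // false: no half-done move
    }
}
```

Because the file is only ever copied after the database step, a failure on either side leaves the old path valid, which is exactly why copy-then-cleanup is safer than move.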
We have an application with three databases. Two of them are only very seldom updated. We tried JPA to create transactions around them, and it worked for the databases, but Grails then failed in various places (GSP-related, I am told). This was tried quite a while ago (and not by me).
Due to delivery pressure we needed a solution that at least works for us, so I created a new aspect for the methods changing data in multiple databases. I got this to work; it is a fairly simple approach.
In the aspect we request to start a transaction for each data source by calling getTransaction(TransactionDefinition def) with the propagation set to REQUIRES_NEW. We then proceed and finally roll back or commit depending on the outcome of the call.
However, one test flow failed. This is the scenario where the code requests a rollback by calling TransactionAspectSupport.currentTransactionStatus().setRollbackOnly(). Of the three TransactionStatus objects obtained initially, none actually returns true for isRollbackOnly(). However, calling TransactionAspectSupport.currentTransactionStatus().isRollbackOnly() does return true. So this seems to point to a different transaction status.
I have not been able to figure out how to make this work, other than checking this additional status. I could not find a way to change the currentTransactionStatus to one of the created TransactionStatus objects. Looking at the TransactionTemplate implementation, I seem to be doing things correctly (it also just calls getTransaction() on the data source).
The code calling the decorated method has specified @Transactional(propagation=Propagation.NOT_SUPPORTED), so I expected no currentTransactionStatus, but one is there.
However, if it is not there, the proxied code will not be able to request a rollback the standard way, which I want to fix.
So the question is, how to start a transaction correctly from an Aspect so that the currentTransactionStatus is set correctly or how to set the currentTransactionStatus to what I think is the correct one.
Regards,
Wim Veldhuis.
I finally figured it out.
@Transactional leads to a different code path, where eventually TransactionAspectSupport.invokeWithinTransaction is invoked. This method sets up the current transaction correctly.
So in order to make my approach work, I needed to derive from TransactionAspectSupport and do a number of cast operations so I could get to the correct values for the invokeWithinTransaction call, and within the guarded function block use getTransaction(def) to obtain transactions for the OTHER databases. I chose the most important database to be the one used for invokeWithinTransaction.
To make it work I also had to provide a TransactionAttributeSource that returned my default transaction attributes. That one is stored in the TransactionAspectSupport base class during initialization.
@Around("@annotation(framework.db.MultiDbTransactional)")
public Object multiDbTransaction(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
    // Get class and method, needed for parent invocation. We need to cast to the actual
    // implementation.
    MethodInvocationProceedingJoinPoint mipJoinPoint = (MethodInvocationProceedingJoinPoint) proceedingJoinPoint;
    MethodSignature signature = (MethodSignature) mipJoinPoint.getSignature();
    Class<?> clazz = mipJoinPoint.getTarget().getClass();
    Method method = signature.getMethod();

    return invokeWithinTransaction(method, clazz, new InvocationCallback() {
        @Override
        public Object proceedWithInvocation() throws Throwable {
            // This class will create the other transactions, not of interest here.
            MultiDbTxnContext ctx = new MultiDbTxnContext();
            ctx.startTransactions();
            /*
             * We have started the transactions, so do the job. We mimic DEFAULT Spring behavior
             * regarding exceptions, so runtime exceptions roll back, the rest commits.
             */
            try {
                Object result = proceedingJoinPoint.proceed();
                ctx.finishTransactions();
                return result;
            } catch (Error | RuntimeException re) {
                ctx.rollbackTransactions();
                throw re;
            } catch (Throwable t) {
                ctx.commitTransactions();
                throw t;
            }
        }
    });
}
In my service code, I am trying to create or update a Person domain object:
@Transactional
def someServiceMethod(some params....) {
    try {
        def person = Person.findByEmail(nperson.email.toLowerCase())
        if (!person) {
            person = new Person()
            person.properties = nperson.properties
        } else {
            // update the person parameters (first/last name)
            person.firstName = nperson.firstName
            person.lastName = nperson.lastName
            person.phone = nperson.phone
        }
        if (person.validate()) {
            person.save(flush: true)
            // ... rest of code
        }
        // rest of other code....
    } catch (e) {
        log.error("Unknown error: ${e.getMessage()}", e)
        e.printStackTrace()
        return null
    }
}
Now the above code OCCASIONALLY throws the following exception when trying to save a Person object with an already existing email:
Hibernate operation: could not execute statement; SQL [n/a]; Duplicate entry 'someemail@gmail.com' for key 'email_UNIQUE'; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry 'someemail@gmail.com' for key 'email_UNIQUE'
This is very strange because I am already finding the person by email, so save() should update the existing record instead of creating a new one.
I was wondering why this is happening!
EDIT:
I am on grails 2.4.5 and Hibernate plugin in BuildConfig is:
runtime ':hibernate4:4.3.8.1'
EDIT2:
My application runs on multiple servers, hence a synchronized block won't work
If this is a concurrency issue, here is what we do in such cases. We have a lot of concurrent background processes which work on the same tables. If there is such an operation, it is indeed in a synchronized block, so the code may look like:
class SomeService {
    static transactional = false // service cannot be transactional

    private Object someLock = new Object() // a synchronized block on some object must be used

    def someConcurrentSafeMethod() {
        synchronized (someLock) {
            def person = Person.findByEmail(nperson.email.toLowerCase())
            ...
            person.save(flush: true) // flush is very important; it must be done in the synchronized block
        }
    }
}
There are a few important points to make this work (from our experience, not official):
The service cannot be transactional. If the service is transactional, the transaction is committed after the method returns a value, and synchronization inside the method will not be enough. Programmatic transactions may be another way.
A synchronized method is not enough: synchronized def someConcurrentSafeMethod() will not work, probably because the service is wrapped in a proxy.
The session MUST be flushed inside the synchronized block.
Every object that will be saved should be read inside the synchronized block; if you pass it in from an external method, you may run into an optimistic locking failure.
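The check-then-save race the synchronized block guards against can be reproduced with a plain-Java sketch. A HashMap stands in for the Person table here, and all names are made up for this illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;

public class FindOrCreateDemo {
    static final Map<String, String> table = new HashMap<>(); // stands in for the Person table
    static final Object lock = new Object();
    static int inserts = 0;

    // find-or-create guarded the way the answer suggests
    static void saveIfAbsent(String email) {
        synchronized (lock) {
            if (!table.containsKey(email)) { // Person.findByEmail(...)
                table.put(email, "person");  // person.save(flush: true)
                inserts++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch start = new CountDownLatch(1);
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            Thread t = new Thread(() -> {
                try { start.await(); } catch (InterruptedException ignored) { }
                saveIfAbsent("someemail@example.com");
            });
            t.start();
            threads.add(t);
        }
        start.countDown(); // release all threads at once
        for (Thread t : threads) t.join();
        System.out.println(inserts); // 1: only one thread "creates" the record
    }
}
```

Remove the synchronized block and two threads can both see the key missing and both insert, which is the in-process analogue of the duplicate-entry exception above.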
UPDATED
Because the application is deployed on a distributed system, the above will not solve the issue here (it may still help others). After a discussion we had on Slack, I'll just summarize potential ways to do it:
pessimistic locking of updated objects and a lock on the whole table for inserts (if possible)
moving 'dangerous' database-related methods to a single server with some API like REST and calling it from the other deployments (using the synchronized approach from above)
using a multiple-save approach: if the operation fails, catch the exception and try again. This is supported by integration libraries like Spring Integration or Apache Camel and is one of the enterprise integration patterns. See request-handler-advice-chain for Spring Integration as an example
using something to queue the operations, for example a JMS server
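The catch-and-try-again bullet above can be sketched as a bounded retry loop. This is plain Java with hypothetical names; real integration libraries add backoff, advice chains, and so on:

```java
public class RetryDemo {
    static int attemptsNeeded = 3; // the operation succeeds on the 3rd try (simulated)
    static int calls = 0;

    // Simulated save that fails with a duplicate-key-style error at first.
    static void saveOperation() {
        calls++;
        if (calls < attemptsNeeded) {
            throw new IllegalStateException("simulated duplicate-key failure");
        }
    }

    // Catch the failure and retry up to maxAttempts times.
    static boolean saveWithRetry(int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                saveOperation();
                return true; // succeeded
            } catch (IllegalStateException e) {
                if (attempt == maxAttempts) {
                    return false; // last attempt failed too: give up
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(saveWithRetry(5)); // true
        System.out.println(calls);            // 3
    }
}
```

In a real deployment the retried operation would re-read the row first, so the second attempt finds the record the other server inserted and updates it instead of inserting again.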
If anyone has more ideas please share them.
I am working with a system that uses EJB 2. The system consists of two separate applications, one is for user management and the other is the actual application containing business logic.
In the business logic application I have a bean managed entity bean that represents a User.
The application reads information from the user management database, but cannot modify it.
Whenever a user is modified in the user management application, the business logic application is notified that the user has changed. This is implemented as a call to the business application to remove the bean, which causes Weblogic to remove it from the cache and to "delete" it (which does nothing - see code for ejbRemove below). The next time the business application needs the user it will re-load it from the database.
We use the following code to invalidate a single user:
try
{
    UserHome home = (UserHome) getHome("User", UserHome.class);
    User ua = home.findByPrimaryKey(user);
    ua.remove(); // This removes a single cached User bean in the business logic application
}
catch ...
This works fine, but sometimes (especially during development) I need to invalidate all cached User beans in the business application. I would like to do this programmatically, since starting the management console takes too long, and there are too many users to make a call for every user.
Possible solutions could include:
--Accessing the bean cache and get a list of the cached User beans.
--Telling WLS to scrap all items in the current User bean cache and re-read them from the database.
Unfortunately I don't know how to do either of these.
I tried to search for a solution, but my internet search karma didn't find anything useful.
Additional information:
Persistence:
<persistence-type>Bean</persistence-type>
<reentrant>false</reentrant>
Caching:
<entity-descriptor>
<entity-cache>
<max-beans-in-cache>500</max-beans-in-cache>
<concurrency-strategy>Exclusive</concurrency-strategy>
<cache-between-transactions>true</cache-between-transactions>
</entity-cache>
<persistence></persistence>
</entity-descriptor>
Bean Code (in the business application):
public void ejbLoad()
{
    thisLogger().entering(getUser(m_ctx), "ejbLoad()");
    // Here comes some code that connects to the user database and fetches the bean data.
    ...
}

public void ejbRemove()
{
    // This method does nothing
}

public void ejbStore()
{
    // This method does nothing
}

public void ejbPostCreate()
{
    // This method is empty
}

/**
 * Required by the EJB spec.
 * <p>
 * This method always throws CreateException since this entity is read only.
 * The remote reference should be obtained by calling ejbFindByPrimaryKey().
 *
 * @return
 * @exception CreateException
 *     Always thrown
 */
public String ejbCreate()
    throws CreateException
{
    throw new CreateException("This entity should be called via ejbFindByPrimaryKey()");
}
I did some additional research and was able to find a solution to my problem.
I was able to use weblogic.ejb.CachingHome.invalidateAll(). However, to do so I had to change the concurrency strategy of my bean to ReadOnly. Apparently, Exclusive concurrency won't make the home interface implement weblogic.ejb.CachingHome:
<entity-descriptor>
<entity-cache>
<max-beans-in-cache>500</max-beans-in-cache>
<read-timeout-seconds>0</read-timeout-seconds>
<concurrency-strategy>ReadOnly</concurrency-strategy>
<cache-between-transactions>true</cache-between-transactions>
</entity-cache>
<persistence></persistence>
</entity-descriptor>
And finally, the code for my new function invalidateAllUsers:
public void invalidateAllUsers() {
    logger.entering(getUser(ctx), "invalidateAllUsers()"); // getUser returns a string with the current user (context.getCallerPrincipal().getName()).
    try {
        UserHome home = (UserHome) getHome("User", UserHome.class); // Looks up the home interface
        CachingHome cache = (CachingHome) home; // Sweet WebLogic magic!
        cache.invalidateAll();
    } catch (RemoteException e) {
        logger.severe(getUser(ctx), "invalidateAllUsers()", "got RemoteException", e);
        throw new EJBException(e);
    }
}