I am working with a system that uses EJB 2. The system consists of two separate applications, one is for user management and the other is the actual application containing business logic.
In the business logic application I have a bean managed entity bean that represents a User.
The application reads information from the user management database, but cannot modify it.
Whenever a user is modified in the user management application, the business logic application is notified that the user has changed. This is implemented as a call to the business application to remove the bean, which causes Weblogic to remove it from the cache and to "delete" it (which does nothing - see code for ejbRemove below). The next time the business application needs the user it will re-load it from the database.
We use the following code to invalidate a single user:
try
{
UserHome home = (UserHome) getHome("User", UserHome.class);
User ua = home.findByPrimaryKey(user);
ua.remove(); // This will remove a single cached User bean in the business logic application
}
catch ...
This works fine, but sometimes (especially when doing development) I need to invalidate all cached User beans in the business application. I would like to do this programmatically - starting the management console takes too long. There are too many users to make a call for every user.
Possible solutions could include:
--Accessing the bean cache and get a list of the cached User beans.
--Telling WLS to scrap all items in the current User bean cache and re-read them from the database.
Unfortunately I don't know how to do either of these.
I tried to search for a solution, but my internet search karma didn't find anything useful.
Additional information:
Persistence:
<persistence-type>Bean</persistence-type>
<reentrant>false</reentrant>
Caching:
<entity-descriptor>
<entity-cache>
<max-beans-in-cache>500</max-beans-in-cache>
<concurrency-strategy>Exclusive</concurrency-strategy>
<cache-between-transactions>true</cache-between-transactions>
</entity-cache>
<persistence></persistence>
</entity-descriptor>
Bean Code (in the business application):
public void ejbLoad()
{
thisLogger().entering(getUser(m_ctx), "ejbLoad()");
// Here comes some code that connects to the user database and fetches the bean data.
...
}
public void ejbRemove()
{
// This method does nothing
}
public void ejbStore()
{
// This method does nothing
}
public void ejbPostCreate()
{
// This method is empty
}
/**
* Required by EJB spec.
* <p>
* This method always throws CreateException since this entity is read only.
* The remote reference should be obtained by calling ejbFindByPrimaryKey().
*
* @return
* @exception CreateException
*            Always thrown
*/
public String ejbCreate()
throws CreateException
{
throw new CreateException("This entity should be called via ejbFindByPrimaryKey()");
}
I did some additional research and was able to find a solution to my problem.
I was able to use weblogic.ejb.CachingHome.invalidateAll(). However, to do so I had to change the concurrency strategy of my bean to ReadOnly. Apparently, Exclusive concurrency does not make the home interface implement weblogic.ejb.CachingHome:
<entity-descriptor>
<entity-cache>
<max-beans-in-cache>500</max-beans-in-cache>
<read-timeout-seconds>0</read-timeout-seconds>
<concurrency-strategy>ReadOnly</concurrency-strategy>
<cache-between-transactions>true</cache-between-transactions>
</entity-cache>
<persistence></persistence>
</entity-descriptor>
And finally, the code for my new function invalidateAllUsers:
public void invalidateAllUsers() {
logger.entering(getUser(ctx), "invalidateAll()"); // getUser returns a string with the current user ( context.getCallerPrincipal().getName() ).
try {
UserHome home = (UserHome) getHome("User", UserHome.class); // Looks up the home interface
CachingHome cache = (CachingHome)home; // Sweet weblogic magic!
cache.invalidateAll();
} catch (RemoteException e) {
logger.severe(getUser(ctx), "invalidateAllUsers()", "got RemoteException", e);
throw new EJBException(e);
}
}
Related
I can delete a specific record using Liferay Service Builder, but what should I do when I want to delete all the records from that table?
I am new to Liferay, so any help would be appreciated.
As your entity name is Location, add the following method in your LocationLocalServiceImpl.java and build the service:
public void deleteAllLocations(){
try{
LocationUtil.removeAll();
}catch(Exception ex){
// Log exception here.
}
}
On a successful build, deleteAllLocations will be copied to LocationLocalServiceUtil.java, from where you can use it in your action class as:
LocationLocalServiceUtil.deleteAllLocations();
The question already has an answer that the asker is satisfied with, but I thought I'd add another just the same. Since you're writing a custom method in your service implementation (in your case LocationLocalServiceImpl):
You have direct access to the persistence bean so there is no need to use the LocationUtil.
The accepted answer suggests catching any Exception and logging it. I disagree with this because it will fail silently and depending on the application logic, could cause problems later on. For example, if your removeAll is called within a transaction whose success depends on the correct removal of all entities and the accepted approach fails, the transaction won't be rolled back since you don't throw a SystemException.
With this in mind, consider the following (within your implementation, as above):
public void deleteAllLocations() throws SystemException {
locationPersistence.removeAll();
}
Then, wherever you're calling it from (for example in a controller), you have control over what happens in the case of a failure
try {
LocationLocalServiceUtil.deleteAllLocations();
} catch (SystemException e) {
// here whatever you've removed has been rolled back
// instead of just logging it, warn the user that an error occurred
SessionErrors.add(portletRequest, "your-error-key");
log.error("An error occurred while removing all locations", e);
}
Having said that, your LocationUtil class is available outside of the service so you can call it from a controller. If your goal is only to remove all Location entities without doing anything else within the context of that transaction, you can just use the LocationUtil in your controller. This would save you from having to rebuild the service layer.
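For completeness, a rough sketch of that controller-side call (assuming the generated LocationUtil from the examples above; in Service Builder the generated removeAll() declares SystemException, so the handling mirrors the earlier snippet):
// In your action class / controller - sketch only, names as in the examples above
try {
    LocationUtil.removeAll(); // goes straight to the persistence layer, no service rebuild needed
} catch (SystemException e) {
    SessionErrors.add(portletRequest, "your-error-key");
    log.error("An error occurred while removing all locations", e);
}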
I am developing an architecture in Java using tomcat and I have come across a situation that I believe is very generic and yet, after reading several questions/answers in StackOverflow, I couldn't find a definitive answer. My architecture has a REST API (running on tomcat) that receives one or more files and their associated metadata and writes them to storage. The configuration of the storage layer has a 1-1 relationship with the REST API server, and for that reason the intuitive approach is to write a Singleton to hold that configuration.
Obviously I am aware that Singletons bring testability problems due to global state and the hardship of mocking Singletons. I also thought of using the Context pattern, but I am not convinced that the Context pattern applies in this case and I worry that I will end up coding using the "Context anti-pattern" instead.
Let me give you some more background on what I am writing. The architecture is comprised of the following components:
Clients that send requests to the REST API uploading or retrieving "preservation objects", or simply put, POs (files + metadata) in JSON or XML format.
The high level REST API that receives requests from clients and stores data in a storage layer.
A storage layer that may contain a combination of OpenStack Swift containers, tape libraries and file systems. Each of these "storage containers" (I'm calling file systems containers for simplicity) is called an endpoint in my architecture. The storage layer obviously does not reside on the same server where the REST API is.
The configuration of endpoints is done through the REST API (e.g. POST /configEndpoint), so that an administrative user can register new endpoints, edit or remove existing endpoints through HTTP calls. Whilst I have only implemented the architecture using an OpenStack Swift endpoint, I anticipate that the information for each endpoint contains at least an IP address, some form of authentication information and a driver name, e.g. "the Swift driver", "the LTFS driver", etc. (so that when new storage technologies arrive they can be easily integrated to my architecture as long as someone writes a driver for it).
My problem is: how do I store and load configuration in a testable, reusable and elegant way? I won't even consider passing a configuration object to all the various methods that implement the REST API calls.
A few examples of the REST API calls and where the configuration comes into play:
// Retrieve a preservation object metadata (PO)
@GET
@Path("container/{containername}/{po}")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public PreservationObjectInformation getPOMetadata(@PathParam("containername") String containerName, @PathParam("po") String poUUID) {
// STEP 1 - LOAD THE CONFIGURATION
// One of the following options:
// StorageContext.loadContext(containerName);
// Configuration.getInstance(containerName);
// Pass a configuration object as an argument of the getPOMetadata() method?
// Some sort of dependency injection
// STEP 2 - RETRIEVE THE METADATA FROM THE STORAGE
// Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
// Pass poUUID as parameter
// STEP 3 - CONVERT JSON/XML TO OBJECT
// Unmarshall the file in JSON format
PreservationObjectInformation poi = unmarshall(data);
return poi;
}
// Delete a PO
@DELETE
@Path("container/{containername}/{po}")
public Response deletePO(@PathParam("containername") String containerName, @PathParam("po") String poName) throws IOException, URISyntaxException {
// STEP 1 - LOAD THE CONFIGURATION
// One of the following options:
// StorageContext.loadContext(containerName); // Context
// Configuration.getInstance(containerName); // Singleton
// Pass a configuration object as an argument of the getPOMetadata() method?
// Some sort of dependency injection
// STEP 2 - CONNECT TO THE STORAGE ENDPOINT
// Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
// STEP 3 - DELETE THE FILE
return Response.ok().build();
}
// Submit a PO and its metadata
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("container/{containername}/{po}")
public Response submitPO(@PathParam("containername") String container, @PathParam("po") String poName, @FormDataParam("objectName") String objectName,
@FormDataParam("inputstream") InputStream inputStream) throws IOException, URISyntaxException {
// STEP 1 - LOAD THE CONFIGURATION
// One of the following options:
// StorageContext.loadContext(containerName);
// Configuration.getInstance(containerName);
// Pass a configuration object as an argument of the getPOMetadata() method?
// Some sort of dependency injection
// STEP 2 - WRITE THE DATA AND METADATA TO STORAGE
// Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
return Response.created(new URI("container/" + container + "/" + poName))
.build();
}
** UPDATE #1 - My implementation based on @mawalker's comment **
Below is my implementation based on the proposed answer. A factory creates concrete strategy objects that implement lower-level storage actions. The context object (which is passed back and forth by the middleware) contains an object of the abstract type (in this case, an interface) StorageContainerStrategy (its implementation will depend on the type of storage in each particular case at runtime).
public interface StorageContainerStrategy {
public void write();
public void read();
// other methods here
}
public class Context {
public StorageContainerStrategy strategy;
// other context information here...
}
public class StrategyFactory {
public static StorageContainerStrategy createStorageContainerStrategy(Container c) {
if(c.getEndpoint().isSwift())
return new SwiftStrategy();
else if(c.getEndpoint().isLtfs())
return new LtfsStrategy();
// etc.
return null;
}
}
public class SwiftStrategy implements StorageContainerStrategy {
@Override
public void write() {
// OpenStack Swift specific code
}
@Override
public void read() {
// OpenStack Swift specific code
}
}
public class LtfsStrategy implements StorageContainerStrategy {
@Override
public void write() {
// LTFS specific code
}
@Override
public void read() {
// LTFS specific code
}
}
Here is the paper Doug Schmidt (in full disclosure my current PhD Advisor) wrote on the Context Object Pattern.
https://www.dre.vanderbilt.edu/~schmidt/PDF/Context-Object-Pattern.pdf
As dbugger stated, building a factory into your API classes that returns the appropriate 'configuration' object is a pretty clean way of doing this. But if you know the 'context' (yes, overloaded usage) of the paper being discussed, it is mainly for use in middleware, where there are multiple layers of context changes. Note that under the 'implementation' section it recommends use of the Strategy pattern for how to add each layer's 'context information' to the 'context object'.
I would recommend a similar approach. Each 'storage container' would have a different strategy associated with it. Each "driver" therefore has its own strategy impl. class. That strategy would be obtained from a factory, and then used as needed. (How to design your Strats... best way (I'm guessing) would be to make your 'driver strat' be generic for each driver type, and then configure it appropriately as new resources arise/the strat object is assigned)
But as far as I can tell right now (unless I'm reading your question wrong), there are only 2 'layers' that the 'context object' would be aware of: the 'rest server(s)' and the 'storage endpoints'. If I'm mistaken then so be it... but with only 2 layers, you can just use the Strategy pattern in the same way you were thinking of the Context pattern, and avoid the issue of singletons/the Context 'anti-pattern'. (You 'could' have a context object which contains the strategy for which driver to use, and then a 'configuration' for that driver... that wouldn't be insane, and might fit well with your dynamic HTTP configuration.)
The strategy factory class doesn't have to be a singleton or have static factory methods either. I've made factories that are plain objects before just fine, even with D.I. for testing. There are always trade-offs to different approaches, but I've found better testing to be worth it in almost all cases I've run into.
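To illustrate that last point, here is a rough sketch of an instance-based (non-static) factory; the StorageService wrapper class and the constructor injection are my own assumptions, added only to show how a test could swap the factory for a fake:
public class StrategyFactory {
    // Instance method instead of a static one, so the factory itself can be stubbed in tests.
    public StorageContainerStrategy createStorageContainerStrategy(Container c) {
        if (c.getEndpoint().isSwift())
            return new SwiftStrategy();
        else if (c.getEndpoint().isLtfs())
            return new LtfsStrategy();
        throw new IllegalArgumentException("Unknown endpoint type");
    }
}

public class StorageService {
    private final StrategyFactory factory;

    // The factory is injected; a test can pass a fake that returns a stub strategy.
    public StorageService(StrategyFactory factory) {
        this.factory = factory;
    }

    public void write(Container c) {
        factory.createStorageContainerStrategy(c).write();
    }
}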
I have a RemoteServiceServlet class that implements several services (methods).
All the methods need to check the session and get the corresponding user info before doing anything. Since the class has more than 20 service methods, doing so in every service is a nightmare. Is there a way to run some session-checking method automatically for all incoming requests? Or how else can I solve this problem?
Here is an example pseudo-code for my situation.
public class OnboardingServiceImpl extends RemoteServiceServlet implements OnboardingService {
private String checkSessionAndGetUser(){...}
public void service1(){
// check session
// get user and do something based on the user data
}
public void service2(){
// check session
// get user and do something based on the user data
}
public void service3(){
// check session
// get user and do something based on the user data
}
...
public void service20(){
// check session
// get user and do something based on the user data
}
}
As you can see, service1, service2, ..., service 20 all need to get the user info based on the session, but I do not want to repeat writing the code for every service. Any help will be appreciated.
I'd suggest overriding processCall(RPCRequest rpcRequest):
@Override
public String processCall(RPCRequest rpcRequest) throws SerializationException {
//your checks here, in case of error:
//return RPC.encodeResponseForFailedRequest(null, new Exception("Invalid session"));
// note that you'll have to use a serializable exception type here.
return super.processCall(rpcRequest);
}
RemoteServiceServlet's doPost is final, but service is not, so you can put your code there.
…or use a servlet filter.
This will however be done outside the "RPC" (before the request is even decoded), so response cannot just be a "throw exception and have it passed to onFailure on client side".
For that, you'll have to either use aspect-oriented programming (such as AspectJ) to "inject" code into all your methods, or call an init method at the beginning of each method (you'll keep repeating the code, but that could possibly be reduced to a one-liner).
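For the servlet filter option mentioned above, a minimal sketch (the "user" session attribute and the 401 response are assumptions; adapt them to however your login code stores session data):
public class SessionCheckFilter implements Filter {
    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession(false);
        // "user" is an assumed attribute name - use whatever your login code stores.
        if (session == null || session.getAttribute("user") == null) {
            ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return; // the request never reaches the RPC servlet
        }
        chain.doFilter(req, resp);
    }

    @Override
    public void destroy() { }
}
Map the filter to the URL pattern of your GWT RPC servlet in web.xml. As noted above, the client will then see a generic failure rather than a typed exception in onFailure.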
I have an immutable User entity:
public class User {
final LocalDate lastPasswordChangeDate;
// final id, name, email, etc.
}
I need to add a method that will return information about whether the user's password must be changed, i.e. it has not been changed for more than the passwordValidIntervalInDays system setting.
The current approach:
public class UserPasswordService {
private SettingsRepository settingsRepository;
@Inject
public UserPasswordService(SettingsRepository settingsRepository) {
this.settingsRepository = settingsRepository;
}
public boolean passwordMustBeChanged(User user) {
return user.lastPasswordChangeDate.plusDays(
settingsRepository.get().passwordValidIntervalInDays
).isBefore(LocalDate.now());
}
}
The question is how to make the above code more object-oriented and avoid the anemic domain model antipattern. Should the passwordMustBeChanged method be moved to User? If so, how should it access SettingsRepository: should the repository be injected into User's constructor, should a Settings instance be provided to the constructor, or should the passwordMustBeChanged method take a Settings instance as a parameter?
The code of Settings and SettingsRepository is not important, but for completeness, here it is:
public class Settings {
int passwordValidIntervalInDays;
public Settings(int passwordValidIntervalInDays) {
this.passwordValidIntervalInDays = passwordValidIntervalInDays;
}
}
public class SettingsRepository {
public Settings get() {
// load the settings from the persistent storage
return new Settings(10);
}
}
For a system-wide password expiration policy your approach is not that bad, as long as your UserPasswordService is a domain service, not an application service. Embedding the password expiration policy within User would be a violation of the SRP IMHO, which is not much better.
You could also consider something like (where the factory was initialized with the correct settings):
PasswordExpirationPolicy policy = passwordExpirationPolicyFactory().createDefault();
boolean mustChangePassword = user.passwordMustBeChanged(policy);
//class User
public boolean passwordMustBeChanged(PasswordExpirationPolicy policy) {
return policy.hasExpired(currentDate, this.lastPasswordChangeDate);
}
If eventually the policy can be specified for individual users then you can simply store policy objects on User.
You could also make use of the ISP with your current design and implement a PasswordExpirationPolicy interface on your UserPasswordService service. That will give you the flexibility of refactoring into real policy objects later on without having to change how the User interacts with the policy.
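For example, the interface extraction could look roughly like this (a sketch only; the hasExpired signature follows the policy usage shown above, and the rest is just the original service reshaped):
public interface PasswordExpirationPolicy {
    boolean hasExpired(LocalDate currentDate, LocalDate lastPasswordChangeDate);
}

public class UserPasswordService implements PasswordExpirationPolicy {
    private final SettingsRepository settingsRepository;

    @Inject
    public UserPasswordService(SettingsRepository settingsRepository) {
        this.settingsRepository = settingsRepository;
    }

    @Override
    public boolean hasExpired(LocalDate currentDate, LocalDate lastPasswordChangeDate) {
        // Same rule as before, now behind the policy interface.
        return lastPasswordChangeDate
            .plusDays(settingsRepository.get().passwordValidIntervalInDays)
            .isBefore(currentDate);
    }

    // The existing entry point now just delegates to the policy method.
    public boolean passwordMustBeChanged(User user) {
        return hasExpired(LocalDate.now(), user.lastPasswordChangeDate);
    }
}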
If you had a Password value object you may also make things slightly more cohesive by having something like (the password creation date would be embedded in the password VO):
//class User
public boolean passwordMustBeChanged(PasswordExpirationPolicy policy) {
return this.password.hasExpired(policy);
}
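A possible shape for such a Password value object (the field names here are made up for illustration; only the creation date matters for the expiration check):
public final class Password {
    private final String hash;
    private final LocalDate creationDate; // embedded creation date, as suggested above

    public Password(String hash, LocalDate creationDate) {
        this.hash = hash;
        this.creationDate = creationDate;
    }

    public boolean hasExpired(PasswordExpirationPolicy policy) {
        // The VO only supplies its own date; the rule itself lives in the policy.
        return policy.hasExpired(LocalDate.now(), creationDate);
    }
}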
Just to throw out another possible solution: you could implement a long-running process that does the expiration check and sends a command to a PasswordExpiredHandler, which could mark the user as having an expired password.
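A very rough sketch of that idea; UserRepository, PasswordExpiredHandler and the daily schedule are all hypothetical here:
public class PasswordExpirationChecker implements Runnable {
    private final UserRepository userRepository;      // hypothetical lookup of all users
    private final PasswordExpirationPolicy policy;
    private final PasswordExpiredHandler handler;     // hypothetical command handler

    public PasswordExpirationChecker(UserRepository userRepository,
                                     PasswordExpirationPolicy policy,
                                     PasswordExpiredHandler handler) {
        this.userRepository = userRepository;
        this.policy = policy;
        this.handler = handler;
    }

    @Override
    public void run() {
        for (User user : userRepository.findAll()) {
            if (policy.hasExpired(LocalDate.now(), user.lastPasswordChangeDate)) {
                handler.markPasswordExpired(user); // send the "password expired" command
            }
        }
    }
}
// Scheduled e.g. once a day:
// Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(checker, 0, 1, TimeUnit.DAYS);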
I have stumbled upon a document that provides an answer to my question:
A common problem in applying DDD is when an entity requires access to data in a repository or other gateway in order to carry out a business operation. One solution is to inject repository dependencies directly into the entity, however this is often frowned upon. One reason for this is because it requires the plain-old-(C#, Java, etc…) objects implementing entities to be part of an application dependency graph. Another reason is that it makes reasoning about the behavior of entities more difficult since the Single-Responsibility Principle is violated. A better solution is to have an application service retrieve the information required by an entity, effectively setting up the execution environment, and provide it to the entity.
http://gorodinski.com/blog/2012/04/14/services-in-domain-driven-design-ddd/
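Applied to the password example above, that could look something like the sketch below. The application-service name and the DaysBasedExpirationPolicy implementation are assumptions; the point is only that the service fetches the settings and hands a ready-made policy to the entity:
public class UserPasswordApplicationService {
    private final SettingsRepository settingsRepository;

    public UserPasswordApplicationService(SettingsRepository settingsRepository) {
        this.settingsRepository = settingsRepository;
    }

    public boolean passwordMustBeChanged(User user) {
        // The application service retrieves what the entity needs...
        Settings settings = settingsRepository.get();
        PasswordExpirationPolicy policy =
            new DaysBasedExpirationPolicy(settings.passwordValidIntervalInDays);
        // ...and provides it to the entity, which keeps the actual business rule.
        return user.passwordMustBeChanged(policy);
    }
}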
I have 3 classes, Main, Pin, and Employee. (More classes to be added in the future)
The Main class runs a program that prompts the user to enter a password via a method in the Pin class. The password is sent using SQL to a database to be verified by methods in the Employee and Pin classes. If the password matches, the ID of that employee is returned.
The problem I am having is saving the ID. I retrieved the ID, but I want to be able to use this ID for the rest of the program. I want to save it in the Employee class but it is returned in a method from the Pin class. How would I be able to save this ID to be used by the Employee class, or any class?
Hmm, if I understand you correctly you may want something like a composition of Employee and Pin?
Employee e = ...
e.setPin(pin);
Or is the problem that you don't know how to assign a specific pin to an Employee, so that it is still assigned to the Employee in the next run of your application?
UPDATE: After receiving more information: You need to create an application context for your program. As it seems to be a standalone application with one user at a time, this can be a static field in some class.
If the application is a multi-user application, then the context needs to be static per user.
If the application is a web-application, then your application scope is the Session, that will be terminated during Logout. (That is actually not precise, as the session is the session scope, and the application scope is really the context of the application. But in your case, the pin is session scoped.)
A sample for a standalone/single user application:
public class ApplicationContext {
static ApplicationContext CTX;
public static ApplicationContext get() {
if( CTX == null ) {
CTX = new ApplicationContext();
}
return CTX;
}
private Pin pin;
public Pin getPin() { return pin; }
public void setPin(Pin pin) { this.pin = pin; }
// ... add more stuff here ...
public void logout() {
CTX = null;
}
}
What you need is a concept of a so called application scoped object.
There are various libraries that work with JavaEE but I assume you're using Java SE.
Apache DeltaSpike allows you to create such scoped beans within Java SE, too (see the DeltaSpike documentation for an example). With it, you can create an @ApplicationScoped bean where you can store values across your application.
Please know that there are other scopes possible, such as SessionScoped (HTTP session), WindowScoped (browser window), etc.
This should point you in the right direction, but there might be better approaches than using the application scope in your case.
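For instance, an application-scoped holder for the logged-in employee's ID could look roughly like this (the class and field names are just examples; inject it with @Inject wherever the ID is needed):
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class CurrentEmployeeHolder {
    private Integer employeeId;

    public Integer getEmployeeId() {
        return employeeId;
    }

    public void setEmployeeId(Integer employeeId) {
        this.employeeId = employeeId;
    }

    public void clear() { // e.g. on logout
        this.employeeId = null;
    }
}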