I have one servlet serving multiple sites, and therefore I want to have different sessions for different sites, even if it's the same user.
Is there any support for this in Java, or do I need to prefix the attribute names instead? I guess prefixing is not a good idea.
/Br Johannes
This CANNOT be done in the servlet container based on URL parameters alone; you'll have to do it yourself. Instead of dealing with attribute prefixes in your servlet, however, the easiest way to manage "separate" sessions is via a filter:
Write a simple wrapper class for HttpSession. Have it hold a Map of attributes and back all attribute/value methods with that map; delegate all the other methods to the actual session you're wrapping. Override the invalidate() method to remove your session wrapper instead of killing the entire "real" session.
Write a servlet filter; map it to intercept all applicable URLs.
Maintain a collection of your session wrappers as an attribute within the real session.
In your filter's doFilter() method extract the appropriate session wrapper from the collection and inject it into the request you're passing down the chain by wrapping the original request into HttpServletRequestWrapper whose getSession() method is overwritten.
Your servlets / JSPs / etc... will enjoy "separate" sessions.
Note that the session's "lastAccessedTime" is shared with this approach. If you need to keep those separate you'll have to write your own code for maintaining this setting and for expiring your session wrappers.
I recently came across this problem too, and I went with ChssPly76's suggestion to solve it. I thought I'd post my results here to provide a reference implementation. It hasn't been extensively tested, so kindly let me know if you spot any weaknesses.
I assume that every request to a servlet contains a parameter named uiid, which represents a user ID. The requester has to keep track of sending a new ID every time a link is clicked that opens a new window. In my case this is sufficient, but feel free to use any other (maybe more secure) method here. Furthermore, I work with Tomcat 7 or 8. You might need to extend other classes when working with different servlet containers, but the APIs shouldn't change too much.
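For illustration, inside whatever renders the page, a fresh ID for such a link could be produced roughly like this (a sketch; the servlet path is a placeholder):
// Build a link that opens a new window with its own uiid, so that window ends up with its own subsession.
String uiid = java.util.UUID.randomUUID().toString();
String newWindowLink = response.encodeURL(request.getContextPath() + "/yourServlet?uiid=" + uiid);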
In the following, the created sessions are referred to as subsessions; the original container-managed session is the parent session. The implementation consists of the following five classes:
The SingleSessionManager keeps track of creation, distribution and cleanup of all subsessions. It does this by acting as a servlet filter which replaces the ServletRequest with a wrapper that returns the appropriate subsession. A scheduler periodically checks for expired subsessions. And yes, it's a singleton. Sorry, but I still like them.
package session;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
/**
* A singleton class that manages multiple sessions on top of a regular container managed session.
* See web.xml for information on how to enable this.
*
*/
public class SingleSessionManager implements Filter {
/**
* The default session timeout in seconds to be used if no explicit timeout is provided.
*/
public static final int DEFAULT_TIMEOUT = 900;
/**
* The default interval for session validation checks in seconds to be used if no explicit
* timeout is provided.
*/
public static final int DEFAULT_SESSION_INVALIDATION_CHECK = 15;
private static SingleSessionManager instance;
private ScheduledExecutorService scheduler;
protected int timeout;
protected long sessionInvalidationCheck;
private Map<SubSessionKey, HttpSessionWrapper> sessions = new ConcurrentHashMap<SubSessionKey, HttpSessionWrapper>();
public SingleSessionManager() {
sessionInvalidationCheck = DEFAULT_SESSION_INVALIDATION_CHECK;
timeout = DEFAULT_TIMEOUT;
}
public static SingleSessionManager getInstance() {
if (instance == null) {
instance = new SingleSessionManager();
}
return instance;
}
@Override
public void destroy() {
}
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
HttpServletRequestWrapper wrapper = new HttpServletRequestWrapper((HttpServletRequest) request);
chain.doFilter(wrapper, response);
}
@Override
public void init(FilterConfig cfg) throws ServletException {
String timeout = cfg.getInitParameter("sessionTimeout");
if (timeout != null && !timeout.trim().equals("")) {
getInstance().timeout = Integer.parseInt(timeout) * 60;
}
String sessionInvalidationCheck = cfg.getInitParameter("sessionInvalidationCheck");
if (sessionInvalidationCheck != null && !sessionInvalidationCheck.trim().equals("")) {
getInstance().sessionInvalidationCheck = Long.parseLong(sessionInvalidationCheck);
}
getInstance().startSessionExpirationScheduler();
}
/**
* Create a new session ID.
*
* @return A new unique session ID.
*/
public String generateSessionId() {
return UUID.randomUUID().toString();
}
protected void startSessionExpirationScheduler() {
if (scheduler == null) {
scheduler = Executors.newScheduledThreadPool(1);
final Runnable sessionInvalidator = new Runnable() {
public void run() {
SingleSessionManager.getInstance().destroyExpiredSessions();
}
};
final ScheduledFuture<?> sessionInvalidatorHandle =
scheduler.scheduleAtFixedRate(sessionInvalidator
, this.sessionInvalidationCheck
, this.sessionInvalidationCheck
, TimeUnit.SECONDS);
}
}
/**
* Get the timeout after which a session will be invalidated.
*
* @return The timeout of a session in seconds.
*/
public int getSessionTimeout() {
return timeout;
}
/**
* Retrieve a session.
*
* @param uiid
*            The user id this session is to be associated with.
* @param create
*            If <code>true</code> and no session exists for the given user id, a new session is
*            created and associated with the given user id. If <code>false</code> and no
*            session exists for the given user id, no new session will be created and this
*            method will return <code>null</code>.
* @param originalSession
*            The original backing session created and managed by the servlet container.
* @return The session associated with the given user id if this session exists and/or create is
*         set to <code>true</code>, <code>null</code> otherwise.
*/
public HttpSession getSession(String uiid, boolean create, HttpSession originalSession) {
if (uiid != null) {
SubSessionKey key = new SubSessionKey(originalSession.getId(), uiid);
if (!sessions.containsKey(key) && create) {
HttpSessionWrapper sw = new HttpSessionWrapper(uiid, originalSession);
sessions.put(key, sw);
}
HttpSessionWrapper session = sessions.get(key);
session.setLastAccessedTime(System.currentTimeMillis());
return session;
}
return null;
}
public HttpSessionWrapper removeSession(SubSessionKey key) {
return sessions.remove(key);
}
/**
* Destroy a session, freeing all its resources.
*
* @param session
*            The session to be destroyed.
*/
public void destroySession(HttpSessionWrapper session) {
String uiid = ((HttpSessionWrapper)session).getUiid();
SubSessionKey key = new SubSessionKey(session.getOriginalSession().getId(), uiid);
HttpSessionWrapper w = getInstance().removeSession(key);
if (w != null) {
System.out.println("Session " + w.getId() + " with uiid " + uiid + " was destroyed.");
} else {
System.out.println("uiid " + uiid + " does not have a session.");
}
}
/**
* Destroy all sessions that are expired at the time of this method call.
*/
public void destroyExpiredSessions() {
List<HttpSessionWrapper> markedForDelete = new ArrayList<HttpSessionWrapper>();
long time = System.currentTimeMillis() / 1000;
for (HttpSessionWrapper session : sessions.values()) {
if (time - (session.getLastAccessedTime() / 1000) >= session.getMaxInactiveInterval()) {
markedForDelete.add(session);
}
}
for (HttpSessionWrapper session : markedForDelete) {
destroySession(session);
}
}
/**
* Remove all subsessions that were created from a given parent session.
*
* @param originalSession
*            All subsessions created with this session as their parent session will be
*            invalidated.
*/
public void clearAllSessions(HttpSession originalSession) {
Iterator<HttpSessionWrapper> it = sessions.values().iterator();
while (it.hasNext()) {
HttpSessionWrapper w = it.next();
if (w.getOriginalSession().getId().equals(originalSession.getId())) {
destroySession(w);
}
}
}
public void setSessionTimeout(int timeout) {
this.timeout = timeout;
}
}
A subsession is identified by a SubSessionKey. These key objects depend on the uiid and the ID of the parent session.
package session;
/**
* Key object for identifying a subsession.
*
*/
public class SubSessionKey {
private String sessionId;
private String uiid;
/**
* Create a new instance of {@link SubSessionKey}.
*
* @param sessionId
*            The session id of the parent session.
* @param uiid
*            The user's id this session is associated with.
*/
public SubSessionKey(String sessionId, String uiid) {
super();
this.sessionId = sessionId;
this.uiid = uiid;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((sessionId == null) ? 0 : sessionId.hashCode());
result = prime * result + ((uiid == null) ? 0 : uiid.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
SubSessionKey other = (SubSessionKey) obj;
if (sessionId == null) {
if (other.sessionId != null)
return false;
} else if (!sessionId.equals(other.sessionId))
return false;
if (uiid == null) {
if (other.uiid != null)
return false;
} else if (!uiid.equals(other.uiid))
return false;
return true;
}
@Override
public String toString() {
return "SubSessionKey [sessionId=" + sessionId + ", uiid=" + uiid + "]";
}
}
The HttpServletRequestWrapper wraps an HttpServletRequest object. All methods are redirected to the wrapped request except for the getSession methods, which return an HttpSessionWrapper depending on the user ID in this request's parameters.
package session;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
/**
* Wrapper class that wraps a {@link HttpServletRequest} object. All methods are redirected to the
* wrapped request except for the <code>getSession</code> methods, which will return an
* {@link HttpSessionWrapper} depending on the user id in this request's parameters.
*
*/
public class HttpServletRequestWrapper extends javax.servlet.http.HttpServletRequestWrapper {
private HttpServletRequest req;
public HttpServletRequestWrapper(HttpServletRequest req) {
super(req);
this.req = req;
}
@Override
public HttpSession getSession() {
return getSession(true);
}
@Override
public HttpSession getSession(boolean create) {
String[] uiid = getParameterMap().get("uiid");
if (uiid != null && uiid.length >= 1) {
return SingleSessionManager.getInstance().getSession(uiid[0], create, req.getSession(create));
}
return req.getSession(create);
}
}
The HttpSessionWrapper represents a subsession.
package session;
import java.util.Collections;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionContext;
/**
* Implementation of a HttpSession. Each instance of this class is created around a container
* managed parent session with its lifetime linked to its parent's.
*
*/
@SuppressWarnings("deprecation")
public class HttpSessionWrapper implements HttpSession {
private Map<String, Object> attributes;
private Map<String, Object> values;
private long creationTime;
private String id;
private String uiid;
private boolean isNew;
private long lastAccessedTime;
private HttpSession originalSession;
public HttpSessionWrapper(String uiid, HttpSession originalSession) {
creationTime = System.currentTimeMillis();
lastAccessedTime = creationTime;
id = SingleSessionManager.getInstance().generateSessionId();
isNew = true;
attributes = new HashMap<String, Object>();
Enumeration<String> names = originalSession.getAttributeNames();
while (names.hasMoreElements()) {
String name = names.nextElement();
attributes.put(name, originalSession.getAttribute(name));
}
values = new HashMap<String, Object>();
for (String name : originalSession.getValueNames()) {
values.put(name, originalSession.getValue(name));
}
this.uiid = uiid;
this.originalSession = originalSession;
}
public String getUiid() {
return uiid;
}
public void setNew(boolean b) {
isNew = b;
}
public void setLastAccessedTime(long time) {
lastAccessedTime = time;
}
@Override
public Object getAttribute(String arg0) {
return attributes.get(arg0);
}
@Override
public Enumeration<String> getAttributeNames() {
return Collections.enumeration(attributes.keySet());
}
@Override
public long getCreationTime() {
return creationTime;
}
@Override
public String getId() {
return id;
}
@Override
public long getLastAccessedTime() {
return lastAccessedTime;
}
@Override
public int getMaxInactiveInterval() {
return SingleSessionManager.getInstance().getSessionTimeout();
}
@Override
public ServletContext getServletContext() {
return originalSession.getServletContext();
}
@Override
public HttpSessionContext getSessionContext() {
return new HttpSessionContext() {
@Override
public Enumeration<String> getIds() {
return Collections.enumeration(new HashSet<String>());
}
@Override
public HttpSession getSession(String arg0) {
return null;
}
};
}
@Override
public Object getValue(String arg0) {
return values.get(arg0);
}
@Override
public String[] getValueNames() {
return values.keySet().toArray(new String[values.size()]);
}
@Override
public void invalidate() {
SingleSessionManager.getInstance().destroySession(this);
}
@Override
public boolean isNew() {
return isNew;
}
@Override
public void putValue(String arg0, Object arg1) {
values.put(arg0, arg1);
}
@Override
public void removeAttribute(String arg0) {
attributes.remove(arg0);
}
@Override
public void removeValue(String arg0) {
values.remove(arg0);
}
@Override
public void setAttribute(String arg0, Object arg1) {
attributes.put(arg0, arg1);
}
@Override
public void setMaxInactiveInterval(int arg0) {
SingleSessionManager.getInstance().setSessionTimeout(arg0);
}
public HttpSession getOriginalSession() {
return originalSession;
}
}
The SessionInvalidator is an HttpSessionListener that takes care of cleaning up all subsessions when their parent session is invalidated.
package session;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;
/**
* Session listener that listens for the destruction of a container managed session and takes care
* of destroying all its subsessions.
* <p>
* Normally this listener won't have much to do since subsessions usually have a shorter lifetime
* than their parent session and therefore will timeout long before this method is called. This
* listener will only be important in case of an explicit invalidation of a parent session.
* </p>
*
*/
public class SessionInvalidator implements HttpSessionListener {
@Override
public void sessionCreated(HttpSessionEvent arg0) {
}
@Override
public void sessionDestroyed(HttpSessionEvent arg0) {
SingleSessionManager.getInstance().clearAllSessions(arg0.getSession());
}
}
Enable everything by putting the following in your web.xml:
<filter>
<filter-name>SingleSessionFilter</filter-name>
<filter-class>session.SingleSessionManager</filter-class>
<!-- The timeout in minutes after which a subsession will be invalidated. It is recommended to set a session timeout for the servlet container (via the "session-timeout" parameter) that is higher than this value. -->
<init-param>
<param-name>sessionTimeout</param-name>
<param-value>1</param-value>
</init-param>
<init-param>
<!-- The interval in seconds at which a check for expired sessions will be performed. -->
<param-name>sessionInvalidationCheck</param-name>
<param-value>15</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>SingleSessionFilter</filter-name>
<!-- Insert the name of your servlet here to which the session management should apply, or use url-pattern instead. -->
<servlet-name>YourServlet</servlet-name>
</filter-mapping>
<listener>
<listener-class>session.SessionInvalidator</listener-class>
</listener>
<!-- Timeout of the parent session -->
<session-config>
<session-timeout>40</session-timeout>
<!-- Session timeout interval in minutes -->
</session-config>
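With the filter and listener in place, a servlet behind the filter uses request.getSession() as usual and transparently receives the subsession for the uiid of the current request. A rough usage sketch (YourServlet and the "hits" attribute are just placeholders):
public class YourServlet extends javax.servlet.http.HttpServlet {
    @Override
    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse resp) throws java.io.IOException {
        // Behind the filter this is an HttpSessionWrapper, not the container session.
        javax.servlet.http.HttpSession subSession = req.getSession();
        Integer hits = (Integer) subSession.getAttribute("hits");
        hits = (hits == null) ? 1 : hits + 1;
        subSession.setAttribute("hits", hits);
        resp.getWriter().println("uiid " + req.getParameter("uiid") + " -> " + hits + " hits");
    }
}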
I think you're looking for something like Apache Tomcat, which manages separate sessions for individual web applications.
The session is unique for a combination of user and web application. You can of course deploy your servlet in several web applications on the same Tomcat instance, but you will not be able to route the HTTP request to different web applications based on URL parameters alone, unless you evaluate the URL parameters in a second servlet and redirect the browser to a new URL for the specific web app (a sketch of such a dispatcher follows the examples below).
Different servlet containers or J2EE app servers may have different options for routing requests to specific web applications, but AFAIK out of the box, Tomcat can only delegate the request based on either host name or base directory, e.g.:
http://app1/... or http://server/app1/... is delegated to app1
http://app2/... or http://server/app2/... is delegated to app2, and so on
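That second servlet could be as simple as reading a request parameter and redirecting to the matching context path, roughly like this (a sketch; the site parameter name and the app1/app2 context names are assumptions):
public class SiteDispatcherServlet extends javax.servlet.http.HttpServlet {
    @Override
    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse resp) throws java.io.IOException {
        String site = req.getParameter("site");
        if ("app1".equals(site) || "app2".equals(site)) {
            // Each web application gets its own session, even for the same browser.
            resp.sendRedirect("/" + site + "/");
        } else {
            resp.sendError(javax.servlet.http.HttpServletResponse.SC_BAD_REQUEST, "Unknown site");
        }
    }
}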
Here is a bug fix for user3792852's reply:
public HttpSession getSession(String uiid, boolean create, HttpSession originalSession)
{
if (uiid != null && originalSession != null)
{
SubSessionKey key = new SubSessionKey(originalSession.getId(), uiid);
synchronized (sessions)
{
HttpSessionWrapper session = sessions.get(key);
if (session == null && create)
{
session = new HttpSessionWrapper(uiid, originalSession);
sessions.put(key, session);
}
if (session != null)
{
session.setLastAccessedTime(System.currentTimeMillis());
}
return session;
}
}
return null;
}
I have to solve the following scenario in a Spring Security 3.2.5-RELEASE with Spring Core 4.1.2-RELEASE application running Java 1.7 on WildFly 8.1:
user 'bob' logs in
an admin deletes 'bob'
if 'bob' logs out, he can't log in again, but his current session remains active
I want to kick 'bob' out
//this doesn't work
for (final SessionInformation session : sessionRegistry.getAllSessions(user, true)) {
session.expireNow();
}
Add an application event listener to track HttpSessionCreatedEvent and HttpSessionDestroyedEvent, register it as an ApplicationListener, and maintain a cache of session ID to HttpSession.
(optional) Add your own ApplicationEvent class, AskToExpireSessionEvent.
In your user management service, add dependencies on SessionRegistry and ApplicationEventPublisher so that you can iterate over the currently active sessions and find the ones (there could be many) that belong to the user you are looking for, i.e. 'bob'.
When deleting a user, dispatch an AskToExpireSessionEvent for each of his sessions.
Use a weak-reference HashMap to track the sessions.
user service:
@Service
public class UserServiceImpl implements UserService {
/** {@link SessionRegistry} does not exist in unit tests */
@Autowired(required = false)
private Set<SessionRegistry> sessionRegistries;
@Autowired
private ApplicationEventPublisher publisher;
/**
* Destroys all active sessions of the given user.
* @return <code>true</code> if any session was invalidated
* @throws IllegalArgumentException
*/
@Override
public boolean invalidateUserByUserName(final String userName) {
if(null == StringUtils.trimToNull(userName)) {
throw new IllegalArgumentException("userName must not be null or empty");
}
boolean expieredAtLeastOneSession = false;
for (final SessionRegistry sessionRegistry : safe(sessionRegistries)) {
findPrincipal: for (final Object principal : sessionRegistry.getAllPrincipals()) {
if(principal instanceof IAuthenticatedUser) {
final IAuthenticatedUser user = (IAuthenticatedUser) principal;
if(userName.equals(user.getUsername())) {
for (final SessionInformation session : sessionRegistry.getAllSessions(user, true)) {
session.expireNow();
sessionRegistry.removeSessionInformation(session.getSessionId());
publisher.publishEvent(AskToExpireSessionEvent.of(session.getSessionId()));
expieredAtLeastOneSession = true;
}
break findPrincipal;
}
} else {
logger.warn("encountered a session for a none user object {} while invalidating '{}' " , principal, userName);
}
}
}
return expieredAtLeastOneSession;
}
}
Application event:
import org.springframework.context.ApplicationEvent;
public class AskToExpireSessionEvent extends ApplicationEvent {
private static final long serialVersionUID = -1915691753338712193L;
public AskToExpireSessionEvent(final Object source) {
super(source);
}
@Override
public String getSource() {
return (String)super.getSource();
}
public static AskToExpireSessionEvent of(final String sessionId) {
return new AskToExpireSessionEvent(sessionId);
}
}
HTTP session caching listener:
import java.util.Map;
import java.util.WeakHashMap;
import javax.servlet.http.HttpSession;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.security.web.session.HttpSessionCreatedEvent;
import org.springframework.security.web.session.HttpSessionDestroyedEvent;
import org.springframework.stereotype.Component;
import com.cb4.base.service.event.AskToExpireSessionEvent;
@Component
public class HttpSessionCachingListener {
private static final Logger logger = LoggerFactory.getLogger(HttpSessionCachingListener.class);
private final Map<String, HttpSession> sessionCache = new WeakHashMap<>();
void onHttpSessionCreatedEvent(final HttpSessionCreatedEvent event){
if (event != null && event.getSession() != null && event.getSession().getId() != null) {
sessionCache.put(event.getSession().getId(), event.getSession());
}
}
void onHttpSessionDestroyedEvent(final HttpSessionDestroyedEvent event){
if (event != null && event.getSession() != null && event.getSession().getId() != null){
sessionCache.remove(event.getSession().getId());
}
}
public void timeOutSession(final String sessionId){
if(sessionId != null){
final HttpSession httpSession = sessionCache.get(sessionId);
if(null != httpSession){
logger.debug("invalidating session {} in 1 second", sessionId);
httpSession.setMaxInactiveInterval(1);
}
}
}
@Component
static class HttpSessionCreatedListener implements ApplicationListener<HttpSessionCreatedEvent> {
@Autowired
HttpSessionCachingListener parent;
@Override
public void onApplicationEvent(final HttpSessionCreatedEvent event) {
parent.onHttpSessionCreatedEvent(event);
}
}
@Component
static class HttpSessionDestroyedListener implements ApplicationListener<HttpSessionDestroyedEvent> {
@Autowired
HttpSessionCachingListener parent;
@Override
public void onApplicationEvent(final HttpSessionDestroyedEvent event) {
parent.onHttpSessionDestroyedEvent(event);
}
}
@Component
static class AskToTimeOutSessionListener implements ApplicationListener<AskToExpireSessionEvent> {
@Autowired
HttpSessionCachingListener parent;
@Override
public void onApplicationEvent(final AskToExpireSessionEvent event) {
if(event != null){
parent.timeOutSession(event.getSource());
}
}
}
}
Using Java config, add the following code in your class extending WebSecurityConfigurerAdapter:
@Bean
public SessionRegistry sessionRegistry( ) {
SessionRegistry sessionRegistry = new SessionRegistryImpl( );
return sessionRegistry;
}
@Bean
public RegisterSessionAuthenticationStrategy registerSessionAuthStr( ) {
return new RegisterSessionAuthenticationStrategy( sessionRegistry( ) );
}
and add the following in your configure( HttpSecurity http ) method:
http.sessionManagement( ).maximumSessions( -1 ).sessionRegistry( sessionRegistry( ) );
http.sessionManagement( ).sessionFixation( ).migrateSession( )
.sessionAuthenticationStrategy( registerSessionAuthStr( ) );
Also, set the RegisterSessionAuthenticationStrategy in your custom authentication bean as follows:
usernamePasswordAuthenticationFilter
.setSessionAuthenticationStrategy( registerSessionAuthStr( ) );
NOTE: Setting the RegisterSessionAuthenticationStrategy in your custom authentication bean causes the principal list to be populated, and hence when you try to fetch the list of all principals from the sessionRegistry (sessionRegistry.getAllPrincipals()), the list is NOT empty.
I have saved a persistent anchor (for 365 days) on the cloud. Now I want to retrieve it. I can do that just fine using the code Google provided in one of its sample projects. However, I want to use Sceneform, since I want to do some manipulations afterwards (drawing 3D shapes) that are much easier to do in Sceneform. However, I can't seem to resolve the persistent cloud anchors. All the examples I find online don't deal with persistent cloud anchors; they only deal with the normal 24-hour cloud anchors.
@RequiresApi(api = VERSION_CODES.N)
protected void onUpdateFrame(FrameTime frameTime) {
Frame frame = arFragment.getArSceneView().getArFrame();
// If there is no frame, just return.
if (frame == null) {
return;
}
if (session == null) {
Log.d(TAG, "setup a session once");
session = arFragment.getArSceneView().getSession();
cloudAnchorManager = new CloudAnchorManager(session);
}
if (resolveListener == null && session != null) {
Log.d(TAG, "setup a resolveListener once");
resolveListener = new MemexViewingActivity.ResolveListener();
// Encourage the user to look at a previously mapped area.
if (cloudAnchorId != null && !gotGoodAnchor && cloudAnchorManager != null) {
Log.d(TAG, "put resolveListener on cloud manager once");
userMessageText.setText(R.string.resolving_processing);
cloudAnchorManager.resolveCloudAnchor(cloudAnchorId, resolveListener);
}
}
if (cloudAnchorManager != null && session != null) {
try {
Frame dummy = session.update();
cloudAnchorManager.onUpdate();
} catch (CameraNotAvailableException e) {
e.printStackTrace();
}
}
}
Is there anything wrong in the above update function that I have written? The CloudAnchorManager class is the same one Google uses in its Persistent Cloud Anchor example. Here, I will put its code too:
package com.memex.eu.helpers;
import android.util.Log;
import com.google.ar.core.Anchor;
import com.google.ar.core.Anchor.CloudAnchorState;
import com.google.ar.core.Session;
import com.google.common.base.Preconditions;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
/**
* A helper class to handle all the Cloud Anchors logic, and add a callback-like mechanism on top of
* the existing ARCore API.
*/
public class CloudAnchorManager {
/** Listener for the results of a host operation. */
public interface CloudAnchorListener {
/** This method is invoked when the results of a Cloud Anchor operation are available. */
void onComplete(Anchor anchor);
}
private final Session session;
private final Map<Anchor, CloudAnchorListener> pendingAnchors = new HashMap<>();
public CloudAnchorManager(Session session) {
this.session = Preconditions.checkNotNull(session);
}
/** Hosts an anchor. The {@code listener} will be invoked when the results are available. */
public synchronized void hostCloudAnchor(Anchor anchor, CloudAnchorListener listener) {
Preconditions.checkNotNull(listener, "The listener cannot be null.");
// This is configurable up to 365 days.
Anchor newAnchor = session.hostCloudAnchorWithTtl(anchor, /* ttlDays= */ 365);
pendingAnchors.put(newAnchor, listener);
}
/** Resolves an anchor. The {@code listener} will be invoked when the results are available. */
public synchronized void resolveCloudAnchor(String anchorId, CloudAnchorListener listener) {
Preconditions.checkNotNull(listener, "The listener cannot be null.");
Anchor newAnchor = session.resolveCloudAnchor(anchorId);
pendingAnchors.put(newAnchor, listener);
}
/** Should be called after a {@link Session#update()} call. */
public synchronized void onUpdate() {
Preconditions.checkNotNull(session, "The session cannot be null.");
for (Iterator<Map.Entry<Anchor, CloudAnchorListener>> it = pendingAnchors.entrySet().iterator();
it.hasNext(); ) {
Map.Entry<Anchor, CloudAnchorListener> entry = it.next();
Anchor anchor = entry.getKey();
if (isReturnableState(anchor.getCloudAnchorState())) {
CloudAnchorListener listener = entry.getValue();
listener.onComplete(anchor);
it.remove();
}
}
}
/** Clears any currently registered listeners, so they won't be called again. */
synchronized void clearListeners() {
pendingAnchors.clear();
}
private static boolean isReturnableState(CloudAnchorState cloudState) {
switch (cloudState) {
case NONE:
case TASK_IN_PROGRESS:
return false;
default:
return true;
}
}
}
Also, here is another class I am using (this is also from the Google example project):
/* Listens for a resolved anchor. */
private final class ResolveListener implements CloudAnchorManager.CloudAnchorListener {
@Override
public void onComplete(Anchor resolvedAnchor) {
runOnUiThread(
() -> {
Anchor.CloudAnchorState state = resolvedAnchor.getCloudAnchorState();
if (state.isError()) {
Log.e(TAG, "Error resolving a cloud anchor, state " + state);
userMessageText.setText(getString(R.string.resolving_error, state));
return;
}
Log.e(TAG, "cloud anchor successfully resolved, state " + state);
anchor = resolvedAnchor;
userMessageText.setText(getString(R.string.resolving_success));
gotGoodAnchor = true;
});
}
}
When I run my app, I point the phone's camera at the physical space where I previously put an object, but the anchor is never resolved. I think the problem might be in the update function, but I can't seem to figure out what it is.
I guess I wasn't looking at the object properly. Now, it's working. This code is correct.
I am implementing an upload feature using Grails where basically a user gets to upload a text file and the system then persists each line of that text file as a database record. While the uploading works fine, larger files take time to process, so a progress bar was requested so that users can determine whether their upload is still processing or an actual error has occurred.
To do this, I created two URLs:
/upload which is the actual URL that receives the uploaded text file.
/upload/status?uploadToken= which returns the status of a certain upload based on its uploadToken.
After processing each line, the service updates a session-level counter variable:
// import ...
class UploadService {
Map upload(CommonsMultipartFile record, GrailsParameterMap params) {
Map response = [success: true]
try {
File file = new File(record.getOriginalFilename())
FileUtils.writeByteArrayToFile(file, record.getBytes())
HttpSession session = WebUtils.retrieveGrailsWebRequest().session
List<String> lines = FileUtils.readLines(file, "UTF-8"), errors = []
String uploadToken = params.uploadToken
session.status.put(uploadToken,
[message: "Checking content of the file of errors.",
size: lines.size(),
done: 0])
lines.eachWithIndex { l, li ->
// ... regex checking per line and appending any error to the errors List
session.status.get(uploadToken).done++
}
if(errors.size() == 0) {
session.status.put(uploadToken,
[message: "Persisting record to the database.",
size: lines.size(),
done: 0])
lines.eachWithIndex { l, li ->
// ... Performs GORM manipulation here
session.status.get(uploadToken).done++
}
}
else {
response.success = false
}
}
catch(Exception e) {
response.success = false
}
response << [errors: errors]
return response
}
}
Then I created a simple WebSocket implementation that connects to the /upload/status?uploadToken= URL. The problem is that I cannot access the session variable in POGOs. I even changed that POGO into a Grails service because I thought that was the cause of the issue, but I still can't access the session variable.
// import ...
#ServerEndpoint("/upload/status")
#WebListener
class UploadEndpointService implements ServletContextListener {
#OnOpen
public void onOpen(Session userSession) { /* ... */ }
#OnClose
public void onClose(Session userSession, CloseReason closeReason) { /* ... */ }
#OnError
public void onError(Throwable t) { /* ... */ }
#OnMessage
public void onMessage(String token, Session userSession) {
// Both of these cause IllegalStateException
def session = WebUtils.retrieveGrailsWebRequest().session
def session = RequestContextHolder.currentRequestAttributes().getSession()
// This returns the session id but I don't know what to do with that information.
String sessionId = userSession.getHttpSessionId()
// Sends the upload status through this line
sendMessage((session.get(token) as JSON).toString(), userSession)
}
private void sendMessage(String message, Session userSession = null) {
Iterator<Session> iterator = users.iterator()
while(iterator.hasNext()) {
iterator.next().basicRemote.sendText(message)
}
}
}
And instead, it gives me this error:
Caused by IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case,
use RequestContextListener or RequestContextFilter to expose the current request.
I already verified that the web socket is working by making it send static String content. But what I want is to be able to get that counter and send it as the message. I'm using Grails 2.4.4, and the Grails Spring Websocket plugin, while it looks promising, is only available from Grails 3 onwards. Is there any way to achieve this, or if not, what approach should I use?
Much thanks to the answer to this question, which helped me greatly in solving my problem.
I modified my UploadEndpointService to match the one in that answer, and instead of making it a service class I reverted it back into a POGO. I also configured its @ServerEndpoint annotation and added a configurator value, and I added a second parameter to the onOpen() method. Here is the edited class:
import grails.converters.JSON
import grails.util.Environment
import javax.servlet.annotation.WebListener
import javax.servlet.http.HttpSession
import javax.servlet.ServletContext
import javax.servlet.ServletContextEvent
import javax.servlet.ServletContextListener
import javax.websocket.CloseReason
import javax.websocket.EndpointConfig
import javax.websocket.OnClose
import javax.websocket.OnError
import javax.websocket.OnMessage
import javax.websocket.OnOpen
import javax.websocket.server.ServerContainer
import javax.websocket.server.ServerEndpoint
import javax.websocket.Session
import org.apache.log4j.Logger
import org.codehaus.groovy.grails.commons.GrailsApplication
import org.codehaus.groovy.grails.web.json.JSONObject
import org.codehaus.groovy.grails.web.servlet.GrailsApplicationAttributes
import org.springframework.context.ApplicationContext
@ServerEndpoint(value="/ep/maintenance/attendance-monitoring/upload/status", configurator=GetHttpSessionConfigurator.class)
@WebListener
class UploadEndpoint implements ServletContextListener {
private static final Logger log = Logger.getLogger(UploadEndpoint.class)
private Session wsSession
private HttpSession httpSession
@Override
void contextInitialized(ServletContextEvent servletContextEvent) {
ServletContext servletContext = servletContextEvent.servletContext
ServerContainer serverContainer = servletContext.getAttribute("javax.websocket.server.ServerContainer")
try {
if (Environment.current == Environment.DEVELOPMENT) {
serverContainer.addEndpoint(UploadEndpoint)
}
ApplicationContext ctx = (ApplicationContext) servletContext.getAttribute(GrailsApplicationAttributes.APPLICATION_CONTEXT)
GrailsApplication grailsApplication = ctx.grailsApplication
serverContainer.defaultMaxSessionIdleTimeout = grailsApplication.config.servlet.defaultMaxSessionIdleTimeout ?: 0
} catch (IOException e) {
log.error(e.message, e)
}
}
@Override
void contextDestroyed(ServletContextEvent servletContextEvent) {
}
@OnOpen
public void onOpen(Session userSession, EndpointConfig config) {
this.wsSession = userSession
this.httpSession = (HttpSession) config.getUserProperties().get(HttpSession.class.getName())
}
@OnMessage
public void onMessage(String message, Session userSession) {
try {
Map params = new JSONObject(message)
if(httpSession.status == null) {
params = [message: "Initializing file upload.",
size: 0,
token: 0]
sendMessage((params as JSON).toString())
}
else {
sendMessage((httpSession.status.get(params.token) as JSON).toString())
}
}
catch(IllegalStateException e) {
}
}
@OnClose
public void onClose(Session userSession, CloseReason closeReason) {
try {
userSession.close()
}
catch(IllegalStateException e) {
}
}
@OnError
public void onError(Throwable t) {
log.error(t.message, t)
}
private void sendMessage(String message, Session userSession=null) {
wsSession.basicRemote.sendText(message)
}
}
The real magic happens within the onOpen() method. That is where the HTTP session is retrieved from the endpoint configuration.
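The GetHttpSessionConfigurator referenced in the @ServerEndpoint annotation is not shown above; a minimal sketch of such a configurator, which copies the HttpSession into the endpoint's user properties during the WebSocket handshake, could look like this:
import javax.servlet.http.HttpSession;
import javax.websocket.HandshakeResponse;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpointConfig;
public class GetHttpSessionConfigurator extends ServerEndpointConfig.Configurator {
    @Override
    public void modifyHandshake(ServerEndpointConfig config, HandshakeRequest request, HandshakeResponse response) {
        // Make the HTTP session available to onOpen() via config.getUserProperties().
        HttpSession httpSession = (HttpSession) request.getHttpSession();
        config.getUserProperties().put(HttpSession.class.getName(), httpSession);
    }
}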
I need to make a persistent and user-specific session counter. I made this:
package my.package;
import javax.servlet.http.HttpSessionListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSession;
import java.util.HashMap;
public class SessionCounter implements HttpSessionListener {
private static HashMap activeSessions;
public SessionCounter() {
//How to restore session count?
activeSessions = new HashMap();
}
public void sessionCreated(HttpSessionEvent se) {
HttpSession session = se.getSession();
String userName = (String) session.getAttribute("username");
Integer count = (Integer) activeSessions.get(userName);
if (count != null) {
activeSessions.put(userName, Integer.valueOf(count.intValue() + 1));
} else {
activeSessions.put(userName, new Integer(1));
}
}
public void sessionDestroyed(HttpSessionEvent se) {
HttpSession session = se.getSession();
String userName = (String) session.getAttribute("username");
Integer count = (Integer) activeSessions.get(userName);
if (count != null && count.intValue() > 0) {
activeSessions.put(userName, Integer.valueOf(count.intValue() - 1));
}
}
public static HashMap getActiveSessions() {
return activeSessions;
}
}
Sessions are still active even after restarting Tomcat, but the session count stored in my activeSessions variable is lost. How can I restore the session count after a restart?
When Tomcat is shut down (i.e., by the shutdown script and not by killing the process), all sessions are serialized and restored when it is started the next time.
An HttpSessionListener will always be recreated, therefore your HashMap gets newly instantiated and the information is lost. You will have to write a model that implements Serializable, holds your data, and is stored to disk when Tomcat is shut down.
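A minimal sketch of that idea: keep the counts in a serializable map, load it when the context starts, and write it back when the context is destroyed. The listener below assumes you add a static setActiveSessions() setter next to the existing getActiveSessions(); the file name and location are arbitrary.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
public class SessionCounterStore implements ServletContextListener {
    // Arbitrary location; anything writable and stable across restarts will do.
    private static final File STORE = new File(System.getProperty("java.io.tmpdir"), "session-counter.ser");
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        if (STORE.exists()) {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(STORE))) {
                // setActiveSessions() is the static setter you would add to SessionCounter.
                SessionCounter.setActiveSessions((HashMap) in.readObject());
            } catch (IOException | ClassNotFoundException e) {
                // Corrupt or missing store: start counting from scratch.
            }
        }
    }
    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(STORE))) {
            out.writeObject(SessionCounter.getActiveSessions()); // HashMap is Serializable
        } catch (IOException e) {
            // Nothing sensible left to do at shutdown.
        }
    }
}
Register it as an additional <listener> in web.xml alongside SessionCounter.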
In Tomcat 5.0.x you had the ability to set useDirtyFlag="false" to force replication of the session after every request rather than checking for set/removeAttribute calls.
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
managerClassName="org.apache.catalina.cluster.session.SimpleTcpReplicationManager"
expireSessionsOnShutdown="false"
useDirtyFlag="false"
doClusterLog="true"
clusterLogName="clusterLog"> ...
The comments in the server.xml stated this may be used to make the following work:
<%
HashMap map = (HashMap)session.getAttribute("map");
map.put("key","value");
%>
I.e., change the state of an object that has already been put in the session, and you can be sure that this object will still be replicated to the other nodes in the cluster.
According to the Tomcat 6 documentation you only have two "Manager" options - DeltaManager & BackupManager ... neither of these seems to allow this option or anything like it. In my testing the default setup:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
where you get the DeltaManager by default, it's definitely behaving as useDirtyFlag="true" (as I'd expect).
So my question is - is there an equivalent in Tomcat 6?
Looking at the source I can see a manager implementation "org.apache.catalina.ha.session.SimpleTcpReplicationManager" which does have the useDirtyFlag, but the javadoc comments in it state it is "Tomcat Session Replication for Tomcat 4.0" ... I don't know if this is OK to use - I'm guessing not, as it's not mentioned in the main cluster configuration documentation.
I posted essentially the same question on the tomcat-users mailing list, and the responses to this along with some information in the Tomcat Bugzilla (bug 43866) led me to the following conclusions:
There is no equivalent to the useDirtyFlag; if you're putting mutable (i.e. changing) objects in the session, you need a custom-coded solution.
A Tomcat ClusterValve seems to be an effective place for this solution - plug into the cluster mechanism and manipulate attributes to make it appear to the DeltaManager that all attributes in the session have changed. This forces replication of the entire session.
Step 1: Write the ForceReplicationValve (extends ValveBase implements ClusterValve)
I won't include the whole class but the key bit of logic (taking out the logging and instanceof checking):
@Override
public void invoke(Request request, Response response)
throws IOException, ServletException {
getNext().invoke(request, response);
Session session = request.getSessionInternal();
HttpSession deltaSession = (HttpSession) session;
for (Enumeration<String> names = deltaSession.getAttributeNames();
names.hasMoreElements(); ) {
String name = names.nextElement();
deltaSession.setAttribute(name, deltaSession.getAttribute(name));
}
}
Step 2: Alter the cluster config (in conf/server.xml)
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Valve className="org.apache.catalina.ha.tcp.ForceReplicationValve"/>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif;.*\.jpg;.*\.png;.*\.js;.*\.htm;.*\.html;.*\.txt;.*\.css;"/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Replication of the session to all cluster nodes will now happen after every request.
Aside: Note the channelSendOptions setting. This replaces the replicationMode=asynchronous/synchronous/pooled from Tomcat 5.0.x. See the cluster documentation for the possible int values.
Appendix: Full Valve source as requested
package org.apache.catalina.ha.tcp;
import java.io.IOException;
import java.util.Enumeration;
import java.util.LinkedList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpSession;
import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleException;
import org.apache.catalina.LifecycleListener;
import org.apache.catalina.Session;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.ha.CatalinaCluster;
import org.apache.catalina.ha.ClusterValve;
import org.apache.catalina.ha.session.ReplicatedSession;
import org.apache.catalina.ha.session.SimpleTcpReplicationManager;
import org.apache.catalina.util.LifecycleSupport;
//import org.apache.catalina.util.StringManager;
import org.apache.catalina.valves.ValveBase;
/**
* <p>With the {@link SimpleTcpReplicationManager} effectively deprecated, this allows
* mutable objects to be replicated in the cluster by forcing the "dirty" status on
* every request.</p>
*
* @author Jon Brisbin (via post on tomcat-users http://markmail.org/thread/rdo3drcir75dzzrq)
* @author Kevin Jansz
*/
public class ForceReplicationValve extends ValveBase implements Lifecycle, ClusterValve {
private static org.apache.juli.logging.Log log =
org.apache.juli.logging.LogFactory.getLog( ForceReplicationValve.class );
@SuppressWarnings("hiding")
protected static final String info = "org.apache.catalina.ha.tcp.ForceReplicationValve/1.0";
// this could be used if ForceReplicationValve messages were setup
// in org/apache/catalina/ha/tcp/LocalStrings.properties
//
// /**
// * The StringManager for this package.
// */
// @SuppressWarnings("hiding")
// protected static StringManager sm =
// StringManager.getManager(Constants.Package);
/**
* Not actually required but this must implement {@link ClusterValve} to
* be allowed to be added to the Cluster.
*/
private CatalinaCluster cluster = null ;
/**
* Also not really required, implementing {@link Lifecycle} to allow
* initialisation and shutdown to be logged.
*/
protected LifecycleSupport lifecycle = new LifecycleSupport(this);
/**
* Default constructor
*/
public ForceReplicationValve() {
super();
if (log.isInfoEnabled()) {
log.info(getInfo() + ": created");
}
}
@Override
public String getInfo() {
return info;
}
@Override
public void invoke(Request request, Response response) throws IOException,
ServletException {
getNext().invoke(request, response);
Session session = null;
try {
session = request.getSessionInternal();
} catch (Throwable e) {
log.error(getInfo() + ": Unable to perform replication request.", e);
}
String context = request.getContext().getName();
String task = request.getPathInfo();
if(task == null) {
task = request.getRequestURI();
}
if (session != null) {
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", instanceof=" + session.getClass().getName() + ", context=" + context + ", request=" + task + "]");
}
if (session instanceof ReplicatedSession) {
// it's a SimpleTcpReplicationManager - can just set to dirty
((ReplicatedSession) session).setIsDirty(true);
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", context=" + context + ", request=" + task + "] maked DIRTY");
}
} else {
// for everything else - cycle all attributes
List cycledNames = new LinkedList();
// in a cluster where the app is <distributable/> this should be
// org.apache.catalina.ha.session.DeltaSession - implements HttpSession
HttpSession deltaSession = (HttpSession) session;
for (Enumeration<String> names = deltaSession.getAttributeNames(); names.hasMoreElements(); ) {
String name = names.nextElement();
deltaSession.setAttribute(name, deltaSession.getAttribute(name));
cycledNames.add(name);
}
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", context=" + context + ", request=" + task + "] cycled atrributes=" + cycledNames + "");
}
}
} else {
String id = request.getRequestedSessionId();
log.warn(getInfo() + ": [session=" + id + ", context=" + context + ", request=" + task + "] Session not available, unable to send session over cluster.");
}
}
/*
* ClusterValve methods - implemented to ensure this valve is not ignored by Cluster
*/
public CatalinaCluster getCluster() {
return cluster;
}
public void setCluster(CatalinaCluster cluster) {
this.cluster = cluster;
}
/*
* Lifecycle methods - currently implemented just for logging startup
*/
/**
* Add a lifecycle event listener to this component.
*
* @param listener The listener to add
*/
public void addLifecycleListener(LifecycleListener listener) {
lifecycle.addLifecycleListener(listener);
}
/**
* Get the lifecycle listeners associated with this lifecycle. If this
* Lifecycle has no listeners registered, a zero-length array is returned.
*/
public LifecycleListener[] findLifecycleListeners() {
return lifecycle.findLifecycleListeners();
}
/**
* Remove a lifecycle event listener from this component.
*
* @param listener The listener to remove
*/
public void removeLifecycleListener(LifecycleListener listener) {
lifecycle.removeLifecycleListener(listener);
}
public void start() throws LifecycleException {
lifecycle.fireLifecycleEvent(START_EVENT, null);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": started");
}
}
public void stop() throws LifecycleException {
lifecycle.fireLifecycleEvent(STOP_EVENT, null);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": stopped");
}
}
}
Many thanks to kevinjansz for providing the source for ForceReplicationValve.
I adjusted it for Tomcat 7; here it is if anyone needs it:
package org.apache.catalina.ha.tcp;
import java.io.IOException;
import java.util.Enumeration;
import java.util.LinkedList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpSession;
import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleException;
import org.apache.catalina.LifecycleListener;
import org.apache.catalina.Session;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.ha.CatalinaCluster;
import org.apache.catalina.ha.ClusterValve;
import org.apache.catalina.util.LifecycleSupport;
import org.apache.catalina.valves.ValveBase;
import org.apache.catalina.LifecycleState;
// import org.apache.tomcat.util.res.StringManager;
/**
* <p>With the {@link SimpleTcpReplicationManager} effectively deprecated, this allows
* mutable objects to be replicated in the cluster by forcing the "dirty" status on
* every request.</p>
*
* @author Jon Brisbin (via post on tomcat-users http://markmail.org/thread/rdo3drcir75dzzrq)
* @author Kevin Jansz
*/
public class ForceReplicationValve extends ValveBase implements Lifecycle, ClusterValve {
private static org.apache.juli.logging.Log log =
org.apache.juli.logging.LogFactory.getLog( ForceReplicationValve.class );
@SuppressWarnings("hiding")
protected static final String info = "org.apache.catalina.ha.tcp.ForceReplicationValve/1.0";
// this could be used if ForceReplicationValve messages were setup
// in org/apache/catalina/ha/tcp/LocalStrings.properties
//
// /**
// * The StringManager for this package.
// */
// @SuppressWarnings("hiding")
// protected static StringManager sm =
// StringManager.getManager(Constants.Package);
/**
* Not actually required but this must implement {@link ClusterValve} to
* be allowed to be added to the Cluster.
*/
private CatalinaCluster cluster = null;
/**
* Also not really required, implementing {@link Lifecycle} to allow
* initialisation and shutdown to be logged.
*/
protected LifecycleSupport lifecycle = new LifecycleSupport(this);
/**
* Default constructor
*/
public ForceReplicationValve() {
super();
if (log.isInfoEnabled()) {
log.info(getInfo() + ": created");
}
}
@Override
public String getInfo() {
return info;
}
@Override
public void invoke(Request request, Response response) throws IOException,
ServletException {
getNext().invoke(request, response);
Session session = null;
try {
session = request.getSessionInternal();
} catch (Throwable e) {
log.error(getInfo() + ": Unable to perform replication request.", e);
}
String context = request.getContext().getName();
String task = request.getPathInfo();
if(task == null) {
task = request.getRequestURI();
}
if (session != null) {
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", instanceof=" + session.getClass().getName() + ", context=" + context + ", request=" + task + "]");
}
//cycle all attributes
List<String> cycledNames = new LinkedList<String>();
// in a cluster where the app is <distributable/> this should be
// org.apache.catalina.ha.session.DeltaSession - implements HttpSession
HttpSession deltaSession = (HttpSession) session;
for (Enumeration<String> names = deltaSession.getAttributeNames(); names.hasMoreElements(); ) {
String name = names.nextElement();
deltaSession.setAttribute(name, deltaSession.getAttribute(name));
cycledNames.add(name);
}
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", context=" + context + ", request=" + task + "] cycled atrributes=" + cycledNames + "");
}
} else {
String id = request.getRequestedSessionId();
log.warn(getInfo() + ": [session=" + id + ", context=" + context + ", request=" + task + "] Session not available, unable to send session over cluster.");
}
}
/*
* ClusterValve methods - implemented to ensure this valve is not ignored by Cluster
*/
public CatalinaCluster getCluster() {
return cluster;
}
public void setCluster(CatalinaCluster cluster) {
this.cluster = cluster;
}
/*
* Lifecycle methods - currently implemented just for logging startup
*/
/**
* Add a lifecycle event listener to this component.
*
* @param listener The listener to add
*/
public void addLifecycleListener(LifecycleListener listener) {
lifecycle.addLifecycleListener(listener);
}
/**
* Get the lifecycle listeners associated with this lifecycle. If this
* Lifecycle has no listeners registered, a zero-length array is returned.
*/
public LifecycleListener[] findLifecycleListeners() {
return lifecycle.findLifecycleListeners();
}
/**
* Remove a lifecycle event listener from this component.
*
* @param listener The listener to remove
*/
public void removeLifecycleListener(LifecycleListener listener) {
lifecycle.removeLifecycleListener(listener);
}
protected synchronized void startInternal() throws LifecycleException {
setState(LifecycleState.STARTING);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": started");
}
}
protected synchronized void stopInternal() throws LifecycleException {
setState(LifecycleState.STOPPING);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": stopped");
}
}
}