How to use persistent anchors with Sceneform? - Java

I have saved a persistent anchor (for 365 days) on the cloud. Now I want to retrieve it. I can do that just fine using the code Google provided in one of its sample projects. However, I want to use Sceneform, since the manipulations I want to do afterwards (drawing 3D shapes) are much easier in Sceneform. The problem is that I can't seem to resolve the persistent cloud anchors. All the examples I find online don't deal with persistent cloud anchors; they only cover the normal 24-hour cloud anchors.
@RequiresApi(api = VERSION_CODES.N)
protected void onUpdateFrame(FrameTime frameTime) {
Frame frame = arFragment.getArSceneView().getArFrame();
// If there is no frame, just return.
if (frame == null) {
return;
}
if (session == null) {
Log.d(TAG, "setup a session once");
session = arFragment.getArSceneView().getSession();
cloudAnchorManager = new CloudAnchorManager(session);
}
if (resolveListener == null && session != null) {
Log.d(TAG, "setup a resolveListener once");
resolveListener = new MemexViewingActivity.ResolveListener();
// Encourage the user to look at a previously mapped area.
if (cloudAnchorId != null && !gotGoodAnchor && cloudAnchorManager != null) {
Log.d(TAG, "put resolveListener on cloud manager once");
userMessageText.setText(R.string.resolving_processing);
cloudAnchorManager.resolveCloudAnchor(cloudAnchorId, resolveListener);
}
}
if (cloudAnchorManager != null && session != null) {
try {
Frame dummy = session.update();
cloudAnchorManager.onUpdate();
} catch (CameraNotAvailableException e) {
e.printStackTrace();
}
}
}
Is there anything wrong in the above update function that I have written? The CloudAnchorManager class is the same one Google uses in its Persistent Cloud Anchor example. Here, I will put its code too:
package com.memex.eu.helpers;
import android.util.Log;
import com.google.ar.core.Anchor;
import com.google.ar.core.Anchor.CloudAnchorState;
import com.google.ar.core.Session;
import com.google.common.base.Preconditions;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
/**
* A helper class to handle all the Cloud Anchors logic, and add a callback-like mechanism on top of
* the existing ARCore API.
*/
public class CloudAnchorManager {
/** Listener for the results of a host operation. */
public interface CloudAnchorListener {
/** This method is invoked when the results of a Cloud Anchor operation are available. */
void onComplete(Anchor anchor);
}
private final Session session;
private final Map<Anchor, CloudAnchorListener> pendingAnchors = new HashMap<>();
public CloudAnchorManager(Session session) {
this.session = Preconditions.checkNotNull(session);
}
/** Hosts an anchor. The {@code listener} will be invoked when the results are available. */
public synchronized void hostCloudAnchor(Anchor anchor, CloudAnchorListener listener) {
Preconditions.checkNotNull(listener, "The listener cannot be null.");
// This is configurable up to 365 days.
Anchor newAnchor = session.hostCloudAnchorWithTtl(anchor, /* ttlDays= */ 365);
pendingAnchors.put(newAnchor, listener);
}
/** Resolves an anchor. The {@code listener} will be invoked when the results are available. */
public synchronized void resolveCloudAnchor(String anchorId, CloudAnchorListener listener) {
Preconditions.checkNotNull(listener, "The listener cannot be null.");
Anchor newAnchor = session.resolveCloudAnchor(anchorId);
pendingAnchors.put(newAnchor, listener);
}
/** Should be called after a {@link Session#update()} call. */
public synchronized void onUpdate() {
Preconditions.checkNotNull(session, "The session cannot be null.");
for (Iterator<Map.Entry<Anchor, CloudAnchorListener>> it = pendingAnchors.entrySet().iterator();
it.hasNext(); ) {
Map.Entry<Anchor, CloudAnchorListener> entry = it.next();
Anchor anchor = entry.getKey();
if (isReturnableState(anchor.getCloudAnchorState())) {
CloudAnchorListener listener = entry.getValue();
listener.onComplete(anchor);
it.remove();
}
}
}
/** Clears any currently registered listeners, so they won't be called again. */
synchronized void clearListeners() {
pendingAnchors.clear();
}
private static boolean isReturnableState(CloudAnchorState cloudState) {
switch (cloudState) {
case NONE:
case TASK_IN_PROGRESS:
return false;
default:
return true;
}
}
}
Also, here is another class I am using (this is also from the Google example project):
/* Listens for a resolved anchor. */
private final class ResolveListener implements CloudAnchorManager.CloudAnchorListener {
@Override
public void onComplete(Anchor resolvedAnchor) {
runOnUiThread(
() -> {
Anchor.CloudAnchorState state = resolvedAnchor.getCloudAnchorState();
if (state.isError()) {
Log.e(TAG, "Error resolving a cloud anchor, state " + state);
userMessageText.setText(getString(R.string.resolving_error, state));
return;
}
Log.e(TAG, "cloud anchor successfully resolved, state " + state);
anchor = resolvedAnchor;
userMessageText.setText(getString(R.string.resolving_success));
gotGoodAnchor = true;
});
}
}
When I run my app, I point the phone's camera at the physical space where I previously put an object, but the anchor is never resolved. I think the problem might be in the update function, but I can't figure out what it is.

I guess I wasn't looking at the object properly. Now, it's working. This code is correct.
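For anyone with the same goal of drawing 3D shapes once the anchor resolves: below is a minimal sketch, not from the original post, of handing the resolved anchor to Sceneform, for example called from ResolveListener.onComplete(). It assumes Sceneform 1.x, that the enclosing class is an Activity, and the arFragment field used above.
// Sketch only: attach a small sphere to the resolved anchor with Sceneform.
// Assumes imports from com.google.ar.sceneform (AnchorNode, Node, math.Vector3)
// and com.google.ar.sceneform.rendering (MaterialFactory, ShapeFactory, Color, ModelRenderable).
private void drawSphereOnAnchor(Anchor resolvedAnchor) {
    MaterialFactory.makeOpaqueWithColor(this, new Color(android.graphics.Color.RED))
        .thenAccept(material -> {
            ModelRenderable sphere =
                ShapeFactory.makeSphere(0.05f, new Vector3(0f, 0.05f, 0f), material);
            AnchorNode anchorNode = new AnchorNode(resolvedAnchor);
            anchorNode.setParent(arFragment.getArSceneView().getScene());
            Node sphereNode = new Node();
            sphereNode.setParent(anchorNode);
            sphereNode.setRenderable(sphere);
        });
}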

Related

Why does my grid not move to the second page with the paging toolbar (GWT 2.4)?

I am developing a GWT app that uses a paging toolbar. When there are more than 10 groups in the grid, the user can go to the second page with the paging toolbar. But when I press the button to go to the second page, the loading indicator is shown and then the toolbar drops back to the first page with the same first 10 items.
The first page displays fine, and when I press the button for the second page the loading indicator appears, but after that the toolbar takes me back to the first page. This is my class for the paging toolbar:
public class MyPagingToolBar extends PagingToolBar {
private static final ConsoleMessages MSGS = GWT.create(ConsoleMessages.class);
public MyPagingToolBar(int pageSize) {
super(pageSize);
PagingToolBarMessages pagingToolbarMessages = getMessages();
pagingToolbarMessages.setBeforePageText(MSGS.pagingToolbarPage());
pagingToolbarMessages.setAfterPageText(MSGS.pagingToolbarOf().concat("{0}"));
StringBuilder sb = new StringBuilder();
sb.append(MSGS.pagingToolbarShowingPre())
.append(" {0} - {1} ")
.append(MSGS.pagingToolbarShowingMid())
.append(" {2} ")
.append(MSGS.pagingToolbarShowingPost());
pagingToolbarMessages.setDisplayMsg(sb.toString());
pagingToolbarMessages.setEmptyMsg(MSGS.pagingToolbarNoResult());
pagingToolbarMessages.setFirstText(MSGS.pagingToolbarFirstPage());
pagingToolbarMessages.setPrevText(MSGS.pagingToolbarPrevPage());
pagingToolbarMessages.setNextText(MSGS.pagingToolbarNextPage());
pagingToolbarMessages.setLastText(MSGS.pagingToolbarLastPage());
pagingToolbarMessages.setRefreshText(MSGS.pagingToolbarRefresh());
}
}
And this is the class where I use MyPagingToolBar:
public abstract class EntityGrid<M extends GwtEntityModel> extends ContentPanel {
private static final ConsoleMessages MSGS = GWT.create(ConsoleMessages.class);
private static final int ENTITY_PAGE_SIZE = 10;
protected GwtSession currentSession;
private AbstractEntityView<M> parentEntityView;
private EntityCRUDToolbar<M> entityCRUDToolbar;
protected KapuaGrid<M> entityGrid;
protected BasePagingLoader<PagingLoadResult<M>> entityLoader;
protected ListStore<M> entityStore;
protected PagingToolBar entityPagingToolbar;
protected EntityFilterPanel<M> filterPanel;
protected EntityGrid(AbstractEntityView<M> entityView, GwtSession currentSession) {
super(new FitLayout());
//
// Set other properties
this.parentEntityView = entityView;
this.currentSession = currentSession;
//
// Container borders
setBorders(false);
setBodyBorder(true);
setHeaderVisible(false);
//
// CRUD toolbar
entityCRUDToolbar = getToolbar();
if (entityCRUDToolbar != null) {
setTopComponent(entityCRUDToolbar);
}
//
// Paging toolbar
entityPagingToolbar = getPagingToolbar();
if (entityPagingToolbar != null) {
setBottomComponent(entityPagingToolbar);
}
}
@Override
protected void onRender(Element target, int index) {
super.onRender(target, index);
//
// Configure Entity Grid
// Data Proxy
RpcProxy<PagingLoadResult<M>> dataProxy = getDataProxy();
// Data Loader
entityLoader = new BasePagingLoader<PagingLoadResult<M>>(dataProxy);
// Data Store
entityStore = new ListStore<M>(entityLoader);
//
// Grid Data Load Listener
entityLoader.addLoadListener(new EntityGridLoadListener<M>(this, entityStore));
//
// Bind Entity Paging Toolbar
if (entityPagingToolbar != null) {
entityPagingToolbar.bind(entityLoader);
}
//
// Configure columns
ColumnModel columnModel = new ColumnModel(getColumns());
//
// Set grid
entityGrid = new KapuaGrid<M>(entityStore, columnModel);
add(entityGrid);
//
// Bind the grid to CRUD toolbar
entityCRUDToolbar.setEntityGrid(this);
//
// Grid selection mode
GridSelectionModel<M> selectionModel = entityGrid.getSelectionModel();
selectionModel.setSelectionMode(SelectionMode.SINGLE);
selectionModel.addSelectionChangedListener(new SelectionChangedListener<M>() {
@Override
public void selectionChanged(SelectionChangedEvent<M> se) {
selectionChangedEvent(se.getSelectedItem());
}
});
//
// Grid view options
GridView gridView = entityGrid.getView();
gridView.setEmptyText(MSGS.gridEmptyResult());
//
// Do first load
refresh();
}
protected EntityCRUDToolbar<M> getToolbar() {
return new EntityCRUDToolbar<M>(currentSession);
}
protected abstract RpcProxy<PagingLoadResult<M>> getDataProxy();
protected PagingToolBar getPagingToolbar() {
return new MyPagingToolBar(ENTITY_PAGE_SIZE);
}
protected abstract List<ColumnConfig> getColumns();
public void refresh() {
entityLoader.load();
entityPagingToolbar.enable();
}
public void refresh(GwtQuery query) {
// m_filterPredicates = predicates;
setFilterQuery(query);
entityLoader.load();
entityPagingToolbar.enable();
}
public void setFilterPanel(EntityFilterPanel<M> filterPanel) {
this.filterPanel = filterPanel;
entityCRUDToolbar.setFilterPanel(filterPanel);
}
protected void selectionChangedEvent(M selectedItem) {
if (parentEntityView != null) {
parentEntityView.setSelectedEntity(selectedItem);
}
}
public void setPagingToolbar(PagingToolBar entityPagingToolbar) {
this.entityPagingToolbar = entityPagingToolbar;
}
public GridSelectionModel<M> getSelectionModel() {
return entityGrid.getSelectionModel();
}
protected abstract GwtQuery getFilterQuery();
protected abstract void setFilterQuery(GwtQuery filterQuery);
}
What is my mistake?
EDIT: This is my server method:
int totalLength = 0;
List<GwtGroup> gwtGroupList = new ArrayList<GwtGroup>();
try {
KapuaLocator locator = KapuaLocator.getInstance();
GroupService groupService = locator.getService(GroupService.class);
UserService userService = locator.getService(UserService.class);
GroupQuery groupQuery = GwtKapuaAuthorizationModelConverter.convertGroupQuery(loadConfig,
gwtGroupQuery);
GroupListResult groups = groupService.query(groupQuery);
if (!groups.isEmpty()) {
if (groups.getSize() >= loadConfig.getLimit()) {
totalLength = Long.valueOf(groupService.count(groupQuery)).intValue();
} else {
totalLength = groups.getSize();
}
for (Group g : groups.getItems()) {
gwtGroupList.add(KapuaGwtAuthorizationModelConverter.convertGroup(g));
for (GwtGroup gwtGroup : gwtGroupList) {
User user = userService.find(g.getScopeId(), g.getCreatedBy());
if (user != null) {
gwtGroup.setUserName(user.getDisplayName());
}
}
}
}
} catch (Exception e) {
KapuaExceptionHandler.handle(e);
}
return new BasePagingLoadResult<GwtGroup>(gwtGroupList, loadConfig.getOffset(),
totalLength);
}
(Didn't I just answer an earlier version of this? Please don't delete questions after you get an answer to them, or people won't answer your questions at all any more.)
If the server is given a request for the second page (offset of 10), but returns a PagingLoadResult for the first page anyway, that is what you will see. Make sure your server is actually sending back the second page - not only that, but it must send in the response object the offset that it actually used for the next page (in your example, 10), or else the paging toolbar will not know which page the user is actually on.
Make sure the server is taking the request offset into account, and returning the parameters it used correctly to the client. If that appears to be correct, please add the server method to your question, and add logging on the client and server to verify what is being requested, vs what is being returned.
Skipping items in Java is pretty straightforward, but will not scale very well for huge lists.
In short, just skip the first offset items when looping.
First though, a free code review - this is very inefficient code - you are rewriting every item in gwtGroupList every time you add something:
for (Group g : groups.getItems()) {
gwtGroupList.add(KapuaGwtAuthorizationModelConverter.convertGroup(g));
for (GwtGroup gwtGroup : gwtGroupList) {
User user = userService.find(g.getScopeId(), g.getCreatedBy());
if (user != null) {
gwtGroup.setUserName(user.getDisplayName());
}
}
It could instead read:
for (Group g : groups.getItems()) {
gwtGroupList.add(KapuaGwtAuthorizationModelConverter.convertGroup(g));
}
for (GwtGroup gwtGroup : gwtGroupList) {
User user = userService.find(g.getScopeId(), g.getCreatedBy());
if (user != null) {
gwtGroup.setUserName(user.getDisplayName());
}
}
Alternatively, they could be just one loop.
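For illustration, the one-loop version would look like the sketch below (same identifiers as above). Note that in the two-loop form the second loop still references g, which is no longer in scope there, so fusing the loops also fixes that:
for (Group g : groups.getItems()) {
    GwtGroup gwtGroup = KapuaGwtAuthorizationModelConverter.convertGroup(g);
    // Enrich the freshly converted item in the same pass instead of re-scanning the list.
    User user = userService.find(g.getScopeId(), g.getCreatedBy());
    if (user != null) {
        gwtGroup.setUserName(user.getDisplayName());
    }
    gwtGroupList.add(gwtGroup);
}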
Now we modify it again, to handle offset and limit:
int itemsLeftToSkip = offset;
for (Group g : groups.getItems()) {
if (itemsLeftToSkip > 0) {
itemsLeftToSkip--;
continue;//we skipped this item, and now the count is one less
}
if (gwtGroupList.size() >= limit) {
break;//we've got enough already, quit the loop
}
gwtGroupList.add(KapuaGwtAuthorizationModelConverter.convertGroup(g));
}
for (GwtGroup gwtGroup : gwtGroupList) {
User user = userService.find(g.getScopeId(), g.getCreatedBy());
if (user != null) {
gwtGroup.setUserName(user.getDisplayName());
}
}
Notice how we use offset to skip items until we get to the ones that are needed for the new page, and we use limit to send at most that many items.
Finally, unless your groupQuery already has a limit built in (in which case, you should put the offset there too...), the if (groups.getSize() >= loadConfig.getLimit()) { block of code is likely not necessary at all, since you've already loaded all items. If it is necessary because there is a limit, then your pages will not correctly load all the way to the end. Either way, investigate this code, and possibly get it reviewed further; something looks very wrong there.
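To tie this together, a corrected tail of the server method could look roughly like the following sketch. It is only a sketch: it reuses the identifiers from the question, skips and limits in Java as described above, and, importantly, reports back the offset it actually served:
int offset = loadConfig.getOffset();
int limit = loadConfig.getLimit();
int totalLength = Long.valueOf(groupService.count(groupQuery)).intValue();
int itemsLeftToSkip = offset;
for (Group g : groups.getItems()) {
    if (itemsLeftToSkip > 0) {
        itemsLeftToSkip--;          // still on an earlier page, skip it
        continue;
    }
    if (gwtGroupList.size() >= limit) {
        break;                      // the requested page is full
    }
    GwtGroup gwtGroup = KapuaGwtAuthorizationModelConverter.convertGroup(g);
    User user = userService.find(g.getScopeId(), g.getCreatedBy());
    if (user != null) {
        gwtGroup.setUserName(user.getDisplayName());
    }
    gwtGroupList.add(gwtGroup);
}
// Returning the offset that was actually used is what keeps the toolbar on the right page.
return new BasePagingLoadResult<GwtGroup>(gwtGroupList, offset, totalLength);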

How to reset a variable in Flume's custom sink class for every batch

I have a Flume process that reads data from files in a spooldir and loads the data into a MySQL database. Multiple types of files can be processed by the same Flume process.
I have created a custom sink Java class (extending AbstractSink) that updates a local variable (sInterfaceType) after an initial/first read to decide the data format of the file.
I have to reset it once the file processing completes, so that it starts over by identifying the next batch/interface file.
I tried to do it in stop() but that doesn't help. Has anybody done this?
My sink class looks like this:
public class MyFlumeSink2 extends AbstractSink implements Configurable {
private String sInterfaceType; //tells file format of current load
public MyFlumeSink2() {
//my initialization of variables
}
public void configure(Context context) {
//read context variables
}
public void start() {
//create db connection
}
@Override
public void stop() {
//destroy connection
sInterfaceType = ""; //This doesn't help me
super.stop();
}
public Status process() throws EventDeliveryException {
Channel channel = getChannel();
Transaction transaction = channel.getTransaction();
if (sInterfaceType == null || sInterfaceType.isEmpty())
{
//Read first line & set sInterfaceType
}else
//Insert data in MySQL
transaction.commit();
}
}
We have to manually decide which event it is; there is no specialized method called for every new file.
I revised my code to read the event line and set sInterfaceType based on the first element. My code looks like this:
public Status process() throws EventDeliveryException {
//....other code...
sEvtBody = new String(event.getBody());
sFields = sEvtBody.split(",");
//check first field to know record type
enumRec = RecordType.valueOf( checkRecordType(sFields[0].toUpperCase()) );
switch(enumRec)
{
case CUST_ID:
sInterfaceType = "T_CUST";
bHeader = true;
break;
case TXN_ID:
sInterfaceType = "T_CUST_TXNS";
bHeader = true;
break;
default:
bHeader = false;
}
//insert if not header
if(!bHeader)
{
if("T_CUST".equals(sInterfaceType))
{
if(sFields.length == 14)
this.bInsertStatus = daoClass.insertHeader(sFields);
else
throw new Exception("INCORRECT_COLUMN_COUNT");
}else if("T_CUST_TXNS".equals(sInterfaceType))
{
if(sFields.length == 10)
this.bInsertStatus = daoClass.insertData(sFields);
else
throw new Exception("INCORRECT_COLUMN_COUNT");
}
//if(!bInsertStatus)
// logTransaction(sFields);
}
//....Other code....
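The revised snippet references a RecordType enum and a checkRecordType(...) helper that are not shown above. Purely as an illustration of the assumed shape of those helpers (the constant names come from the snippet; the matching logic is my guess):
// Assumed helpers, not part of the original post.
private enum RecordType { CUST_ID, TXN_ID, UNKNOWN }

// Returns the RecordType name matching the first CSV field, or UNKNOWN so that
// the switch in process() falls through to its default (non-header) branch.
private String checkRecordType(String firstField) {
    for (RecordType type : RecordType.values()) {
        if (type.name().equals(firstField)) {
            return type.name();
        }
    }
    return RecordType.UNKNOWN.name();
}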

How to hook into the internal Eclipse browser?

For my Eclipse plugin I want to track every URL that is opened with the internal (and, if possible, also the external) Eclipse browser.
So far I use
org.eclipse.swt.browser.Browser;
and
addLocationListener(...)
But I would prefer that it works also for the internal Eclipse browser. How can I achieve that?
One possible solution for the Eclipse Internal Browser would be to create an eclipse plugin that registers an IStartup extension. In your earlyStartup() method you would register an IPartListener on the workbenchPage. Then when the internal browser part is created, you will receive a callback with a reference to the WebBrowserEditor (or WebBrowserView). Since there is no direct API you will have to hack a bit and use reflection to grab the internal SWT Browser instance. Once you have that, you can add your location listener.
Sometimes during early startup there is no active Workbench window yet so you have to loop through all existing workbench windows (usually just one) and each of their workbench pages to add part listeners also.
Here is the snippet of code for the earlyStartup() routine. Note that I have omitted any cleanup of listeners during dispose for windows/pages so that still needs to be done.
//Add this code to an IStartup.earlyStartup() method
final IPartListener partListener = new IPartListener() {
@Override
public void partOpened(IWorkbenchPart part) {
if (part instanceof WebBrowserEditor)
{
WebBrowserEditor editor = (WebBrowserEditor) part;
try {
Field webBrowser = editor.getClass().getDeclaredField("webBrowser");
webBrowser.setAccessible(true);
BrowserViewer viewer = (BrowserViewer)webBrowser.get(editor);
Field browser = viewer.getClass().getDeclaredField("browser");
browser.setAccessible(true);
Browser swtBrowser = (Browser) browser.get(viewer);
swtBrowser.addLocationListener(new LocationListener() {
@Override
public void changed(LocationEvent event) {
System.out.println(event.location);
}
});
} catch (Exception e) {
}
}
else if (part instanceof WebBrowserView)
{
WebBrowserView view = (WebBrowserView) part;
try {
Field webBrowser = view.getClass().getDeclaredField("viewer");
webBrowser.setAccessible(true);
BrowserViewer viewer = (BrowserViewer)webBrowser.get(view);
Field browser = viewer.getClass().getDeclaredField("browser");
browser.setAccessible(true);
Browser swtBrowser = (Browser) browser.get(viewer);
swtBrowser.addLocationListener(new LocationListener() {
@Override
public void changed(LocationEvent event) {
System.out.println(event.location);
}
});
} catch (Exception e) {
}
}
}
...
};
final IPageListener pageListener = new IPageListener() {
@Override
public void pageOpened(IWorkbenchPage page) {
page.addPartListener(partListener);
}
...
};
final IWindowListener windowListener = new IWindowListener() {
@Override
public void windowOpened(IWorkbenchWindow window) {
window.addPageListener(pageListener);
}
...
};
IWorkbenchWindow activeWindow = PlatformUI.getWorkbench().getActiveWorkbenchWindow();
if (activeWindow != null)
{
IWorkbenchPage activePage = activeWindow.getActivePage();
if (activePage != null)
{
activePage.addPartListener(partListener);
}
else
{
activeWindow.addPageListener(pageListener);
}
}
else
{
for (IWorkbenchWindow window : PlatformUI.getWorkbench().getWorkbenchWindows())
{
for (IWorkbenchPage page : window.getPages()) {
page.addPartListener(partListener);
}
window.addPageListener(pageListener);
}
PlatformUI.getWorkbench().addWindowListener(windowListener);
}
One last detail about this code snippet is that it requires a dependency on the org.eclipse.ui.browser plugin to have access to the WebBrowserEditor class.
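Since the editor and view branches above differ only in the name of the reflected field ("webBrowser" vs. "viewer"), the reflection could be pulled out into a small helper. The sketch below is just a refactoring of the answer's code; the helper name is mine, and the field names are the internal ones the answer already relies on:
// Sketch of a shared helper for the two branches in partOpened(); returns null
// if the internal field layout changes in a future Eclipse release.
private static Browser extractBrowser(Object browserPart, String viewerFieldName) {
    try {
        Field viewerField = browserPart.getClass().getDeclaredField(viewerFieldName);
        viewerField.setAccessible(true);
        BrowserViewer viewer = (BrowserViewer) viewerField.get(browserPart);
        Field browserField = viewer.getClass().getDeclaredField("browser");
        browserField.setAccessible(true);
        return (Browser) browserField.get(viewer);
    } catch (Exception e) {
        return null;
    }
}
// Usage inside partOpened():
//   Browser browser = (part instanceof WebBrowserEditor)
//       ? extractBrowser(part, "webBrowser")
//       : extractBrowser(part, "viewer");
//   if (browser != null) { browser.addLocationListener(...); }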

modifying a map while iterating over its entrySet

In the Javadoc of the Map interface's entrySet() method I found this statement, and I really do not understand it:
The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress, the results of the iteration are undefined
What is meant by undefined here?
For more clarification, this is my situation.
I have a web application based on Spring & Hibernate.
Our team implemented a custom caching class called CachedIntegrationClients.
We are using RabbitMQ as a messaging server.
Instead of getting our clients each time we want to send a message to the server, we cache the clients using the previous caching class.
The problem is that the messages are sent to the messaging server twice.
Viewing the logs, we found that the method that gets the cached clients returns the same client twice, although this is (theoretically) impossible as we store the clients in a map, and the map does not allow duplicate keys.
After a quick look through the code, I found that the method that iterates over the cached clients gets a set of the clients from the cached clients map.
So I suspected that while iterating over this set, another request is made by a client that may not be cached yet, so it modifies the map.
Anyway, this is the CachedIntegrationClients class:
public class CachedIntegrationClientServiceImpl {
private IntegrationDao integrationDao;
private IntegrationService integrationService;
Map<String, IntegrationClient> cachedIntegrationClients = null;
@Override
public void setBaseDAO(BaseDao baseDao) {
super.setBaseDAO(integrationDao);
}
@Override
public void refreshCache() {
cachedIntegrationClients = null;
}
synchronized private void putOneIntegrationClientOnCache(IntegrationClient integrationClient){
fillCachedIntegrationClients(); // only fill cache if it is null , it will never refill cache
if (! cachedIntegrationClients.containsValue(integrationClient)) {
cachedIntegrationClients.put(integrationClient.getClientSlug(),integrationClient);
}
}
/**
* only fill cache if it is null , it will never refill cache
*/
private void fillCachedIntegrationClients() {
if (cachedIntegrationClients != null) {
return ;
}
log.debug("filling cache of cachedClients");
cachedIntegrationClients = new HashMap<String, IntegrationClient>(); // initialize cache Map
List<IntegrationClient> allCachedIntegrationClients= integrationDao.getAllIntegrationClients();
if (allCachedIntegrationClients != null) {
for (IntegrationClient integrationClient : allCachedIntegrationClients) {
integrationService
.injectCssFileForIntegrationClient(integrationClient);
fetchClientServiceRelations(integrationClient
.getIntegrationClientServiceList());
}
for (IntegrationClient integrationClient : allCachedIntegrationClients) {
putOneIntegrationClientOnCache(integrationClient);
}
}
}
/**
* fetch all client service
* @param integrationClientServiceList
*/
private void fetchClientServiceRelations(
List<IntegrationClientService> integrationClientServiceList) {
for (IntegrationClientService integrationClientService : integrationClientServiceList) {
fetchClientServiceRelations(integrationClientService);
}
}
private void fetchClientServiceRelations(IntegrationClientService clientService) {
for (Exchange exchange : clientService.getExchangeList()) {
exchange.getId();
}
for (Company company : clientService.getCompanyList()) {
company.getId();
}
}
/**
* Get a client given its slug.
*
* If the client was not found, an exception will be thrown.
*
* @throws ClientNotFoundIntegrationException
* @return IntegrationClient
*/
@Override
public IntegrationClient getIntegrationClient(String clientSlug) throws ClientNotFoundIntegrationException {
if (cachedIntegrationClients == null) {
fillCachedIntegrationClients();
}
if (!cachedIntegrationClients.containsKey(clientSlug)) {
IntegrationClient integrationClient = integrationDao.getIntegrationClient(clientSlug);
if (integrationClient != null) {
this.fetchClientServiceRelations(integrationClient.getIntegrationClientServiceList());
integrationService.injectCssFileForIntegrationClient(integrationClient);
cachedIntegrationClients.put(clientSlug, integrationClient);
}
}
IntegrationClient client = cachedIntegrationClients.get(clientSlug);
if (client == null) {
throw ClientNotFoundIntegrationException.forClientSlug(clientSlug);
}
return client;
}
public void setIntegrationDao(IntegrationDao integrationDao) {
this.integrationDao = integrationDao;
}
public IntegrationDao getIntegrationDao() {
return integrationDao;
}
public Map<String, IntegrationClient> getCachedIntegrationClients() {
if (cachedIntegrationClients == null) {
fillCachedIntegrationClients();
}
return cachedIntegrationClients;
}
public IntegrationService getIntegrationService() {
return integrationService;
}
public void setIntegrationService(IntegrationService integrationService) {
this.integrationService = integrationService;
}
}
And here is the method that iterates over the set:
public List<IntegrationClientService> getIntegrationClientServicesForService(IntegrationServiceModel service) {
List<IntegrationClientService> integrationClientServices = new ArrayList<IntegrationClientService>();
for (Entry<String, IntegrationClient> entry : cachedIntegrationClientService.getCachedIntegrationClients().entrySet()) {
IntegrationClientService integrationClientService = getIntegrationClientService(entry.getValue(), service);
if (integrationClientService != null) {
integrationClientServices.add(integrationClientService);
}
}
return integrationClientServices;
}
Also, here is the method that calls the previous one:
List<IntegrationClientService> clients = integrationService.getIntegrationClientServicesForService(service);
System.out.println(clients.size());
if (clients.size() > 0) {
log.info("Inbound service message [" + messageType.getKey() + "] to be sent to " + clients.size()
+ " registered clients: [" + StringUtils.arrayToDelimitedString(clients.toArray(), ", ") + "]");
for (IntegrationClientService integrationClientService : clients) {
Message<T> message = integrationMessageBuilder.build(messageType, payload, integrationClientService);
try {
channel.send(message);
} catch (RuntimeException e) {
messagingIntegrationService.handleException(e, messageType, integrationClientService, payload);
}
}
} else {
log.info("Inbound service message [" + messageType.getKey() + "] but no registered clients, not taking any further action.");
}
And here are the logs that appear on the server:
BaseIntegrationGateway.createAndSendToSubscribers(65) | Inbound service message [news.create] to be sent to 3 registered clients: [Id=126, Service=IntegrationService.MESSAGE_NEWS, Client=MDC, Id=125, Service=IntegrationService.MESSAGE_NEWS, Client=CNBC, Id=125, Service=IntegrationService.MESSAGE_NEWS, Client=CNBC]
Undefined means there is no requirement for any specific behavior. The implementation is free to start WWIII, re-hang all your toilet rolls by the overhand method, sully your grandmother, etc.
The only permitted modification with a specified behaviour is via the iterator.
Have you looked at java.util.concurrent.ConcurrentHashMap?
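For example, removal during iteration is only well-defined when it goes through the iterator itself; a minimal illustration (not taken from the code in the question):
Map<String, IntegrationClient> cache = new HashMap<String, IntegrationClient>();
// ... cache is populated elsewhere ...
Iterator<Map.Entry<String, IntegrationClient>> it = cache.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<String, IntegrationClient> entry = it.next();
    if (entry.getValue() == null) {
        it.remove();                     // defined: removal through the iterator
        // cache.remove(entry.getKey()); // undefined here: a HashMap will typically
        //                               // throw ConcurrentModificationException
    }
}
A ConcurrentHashMap, by contrast, has weakly consistent iterators, so concurrent puts from other request threads do not break an ongoing iteration (although the iteration may or may not see them).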
EDIT: I looked over your code again and this strikes me as odd:
In fillCachedIntegrationClients() you have the following loop:
for (IntegrationClient integrationClient : allCachedIntegrationClients) {
putOneIntegrationClientOnCache(integrationClient);
}
But the putOneIntegrationClientOnCache method itself directly calls fillCachedIntegrationClients();
synchronized private void putOneIntegrationClientOnCache(IntegrationClient integrationClient){
fillCachedIntegrationClients(); // only fill cache if it is null , it will never refill cache
...
}
Something there must go wrong. You are calling fillCachedIntegrationClients() twice. If I am not mistaken, this should actually be a never-ending loop, since one method calls the other and vice versa, and the != null condition is never met during the initialization. Of course, you are modifying and iterating in an undefined way, so maybe that saves you from an infinite loop.

Multiple sessions for one servlet in Java

I have one servlet taking care of multiple sites and therefore I want to have different sessions for different sites, even if it's the same user.
Is there any support for this in Java or do I need to prefix the attribute names instead? I guess prefixing is not a good idea.
/Br Johannes
This CANNOT be done in the servlet container based on URL parameters alone; you'll have to do it yourself. Instead of dealing with attribute prefixes in your servlet, however, the easiest way to manage "separate" sessions is via filter:
Write a simple wrapper class for HttpSession. Have it hold a Map of attributes and back all attribute / value methods by said map; delegate all the other methods to the actual session you're wrapping. Override invalidate() method to remove your session wrapper instead of killing the entire "real" session.
Write a servlet filter; map it to intercept all applicable URLs.
Maintain a collection of your session wrappers as an attribute within the real session.
In your filter's doFilter() method extract the appropriate session wrapper from the collection and inject it into the request you're passing down the chain by wrapping the original request into HttpServletRequestWrapper whose getSession() method is overwritten.
Your servlets / JSPs / etc... will enjoy "separate" sessions.
Note that the session's "lastAccessedTime" is shared with this approach. If you need to keep those separate, you'll have to write your own code for maintaining this setting and for expiring your session wrappers.
I recently came across this problem too, and I went with ChssPly76's suggestion to solve it. I thought I'd post my results here to provide a reference implementation. It hasn't been extensively tested, so kindly let me know if you spot any weaknesses.
I assume that every request to a servlet contains a parameter named uiid, which represents a user ID. The requester has to keep track of sending a new ID every time a link is clicked that opens a new window. In my case this is sufficient, but feel free to use any other (maybe more secure) method here. Furthermore, I work with Tomcat 7 or 8. You might need to extend other classes when working with different servlet containers, but the APIs shouldn't change too much.
In the following, the created sessions are referred to as subsessions, the original container managed session is the parent session. The implementation consists of the following five classes:
The SingleSessionManager keeps track of creation, distribution and cleanup of all subsessions. It does this by acting as a servlet filter which replaces the ServletRequest with a wrapper that returns the appropriate subsession. A scheduler periodically checks for expired subsessions ...and yes, it's a singleton. Sorry, but I still like them.
package session;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
/**
* A singleton class that manages multiple sessions on top of a regular container managed session.
* See web.xml for information on how to enable this.
*
*/
public class SingleSessionManager implements Filter {
/**
* The default session timeout in seconds to be used if no explicit timeout is provided.
*/
public static final int DEFAULT_TIMEOUT = 900;
/**
* The default interval for session validation checks in seconds to be used if no explicit
* timeout is provided.
*/
public static final int DEFAULT_SESSION_INVALIDATION_CHECK = 15;
private static SingleSessionManager instance;
private ScheduledExecutorService scheduler;
protected int timeout;
protected long sessionInvalidationCheck;
private Map<SubSessionKey, HttpSessionWrapper> sessions = new ConcurrentHashMap<SubSessionKey, HttpSessionWrapper>();
public SingleSessionManager() {
sessionInvalidationCheck = DEFAULT_SESSION_INVALIDATION_CHECK;
timeout = DEFAULT_TIMEOUT;
}
public static SingleSessionManager getInstance() {
if (instance == null) {
instance = new SingleSessionManager();
}
return instance;
}
@Override
public void destroy() {
}
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
HttpServletRequestWrapper wrapper = new HttpServletRequestWrapper((HttpServletRequest) request);
chain.doFilter(wrapper, response);
}
@Override
public void init(FilterConfig cfg) throws ServletException {
String timeout = cfg.getInitParameter("sessionTimeout");
if (timeout != null && !timeout.trim().equals("")) {
getInstance().timeout = Integer.parseInt(timeout) * 60;
}
String sessionInvalidationCheck = cfg.getInitParameter("sessionInvalidationCheck");
if (sessionInvalidationCheck != null && !sessionInvalidationCheck.trim().equals("")) {
getInstance().sessionInvalidationCheck = Long.parseLong(sessionInvalidationCheck);
}
getInstance().startSessionExpirationScheduler();
}
/**
* Create a new session ID.
*
* @return A new unique session ID.
*/
public String generateSessionId() {
return UUID.randomUUID().toString();
}
protected void startSessionExpirationScheduler() {
if (scheduler == null) {
scheduler = Executors.newScheduledThreadPool(1);
final Runnable sessionInvalidator = new Runnable() {
public void run() {
SingleSessionManager.getInstance().destroyExpiredSessions();
}
};
final ScheduledFuture<?> sessionInvalidatorHandle =
scheduler.scheduleAtFixedRate(sessionInvalidator
, this.sessionInvalidationCheck
, this.sessionInvalidationCheck
, TimeUnit.SECONDS);
}
}
/**
* Get the timeout after which a session will be invalidated.
*
* @return The timeout of a session in seconds.
*/
public int getSessionTimeout() {
return timeout;
}
/**
* Retrieve a session.
*
* @param uiid
* The user id this session is to be associated with.
* @param create
* If <code>true</code> and no session exists for the given user id, a new session is
* created and associated with the given user id. If <code>false</code> and no
* session exists for the given user id, no new session will be created and this
* method will return <code>null</code>.
* @param originalSession
* The original backing session created and managed by the servlet container.
* @return The session associated with the given user id if this session exists and/or create is
* set to <code>true</code>, <code>null</code> otherwise.
*/
public HttpSession getSession(String uiid, boolean create, HttpSession originalSession) {
if (uiid != null) {
SubSessionKey key = new SubSessionKey(originalSession.getId(), uiid);
if (!sessions.containsKey(key) && create) {
HttpSessionWrapper sw = new HttpSessionWrapper(uiid, originalSession);
sessions.put(key, sw);
}
HttpSessionWrapper session = sessions.get(key);
session.setLastAccessedTime(System.currentTimeMillis());
return session;
}
return null;
}
public HttpSessionWrapper removeSession(SubSessionKey key) {
return sessions.remove(key);
}
/**
* Destroy a session, freeing all its resources.
*
* @param session
* The session to be destroyed.
*/
public void destroySession(HttpSessionWrapper session) {
String uiid = ((HttpSessionWrapper)session).getUiid();
SubSessionKey key = new SubSessionKey(session.getOriginalSession().getId(), uiid);
HttpSessionWrapper w = getInstance().removeSession(key);
if (w != null) {
System.out.println("Session " + w.getId() + " with uiid " + uiid + " was destroyed.");
} else {
System.out.println("uiid " + uiid + " does not have a session.");
}
}
/**
* Destroy all session that are expired at the time of this method call.
*/
public void destroyExpiredSessions() {
List<HttpSessionWrapper> markedForDelete = new ArrayList<HttpSessionWrapper>();
long time = System.currentTimeMillis() / 1000;
for (HttpSessionWrapper session : sessions.values()) {
if (time - (session.getLastAccessedTime() / 1000) >= session.getMaxInactiveInterval()) {
markedForDelete.add(session);
}
}
for (HttpSessionWrapper session : markedForDelete) {
destroySession(session);
}
}
/**
* Remove all subsessions that were created from a given parent session.
*
* @param originalSession
* All subsessions created with this session as their parent session will be
* invalidated.
*/
public void clearAllSessions(HttpSession originalSession) {
Iterator<HttpSessionWrapper> it = sessions.values().iterator();
while (it.hasNext()) {
HttpSessionWrapper w = it.next();
if (w.getOriginalSession().getId().equals(originalSession.getId())) {
destroySession(w);
}
}
}
public void setSessionTimeout(int timeout) {
this.timeout = timeout;
}
}
A subsession is identified by a SubSessionKey. These key objects depend on the uiid and the ID of the parent session.
package session;
/**
* Key object for identifying a subsession.
*
*/
public class SubSessionKey {
private String sessionId;
private String uiid;
/**
* Create a new instance of {@link SubSessionKey}.
*
* @param sessionId
* The session id of the parent session.
* @param uiid
* The user's id this session is associated with.
*/
public SubSessionKey(String sessionId, String uiid) {
super();
this.sessionId = sessionId;
this.uiid = uiid;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((sessionId == null) ? 0 : sessionId.hashCode());
result = prime * result + ((uiid == null) ? 0 : uiid.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
SubSessionKey other = (SubSessionKey) obj;
if (sessionId == null) {
if (other.sessionId != null)
return false;
} else if (!sessionId.equals(other.sessionId))
return false;
if (uiid == null) {
if (other.uiid != null)
return false;
} else if (!uiid.equals(other.uiid))
return false;
return true;
}
@Override
public String toString() {
return "SubSessionKey [sessionId=" + sessionId + ", uiid=" + uiid + "]";
}
}
The HttpServletRequestWrapper wraps a HttpServletRequest object. All methods are redirected to the wrapped request except for the getSession methods which will return an HttpSessionWrapper depending on the user ID in this request's parameters.
package session;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
/**
* Wrapper class that wraps a {@link HttpServletRequest} object. All methods are redirected to the
* wrapped request except for the <code>getSession</code> methods, which will return an
* {@link HttpSessionWrapper} depending on the user id in this request's parameters.
*
*/
public class HttpServletRequestWrapper extends javax.servlet.http.HttpServletRequestWrapper {
private HttpServletRequest req;
public HttpServletRequestWrapper(HttpServletRequest req) {
super(req);
this.req = req;
}
@Override
public HttpSession getSession() {
return getSession(true);
}
@Override
public HttpSession getSession(boolean create) {
String[] uiid = getParameterMap().get("uiid");
if (uiid != null && uiid.length >= 1) {
return SingleSessionManager.getInstance().getSession(uiid[0], create, req.getSession(create));
}
return req.getSession(create);
}
}
The HttpSessionWrapper represents a subsession.
package session;
import java.util.Collections;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionContext;
/**
* Implementation of a HttpSession. Each instance of this class is created around a container
* managed parent session with its lifetime linked to its parent's.
*
*/
@SuppressWarnings("deprecation")
public class HttpSessionWrapper implements HttpSession {
private Map<String, Object> attributes;
private Map<String, Object> values;
private long creationTime;
private String id;
private String uiid;
private boolean isNew;
private long lastAccessedTime;
private HttpSession originalSession;
public HttpSessionWrapper(String uiid, HttpSession originalSession) {
creationTime = System.currentTimeMillis();
lastAccessedTime = creationTime;
id = SingleSessionManager.getInstance().generateSessionId();
isNew = true;
attributes = new HashMap<String, Object>();
Enumeration<String> names = originalSession.getAttributeNames();
while (names.hasMoreElements()) {
String name = names.nextElement();
attributes.put(name, originalSession.getAttribute(name));
}
values = new HashMap<String, Object>();
for (String name : originalSession.getValueNames()) {
values.put(name, originalSession.getValue(name));
}
this.uiid = uiid;
this.originalSession = originalSession;
}
public String getUiid() {
return uiid;
}
public void setNew(boolean b) {
isNew = b;
}
public void setLastAccessedTime(long time) {
lastAccessedTime = time;
}
@Override
public Object getAttribute(String arg0) {
return attributes.get(arg0);
}
@Override
public Enumeration<String> getAttributeNames() {
return Collections.enumeration(attributes.keySet());
}
@Override
public long getCreationTime() {
return creationTime;
}
@Override
public String getId() {
return id;
}
@Override
public long getLastAccessedTime() {
return lastAccessedTime;
}
@Override
public int getMaxInactiveInterval() {
return SingleSessionManager.getInstance().getSessionTimeout();
}
@Override
public ServletContext getServletContext() {
return originalSession.getServletContext();
}
@Override
public HttpSessionContext getSessionContext() {
return new HttpSessionContext() {
@Override
public Enumeration<String> getIds() {
return Collections.enumeration(new HashSet<String>());
}
@Override
public HttpSession getSession(String arg0) {
return null;
}
};
}
@Override
public Object getValue(String arg0) {
return values.get(arg0);
}
@Override
public String[] getValueNames() {
return values.keySet().toArray(new String[values.size()]);
}
@Override
public void invalidate() {
SingleSessionManager.getInstance().destroySession(this);
}
@Override
public boolean isNew() {
return isNew;
}
@Override
public void putValue(String arg0, Object arg1) {
values.put(arg0, arg1);
}
@Override
public void removeAttribute(String arg0) {
attributes.remove(arg0);
}
@Override
public void removeValue(String arg0) {
values.remove(arg0);
}
@Override
public void setAttribute(String arg0, Object arg1) {
attributes.put(arg0, arg1);
}
@Override
public void setMaxInactiveInterval(int arg0) {
SingleSessionManager.getInstance().setSessionTimeout(arg0);
}
public HttpSession getOriginalSession() {
return originalSession;
}
}
The SessionInvalidator is an HttpSessionListener that takes care of cleaning all subsessions in case of the invalidation of their parent session.
package session;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;
/**
* Session listener that listens for the destruction of a container managed session and takes care
* of destroying all its subsessions.
* <p>
* Normally this listener won't have much to do since subsessions usually have a shorter lifetime
* than their parent session and therefore will timeout long before this method is called. This
* listener will only be important in case of an explicit invalidation of a parent session.
* </p>
*
*/
public class SessionInvalidator implements HttpSessionListener {
@Override
public void sessionCreated(HttpSessionEvent arg0) {
}
@Override
public void sessionDestroyed(HttpSessionEvent arg0) {
SingleSessionManager.getInstance().clearAllSessions(arg0.getSession());
}
}
Enable everything by putting the following in your web.xml
<filter>
<filter-name>SingleSessionFilter</filter-name>
<filter-class>session.SingleSessionManager</filter-class>
<!-- The timeout in minutes after which a subsession will be invalidated. It is recommended to set the servlet container's session timeout (the "session-timeout" parameter below) to a value higher than this one. -->
<init-param>
<param-name>sessionTimeout</param-name>
<param-value>1</param-value>
</init-param>
<init-param>
<!-- The interval in seconds at which a check for expired sessions will be performed. -->
<param-name>sessionInvalidationCheck</param-name>
<param-value>15</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>SingleSessionFilter</filter-name>
<!-- Insert the name of your servlet here to which the session management should apply, or use url-pattern instead. -->
<servlet-name>YourServlet</servlet-name>
</filter-mapping>
<listener>
<listener-class>session.SessionInvalidator</listener-class>
</listener>
<!-- Timeout of the parent session -->
<session-config>
<session-timeout>40</session-timeout>
<!-- Session timeout interval in minutes -->
</session-config>
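For completeness, here is an example (not part of the reference implementation) of a servlet mapped behind SingleSessionFilter. It needs no special code, because the filter's HttpServletRequestWrapper makes getSession() return the per-uiid subsession:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
public class ExampleServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Requests carry ?uiid=..., so this is the HttpSessionWrapper for that window.
        HttpSession subSession = req.getSession();
        Integer visits = (Integer) subSession.getAttribute("visits");
        visits = (visits == null) ? 1 : visits + 1;
        subSession.setAttribute("visits", visits);
        resp.getWriter().println("Visits in this subsession: " + visits);
    }
}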
I think you're looking for something like Apache Tomcat. It will manage individual sessions for individual servlet applications.
The session is unique for a combination of user and web application. You can of course deploy your servlet in several web applications on the same Tomcat instance, but you will not be able to route the HTTP request to different web applications simply based on URL parameters unless you evaluate the URL parameters in a second servlet and redirect the browser to a new URL for the specific web app.
Different servlet containers or J2EE app servers may have different options for routing requests to specific web applications, but AFAIK out of the box, Tomcat can only delegate the request based on either host name or base directory, e.g.:
http://app1/... or http://server/app1/... is delegated to app1
http://app2/... or http://server/app2/... is delegated to app2, and so on
Here is a bug fix for user3792852's reply:
public HttpSession getSession(String uiid, boolean create, HttpSession originalSession)
{
if (uiid != null && originalSession != null)
{
SubSessionKey key = new SubSessionKey(originalSession.getId(), uiid);
synchronized (sessions)
{
HttpSessionWrapper session = sessions.get(key);
if (session == null && create)
{
session = new HttpSessionWrapper(uiid, originalSession);
sessions.put(key, session);
}
if (session != null)
{
session.setLastAccessedTime(System.currentTimeMillis());
}
return session;
}
}
return null;
}
