I am reading the Spring Security documentation, and it says that SecurityContextHolder provides different types of strategies for apps with specific thread behaviour, for example Swing applications.
I understand that in web apps we can use the ThreadLocal strategy, but I can't understand when to use the other two strategies and how they work.
See Spring Security Reference:
SecurityContextHolder, SecurityContext and Authentication Objects
The most fundamental object is SecurityContextHolder. This is where we store details of the present security context of the application, which includes details of the principal currently using the application. By default the SecurityContextHolder uses a ThreadLocal to store these details, which means that the security context is always available to methods in the same thread of execution, even if the security context is not explicitly passed around as an argument to those methods. Using a ThreadLocal in this way is quite safe if care is taken to clear the thread after the present principal’s request is processed. Of course, Spring Security takes care of this for you automatically so there is no need to worry about it.
Some applications aren’t entirely suitable for using a ThreadLocal, because of the specific way they work with threads. For example, a Swing client might want all threads in a Java Virtual Machine to use the same security context. SecurityContextHolder can be configured with a strategy on startup to specify how you would like the context to be stored. For a standalone application you would use the SecurityContextHolder.MODE_GLOBAL strategy. Other applications might want to have threads spawned by the secure thread also assume the same security identity. This is achieved by using SecurityContextHolder.MODE_INHERITABLETHREADLOCAL. You can change the mode from the default SecurityContextHolder.MODE_THREADLOCAL in two ways. The first is to set a system property, the second is to call a static method on SecurityContextHolder. Most applications won’t need to change from the default, but if you do, take a look at the JavaDoc for SecurityContextHolder to learn more.
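To make that concrete, here is a minimal sketch of both configuration options; the mode constants and the setStrategyName method are part of SecurityContextHolder's public API, but double-check the system property name against the JavaDoc of your Spring Security version:

    import org.springframework.security.core.context.SecurityContextHolder;

    public class StrategyBootstrap {
        public static void main(String[] args) {
            // Option 1: set a system property before SecurityContextHolder is first used
            // (can also be passed on the command line as -Dspring.security.strategy=...)
            System.setProperty("spring.security.strategy",
                    SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);

            // Option 2: call the static method, e.g. one shared context for the whole JVM
            // (a standalone Swing client); pick one of the two options, not both
            SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_GLOBAL);

            // ... bootstrap the rest of the application here
        }
    }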
Now I have started a new application, which I divided into two parts:
ApplicationWeb and ApplicationFramework.
ApplicationWeb contains my servlets, which delegate the logic to the ApplicationFramework side; that side makes several requests to different web services and also, of course, performs the DAO transactions...
Before, I was doing all this with EJB, where I didn't need to worry about pools, transactionality, concurrency and so on, but this time I want to do it in Tomcat, with POJOs. Do I need a thread manager in this case to manage all the requests and DAO access, or will there be no problem since the servlet will create an instance for each session?
I was also planning to use Guice, which I am familiar with and which gets somewhat close to the DI that EJB uses.
Any ideas?
Thanks in advance.
Well, technically the servlet container will process each request in a separate thread, so you don't need to worry about concurrency at the servlet layer.
If you make your DAO and service layers stateless (i.e. you do not store any state in fields), you don't need to worry about concurrency at that level either, since stateless objects are thread-safe.
So, in my opinion, you don't need to worry about concurrency...
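As an illustration (the class, method and table names below are made up), a stateless DAO keeps all per-call state in local variables, so one instance can safely be shared across all servlet request threads:

    // Hypothetical stateless DAO: the only field is an injected, thread-safe DataSource
    public class UserDao {
        private final javax.sql.DataSource dataSource;

        public UserDao(javax.sql.DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public String findNameById(long id) throws java.sql.SQLException {
            // All per-call state lives in local variables, never in fields
            try (java.sql.Connection c = dataSource.getConnection();
                 java.sql.PreparedStatement ps = c.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, id);
                try (java.sql.ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }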
I am reading the Enterprise JavaBeans 3.1 book and I wonder if I have understood the concept of the EJB proxy object correctly. I now know that it follows the proxy pattern, and I have read a bit about it.
When we make the interfaces for the beans, we are doing it because we want to have the proxy pattern implemented. This helps us because the clients are only concerned with what we can do and are not tied directly to a class, but rather to an interface that can act as if it were the real object.
So, the container probably instantiates the proxy objects implementing the corresponding interface and adds some magic code (networking code) before invoking the real EJB for us, because the proxy object is created automatically, right?
Have I misunderstood the concept? If that's the case, could someone tell me what's wrong?
Correct. The interfaces you're writing for your beans would have been sufficient if your application were confined to a local JVM. In that case no proxy would be needed, as the implementing class could be instantiated and supplied directly.
Clients of EJBs cannot work with their implementing classes, as they don't have them on their classpath. EJBs are location-transparent: you can call them across the network, or from another application located on the same server but isolated by different class loaders. In such cases, you need proxy objects to marshal, send over the network, and unmarshal the parameters you supply to EJB calls and the results you receive from those calls. And on the client side, you need a dummy implementation of the EJB interface that forwards your calls to the server where the EJB is installed.
Proxies also handle other functions, such as starting/ending transactions around EJB method calls.
EDIT: if you're curious what EXACTLY such proxies could do, take a look at overviews of RMI in Java and of AOP (either in AspectJ or Spring). That will give you an idea of what kinds of tasks can be implemented this way.
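If you are curious about the mechanics, here is a hedged sketch using java.lang.reflect.Proxy of how such a delegating proxy can work in principle; it is not what any particular container literally does, and the Greeter interface and bean are invented for the example:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public class ProxyDemo {

        // The business interface the client codes against
        public interface Greeter {
            String greet(String name);
        }

        // The "real" bean implementation the client never touches directly
        static class GreeterBean implements Greeter {
            public String greet(String name) { return "Hello, " + name; }
        }

        public static void main(String[] args) {
            Greeter target = new GreeterBean();

            InvocationHandler handler = (proxy, method, methodArgs) -> {
                // A real container would check security, start/commit a transaction,
                // or marshal the call over the network at this point
                System.out.println("before " + method.getName());
                Object result = method.invoke(target, methodArgs);
                System.out.println("after " + method.getName());
                return result;
            };

            Greeter proxy = (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class<?>[] { Greeter.class },
                    handler);

            System.out.println(proxy.greet("world"));  // goes through the handler
        }
    }

A container-generated proxy does the same kind of interception, except that its handler also carries the networking, security and transaction plumbing.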
Are you referring to the proxy interfaces to stateless (and stateful) session beans and message-driven beans?
If so, I think your understanding is correct. The only thing you seem to have missed is the concept of instance pools for stateless beans. The container does not instantiate these per request, but instead provides an implementation from the pool when needed.
Also, using proxies allows container-managed things to take place: transaction management, asynchronous thread management, etc.
I'm trying to understand how the JAAS principal propagates from the web tier to the business/EJB tier.
I've read that if the roles/realm are configured in the login-config and security-constraint elements of web.xml, then the servlet container will also transparently pass the authenticated principal to the EJB tier.
Two questions:
1.) First and more importantly, is that true? Without any intervention from the developer?
2.) And secondly, any idea how that works under the hood?
Yes, it's true. That's generally the point of EJB: to take the "hard" stuff out of the hands of the developer (e.g. security, transactions, robustness, multithreading, etc.).
It's implementation-dependent. I know that in JBoss (at least 4.x and before), remote method calls used a custom serialization protocol which included an additional Map of arbitrary information that could be sent along with the request. In this Map was the auth info, as well as other stuff to support clustering. For local method calls, I believe they use things like ThreadLocals.
There are various "context" pieces of information that get propagated in EJB calls; once you get inside the EJB layer and start doing EJB-to-EJB calls, transactions would be an example. Some containers allow you to create your own such context objects too.
Thread-local storage can be used within a process, but generally just assume that the container is in charge and will do the right thing; the actual technique is implementation-specific.
Regarding your first question - yes.
Regarding your second question - are you familiar, for example, with EJB3 interceptors?
The container creates proxied objects with "interception code" for the beans,
and in addition the container can track other annotations on the methods and the bean class, for example to detect the @PostConstruct annotation.
Using the role definitions, it can check the configuration (either login-config.xml in older versions of JBoss, or standalone.xml in JBoss AS 7 with the standalone configuration) and understand what the definition is for each role.
JAAS is used to provide you with an abstraction layer over authentication and authorization.
One of the concepts behind JAAS is the login module - it provides the "protocol-specific" code that takes care of the actual authentication and authorization.
For example, I use the Krb5LoginModule in this way in order to use Kerberos.
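As a hedged illustration of that abstraction, application code only talks to the generic JAAS API; the entry name "MyKerberosLogin" below is made up, and the mapping to a concrete module such as Krb5LoginModule lives in the JAAS configuration file, not in the code:

    import javax.security.auth.Subject;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    public class JaasLoginDemo {
        public static Subject login(CallbackHandler callbackHandler) throws LoginException {
            // "MyKerberosLogin" is a made-up entry name; the JAAS configuration maps it
            // to a concrete login module such as com.sun.security.auth.module.Krb5LoginModule
            LoginContext lc = new LoginContext("MyKerberosLogin", callbackHandler);
            lc.login();                 // drives the configured login module(s)
            return lc.getSubject();     // the authenticated Subject with its Principals
        }
    }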
How the Principal propagates from the web tier to the EJB tier is configured through the login-config in web.xml, as you had surmised for the most part.
How it is implemented is implementation-dependent. The user/group data is also implementation-dependent and is configured as part of the application server.
However, one of the ways this is done is through an implementation of a JASPIC provider, which is a standard way of obtaining the Principal. Using this allows you to have a different authentication path compared to the standard form login, basic authentication or certificate authentication provided by WEB-INF/web.xml, but it is a little bit more work.
JASPIC authentication paths allow more complex scenarios such as header-based authentication, two-factor, or OpenID. The user database "usually" does not need to be tied to the one in the application server. I say "usually" because WebSphere Application Server ties the authentication to a user configured on the server.
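To give a feel for what a JASPIC authentication path looks like, here is a minimal, hedged sketch of a header-based ServerAuthModule; the X-Auth-User header and the "users" group are invented, and a real module also needs to be registered with the container and to handle challenges and errors properly:

    import java.util.Map;
    import javax.security.auth.Subject;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.message.AuthException;
    import javax.security.auth.message.AuthStatus;
    import javax.security.auth.message.MessageInfo;
    import javax.security.auth.message.MessagePolicy;
    import javax.security.auth.message.callback.CallerPrincipalCallback;
    import javax.security.auth.message.callback.GroupPrincipalCallback;
    import javax.security.auth.message.module.ServerAuthModule;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HeaderAuthModule implements ServerAuthModule {

        private CallbackHandler handler;

        @Override
        public void initialize(MessagePolicy requestPolicy, MessagePolicy responsePolicy,
                               CallbackHandler handler, Map options) throws AuthException {
            this.handler = handler;
        }

        @Override
        public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject,
                                          Subject serviceSubject) throws AuthException {
            HttpServletRequest request = (HttpServletRequest) messageInfo.getRequestMessage();
            String user = request.getHeader("X-Auth-User");   // made-up header name
            if (user == null) {
                return AuthStatus.SEND_FAILURE;   // no credentials; a real module might challenge instead
            }
            try {
                // Hand the authenticated caller (and a group) to the container; from here
                // the container propagates the Principal to the rest of the application
                handler.handle(new Callback[] {
                        new CallerPrincipalCallback(clientSubject, user),
                        new GroupPrincipalCallback(clientSubject, new String[] { "users" })
                });
            } catch (Exception e) {
                throw new AuthException(e.getMessage());
            }
            return AuthStatus.SUCCESS;
        }

        @Override
        public AuthStatus secureResponse(MessageInfo messageInfo, Subject serviceSubject) {
            return AuthStatus.SEND_SUCCESS;
        }

        @Override
        public void cleanSubject(MessageInfo messageInfo, Subject subject) {
        }

        @Override
        public Class[] getSupportedMessageTypes() {
            return new Class[] { HttpServletRequest.class, HttpServletResponse.class };
        }
    }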
We use Tomcat to host our WAR-based applications. Our applications are servlet-container-compliant J2EE applications, with the exception of org.apache.catalina.authenticator.SingleSignOn.
We are being asked to move to a commercial Java EE application server.
The first downside to changing that I see is the cost. No matter what the application server charges, Tomcat is free.
Second is the complexity. We use neither EJB nor EAR features (of course not, we can't), and have not missed them.
What then are the benefits I'm not seeing?
What are the drawbacks that I haven't mentioned?
Mentioned were...
JTA - Java Transaction API - We control transactions via database stored procedures.
JPA - Java Persistence API - We use JDBC and, again, stored procedures to persist.
JMS - Java Message Service - We use XML over HTTP for messaging.
This is good, please more!
When we set out with the goal of certifying Apache Tomcat as Apache TomEE for Java EE 6, here are some of the gaps we had to fill in order to finally pass the Java EE 6 TCK.
Not a complete list, but some highlights that might not be obvious even with the existing answers.
No TransactionManager
Transaction management is definitely required for any certified server. In any web component (servlet, filter, listener, JSF managed bean) you should be able to get a UserTransaction injected like so:
@Resource UserTransaction transaction;
You should be able to use the javax.transaction.UserTransaction to create transactions. All the resources you touch in the scope of that transaction should be enrolled in that transaction. This includes, but is not limited to, the following objects:
javax.sql.DataSource
javax.persistence.EntityManager
javax.jms.ConnectionFactory
javax.jms.QueueConnectionFactory
javax.jms.TopicConnectionFactory
javax.ejb.TimerService
For example, if in a servlet you start a transaction then:
Update the database
Fire a JMS message to a topic or queue
Create a Timer to do work at some later point
...and then one of those things fails, or you simply choose to call rollback() on the UserTransaction, then all of those things are undone.
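Here is a hedged sketch of that pattern in a servlet; the servlet path, the JDBC resource name and the SQL are placeholders, and the JMS/timer steps are only hinted at in a comment:

    import java.io.IOException;
    import javax.annotation.Resource;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    @WebServlet("/orders")
    public class OrderServlet extends HttpServlet {

        @Resource
        private UserTransaction transaction;

        @Resource(name = "jdbc/orders")   // made-up resource name
        private DataSource dataSource;

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try {
                transaction.begin();
                try (java.sql.Connection c = dataSource.getConnection();
                     java.sql.Statement s = c.createStatement()) {
                    // update the database, fire a JMS message, create a timer, ...
                    s.executeUpdate("UPDATE orders SET status = 'PAID' WHERE id = 1");
                }
                transaction.commit();       // everything enlisted above commits together
            } catch (Exception e) {
                try {
                    transaction.rollback(); // ...or is undone together
                } catch (Exception ignored) {
                }
                throw new ServletException(e);
            }
        }
    }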
No Connection Pooling
To be very clear there are two kinds of connection pooling:
Transactionally aware connection pooling
Non-Transactionally aware connection pooling
The Java EE specs do not strictly require connection pooling; however, if you do have connection pooling, it should be transaction-aware or you will lose your transaction management.
What this means is basically:
Everyone in the same transaction should have the same connection from the pool
The connection should not be returned to the pool until the transaction completes (commit or rollback), regardless of whether someone called close() or any other method on the DataSource.
A common library used in Tomcat for connection pooling is commons-dbcp. We wanted to use it in TomEE as well, but it did not support transaction-aware connection pooling, so we actually added that functionality into commons-dbcp (yay, Apache) and it is there as of commons-dbcp version 1.4.
Note that adding commons-dbcp to Tomcat is still not enough to get transactional connection pooling. You still need the transaction manager, and you still need the container to do the plumbing of registering connections with the TransactionManager via Synchronization objects.
In Java EE 7 there's talk of adding a standard way to encrypt DB passwords and package them with the application in a secure file or external storage. This will be one more feature that Tomcat will not support.
No Security Integration
Web services security, the JAX-RS SecurityContext, EJB security, JAAS login and JACC are all security concepts that by default are not "hooked up" in Tomcat, even if you individually add libraries like CXF, OpenEJB, etc.
These APIs are all, of course, supposed to work together in a Java EE server. There was quite a bit of work we had to do to get all of these to cooperate, and to do it on top of the Tomcat Realm API so that people could use all the existing Tomcat Realm implementations to drive their "Java EE" security. It's really still Tomcat security, it's just very well integrated.
JPA Integration
Yes, you can drop a JPA provider into a .war file and use it without Tomcat's help. With this approach you will not get:
@PersistenceUnit EntityManagerFactory injection/lookup
@PersistenceContext EntityManager injection/lookup
An EntityManager hooked up to a transaction-aware connection pool
JTA-Managed EntityManager support
Extended persistence contexts
JTA-managed EntityManager support basically means that two objects in the same transaction that wish to use an EntityManager will both see the same EntityManager, and there is no need to explicitly pass the EntityManager around. All of this "passing" is done for you by the container.
How is this achieved? Simple: the EntityManager you get from the container is a fake. It's a wrapper. When you use it, it looks in the current transaction for the real EntityManager and delegates the call to that EntityManager. This is the reason for the mysterious EntityManager.getDelegate() method: so users can get the real EntityManager if they want and make use of any non-standard APIs. Do so with great care, of course, and never keep a reference to the delegate EntityManager, or you will have a serious memory leak. The delegate EntityManager will normally be flushed, closed, cleaned up and discarded when a transaction completes. If you're still holding a reference to it, you will prevent garbage collection of that EntityManager and possibly all the data it holds.
It's always safe to hold a reference to an EntityManager you got from the container.
It's not safe to hold a reference to EntityManager.getDelegate().
Be very careful holding a reference to an EntityManager you created yourself via an EntityManagerFactory -- you are 100% responsible for its management.
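As a hedged sketch of what the container-managed variant buys you (the persistence unit name and the Customer entity are made up), a stateless session bean can simply inject the wrapper and let the container find the transaction-bound EntityManager on every call:

    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // Customer.java -- minimal made-up entity, just to keep the sketch self-contained
    @Entity
    public class Customer {
        @Id
        private Long id;
        private String name;

        protected Customer() { }   // required by JPA

        public Customer(Long id, String name) { this.id = id; this.name = name; }
    }

    // CustomerRepository.java
    @Stateless
    public class CustomerRepository {

        // The injected EntityManager is the thread-safe wrapper described above;
        // the real, transaction-bound EntityManager is looked up on every call
        @PersistenceContext(unitName = "customers-unit")   // made-up persistence unit name
        private EntityManager em;

        public void save(Customer customer) {
            em.persist(customer);   // joins the caller's JTA transaction automatically
        }

        public List<Customer> findAll() {
            return em.createQuery("SELECT c FROM Customer c", Customer.class).getResultList();
        }
    }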
CDI Integration
I don't want to oversimplify CDI, but I find it is a little too big and many people have not taken a serious look -- it's on the "someday" list for many people :) So here are just a couple of highlights that I think a "web guy" would want to know about.
You know all the putting and getting you do in a typical webapp? Pulling things in and out of the HttpSession all day? Using Strings for the keys and continuously casting the objects you get from the HttpSession? You've probably got utility code to do that for you.
CDI has this utility code too; it's called @SessionScoped. Any object annotated with @SessionScoped gets put and tracked in the HttpSession for you. You just request the object to be injected into your servlet via @Inject FooObject, and the CDI container will track the "real" FooObject instance in the same way I described the transactional tracking of the EntityManager. Abracadabra, now you can delete a bunch of code :)
Doing any getAttribute and setAttribute on HttpServletRequest? Well, you can delete that too with @RequestScoped in the same way.
And of course there is @ApplicationScoped to eliminate the getAttribute and setAttribute calls you might be doing on ServletContext.
To make things even cooler, any object tracked like this can implement a @PostConstruct method, which gets invoked when the bean gets created, and a @PreDestroy method to be notified when said "scope" is finished (the session is done, the request is over, the app is shutting down).
CDI can do a lot more, but that's enough to make anyone want to re-write an old webapp.
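A hedged sketch of that idea (all names are invented): CDI keeps one ShoppingCart per HTTP session and calls the lifecycle methods for you, so the servlet never touches HttpSession attributes:

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.enterprise.context.SessionScoped;
    import javax.inject.Inject;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;

    // ShoppingCart.java -- one instance is tracked per HTTP session by CDI
    @SessionScoped
    public class ShoppingCart implements Serializable {

        private final List<String> items = new ArrayList<>();

        @PostConstruct
        void init() {
            // called when the session-scoped instance is created
        }

        @PreDestroy
        void cleanUp() {
            // called when the HTTP session ends
        }

        public void add(String item) { items.add(item); }

        public List<String> getItems() { return items; }
    }

    // CartServlet.java -- no HttpSession getAttribute/setAttribute needed
    @WebServlet("/cart")
    public class CartServlet extends HttpServlet {

        @Inject
        private ShoppingCart cart;   // CDI proxies to the current session's instance

        // doGet/doPost would simply call cart.add(...) / cart.getItems()
    }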
Some picky things
There are some things added in Java EE 6 that are in Tomcat's wheelhouse but were not added. They don't require big explanations, but they did account for a large chunk of the "filling in the gaps".
Support for @DataSourceDefinition
Support for global JNDI (java:global, java:app, java:module)
Enum injection via @Resource MyEnum myEnum
Class injection via @Resource Class myPluggableClass
Support for @Resource(lookup="foo")
Minor points, but it can be incredibly useful to define a DataSource in the app in a portable way, share JNDI entries between webapps, and have the simple power to say "look this thing up and inject it".
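For example, a hedged sketch of those conveniences combined; the JNDI name, driver class, URL and credentials below are placeholders:

    import javax.annotation.Resource;
    import javax.annotation.sql.DataSourceDefinition;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.sql.DataSource;

    // Portable, in-app DataSource definition (driver class, URL and credentials are placeholders)
    @DataSourceDefinition(
            name = "java:app/jdbc/reportsDb",
            className = "org.h2.jdbcx.JdbcDataSource",
            url = "jdbc:h2:mem:reports",
            user = "sa",
            password = "")
    @WebServlet("/reports")
    public class ReportServlet extends HttpServlet {

        // "Look this thing up and inject it" -- no container-specific descriptor needed
        @Resource(lookup = "java:app/jdbc/reportsDb")
        private DataSource reportsDb;
    }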
Conclusion
As mentioned, not a complete list. No mention of EJB, JMS, JAX-RS, JAX-WS, JSF, Bean Validation and other useful things. But at least some idea of the things often overlooked when people talk about what Tomcat is and is not.
Also be aware that what you might have thought of as "Java EE" might not match the actual definition. With the Web Profile, Java EE has shrunk. This was a deliberate move to address the complaint that "Java EE is too heavy and I don't need all that".
If you cut EJB out of the Web Profile, here's what you have left:
Java Servlets
JavaServer Pages (JSP)
JavaServer Faces (JSF)
Java Transaction API (JTA)
Java Persistence API (JPA)
Java Contexts and Dependency Injection (CDI)
Bean Validation
It's a pretty darn useful stack.
Unless you want EJB proper, you don't need a full-stack J2EE server (commercial or not).
You can have most J2EE features (such as JTA, JPA, JMS, JSF) without a full-stack J2EE server. The only benefit of a full-stack server is that the container will manage all of these on your behalf declaratively. With the advent of EJB3, if you need container-managed services, using one is a good thing.
You can also use a no-cost full-stack server such as Glassfish, Geronimo or JBoss.
You can also run embedded container-managed services, with embedded Glassfish for example, right inside Tomcat.
You may want an EJB container if you want session beans, message-driven beans and timer beans nicely managed for you, even with clustering and failover.
I would suggest that management consider upgrades based on feature need. Some of these EJB containers might just as well use embedded Tomcat as their web server, so what gives!
Some managers just like to pay for things. Ask them to consider a city shelter donation or just go for BEA.
If you are being asked to move to a commercial J2EE server, the reasons may have nothing to do with the J2EE stack but with non-technical considerations.
One thing that you do get with a commercial J2EE offering that you don't get with Tomcat is technical support.
This may not be a consideration for you, depending on the service levels your web applications are supposed to meet. Can your applications be down while you try to figure out a problem with Tomcat, or would that be a major problem?
Cost isn't necessarily a downside, as there are a few free J2EE servers, e.g. JBoss and Glassfish.
Your question assumes that J2EE = Servlet + EJB + EAR, and that there is therefore no point in using anything more than a servlet container if you're not using EJB or EAR. This is simply not the case; J2EE includes a lot more than this. Examples include:
JTA - Java Transaction API
JPA - Java Persistence API
JMS - Java Message Service
JSF - a technology for constructing user interfaces out of components
Cheers,
Donal
In truth, with the vast array of packages and libraries available, there is little an EJB container provides that can't be added to a modern servlet container (à la Tomcat). So, if you ever want any of those features, you can get them "à la carte", so to speak, with the cost being the process of integrating that feature into your app.
If you're not "missing" any of these features now, then from a practical standpoint, you probably don't need them.
That all said, modern EJB containers are really nice and come with all of those services pre-integrated, making them somewhat easier to use should you ever want them. Sometimes having a feature nearby and handy is enough to make someone explore it for its potential in their application, versus seeing the integration process of a feature as a hurdle to adoption.
With the quality of the free EJB containers, it's really hard to imagine how buying one can be at all useful, especially given that you have no real demand for one at the moment.
However, I do encourage you to actually get one and play around with it and explore the platform. Glassfish is very easy to get started with and very good, and should easily take your WARs as is (or with very minor tweaks).
As a rule, when it comes to choosing between running Tomcat and an EJB container, the question is really: why NOT use one? Speaking specifically for Glassfish, I find it easier to use than Tomcat. Its primary difference is that it can have a moderately larger memory footprint than Tomcat (particularly for a small application), but on a large application you won't even notice that. For me, the memory hit isn't a big deal; for others it may be an issue.
And it gives me a single source of all this nice functionality without having to crawl the net for third-party options.