I have the following Java servlet that performs what I call the "Addition Service":
public class AdditionService extends HttpServlet {
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
// The request will have 2 Integers inside its body that need to be
// added together and returned in the response.
Integer addend = extractAddendFromRequest(request);
Integer augend = extractAugendFromRequest(request);
Integer sum = addend + augend;
PrintWriter writer = response.getWriter();
writer.write(String.valueOf(sum));
}
}
I am trying to get GWT's RequestFactory to do the same thing (adding two numbers on the app server and returning the sum as a response) using a ValueProxy and AdditionService, and am running into a few issues.
Here's the AdditionRequest (client tier) which is a value object holding two Integers to be added:
// Please note the "tier" (client, shared, server) I have placed all of my Java classes in
// as you read through the code.
public class com.myapp.client.AdditionRequest {
private Integer addend;
private Integer augend;
public AdditionRequest() {
super();
this.addend = 0;
this.augend = 0;
}
// Getters & setters for addend/augend.
}
Next my proxy (client tier):
@ProxyFor(value=AdditionRequest.class)
public interface com.myapp.client.AdditionRequestProxy extends ValueProxy {
public Integer getAddend();
public Integer getAugend();
public void setAddend(Integer a);
public void setAugend(Integer a);
}
Next my service API (in the shared tier):
@Service(value=DefaultAdditionService.class)
public interface com.myapp.shared.AdditionService extends RequestContext {
Request<Integer> sum(AdditionRequest request);
}
Next my request factory (shared tier):
public class com.myapp.shared.ServiceProvider implements RequestFactory {
public AdditionService getAdditionService() {
return new DefaultAdditionService();
}
// ... but since I'm implementing RequestFactory, there's about a dozen
// other methods GWT is forcing me to implement: find, getEventBus, fire, etc.
// Do I really need to implement all these?
}
Finally where the magic happens (server tier):
public class com.myapp.server.DefaultAdditionService implements AdditionService {
@Override
public Request<Integer> sum(AdditionRequest request) {
Integer sum = request.getAddend() + request.getAugend();
return sum;
}
// And because AdditionService extends RequestContext there's another bunch of
// methods GWT is forcing me to implement here: append, create, isChanged, etc.
// Do I really need to implement all these?
}
Here are my questions:
Is my "tier" strategy correct? Have I packaged all the types in the correct client/shared/server packages?
I don't think my setup is correct because AdditionService (in shared) references DefaultAdditionService, which is on the server, which it shouldn't be doing. Shared types should be able to live both on the client and the server, but not have dependencies on either...
Should ServiceProvider be a class that implements RequestFactory, or should it be an interface that extends it? If the latter, where do I define the ServiceProvider impl, and how do I link it back to all these other classes?
What about all these methods in ServiceProvider and DefaultAdditionService? Do I need to implement all 20+ of these core GWT methods? Or am I using the API incorrectly or not as simply as I could be using it?
Where does service locator factor in here? How?
If you want to use RF as a simple RPC mechanism [*] you can (and you are right: only ValueProxys), but you need one more piece: ServiceLocators (introduced in GWT 2.1.1).
With a ServiceLocator you can keep your service implementation (like your servlet) in a real service class, instead of putting it into an entity object (as you will use only ValueProxys, with no static getXyz() methods) as the RF protocol otherwise requires. Note that there are also Locators, used to externalize all those methods from your server-side entities: not needed if you use ValueProxy everywhere.
A ServiceLocator looks something like this (taken from the official docs):
public class DefaultAdditionServiceLocator implements ServiceLocator {
@Override
public Object getInstance(Class<?> clazz) {
try {
return clazz.newInstance();
} catch (InstantiationException e) {
throw new RuntimeException(e);
} catch (IllegalAccessException e) {
throw new RuntimeException(e);
}
}
}
You also need to annotate your DefaultAdditionService with a locator param, so RF knows what to rely on when dispatching your request to your service. Something like:
@Service(value = DefaultAdditionService.class, locator = DefaultAdditionServiceLocator.class)
public interface com.myapp.shared.AdditionService extends RequestContext {
// Note here, you need to use the proxy type of your AdditionRequest.
Request<Integer> sum(AdditionRequestProxy request);
}
Your service will then be the simplest possible thing on Earth (no need to extend/implement anything RF-related):
public class com.myapp.server.DefaultAdditionService {
// The server-side AdditionRequest type.
public Integer sum(AdditionRequest request) {
Integer sum = request.getAddend() + request.getAugend();
return sum;
}
}
If you misspell sum() or do not implement a method declared in your RequestContext, you will get an error.
To instantiate RequestContexts you need to extend the RequestFactory interface, with a public factory method for com.myapp.shared.AdditionService. Something like:
public interface AdditionServiceRequestFactory extends RequestFactory {
public com.myapp.shared.AdditionService createAdditionServiceRequestContext();
}
All your client calls will start from this. See the docs, if not already.
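To sketch what that client code might look like (a hedged example: AdditionServiceRequestFactory and createAdditionServiceRequestContext() are the names assumed above, and GWT.create(), SimpleEventBus and Receiver are the standard GWT pieces):
AdditionServiceRequestFactory factory = GWT.create(AdditionServiceRequestFactory.class);
factory.initialize(new SimpleEventBus());
// Each call gets a fresh RequestContext: build the value proxy, invoke the method, fire.
AdditionService context = factory.createAdditionServiceRequestContext();
AdditionRequestProxy operands = context.create(AdditionRequestProxy.class);
operands.setAddend(2);
operands.setAugend(3);
context.sum(operands).fire(new Receiver<Integer>() {
    @Override
    public void onSuccess(Integer sum) {
        Window.alert("2 + 3 = " + sum);
    }
});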
Now, RF works by totally separating the objects you want to pass between client and server: the client uses proxies (EntityProxy and ValueProxy), the server uses the real objects (either Entitys or simple DTO classes). You use the proxy types (i.e., interfaces whose implementations are automatically generated) everywhere in the client/shared tier, and you use the corresponding domain object (the one referenced with @ProxyFor) only on the server side. RF takes care of the rest. So your AdditionRequest will be on the server side, while AdditionRequestProxy will be on the client side (see the note in the RequestContext). Also note that if you simply use primitive/boxed types as your RequestContext parameters or return types, you do not even need to create ValueProxys at all, as they are transportable by default.
The last bit you need is to wire the RequestFactoryServlet in your web.xml. See the docs here. Note that you can extend it if you want to, say, play around with custom ExceptionHandlers or ServiceLayerDecorators, but you don't need to.
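If you do decide to extend it, a minimal sketch could look like this (DefaultExceptionHandler is the stock handler; swap in your own implementation, and register this servlet in web.xml instead of the default one):
public class MyRequestFactoryServlet extends RequestFactoryServlet {
    public MyRequestFactoryServlet() {
        // The varargs tail of this constructor also accepts optional ServiceLayerDecorators.
        super(new DefaultExceptionHandler());
    }
}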
Speaking about where to put everything:
Locators, ServiceLocators, service instances, domain objects, and RequestFactoryServlet extensions go on the server side;
the RequestContext, RequestFactory extensions and all your proxy types go in the shared tier;
the client side initializes the RequestFactory extension and uses it to obtain RequestContext instances for your service requests.
All in all... to create a simple RPC mechanism with RF, just:
create your service along with ServiceLocator;
create a RequestContext for your requests (annotated with service and locator values);
create a RequestFactory extension to return your RequestContext;
if you want to use more than primitive types in your RequestContext (like simple DTOs), just create client proxy interfaces for them, annotated with @ProxyFor, and remember where to use each type;
wire everything.
Much like that. Ok, I wrote too much and probably forgot something :)
For reference, see:
Official RF documentation;
Thomas Broyer's articles [1], [2];
RF vs GWT-RPC from the RF author point of view.
[*]: With this approach you shift your app from data-oriented to service-oriented. You give up using Entitys, IDs, versions and, of course, all the complex diff logic between client and server when it comes to CRUD operations.
Related
I am using Guice's RequestScoped and Provider in order to get instances of some classes during a user request. This works fine currently. Now I want to do some job in a background thread, using the same instances created during the request.
However, when I call Provider.get(), guice returns an error:
Error in custom provider, com.google.inject.OutOfScopeException: Cannot
access scoped object. Either we are not currently inside an HTTP Servlet
request, or you may have forgotten to apply
com.google.inject.servlet.GuiceFilter as a servlet
filter for this request.
AFAIK, this is due to the fact that Guice uses thread-local variables to keep track of the current request's instances, so it is not possible to call Provider.get() from a thread different from the one handling the request.
How can I get the same instances inside new threads using Provider? Is it possible to achieve this by writing a custom scope?
I recently solved this exact problem. There are a few things you can do. First, read up on ServletScopes.continueRequest(), which wraps a callable so it will execute as if it is within the current request. However, that's not a complete solution because it won't forward @RequestScoped objects, only basic things like the HttpServletResponse. That's because @RequestScoped objects are not expected to be thread safe. You have some options:
If your entire @RequestScoped hierarchy is computable from just the HTTP response, you're done! You will get new instances of these objects in the other thread though.
You can use the code snippet below to explicitly forward all RequestScoped objects, with the caveat that they will all be eagerly instantiated.
Some of my @RequestScoped objects couldn't handle being eagerly instantiated because they only work for certain requests. I extended the below solution with my own scope, @ThreadSafeRequestScoped, and only forwarded those ones.
Code sample:
public class RequestScopePropagator {
private final Map<Key<?>, Provider<?>> requestScopedValues = new HashMap<>();
@Inject
RequestScopePropagator(Injector injector) {
for (Map.Entry<Key<?>, Binding<?>> entry : injector.getAllBindings().entrySet()) {
Key<?> key = entry.getKey();
Binding<?> binding = entry.getValue();
// This is like Scopes.isSingleton() but we don't have to follow linked bindings
if (binding.acceptScopingVisitor(IS_REQUEST_SCOPED)) {
requestScopedValues.put(key, binding.getProvider());
}
}
}
private final BindingScopingVisitor<Boolean> IS_REQUEST_SCOPED = new BindingScopingVisitor<Boolean>() {
@Override
public Boolean visitScopeAnnotation(Class<? extends Annotation> scopeAnnotation) {
return scopeAnnotation == RequestScoped.class;
}
@Override
public Boolean visitScope(Scope scope) {
return scope == ServletScopes.REQUEST;
}
@Override
public Boolean visitNoScoping() {
return false;
}
@Override
public Boolean visitEagerSingleton() {
return false;
}
};
public <T> Callable<T> continueRequest(Callable<T> callable) {
Map<Key<?>, Object> seedMap = new HashMap<>();
for (Map.Entry<Key<?>, Provider<?>> entry : requestScopedValues.entrySet()) {
// This instantiates objects eagerly
seedMap.put(entry.getKey(), entry.getValue().get());
}
return ServletScopes.continueRequest(callable, seedMap);
}
}
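A hedged usage sketch, assuming the propagator is injected and used while still on the request thread, and the wrapped Callable is handed to an executor you manage yourself (the ExecutorService binding and doBackgroundWork() are illustrative, not part of the solution above):
public class BackgroundWorkScheduler {
    @Inject RequestScopePropagator propagator;
    @Inject ExecutorService executor; // your own background executor

    void scheduleBackgroundWork() {
        // Wrap the work while the request scope is still active, then run it elsewhere.
        Callable<Void> work = new Callable<Void>() {
            @Override
            public Void call() throws Exception {
                doBackgroundWork(); // may use the @RequestScoped objects seeded by the propagator
                return null;
            }
        };
        executor.submit(propagator.continueRequest(work));
    }

    private void doBackgroundWork() {
        // ... whatever needs the request-scoped objects
    }
}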
I have faced the exact same problem but solved it in a different way. I use jOOQ in my projects and I have implemented transactions using a request scope object and an HTTP filter.
But then I created a background task which is spawned by the server in the middle of the night. And the injection is not working because there is no request scope.
Well, the solution is simple: create a request scope manually. Of course there is no HTTP request going on, but that's not the point (mostly); it is the concept of the request scope that matters. So I just need a request scope that exists alongside my background task.
Guice has an easy way to create a request scope: ServletScopes.scopeRequest.
public class MyBackgroundTask extends Thread {
@Override
public void run() {
RequestScoper scope = ServletScopes.scopeRequest(Collections.emptyMap());
try ( RequestScoper.CloseableScope ignored = scope.open() ) {
doTask();
}
}
private void doTask() {
}
}
Oh, and you will probably need some injections. Be sure to use Providers there; you want to delay instance creation until you are inside the created scope.
Better to use ServletScopes.transferRequest(Callable) in Guice 4.
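For reference, a sketch of that variant: transferRequest wraps a Callable while still on the request thread, and the returned Callable can then run on a background thread with the same scoped instances (Result, executor and doBackgroundWork() are illustrative names):
Callable<Result> scoped = ServletScopes.transferRequest(new Callable<Result>() {
    @Override
    public Result call() throws Exception {
        // Runs later on another thread, but @RequestScoped bindings
        // resolve to the instances of the original request.
        return doBackgroundWork();
    }
});
executor.submit(scoped);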
I am writing endpoint unit tests, and for most of them there is an external web service (or a couple of them) that should be mocked.
At first, I was creating the mocks within the tests, which was okay while an endpoint test used only one external service; the mock creation was basically a one-liner.
As the use cases became more complex, I needed to mock a couple of services, and exceptions, for a single endpoint test.
I have put the mock creation behind factories that all extend a single base factory, and used the builder pattern.
Within that base factory there is an inner class which I use as a builder for MockWebServiceServer.
protected class MultiStepMockBuilder {
private List<Object> mockActions = new ArrayList<Object>();
private WebServiceGatewaySupport gatewaySupport;
protected MultiStepMockBuilder(WebServiceGatewaySupport gatewaySupport) {
this.gatewaySupport = gatewaySupport;
}
protected MultiStepMockBuilder exception(RuntimeException exception) {
mockActions.add(exception);
return this;
}
protected MultiStepMockBuilder resource(Resource resource) {
mockActions.add(resource);
return this;
}
protected MockWebServiceServer build() {
MockWebServiceServer server = MockWebServiceServer.createServer(gatewaySupport);
for(Object mock: mockActions) {
if (mock instanceof RuntimeException) {
server.expect(anything()).andRespond(withException((RuntimeException)mock));
}
else if (mock instanceof Resource)
{
try
{
server.expect(anything()).andRespond(withSoapEnvelope((Resource) mock));
} catch (IOException e) {e.printStackTrace();}
}
else
throw new RuntimeException("unusuported mock action");
}
return server;
}
}
}
So I can now do something like this to create a mock:
return new MultiStepMockBuilder(gatewaySupport).resource(success).exception(new WebServiceIOException("reserve timeout"))
.resource(invalidMsisdn)
.build();
The issue I have with this implementation is its dependence on the instanceof operator, which I otherwise never use outside of equals.
Is there an alternative to the instanceof operator in this scenario? In the questions on the topic of instanceof, everybody argues it should only be used within equals, so I have a feeling that this is a 'dirty' solution.
Is there an alternative to the instanceof operator, within Spring or as a different design, that keeps a fluent interface for mock creation?
I don't know Spring well enough to comment specifically on this particular area, but to me, this just seems like a design thing. Generally, when you are faced with using instanceof, it means that you need to know the type, but you don't have the type. It is generally the case that we might need to refactor in order to achieve a more cohesive design that avoids this kind of problem.
The root of where the type information is being lost is in the list of mock actions, which is currently just a List of Objects. One way to help with this, then, is to look at the type of the List and consider whether there is a better type to store in it that might help us later. We might end up with a refactoring something like this.
private List<MockAction> mockActions = new ArrayList<MockAction>();
Of course, then we have to decide what a MockAction actually is, as we've just made it up. Maybe something like this:
interface MockAction {
void performAction(MockWebServiceServer server);
}
So, we've just created this MockAction interface, and we've decided that instead of the caller performing the action - we're going to pass the server into it and ask the MockAction to perform itself. If we do this, then there will be no need for instanceof - because particular types of MockActions will know what they contain.
So, what types of MockActions do we need?
class ExceptionAction implements MockAction {
    private final RuntimeException exception;
    ExceptionAction(final RuntimeException exception) {
        this.exception = exception;
    }
    public void performAction(final MockWebServiceServer server) {
        server.expect(anything()).andRespond(withException(exception));
    }
}
class ResourceAction implements MockAction {
    private final Resource resource;
    ResourceAction(final Resource resource) {
        this.resource = resource;
    }
    public void performAction(final MockWebServiceServer server) {
        try {
            server.expect(anything()).andRespond(withSoapEnvelope(resource));
        } catch (IOException e) {
            // withSoapEnvelope can throw IOException; rethrowing unchecked is one option
            throw new RuntimeException(e);
        }
    }
}
Ok, so now we've gotten to this point, there are a couple of loose ends.
We're still adding exceptions to the list of MockActions - but we need to change the add methods to make sure we put the right thing in the list. The new versions of these methods might look something like this:
protected MultiStepMockBuilder exception(RuntimeException exception) {
mockActions.add(new ExceptionAction(exception));
return this;
}
protected MultiStepMockBuilder resource(Resource resource) {
mockActions.add(new ResourceAction(resource));
return this;
}
So, now we've left our interface the same, but we're wrapping the resource or exception as they're added to the list so that we have the type specificity we need later on.
And then finally, we need to refactor our method that actually makes the calls, which now looks something like this - which is much simpler and cleaner.
protected MockWebServiceServer build() {
MockWebServiceServer server = MockWebServiceServer.createServer(gatewaySupport);
for(MockAction action: mockActions) {
action.performAction(server);
}
return server;
}
I have a "legacy" code that I want to refactor.
The code basically does a remote call to a server and gets back a reply. Then according to the reply executes accordingly.
Example of skeleton of the code:
public Object processResponse(String responseType, Object response) {
if(responseType.equals(CLIENT_REGISTERED)) {
//code
//code ...
}
else if (responseType.equals(CLIENT_ABORTED)) {
//code
//code....
}
else if (responseType.equals(DATA_SPLIT)) {
//code
//code...
}
// ... many more branches
}
The problem is that there are many-many if/else branches and the code inside each if is not trivial.
So it becomes hard to maintain.
I was wondering what is that best pattern for this?
One thought I had was to create a single object with method names matching the responseType values, and then, inside processResponse, use reflection to call the method whose name matches the responseType.
This would clean up processResponse, but it moves the code into a single object with many, many methods, and I think reflection could cause performance issues.
Is there a nice design approach/pattern to clean this up?
Two approaches:
Strategy pattern http://www.dofactory.com/javascript/strategy-design-pattern
Create a dictionary where the key is metadata (in your case, the metadata is the responseType) and the value is a function object.
For example:
Put this in the constructor:
responses = new HashMap<String, SomeAbstraction>();
responses.put(CLIENT_REGISTERED, new ImplementationForRegisteredClient());
responses.put(CLIENT_ABORTED, new ImplementationForAbortedClient());
where ImplementationForRegisteredClient and ImplementationForAbortedClient implement SomeAbstraction
and call this dictionary via
responses.get(responseType).methodOfYourAbstraction(someParams);
If you want to follow the principle of DI, you can inject this dictionary into your client class; a compact sketch follows.
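A hedged sketch of that idea in Java (ResponseHandler, ResponseDispatcher and the concrete handler classes are made-up names; the constants are the ones from the question):
public interface ResponseHandler {
    Object handle(Object response);
}

public class ResponseDispatcher {
    private final Map<String, ResponseHandler> handlers = new HashMap<>();

    public ResponseDispatcher() {
        handlers.put(CLIENT_REGISTERED, new RegisteredClientHandler());
        handlers.put(CLIENT_ABORTED, new AbortedClientHandler());
        handlers.put(DATA_SPLIT, new DataSplitHandler());
    }

    public Object processResponse(String responseType, Object response) {
        ResponseHandler handler = handlers.get(responseType);
        if (handler == null) {
            throw new IllegalArgumentException("Unknown response type: " + responseType);
        }
        return handler.handle(response);
    }
}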
My first cut would be to replace the if/else if structures with switch/case:
public Object processResponse(String responseType, Object response) {
    switch (responseType) {
        case CLIENT_REGISTERED: {
            //code ...
            break;
        }
        case CLIENT_ABORTED: {
            //code....
            break;
        }
        case DATA_SPLIT: {
            //code...
            break;
        }
        // ... remaining cases
    }
}
From there I'd probably extract each block as a method, and from there apply the Strategy pattern. Stop at whatever point feels right.
The case you've described seems to fit the Strategy pattern perfectly. In particular, you have many variants of an algorithm, i.e. the code executed according to the response of the remote server call.
Implementing the Strategy pattern means that you have to define a class hierarchy, such as the following:
public interface ResponseProcessor {
public void execute(Context ctx);
}
class ClientRegistered implements ResponseProcessor {
public void execute(Context ctx) {
// Actions corresponding to a client that is registered
// ...
}
}
class ClientAborted implements ResponseProcessor {
public void execute(Context ctx) {
// Actions corresponding to a client aborted
// ...
}
}
// and so on...
The Context type should contain all the information needed to execute each 'strategy'. Note that if different strategies share some algorithm pieces, you could also use the Template Method pattern among them.
You need a factory to create a particular strategy at runtime. The factory will build a strategy starting from the response received. A possible implementation is the one suggested by @Sattar Imamov; the factory will contain the if .. else code.
If the strategy classes are not too heavy to build and they don't need any external information at build time, you can also map each strategy to an enum value.
public enum ResponseType {
CLIENT_REGISTERED(new ClientRegistered()),
CLIENT_ABORTED(new ClientAborted()),
DATA_SPLIT(new DataSplit());
// Processor associated to a response
private ResponseProcessor processor;
private ResponseType(ResponseProcessor processor) {
this.processor = processor;
}
public ResponseProcessor getProcessor() {
return this.processor;
}
}
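With that enum in place, the dispatch reduces to a lookup. A hedged sketch, assuming responseType matches the enum constant names and that your Context can be built from the response (the Context constructor and getResult() are illustrative):
public Object processResponse(String responseType, Object response) {
    Context ctx = new Context(response); // however you construct your Context
    ResponseType.valueOf(responseType).getProcessor().execute(ctx);
    return ctx.getResult(); // hypothetical accessor for whatever execute() produced
}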
Say I have two classes in an SOA-style application:
a service class, which takes a request and returns a response;
for further processing (say, business logic, parsing, DAO, etc.), it passes the request to an SvcBusiness class.
The question is: should the SvcBusiness class hold the request as an instance variable, or should it just take the request in one of its business methods? It is possible that the request needs to be passed to lower layers like the DAO layer. Should those classes also hold the request as an instance variable, or should the request just be part of a method signature?
ServiceImpl class:
public class ServiceImpl {
public Response getDataForType1Request(Request type1) {
SvcBusiness buzclazz = new SvcBusiness();
return buzclazz.doOperationForType1(type1);
}
public Response getDataForType2Request(Request type2) {
SvcBusiness buzclazz = new SvcBusiness();
return buzclazz.doOperationForType2(type2);
}
}
Option 1: the request is passed as a parameter.
public class SvcBusiness {
public Response doOperationForType1(Request type1) {
// do business and return response1
}
public Response doOperationForType2(Request type2) {
// do business and return response2
}
}
Option 2: the request is set as an instance variable. In this scenario, ServiceImpl passes the request to the SvcBusiness constructor when the object is created, and then simply calls the execute() method.
public class SvcBusiness {
private Request request;
public SvcBusiness(Request request) {
this.request = request;
}
private Response doOperationForType1() {
// do business and return response1
}
private Response doOperationForType2() {
// do business and return response2
}
public Response execute() {
// if type1 request, call doOperationForType1()
// if type2 request, call doOperationForType2()
}
}
Please help! What are the advantages and disadvantages of both? Is there a design pattern to address this scenario?
Don't use the Request (and Response) further down in your class hierarchy! The service (and everything called by the service) may be called from somewhere else, where there is no such thing as a Request, and then you would have a problem filling that parameter. Use your own data model in the service, and extract and convert everything you need for it from the Request.
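A hedged sketch of that advice, converting the transport-level Request into a small domain object at the service boundary (Type1Input, its fields, Type1Result and toResponse() are illustrative names, not part of the original code):
public class ServiceImpl {
    public Response getDataForType1Request(Request type1) {
        // Convert at the boundary: the business layer never sees the Request itself.
        Type1Input input = new Type1Input(type1.getFieldA(), type1.getFieldB());
        SvcBusiness business = new SvcBusiness();
        Type1Result result = business.doOperationForType1(input);
        return toResponse(result); // map the domain result back onto the transport Response
    }
}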
Fully agree with Uwe's answer. However, if you still want to use the Request class, it is less harmful as a parameter (the way servlets work). Otherwise, you would have to deal with synchronization in what is very likely a multithreaded environment.
When I face a problem like this I always wonder if I really need an object. Usually I go with option 1, but make all the methods static. As those methods don't rely on the current object's state (there are no instance attributes), I save some memory by not creating such objects (another option is to implement the Singleton pattern).
public class SvcBusiness {
public static Response doOperationForType1(Request type1) {
// do business and return response1
}
public static Response doOperationForType2(Request type2) {
// do business and return response2
}
}
What is a use case for using a dynamic proxy?
How do they relate to bytecode generation and reflection?
Any recommended reading?
I highly recommend this resource.
First of all, you must understand the proxy pattern's use case. Remember that the main intent of a proxy is to control access to the target object, rather than to enhance its functionality. Access control includes synchronization, authentication, remote access (RPC), lazy instantiation (Hibernate, MyBatis), and AOP (transactions).
In contrast with a static proxy, a dynamic proxy generates its bytecode at runtime, using Java reflection. With the dynamic approach you don't need to write the proxy class yourself, which is more convenient.
A dynamic proxy class is a class that implements a list of
interfaces specified at runtime such that a method invocation through
one of the interfaces on an instance of the class will be encoded and
dispatched to another object through a uniform interface. It can be
used to create a type-safe proxy object for a list of interfaces
without requiring pre-generation of the proxy class. Dynamic proxy
classes are useful to an application or library that needs to provide
type-safe reflective dispatch of invocations on objects that present
interface APIs.
Dynamic Proxy Classes
I just came up with an interesting use for a dynamic proxy.
We were having some trouble with a non-critical service that is coupled with another dependent service, and we wanted to explore ways of being fault-tolerant when that dependent service becomes unavailable.
So I wrote a LoadSheddingProxy that takes two delegates: one is the remote impl for the 'normal' service (after the JNDI lookup), the other is a 'dummy' load-shedding impl. There is simple logic surrounding each method invocation that catches timeouts and diverts to the dummy for a certain length of time before retrying. Here's how I use it:
// This is part of your ServiceLocator class
public static MyServiceInterface getMyService() throws Exception
{
MyServiceInterface loadShedder = new MyServiceInterface() {
public Thingy[] getThingys(Stuff[] whatever) throws Exception {
return new Thingy[0];
}
//... etc - basically a dummy version of your service goes here
};
Context ctx = JndiUtil.getJNDIContext(MY_CLUSTER);
try {
MyServiceInterface impl = ((MyServiceHome) PortableRemoteObject.narrow(
ctx.lookup(MyServiceHome.JNDI_NAME),
MyServiceHome.class)).create();
// Here's where the proxy comes in
return (MyServiceInterface) Proxy.newProxyInstance(
MyServiceHome.class.getClassLoader(),
new Class[] { MyServiceInterface.class },
new LoadSheddingProxy(MyServiceHome.JNDI_NAME, impl, loadShedder, 60000)); // retry after 60 seconds
} catch (RemoteException e) { // If we can't even look up the service we can fail by shedding load too
logger.warn("Shedding load");
return loadShedder;
} finally {
if (ctx != null) {
ctx.close();
}
}
}
And here's the proxy:
public class LoadSheddingProxy implements InvocationHandler {
static final Logger logger = ApplicationLogger.getLogger(LoadSheddingProxy.class);
Object primaryImpl, loadDumpingImpl;
long retry;
String serviceName;
// map is static because we may have many instances of a proxy around repeatedly looked-up remote objects
static final Map<String, Long> servicesLastTimedOut = new HashMap<String, Long>();
public LoadSheddingProxy(String serviceName, Object primaryImpl, Object loadDumpingImpl, long retry)
{
this.serviceName = serviceName;
this.primaryImpl = primaryImpl;
this.loadDumpingImpl = loadDumpingImpl;
this.retry = retry;
}
public Object invoke(Object obj, Method m, Object[] args) throws Throwable
{
try
{
if (!servicesLastTimedOut.containsKey(serviceName) || timeToRetry()) {
Object ret = m.invoke(primaryImpl, args);
servicesLastTimedOut.remove(serviceName);
return ret;
}
return m.invoke(loadDumpingImpl, args);
}
catch (InvocationTargetException e)
{
Throwable targetException = e.getTargetException();
// DETECT TIMEOUT HERE SOMEHOW - not sure this is the way to do it???
if (targetException instanceof RemoteException) {
servicesLastTimedOut.put(serviceName, Long.valueOf(System.currentTimeMillis()));
}
throw targetException;
}
}
private boolean timeToRetry() {
long lastFailedAt = servicesLastTimedOut.get(serviceName).longValue();
return (System.currentTimeMillis() - lastFailedAt) > retry;
}
}
The class java.lang.reflect.Proxy allows you to implement interfaces dynamically by handling method calls in an InvocationHandler. It is considered part of Java's reflection facility; you don't have to deal with bytecode generation yourself, because the JDK generates the proxy class for you at runtime.
Sun has a tutorial about the use of the Proxy class. Google helps, too.
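A minimal, self-contained sketch of the facility (the Greeter interface is made up for illustration):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

public class ProxyDemo {
    public static void main(String[] args) {
        final Greeter target = new Greeter() {
            public String greet(String name) { return "Hello, " + name; }
        };
        // The handler intercepts every call made through the proxy instance.
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] methodArgs) throws Throwable {
                System.out.println("before " + method.getName());
                Object result = method.invoke(target, methodArgs);
                System.out.println("after " + method.getName());
                return result;
            }
        };
        Greeter proxy = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);
        System.out.println(proxy.greet("world")); // logs before/after, then prints "Hello, world"
    }
}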
One use case is Hibernate: it gives you objects implementing your model classes' interfaces, but underneath the getters and setters there is database-related code. That is, you use them as if they were simple POJOs, but actually a lot is going on under the cover.
For example, you just call the getter of a lazily loaded property, but behind the scenes the property (possibly a whole big object structure) gets fetched from the database.
You should check the cglib library for more info.