Best workaround for Spring MVC JSON parsing limitations - java

I have a project which uses Spring and Hibernate, with controllers that return JSON. Naturally, my models contain collections carrying JPA annotations to define the Hibernate relationships; for example, I have Users, each of which owns a set of Challenges, and likewise each Challenge has a User who owns it.
Unfortunately I seem to be having a lot of issues with collections embedded in my JSON.
For example, with that setup (a User owns Challenges and a Challenge has an owner) I can return a Challenge just fine, and I can return a User just fine. But when I try to return a list of Challenges, everything blows up! I receive the following error from my JMeter test:
Error 500 Server Error
I believe this means that the Jackson JSON parser had an issue serializing the JSON. I believe this because if I use @JsonIgnoreProperties({"challengesOwned"}) then I can return the list of Challenges just fine, since each individual Challenge object no longer has a list embedded inside it.
This seems very strange to me. Can Jackson really not map simple embedded lists within JSONs? I've also got a huge problem because I have a Map which uses a User as its key ... and it seems it's not even possible to define a JSON map's key as an embedded object at all!
Does anyone have a suggestion for my issue? Do I have to manually define some Json mappings? Is there a simple solution I just don't know about?
EDIT:
While what j0ntech says does seem to have been true, it turns out that was not the whole story. It seems that when Spring used Jackson to serialize one of my Hibernate entities into its JSON version, Hibernate was trying to lazy-load one of that entity's properties, but since the entity was outside of its transaction at that point (being "in" the controller), it caused an exception, which just got swallowed up.
So there were actually TWO issues. I figured this out by trying to manually use Jackson to serialize the object I was returning before actually returning it. That way I actually got the stack trace for the other issue.

You probably have a recursive loop (as per DwB's comment): User contains a list of Challenges, each of which contains a User, which contains a list of Challenges, and so on and so forth. The parser (or your server at large) doesn't like that. You should use the annotations @JsonManagedReference and @JsonBackReference.
You can read about how to use these annotations in the Jackson documentation. I've used them in some of my own projects and they work very well if correctly implemented.
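A minimal sketch of how the two annotations sit on the entities (the mapping details are assumptions based on the question, not the asker's actual code):

import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import com.fasterxml.jackson.annotation.JsonBackReference;
import com.fasterxml.jackson.annotation.JsonManagedReference;

@Entity
public class User {
    @OneToMany(mappedBy = "owner")
    @JsonManagedReference          // the "forward" side: serialized normally
    private Set<Challenge> challengesOwned;
    // ...
}

@Entity
public class Challenge {
    @ManyToOne
    @JsonBackReference             // the "back" side: omitted during serialization, breaking the cycle
    private User owner;
    // ...
}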

You could try Flexjson (used by Spring Roo) or Gson (developed by Google).

Parsing a list of JSON objects with Gson seems to be pretty straightforward:
http://rishabhsays.wordpress.com/2011/02/24/parsing-list-of-json-objects-with-gson/
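For what it's worth, a minimal Gson sketch for lists (Challenge stands in for whatever type you are deserializing):

import java.lang.reflect.Type;
import java.util.List;
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

Type listType = new TypeToken<List<Challenge>>() {}.getType();
List<Challenge> challenges = new Gson().fromJson(json, listType);  // JSON array -> List
String backToJson = new Gson().toJson(challenges, listType);       // and back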

Related

Replace @QueryResult while switching from SDN+OGM to SDN/RX

Up to Spring Boot 2.3.4, I've been using the @QueryResult annotation to map some custom Cypher query responses to POJOs. I'm now testing the first Spring Boot 2.4 RC and trying to follow the instructions on how to drop OGM, since its support has been removed. I successfully replaced the other annotations with the ones provided here:
https://neo4j.github.io/sdn-rx/current/#migrating
but I'm now left with my @QueryResult annotations, for which nothing is specified. When I delete them I get mapping errors:
org.springframework.data.mapping.MappingException: Could not find mappable nodes or relationships inside Record
I've looked up some of the mapping explanations, but here's the thing: my custom POJOs don't represent any entity from the database, nor do they represent part(s) of an entity. They're rather relevant bits from different nodes.
Let me give an example:
I want to get all b nodes that are targets of the MY_REL relationship from a:
(a:Node {label:"my label"})-[:MY_REL]->(b:Node)
For my purposes, I don't need to get the nodes in the response, so my POJO only has 2 attributes:
a "source" String which is the beginning node's label
a "targets" Set of String which is the list of end nodes' labels
and I return this:
RETURN a.label AS source, COLLECT(b.label) AS targets
My POJO was simply annotated with @QueryResult in order to get the mapping done.
Does anyone know how to reproduce this behaviour with the SB 2.4 release candidate? As I said, removing the now-unsupported annotation results in a mapping error, but I don't know what I should do to replace it.
Spring Data Neo4j 6 now supports projections (formerly known as @QueryResult) in line with the other Spring Data modules.
Having said this, the simplest thing to do, assuming that this @Query is written in a Neo4jRepository<Node,...>, would be to also return the a node.
I know that this sounds ridiculous at first, but by choosing the repository abstraction you say that everything that gets processed during the mapping phase is a Node whose properties (or a subset of them) you want to project into the POJO (DTO projection). SDN cannot ensure that you are really working with the right type when it starts the mapping, so it throws the exception you are facing. Neo4j-OGM was more relaxed behind the scenes when mapping @QueryResults, but unfortunately it was also wrong in taking that direction.
If your use case is as simple as you have described it, I would strongly suggest using the Neo4jClient (docs), which gives you direct access to the mapping.
It has a fluent API for querying and manual mapping, and it participates in the ongoing Spring transactions your repositories are running within.
There is a lot in there when it comes to projections, so I would suggest also reading that section of the documentation.
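To sketch what that could look like against the query from the question (the POJO name and the injected neo4jClient are illustrative, not from the original post):

import java.util.Collection;
import java.util.HashSet;
import org.neo4j.driver.Value;

Collection<SourceWithTargets> result = neo4jClient
        .query("MATCH (a:Node {label: $label})-[:MY_REL]->(b:Node) "
             + "RETURN a.label AS source, COLLECT(b.label) AS targets")
        .bind("my label").to("label")
        .fetchAs(SourceWithTargets.class)
        .mappedBy((typeSystem, record) -> new SourceWithTargets(
                record.get("source").asString(),
                new HashSet<>(record.get("targets").asList(Value::asString))))
        .all();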

Tomcat, JAX-WS, Hibernate - flaws and solutions?

I am currently working on a client (Java/Swing) server (Tomcat/Hibernate/JAX-WS) application that requires many database operations and should be able to execute long-running background tasks on the server side. I chose this setup mainly for better code reuse.
However, there are some issues that, probably, many others have also faced and found solutions for:
One of the biggest issues was lazy loading vs. JAX-WS. There were some viable solutions, like overriding the JAX-WS accessors (JAX-WS + Hibernate + JAXB: How to avoid LazyInitializationException during marshalling), that solved this issue by replacing Hibernate's proxies with null.
Now I'm facing new problems described by this example:
An entity "customer" is located within a "country", thus: n:1 relationship.
The "country" within the "customer" is marked as lazy-loaded to avoid unnecessary database traffic. When the client UI wants to list customers (the country is not needed here), the country-proxy is replaced by null within the jax-ws accessor and everything is fine.
When editing a customer, however, (I think) I must join the country, even when not viewing/changing it. Otherwise its proxy would be replaced by null when sent to the client via jax-ws, then sent back to the server, and committed (with null) into the database. Hereafter my customer->country association is lost.
Maybe there are several solutions like:
marking the country as "optional=false", triggering an exception when I forget to join the country beforehand and then try to save the customer. Using this approach I must always join all references, even when they are not part of the editing process. References marked "optional=true" would pass silently, and coding mistakes might destroy the database.
not replacing the proxy by null within the jax-ws accessor, but some other dummy class that, when sent back from the client to the server, is replaced by the original proxy. But I'm not sure whether this is feasible at all.
use hibernate within the client and connect directly to the database, using jax-ws only for non-database interaction
write some code to allow lazy loading within the client (when necessary) by sending corresponding JAX-WS requests (couldn't find the Stack Overflow link anymore where someone asked for something like this). Totally feels like reinventing Hibernate...
Are there any other solutions, recommendations, best-practices, better setups for this kind of application?
Thx in advance!
You apparently store data differently from how you transfer it. So it might make sense NOT to use the same object instances for transfer and for storage.
One of the solutions is to use different classes for that - DTOs and entities. You store entities but transfer DTOs. This takes additional effort to implement the DTOs and the DTO<->entity mapping, but it gives you a clear separation of layers and may even be more efficient (from the effort point of view) in the long run. You can use Dozer and the like for mapping between DTOs and entities.
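As a rough sketch with the classic Dozer API (the Customer/CustomerDto classes are made-up examples):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

Mapper mapper = new DozerBeanMapper();
CustomerDto dto = mapper.map(customerEntity, CustomerDto.class);  // entity -> DTO on the way out
Customer entity = mapper.map(dto, Customer.class);                // DTO -> entity on the way back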
Another approach is to use not different classes but different instances of objects for transfer and storage. This is probably similar to @VinZ's answer. You "merge" data from the source object into the target object. I wrote a JAXB plugin to generate such merge methods some time ago and found the approach very useful in different use cases.
With this approach you save a significant amount of effort compared to DTOs, but you don't get layer separation at the class level.
I personally would go with a well-developed and polished structure of entities plus extra DTOs optimized for transfer. I'd also try to autogenerate the merge/copy methods somehow to avoid having to write them manually. A JAXB plugin, maybe. I love writing JAXB plugins, so if something can be solved with a JAXB plugin, I'd solve it with a JAXB plugin.
Hope this helps.
The problem you describe is not restricted to your "JAX-WS to Hibernate" scenario. You will face this "null value" problem in other scenarios as well.
One solution is the "DIY merge pattern":
Send the entity from client to server.
On the server invoke "EntityManager.find" with the received ID to find the existing entity.
Now copy over the state yourself. If EntityManager.find returns null, the entity is new -> just persist the received object.
Example:
Customer serverCustomer = dao.findById(receivedCustomer.getId());
if (serverCustomer == null) {
    // Not in the database yet: persist the received object as-is
    dao.persist(receivedCustomer);
} else {
    // Copy the received state onto the managed entity, field by field
    serverCustomer.setName(receivedCustomer.getName());
    serverCustomer.setDate(receivedCustomer.getDate());
    // ... all other fields, except "country"
    if (receivedCustomer.getCountry() != null) {
        // Country keeps its server-side state if the client sent nothing new
        serverCustomer.setCountry(receivedCustomer.getCountry());
    }
}

What is a good strategy for converting JPA entities into RESTful resources

Restful resources do not always have a one-to-one mapping with your jpa entities. As I see it there are a few problems that I am trying to figure out how to handle:
When a resource has information that is populated and saved by more than one entity.
When an entity has more information in it than you want to send down as a resource. I could just use Jackson's @JsonIgnore, but I would still have issues 1, 3 and 4.
When an entity (like an aggregate root) has nested entities and you want to include its nested entities, but only to a certain level of nesting, in your resource.
When you want to exclude one piece of an entity when it's part of one parent entity, but exclude a different piece when it's part of a different parent entity.
Blasted circular references (I got this mostly working with JSOG, using Jackson's @JsonIdentityInfo).
Possible solutions:
The only way I could think of that would handle all of these issues would be to create a whole bunch of "resource" classes with constructors that take the needed entities to construct the resource, plus the necessary getters and setters. Is that overkill?
To solve 2, 3, 4, and 5 I could just do some pre- and post-processing on the actual entity before sending it to Jackson to serialize or deserialize my POJO into JSON, but that doesn't address issue 1.
These are all problems I would think others have come across, and I am curious what solutions other people have come up with. (I am currently using JPA 2, Spring MVC, Jackson, and Spring Data, but am open to other technologies.)
With a combination of JAX-RS 1.1 and Jackson/Gson you can expose JPA entities directly as REST resources, but you will run into a myriad of problems.
DTOs, i.e. projections of the JPA entities, are the way to go. They allow you to separate the resource representation concerns of REST from the transactional concerns of JPA. You get to explicitly define the nature of the representations. You can control the amount of data that appears in the representation, including the depth of the object graph to be traversed, if you design your DTOs/projections carefully. You may need to create multiple DTOs/projections for the same JPA entity for the different resources in which the entity may need to be represented differently.
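For illustration, a hand-rolled DTO along those lines might look like this (the Order/Customer entities and their fields are hypothetical):

// Flattens an Order entity and its related Customer into one resource representation
public class OrderResource {
    private final Long id;
    private final String status;
    private final String customerName;  // pulled out of the related Customer entity

    public OrderResource(Order order) {
        this.id = order.getId();
        this.status = order.getStatus();
        this.customerName = order.getCustomer().getName();
    }

    public Long getId() { return id; }
    public String getStatus() { return status; }
    public String getCustomerName() { return customerName; }
}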
Besides, in my experience, using annotations like @JsonIgnore and @JsonIdentityInfo on JPA entities doesn't exactly lead to more usable resource representations. You may eventually run into trouble when merging the objects back into the persistence context (because of the ignored properties), or your clients may be unable to consume the resource representations, since object references as a scheme may not be understood. Most JavaScript clients will usually have trouble consuming object references produced by the @JsonIdentityInfo annotation, due to the lack of standardization here.
There are other additional aspects that become possible through DTOs/projections. JPA @EmbeddedId identifiers do not fit naturally into REST resource representations. Some advocate using the JAX-RS @MatrixParam annotation to identify the resource uniquely in resource URIs, but this does not work out of the box for most clients. Matrix parameters are, after all, only a design note and not a standard (yet). With a DTO/projection, you can serve the resource representation against a computed Id (which could be a combination of the constituent keys).
Note: I currently work on the JBoss Forge plugin for REST where some or all of these issues exist and would be fixed in some future release via the generation of DTOs.
I agree with the other answers that DTOs are the way to go. They solve many problems:
Separation of layers and clean code. One day you may need to expose the data model using a different format (e.g. XML) or interface (e.g. not web-service based). Keeping all the configuration (such as @JsonIgnore, @JsonIdentityInfo) for every interface/format in the domain model would make it really messy. DTOs separate the concerns. They can contain all the configuration required by your external interface (web service) without requiring changes to the domain model, which can stay web-service and format agnostic.
Security - you easily control what is exposed to the client and what the client is allowed to modify.
Performance - you easily control what is sent to the client.
Issues such as (circular) entity references and lazily-loaded collections are also resolved explicitly and knowingly by you when converting to DTOs.
Given your constraints, there looks to be no other solution than Data Transfer Objects - yes, this comes up frequently enough that people have named the pattern...
If your application is completely CRUDish, then the way to go is definitely Spring Data REST, in which case you absolutely do not need DTOs. If it's more complicated than that, you will be safer with DTOs securing the application layer. But do not attempt to encapsulate DTOs inside the controller layer. They belong to the service layer, because the mapping is also part of the logic (what you let into the application and what you let out of it). This way the application layer stays hermetic. Of course, in most cases it can be a mix of the two.

JPA2/Hibernate - Stop lazy loading?

I'm having a problem where JPA is trying to lazily load my data when I don't want it to. Essentially what is happening is that I'm using a service to retrieve some data, and when I go to parse that data into JSON, the JSON library triggers Hibernate to try and lazily load the data. Is there any way to stop this? I've given an example below.
// Web Controller method
public String getEmployeesByQuery(String query) {
    Gson gson = new Gson();
    List<Employee> employees = employeeService.findEmployeesByQuery(query);
    // Here is where the problem occurs - the gson.toJson() method is (I imagine)
    // using my getters to format the JSON output, which is triggering Hibernate to
    // try and lazily load my data...
    return gson.toJson(employees);
}
Is it possible to set JPA/Hibernate to not try and lazily load the data?
UPDATE: I realize that you can use FetchType.EAGER - but what if I don't want to eager-load that data? I just want to stop Hibernate from trying to retrieve more data - I already have the data I want. Right now, whenever I try to access a get() method, Hibernate throws a "no session or session is closed" error, which makes sense because my transaction was already committed from my service.
Thanks!
There are several options:
If you always need to load your collection eagerly, you can specify fetch = FetchType.EAGER in your mapping, as suggested in other answers.
Otherwise you can enable eager fetching for a particular query:
By using a JOIN FETCH clause in an HQL/JPQL query:
SELECT e FROM Employee e JOIN FETCH e.children WHERE ...
By using fetch profiles (in JPA you can access the Hibernate Session via em.unwrap(Session.class))
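A hedged sketch of the fetch-profile route (the profile and association names are illustrative):

import java.util.List;
import javax.persistence.Entity;
import org.hibernate.Session;
import org.hibernate.annotations.FetchMode;
import org.hibernate.annotations.FetchProfile;

@Entity
@FetchProfile(name = "employee-with-children", fetchOverrides = {
    @FetchProfile.FetchOverride(entity = Employee.class, association = "children", mode = FetchMode.JOIN)
})
public class Employee { /* ... */ }

// Enable the profile on the unwrapped Session before running the query:
Session session = em.unwrap(Session.class);
session.enableFetchProfile("employee-with-children");
List<Employee> employees = em.createQuery("SELECT e FROM Employee e", Employee.class).getResultList();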
You really have two options:
You can copy the data from the employee to an object that is not being proxied by Hibernate.
See if there is a way to keep the JSON library from reflecting over the entire object graph. I know some JSON libraries allow you to serialize only some properties of an object.
Personally I would think #1 would be easier if your library only uses reflection.
As others have stated, this is not an issue with JPA/hibernate but rather with the json serialization library you are using. You should instruct gson to exclude the properties you don't want traversed.
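One way to do that with Gson's ExclusionStrategy (the Employee class and "children" field are stand-ins for whatever lazy property is blowing up):

import com.google.gson.ExclusionStrategy;
import com.google.gson.FieldAttributes;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

Gson gson = new GsonBuilder()
        .setExclusionStrategies(new ExclusionStrategy() {
            @Override
            public boolean shouldSkipField(FieldAttributes f) {
                // Never touch the lazy collection, so Hibernate is never asked to load it
                return f.getDeclaringClass() == Employee.class && f.getName().equals("children");
            }

            @Override
            public boolean shouldSkipClass(Class<?> clazz) {
                return false;
            }
        })
        .create();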
Yes:
@*ToMany(fetch=FetchType.EAGER)
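Spelled out on a concrete association (the names are illustrative):

import java.util.List;
import javax.persistence.FetchType;
import javax.persistence.OneToMany;

@OneToMany(mappedBy = "employee", fetch = FetchType.EAGER)
private List<Child> children;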
I suggest you make a fetched copy of the entities you want to use outside of a transaction. That way, the lazy loading will occur from within a transaction and you can pass Gson a plain, not enhanced, POJO.
You can use Dozer to do this. It is very flexible, and with a little configuration (read: you're going to lose your hair configuring it) you can even retrieve only part of the data you want to send to Gson.
You could always change the fetch attribute to FetchType.EAGER, but it is also worth considering whether your transactions have the right scope. Collections will be loaded correctly if they are accessed within a transaction.
Your problem is that you are serializing the data. We ran into the same sort of problem with Flex and JPA/Hibernate. The trick is, depending on how much you want to mangle things, either
Change your data model to not chase after the data you don't want.
Copy the data you do want into some sort of DTO that has no relationships to worry about.
Assuming you're using Hibernate, add the Open Session in View filter... it's something like that; it will keep the session open while you serialize the entire database. ;)
Option one is what we did for the first big project we did, but it ruined the data access library we had for any sort of general purpose use. Since that time we've tended more toward option two.
YMMV
The easy and straightforward thing to do is to create new data classes (something like DTOs).
Use Hibernate.isInitialized() to check whether an object has been initialized by Hibernate or not.
I am checking whether I can override anything in Gson. I will post it here if I find anything new.
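A small sketch of that check (the DTO and mapping helper are hypothetical):

import org.hibernate.Hibernate;

EmployeeDto dto = new EmployeeDto(employee.getId(), employee.getName());
if (Hibernate.isInitialized(employee.getChildren())) {
    // Copy only what is already loaded; leave uninitialized associations out of the DTO
    dto.setChildren(toChildDtos(employee.getChildren()));  // hypothetical mapping helper
}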

How to Serialize Hibernate Collections Properly?

I'm trying to serialize objects from a database that have been retrieved with Hibernate, and I'm only interested in the objects' actual data in its entirety (cycles included).
Now I've been working with XStream, which seems powerful. The problem with XStream is that it looks all too blindly at the information. It recognizes Hibernate's PersistentCollections as they are, with all the Hibernate metadata included. I don't want to serialize those.
So, is there a reasonable way to extract the original Collection from within a PersistentCollection, and also initialize all the referenced data the objects might be pointing to? Or can you recommend a better approach?
(The results from Simple seem perfect, but it can't cope with such basic util classes as Calendar. It also accepts only one annotated object at a time)
The solution described here worked well for me: http://jira.codehaus.org/browse/XSTR-226
The idea is to have a custom XStream converter/mapper for Hibernate collections, which extracts the actual collection from the Hibernate one and calls the corresponding standard converter (for ArrayList, HashMap, etc.).
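A serialization-only sketch of that idea (Hibernate 3 era package names; untested against newer versions, where the wrappers live in org.hibernate.collection.internal):

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.converters.collections.CollectionConverter;
import org.hibernate.collection.PersistentBag;
import org.hibernate.collection.PersistentList;
import org.hibernate.collection.PersistentSet;

XStream xstream = new XStream();
// Let the standard collection converter handle Hibernate's wrappers too
xstream.registerConverter(new CollectionConverter(xstream.getMapper()) {
    @Override
    public boolean canConvert(Class type) {
        return PersistentBag.class == type
                || PersistentList.class == type
                || PersistentSet.class == type;
    }
});
xstream.alias("list", PersistentBag.class);  // write a plain <list> tag instead of the Hibernate class name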
I recommend a simpler approach: use Dozer: http://dozer.sf.net. Dozer is a bean mapper; you can use it to convert, say, a PersonEJB to an object of the same class. Dozer will recursively trigger all proxy fetches through getter() calls, and will also convert source types to destination types (say, java.sql.Date to java.util.Date).
Here's a snippet:
MapperIF mapper = DozerBeanMapperSingletonWrapper.getInstance();
PersonEJB serializablePerson = mapper.map(myPersonInstance, PersonEJB.class);
Bear in mind that as Dozer walks through your object tree it will trigger the proxy loading one by one, so if your object graph has many proxies you will see many queries, which can be expensive.
What generally seems to be the best way, and the way I am currently doing it, is to have another layer of DTO objects. This way you can exclude data that you don't want to go over the channel, as well as limit the depth to which the graph is serialized. I use Dozer for my current DTO (Data Transfer Object) mapping from Hibernate objects to the Flex client.
It works great, with a few caveats:
It's not fast; in fact, it's downright slow. If you send a lot of data, Dozer will not perform very well. This is mostly because of the reflection involved in performing its magic.
In a few cases you'll have to write custom converters for special behavior. These work very well, but they are bi-directional. I personally had to hack the Dozer source to allow uni-directional custom converters.
