Implementing a RESTful service - Java

I'm building a web service to support an Android e-reader app I'm making for our campus magazine. The service needs to return issue objects to the app, each of which has a cover image and a collection of articles to be displayed. I'd like some general input on two strategies I'm considering, and/or some specific help on a few issues I'm having with them:
Strategy 1: Have 2 DB tables, Issues and Articles. The Issues table contains simply an int id, varchar name, and varchar imageURI. Articles contains many more columns (headline, content, blurb, etc.), with each article on a separate row. One of the columns is issueID, which points to the issue to which the article belongs. When issue number n is requested, the service first pulls its data from the Issues table and uses it to create a new Issue object. The constructor instantiates a new List<Article> as an instance variable and populates it by pulling all articles with the matching issueID from the Articles table. What I can't figure out with this option is exactly how to execute it at a single endpoint, so that the app only has to create one HTTP connection to get everything it needs for the issue (or is this not as important as I think it is?).
Strategy 2: Have a single Issues table with the id, name, and imageURI columns, plus a large number of additional text Article1... text Article40 columns. The Articles are packaged into JSONObjects before being uploaded to the server, and these JSONObjects (which will be very long) are stored directly in the database. My worry here is that the text fields will be too long, plus I have a nagging suspicion that this strategy isn't in line with best practices (although I can't put my finger on why...)
Also, this being my first web service, and given the requirements specified above, would it be advisable to use Spring (or some other framework), or am I better off just using JAX-RS?

There are two questions here:
How to convert your objects to JSON and expose them with a REST service.
How to store/retrieve your data.
To implement your webservices, Jersey is my favorite option. It is the open-source reference implementation of the JSR 311 (JAX-RS). In addition, Jersey uses Jackson to automatically handle the JSON/Object mapping.
To store your data, your second option... is clearly not an option :)
The first solution seems viable.
IMHO, as your application seems tiny, you should not put JPA/Hibernate in place. You should simply write one SQL query by hand with a JOIN between Issues and Articles, populate the requested Issue, then let Jackson automatically convert your object to JSON.
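To make the single-endpoint part of the question concrete, here is a minimal sketch of what such a resource could look like with Jersey and plain JDBC. The JNDI name, the exact column names, and the Issue/Article POJOs are assumptions based on the schema described in the question, not a definitive implementation:

import java.sql.*;
import javax.annotation.Resource;
import javax.sql.DataSource;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;

@Path("/issues")
public class IssueResource {

    @Resource(lookup = "jdbc/magazine") // assumed JNDI name for the connection pool
    private DataSource dataSource;

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Issue getIssue(@PathParam("id") int id) throws SQLException {
        String sql = "SELECT i.id, i.name, i.imageURI, a.headline, a.blurb, a.content "
                   + "FROM Issues i LEFT JOIN Articles a ON a.issueID = i.id "
                   + "WHERE i.id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                Issue issue = null;
                while (rs.next()) {
                    if (issue == null) {
                        issue = new Issue(rs.getInt("id"), rs.getString("name"),
                                rs.getString("imageURI"));
                    }
                    // LEFT JOIN: headline is NULL when the issue has no articles yet
                    if (rs.getString("headline") != null) {
                        issue.getArticles().add(new Article(rs.getString("headline"),
                                rs.getString("blurb"), rs.getString("content")));
                    }
                }
                if (issue == null) {
                    throw new NotFoundException("No issue with id " + id);
                }
                return issue; // Jersey/Jackson serialize the whole graph to JSON
            }
        }
    }
}

With this shape the app gets the cover data and all articles in a single HTTP round trip, which addresses the one-connection concern from the question.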


How can we use the join concept in MongoDB using the Spring framework?

I am new to the Spring framework. I recently made a small project with microservices, where I created two services:
department service
user service
I need to know how I can join them. I have created one common field in both services, departmentId; when I use a GET mapping in the user service with a departmentId, it should fetch the data from the department service for that departmentId.
I am using IntelliJ, MongoDB as the database, the Spring framework, and Java.
Since Mongo is a document-store type of database, it depends on how the data will be used. You'll need to think about how the data will be queried and what the responses should look like.
In an RDBMS, it is natural to normalize your data, split it over several tables, and use joins to create the views you need.
In a document store you do exactly the opposite: you denormalize your data and try to embed as much as you can, so that most queries can be satisfied with one query.
When you use Spring, you might also like to use Spring Data MongoDB: https://spring.io/projects/spring-data-mongodb
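As an illustration of the denormalized approach, the user document can embed the department fields it needs instead of joining at query time. A minimal sketch with Spring Data MongoDB; all class and field names are made up for the example:

import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;

// Hypothetical user document that embeds the department data it needs,
// so fetching a user never requires a second query or a join.
@Document(collection = "users")
class User {
    @Id
    private String id;
    private String name;
    private DepartmentInfo department; // embedded sub-document, not a reference

    static class DepartmentInfo {
        String departmentId;
        String departmentName;
    }
}

// Standard Spring Data repository; the query is derived from the method name
// and traverses into the embedded sub-document (department.departmentId).
interface UserRepository extends MongoRepository<User, String> {
    List<User> findByDepartmentDepartmentId(String departmentId);
}

The trade-off is that a department rename must be written to every user document that embeds it; that is the price of one-query reads.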
If you want to gain in-depth knowledge of Mongo, they have several free courses available: https://university.mongodb.com/

Tomcat, JAX-WS, Hibernate - flaws and solutions?

I am currently working on a client (Java/Swing) server (Tomcat/Hibernate/JAX-WS) application that requires many database operations and should be able to execute long-running background tasks on the server side. I chose this setup mainly for better code reuse.
However, there are some issues that, probably, many others have also faced and found solutions for:
One of the biggest issues was lazy-loading vs. JAX-WS. There were some viable solutions like overriding the JAX-WS accessors (JAX-WS + Hibernate + JAXB: How to avoid LazyInitializationException during marshalling) that solved this issue by replacing Hibernate's proxies with null.
Now I'm facing new problems described by this example:
An entity "customer" is located within a "country", thus: n:1 relationship.
The "country" within the "customer" is marked as lazy-loaded to avoid unnecessary database traffic. When the client UI wants to list customers (the country is not needed here), the country-proxy is replaced by null within the jax-ws accessor and everything is fine.
When editing a customer, however, (I think) I must join the country, even when not viewing/changing it. Otherwise its proxy would be replaced by null when sent to the client via JAX-WS, then sent back to the server, and committed (with null) into the database, after which my customer->country association is lost.
Maybe there are several solutions like:
marking the country as "optional=false", which triggers an exception when I forget to join the country beforehand and then try to save the customer. Using this approach I must always join all references, even when they are not part of the editing process. References requiring "optional=true" would pass silently, and coding mistakes might corrupt the database.
replacing the proxy within the JAX-WS accessor not with null, but with some other dummy class that, when sent back from the client to the server, is replaced by the original proxy. But I'm not sure whether this is feasible at all.
using Hibernate within the client and connecting directly to the database, using JAX-WS only for non-database interaction
writing some code to allow lazy-loading within the client (when necessary) by sending corresponding JAX-WS requests (couldn't find the StackOverflow link anymore where someone asked for something like this). Totally feels like reinventing Hibernate...
Are there any other solutions, recommendations, best-practices, better setups for this kind of application?
Thx in advance!
You apparently store data differently compared to how you transfer it. So it might make sense NOT to use the same object instances for transfer and for storage.
One of the solutions is to use different classes for that - DTOs and entities. You store entities but transfer DTOs. This takes additional effort to implement the DTOs and the DTO<->entity mapping, but it gives you a clear separation of layers and may even be more efficient (from the effort point of view) in the long run. You can use Dozer and the like for mapping between DTOs and entities.
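A minimal sketch of the DTO approach for the customer/country example from the question; the class shapes and field names are assumptions for illustration:

import javax.persistence.*;

@Entity
class Country {
    @Id private Long id;
    public Long getId() { return id; }
}

// Entity: what Hibernate manages; country may be a lazy proxy.
@Entity
class Customer {
    @Id private Long id;
    private String name;
    @ManyToOne(fetch = FetchType.LAZY)
    private Country country;
    public Long getId() { return id; }
    public String getName() { return name; }
    public Country getCountry() { return country; }
}

// DTO: what travels over JAX-WS; carries a plain countryId, never a proxy.
class CustomerDTO {
    public Long id;
    public String name;
    public Long countryId;
}

// Manual mapping; Dozer and similar tools can generate the equivalent.
final class CustomerMapper {
    static CustomerDTO toDto(Customer c, boolean includeCountry) {
        CustomerDTO dto = new CustomerDTO();
        dto.id = c.getId();
        dto.name = c.getName();
        if (includeCountry) {
            // Going through the getter initializes the proxy while the
            // session is still open, so no LazyInitializationException.
            dto.countryId = c.getCountry().getId();
        }
        return dto;
    }
}

Because the DTO never contains a proxy, the "null overwrites the association" problem from the question cannot occur: the server decides per use case whether countryId is populated.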
Another approach is to use not different classes but different instances of objects for transfer and storage. This is probably similar to VinZ's answer. You "merge" data from the source object into the target object. I wrote a JAXB plugin to generate such merge methods some time ago and found the approach very useful in different use cases.
With this approach you save significant amount of effort compared to DTOs, but don't have a layer separation on the class level.
I personally would go with a well-developed and polished structure of entities plus extra DTOs optimized for transfer. I'd also try to autogenerate the merge/copy methods somehow to avoid having to write them manually. A JAXB plugin, maybe. I love writing JAXB plugins, so if something can be solved with a JAXB plugin, I'd solve it with a JAXB plugin.
Hope this helps.
The problem you describe is not restricted to your "JAX-WS to Hibernate" scenario. You will face this "null value" problem in other scenarios as well.
One solution is the "DIY merge pattern":
Send the entity from client to server.
On the server, invoke EntityManager.find with the received ID to find the existing entity.
Now copy over the state yourself. If EntityManager.find returns null, it's new -> just persist the received object.
Example:
Customer serverCustomer = dao.findById(receivedCustomer.getId());
if (serverCustomer == null) {
    // Unknown ID: the entity is new, so persist the received object as-is.
    dao.persist(receivedCustomer);
} else {
    serverCustomer.setName(receivedCustomer.getName());
    serverCustomer.setDate(receivedCustomer.getDate());
    // ... all other fields, except "Country"
    if (receivedCustomer.getCountry() != null) {
        // Country keeps its server state if the client sent no new data
        serverCustomer.setCountry(receivedCustomer.getCountry());
    }
}

Indexing an external REST API with Solr, possible?

This question is maybe a weird one, but my employer has asked me to find out and thus I will.
In our application we use an external REST API to search for some data. This REST API can deliver many types of data, but it is only possible to look up one type of data at a time, for example city names and street names. In our app we force the users to choose what data type to look for as they search, but now our users don't want to do this. So if they search for, say, "los", they want the result to contain both "Los Angeles" and "Losing Street". For this to be possible for us right now, we would have to do two separate searches in the REST API and merge the results.
So instead my employer has read about Solr and is adamant that it is possible to index the REST API so that we use Solr to search for what we want in one search request. I am not so sure. Is it possible, and is it feasible?
Yes, it is definitely possible to come up with a solution for the requirement specified above. Basically, Solr is a full-text search engine, and all fields are indexed in Solr by default. One can carry out different types of operations on these fields through analyzer and tokenizer combinations. You can map all the searchable fields to one specific field (a so-called copy field, e.g. city name and street name -> text) and run your search on this one field to get the desired result.
Solr is a RESTful search engine, and it serves data in XML and optionally JSON format. It's a really useful platform for operating over huge amounts of data, but it doesn't help much with analytics such as calculations.
A few of the benefits include auto-suggest, highlighting, facets, synonym search, n-gram search, auto-correct, etc.
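To sketch what this looks like from Java: assuming a core whose schema copies cityName and streetName into a catch-all field named text, indexing and the single combined search with SolrJ could look like the following (the URL, core name, and field names are assumptions):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class PlaceSearch {
    public static void main(String[] args) throws Exception {
        SolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/places").build();

        // Index one city and one street; copyField merges both into "text".
        SolrInputDocument city = new SolrInputDocument();
        city.addField("id", "1");
        city.addField("cityName", "Los Angeles");
        SolrInputDocument street = new SolrInputDocument();
        street.addField("id", "2");
        street.addField("streetName", "Losing Street");
        solr.add(city);
        solr.add(street);
        solr.commit();

        // One query over the combined field matches cities and streets alike.
        QueryResponse rsp = solr.query(new SolrQuery("text:los*"));
        rsp.getResults().forEach(System.out::println);
        solr.close();
    }
}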
I think you should send a feature request to the REST API maintainer to support a composite search.
The only other thing you can do is to download the whole database from the REST API and create your own database, which you can index and search with your custom queries, and which you have to keep in sync with the REST API. I don't think you want to do that. It will work, but so-called REST APIs usually don't decouple clients from the implementation of the service with links and semantic annotations, so I am afraid it will break easily with any change to the API.
Afaik Solr is a storage solution which supports full-text search and has a REST interface.
Solr is a standalone enterprise search server with a REST-like API. You put documents in it (called "indexing") via XML, JSON, CSV or binary over HTTP. You query it via HTTP GET and receive XML, JSON, CSV or binary results.
You should have no trouble posting the data from the REST API to Solr using the Data Import Handler (DIH), Solr's RESTful interface, or something like Spring Data Solr once you actually have the data. The tricky part is: how will you "crawl" the third-party REST API data?
Depending on whether the REST API provider gives you any way to paginate through the data, e.g. chronologically or alphabetically, you may be able to write a program outside of Solr that polls the REST API and stores the data in a local database before posting it to Solr. This will be easier if the REST API provider allows you to retrieve new or changed records updated after a certain time, so that your polling is efficient and only retrieves a small amount of data after the initial full indexing. Some REST providers allow using webhooks to notify your application that they have updated data in their API. This may or may not be feasible depending on the amount of data and whether you can limit it by user account, etc. to only contain what you need.
It's important to store the third party data in a local database outside of Solr, since Solr's index data files are volatile and sometimes need to be deleted after making configuration changes. That way, you can write a process to repost the data from your database to Solr without having to crawl the REST API again.
For handling the polling at regular intervals, you could use something like Apache Camel or Spring Integration along with Quartz Scheduler. Both of those support REST endpoints and you can also take a look at the DIH examples that come with Solr.
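As a rough sketch of the poll-stage-post pipeline described above, here is a variant using only the JDK scheduler instead of Camel or Quartz; the endpoint URL, the updated_after parameter, and the two storage stubs are placeholders, not a real API:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ApiToSolrPoller {
    private final HttpClient http = HttpClient.newHttpClient();
    private volatile String lastSync = "1970-01-01T00:00:00Z";

    public void start() {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::pollOnce, 0, 15, TimeUnit.MINUTES);
    }

    private void pollOnce() {
        try {
            // Hypothetical "changed since" endpoint on the third-party API.
            HttpRequest req = HttpRequest.newBuilder(URI.create(
                    "https://api.example.com/records?updated_after=" + lastSync))
                    .build();
            String payload =
                    http.send(req, HttpResponse.BodyHandlers.ofString()).body();

            saveToLocalDatabase(payload); // stage locally so Solr can be rebuilt
            postToSolr(payload);          // e.g. SolrJ add + commit
            lastSync = Instant.now().toString();
        } catch (Exception e) {
            e.printStackTrace();          // log and keep the schedule alive
        }
    }

    private void saveToLocalDatabase(String payload) { /* JDBC upsert */ }
    private void postToSolr(String payload) { /* SolrJ or /update handler */ }
}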

Generate Search SQL from HTTP GET request parameters

We have a Java web app with a Hibernate backend that provides REST resources. Now we're facing the task of implementing a generic search that is controlled by the query parameters in our GET request:
some/rest/resource?name_like=foo&created_on>=2012-09-12&sort_by_asc=something
or similar.
We don't want to predefine all possible parameters (name, created_on, something)
We don't want to have to analyze the request String to pick up control characters (like >=)
nor do we want to implement our own grammar to reflect things like _eq, _like, _goe and so on (as an alternative or addition to control characters)
Is there some kind of framework that provides help with this mapping from GET request parameters to database query?
Since we know which REST resource we're GETting, we have the entity / table (select). It will probably also be necessary to predefine the JOINs that will be executed in order to limit the depth of a search.
But other than that we want the REST consuming client to be able to execute any search without us having to predefine how a certain parameter and a certain control sequence will get translated into a search.
Right now I'm trying a semi-automatic solution built on Mysema's QueryDSL. It allows me to predefine the where columns and sort columns, and I'm working on a simple string comparison to detect things like '_like', '_loe', ... in a parameter and then activate the corresponding predefined part of the search. Not much different from an SQL string, except that it's SQL-injection proof and type safe.
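To illustrate, here is roughly what that suffix detection could look like as a QueryDSL predicate builder; the suffix set and the string-only handling are simplifications for the sketch:

import java.util.Map;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.dsl.PathBuilder;

public class SearchParamMapper {
    @SuppressWarnings("unchecked")
    public BooleanBuilder toPredicate(Map<String, String> params, Class<?> entityType) {
        PathBuilder<Object> entity =
                new PathBuilder<>((Class<Object>) entityType, "entity");
        BooleanBuilder where = new BooleanBuilder();
        for (Map.Entry<String, String> p : params.entrySet()) {
            String key = p.getKey();
            if (key.endsWith("_like")) {
                String field = key.substring(0, key.length() - "_like".length());
                where.and(entity.getString(field).containsIgnoreCase(p.getValue()));
            } else if (key.endsWith("_eq")) {
                String field = key.substring(0, key.length() - "_eq".length());
                where.and(entity.getString(field).eq(p.getValue()));
            }
            // ... _goe, _loe etc. would be handled the same way, on typed paths
        }
        return where;
    }
}

The resulting BooleanBuilder can be passed straight to a JPAQuery where clause, which is what keeps it injection-proof and type safe.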
However, I still have to tell my search object that it should be able to potentially handle a query like "look for a person with name like '???'". Right now this is okay, as we only consume the REST resource internally and isolate the actual search creation quite well. If we need a search to do more, we can just add more predefinitions for now. But should we make our REST resources public at some point in the future, that won't be so great.
So we're wondering: there has to be some framework or best practice or recommended solution to approaching this. We're not the first to want this. Redmine, for example, offers all of its resources via a REST interface that I can query at will. Or Facebook with its Graph API. I'm sure those guys didn't just predefine all possibilities but rather created some generic grammar. We'd like to save as much of that effort as possible and use available solutions instead.
Like I said, we're using Hibernate, so an SQL or HQL solution would be fine, or anything that builds on entities, like QueryDSL. (Also, there's the security issue concerning SQL injection.)
Any suggestions? Ideas? Will we just have to do it all ourselves?
From a .NET perspective the closest thing I can think of would be a WCF data service.
Take a look at the URI conventions specified on the OData website. There is some good information in section 4.5, Filter System Query Option. You'll notice that a lot of the examples on this site are .NET related, but there are other suggestions for getting this to work with Java.

Making sure that the same object isn't loaded twice from XML API in Java

I am new to Java, and am working on a Public Transit Java app as a first small project.
I am loading transit data from a server through an XML API (using the DOM XML API). When you call a constructor for, say, a BusStop(int id), the constructor loads the info about that stop from the server based on the id provided. So, I am wondering about a couple of things: how can I make sure I don't instantiate two BusStop objects with the same id (I just want one object for each BusStop)?
Also, does anyone have recommendations on how I should load the objects so I don't need to load the whole database every time I run the app, just the BusStop and the relevant Arrivals and BusTrips objects for that stop? I have done C++ and MVC PHP programming previously, but I don't have experience loading large numbers of objects with circular object references, etc.
Thanks!
I wouldn't start the download/deserialization process in a constructor. I would write a manager class per entity type with a method that fetches the Java object for a given entity based on its ID. Use a HashMap with the entity ID as the key type and the Java class for that entity as the value type. The manager would be a singleton using your preferred pattern (I would probably use static members for simplicity).
The first thing the fetch method should do is check the map to see if it contains an entry for the given ID. If it has already fetched and built this object, return it. If it has not, fetch the entity from the remote service, deserialize it appropriately, put it into the HashMap under the given ID, and return it.
Regarding references to other objects, I suggest you represent those as IDs in your Java objects rather than storing them as Java object references and deserializing them at the same time as the referencing object. The application can lazily instantiate those objects on demand through the relevant manager. This reduces problems with circular references.
If the amount of data is likely to exceed the available RAM of your JVM, you'd need to consider periodically removing older objects from the map to recover memory (confident that they would be reloaded when required).
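A minimal sketch of such a manager, with the fetch/parse details stubbed out and the BusStop shape assumed:

import java.util.HashMap;
import java.util.Map;

// Placeholder entity; the real class would carry the parsed stop data.
class BusStop {
    final int id;
    BusStop(int id) { this.id = id; }
}

// Singleton manager guaranteeing at most one BusStop instance per ID
// (an identity map).
final class BusStopManager {
    private static final BusStopManager INSTANCE = new BusStopManager();
    private final Map<Integer, BusStop> cache = new HashMap<>();

    private BusStopManager() {}

    static BusStopManager getInstance() { return INSTANCE; }

    synchronized BusStop getBusStop(int id) {
        BusStop stop = cache.get(id);
        if (stop == null) {
            stop = fetchFromServer(id); // XML request + DOM parsing goes here
            cache.put(id, stop);
        }
        return stop;
    }

    private BusStop fetchFromServer(int id) {
        // Placeholder: download and deserialize the XML for this stop.
        return new BusStop(id);
    }
}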
For this application I would use the following Java EE technologies: JAX-RS, JPA and JAXB. You will find these technologies included in almost every Java application server (e.g. GlassFish).
JPA - Java Persistence API
Provides a simple means of converting your objects to/from the database. Through annotations you can mark a relationship as lazy to prevent the entire database from being read. Also, through the use of caching, database access and object creation are reduced.
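For example, a lazy relationship that keeps a stop's arrivals out of the initial load might look like this (entity names assumed from the question):

import java.util.List;
import javax.persistence.*;

@Entity
class BusStop {
    @Id private int id;

    // Fetched from the database only when first accessed, so reading
    // one stop does not drag the whole arrivals table along with it.
    @OneToMany(mappedBy = "busStop", fetch = FetchType.LAZY)
    private List<Arrival> arrivals;
}

@Entity
class Arrival {
    @Id private int id;

    @ManyToOne
    private BusStop busStop;
}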
JAXB - Java Architecture for XML Binding
Provides a simple means of converting your objects to/from XML. An implementation is included in Java SE 6.
JAX-RS - Java API for RESTful Services
Provides a simple API (over HTTP) for interacting with XML.
Example
You can check out an example I posted to my blog:
Part 1 - The Database
Part 2 - Mapping the Database to JPA Entities
Part 3 - Mapping JPA entities to XML (using JAXB)
Part 4 - The RESTful Service
Part 5 - The Client
For the classes you want to load only once per given id, use some kind of Factory design pattern. Internally you may want to store the id-to-instance mapping in a Map. Before actually fetching the data from the server, first do a lookup on this map to see if you already have an instance for this id. If not, go ahead with the fetch and then update the map.
