RESTful 1-N optional relationships - java

I'm trying to learn how to write RESTful apps in Java using Jersey and
Hibernate, and I'm struggling to understand how to handle parent/child type
relationships when POSTing data to a Resource. I'm using JSON to exchange
data, but I don't think that's particularly relevant to my problem.
The example I'm working with models the relationship between employees and
teams. An employee may, or may not, be a member of one team:
GET /team/ - returns a list of teams
POST /team/ - creates a new team
GET /team/1 - returns a list of employees in the team with the ID 1
GET /employee/ - returns a list of employees
POST /employee/ - creates a new employee
GET /employee/1 - returns details about the employee with the ID 1
Behind this I have some Hibernate annotated POJOs: one for team, and one
for employee, with a 1-N relationship between the two (remember that an
Employee may not be a member of a team!). The same POJOs are also annotated
with @XmlRootElement so that JAXB will allow me to pass them to/from the
client as JSON.
The properties for the two entities look like this:
Team
Long id;
String name;
String desc;
List<Employee> employees;
Employee
Long id;
Team team;
String name;
String phone;
String email;
All good so far. But I'm struggling to understand how to make an employee
a member of a team at creation-time by just passing in a Team ID, rather
than passing in a nested team object in my JSON object.
For example, I'd like to be able to call POST /employee/ with a JSON that
looks like this:
{
"team_id":"1",
"name":"Fred Bloggs",
"phone":"1234567890",
"email":"test#example.com"
}
But, instead, I have to pass in something like this:
{
"team":{
"id":"1",
}
"name":"Fred Bloggs",
"phone":"1234567890",
"email":"test#example.com"
}
So, my question is, how do others handle creating relationships in JSON/REST without passing around whole object graphs?
Sorry this is such a sketchy question, but as I say, I'm just starting
out, and terminology is a problem for me at this stage!

If your framework forces your representation to include strange constructs like { "id":"1" } then I'd say it's time to switch frameworks!
More importantly, instead of worrying about adding a sub-JSONObject to your code, I would worry that the value "1" is not really a hyperlink. Read up on the hypermedia constraint, or HATEOAS if you want.
What you want to pass in your POST is this:
{
"team_href" : "/teams/1",
"name":"can'tbebothered"
}
so when the server sees this, it links the newly created employee with team #1 merely because it recognises the (relative) URI.
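On the server side that could look something like this (a minimal sketch with hypothetical EmployeeRepresentation, TeamDao and EmployeeDao helpers; it assumes the href follows the /teams/{id} pattern):
import java.net.URI;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/employee")
public class EmployeeResource {
    // Hypothetical collaborators; substitute your own persistence layer.
    private final TeamDao teamDao = new TeamDao();
    private final EmployeeDao employeeDao = new EmployeeDao();

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(EmployeeRepresentation rep) {
        Employee employee = new Employee();
        employee.setName(rep.getName());
        if (rep.getTeamHref() != null) {
            // Recognise the relative URI, e.g. "/teams/1" -> id 1.
            long teamId = Long.parseLong(rep.getTeamHref().replaceFirst("^/teams/", ""));
            employee.setTeam(teamDao.findById(teamId));
        }
        employeeDao.save(employee);
        return Response.created(URI.create("/employee/" + employee.getId())).build();
    }
}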

I would use a dedicated link type. I modelled it as an XML link tag, but it would map to the following JSON:
{
...
"links":
[
{
"href" : "/teams/1",
"rel" : "team"
},
{
"href" : "/teams/2",
"rel" : "team"
}
]
}
I prefer the above link style because it is more generic (you define the relationship through the rel attribute). To me the link concept is so important in HTTP REST that I dedicate a type of its own to it.
Beware: in some cases, for performance reasons (avoiding network calls to traverse a linked resource), you need to inline such relationships. For that you could offer a switch to return an inlined representation, e.g. /employee/123?inline=true. But only offer such gimmicks if really necessary. I once had to do it, but the implementation wasn't trivial (though my format was XML, which is more constrained by schema definitions).
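A sketch of what such a switch could look like in Jersey (the inline parameter and the two representation classes are illustrative, not a standard API):
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/employee")
public class EmployeeResource {
    private final EmployeeDao employeeDao = new EmployeeDao(); // hypothetical lookup helper

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@PathParam("id") long id,
                        @QueryParam("inline") @DefaultValue("false") boolean inline) {
        Employee employee = employeeDao.findById(id);
        if (inline) {
            // Embed the full team representation to spare the client a round trip.
            return Response.ok(new InlinedEmployeeRepresentation(employee)).build();
        }
        // Default: only links (href/rel) to the related team resources.
        return Response.ok(new LinkedEmployeeRepresentation(employee)).build();
    }
}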

REST offers the possibility to use URLs as references, too, which I find really cool. So it would look like this:
{
"team":"http://myapp.com/team/1",
"name":"Fred Bloggs",
"phone":"1234567890",
"email":"test#example.com"
}
You can avoid passing nested objects, too, by just supplying:
{
"team":"1",
"name":"Fred Bloggs",
"phone":"1234567890",
"email":"test#example.com"
}
In that case your converter must be smart enough to figure out that if the value of the team key is a string (or an integer, whatever works) and not another JSON object, it should be interpreted as an ID.
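With Jackson, for example, such a converter could be a custom deserializer for the team property (a minimal sketch; it only recovers the id, assuming the Team POJO from the question):
import java.io.IOException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;

public class TeamDeserializer extends JsonDeserializer<Team> {
    @Override
    public Team deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        Team team = new Team();
        JsonToken token = p.currentToken();
        if (token == JsonToken.VALUE_STRING || token == JsonToken.VALUE_NUMBER_INT) {
            // "team":"1" -- a bare id rather than a nested object
            team.setId(p.getValueAsLong());
        } else {
            // "team":{"id":"1", ...} -- read the tree and pick out the id
            JsonNode node = p.readValueAsTree();
            team.setId(node.get("id").asLong());
        }
        return team;
    }
}
It would then be registered on the Employee.team field with @JsonDeserialize(using = TeamDeserializer.class).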

There are multiple ways to approach a solution to this problem. It's a classic hyperlinking problem in the domain of RESTful web services. Since this has to do with Jersey, the first thing I would recommend is to avoid JAXB altogether, since JAXB (in the context of XML or JSON) isn't HATEOAS.
After wrestling a lot with Jersey and HATEOAS, I have come to the opinion that the best representation for a RESTful WS is the Atom Syndication Format coupled with JSON. For your example of Team and Employee I would take the following approach.
GET /team/ - returns a paginated Atom Syndication Feed list of teams
POST /team/ - creates a new team receiving a JSON representation
GET /team/1 - returns a paginated Atom Syndication Feed list of employees in the team with the ID 1, with a link to the team's JSON representation too.
GET /employee/ - returns a paginated Atom Syndication Feed list of employees
POST /employee/ - creates a new employee using JSON
GET /employee/1 - returns details about the employee with the ID 1 in JSON
Up to here I haven't changed much, just specified some representation details. The interesting part is adding/removing an employee from a team. For this I would add resources with the pattern:
#Path("/team/{id}/employees)
class TeamEmployees {
#Path("/{posId}")
#GET
#Produces(MediaType.APPLICATION_JSON)
public Employee get(#PathParam("posId") int positonId) {}
#Path("/{posId}")
#DELETE
public Employee remove(#PathParam("posId") int positonId) {}
#POST
#Consumes(MediaType.APPLICATION_FORM_URLENCODED)
//empUri sample is /employee/{id} server knows how to parse it
public Employee add(#FormParam("employeeUri") String empUri) {}
}
Now, what is the position id? One approach: it is a number unique across all teams, i.e. the primary key in a table holding (position_id, team_id, emp_id) tuples, ensuring that the same position_id can never occur in two teams. This way the whole scenario can be made HATEOAS.
In addition, to be able to export URIs in the JSON representation, the approach I take is: I have a DTO representing the data model for its persistent storage, and a representation model that is a hyperlinked, (de)serializable version of the DTO, where I store the hyperlink as a string value. I look at the representation model as the API and the DTO as the SPI of the RESTful WS data model.
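As a sketch of that split (names illustrative): the DTO keeps the real object reference for persistence, while the representation model carries only hyperlinks:
// SPI side: DTO mirroring the persistent data model.
class EmployeeDto {
    Long id;
    Team team;        // real object reference for the ORM
    String name;
}

// API side: representation model, with references replaced by hrefs.
class EmployeeRepresentation {
    String selfHref;  // e.g. "/employee/1"
    String teamHref;  // e.g. "/team/1" -- stored as a plain string hyperlink
    String name;
}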

DDD implementation with Spring Data and JPA + Hibernate problem with identities

So, for the first time and in a not-so-complex project, I'm trying to implement Domain-Driven Design by separating all my code into application, domain, infrastructure and interfaces packages.
I also went with the full separation of the JPA entities from the domain models that hold my business logic as rich models, using the Builder pattern to instantiate them. This approach has given me a headache, and I can't figure out whether I'm doing it all wrong when using JPA + ORM and Spring Data with DDD.
Process explanation
The application is a REST API consumer (without any user interaction) that processes a fairly big amount of data resources daily through scheduler tasks and stores or updates them in MySQL. I'm using RestTemplate to fetch and convert the JSON responses into domain objects, and from there I'm applying any business logic within the domain itself, e.g. validation, events, etc.
From what I have read, the aggregate root object should keep its identity over its whole lifecycle and should be unique. I have used the id of the REST API object because it is already something I use to identify and track things in my business domain. I have also created a property for the technical id, so that when I convert entities to domain objects it can hold a reference for the update process.
When I need to persist the domain objects to the data source (MySQL) for the first time, I convert them into entity objects and persist them using the save() method. So far so good.
Now, when I need to update those records in the data source, I first fetch them from the data source as a List of Employees and convert the entity objects to domain objects; then I fetch the list of Employees from the REST API as domain models. Up until now I have two lists of the same domain object type, List<Employee>. I iterate over them using streams and check whether corresponding objects are not equal(); if so, a third list is built with the Employee objects that need to be updated. At this point I've already passed the technical id to the domain objects in the third list, so Hibernate can identify the records that already exist and use it to update them.
Up to here it's all fairly simple stuff, until I use the saveAll() method to update the records.
Questions
I always see Hibernate using INSERT instead of updating the list of records. So, if I'm correct, the Hibernate session is not recognising the objects that I'm throwing into it because I detached them when I converted them to domain objects?
Does anyone have a better idea of how I can implement this differently, or how to fix this problem?
Or should I stop using this approach of two different objects and just keep using them as rich entity models?
Simple classes to explain it with code
EmployeeDO.java
@Entity
@Table(name = "employees")
public class EmployeeDO implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    public EmployeeDO() {}
    ...omitted getter/setters
}
Employee.java
public class Employee {
    private Long persistId;
    private Long employeeId;
    private String name;
    private Employee() {}
    ...omitted getters and Builder
}
EmployeeConverter.java
public class EmployeeConverter {
    public static EmployeeDO serialize(Employee employee) {
        EmployeeDO target = new EmployeeDO();
        if (employee.getPersistId() != null) {
            target.setId(employee.getPersistId());
        }
        target.setName(employee.getName());
        return target;
    }
    public static Employee deserialize(EmployeeDO employee) {
        return new Employee.Builder(employee.getEmployeeId())
                .withPersistId(employee.getId()) // <-- technical ID setter
                .withName(employee.getName())
                .build();
    }
}
EmployeeRepository.java
@Component
public class EmployeeRepositoryImpl implements EmployeeRepository {
    @Autowired
    EmployeeJpaRepository db;
    @Override
    public List<Employee> findAll() {
        return db.findAll().stream()
                .map(employee -> EmployeeConverter.deserialize(employee))
                .collect(Collectors.toList());
    }
    @Override
    public void saveAll(List<Employee> employees) {
        db.saveAll(employees.stream()
                .map(employee -> EmployeeConverter.serialize(employee))
                .collect(Collectors.toList()));
    }
}
EmployeeJpaRepository.java
@Repository
public interface EmployeeJpaRepository extends JpaRepository<EmployeeDO, Long> {
}
I use the same approach in my project: two different models for the domain and for the persistence.
First, I would suggest that you don't use the converter approach but the Memento pattern. Your domain entity exports a memento object and can be restored from that same object. Yes, the domain gets two functions that aren't related to the domain (they exist just to satisfy a non-functional requirement), but, on the other hand, you avoid exposing functions, getters and constructors that the domain business logic never uses.
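A minimal sketch of that Memento approach (names are mine, not from the question), where the domain entity exposes nothing beyond the export/restore pair:
// The memento is a dumb snapshot; it carries no behaviour.
final class EmployeeMemento {
    final Long persistId;
    final Long employeeId;
    final String name;

    EmployeeMemento(Long persistId, Long employeeId, String name) {
        this.persistId = persistId;
        this.employeeId = employeeId;
        this.name = name;
    }
}

class Employee {
    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    // The only two non-domain functions: export to and restore from a memento.
    EmployeeMemento toMemento() {
        return new EmployeeMemento(persistId, employeeId, name);
    }

    static Employee fromMemento(EmployeeMemento m) {
        Employee e = new Employee();
        e.persistId = m.persistId;
        e.employeeId = m.employeeId;
        e.name = m.name;
        return e;
    }
}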
As for the persistence, I don't use JPA for exactly this reason: you have to write a lot of code to reload, update and persist the entities correctly. I write SQL code directly: I can write and test it fast, and once it works I'm sure it does what I want. With the memento object I have directly what I will use in the insert/update query, and I save myself a lot of JPA headaches around handling complex table structures.
Anyway, if you want to use JPA, the only solution is to:
load the persistence entities and transform them into domain entities
update the domain entities according to the changes that you have to make in your domain
save the domain entities, that means:
reload the persistence entities
change them, or create new ones, according to the changes that you get from the updated domain entities
save the persistence entities
I've tried a mixed solution, where the domain entities are extended by the persistence ones (a bit complex to do). A lot of care should be taken to avoid letting the domain model adapt to JPA restrictions coming from the persistence model.
Here is an interesting read about splitting the two models.
Finally, my suggestion is to think about how complex the domain is and use the simplest solution for the problem:
Is it big, with a lot of complex behaviour? Is it expected to grow big? Use two models, domain and persistence, and manage the persistence directly with SQL. It avoids a lot of chaos in the read/update/save phases.
Is it simple? Then, first, should you use the DDD approach at all? If really yes, I would let the JPA annotations slip inside the domain. Yes, it's not pure DDD, but we live in the real world, and the time to do something simple the pure way should not be orders of magnitude bigger than the time needed to do it with some compromises. And, on the other hand, you can put all this stuff in an XML file in the infrastructure layer, avoiding cluttering the domain with it, as is done in the Spring DDD sample here.
When you want to update an existing object, you first have to load it through entityManager.find() and apply the changes to that object, or use entityManager.merge(), since you are working with detached entities.
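Applied to the repository from the question, that could look like this (a sketch replacing the saveAll implementation above; it loads the managed entity and copies the state onto it, so Hibernate issues an UPDATE instead of an INSERT):
@Override
public void saveAll(List<Employee> employees) {
    for (Employee employee : employees) {
        if (employee.getPersistId() != null) {
            // Existing record: load the managed entity and apply the changes to it.
            EmployeeDO managed = db.findById(employee.getPersistId())
                    .orElseThrow(IllegalStateException::new);
            managed.setName(employee.getName());
            db.save(managed);
        } else {
            // New record: the converter leaves the id null, so save() inserts.
            db.save(EmployeeConverter.serialize(employee));
        }
    }
}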
Anyway, modelling rich domain models based on JPA is the perfect use case for Blaze-Persistence Entity Views.
Blaze-Persistence is a query builder on top of JPA which supports many of the advanced DBMS features on top of the JPA model. I created Entity Views on top of it to allow easy mapping between JPA models and custom interface-defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure the way you like and map attributes (getters) via JPQL expressions to the entity model. Since the attribute name is used as the default mapping, you mostly don't need explicit mappings, as 80% of the use cases are DTOs that are a subset of the entity model.
The interesting point here is that entity views can also be updatable and support automatic translation back to the entity/DB model.
A mapping for your model could look as simple as the following:
@EntityView(EmployeeDO.class)
@UpdatableEntityView
interface Employee {
    @IdMapping("persistId")
    Long getId();
    Long getEmployeeId();
    String getName();
    void setName(String name);
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
Employee dto = entityViewManager.find(entityManager, Employee.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections (see https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features), and it can also be saved back. Here is a sample repository:
@Repository
interface EmployeeRepository {
    Employee findOne(Long id);
    void save(Employee e);
}
It will only fetch the mappings that you tell it to fetch and also only update the state that you make updatable through setters.
With the Jackson integration you can deserialize your payload onto a loaded entity view, or you can avoid loading altogether and use the Spring MVC integration to capture just the state that was transferred and flush that. This could look like the following:
@RequestMapping(path = "/employee/{id}", method = RequestMethod.PUT, consumes = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> updateEmp(@EntityViewId("id") @RequestBody Employee emp) {
    employeeRepository.save(emp);
    return ResponseEntity.ok(emp.getId().toString());
}
Here you can see an example project: https://github.com/Blazebit/blaze-persistence/tree/master/examples/spring-data-webmvc

Mapping a document with partly-defined schema

I'm writing a demo app using Spring & MongoDB as a database.
My main domain class looks like:
@Document
public class Person {
    @Id
    private String id;
    // Some other fields
    private DBObject additionalData;
}
The key point is that additionalData is a subdocument with no schema specified; it is a kind of user-defined JSON. But when I parse that JSON (using the (DBObject) JSON.parse(value) expression), it is stored as a string in MongoDB, and I need it to be a nested document structure.
I've searched for a couple of hours and found no solution. Any ideas?
I'm not really sure of the expected result of casting the result of
JSON.parse(value)
to DBObject, which is an interface, not a class.
Try casting the result to an implementation of DBObject, such as BasicDBObject (or BasicDBList), or to a Map<String, Object> as mentioned in the comments (it is also an interface, but it does work).
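For example (a sketch using the legacy com.mongodb.util.JSON helper from the question):
import com.mongodb.BasicDBObject;
import com.mongodb.util.JSON;

class AdditionalDataParser {
    // Parse the user-supplied JSON into a concrete DBObject implementation
    // so the driver stores it as a nested document rather than a string.
    static BasicDBObject parse(String value) {
        return (BasicDBObject) JSON.parse(value);
    }
}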
If you're working with Spring Data Rest, you will probably not need to deserialize "manually", Spring will do it for you. Check this answer for a basic example of what to do.
Having data with no schema specified may not be the best idea around (MongoDB saves you from doing it at the database level, but you should do it at the application level), but I use similar tricks in production, and you can somehow make it work.

Elasticsearch - what to do if fields have the same name but multiple mapping

I use Elasticsearch for storing data sent from multiple sources outside of my system, i.e. I'm not controlling the incoming data - I just receive a JSON document and store it. I have no Logstash with its filters in the middle, only ES and Kibana. Each data source sends its own data type, and all of them are stored in the same index (per tenant) but in different types. However, since I cannot control the data that is sent to me, it is possible to receive documents of different types with a field having the same name but a different structure.
For example, assume that I have type1 and type2 with field FLD, which is an object in both cases but the structure of this object is not the same. Specifically FLD.name is a string field in type1 but an object in type2. And in this case, when type1 data arrives it is stored successfully but when type2 data arrives, it is rejected:
failed to put mappings on indices [[myindex]], type [type2]
java.lang.IllegalArgumentException: Mapper for [FLD] conflicts with existing mapping in other types[Can't merge a non object mapping [FLD.name] with an object mapping [FLD.name]]
The ES documentation clearly states that fields with the same name, in the same index, in different mapping types are mapped to the same field internally and must have the same mapping (see here).
My question is what can I do in this case? I'd prefer to keep all the types in the same index. Is it possible to add a unique-per-type suffix to field names or something like this? Any other solution? I'm a newbie in Elasticsearch so maybe I'm missing something simple... Thanks in advance.
There is no way to index arbitrary JSON without pre-processing before it's indexed - not even dynamic templates are flexible enough.
You can flatten nested objects into key-value pairs and use a Nested datatype, Multi-fields, and ignore_malformed to index arbitrary JSON (even with type conflicts), as described here. Unfortunately, Elasticsearch can still throw an exception at query time if you try to, for example, match a string against kv_pairs.value.long, so you'll have to choose appropriate fields based on the format of the value.
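The flattening step itself is straightforward; a sketch in plain Java (the key/value pair layout is illustrative and must match whatever nested mapping you define):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class JsonFlattener {
    // Turns {"FLD":{"name":{"first":"x"}}} into pairs like ("FLD.name.first", "x"),
    // which can be indexed as a list of key/value objects under one nested field.
    static List<Map<String, Object>> flatten(Map<String, Object> doc) {
        List<Map<String, Object>> pairs = new ArrayList<>();
        walk("", doc, pairs);
        return pairs;
    }

    private static void walk(String prefix, Map<String, Object> node,
                             List<Map<String, Object>> pairs) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child = (Map<String, Object>) e.getValue();
                walk(key, child, pairs);
            } else {
                Map<String, Object> pair = new HashMap<>();
                pair.put("key", key);
                pair.put("value", e.getValue());
                pairs.add(pair);
            }
        }
    }
}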
It's not best practice, I suppose, but you can store the field content as a String and do the deserialization manually after retrieving the information.
So, imagine a class like:
class Person {
    private final Object name;
}
That can receive a List of Strings or a List of any other Object, just as an example.
So, instead of serializing the Person to a String and saving it, you can serialize it to a String and save the content in another class, like:
String personContent = new ObjectMapper().writeValueAsString(person);
RequestDto dto = new RequestDto(personContent);
String dtoContent = new ObjectMapper().writeValueAsString(dto);
And save the dtoContent:
IndexRequest request = new IndexRequest("persons");
request.source(dtoContent, XContentType.JSON);
IndexResponse response = client.index(request, RequestOptions.DEFAULT);
The RequestDto will be a simple class with a String field:
class RequestDto {
    private String content;
}
I'm not an expert on Elasticsearch, but you will probably lose a lot of Elasticsearch features by bypassing its validations like that.

Java Client for POSTing complex entities to a Spring Data REST / HATEOAS service

From what I can tell, there are provided means for converting a complex object to proper HAL format. This is of course leveraged in marshalling the objects in the framework itself. Resource and Link objects, etc.
For the sake of a use-case:
Company 1 is an existing Company in my system. I want to add a new Employee that works for Company 1
Below is an example Employee object that you'd receive from a Spring Data REST based service. Spring HATEOAS also provides the means to construct these objects yourself.
{
"id": null,
"firstName": "bZWthNFk",
"lastName": "GtTnrqka",
"loginId": "zTk5rT",
"active": true,
"_links": {
"company": {
"href": "http://localhost/companies/1";
}
}
}
However, this seems to not work for POSTing the object. As I understand it, that same object would have to be POSTed as:
{
"id": null,
"firstName": "bZWthNFk",
"lastName": "GtTnrqka",
"loginId": "zTk5rT",
"active": true,
"company": "http://localhost/companies/1"
}
As far as I can tell, there are no means provided by either the HATEOAS or Data REST project to produce this object for posting to a valid HAL based service, either via RestTemplate or some other means. In fact, I can't find any means of readily POSTing a complex object without some hand-marshalling. Am I wrong in assuming this?
How is one supposed to build out a valid Java SDK for service-to-service communication that leverages HATEOAS principles without this tooling to actually POST objects reliably?
Long story short, I want to post this object without having to hand serialize the URIs for associations.
public class Employee {
    private Integer id;
    @NotNull
    private Company company;
    private String firstName;
    private String lastName;
}
I've created the following improvement request in reference to this:
https://jira.spring.io/browse/SPR-12678
The approach you suggested should actually work, provided you use at least version 2.0 of Spring Data REST.
You should also have an association resource like http://app.com/employee/10/company. You can PUT a new link to that location using the media type text/uri-list or remove the company from Employee with a DELETE.
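With RestTemplate, for instance, that association PUT could look like this (a sketch reusing the URIs from the example):
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

class AssociationClient {
    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        // The association resource accepts a list of URIs as text/uri-list.
        headers.setContentType(MediaType.parseMediaType("text/uri-list"));
        HttpEntity<String> body = new HttpEntity<>("http://localhost/companies/1", headers);
        restTemplate.exchange("http://app.com/employee/10/company",
                HttpMethod.PUT, body, Void.class);
    }
}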
UPDATE
It seems I didn't address your main concern, which was clarified by your update and comments. So let's take your Employee class that has an association with a Customer.
As you can see from the JSON response you posted, the data structure that the REST API works with doesn't contain a Customer object (or Company in that case), only a link. A client would usually work with the data structure defined by the API. So customer would be a link in the first place, and there would be no need to serialize an object to a link.
If the client uses a different data structure internally, then some kind of conversion is necessary anyway. But the reason would be the different structure, not HAL or association links.

PUT method (RESTful) doesn't work as a way to update resources

According to this article (http://restcookbook.com/HTTP%20Methods/put-vs-post/), PUT is supposed to work as a method to update resources.
However, practicing REST with JAX-RS 2.0 and Jersey 2.0, I don't think it updates a particular resource.
(I.e. I'm studying REST with JAX-RS 2.0 and Jersey 2.0.)
Here is a resource like this:
<customer>
<name>Before</name>
<postcode>111</postcode>
</customer>
What I'm trying to do is to update (perhaps I should say "replace") this resource.
ClientConfig config = new ClientConfig();
Client client = ClientBuilder.newClient(config);
WebTarget target = client.target("http://xxx/yyy/zzz/end.cust");
Customer cust = new Customer();
cust.setName("After");
cust.setPostcode(222);
target.path("Before").request().put(Entity.xml(cust));
The @Id annotation is set to "name" in the Customer class, so the path "Before" is supposed to work as the ID and the first resource (named "Before") should be replaced with the second resource (named "After").
However, after the code above is executed, the "Before" resource still remains, and there is a new "After" resource.
It seems that the PUT method worked to create a new resource, instead of updating something.
(i.e. There are both "Before" and "After" resources, and nothing has been updated)
I tested a POST method in order to create a new resource, and it created a new resource as I expected.
If you see anything I'm doing wrong or what needs to be done, could you please give some advice?
Edit:
I'll add the server-side code. The method annotated with @PUT is like this:
@PUT
@Path("{id}")
@Consumes({"application/xml", "application/json"})
public void edit(@PathParam("id") String id, Customer entity) {
    super.edit(entity);
}
This is inside a class called CustomerFacadeREST.java, automatically created after I created a "RESTful service from Database".
According to the NetBeans documentation, the super.edit() method is originally like this:
public void edit(T entity) {
    getEntityManager().merge(entity);
}
In the "Customer" class, #Id is set to the "name" value in this way.
public class Customer implements Serializable {
    private static final long serialVersionUID = 1L;
    @Id
    @Basic(optional = false)
    @NotNull
    @Size(min = 1, max = 80)
    @Column(name = "Name")
    private String name;
    // Other fields, such as Postcode...
    public Customer() {
    }
    // Other constructors and methods...
}
The idea behind "HTTP Verbs" like PUT, GET, POST, DELETE are just a matter of protocol semantics. Just performing an HTTP PUT operation doesn't do anything magical. It's just proper semantics we as developers should understand, while developing, as these semantics are known to all (that's why protocols exist). If no one followed these semantics, the world would be somewhere between the Great Depression and the Apocalypse.
That being said, these verbs (semantics) are a sort of guarantee (or maybe assurance is a better word) that the client performing the request with a certain verb will get some known semantics. One major factor is the idea of idempotence. Idempotence is the idea that no matter how many times I make a request, the result will be the same (or have the same effect).
Certain HTTP verbs are said to be idempotent, such as PUT, DELETE and GET. No matter how many times we make the exact same request, the general idea is that the result/effect should be the same. POST, on the other hand, is said not to be idempotent, as the exact same POST request may produce different results, for example wrongly submitting an order again, or creating a new customer twice.
If we want to make the world a better place, and do our part in saving it from a complete meltdown, we should learn these semantics and be good citizens by following them. There's a lot more to learn about the verb semantics than just idempotence, but understanding that much is a good start. I'd suggest picking up a good book on REST to learn some good practices. Or, if you want to be a cool kid, take the time to read the bible (actually the Fielding dissertation).
All that being said, it's our job as developers to write the code that follows these semantics. The reason your method is creating a new resource is probably because you are creating a new resource in your code. Maybe something like this would be more appropriate:
@PUT
@Path("/customers/{id}")
@Consumes(MediaType.APPLICATION_JSON)
public Response updateCustomer(@PathParam("id") long id,
                               Customer updateCustomer) {
    Customer customer = customerService.getCustomerById(id);
    if (customer == null) {
        throw new WebApplicationException("Can't find it", 404);
    }
    customer.setFirstName(updateCustomer.getFirstName());
    customer.setLastName(updateCustomer.getLastName());
    ...
    return Response.noContent().build();
}
So we just update the customer that already exists in our database. Normally, with a PUT request to update, the particular customer resource URI should be known. So say the client makes a request to http://blah.com/api/customers/1234: our service will look up the customer with the id 1234. If it can't be found, we return a 404 status code, as the resource doesn't exist. If it does exist, then we update the customer with the customer data provided in the request. If you wanted to create a new customer, where the URI is not known, then POST would be correct, and you'd send a customer representation to http://blah.com/api/customers.
Also, just an FYI: in many cases like this, what happens is that the client requests (GET) a resource, say a customer, updates that customer representation, then sends it back in a PUT request with the updated customer. On the server, we use that information to update the particular customer's data, as you can see from the example above.
UPDATE
Per your edit. You are completely missing the point of how this is supposed to work.
Customer cust = new Customer();
cust.setName("After");
cust.setPostcode(222);
target.path("Before").request().put(Entity.xml(cust));
What's wrong with this is that with the new Customer you are setting the identifier to "After", which is different from the identifier in the request path, where you are using "Before". So the path variable {id} is "Before". With this request URI you are saying that you want to access the customer with the id "Before". As seen in my code, it's your duty to check whether a customer with the id "Before" exists in the database. If not, you should return a 404 Not Found. The name (id) you set on the new Customer should be the id expected in the database. So if you want to update the customer whose id in the database is "After", then you should put "After" in the path instead of "Before". We should not try to change the identifier.
Like I said, when we want to update a resource, we normally GET the resource, update some field (but not the identifier), and send it back. A sequence might look something like:
final String PATH = "http://hello.com/api/customers";
WebTarget target = client.target(PATH);
Customer customer = target.path("1234").request().get(Customer.class);
// where 1234 is the id (or in your case the `name`) of the customer.
// I would avoid using the name as the DB id; that's why my example uses numbers.
customer.setPostalCode(...);
target = client.target(PATH).path(customer.getName()); // getName should be 1234
Response response = target.request().put(Entity.xml(customer));
We are using the same id we were given, in the path, because that is how the resource is identified on the server.
