Which naming pattern should I use for basic CRUD? - java

I'm creating a RESTful API for the first time using the Spring Framework, and now I'm a bit confused about the common labels used to create, read, update and delete. I want to follow a pattern that keeps the code easy to maintain. Is there any rule or naming pattern for the labels that I should follow?
I'm thinking of:
/service -> returns all services
/service/new -> create new service
/service/update -> update service
/service/delete -> delete service

Use the HTTP verb to control what you want to do with the resources:
GET: /services -> returns all elements
GET: /services/{id} -> returns element with id
POST: /services -> creates a new object, pass the object in the body
PUT: /services/{id} -> updates element with id, pass updated values in body
DELETE: /services/{id} -> delete element with id
I strongly recommend using query params for paging in GET /services; return a default page size and page 1 if they are not specified.
A full request could look like: http://www.example.com/services?page=5&count=10
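A minimal controller sketch of this scheme in Spring follows; ServiceEntity and ServiceRepository are illustrative names, not from the question, and the repository methods are assumptions:
import java.util.List;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/services")
public class ServiceController {

    private final ServiceRepository repository; // hypothetical data-access layer

    public ServiceController(ServiceRepository repository) {
        this.repository = repository;
    }

    // GET /services?page=5&count=10 -> paged list, defaults applied if absent
    @GetMapping
    public List<ServiceEntity> list(@RequestParam(defaultValue = "1") int page,
                                    @RequestParam(defaultValue = "10") int count) {
        return repository.findPage(page, count); // assumed repository method
    }

    // GET /services/{id} -> returns element with id
    @GetMapping("/{id}")
    public ServiceEntity get(@PathVariable long id) {
        return repository.findById(id);
    }

    // POST /services -> creates a new object, passed in the body
    @PostMapping
    public ServiceEntity create(@RequestBody ServiceEntity service) {
        return repository.save(service);
    }

    // PUT /services/{id} -> updates element with id, new values in the body
    @PutMapping("/{id}")
    public ServiceEntity update(@PathVariable long id, @RequestBody ServiceEntity service) {
        return repository.update(id, service);
    }

    // DELETE /services/{id} -> deletes element with id
    @DeleteMapping("/{id}")
    public void delete(@PathVariable long id) {
        repository.deleteById(id);
    }
}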

Spring Elasticsearch - bulk save multiple indices in one line?

I have multiple documents with different index name that bulk saves in elasticsearch:
public void bulkCreateOrUpdate(List<Person> personUpdateList, List<Address> addressUpdateList, List<Position> positionUpdateList) {
    this.operations.bulkUpdate(personUpdateList, Person.class);
    this.operations.bulkUpdate(addressUpdateList, Address.class);
    this.operations.bulkUpdate(positionUpdateList, Position.class);
}
However, is it still possible to optimize this by making just a single call that saves multiple lists of different index types?
TL;DR;
The bulk API certainly allows for it.
This is a valid call:
POST _bulk
{"index":{"_index":"index_1"}}
{"data":"data"}
{"index":{"_index":"index_2"}}
{"data":"data"}
How your Java client deals with it ... I am not sure.
Solution
Java client - Bulk
This could be done:
BulkRequest.Builder br = new BulkRequest.Builder();
br.operations(op -> op
    .index(idx -> idx
        .index("index_1")
        .id("1")
        .document(document)
    )
);
br.operations(op -> op
    .index(idx -> idx
        .index("index_2")
        .id("1")
        .document(document)
    )
);
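The built request is then executed in a single call; a sketch, assuming client is an ElasticsearchClient, following the error-checking pattern from the Elasticsearch docs:
BulkResponse result = client.bulk(br.build());
if (result.errors()) {
    for (BulkResponseItem item : result.items()) {
        if (item.error() != null) {
            System.err.println(item.error().reason()); // per-item failure reason
        }
    }
}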
Java Rest Client - Bulk
This could be done this way:
BulkRequest request = new BulkRequest();
request.add(new IndexRequest("index_1").id("1")
        .source(XContentType.JSON, "data", "data"));
request.add(new IndexRequest("index_2").id("1")
        .source(XContentType.JSON, "data", "data"));
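Again, one call sends both index operations; a sketch, assuming client is a RestHighLevelClient:
BulkResponse bulkResponse = client.bulk(request, RequestOptions.DEFAULT);
if (bulkResponse.hasFailures()) {
    System.err.println(bulkResponse.buildFailureMessage());
}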
For Spring Data Elasticsearch:
The ElasticsearchOperations.bulkXXX() methods take a List<IndexQuery> as their first parameter. You can set an index name on each of these objects to specify which index the data should be written to or updated in. The index name taken from the last parameter (either the entity class or an IndexCoordinates object) is only used when no index name is set in the IndexQuery.
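A sketch of what that looks like, assuming a Spring Data Elasticsearch 4.x version where IndexQuery exposes setIndexName; the index names and the person/address/operations variables are illustrative:
import java.util.ArrayList;
import java.util.List;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.mapping.IndexCoordinates;
import org.springframework.data.elasticsearch.core.query.IndexQuery;

// one bulk call spanning two indices: set the index name per query
List<IndexQuery> queries = new ArrayList<>();

IndexQuery personQuery = new IndexQuery();
personQuery.setId(person.getId());
personQuery.setObject(person);
personQuery.setIndexName("person-index"); // overrides the fallback below
queries.add(personQuery);

IndexQuery addressQuery = new IndexQuery();
addressQuery.setId(address.getId());
addressQuery.setObject(address);
addressQuery.setIndexName("address-index");
queries.add(addressQuery);

// the IndexCoordinates argument is only a fallback for queries without an index name
operations.bulkIndex(queries, IndexCoordinates.of("person-index"));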

Substitute ints into Dataflow via Cloudbuild yaml

I've got a streaming Dataflow pipeline, written in Java with Beam 2.35. It commits data to BigQuery via the Storage Write API. Initially the code looks like:
BigQueryIO.writeTableRows()
.withTimePartitioning(/* some column */)
.withClustering(/* another column */)
.withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
.withTriggeringFrequency(Duration.standardSeconds(30))
.withNumStorageWriteApiStreams(20) // want to make this dynamic
This code runs in different environments, e.g. Dev and Prod. When I deploy to Dev I want 2 StorageWriteApiStreams, in Prod I want 20, and I'm trying to pass/resolve these values at the moment I deploy with Cloud Build.
The cloudbuild-dev.yaml looks like:
steps:
  - lots-of-steps
    args:
      --numStorageWriteApiStreams=${_NUM_STORAGEWRITEAPI_STREAMS}
substitutions:
  _PROJECT: dev-project
  _NUM_STORAGEWRITEAPI_STREAMS: '2'
I expose the substitution in the job code with an interface
ValueProvider<String> getNumStorageWriteApiStreams();
void setNumStorageWriteApiStreams(ValueProvider<String> numStorageWriteApiStreams);
I then refactor the writeTableRows() call to invoke getNumStorageWriteApiStreams():
BigQueryIO.writeTableRows()
.withTimePartitioning(/* some column */)
.withClustering(/* another column */)
.withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
.withTriggeringFrequency(Duration.standardSeconds(30))
.withNumStorageWriteApiStreams(Integer.parseInt(String.valueOf(options.getNumStorageWriteApiStreams())))
Now it's dynamic but I get a build failure on account of java.lang.IllegalArgumentException: methods with same signature getNumStorageWriteApiStreams() but incompatible return types: [class java.lang.Integer, interface org.apache.beam.sdk.options.ValueProvider]
My understanding was that Integer.parseInt returns an int, which is what I want so I can pass it to withNumStorageWriteApiStreams(), which requires an int.
I'd appreciate any help I can get here, thanks.
Turns out BigQueryOptions.java already has a method getNumStorageWriteApiStreams() that returns an Integer. I was unknowingly trying to redeclare it with a different return type, oops.
https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryOptions.java#L95-L98
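So the fix is to drop the custom option and read the built-in one; a sketch, with the rest of the pipeline setup omitted:
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.joda.time.Duration;

// read the built-in option instead of redeclaring it on a custom interface
BigQueryOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(BigQueryOptions.class);

BigQueryIO.writeTableRows()
        .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
        .withTriggeringFrequency(Duration.standardSeconds(30))
        // the Integer from the option auto-unboxes to the int the sink expects
        .withNumStorageWriteApiStreams(options.getNumStorageWriteApiStreams());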

Gmail API aliases.getSendAs().get(0).getDisplayName() returns empty (only for 0, not for the next ones)

I'm using the Gmail API for Java and everything is working well for sending e-mails, collecting data from my profile, etc.
The only problem is that, while I can get the signature for the 0th element of the list of SendAs aliases, I can't get the display name: it returns an empty String. Both work for the other aliases (get(1) and subsequent indices). The problem seems to be specific to index 0; I tried with different authenticated users that have a name set, and it stays the same.
ListSendAsResponse aliases = service.users().settings().sendAs().list("me").execute();
SendAs mimmo = aliases.getSendAs().get(0);
actualsign = mimmo.getSignature();
sendername = mimmo.getDisplayName();
In the Gmail API, there are two different ways to retrieve alias(es):
getSendAs() and SendAs.Get(java.lang.String userId, java.lang.String sendAsEmail)
The first one returns a list of all aliases, the second returns one alias resource - the one matching the specified userId and sendAsEmail parameters.
If what you want to do is retrieve the first element of the getSendAs() response, you should do it with getSendAs()[0] and not with the Java method get.
Sample:
SendAs mimmo = aliases.getSendAs()[0];
System.out.println(mimmo.getDisplayName());
It is always useful to test what response a method returns with the "Try this API" feature; there, the userId can be set to me.
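For reference, the second retrieval method from above looks like this; a sketch, with a hypothetical alias address:
// fetch one specific alias by userId and sendAsEmail
SendAs alias = service.users().settings().sendAs()
        .get("me", "alias@example.com")
        .execute();
System.out.println(alias.getDisplayName());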

Can not modify value in JavaRDD

I have a question about how to update JavaRDD values.
I have a JavaRDD<CostedEventMessage> with message objects containing information about which partition of a Kafka topic each should be written to.
I'm trying to change the partitionId field of such objects using the following code:
rddToKafka = rddToKafka.map(event -> repartitionEvent(event, numPartitions));
where the repartitionEvent logic is:
private CostedEventMessage repartitionEvent(CostedEventMessage costedEventMessage, int numPartitions) {
    costedEventMessage.setPartitionId(1);
    return costedEventMessage;
}
But the modification does not happen.
Could you please advise why, and how to correctly modify values in a JavaRDD?
Spark is lazy, so from the code you pasted above it's not clear whether you actually performed any action on the JavaRDD (like collect or foreach) or how you came to the conclusion that the data was not changed.
For example, if you assumed that by running the following code:
List<CostedEventMessage> messagesLst = ...;
JavaRDD<CostedEventMessage> rddToKafka = javaSparkContext.parallelize(messagesLst);
rddToKafka = rddToKafka.map(event -> repartitionEvent(event, numPartitions));
Each element in messagesLst would have its partition set to 1, you are wrong.
That would hold true if you added for example:
messagesLst = rddToKafka.collect();
For more details, refer to the documentation.
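To see the laziness in isolation, here is a minimal self-contained sketch (not the asker's code):
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class LazyDemo {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("lazy-demo").setMaster("local[*]");
        try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
            JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3));
            JavaRDD<Integer> mapped = rdd.map(x -> x * 10); // nothing executes yet
            List<Integer> result = mapped.collect();        // action: map runs now
            System.out.println(result);                     // [10, 20, 30]
        }
    }
}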

Mule ESB (Groovy Script): How would I check a value and add a new key value mapping to a java.util.LinkedList

I am calling a service that returns data from a DB in the form of a LinkedList. I need to update the LinkedList with a new field called "status", which is determined from endDate:
endDate > current date => status=deactivated
endDate <= current date => status=active
Mule Payload Class: java.util.LinkedList
Mule Payload: [{serialNumber=, maintenanceId=12345, customerID=09890, startDate=2017-10-10 23:34:17, endDate=2018-10-10 23:34:17},{serialNumber=, maintenanceId=09091, customerID=74743, startDate=2014-8-16 23:34:17, endDate=2019-8-16 23:34:17}]
The issue I am having in Mule is that I am unable to navigate into the linked list to retrieve a value, or to add a new value to the list. I'm hoping someone could give me some advice on the best way to move forward. I am trying to use a Groovy transformer to update the payload, but it's not going so well, so I don't have any code to show.
Thanks for taking the time!
I had a similar requirement (the payload was JSON, but this should work as well) and this is what I did using DataWeave (I added your data so it can be easier to understand).
%dw 1.0
%output application/java
---
flowVars.input2 map {
    serialNumber: $.serialNumber,
    maintenanceId: $.maintenanceId,
    customerID: $.customerID,
    startDate: $.startDate,
    endDate: $.endDate,
    status: "deactivated" when $.endDate as :date {format: "yyyy-M-dd HH:mm:ss"} > (now as :date {format: "yyyy-M-dd HH:mm:ss"}) otherwise "activated"
}
With this transformation you iterate the list and add the status value based on your requirement.
Input example:
[{"serialNumber":"test1", "maintenanceId":"12345", "customerID":"09890", "startDate":"2017-10-10 23:34:17", "endDate":"2018-10-10 23:34:17"},{"serialNumber":"test2", "maintenanceId":"09091", "customerID":"74743", "startDate":"2014-8-15 23:34:17", "endDate":"2018-8-15 23:34:17"}]
Output example:
[{"serialNumber":"test1","maintenanceId":"12345","customerID":"09890","startDate":"2017-10-10 23:34:17","endDate":"2018-10-10 23:34:17","status":"deactivated"},{"serialNumber":"test2","maintenanceId":"09091","customerID":"74743","startDate":"2014-8-15 23:34:17","endDate":"2018-8-15 23:34:17","status":"activated"}]
Hope this helps you.
