I have a Redis cache that is populated with items asynchronously. I'm trying to get this endpoint to return a stream from that Redis cache once an item has appeared.
@GET
@Path("/myRedis/{id}")
@Produces(MediaType.SERVER_SENT_EVENTS)
public Multi<String> stream(@PathParam String id) {
    return redisCache.get(id); // this returns a String
}
So say we have two users, A and B:
A requests item a by calling localhost:8080/myRedis/1. The item does not exist yet, but a connection to the streaming endpoint has been created, and after, say, 2 seconds the item appears and A receives it.
Meanwhile B requests item b by calling localhost:8080/myRedis/2. That item already exists, so B receives it immediately.
How can I modify the code above to achieve this?
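One way to achieve this is to turn the single cache lookup into a stream that repeatedly re-checks the cache and emits the value once it appears (in Quarkus you would typically express this with Mutiny, e.g. a ticking Multi filtered on a non-null lookup). Since the exact Redis client API isn't shown, here is a library-agnostic sketch of the polling idea using only the JDK; `CachePoller`, `awaitValue`, and the poll interval are illustrative names, not Quarkus or Redis APIs:

```java
import java.util.Map;
import java.util.concurrent.*;

// Library-agnostic sketch: poll the cache until the requested key
// appears, then complete. In Quarkus the same loop would be written
// as a Mutiny Multi that ticks, looks up the key, and emits the
// first non-null value.
public class CachePoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public CompletableFuture<String> awaitValue(Map<String, String> cache,
                                                String key,
                                                long pollMillis) {
        CompletableFuture<String> result = new CompletableFuture<>();
        // Check the cache on every tick; complete once the value exists.
        ScheduledFuture<?> task = scheduler.scheduleAtFixedRate(() -> {
            String value = cache.get(key);
            if (value != null) {
                result.complete(value);
            }
        }, 0, pollMillis, TimeUnit.MILLISECONDS);
        // Stop polling as soon as the future is done.
        result.whenComplete((v, t) -> task.cancel(false));
        return result;
    }
}
```

In the real endpoint the equivalent loop would be wrapped in a Multi, and the SSE stream completed after the first emitted item, so both the "item already exists" and the "item appears 2 seconds later" cases fall out of the same code path.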
I have the following scenario. Below is a minimal version of what I am trying to do in a simple Spring Boot REST API controller and service.
void func() {
    String lv = vService.getlv("1.2.1");
    String mv = vService.getMv("1.3.1");
}
@Service
public class VService {

    public String getlv(String version) {
        JsonArray lvVersions = makeHTTPGETLv();
        String result = getVersion(lvVersions, version);
        return result;
    }

    public String getMv(String version) {
        JsonArray mvVersions = makeHTTPGETMv();
        String result = getVersion(mvVersions, version);
        return result;
    }

    private String getVersion(JsonArray versions, String version) {
        Map<String, String> versionMap = new HashMap<>();
        for (JsonElement je : versions) {
            JsonObject jo = je.getAsJsonObject();
            // some logic
            versionMap.put(jo.get("ver").getAsString(), jo.get("rver").getAsString());
        }
        return versionMap.get(version);
    }
}
The getVersion method builds a map internally every time it is called, and both getlv and getMv invoke it. In the call getVersion(lvVersions, version), the lvVersions value will always be the same: a JSON array containing the mapping between versions. So each time getlv is called we make a GET request (makeHTTPGETLv) and then search for the mapping corresponding to the given version. Because this data is essentially static, it could be cached: the repeated GET calls to makeHTTPGETLv could be avoided, since it always returns the same JSON array (though it may change every few days), and the map built inside getVersion is likewise unchanging.
Since the JsonArray is the response of a GET request and could change, there should be logic to refresh the cache every x minutes, where x might be 5 or 60. How do I achieve this in Spring Boot?
Also, to avoid the cost of the first call, I could load the cache eagerly at startup.
Steps taken:
I tried using the @Cacheable annotation on the functions. It didn't have any effect, so it seems I need some more logic there. I could extract a separate function that builds the map currently built inside getVersion, and use that map for every call. But the map needs to be refreshed periodically (every 5 minutes, 60 minutes, or even once a day), which requires a GET call and rebuilding the map. That reduces the problem to updating the map every x minutes: I could then fetch the corresponding version directly from the map and avoid the GET call and response parsing on every request.
A similar, reduced/alternate problem:
class Test {
    private Map<String, String> map;

    private void buildMap() {
        // rebuild the map here
    }
}
The buildMap function updates the map field. Is there a way to have buildMap called every 30 minutes so that the map stays up to date? That sounds like a cron job. I'm not sure whether caching helps here or whether a cron job is the answer, and how to achieve that in Spring Boot.
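In Spring Boot the usual combination is @Cacheable on the lookup plus a @Scheduled method that evicts or rebuilds the cache (@CacheEvict / @CachePut), with caching switched on via @EnableCaching; @Cacheable alone has no effect without that. The reduced problem above can also be sketched with the plain JDK. In this sketch `loader` stands in for the HTTP GET plus parsing step and is an assumption, not the real makeHTTPGETLv:

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Supplier;

// Plain-JDK sketch of "refresh a map every x minutes": the map is
// built eagerly once at startup and then rebuilt on a fixed schedule,
// so lookups never trigger an HTTP call. In Spring the rebuild would
// instead live in a @Scheduled(fixedRate = ...) method.
public class RefreshingCache {
    private volatile Map<String, String> map;
    private final Supplier<Map<String, String>> loader;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public RefreshingCache(Supplier<Map<String, String>> loader,
                           long refreshMillis) {
        this.loader = loader;
        this.map = loader.get();                    // eager initial load
        scheduler.scheduleAtFixedRate(
                () -> { map = loader.get(); },      // periodic rebuild
                refreshMillis, refreshMillis, TimeUnit.MILLISECONDS);
    }

    public String getVersion(String version) {
        return map.get(version);                    // pure map lookup
    }
}
```

The `volatile` field makes each rebuilt map visible to reader threads without locking; lookups between refreshes may briefly see slightly stale data, which matches the "changes occur over days" assumption in the question.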
I want to merge 2 responses and return a Flux.
private Flux<Response<List<Company>, Error>> loopGet(List<Entity> registries, Boolean status) {
    return Flux.fromIterable(registries)
            .flatMap(this::sendGetRequest)
            .mergeWith(Mono.just(fetch(status)));
}
This is what I am doing. It works, but I would like the merge to wait before calling Mono.just(fetch(status)). To explain: sendGetRequest returns a Mono that makes an API call and saves the result to the database. The merge then queries the database via the fetch method, but at that point the data has not been updated yet; if I make the call again afterwards, I get the updated data.
You need concatWith and Mono.fromCallable to ensure that fetch is called lazily, after the GET requests have finished:
private Flux<Response<List<Company>, Error>> loopGet(List<Entity> registries, Boolean status) {
    return Flux.fromIterable(registries)
            .flatMap(this::sendGetRequest)
            .concatWith(Mono.fromCallable(() -> fetch(status)));
}
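The key difference is evaluation time: in `mergeWith(Mono.just(fetch(status)))`, the argument `fetch(status)` is evaluated eagerly, while the pipeline is being assembled and before any GET request has completed. `Mono.fromCallable` defers the call until subscription, and `concatWith` subscribes only after the upstream Flux completes. A plain-Java sketch of that eager-vs-lazy distinction, where a `Supplier` plays the role of `fromCallable` and the log records when each value is computed:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Demonstrates why fromCallable matters: an eagerly passed argument
// is computed before the "upstream work", a Supplier only when asked.
public class EagerVsLazy {
    public static List<String> run() {
        List<String> log = new ArrayList<>();

        // Like Mono.just(fetch(...)): the argument is evaluated right here.
        String eager = fetch(log, "eager");

        // Like Mono.fromCallable(() -> fetch(...)): only a recipe is built.
        Supplier<String> lazy = () -> fetch(log, "lazy");

        log.add("get-requests-finished");   // upstream work happens here

        log.add(eager);                     // this value was computed too early
        log.add(lazy.get());                // this value is computed only now
        return log;
    }

    private static String fetch(List<String> log, String tag) {
        log.add("fetch-" + tag);
        return "result-" + tag;
    }
}
```

In the log, `fetch-eager` appears before `get-requests-finished` while `fetch-lazy` appears after it, which is exactly the stale-read symptom described above.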
I am working with Spring Boot, trying to send data from one database to another. First I did this by making a GET request to read the data from the first database and a POST through WebClient to send it to the other database. It worked! But when I tried to do the same with a cron scheduler using the @Scheduled annotation, the data is not posted. The function itself runs fine (I verified this by printing from it, and the data is correct), but WebClient never posts it.
The cron class is:
@Component
public class NodeCronScheduler {

    @Autowired
    GraphService graphService;

    @Scheduled(cron = "*/10 * * * * *")
    public void createAllNodesFiveSeconds() {
        graphService.saveAlltoGraph("Product");
    }
}
The saveAlltoGraph function takes all the tuples from a Product table and sends a POST request to the graph database's API, which creates a node from each tuple.
Here is the function:
public Mono<Statements> saveAlltoGraph(String label) {
    JpaRepository currentRepository = repositoryService.getRepository(label);
    List<Model> allModels = currentRepository.findAll();
    Statements statements = statementService.createAllNodes(allModels, label);
    //System.out.println(statements);
    return webClientService.sendStatement(statements);
}
First, the label "Product" is used to get the JpaRepository for that table. Then we fetch all the tuples of that table into a list and create objects from them (we can use a serializer to get the JSON).
Here is the sendStatement function:
public Mono<Statements> sendStatement(Statements statements) {
    System.out.println(statements);
    return webClient.post().uri("http://localhost:7474/db/data/transaction/commit")
            .body(Mono.just(statements), Statements.class)
            .retrieve()
            .bodyToMono(Statements.class);
}
Everything works when we call saveAlltoGraph via a GET request mapping, but not with the scheduler.
I tried adding .block() and .subscribe() to it, and things started working with the cron scheduler.
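That fix makes sense: a Mono is a cold, lazy pipeline, and building it does nothing until someone subscribes. A request-mapped endpoint returns the Mono to the framework, which subscribes when writing the HTTP response; the @Scheduled method, by contrast, built the Mono and discarded it without ever subscribing, so the POST never ran. A minimal sketch of this "nothing happens until subscribe" behavior, using a toy publisher class (not Reactor's API):

```java
import java.util.function.Consumer;
import java.util.function.Supplier;

// Toy cold publisher: constructing it performs no work at all.
// Side effects (here, the flag and the supplied computation) run
// only when subscribe() is called, mirroring why the @Scheduled
// method did nothing until .block()/.subscribe() was added.
public class ColdPublisher<T> {
    private final Supplier<T> work;
    public boolean executed = false;

    public ColdPublisher(Supplier<T> work) {
        this.work = work;   // just stores the recipe
    }

    public void subscribe(Consumer<T> consumer) {
        executed = true;    // side effects happen only now
        consumer.accept(work.get());
    }
}
```

In the scheduler, `.subscribe()` is the fire-and-forget option, while `.block()` additionally waits for completion, which is acceptable in a scheduled method because it runs on a scheduler thread rather than an event loop.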
I'm creating tasks with various properties, passing the JSON data from an Angular frontend to a Java backend. Assignee is currently a property of the Task class.
A new request came in to change the behavior: The user should be able to select multiple assignees upon creating a new task.
The way I want to handle this is to create as many tasks as there are assignees passed in. So if n users are passed along with the task data, n tasks are created in the DB, one per assignee.
Previously I could only pass one assignee and the code for returning the POST request's response was the following:
@POST
@Consumes(MediaType.APPLICATION_JSON)
public Response save(TaskInDto taskInDto) {
    // saving to DB, etc...
    String taskId = createdTask.getId().toString();
    URI taskUri = uriInfo.getAbsolutePathBuilder().path(taskId).build();
    return Response.created(taskUri).build();
}
My question is regarding REST design: What should I return as a Result object to the user if multiple objects were created?
If a POST request creates multiple objects, clients will expect a response entity containing links to each created resource.
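Since a single Location header (as produced by Response.created) can only point at one resource, one common shape is to return 200 OK with a body listing one link per created task. A minimal sketch of such a body; the `buildBody` helper, the JSON field names, and the `/tasks/{id}` URI layout are all assumptions for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical response-body builder for "POST created n tasks":
// emits a JSON object with one href per created resource, e.g.
// {"created":[{"href":"/tasks/1"},{"href":"/tasks/2"}]}
public class MultiCreateResponse {
    public static String buildBody(List<String> createdIds) {
        return createdIds.stream()
                .map(id -> "{\"href\":\"/tasks/" + id + "\"}")
                .collect(Collectors.joining(",", "{\"created\":[", "]}"));
    }
}
```

In a real JAX-RS resource this string (or, better, an equivalent DTO serialized by the framework) would be passed to Response.ok(...), keeping the 201-with-Location form for the single-assignee case.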
I want to solve the following problem; it's about deleting entities from a database:
The user selects Delete for a certain entity.
The entity is deleted from the database and disappears from the list.
An undo frame appears inside the page (like Twitter Bootstrap alert messages), where the user can choose to undo the deletion.
I don't know how to realize this, because at the moment I solve it this way:
The Delete button links to the URL delete/entity_id.
My request handler has an if-case for this URL that deletes the entity.
After the deletion is done, I send response.sendRedirect("/list") so the updated list is shown.
This way I cannot send additional data along with the redirect. Normally I would pass the extra data through the template, but with a redirect this is not possible.
How is such a situation solved?
I have a few such scenarios in my web application, and here is how I solve them. I have a class called MessageQueue which looks like the following:
public class MessageQueue {

    private static final Map<String, Object> messages = new ConcurrentHashMap<>();

    public static void putMessage(String key, Object obj) {
        messages.put(key, obj);
    }

    public static Object getMessage(String key) {
        if (key == null)
            return null;
        // remove() returns the previous value (or null), so each
        // message can be read exactly once
        return messages.remove(key);
    }
}
This class stays in memory. Before a redirect I create whatever object is needed after the redirect, generate a random GUID as a String, and store the object in the MessageQueue under that GUID. I then add the GUID as a parameter of the URL:
String justDeletedId = "someId";
String guid = UUID.randomUUID().toString();
MessageQueue.putMessage(guid, justDeletedId);
response.sendRedirect("/list?msgid=" + guid);
After the redirect you can inspect the msgid parameter, take the object out of the MessageQueue, and do whatever you please.
I chose to allow each object to be used only once, to avoid a memory leak.
In the current version I have also implemented a last-access eviction policy, using a Quartz job that cleans up the MessageQueue periodically.
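The same eviction idea can be sketched without Quartz, using a JDK ScheduledExecutorService that periodically sweeps entries older than a maximum age. The class and parameter names below are illustrative, not the original implementation:

```java
import java.util.concurrent.*;

// MessageQueue variant with time-based eviction: each entry carries a
// creation timestamp, and a scheduled sweep removes anything older
// than maxAgeMillis, so abandoned undo messages cannot leak memory.
public class ExpiringMessageQueue {
    private static final class Entry {
        final Object value;
        final long createdAt = System.currentTimeMillis();
        Entry(Object value) { this.value = value; }
    }

    private final ConcurrentMap<String, Entry> messages = new ConcurrentHashMap<>();

    public ExpiringMessageQueue(ScheduledExecutorService scheduler,
                                long maxAgeMillis, long sweepMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            long cutoff = System.currentTimeMillis() - maxAgeMillis;
            messages.values().removeIf(e -> e.createdAt < cutoff);
        }, sweepMillis, sweepMillis, TimeUnit.MILLISECONDS);
    }

    public void putMessage(String key, Object obj) {
        messages.put(key, new Entry(obj));
    }

    public Object getMessage(String key) {
        Entry e = key == null ? null : messages.remove(key); // read-once
        return e == null ? null : e.value;
    }
}
```

This keeps the read-once semantics of the original class while bounding how long an unread message can survive.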
You could use the setAttribute() and getAttribute() methods of HttpSession; after all, that's one way to pass Java objects across different HTTP requests.
In your case you could create such an Undo object and store it in the session. After the redirect you described, the session object is retrieved and its content is passed to the template.