First of all, I'm gathering information about this question so that I can implement this feature in a more elegant way.
Let's look at the picture below.
The target server (green circle)
This is an API server that I use to fetch some data.
Features:
HTTPS connections only
Responses in JSON format.
Accepts GET requests like [ https://api.server.com/user=1&option&api_key=? ]
Proxy controller (blue square)
It's a simple server that stores a list of proxies and sends and receives some data. I want to talk about the software that I will run on top of it.
Features:
Proxy list
API keys list
I think it should be a hashmap that stores an ip => token mapping, or a database table if I want to scale the application (a rough sketch follows after this list).
Workers
They just analyze a JSON response and pass the data to the DB.
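As a rough sketch of that in-memory state (the class and method names are my own assumptions, not an existing design):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal sketch of the proxy controller's in-memory state:
// each proxy IP is mapped to the API key (token) it should use.
public class ProxyRegistry {

    private final Map<String, String> proxyToApiKey = new ConcurrentHashMap<>();

    public void register(String proxyIp, String apiKey) {
        proxyToApiKey.put(proxyIp, apiKey);
    }

    public String apiKeyFor(String proxyIp) {
        return proxyToApiKey.get(proxyIp);
    }
}
```

Swapping this class for a DAO backed by a database table is the scaling path mentioned above.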
Let's take a closer look at the proxy controller server.
The first idea:
Create a fixed thread pool with Executors.newFixedThreadPool
Pass the URL/token to a worker: server.submit(new Worker(url, token, proxy))
The worker analyzes the data and passes it to the DB (sketched below).
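A minimal sketch of this idea, assuming a placeholder Worker whose fetch/parse/DB steps are only comments:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FirstIdea {

    // Placeholder worker: fetches the URL through the given proxy with the
    // given API token, parses the JSON response and stores the result.
    static class Worker implements Runnable {
        private final String url;
        private final String token;
        private final String proxy;

        Worker(String url, String token, String proxy) {
            this.url = url;
            this.token = token;
            this.proxy = proxy;
        }

        @Override
        public void run() {
            // 1. execute GET url + "&api_key=" + token via the proxy
            // 2. parse the JSON response
            // 3. write the result to the DB
        }
    }

    public static void main(String[] args) {
        ExecutorService server = Executors.newFixedThreadPool(8); // pool size chosen arbitrarily here
        server.submit(new Worker("https://api.server.com/user=1&option=1", "token-1", "10.0.0.5:3128"));
        server.shutdown();
    }
}
```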
But in my opinion this solution is quite big and hard to maintain; I want to expose an endpoint that gathers stats, kills or spawns workers, and so on.
The second idea:
A worker generates a request like https://host/user=1&option=1.
It passes the request to the proxy controller.
The proxy controller assigns an API key and a proxy server to the request.
It executes the request.
It accepts the response.
It passes the response back to the worker (I think the best idea is to put a load balancer between the workers and the proxy controller).
This solution seems quite hacky to me. For example, if a worker is dead, the proxy controller sends a bunch of requests to the dead worker, which could lead to data loss.
The third idea:
The same as the second, but instead of sending data directly to a worker, the proxy controller passes it to some bus. I found some information about Apache Camel that would allow me to organize this solution. In this case a dead worker is just a dead worker, and data loss is zero (maybe).
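If Camel were the bus, the consumer side on a worker might look roughly like this; the JMS endpoint name and the broker configuration are assumptions on my part, and any Camel-supported transport would do:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch of the third idea's consumer side: the proxy controller drops
// responses onto a queue, and any live worker picks them up. A dead worker
// simply stops consuming; nothing is lost while the broker holds the messages.
public class ResponseRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("jms:queue:api-responses")   // assumed queue name; requires a configured JMS component
            .process(exchange -> {
                String json = exchange.getIn().getBody(String.class);
                // parse the JSON and write the result to the DB
            });
    }
}
```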
Of course, none of the three cases handles errors. Some errors can be solved by resending the request with additional data; some can be solved by respawning the workers.
So, in your opinion, what is the best solution in this case? Am I missing hidden problems that could appear later? Which tools should I use?
Thanks
What are you trying to achieve?
Maybe consider using this architecture:
NGINX (proxy + load balancing) -> WORKER SERVERS -> DB SERVER (maybe use some NoSQL like Cassandra)
Related
I'm fairly new to REST APIs and working on a product where client X interacts with n servers (Y1, Y2, …, Yn) to retrieve different types of data from the backend, using POST requests.
Now I also want to retrieve some metadata related to each server (file names, project name, etc.) for our internal use case in client X. Note: this should be a separate request.
How should I implement this using REST?
Can I use the OPTIONS method for this?
I tried implementing this with the GET method but I'm not sure if it's the best approach.
Since you are going to retrieve information, GET is the most appropriate. POST should instead be used to 'insert' fresh new data. I would suggest taking a look at the meaning of all the HTTP verbs (POST, GET, PUT, PATCH, DELETE) in order to understand them.
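For instance, a separate read-only metadata endpoint per server could look roughly like this (a Spring-style sketch; the path, the ServerMetadata fields, and the hard-coded values are assumptions for illustration):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class MetadataController {

    // Hypothetical metadata payload for a server.
    public record ServerMetadata(String serverId, String projectName, List<String> fileNames) {}

    // Client X issues a plain GET: no request body, no side effects.
    @GetMapping("/servers/{serverId}/metadata")
    public ServerMetadata metadata(@PathVariable String serverId) {
        // In a real service this would be looked up, not hard-coded.
        return new ServerMetadata(serverId, "demo-project", List.of("report.csv", "summary.txt"));
    }
}
```

Because the call only reads state, GET keeps it cacheable and safe to retry; OPTIONS is meant for describing communication options, not for returning resource data.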
This might seem like something easily found on the internet, but believe me, I have gone through a lot of examples and couldn't figure out which approach to choose.
Requirement:
I have a subscriber at the application service (Spring Boot/Java) end, subscribed to blockchain events (Corda). I want to push these events to the UI (ReactJS) whenever there is a change in state.
I was able to subscribe to the blockchain events successfully, but I'm stuck with multiple incomplete or tangled ideas about pushing them to the UI and how the UI would receive my events (kindly don't suggest paid services, APIs, libraries, etc.).
I have come across and tried out all of the approaches below; since I'm new to working with events, I need some ray of light on how to arrive at a complete solution.
Publisher-subscriber pattern
Observable pattern
SseEmitter
Flux & Mono
Firebase (a clear NO)
+ Boggler:
For event handling between the service and the UI, should it go via API/endpoint calls, or can events just be emitted 'into the air' (I'm not clear), with the UI subscribing to them by event name?
Should I have two APIs dedicated to this? One to trigger the subscription and another that actually executes the emitter?
If the endpoint is always being listened on, doesn't it need a dedicated resource?
I basically need a CLEAR approach to handle this.
Code can be provided on demand.
I see you mention you are able to capture events in Spring Boot, so you are left with sending the event information to the front end. I can think of three ways to do this.
WebSockets: might be overkill, as I suppose you won't need bi-directional communication.
SSE: perhaps a better choice than WebSockets (a minimal sketch follows below).
Or simply polling: not a bad choice either, if you are not looking for real-time notifications.
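To make the SSE option concrete, a minimal Spring Boot sketch might look like this; the endpoint path, the event name, and the way the blockchain subscriber calls publish() are assumptions:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

import java.io.IOException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

@RestController
public class EventStreamController {

    // Connected UI clients.
    private final List<SseEmitter> emitters = new CopyOnWriteArrayList<>();

    // The React client opens an EventSource against this endpoint.
    @GetMapping("/api/events")
    public SseEmitter subscribe() {
        SseEmitter emitter = new SseEmitter(Long.MAX_VALUE); // effectively no timeout
        emitter.onCompletion(() -> emitters.remove(emitter));
        emitter.onTimeout(() -> emitters.remove(emitter));
        emitters.add(emitter);
        return emitter;
    }

    // Call this from the blockchain subscriber whenever a state change arrives.
    public void publish(String eventJson) {
        for (SseEmitter emitter : emitters) {
            try {
                emitter.send(SseEmitter.event().name("stateChange").data(eventJson));
            } catch (IOException e) {
                emitters.remove(emitter); // client went away
            }
        }
    }
}
```

On the React side, new EventSource('/api/events') plus addEventListener('stateChange', ...) would consume the stream, so a single GET endpoint is enough; there is no need for a separate 'trigger' API.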
Yes, long polling.
The solution seems pretty simple: make the connection once and let it wait for as long as possible, so that if any new data arrives at the server in the meantime, the server can send the response back directly. This way we can definitely reduce the number of request/response cycles involved.
You will find multiple examples on the internet of how long polling is implemented in a Spring Boot project.
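As a rough illustration (not a complete implementation), Spring's DeferredResult can carry the waiting request; the path, the timeout, and the shape of onEvent() are assumptions:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

@RestController
public class LongPollController {

    // Requests currently parked and waiting for an event.
    private final Queue<DeferredResult<String>> waiting = new ConcurrentLinkedQueue<>();

    // The UI calls this in a loop; each call is held open until an event or the timeout.
    @GetMapping("/api/poll")
    public DeferredResult<String> poll() {
        DeferredResult<String> result = new DeferredResult<>(30_000L, "{}"); // 30s timeout, empty default
        result.onCompletion(() -> waiting.remove(result));
        waiting.add(result);
        return result;
    }

    // Call this from the event subscriber when new data arrives.
    public void onEvent(String eventJson) {
        DeferredResult<String> parked;
        while ((parked = waiting.poll()) != null) {
            parked.setResult(eventJson);
        }
    }
}
```

While a request is parked, the servlet thread is released, so holding many idle clients does not pin one thread per client.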
I have a process in my web app; it's just a sequence of CRUD requests. The order is usually strict; the only difference is that the ID is different for every process. I'd like to do "asynchronous" data loading. For example:
A user makes a request for Step 1.
The server gives them a response, and as soon as it has built the complete response it starts building the data for steps 2, 3, 5, ... and puts it in a cache.
My question is: could you advise a mechanism that would keep this simple and readable? That's the main requirement. Does Spring have something for this? (WebFlux doesn't fit the architecture well.)
I think you can use the producer-consumer design pattern.
For example:
1. Get a request from a client, do step 1 work.
2. Respond to the client.
3. Make a task entity, then add it to a task queue.
4. The HTTP worker thread ends.
5. Do steps 2-N in another thread and put the results in the cache.
For frameworks, Akka is a good choice.
RxJava may also be good.
And you can implement it yourself, as sketched below.
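A "do it yourself" sketch with a plain BlockingQueue and an in-memory cache might look like this; the Task record, the step methods, and the cache keys are placeholders of my own:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class PrefetchService {

    // Hypothetical task: which process to pre-compute the remaining steps for.
    record Task(String processId) {}

    private final BlockingQueue<Task> queue = new LinkedBlockingQueue<>();
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public PrefetchService() {
        // Single consumer thread; add more if the later steps are independent.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    Task task = queue.take();   // blocks until a task is produced
                    cache.put(task.processId() + ":step2", loadStep2(task.processId()));
                    cache.put(task.processId() + ":step3", loadStep3(task.processId()));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    // Producer side: called from the controller right after the Step 1 response is built.
    public void scheduleRemainingSteps(String processId) {
        queue.offer(new Task(processId));
    }

    public String cached(String key) {
        return cache.get(key);
    }

    // Placeholders for the real CRUD calls behind the later steps.
    private String loadStep2(String id) { return "step2-data-for-" + id; }
    private String loadStep3(String id) { return "step3-data-for-" + id; }
}
```

The same shape maps directly onto Akka (the consumer becomes an actor) or onto a Spring @Async method plus a cache abstraction.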
I am trying to get the information of cars from the Tesla server through its API. And I want to do it concurrently, i.e. fetch the information of multiple cars in parallel using Akka actors.
My Approach:
(1) First get the total number of cars.
(2) Create actors equal to the number of cars.
(3) Inside each actor, call the REST API to get the information of the car in parallel, i.e. each actor will be provided with a URL containing the car ID.
Am I doing it right regarding my approach?
Specifically, in point number 3, I have made the call to the Tesla server inside each actor using AsyncHttpClient from com.ning. Will using AsyncHttpClient inside each actor ensure that each actor sends its request asynchronously to the server without blocking other actors?
I will provide further information if need be. I am a beginner in Akka. I have looked at a lot of threads but could not find exactly what I was looking for.
Specifically for point number 3: as long as you use a Future-based API in your actors, the actors will not block.
In general it is hard to tell much more about your approach without knowing why you chose to use one actor per car.
Consider this question: why couldn't you simply create a listOfCars: List[String] of URLs and use Future.traverse(listOfCars)(downloadCarDataForUrl _)?
Finally, I don't know how AsyncHttpClient behaves, but I would double-check that, if you have a list of thousands of cars, AsyncHttpClient does not try to download all of them concurrently; if that's the case, you risk being blocked quite quickly by the API provider. If this becomes a problem, you could look into akka-http, which only uses a limited number of connections to a given host.
I'm writing a Java web service with CXF. I have the following problem: a client calls a method from the web service. The web service has to do two things in parallel and starts two threads. One of the threads needs some additional information from the client. It is not possible to add this information when calling the web service method, because it depends on the calculation done in the web service. I cannot redesign the web service because it is part of a course assignment, and the assignment states that I have to do it this way. I want to pause the thread and notify it when the client delivers the additional information. Unfortunately it is not possible in Java to notify a particular thread. I can't find any other way to solve my problem.
Has anybody a suggestion?
I've edited my answer after thinking about this some more.
You have a fairly complex architecture and if your client requires information from the server in order to complete the request then I think you need to publish one or more 'helper' methods.
For example, you could publish (without all the web service annotations):
MyData validateMyData(MyData data);
boolean processMyData(MyData data);
The client would then call validateMyData() as many times as it liked, until it knew it had complete information. The server can modify (through calculation, database look-up, or whatever) the variables in MyData in order to help complete the information and pass it back to the client (for updating the UI, if there is one).
Once the information is complete the client can then call processMyData() to process the complete request.
This has the advantage that the server methods can be implemented without the need for background threads as they should be able to do their thing using the request-thread supplied by the server environment.
The only caveat to this is if MyData can get very large and you don't want to keep passing it back and forth between client and server. In that case you would need to come up with a smaller class that just contains the changes the server wants to make to MyData and excludes data that doesn't need correcting.
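To make the round trip concrete, the client side might look roughly like this; the MyServicePort stub, the isComplete() helper, and the retry cap are assumptions of mine, not part of the suggested design:

```java
// Hypothetical client-side use of the two helper methods from the answer above.
public class MyDataClient {

    // Assumed shape of the generated service stub (names match the answer above).
    public interface MyServicePort {
        MyData validateMyData(MyData data);
        boolean processMyData(MyData data);
    }

    // Assumed data holder with a completeness check; the real MyData is whatever the assignment defines.
    public interface MyData {
        boolean isComplete();
    }

    public boolean submit(MyServicePort port, MyData data) {
        MyData current = data;
        int attempts = 0;
        // Keep asking the server to validate/enrich the data until it is complete.
        while (!current.isComplete() && attempts < 10) {
            current = port.validateMyData(current); // server fills in or corrects fields
            attempts++;
        }
        if (!current.isComplete()) {
            return false; // give up instead of looping forever
        }
        return port.processMyData(current); // single call that does the real work
    }
}
```

On the server side, validateMyData() and processMyData() can each run on the container's request thread, so no background thread or notify() is needed.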
IMO it's pretty odd for a web service request to effectively be incomplete. Why can't the request pass all the information in one go? I would try to redesign your service like that, and make it fail if you don't pass in all the information required to process the request.
EDIT: Okay, if you really have to do this, I wouldn't actually start a new thread when you receive the first request. I would store the information from the first request (whether in a database or just in memory if this is just a dummy one) and then when the second request comes in, launch the thread.