I have to develop a client-server architecture in Java (server side), and I need some advice.
Situation:
a server that exposes an action decrementValue
the server has an integer variable called value
clients can send a decrementValue request to the server
for each request, the server does:
-- if value > 0, value = value - 1 and answer with the new value to the client
-- if value = 0, answer "impossible operation"
So, if value = 2 and 3 requests arrive at the same time, only 2 of them can decrement value.
What is the best solution for this?
How can I guarantee exclusive access to the value stored on the server if several requests arrive from clients at the same time?
Thank you.
That depends on what you mean by best solution. If you just want your program to behave correctly in a concurrent environment, then you should synchronize access to data that is concurrently modified. That is the standard (and readable) way to enforce exclusive access to shared data.
However, if this is not your question, but rather what would be the most theoretically efficient way to do it in Java (or if this is called in an extremely concurrent context), then my suggestion is as follows:
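For instance, a minimal sketch of that standard synchronized approach (the class and method names here are illustrative, not from the question):

```java
public class Counter {
    private int value;

    public Counter(int initialValue) {
        this.value = initialValue;
    }

    // synchronized gives each request thread exclusive access to value,
    // so concurrent decrements can never drive it below zero
    public synchronized String decrementValue() {
        if (value <= 0) {
            return "impossible operation";
        }
        value--;
        return "new value=" + value;
    }
}
```

Each server thread handling a client request calls `decrementValue()`; the intrinsic lock on the `Counter` instance serializes the check-and-decrement.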
static final AtomicInteger atomicInteger = new AtomicInteger(initialValue);
...
if (atomicInteger.get() <= 0) {
    return "impossible. value=0";
}
int result = atomicInteger.decrementAndGet();
if (result < 0) {
    atomicInteger.incrementAndGet(); // revert the effect of decrementing
    return "impossible. value=0";
} else {
    return "new value=" + result;
}
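An equivalent sketch that avoids the transient negative value (and the compensating increment) by using a compare-and-set loop; the class and method names are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger value;

    public AtomicCounter(int initialValue) {
        this.value = new AtomicInteger(initialValue);
    }

    public String decrementValue() {
        while (true) {
            int current = value.get();
            if (current <= 0) {
                return "impossible. value=0";
            }
            // Only decrement if no other thread changed the value in between;
            // otherwise loop and re-check against the fresh value.
            if (value.compareAndSet(current, current - 1)) {
                return "new value=" + (current - 1);
            }
        }
    }
}
```

With this variant the stored value is never observed below zero by any thread.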
As I wrote in the title, we need one thread to notify, or execute a method of, another thread. This implementation is part of long polling. The following text describes and shows my implementation.
The requirements are:
UserX sends a request from the client to the server (poll action) immediately after he gets the response to the previous one. The service executes a Spring async method, in which a thread immediately checks the cache to see whether there are new data in the database. I know that a cache is usually used for methods where a specific input is expected to produce a specific output. That is not the case here, because I use the cache to reduce database calls, and the output of my method is always different. So the cache helps me store a notification telling me whether I should check the database or not. This check runs in a while loop that ends when the thread finds a notification in the cache to read the database, or when the time expires.
Assume that the UserX thread (poll action) is currently in the while loop, checking the cache.
At that moment UserY (push action) sends some data to the server; the data are stored in the database in a separate thread, and the userId of the recipient is also stored in the cache.
So when UserX checks the cache, he finds the id of the recipient (the recipient's id == his own id in this case), breaks out of the loop, and fetches the data.
So in my implementation I use the Google Guava cache, which allows manual writes.
private static Cache<Long, Long> cache = CacheBuilder.newBuilder()
        .maximumSize(100)
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .build();
In the create method I store the id of the user who should read the data.
public void create(Data data) {
    dataRepository.save(data);
    // Guava's Cache uses put(), and null values are not allowed,
    // so store the recipient id as the value as well
    cache.put(data.getRecipient(), data.getRecipient());
    System.out.println("SAVED " + data.getRecipient() + " in " + Thread.currentThread().getName());
}
and here is the method that polls for data:
@Async
public CompletableFuture<List<Data>> pollData(Long previousMessageId, Long userId) throws InterruptedException {
    // check the db first; if there are new data, there is no need to enter the loop and wait
    List<Data> data = findRecent(previousMessageId, userId);
    // data not found, so enter the loop for some time
    if (data.size() == 0) {
        short c = 0;
        while (c < 100) {
            // check whether some new data were added; if yes, break the loop
            if (cache.getIfPresent(userId) != null) {
                break;
            }
            c++;
            Thread.sleep(1000);
            System.out.println("SEQUENCE: " + c + " in " + Thread.currentThread().getName());
        }
        // check the database at the end of the loop or after breaking out of it
        data = findRecent(previousMessageId, userId);
    }
    // clear the cache entry for that recipient and return the result
    cache.invalidate(userId);
    return CompletableFuture.completedFuture(data);
}
After UserX gets the response, he sends the poll request again and the whole process repeats.
Can you tell me whether this application design for long polling in Java (Spring) is correct, or whether there is a better way? The key point is that when a user makes a poll request, the request should be held open for some time waiting for new data, rather than answered immediately. The solution shown above works, but the question is whether it will also work for many users (1000+). I worry about this because pausing threads will slow down other requests once no threads are available in the pool. Thanks in advance for your effort.
Check out WebSockets. Spring supports them from version 4 onwards. They don't require the client to initiate polling; instead, the server pushes data to the client in real time.
Check the below:
https://spring.io/guides/gs/messaging-stomp-websocket/
http://www.baeldung.com/websockets-spring
Note - WebSockets open a persistent connection between client and server and thus may result in more resource usage with a large number of users. So, if you are not looking for real-time updates and are fine with some delay, then polling might be the better approach. Also, not all browsers support WebSockets.
Web Sockets vs Interval Polling
Longpolling vs Websockets
In what situations would AJAX long/short polling be preferred over HTML5 WebSockets?
In your current approach, if you are concerned about a large number of threads running on the server for multiple users, you can instead trigger the polling from the front end each time. That way only short-lived request threads are triggered from the UI, each looking for any update in the cache. If there is an update, another call can be made to retrieve the data. However, don't hit the server every second as you are doing now, or you will have high CPU utilization and user request threads may also suffer. You should optimize your timing.
Instead of hitting the cache after a delay of 1 second, 100 times, you can apply a smarter algorithm by analyzing the pattern of cache/DB updates over a period of time.
Knowing the pattern, you can trigger the polling with exponential back-off, hitting the cache when an update is most likely. That way you will hit the cache less frequently and more accurately.
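A minimal sketch of such an exponential back-off poll. The initial delay and cap are assumptions, and a plain `ConcurrentHashMap` stands in for the Guava cache so the sketch is self-contained:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BackoffPoller {
    // Stand-in for the Guava cache in the question
    static final Map<Long, Long> cache = new ConcurrentHashMap<>();

    // Polls until a notification for userId appears in the cache
    // or the timeout passes; returns whether one was found.
    static boolean pollWithBackoff(long userId, long timeoutMs) throws InterruptedException {
        long delayMs = 250;               // initial delay (assumption)
        final long maxDelayMs = 4000;     // back-off cap (assumption)
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cache.get(userId) != null) {
                return true;              // notification found
            }
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                break;
            }
            Thread.sleep(Math.min(delayMs, remaining));
            delayMs = Math.min(delayMs * 2, maxDelayMs); // exponential back-off
        }
        return cache.get(userId) != null;
    }
}
```

Compared to the fixed 1-second loop, this checks the cache often at first and progressively less, which cuts the number of wake-ups per held request.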
I'm using snmp4j to try and perform SNMP functions against a remote agent. Due to a number of limitations out of our control I need to perform a GETBULK to obtain a large table in a short space of time.
My current implementation:
public Map<String, String> doGetBulk(@NotNull VariableBinding... vbs)
        throws IOException {
    Map<String, String> result = new HashMap<>();
    Snmp snmp = null;
    try {
        // Create TransportMapping and listen
        TransportMapping transport = new DefaultUdpTransportMapping();
        snmp = new Snmp(transport);
        transport.listen();

        PDU pdu = new PDU();
        pdu.setType(PDU.GETBULK);
        pdu.setMaxRepetitions(200);
        pdu.setNonRepeaters(0);
        pdu.addAll(vbs);

        ResponseEvent responseEvent = snmp.send(pdu, this.target);
        PDU response = responseEvent.getResponse();

        // Process agent response
        if (response != null) {
            for (VariableBinding vb : response.getVariableBindings()) {
                result.put("." + vb.getOid().toString(), vb.getVariable().toString());
            }
        } else {
            LOG.error("Error: Agent Timeout... ");
        }
    } catch (NullPointerException ignore) {
        // The variable table is null
    } finally {
        if (snmp != null) snmp.close();
    }
    return result;
}
However, this only ever returns 100 results when I know there are 5000+. I know I can't exceed the PDU size, so I have no problem with the response being truncated into blocks of 100, but I can't work out how to chain the requests so as to get the next 100 entries.
It is bad practice to use MaxRepetitions > 100, due to IP packet fragmentation and the nature of UDP, which does not guarantee the order of packets. So most SNMP frameworks and agents have such a built-in limit.
All the details are already in the RFC document:
https://www.rfc-editor.org/rfc/rfc1905
Section 4.2.3 tells how the agent side should handle GETBULK requests:
While the maximum number of variable bindings in the Response-PDU is bounded by N + (M * R), the response may be generated with a lesser number of variable bindings (possibly zero) for either of three reasons.

(1) If the size of the message encapsulating the Response-PDU containing the requested number of variable bindings would be greater than either a local constraint or the maximum message size of the originator, then the response is generated with a lesser number of variable bindings. This lesser number is the ordered set of variable bindings with some of the variable bindings at the end of the set removed, such that the size of the message encapsulating the Response-PDU is approximately equal to but no greater than either a local constraint or the maximum message size of the originator. Note that the number of variable bindings removed has no relationship to the values of N, M, or R.

(2) The response may also be generated with a lesser number of variable bindings if for some value of iteration i, such that i is greater than zero and less than or equal to M, that all of the generated variable bindings have the value field set to the `endOfMibView'. In this case, the variable bindings may be truncated after the (N + (i * R))-th variable binding.

(3) In the event that the processing of a request with many repetitions requires a significantly greater amount of processing time than a normal request, then an agent may terminate the request with less than the full number of repetitions, providing at least one repetition is completed.
For how to do a series of proper GETBULK operations to query all the data you want, refer to section 4.2.3.1 for an example.
You've set the maximum repetition count to 200, which means the server may send you at most 200 rows. So on the one hand, you'll never get more than 200 rows (let alone 5000 or more). On the other hand, the server may decide to send you fewer rows; it's practically the server's choice - you only tell it what you're able to process.
Usually you request 10-50 rows at most. (BTW: there are many servers out there with buggy SNMP implementations, and the higher you set max-repetitions, the higher the chance you get nothing at all.)
So you have to request row set by row set. Since you probably don't want to implement that yourself, I'd recommend using the TableUtils class. Just start with getTable().
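A rough sketch of how getTable() might be used here, assuming the snmp4j TableUtils API and an already-configured target (the column OIDs are just an example; this needs a live agent, so treat it as a starting point rather than tested code):

```java
// Sketch only: assumes snmp4j on the classpath and a configured `target`.
TransportMapping transport = new DefaultUdpTransportMapping();
Snmp snmp = new Snmp(transport);
transport.listen();

TableUtils tableUtils = new TableUtils(snmp, new DefaultPDUFactory(PDU.GETBULK));
tableUtils.setMaxNumRowsPerPDU(50); // modest bulk size, per the advice above

// Column OIDs of the table to walk (example: ifDescr and ifType)
OID[] columns = { new OID("1.3.6.1.2.1.2.2.1.2"), new OID("1.3.6.1.2.1.2.2.1.3") };

// getTable() issues as many GETBULK requests as needed and returns all rows
List<TableEvent> rows = tableUtils.getTable(target, columns, null, null);
for (TableEvent row : rows) {
    if (row.isError()) {
        continue; // or log row.getErrorMessage()
    }
    for (VariableBinding vb : row.getColumns()) {
        if (vb != null) {
            System.out.println(vb.getOid() + " = " + vb.getVariable());
        }
    }
}
snmp.close();
```

TableUtils does the request chaining for you: it keeps issuing GETBULKs from the last received index until the table is exhausted, which is exactly the cascading the question asks about.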
Imagine that we have a client/server app. The client sends directories to the server, and the server stores them.
The client protocol is as follows for communicating with the server.
Send the clientId. [say client1]
Send the metadata for the directory.
Send the directory. [/tmp/client1]
The server does the following in response.
Verifies the clientId.
Analyse the metadata for the client directory.
Store the file at [/tmp/server/client1].
Usually we want the server to be multithreaded so that it can serve multiple clients at a time. For each client, a new thread is spawned to take care of it.
So here is the part of the server code that reads the dir from the client.
public DirServer readDir() throws IOException, ClassNotFoundException {
    DirClient clientDir = (DirClient) objectInputStream.readObject();
    String serverDirPath = "/tmp/server" + "/" + clientId;
    List<FileServer> serverFiles = new ArrayList<>();
    DirServer dirServer = null;
    synchronized (lockObject) {
        FileUtils.deleteDirectory(new File(serverDirPath));
        Path pathToDir = Paths.get(serverDirPath);
        Files.createDirectories(pathToDir.getParent());
        for (int i = 0; i < clientDir.getNumberOfFiles(); i++) {
            serverFiles.add(readFile());
        }
        dirServer = new DirServer(clientDir.getFullPath(), serverDirPath, serverFiles);
    }
    return dirServer;
}
So say we have two threads in the thread pool to serve the clients for now.
Case 1:
Thread 1: client1
Thread 2: client1
i.e. two instances of client1 from different machines try to contact the server. For this case, synchronized access to the code block is desirable, because both threads access the same path /tmp/server/client1.
Case 2:
Thread 1: client1
Thread 2: client2
For this case it is far more efficient not to synchronize access to the code block, as the two threads deal with different paths, /tmp/server/client1 and /tmp/server/client2.
How should one achieve this conditional synchronization? I.e., synchronize only when accessing the same directory, and don't synchronize otherwise.
Note that doing synchronized(clientDir) won't work, because this object is read over the network. So although two clientDir objects might be logically the same, they are actually two different references.
You can assume that you can call clientDir.getClientId() to get the id for a client.
Since you need to synchronize on the "logical clientDir", you may use the client id as a key to look up a dummy lock object in a hashmap, and synchronize on that.
Writing to the hashmap should be synchronized, but reading need not be. And the number of writes cannot exceed the number of "logical client dirs".
== EDIT ==
A read that returns an existing dummy object is OK (since these objects behave like finals) and needs no synchronization.
A read that returns null requires a re-read and (possibly) a write inside synchronization.
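A minimal sketch of this idea using ConcurrentHashMap.computeIfAbsent, which performs the write-if-absent atomically so the re-read-then-write dance is handled for you (class and method names are hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;

public class DirLocks {
    // One dummy lock object per client id. computeIfAbsent guarantees that
    // concurrent threads asking for the same id get the same Object back.
    private static final ConcurrentHashMap<String, Object> LOCKS = new ConcurrentHashMap<>();

    public static Object lockFor(String clientId) {
        return LOCKS.computeIfAbsent(clientId, id -> new Object());
    }
}
```

Then `synchronized (DirLocks.lockFor(clientDir.getClientId())) { ... }` serializes requests for the same client while letting different clients proceed in parallel. Note the map grows with the number of distinct client ids; if that matters, entries can be evicted, or a striped-lock utility used instead.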
Instead of locking on a global variable as you do now:
synchronized (lockObject) { } // where lockObject is global
lock on a session (client) scoped object, one per session, kept in a simple list:
class ClientSession {
    public long id;

    ClientSession(long id) { this.id = id; }

    // don't forget to implement equals
    @Override
    public boolean equals(Object o) {
        return o instanceof ClientSession && ((ClientSession) o).id == id;
    }
}
//---
Vector<ClientSession> sessions = new Vector<>(); // Vector or whatever
//---
// for any client request
ClientSession sx = null;
synchronized (sessions) {
    int idx = sessions.indexOf(new ClientSession(clientId));
    if (idx == -1) {
        sessions.add(sx = new ClientSession(clientId));
    } else {
        sx = sessions.elementAt(idx);
    }
}
//---
// now lock the client/session
synchronized (sx /* instead of lockObject */) { /* lock only one session/client */ }
Now 10 different clients can be served in parallel, but two requests from the same client will be handled serially.
I am developing a REST-based web application which calls a third system asynchronously for some data (using websockets). So:
Browser -> REST -> My WebApp -> Another App -> My WebApp -> Browser
The communication between My WebApp and Another App is asynchronous, and I can only track the responses for a request using some identifiers.
So I send request C as <counter>.C, and the response will be <counter>.Response, where both counters are the same.
To map the response to the request, I set the command, counter, and flag on a bean. I keep a while loop that keeps checking whether the flag has been set. Once I get the response, the flag is set, the while loop exits, and I know that the data are available.
Is this the right way? Is there a way I can make this better? I feel (I might be wrong!) that keeping an open while loop like this is incorrect.
The bean is set up like below:
public void setAllProperties() {
    bean.setCommand(commandString);
    bean.setCounter(counter);
    bean.setHasResponse(false);
}
The snippet in the web service is:
bean.setAllProperties();
sendToApplication(bean);
int checkCounter = 0;
while (!bean.hasResponse && checkCounter < 1000) {
    checkCounter++;
    // loggers and other logic here
}
The loop defeats much of the value of the asynchronous operation. It also consumes a significant amount of CPU time (try adding a delay - a quick sleep - inside such a loop and watch the difference).
I recommend using wait() and notify()/notifyAll() instead.
In the code that's waiting for a response, do something like this:
synchronized (bean) {
    while (!bean.hasResponse) {
        bean.wait();
    }
}
In the code that processes the response and updates the bean:
synchronized (bean) {
    bean.hasResponse = true;
    bean.notifyAll();
}
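Put together as a self-contained sketch (the ResponseBean class here is hypothetical, standing in for the poster's bean):

```java
public class WaitNotifyDemo {
    // Hypothetical stand-in for the poster's bean
    static class ResponseBean {
        boolean hasResponse;
        String payload;
    }

    // The waiting side: blocks without burning CPU until notified.
    // The while loop guards against spurious wakeups.
    static String awaitResponse(ResponseBean bean) throws InterruptedException {
        synchronized (bean) {
            while (!bean.hasResponse) {
                bean.wait();
            }
            return bean.payload;
        }
    }

    // Simulates the thread that receives the response from the other system
    static void respondLater(ResponseBean bean, String payload) {
        new Thread(() -> {
            try {
                Thread.sleep(100); // pretend the remote call takes a while
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            synchronized (bean) {
                bean.payload = payload;
                bean.hasResponse = true;
                bean.notifyAll(); // wake up anyone blocked in wait()
            }
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        ResponseBean bean = new ResponseBean();
        respondLater(bean, "42.Response");
        System.out.println("got: " + awaitResponse(bean));
    }
}
```

The waiting thread is parked by the JVM instead of spinning, and wakes up only when the response side calls notifyAll() on the same bean.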
I want to draw five different routes with the Google Maps API v3 and GWT 2.5.1. I initialize a route, which sets its DirectionsDisplay and DirectionsRequest in this class.
When I start my web project, sometimes only my first route is shown, sometimes all five, so I decided to add a System.out.print(m);.
The results:
01234 -> as expected, all routes shown
10234 -> error, only the first route is shown.
Why does Google serve my second request before my first? I tried using Thread.sleep(1000) to ensure that my requests have time to come back in order, and also Timer/TimerTask, without success. Any ideas?
DirectionsService o = DirectionsService.newInstance();
for (Integer i = 0; i < 4; i++) { // routes.size()
    final int m = i;
    final Route route = new Route("Route " + i.toString());
    route.initRoute(m, getRoutingPresenter(), adressData, addressIndex);
    // here I initialize the DirectionsRequests and their displays, which
    // I set in this class after execution.
    o.route(directionsRequest, new DirectionsResultHandler() {
        @Override
        public void onCallback(DirectionsResult result, DirectionsStatus status) {
            if (status == DirectionsStatus.OK) {
                System.out.print(m);
                ...
            }
        }
    });
}
Google can take as long as they like to handle your requests, and you should code accordingly. This is true of any HTTP traffic. Even if the remote server guaranteed a fixed service time for all requests, the Internet does not, and your requests could be taking any old route through it.
You can either write your handling code so that the response order doesn't matter, or write it so that it waits until all responses are back and then sorts out the order itself.
I would recommend the first unless there are very specific and important reasons not to.
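A minimal sketch of the first approach: store each result in a slot keyed by its request index, so it no longer matters in which order the callbacks arrive (the DirectionsResult type is stubbed as a plain String to keep the sketch self-contained):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ResultCollector {
    private final String[] results;        // one slot per request, in request order
    private final AtomicInteger remaining; // callbacks still outstanding

    public ResultCollector(int requestCount) {
        this.results = new String[requestCount];
        this.remaining = new AtomicInteger(requestCount);
    }

    // Called from each callback with the index captured at request time
    // (like the `final int m` in the loop above). Returns true when this
    // was the last outstanding response.
    public boolean onCallback(int index, String result) {
        results[index] = result;
        return remaining.decrementAndGet() == 0;
    }

    public String[] results() {
        return results.clone();
    }
}
```

Each handler calls `collector.onCallback(m, result)`; whichever response arrives last sees `true` and can then draw all routes in the correct order, regardless of arrival order.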