Relevant Code
-- Note: Instructions is merely a class with several methods which operate on the data. A new thread is created to operate on the data read.
READ THREAD:
while(true) {
    System.out.println(".");
    if(selector.select(500) == 0)
        continue;
    System.out.println("processing read");
    for(SelectionKey sk : selector.keys()) {
        Instructions ins = myHashTable.get(sk);
        if(ins == null) {
            myHashTable.put(sk, new Instructions(sk));
            ins = myHashTable.get(sk);
        }
        ins.readChannel();
    }
}
READCHANNEL
public void readChannel() {
    BufferedReader reader = new BufferedReader(Channels.newReader((ReadableByteChannel) this.myKey.channel(), "UTF-8"));
    Worker w = new Worker(this, reader.readLine());
    (new Thread(w)).start();
}
The new thread then calls more Instructions methods.
When the Instructions method finishes, it might write to a Writer:
Writer out = Channels.newWriter((WritableByteChannel) key.channel(), "UTF-8");
out.write(output);
out.flush();
I can confirm that my client (a flash movie), then receives and acts on the output.
Finally, w exits.
After the first message from the client is received and successfully processed, however, no more messages are picked up by the READ THREAD loop. I believe the key is registered with the selector and ready to read: I have checked by looping over all the keys with isReadable() and isRegistered() on the channel, and the result is true in every case so far. When a second message is sent from the client, however, the only response I see in the read thread is that the '.' character is printed not every half second but continuously. I believe, then, that the data is there in the channel, but for some reason the Selector isn't selecting any key.
Can anyone help me?
I think you are missing a few points here.
Firstly, you should iterate over selector.selectedKeys() in the for loop,
as mentioned by Vijay.
Secondly, you should remove each key from selectedKeys
after it has been processed. Keys are not
removed automatically, so the selector may spin continuously even
if only one key has its interest-ops bit set. (This is likely
the issue in your case.)
Finally, perform operations on a
channel only when the channel is ready for them: read only if
isReadable() returns true, and try to write only if isWritable() is
true. Don't forget to validate the key.
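Putting those points together, here is a minimal sketch of such a loop (the method and variable names are mine, not from the question; a Pipe stands in for the socket channel so the example is self-contained and shows a second message being selected):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class SelectorLoopSketch {
    // One pass of a correct read loop: iterate selectedKeys(), remove each
    // key by hand, and read only when the key is actually readable.
    static int drainOnce(Selector selector, ByteBuffer buf) throws IOException {
        int bytesRead = 0;
        if (selector.select(500) == 0)
            return 0;
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey sk = it.next();
            it.remove(); // the selector never clears this set for you
            if (sk.isValid() && sk.isReadable()) {
                bytesRead += ((ReadableByteChannel) sk.channel()).read(buf);
            }
        }
        return bytesRead;
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate with a Pipe: the readable end is registered with the selector.
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        Selector selector = Selector.open();
        pipe.source().register(selector, SelectionKey.OP_READ);

        ByteBuffer buf = ByteBuffer.allocate(64);
        pipe.sink().write(ByteBuffer.wrap("hello".getBytes("UTF-8")));
        System.out.println(drainOnce(selector, buf)); // first message is read

        // Because the key was removed from selectedKeys, a second message
        // is selected again on the next pass.
        pipe.sink().write(ByteBuffer.wrap("again".getBytes("UTF-8")));
        System.out.println(drainOnce(selector, buf));
        selector.close();
    }
}
```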
Shouldn't
for(SelectionKey sk : selector.keys())
be
for(SelectionKey sk : selector.selectedKeys())
Since you would like to process only those events that have occurred in the current select operation ?
Since you say that select(500) returns before the 500 ms timeout expires, my guess is that you have registered a channel with the selector for the WRITE operation. A channel is ready for writing most of the time, so it is necessary to set the interest ops to WRITE only when data is actually available for writing.
Note that you have to remove the channel from the list of selected keys; select() won't do that for you. It is better to use an iterator for this purpose:
Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
    ...
    keyIterator.remove();
}
Related
So I know I can use readLine to get the program output line by line, but if I use a while loop:
String l;
while ((l = input.readLine()) != null)
    rcv = rcv + l;
return rcv;
But this freezes my program until the external process finishes giving output. I want to listen to the output as the external process gives it. It can take a long time for the external program to exit.
I tried using read(), but it also blocks my program until the end. How can I read whatever output is available, do my processing, and then go back to reading again?
You can use a separate thread to read the input stream. The idea is that the blocking operations should happen in a separate thread, so your application main thread is not blocked.
One way to do that is submitting a Callable task to an executor:
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> processOutput = executor.submit(() -> {
    // your code to read the stream goes here
    StringBuilder rcv = new StringBuilder();
    String l;
    while ((l = input.readLine()) != null) {
        rcv.append(l).append('\n');
    }
    return rcv.toString();
});
This returns a "future" which is a way to represent a value that may not be available now but might be at some point in the future. You can check if the value is available now, or wait for the value to be present with a timeout, etc.
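For example, the checks that paragraph mentions look like this in practice (a self-contained sketch; the sleeping task merely stands in for the blocking readLine() loop):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> processOutput = executor.submit(() -> {
            Thread.sleep(100); // stand-in for the blocking read loop
            return "line1\nline2";
        });

        // Non-blocking check: is the result ready yet?
        System.out.println("done yet? " + processOutput.isDone());

        // Block for at most 5 seconds; throws TimeoutException if not done.
        String rcv = processOutput.get(5, TimeUnit.SECONDS);
        System.out.println(rcv);
        executor.shutdown();
    }
}
```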
I am trying to create multiple output text data files based on the data present in the servlet request. The constraints to my servlet are that:
My servlet waits for enough requests to hit a threshold (for example, 20 names in a file) before producing a file
Otherwise it times out after a minute and produces a file
The code I have written is such that:
doGet is not synchronized
Within doGet I am creating a new thread pool (reason being that the calling application to my servlet would not send a next request until my servlet returns a response back - so I validate the request and return an instant acknowledgement back to get new requests)
Pass over all request data to the thread created in a new thread pool
Invoke synchronized function to do thread counting and file printing
I am using wait(60000). The problem is that the code produces files with the correct threshold (of names) within a minute, but after the one-minute timeout, a few of the files produced exceed the capacity, i.e. they contain more names than the threshold I defined.
I think it has something to do with the threads who when wake up are causing an issue?
My code is
if (!hashmap_dob.containsKey(key)) {
    request_count = 0;
    hashmap_count.put(key, Integer.toString(request_count));
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
}
if (hashmap_dob.containsKey(key)) {
    request_count = Integer.parseInt(hashmap_count.get(key));
    request_count++;
    hashmap_count.put(key, Integer.toString(request_count));
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
}
hashmap_dob.get(key).append(dateofbirth + "-");
hashmap_firstname.get(key).append(firstName + "-");
hashmap_surname.get(key).append(surname + "-");
if (hashmap_count.get(key).equals(capacity)) {
    request_count = 0;
    dob = hashmap_dob.get(key).toString();
    firstname = hashmap_firstname.get(key).toString();
    surname = hashmap_surname.get(key).toString();
    produceFile(required String parameters for file printing);
    fileHasBeenPrinted = true;
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
    hashmap_count.put(key, Integer.toString(request_count));
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
}
try {
    wait(Long.parseLong(listenerWaitingTime));
} catch (InterruptedException ie) {
    System.out.println("Thread interrupted from wait");
}
if (hashmap_filehasbeenprinted.get(key).equals("false")) {
    dob = hashmap_dob.get(key).toString();
    firstname = hashmap_firstname.get(key).toString();
    surname = hashmap_surname.get(key).toString();
    produceFile(required String parameters for file printing);
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
    fileHasBeenPrinted = true;
    request_count = 0;
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
    hashmap_count.put(key, Integer.toString(request_count));
}
If you have got to here, then thank you for reading my question, and thanks in advance if you have any thoughts on it towards resolution!
I didn't look at your code but I find your approach pretty complicated. Try this instead:
Create a BlockingQueue for the data to work on.
In the servlet, put the data into a queue and return.
Create a single worker thread at startup which pulls data from the queue with a timeout of 60 seconds and collects them in a list.
If the list has enough elements or when a timeout occurs, write a new file.
Create the thread and the queue in a ServletContextListener. Interrupt the thread to stop it. In the thread, flush the last remaining items to the file when you receive an InterruptedException while waiting on the queue.
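A sketch of that worker thread (all class, method, and field names here are mine, not from the question; the capacity and timeout values are examples):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BatchWriter implements Runnable {
    private final BlockingQueue<String> queue;   // the servlet puts records here
    private final int capacity;                  // e.g. 20 names per file
    private final long timeoutMs;                // e.g. 60 seconds

    public BatchWriter(BlockingQueue<String> queue, int capacity, long timeoutMs) {
        this.queue = queue;
        this.capacity = capacity;
        this.timeoutMs = timeoutMs;
    }

    public void run() {
        List<String> batch = new ArrayList<>();
        try {
            while (true) {
                // Wait up to the timeout for the next record.
                String record = queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
                if (record != null)
                    batch.add(record);
                // Write on threshold, or on timeout with a non-empty batch.
                if (batch.size() >= capacity || (record == null && !batch.isEmpty())) {
                    produceFile(batch);
                    batch = new ArrayList<>();
                }
            }
        } catch (InterruptedException e) {
            if (!batch.isEmpty())
                produceFile(batch); // flush remaining items on shutdown
        }
    }

    // Stand-in for the questioner's produceFile(...); override or replace.
    protected void produceFile(List<String> batch) {
        System.out.println("writing " + batch.size() + " records");
    }
}
```

Because only this one thread ever touches the batch, none of the counting needs to be synchronized, which removes the over-capacity race.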
As per my understanding, you want to create/produce a new file in two situations:
The number of requests hits a predefined threshold.
The timeout period elapses.
I would suggest the following:
Use an application-scoped variable requestMap holding the received HttpServletRequest objects.
On every servlet hit, just add the received request to the map.
Now create a listener/filter RequestMonitor, whatever is suitable, to monitor the values of requestMap.
RequestMonitor should check whether requestMap has grown to the predefined threshold.
If it has not, it should allow the servlet to add the request object.
If it has, it should print the file, empty requestMap, and then allow the servlet to add the next request.
For the timeout, you can track when the last file was produced with a LAST_FILE_PRODUCED variable in application scope, updated every time a file is produced.
I tried to read your code, but there is a lot of information missing, so if you could please give more details:
1) the indentation is messed up and I'm not sure if there were some mistakes introduced when you copied your code.
2) What is the code you are posting? Is it the code that is run on another thread spawned by doGet?
3) Maybe you could also add the variable declarations. Are those thread safe types (ConcurrentHashMap)?
4) I'm not sure we have all the information about fileHasBeenPrinted. Also it seems to be a Boolean, which is not thread safe.
5) you talk about "synchronized" functions, but you did not include those.
EDIT:
If the code you posted is a synchronized method, then only one request thread can run it at a given time. The 60-second wait appears to be invoked on every call (the indentation makes it hard to tell, but there seems to be a wait whether or not the file is written), so each thread occupies the synchronized method for up to 60 seconds before another thread (request) can be processed. That could explain why the file is not written after exactly 20 requests: more than 20 requests can arrive within those 60 seconds.
I am interested to know if the Observer Pattern is correct approach for implementing code to monitor log files and their changes?
I am currently using it, but there seems to be an anomaly that I can't quite explain. Basically, I create a class called FileMonitor with a timer that fires and iterates over a list of unique files, looking for a changed "last modified" date.
Upon finding one, it iterates through a list of listeners to find the one matching the file, and its
fileChanged event is notified. The listener then begins processing the lines that were added to the file.
So to make my question more succinct:
Does the Observer Pattern fit what I am trying to do? (Currently
I have one Listener per file)
Is there any possibility of 'concurrency issues' given that there is more than one File to
monitor?
Thanks
Java 7 introduced WatchService, which watches registered objects for changes and events.
A Watchable object is registered with a watch service by invoking its
register method, returning a WatchKey to represent the registration.
When an event for an object is detected the key is signalled, and if
not currently signalled, it is queued to the watch service so that it
can be retrieved by consumers that invoke the poll or take methods to
retrieve keys and process events. Once the events have been processed
the consumer invokes the key's reset method to reset the key which
allows the key to be signalled and re-queued with further events.
File systems may report events faster than they can be retrieved or
processed and an implementation may impose an unspecified limit on the
number of events that it may accumulate. Where an implementation
knowingly discards events then it arranges for the key's pollEvents
method to return an element with an event type of OVERFLOW. This event
can be used by the consumer as a trigger to re-examine the state of
the object.
Example -
Path myDir = Paths.get("D:/test");
try {
    WatchService watcher = myDir.getFileSystem().newWatchService();
    myDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
            StandardWatchEventKinds.ENTRY_DELETE, StandardWatchEventKinds.ENTRY_MODIFY);
    WatchKey watchKey = watcher.take();
    List<WatchEvent<?>> events = watchKey.pollEvents();
    for (WatchEvent<?> event : events) {
        if (event.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
            System.out.println("Created: " + event.context().toString());
        }
        if (event.kind() == StandardWatchEventKinds.ENTRY_DELETE) {
            System.out.println("Deleted: " + event.context().toString());
        }
        if (event.kind() == StandardWatchEventKinds.ENTRY_MODIFY) {
            System.out.println("Modified: " + event.context().toString());
        }
    }
} catch (Exception e) {
    System.out.println("Error: " + e.toString());
}
Reference - link
If you do not want to use Java 7, you can get the same behavior with Apache IO.
From the official documentation:
FileAlterationObserver represents the state of files below a root
directory, checking the filesystem and notifying listeners of create,
change or delete events.
Here is how you can add listeners to define operations to be executed when such events happen.
File directory = new File(new File("."), "src");
FileAlterationObserver observer = new FileAlterationObserver(directory);
observer.addListener(...);
observer.addListener(...);
You will have to register the observer(s) with a FileAlterationMonitor. Continuing from the same documentation:
long interval = ...
FileAlterationMonitor monitor = new FileAlterationMonitor(interval);
monitor.addObserver(observer);
monitor.start();
...
monitor.stop();
Where interval is the amount of time (in milliseconds) to wait between checks of the file system.
Look for package named org.apache.commons.io.monitor in the library.
Does the Observer Pattern fit what I am trying to do? (Currently I
have one Listener per file)
Yes it does.
Is there any possibility of 'concurrency issues' given that there is
more than one File to monitor?
If multiple threads remove and add listeners to a list backed by an ArrayList, you run the risk of a ConcurrentModificationException. Use a CopyOnWriteArrayList instead.
IIRC Effective Java has an item with a good example on the same topic.
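A quick illustration of why CopyOnWriteArrayList helps here: its iterators operate on a snapshot, so mutating the list mid-iteration (from this thread or another) cannot throw ConcurrentModificationException:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ListenerListDemo {
    public static void main(String[] args) {
        // A listener list that is safe to mutate while being iterated.
        List<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("fileA-listener");
        listeners.add("fileB-listener");

        for (String l : listeners) {
            // Removing during iteration would throw with a plain ArrayList;
            // here the loop still sees the original snapshot of two entries.
            listeners.remove("fileB-listener");
            System.out.println(l);
        }
        System.out.println(listeners.size()); // only fileA-listener remains
    }
}
```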
I'd suggest going for NIO
and File Watcher services - Watching File For Changes
General purpose of program
To read a bash-style pattern and a specified location from the command line, and find all files in that location matching the pattern. I have to make the program multi-threaded.
General structure of the program
Driver/Main Class which parses arguments and initiates other classes.
ProcessDirectories Class which adds all directory addresses found from the specified root directory to a string array for processing later
DirectoryData Class which holds the addresses found in the above class
ProcessMatches Class which examines each directory found, and adds any files inside that match the pattern to a string array for printing results later
Main/Driver once again takes over and prints the results :)
The Problem
I need to be processing matches even while the ProcessDirectories class is still working (for efficiency, so I don't unnecessarily wait for the list to populate before doing work). To do this I try to: a) make ProcessMatches threads wait() if DirectoryData is empty, and b) make ProcessDirectories notifyAll() whenever it adds a new entry.
The Question :)
Every tutorial I look at is focused on the producer and consumer being in the same object, or dealing with just one data structure. How can I do this when I am using more than one data structure and more than one class for producing and consuming?
How about something like:
class Driver
{
    public static void main(String[] args) throws InterruptedException
    {
        ProcessDirectories pd = ...
        BlockingQueue<DirectoryData> dirQueue = new LinkedBlockingQueue<DirectoryData>();
        new Thread(new Runnable(){public void run(){pd.addDirs(dirQueue);}}).start();
        ProcessMatches pm = ...
        BlockingQueue<File> fileQueue = new LinkedBlockingQueue<File>();
        new Thread(new Runnable()
        {
            public void run()
            {
                for (DirectoryData dir = dirQueue.take(); dir != DIR_POISON; dir = dirQueue.take())
                {
                    for (File file : dir.getFiles())
                    {
                        if (pm.matches(file))
                            fileQueue.add(file);
                    }
                }
                fileQueue.add(FILE_POISON);
            }
        }).start();
        for (File file = fileQueue.take(); file != FILE_POISON; file = fileQueue.take())
        {
            output(file);
        }
    }
}
This is just a rough idea, of course. ProcessDirectories.addDirs() would just add DirectoryData objects to the queue. In production you'd want to name the threads, perhaps use an executor to manage them, and perhaps use some mechanism other than a poison message to indicate the end of processing. Also, you might want to put a limit on the queue size.
Have one shared data structure through which the two threads communicate. This can be a queue with "get data from the queue, waiting if empty" and "put data on the queue, waiting if full" operations. Those methods should internally call wait and notify on the queue itself, and they should be synchronized on that queue.
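A minimal hand-rolled version of such a queue might look like this (in practice, java.util.concurrent.ArrayBlockingQueue already provides exactly this behaviour):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A bounded queue built on wait/notify, as described above.
public class SimpleBlockingQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public SimpleBlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity)
            wait();                 // wait if full
        items.addLast(item);
        notifyAll();                // wake any consumer waiting on empty
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty())
            wait();                 // wait if empty
        T item = items.removeFirst();
        notifyAll();                // wake any producer waiting on full
        return item;
    }
}
```

Note the waits are in while loops, not if statements, so a spurious or stale wakeup simply re-checks the condition.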
You can see my posted code here. My original problem was more or less solved but now I'm running into the problem described in the question title. Here's the problem: After I enter a command on the client side of any given client, I can't enter additional commands. The first command will work fine (minus /exit; haven't quite figured out how that should work yet), just can't do anything after that. For example, I can do /register (sign in) but after that nothing else. I can go into another instance of Client and list all the current users with /list as well (works), but again, after that I cannot enter additional commands - the console will not take them after the first one. Any idea what may be happening to cause this?
Here is about where I'm referring to (Code in its entirety):
while ((keyboardInput = keyboardInputScanner.nextLine()) != null) {
    System.out.println("Input '" + keyboardInput + "' read on client side.");
    if (keyboardInput.equals("/exit")) {
        socketOut.println("/exit");
        socketOut.close();
        socketIn.close();
        serverSocket.close();
    } else {
        socketOut.println(keyboardInput);
    }
    while ((serverInput = socketIn.readLine()) != null) {
        System.out.println(serverInput);
    }
}
I'm relatively sure it's something to do with not breaking out of the inner while loop, but I don't know any way around it. If I put that while loop outside of the keyboard input loop, I'll never get anything back from the server.
I think the problem is here:
while((serverInput = socketIn.readLine()) != null){
System.out.println(serverInput);
}
This will loop indefinitely (well, until the socket is closed at the other end). You therefore never get to the second iteration of the outer loop.
You might want to either limit the response to one line, or have some other mechanism for the server to let the client know when to stop reading from socketIn (e.g. send an empty line at the end of every server response and have the client break out of the inner loop when it sees that).
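A sketch of the second option on the client side (this assumes the server is changed to terminate each reply with a blank line, which is a convention I am adding, not part of the original protocol; a StringReader simulates the socket stream so the example is self-contained):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ResponseReader {
    // Read one server response, stopping at the blank line the server is
    // assumed to append after each reply. Returns the response body, so the
    // outer keyboard-input loop regains control after every command.
    static String readResponse(BufferedReader socketIn) throws IOException {
        StringBuilder sb = new StringBuilder();
        String serverInput;
        while ((serverInput = socketIn.readLine()) != null) {
            if (serverInput.isEmpty())
                break;              // end of this response
            sb.append(serverInput).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Simulate two server replies, each terminated by a blank line.
        BufferedReader in = new BufferedReader(
                new StringReader("user1\nuser2\n\nok\n\n"));
        System.out.print(readResponse(in)); // first reply only
        System.out.print(readResponse(in)); // second reply
    }
}
```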