How to do a non-destructive queue examination in Java

I am helping my son with a college programming class, and I guess I need the class too. He has completed the assignment, but I don't believe he is doing it the best way. Unfortunately I can't get it to work with my better way. It's clearly better, because it doesn't work yet.
He is being asked to implement some methods for a class that extends another class.
He was told he must use the following class definition, and he cannot change anything in ListQueue.
public class MyListQueue <AnyType extends Comparable<AnyType>> extends ListQueue<AnyType>
Here's what is in ListQueue:
// Queue interface
//
// ******************PUBLIC OPERATIONS*********************
// void enqueue( x ) --> Insert x
// AnyType getFront( ) --> Return least recently inserted item
// AnyType dequeue( ) --> Return and remove least recent item
// boolean isEmpty( ) --> Return true if empty; else false
// void makeEmpty( ) --> Remove all items
// ******************ERRORS********************************
// getFront or dequeue on empty queue
/**
* Protocol for queues.
*/
OK I feel pretty good about traversing a linked list in Pascal or C (showing my age) but have never worked in an OOP language before.
When I attempt something like this
dummyQueue = this.front.next;
I get the following error.
* front has private access in ListQueue *
Which I agree with, but other than dequeueing an item, how can I traverse the list, or otherwise get access to front, back, next and previous, which are all in ListQueue?
An education would be appreciated.
Thanks,
David

If I understand you correctly, you're doing something like this:
MyListQueue<String> dummyQueue = new MyListQueue<String>();
dummyQueue = this.front.next;
If so, one of the main tenets of OOP is encapsulation, i.e. data hiding. The idea is that users outside of the class don't have access to the inner state of the class.
If you're looking to determine the size of the Queue and you can't modify either the interface or the implementation, one thing you could do is create a delegate Queue that overrides enqueue and dequeue to increment and decrement a counter.
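That delegate idea could look roughly like the following sketch, using composition around a standard java.util.Queue (the assignment's ListQueue isn't available here, so ArrayDeque stands in as the backing queue; the class and method names are mine):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a counting delegate: all operations forward to an inner
// queue, while enqueue/dequeue also maintain a size counter.
class CountingQueue<E> {
    private final Queue<E> delegate = new ArrayDeque<>();
    private int size = 0;

    public void enqueue(E x) {
        delegate.add(x);
        size++;
    }

    public E dequeue() {
        E x = delegate.remove(); // throws NoSuchElementException if empty
        size--;
        return x;
    }

    public boolean isEmpty() {
        return delegate.isEmpty();
    }

    public int size() {
        return size;
    }
}
```

The same bookkeeping works no matter what the inner queue is, which is the point of delegation: the counter lives outside the encapsulated implementation.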

If you decide on a queue, you usually only want to enqueue and dequeue elements. You want to know whether the queue is empty, and to look at the front element if you need it or leave it for someone else. A queue is a sort of buffer that avoids blocking if the sender is faster than the receiver. With a queue, the receiver can decide when to read the next entry. A queue may implement some (priority-based) sorting and decide which element is the front element.
If you need other operations like traversing the list, then a queue might not be the best choice. Look at other collection types, maybe at ArrayList.
Some things can be done, though: you can subclass ListQueue and override some methods. So if you want an additional size() method, this could be a solution:
public class MyListQueue<T extends Comparable<T>> extends ListQueue<T> {
    private int size = 0;

    public void enqueue(T element) {
        size++;
        super.enqueue(element);
    }

    public T dequeue() {
        if (isEmpty()) {
            return null; // that's a guess...
        }
        size--;
        return super.dequeue();
    }

    public int size() {
        return size;
    }
}
I've replaced AnyType with T which is more common.

You asked, "Other than dequeueing an item, how can I traverse the list, or otherwise get access to front, back, next and previous which are all in ListQueue."
In the purest sense, you should not be able to.
An idealized queue promises only a few things:
Push items into the back
Pop items from the front
(Possibly) Inspect front item, if not empty
(Possibly) A definition space predicate for pop and inspect, determining whether the queue is empty
I'll assume for the moment that the queue is not meant to be used with either concurrent readers (accessing the front at the same time) or with concurrent readers and writers (accessing the back and front at the same time).
Given that definition, there's no reason to want to look "inside" the queue. You put things in one side and take things out the other side. If you take bounding of queue size into account, you may need an additional definition space predicate for the push operation to determine whether the queue is full. "Being full" only makes sense if the queue is bounded. "Being empty" is only relevant if the thread calling pop or inspect doesn't wish to block.
Beyond that, there's the pragmatic idea of a queue. We can assume it's a sequence of items that -- barring concurrency concerns -- has an observable non-negative size and may even allow visiting each item in the sequence. Some queues even go so far as to allow removal or rearrangement of items at positions other than the front of the sequence. At that point, though, we're not really discussing a queue anymore. We're discussing the sequence that underlies the queue. If we need this kind of access, we don't need a queue. We need a sequence of which we want to offer a queue-like view to other parts of the program.
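That last idea, owning a sequence but offering only a queue-like view of it to the rest of the program, can be sketched in plain Java (Owner and queueView are hypothetical names; this relies on ArrayDeque implementing the Queue interface):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

class Owner {
    // The owner holds the full sequence and may traverse it freely...
    private final Deque<String> sequence = new ArrayDeque<>();

    // ...but hands out only a queue-shaped view of it. Callers coding
    // against Queue see enqueue/dequeue/peek, not the sequence operations.
    Queue<String> queueView() {
        return sequence;
    }
}
```

Note this is only an interface-level restriction, not true encapsulation (a determined caller could cast), but it captures the "queue as a view" design.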
That's why queues are not usually concrete types in data structure libraries. In C++, type std::queue is a decorator around some other container type. In Java, java.util.Queue is an interface. Scala takes a different approach: class scala.collection.mutable.Queue is an extension of type MutableList. That's similar to the approach mandated in your son's assignment, but it's not clear whether your ListQueue ever intended to allow outsiders (including subclasses) to take advantage of its "list nature" -- penetrating the queue view to use the sequence within.
Do you have a requirement to be able to visit anything but the head of your queue? Needing to do so limits your choices as to what kinds of queues your consuming functions can accommodate. It seems like we're learning the wrong lessons with this assignment.

Related

multiple threads accessing an ArrayList

i have an ArrayList that's used to buffer data so that other threads can read them
this array constantly has data added to it since it's reading from a udp source, and the other threads constantly reading from that array.Then the data is removed from the array.
this is not the actual code but a simplified example :
public class PacketReader implements Runnable {
    public static ArrayList<Packet> buffer = new ArrayList<Packet>();

    @Override
    public void run() {
        while (bActive) {
            // read from udp source and add data to the array
        }
    }
}

public class Player implements Runnable {
    @Override
    public void run() {
        // read packet from buffer
        // decode packets
        // now for the problem:
        PacketReader.buffer.remove(the packet that's been read);
    }
}
The remove() method removes packets from the array and then shifts all the packets on the right to the left to cover the void.
My concern is: since the buffer is constantly being added to and read from by multiple threads, would the remove() method cause issues, since it's going to have to shift packets to the left?
I mean, if the add() or get() methods get called on that ArrayList at the same time that shift is being done, would it be a problem?
I do get an IndexOutOfBoundsException sometimes, something like:
index: 100, size: 300, which is strange because the index is within the size. So I want to know if this may be what's causing the problem, or whether I should look for other problems.
thank you
It sounds like what you really want is a BlockingQueue. ArrayBlockingQueue is probably a good choice. If you need an unbounded queue and don't care about extra memory utilization (relative to ArrayBlockingQueue), LinkedBlockingQueue also works.
It lets you push items in and pop them out, in a thread-safe and efficient way. The behavior of those pushes and pops can differ (what happens when you try to push to a full queue, or pop from an empty one?), and the JavaDocs for the BlockingQueue interface have a table that shows all of these behaviors nicely.
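A minimal sketch of that setup, assuming a toy Packet type in place of your real UDP packets (names here are illustrative, not from your code):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PacketPipeline {
    record Packet(int seq) {}

    public static void main(String[] args) throws InterruptedException {
        // Bounded, thread-safe buffer shared by reader and player threads.
        BlockingQueue<Packet> buffer = new ArrayBlockingQueue<>(1024);

        Thread reader = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    buffer.put(new Packet(i)); // blocks if the queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        reader.start();
        for (int i = 0; i < 5; i++) {
            Packet p = buffer.take(); // blocks until a packet is available
            System.out.println("got packet " + p.seq());
        }
        reader.join();
    }
}
```

No shifting or index arithmetic happens at the call sites; put/take do all the coordination internally.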
A thread-safe List (regardless of whether it comes from synchronizedList or CopyOnWriteArrayList) isn't actually enough, because your use case uses a classic check-then-act pattern, and that's inherently racy. Consider this snippet:
if (!list.isEmpty()) {
    Packet p = list.remove(0); // remove the first item
    process(p);
}
Even if list is thread-safe, this usage is not! What if list has one element during the "if" check, but then another thread removes it before you get to remove(0)?
You can get around this by synchronizing around both actions:
Packet p;
synchronized (list) {
    if (list.isEmpty()) {
        p = null;
    } else {
        p = list.remove(0);
    }
}
if (p != null) {
    process(p); // we don't want to call process(..) while still synchronized!
}
This is less efficient and takes more code than a BlockingQueue, though, so there's no reason to do it.
Yes there would be problems because ArrayList is not thread-safe, the internal state of the ArrayList object would be corrupted and eventually you would have some incorrect output or runtime exceptions appearing. You can try using synchronizedList(List list), or if it's a good fit you could try using a CopyOnWriteArrayList.
This issue is the Producer–consumer problem. You can see how people typically fix it: by using a lock of some kind so that threads take turns extracting an object out of a buffer (a List in your case). There are thread-safe buffer implementations you could look at as well if you don't necessarily need a List.

Lock-free and size-restricted queue in Java

I'm trying to extend an implementation of a lock-free queue in Java according to this posting.
For my implementation I am restricted to using only atomic variables/references.
The addition is that my queue should have a maximum size.
And therefore a putObject() should block when the queue is full and a getObject() if the queue is empty.
At the moment I don't know how to solve this without using locks.
When using an AtomicInteger for example, the modifying operations would be atomic.
But there is still the problem that I must handle a check-and-modify situation in putObject() and getObject(), right?
So the situation still exists that an enqueuing thread will be interrupted after checking the current queue size.
My question at the moment is if this problem is solvable at all with my current restrictions?
Greets
If you have a viable, correctly working lock-free queue, adding a maximum size can be as easy as just adding an AtomicInteger, and doing checks, inc, dec at the right time.
When adding an element, you basically pre-reserve the place in the queue.
Something like:
while (true) {
    int curr = count.get();
    if (curr < MAX) {
        if (count.compareAndSet(curr, curr + 1)) {
            break;
        }
    } else {
        return FULL;
    }
}
Then you add the new element and link it in.
When getting, you can just access the head as usual and check if there is anything at all to return from the queue. If yes, you return it and then decrease the counter. If not, you just return EMPTY. Notice that I'm not using the counter to check if the queue is really empty, since the counter could be 1, while there is nothing linked into the queue yet, because of the pre-reserve approach. So I just trust your queue has a way to tell you "I have something in me" or not (it must, or get() would never work).
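Putting both halves together, a sketch of the bounded wrapper might look like this; it assumes ConcurrentLinkedQueue as the underlying lock-free queue, and returns false/null rather than blocking, in the spirit of the FULL/EMPTY results described above:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: putObject() pre-reserves a slot via the counter before linking
// the element in; getObject() trusts the queue (not the counter) to decide
// emptiness, and releases the slot only after an element was really taken.
class BoundedLockFreeQueue<E> {
    private static final int MAX = 100; // assumed capacity
    private final ConcurrentLinkedQueue<E> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger count = new AtomicInteger();

    boolean putObject(E e) {
        while (true) {
            int curr = count.get();
            if (curr >= MAX) {
                return false; // FULL
            }
            if (count.compareAndSet(curr, curr + 1)) {
                break; // slot reserved
            }
        }
        queue.add(e);
        return true;
    }

    E getObject() {
        E e = queue.poll();
        if (e != null) {
            count.decrementAndGet();
        }
        return e; // null means EMPTY
    }
}
```

Because of the pre-reserve, the counter may briefly exceed the number of linked-in elements, which is exactly why getObject() must not use it as an emptiness test.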
It is a very common problem which is usually solved by using a Ring Buffer. This is what network adapters use as does the Disruptor library. I suggest you have a look at Disruptor and a good example of what you can do with ring buffers.
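For a flavor of the ring-buffer approach, here is a sketch of a minimal single-producer/single-consumer ring buffer; the Disruptor is far more sophisticated (padding, batching, wait strategies), so treat this only as an illustration. Capacity is assumed to be a power of two so index masking works:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of an SPSC ring buffer: exactly one producer calls offer(),
// exactly one consumer calls poll(). The AtomicLong get/set pairs give
// the volatile visibility needed between the two threads.
final class SpscRingBuffer<E> {
    private final Object[] slots;
    private final int mask;
    private final AtomicLong head = new AtomicLong(); // next slot to read
    private final AtomicLong tail = new AtomicLong(); // next slot to write

    SpscRingBuffer(int capacityPowerOfTwo) {
        slots = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    boolean offer(E e) {
        long t = tail.get();
        if (t - head.get() == slots.length) {
            return false; // full
        }
        slots[(int) (t & mask)] = e;
        tail.set(t + 1); // publish the element
        return true;
    }

    @SuppressWarnings("unchecked")
    E poll() {
        long h = head.get();
        if (h == tail.get()) {
            return null; // empty
        }
        int idx = (int) (h & mask);
        E e = (E) slots[idx];
        slots[idx] = null; // allow garbage collection
        head.set(h + 1);   // free the slot
        return e;
    }
}
```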

Two threads acessing same LinkedList

I am new to Java, and have come across a problem when trying to implement a simple game.
The premise of the game currently is: a timer is used to add a car, and also, more frequently, to update the movement of the car. A car can be selected by touch, and directed by drawing its path. The update function will move the car along the path.
Now, the game crashes with an IndexOutOfBoundsException, and I am almost certain this is because occasionally, when a car is reselected, the current path is wiped and it allows a new path to be drawn. The path is stored as a LinkedList, and cleared when the car is touched.
I imagine that if the path is cleared via a touch event while the timer thread is updating the car's movement along the path, this is where the error occurs. (There are also other, similar issues that could arise with two threads accessing this one list.)
My question, in Java, what would be the best way of dealing with this? Are there specific types of lists I should be using rather than LinkedList, or are there objects such as a Mutex in c++, where I can protect this list whilst working with it?
In Java, this is usually accomplished using synchronization.
A small example might look something like this:
LinkedList list = //Get/build your list

public void doStuffToList() {
    synchronized (list) {
        //Do things to the list
    }
}

public void clearList() {
    synchronized (list) {
        list.clear();
    }
}
This code won't let the clear operation be performed if there's another thread currently operating on the list at that time. Note that this will cause blocking, so be careful for deadlocks.
Alternatively, if your List is a class that you've built yourself, it probably makes sense to make the data structure thread safe itself:
public class SynchroLinkedList {
    //Implementation details

    public synchronized void doThingsToList() {
        //Implementation
    }

    public synchronized void clearList() {
        //Implementation
    }
}
These two approaches would effectively work the same way, but with the second one your thread safety is abstracted into the datatype, which is nice because you don't have to worry about thread safety all over the place when you're using the list.
Instead of recreating your own thread safe list implementation, you have several built-in options, essentially:
use a synchronized list:
List list = Collections.synchronizedList(new LinkedList());
Note that you need to synchronize on the list (synchronized(list) { }) for iterations and other combined operations that need to be atomic.
use a thread-safe collection, for example a CopyOnWriteArrayList or a ConcurrentLinkedQueue, which could be a good alternative if you don't need to access items in the middle of the list, but only need to add and iterate.
Note that a CopyOnWriteArrayList might have a performance penalty depending on your use case, especially if you regularly add items (i.e. every few microseconds) and the list can become big.
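If add-and-iterate really is all you need, a ConcurrentLinkedQueue can be traversed safely while another thread appends; its iterator is weakly consistent rather than fail-fast, so no ConcurrentModificationException is thrown:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class PathDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> path = new ConcurrentLinkedQueue<>();
        path.add("p1");
        path.add("p2");

        // Safe even if another thread calls path.add()/path.poll()
        // concurrently; the iterator reflects some consistent snapshot.
        for (String point : path) {
            System.out.println(point);
        }
    }
}
```

The trade-off is that you lose indexed access, which fits the path use case here (cars only consume waypoints in order).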

Why does Java toString() loop infinitely on indirect cycles?

This is more a gotcha I wanted to share than a question: when printing with toString(), Java will detect direct cycles in a Collection (where the Collection refers to itself), but not indirect cycles (where a Collection refers to another Collection which refers to the first one - or with more steps).
import java.util.*;

public class ShonkyCycle {
    static public void main(String[] args) {
        List a = new LinkedList();
        a.add(a);              // direct cycle
        System.out.println(a); // works: [(this Collection)]

        List b = new LinkedList();
        a.add(b);
        b.add(a);              // indirect cycle
        System.out.println(a); // shonky: causes infinite loop!
    }
}
This was a real gotcha for me, because it occurred in debugging code to print out the Collection (I was surprised when it caught a direct cycle, so I assumed incorrectly that they had implemented the check in general). There is a question: why?
The explanation I can think of is that it is very inexpensive to check for a collection that refers to itself, as you only need to store the collection (which you have already), but for longer cycles, you need to store all the collections you encounter, starting from the root. Additionally, you might not be able to tell for sure what the root is, and so you'd have to store every collection in the system - which you do anyway - but you'd also have to do a hash lookup on every collection element. It's very expensive for the relatively rare case of cycles (in most programming). (I think) the only reason it checks for direct cycles is because it so cheap (one reference comparison).
OK... I've kinda answered my own question - but have I missed anything important? Anyone want to add anything?
Clarification: I now realize the problem I saw is specific to printing a Collection (i.e. the toString() method). There's no problem with cycles per se (I use them myself and need to have them); the problem is that Java can't print them. Edit: Andrzej Doyle points out it's not just collections, but any object whose toString is called.
Given that it's constrained to this method, here's an algorithm to check for it:
the root is the object that the first toString() is invoked on (to determine this, you need to maintain state on whether a toString is currently in progress or not; so this is inconvenient).
as you traverse each object, you add it to an IdentityHashMap, along with a unique identifier (e.g. an incremented index).
but if this object is already in the Map, write out its identifier instead.
This approach also correctly renders multirefs (a node that is referred to more than once).
The memory cost is the IdentityHashMap (one reference and index per object); the complexity cost is a hash lookup for every node in the directed graph (i.e. each object that is printed).
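That algorithm can be sketched as a standalone helper for nested lists (it can't be retrofitted onto the real toString(), since the Collection API offers no way to thread the map through; the class name and output format here are mine):

```java
import java.util.IdentityHashMap;
import java.util.List;

public class CycleSafePrinter {
    // Renders nested lists, replacing an already-seen list with #n,
    // the identifier assigned to it on first visit. Identity (==), not
    // equals(), is the right notion of "same object", hence IdentityHashMap.
    public static String print(List<?> root) {
        return print(root, new IdentityHashMap<>());
    }

    private static String print(List<?> list, IdentityHashMap<Object, Integer> seen) {
        Integer id = seen.get(list);
        if (id != null) {
            return "#" + id; // already visited: emit its identifier
        }
        seen.put(list, seen.size()); // assign the next index

        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < list.size(); i++) {
            if (i > 0) sb.append(", ");
            Object e = list.get(i);
            sb.append(e instanceof List<?> l ? print(l, seen) : String.valueOf(e));
        }
        return sb.append(']').toString();
    }
}
```

As described above, this also renders multirefs correctly, since any second reference to a list, cyclic or not, prints as its identifier.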
I think fundamentally it's because while the language tries to stop you from shooting yourself in the foot, it shouldn't really do so in a way that's expensive. So while it's almost free to compare object pointers (e.g. does obj == this) anything beyond that involves invoking methods on the object you're passing in.
And at this point the library code doesn't know anything about the objects you're passing in. For one, the generics implementation doesn't know if they're instances of Collection (or Iterable) themselves, and while it could find this out via instanceof, who's to say whether it's a "collection-like" object that isn't actually a collection, but still contains a deferred circular reference? Secondly, even if it is a collection there's no telling what its actual implementation and thus behaviour is like. Theoretically one could have a collection containing all the Longs which is going to be used lazily; but since the library doesn't know this it would be hideously expensive to iterate over every entry. Or in fact one could even design a collection with an Iterator that never terminated (though this would be difficult to use in practice because so many constructs/library classes assume that hasNext will eventually return false).
So it basically comes down to an unknown, possibly infinite cost in order to stop you from doing something that might not actually be an issue anyway.
I'd just like to point out that this statement:
when printing with toString(), Java will detect direct cycles in a collection
is misleading.
Java (the JVM, the language itself, etc) is not detecting the self-reference. Rather this is a property of the toString() method/override of java.util.AbstractCollection.
If you were to create your own Collection implementation, the language/platform wouldn't automatically save you from a self-reference like this - unless you extend AbstractCollection, you would have to make sure you cover this logic yourself.
I might be splitting hairs here but I think this is an important distinction to make. Just because one of the foundation classes in the JDK does something doesn't mean that "Java" as an overall umbrella does it.
Here is the relevant source code in AbstractCollection.toString(), with the key line commented:
public String toString() {
    Iterator<E> i = iterator();
    if (! i.hasNext())
        return "[]";

    StringBuilder sb = new StringBuilder();
    sb.append('[');
    for (;;) {
        E e = i.next();
        // self-reference check:
        sb.append(e == this ? "(this Collection)" : e);
        if (! i.hasNext())
            return sb.append(']').toString();
        sb.append(", ");
    }
}
The problem with the algorithm that you propose is that you need to pass the IdentityHashMap to all Collections involved. This is not possible using the published Collection APIs. The Collection interface does not define a toString(IdentityHashMap) method.
I imagine that whoever at Sun put the self reference check into the AbstractCollection.toString() method thought of all of this, and (in conjunction with his colleagues) decided that a "total solution" is over the top. I think that the current design / implementation is correct.
It is not a requirement that Object.toString implementations be bomb-proof.
You are right, you already answered your own question. Checking for longer cycles (especially really long ones like period length 1000) would be too much overhead and is not needed in most cases. If someone wants it, he has to check it himself.
The direct cycle case, however, is easy to check and will occur more often, so it's done by Java.
You can't really detect indirect cycles; it's a typical example of the halting problem.

Best approach to use in Java 6 for a List being accessed concurrently

I have a List object being accessed by multiple threads. There is mostly one thread, and in some conditions two threads, that updates the list. There are one to five threads that can read from this list, depending on the number of user requests being processed.
The list is not a queue of tasks to perform, it is a list of domain objects that are being retrieved and updated concurrently.
Now there are several ways to make the access to this list thread-safe:
-use synchronized block
-use normal Lock (i.e. read and write ops share same lock)
-use ReadWriteLock
-use one of the new ConcurrentBLABLBA collection classes
My question:
What is the optimal approach to use, given that the critical sections typically do not contain a lot of operations (mostly just adding/removing/inserting or getting elements from the list)?
Can you recommend another approach, not listed above?
Some constrains
-optimal performance is critical, memory usage not so much
-it must be an ordered list (currently synchronizing on an ArrayList), although not a sorted list (i.e. not sorted using Comparable or Comparator, but according to insertion order)
-the list is big, containing up to 100000 domain objects, thus using something like CopyOnWriteArrayList is not feasible
-the write/update critical sections are typically very quick, doing simple add/remove/insert or replace (set)
-the read operations will primarily do an elementAt(index) call most of the time, although some read operations might do a binary search, or indexOf(element)
-no direct iteration over the list is done, though operation like indexOf(..) will traverse list
Do you have to use a sequential list? If a map-type structure is more appropriate, you can use a ConcurrentHashMap. With a list, a ReadWriteLock is probably the most effective way.
Edit to reflect OP's edit: Binary search on insertion order? Do you store a timestamp and use that for comparison, in your binary search? If so, you may be able to use the timestamp as the key, and ConcurrentSkipListMap as the container (which maintains key order).
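A sketch of that timestamp-keyed idea; the long keys here stand in for hypothetical insertion timestamps (e.g. from System.nanoTime()):

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    public static void main(String[] args) {
        // Keys are insertion timestamps, so iteration order matches
        // insertion order without any external locking.
        ConcurrentNavigableMap<Long, String> items = new ConcurrentSkipListMap<>();
        items.put(3L, "third");
        items.put(1L, "first");
        items.put(2L, "second");

        // firstEntry()/floorEntry()/ceilingEntry() give lock-free ordered
        // lookups, roughly what a binary search over insertion order does.
        System.out.println(items.firstEntry().getValue());
        System.out.println(items.floorEntry(2L).getValue());
    }
}
```

Reads and writes are lock-free, and the navigable-map operations replace the indexOf/binary-search access patterns described in the question, at the cost of giving up positional (elementAt-style) indexing.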
What are the reading threads doing? If they're iterating over the list, then you really need to make sure no-one touches the list during the whole of the iteration process, otherwise you could get very odd results.
If you can define precisely what semantics you need, it should be possible to solve the issue - but you may well find that you need to write your own collection type to do it properly and efficiently. Alternatively, CopyOnWriteArrayList may well be good enough - if potentially expensive. Basically, the more you can tie down your requirements, the more efficient it can be.
I don't know if this is a possible solution for the problem, but... it makes sense to me to use a database manager to hold that huge amount of data and let it manage the transactions.
I second Telcontar's suggestion of a database, since they are actually designed for managing this scale of data and negotiating between threads, while in-memory collections are not.
You say that the data is on a database on the server, and the local list on the clients is for the sake of user interface. You shouldn't need to keep all 100000 items on the client at once, or perform such complicated edits on it. It seems to me that what you want on the client is a lightweight cache onto the database.
Write a cache that stores only the current subset of data on the client at once. This client cache does not perform complex multithreaded edits on its own data; instead it feeds all edits through to the server, and listens for updates. When data changes on the server, the client simply forgets the old data and loads it again. Only one designated thread is allowed to read or write the collection itself. This way the client simply mirrors the edits happening on the server, rather than needing complicated edits itself.
Yes, this is quite a complicated solution. The components of it are:
A protocol for loading a range of the data, say items 478712 to 478901, rather than the whole thing
A protocol for receiving updates about changed data
A cache class that stores items by their known index on the server
A thread belonging to that cache which communicates with the server. This is the only thread that writes to the collection itself
A thread belonging to that cache which processes callbacks when data is retrieved
An interface that UI components implement to allow them to receive data when it has been loaded
At first stab, the bones of this cache might look something like this:
class ServerCacheViewThingy {
    private static final int ACCEPTABLE_SIZE = 500;

    private int viewStart, viewLength;

    final Map<Integer, Record> items
            = new HashMap<Integer, Record>(1000);
    final ConcurrentLinkedQueue<Callback> callbackQueue
            = new ConcurrentLinkedQueue<Callback>();

    public void getRecords (int start, int length, ViewReciever reciever) {
        // remember the current view, to prevent records within
        // this view from being accidentally pruned.
        viewStart = start;
        viewLength = length;

        // if the selected area is not already loaded, send a request
        // to load that area
        if (!rangeLoaded(start, length))
            addLoadRequest(start, length);

        // add the reciever to the queue, so it will be processed
        // when the data has arrived
        if (reciever != null)
            callbackQueue.add(new Callback(start, length, reciever));
    }

    class Callback {
        int start;
        int length;
        ViewReciever reciever;
        ...
    }

    class EditorThread extends Thread {
        private void prune () {
            if (items.size() <= ACCEPTABLE_SIZE)
                return;
            for (Map.Entry<Integer, Record> entry : items.entrySet()) {
                int position = entry.getKey();
                // if the position is outside the current view,
                // remove that item from the cache
                ...
            }
        }

        private void markDirty (int from) { ... }

        ....
    }

    class CallbackThread extends Thread {
        public void notifyCallback (Callback callback);

        private void processCallback (Callback) {
            readRecords
        }
    }
}

interface ViewReciever {
    void recieveData (int viewStart, Record[] records);
    void recieveTimeout ();
}
There's a lot of detail you'll have to fill in for yourself, obviously.
You can use a wrapper that implements synchronization:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List list = new ArrayList();
List syncList = Collections.synchronizedList(list);
// make sure you only use syncList for your future calls...
This is an easy solution. I'd try this before resorting to more complicated solutions.
