I've got a class that implements Iterator with a ResultSet as a data member. Essentially the class looks like this:
public class A implements Iterator{
private ResultSet entities;
...
public Object next(){
entities.next();
return new Entity(entities.getString...etc....)
}
public boolean hasNext(){
//what to do?
}
...
}
How can I check if the ResultSet has another row, so I can create a valid hasNext method, since ResultSet has no hasNext defined itself? I was thinking of doing a SELECT COUNT(*) FROM ... query to get the count and managing that number to see if there's another row, but I'd like to avoid this.
This is a bad idea. This approach requires the connection to stay open until the last row is read, and outside the DAO layer you never know when that will happen. It also leaves the ResultSet open, risking resource leaks and application crashes when the connection times out. You don't want that.
The normal JDBC practice is that you acquire Connection, Statement and ResultSet in the shortest possible scope. The normal practice is also that you map multiple rows into a List or maybe a Map and guess what, they do have an Iterator.
public List<Data> list() throws SQLException {
List<Data> list = new ArrayList<Data>();
try (
Connection connection = database.getConnection();
PreparedStatement statement = connection.prepareStatement("SELECT id, name, value FROM data");
ResultSet resultSet = statement.executeQuery();
) {
while (resultSet.next()) {
list.add(map(resultSet));
}
}
return list;
}
private Data map(ResultSet resultSet) throws SQLException {
Data data = new Data();
data.setId(resultSet.getLong("id"));
data.setName(resultSet.getString("name"));
data.setValue(resultSet.getInt("value"));
return data;
}
And use it as below:
List<Data> list = dataDAO.list();
int count = list.size(); // Easy as that.
Iterator<Data> iterator = list.iterator(); // There is your Iterator.
Do not pass expensive DB resources outside the DAO layer like you initially wanted to do. For more basic examples of normal JDBC practices and the DAO pattern you may find this article useful.
You can get out of this pickle by performing a look-ahead in the hasNext() and remembering that you did a lookup to prevent consuming too many records, something like:
public class A implements Iterator{
private ResultSet entities;
private boolean didNext = false;
private boolean hasNext = false;
...
public Object next(){
if (!didNext) {
entities.next();
}
didNext = false;
return new Entity(entities.getString...etc....)
}
public boolean hasNext(){
if (!didNext) {
hasNext = entities.next();
didNext = true;
}
return hasNext;
}
...
}
ResultSet has an isLast() method that might suit your needs. The JavaDoc notes it can be quite expensive, though, since it has to read ahead. There is a good chance it caches the look-ahead value, as the other answers suggest doing.
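For illustration, a minimal sketch of a hasNext() built on it, assuming the driver actually supports isLast() (the JavaDoc marks support as optional):
public boolean hasNext() {
    try {
        return !entities.isLast();
    } catch (SQLException e) {
        throw new IllegalStateException(e);
    }
}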
You can use ResultSetIterator, just put your ResultSet in the constructor.
ResultSet rs = ...
ResultSetIterator iterator = new ResultSetIterator(rs);
One option is the ResultSetIterator from the Apache DBUtils project.
BalusC rightly points out the various concerns in doing this. You need to be very careful to properly handle the connection/resultset lifecycle. Fortunately, the DBUtils project also has solutions for safely working with result sets.
If BalusC's solution is impractical for you (e.g. you are processing large datasets that can't all fit in memory) you might want to give it a shot.
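As a rough sketch of how that looks (DbUtils' ResultSetIterator yields each row as an Object[] of column values; verify the exact API against the DbUtils docs):
import org.apache.commons.dbutils.ResultSetIterator;

// rs is an open java.sql.ResultSet
for (Object[] row : ResultSetIterator.iterable(rs)) {
    System.out.println(row[0]); // first column of the current row
}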
public class A implements Iterator<Entity>
{
private final ResultSet entities;
// Not required if ResultSet.isLast() is supported
private boolean hasNextChecked, hasNext;
...
public boolean hasNext()
{
if (hasNextChecked)
return hasNext;
hasNext = entities.next();
hasNextChecked = true;
return hasNext;
// You may also use !ResultSet.isLast()
// but support for this method is optional
}
public Entity next()
{
if (!hasNext())
throw new NoSuchElementException();
Entity entity = new Entity(entities.getString...etc....)
// Not required if ResultSet.isLast() is supported
hasNextChecked = false;
return entity;
}
}
It's not really a bad idea in the cases where you need it; it's just that you often don't.
If you do need to do something like, say, stream your entire database, you could pre-fetch the next row: if the fetch fails, your hasNext() is false.
Here is what I used:
/**
* @author Ian Pojman <pojman@gmail.com>
*/
public abstract class LookaheadIterator<T> implements Iterator<T> {
/** The predetermined "next" object retrieved from the wrapped iterator, can be null. */
protected T next;
/**
* Implement the hasNext policy of this iterator.
* Returns true if the getNext() policy returns a new item.
*/
public boolean hasNext()
{
if (next != null)
{
return true;
}
// we haven't done it already, so go find the next thing...
if (!doesHaveNext())
{
return false;
}
return getNext();
}
/** by default we can return true, since our logic does not rely on hasNext() - it prefetches the next */
protected boolean doesHaveNext() {
return true;
}
/**
* Fetch the next item
* @return false if the next item is null.
*/
protected boolean getNext()
{
next = loadNext();
return next!=null;
}
/**
* Subclasses implement the 'get next item' functionality by implementing this method. Implementations return null when they have no more.
* @return null if there is no next.
*/
protected abstract T loadNext();
/**
* Return the next item from the wrapped iterator.
*/
public T next()
{
if (!hasNext())
{
throw new NoSuchElementException();
}
T result = next;
next = null;
return result;
}
/**
* Not implemented.
* @throws UnsupportedOperationException
*/
public void remove()
{
throw new UnsupportedOperationException();
}
}
then:
this.lookaheadIterator = new LookaheadIterator<T>() {
@Override
protected T loadNext() {
try {
if (!resultSet.next()) {
return null;
}
// process your result set - I use a Spring JDBC RowMapper
return rowMapper.mapRow(resultSet, resultSet.getRow());
} catch (SQLException e) {
throw new IllegalStateException("Error reading from database", e);
}
}
};
I agree with BalusC. Allowing an Iterator to escape from your DAO method is going to make it difficult to close any Connection resources. You will be forced to know about the connection lifecycle outside of your DAO, which leads to cumbersome code and potential connection leaks.
However, one choice that I've used is to pass a Function or Procedure type into the DAO method. Basically, pass in some sort of callback interface that will take each row in your result set.
For example, maybe something like this:
public class MyDao {
public void iterateResults(Procedure<ResultSet> proc, String query, Object... params)
throws Exception {
Connection c = getConnection();
try {
PreparedStatement s = c.prepareStatement(query); // bind params to the statement as needed
ResultSet rs = s.executeQuery();
while (rs.next()) {
proc.execute(rs);
}
} finally {
// close other resources too
c.close();
}
}
}
public interface Procedure<T> {
void execute(T t) throws Exception;
}
public class ResultSetOutputStreamProcedure implements Procedure<ResultSet> {
private final OutputStream outputStream;
public ResultSetOutputStreamProcedure(OutputStream outputStream) {
this.outputStream = outputStream;
}
@Override
public void execute(ResultSet rs) throws SQLException {
MyBean bean = getMyBeanFromResultSet(rs);
writeMyBeanToOutputStream(bean);
}
}
In this way, you keep your database connection resources inside your DAO, which is proper. But, you are not necessarily required to fill a Collection if memory is a concern.
Hope this helps.
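For example, the caller never touches the ResultSet at all; a hypothetical usage of the classes above (the query string and output file are made up for illustration):
OutputStream outputStream = new FileOutputStream("beans.out"); // any sink will do
new MyDao().iterateResults(new ResultSetOutputStreamProcedure(outputStream), "SELECT * FROM my_beans");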
You could try the following:
public class A implements Iterator {
private ResultSet entities;
private Entity nextEntity;
...
public Object next() {
Entity tempEntity;
if ( nextEntity == null ) {
entities.next();
tempEntity = new Entity( entities.getString...etc....)
} else {
tempEntity = nextEntity;
}
if ( entities.next() ) {
nextEntity = new Entity( entities.getString...etc....)
} else {
nextEntity = null;
}
return tempEntity;
}
public boolean hasNext() {
return nextEntity != null;
}
}
This code caches the next entity; hasNext() returns true if the cached entity is valid, otherwise false.
There are a couple of things you could do, depending on what you want your class A to do. If the major use case is to go through every single result, then it's probably best to preload all the Entity objects and throw away the ResultSet.
If, however, you don't want to do that, you could use the next() and previous() methods of ResultSet:
public boolean hasNext(){
boolean next = entities.next();
if (next) {
//reset the cursor back to its previous position
entities.previous();
}
return next;
}
You do have to be careful to make sure that you aren't in the middle of reading from the ResultSet, but if your Entity class is a proper POJO (or at least properly disconnected from the ResultSet), this should be a fine approach.
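One caveat worth a sketch: previous() only works on a scrollable ResultSet, and statements are forward-only by default, so the ResultSet would need to be created along these lines:
// TYPE_SCROLL_INSENSITIVE makes both next() and previous() legal on the ResultSet
Statement st = connection.createStatement(
    ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet entities = st.executeQuery("SELECT ...");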
Here's my iterator that wraps a ResultSet. The rows are returned in the form of a Map. I hope you'll find it helpful. The strategy is that I always bring one element in advance.
public class ResultSetIterator implements Iterator<Map<String,Object>> {
private ResultSet result;
private ResultSetMetaData meta;
private boolean hasNext;
public ResultSetIterator( ResultSet result ) throws SQLException {
this.result = result;
meta = result.getMetaData();
hasNext = result.next();
}
@Override
public boolean hasNext() {
return hasNext;
}
@Override
public Map<String, Object> next() {
if (! hasNext) {
throw new NoSuchElementException();
}
try {
Map<String,Object> next = new LinkedHashMap<>();
for (int i = 1; i <= meta.getColumnCount(); i++) {
String column = meta.getColumnName(i);
Object value = result.getObject(i);
next.put(column,value);
}
hasNext = result.next();
return next;
}
catch (SQLException ex) {
throw new RuntimeException(ex);
}
}
}
entities.next() returns false if there are no more rows, so you could just take that return value and set a member variable to keep track of the status for hasNext().
But to make that work you would also have to have some sort of init method that reads the first entity and caches it in the class. Then when calling next you would need to return the previously cached value and cache the next value, etc...
Iterators are problematic for traversing ResultSets for the reasons mentioned above, but Iterator-like behaviour, with all the required semantics for handling errors and closing resources, is available with reactive sequences (Observables) in RxJava. Observables are like iterators but add the notions of subscription, cancellation and error handling.
The project rxjava-jdbc has implementations of Observables for JDBC operations, including traversal of ResultSets with proper closing of resources, error handling, and the ability to cancel the traversal as required (unsubscribe).
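For a flavour of that API, a rough sketch (method names recalled from the rxjava-jdbc README; treat them as assumptions and verify against the project):
Database db = Database.from(url); // url is a JDBC url; Database is rxjava-jdbc's entry point
Observable<String> names = db
    .select("select name from person where id > ?")
    .parameter(1)
    .getAs(String.class); // rows are emitted lazily; resources are closed on completion or unsubscribe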
Do you expect most of the data in your result set to actually be used? If so, pre-cache it. It's quite trivial using e.g. Spring:
List<Map<String,Object>> rows = jdbcTemplate.queryForList(sql);
return rows.iterator();
Adjust to suit your taste.
I think enough has been said about why it's a really bad idea to use a ResultSet in an Iterator (in short, a ResultSet maintains an active connection to the DB, and not closing it as soon as possible can lead to problems).
But in a different situation: if you're getting a ResultSet (rs) and are going to iterate over the elements, but you also want to do something once before the iteration, like this:
if (rs.hasNext()) { //This method doesn't exist
//do something ONCE, *IF* there are elements in the RS
}
while (rs.next()) {
//do something repeatedly for each element
}
You can achieve the same effect by writing it like this instead:
if (rs.next()) {
//do something ONCE, *IF* there are elements in the RS
do {
//do something repeatedly for each element
} while (rs.next());
}
It can be done like this:
public boolean hasNext() {
...
return !entities.isLast();
...
}
It sounds like you are stuck between either providing an inefficient implementation of hasNext or throwing an exception stating that you do not support the operation.
Unfortunately there are times when you implement an interface and you don't need all of the members. In that case I would suggest that you throw an exception in that member that you will not or cannot support and document that member on your type as an unsupported operation.
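For instance, a minimal sketch of that approach:
/** hasNext() is deliberately unsupported; see the class documentation. */
public boolean hasNext() {
    throw new UnsupportedOperationException("hasNext() is not supported by this iterator");
}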
Related
In a Spring-based application I have a service which performs the calculation of some Index. Index is relatively expensive to calculate (say, 1s) but relatively cheap to check for actuality (say, 20ms). Actual code does not matter, it goes along the following lines:
public Index getIndex() {
return calculateIndex();
}
public Index calculateIndex() {
// 1 second or more
}
public boolean isIndexActual(Index index) {
// 20ms or less
}
I'm using Spring Cache to cache the calculated index via the @Cacheable annotation:
@Cacheable(cacheNames = CacheConfiguration.INDEX_CACHE_NAME)
public Index getIndex() {
return calculateIndex();
}
We currently configure GuavaCache as cache implementation:
@Bean
public Cache indexCache() {
return new GuavaCache(INDEX_CACHE_NAME, CacheBuilder.newBuilder()
.expireAfterWrite(indexCacheExpireAfterWriteSeconds, TimeUnit.SECONDS)
.build());
}
@Bean
public CacheManager indexCacheManager(List<Cache> caches) {
SimpleCacheManager cacheManager = new SimpleCacheManager();
cacheManager.setCaches(caches);
return cacheManager;
}
What I also need is to check if cached value is still actual and refresh it (ideally asynchronously) if it is not. So ideally it should go as follows:
When getIndex() is called, Spring checks if there is a value in the cache.
If not, new value is loaded via calculateIndex() and stored in the cache
If yes, the existing value is checked for actuality via isIndexActual(...).
If old value is actual, it is returned.
If old value is not actual, it is returned, but removed from the cache and loading of the new value is triggered as well.
Basically I want to serve the value from the cache very fast (even if it is obsolete) but also trigger refreshing right away.
What I've got working so far is checking for actuality and eviction:
@Cacheable(cacheNames = INDEX_CACHE_NAME)
@CacheEvict(cacheNames = INDEX_CACHE_NAME, condition = "target.isObsolete(#result)")
public Index getIndex() {
return calculateIndex();
}
This triggers eviction if the result is obsolete, and returns the old value immediately even in that case. But it does not refresh the value in the cache.
Is there a way to configure Spring Cache to actively refresh obsolete values after eviction?
Update
Here's a MCVE.
public static class Index {
private final long timestamp;
public Index(long timestamp) {
this.timestamp = timestamp;
}
public long getTimestamp() {
return timestamp;
}
}
public interface IndexCalculator {
public Index calculateIndex();
public long getCurrentTimestamp();
}
@Service
public static class IndexService {
@Autowired
private IndexCalculator indexCalculator;
@Cacheable(cacheNames = "index")
@CacheEvict(cacheNames = "index", condition = "target.isObsolete(#result)")
public Index getIndex() {
return indexCalculator.calculateIndex();
}
public boolean isObsolete(Index index) {
if (index == null) {
return true;
}
return index.getTimestamp() < indexCalculator.getCurrentTimestamp();
}
}
Now the test:
@Test
public void test() {
final Index index100 = new Index(100);
final Index index200 = new Index(200);
when(indexCalculator.calculateIndex()).thenReturn(index100);
when(indexCalculator.getCurrentTimestamp()).thenReturn(100L);
assertThat(indexService.getIndex()).isSameAs(index100);
verify(indexCalculator).calculateIndex();
verify(indexCalculator).getCurrentTimestamp();
when(indexCalculator.getCurrentTimestamp()).thenReturn(200L);
when(indexCalculator.calculateIndex()).thenReturn(index200);
assertThat(indexService.getIndex()).isSameAs(index100);
verify(indexCalculator, times(2)).getCurrentTimestamp();
// I'd like to see indexCalculator.calculateIndex() called after
// indexService.getIndex() returns the old value but it does not happen
// verify(indexCalculator, times(2)).calculateIndex();
assertThat(indexService.getIndex()).isSameAs(index200);
// Instead, indexCalculator.calculateIndex() is called on
// the next call to indexService.getIndex()
// I'd like to have it earlier
verify(indexCalculator, times(2)).calculateIndex();
verify(indexCalculator, times(3)).getCurrentTimestamp();
verifyNoMoreInteractions(indexCalculator);
}
I'd like to have the value refreshed shortly after it was evicted from the cache. At the moment it is refreshed on the next call of getIndex() first. If the value would have been refreshed right after eviction, this would save me 1s later on.
I've tried @CachePut, but it also does not get me the desired effect. The value is refreshed, but the method is always executed, no matter what the condition or unless attributes say.
The only way I see at the moment is to call getIndex() twice(second time async/non-blocking). But that's kind of stupid.
I would say the easiest way of doing what you need is to create a custom Aspect which will do all the magic transparently and which can be reused in more places.
So, assuming you have the spring-aop and aspectj dependencies on your classpath, the following aspect will do the trick.
@Aspect
@Component
public class IndexEvictorAspect {
@Autowired
private Cache cache;
@Autowired
private IndexService indexService;
private final ReentrantLock lock = new ReentrantLock();
@AfterReturning(pointcut = "execution(* hello.IndexService.getIndex())", returning = "index")
public void afterGetIndex(Object index) {
if(indexService.isObsolete((Index) index) && lock.tryLock()){
try {
Index newIndex = indexService.calculateIndex();
cache.put(SimpleKey.EMPTY, newIndex);
} finally {
lock.unlock();
}
}
}
}
Several things to note
As your getIndex() method has no parameters, its result is stored in the cache under the key SimpleKey.EMPTY.
The code assumes that IndexService is in the hello package.
Something like the following could refresh the cache in the desired way and keep the implementation simple and straightforward.
There is nothing wrong about writing clear and simple code, provided it satisfies the requirements.
@Service
public static class IndexService {
@Autowired
private IndexCalculator indexCalculator;
public Index getIndex() {
Index cachedIndex = getCachedIndex();
if (isObsolete(cachedIndex)) {
evictCache();
asyncRefreshCache();
}
return cachedIndex;
}
#Cacheable(cacheNames = "index")
public Index getCachedIndex() {
return indexCalculator.calculateIndex();
}
public void asyncRefreshCache() {
CompletableFuture.runAsync(this::getCachedIndex);
}
#CacheEvict(cacheNames = "index")
public void evictCache() { }
public boolean isObsolete(Index index) {
if (index == null) {
return true;
}
return index.getTimestamp() < indexCalculator.getCurrentTimestamp();
}
}
EDIT1:
The caching abstraction based on @Cacheable and @CacheEvict will not work in this case. The behaviour is as follows: during a @Cacheable call, if the value is in the cache, it is returned from the cache; otherwise it is computed, put into the cache, and then returned. During a @CacheEvict call, the value is removed from the cache, so from that moment there is no value in the cache, and the first incoming call to the @Cacheable method forces recalculation and re-population of the cache. Using @CacheEvict(condition="...") only checks, during that call, whether to remove the value from the cache based on the condition. So after each invalidation the @Cacheable method will run the heavyweight routine to populate the cache.
To have the value stored in the cache manager and updated asynchronously, I would propose the following routine:
@Inject
@Qualifier("my-configured-caching")
private Cache cache;
private ReentrantLock lock = new ReentrantLock();
public Index getIndex() {
Index storedCache;
synchronized (this) {
storedCache = cache.get("singleKey_Or_AnythingYouWant", Index.class);
if (storedCache == null) {
this.lock.lock();
storedCache = indexCalculator.calculateIndex();
this.cache.put("singleKey_Or_AnythingYouWant", storedCache);
this.lock.unlock();
}
}
if (isObsolete(storedCache)) {
if (!lock.isLocked()) {
lock.lock();
this.asyncUpgrade();
}
}
return storedCache;
}
The first construct is synchronized, just to block all upcoming calls until the first call populates the cache.
Then the system checks whether the cache should be regenerated. If so, a single call for an asynchronous update of the value is made, and the current thread returns the cached value. Calls arriving while the cache is being recalculated simply return the most recent value from the cache, and so on.
With a solution like this you can reuse large cache stores, say a Hazelcast cache manager, as well as multiple key-based cache storages, while keeping your complex cache-refresh and eviction logic in one place.
Or, if you like the @Cacheable annotations, you can do it the following way:
@Cacheable(cacheNames = "index", sync = true)
public Index getCachedIndex() {
return new Index();
}
#CachePut(cacheNames = "index")
public Index putIntoCache() {
return new Index();
}
public Index getIndex() {
Index latestIndex = getCachedIndex();
if (isObsolete(latestIndex)) {
recalculateCache();
}
return latestIndex;
}
private ReentrantLock lock = new ReentrantLock();
@Async
public void recalculateCache() {
// tryLock avoids the check-then-lock race; unlock sits in finally so the lock is always released
if (lock.tryLock()) {
try {
putIntoCache();
} finally {
lock.unlock();
}
}
}
Which is almost the same, as above, but reuses spring's Caching annotation abstraction.
ORIGINAL:
Why are you trying to resolve this via caching? If this is a simple value (not key-based), you can organize your code in a simpler manner, keeping in mind that a Spring service is a singleton by default.
Something like this:
@Service
public static class IndexService {
@Autowired
private IndexCalculator indexCalculator;
private Index storedCache;
private ReentrantLock lock = new ReentrantLock();
public Index getIndex() {
if (storedCache == null ) {
synchronized (this) {
this.lock.lock();
Index result = indexCalculator.calculateIndex();
this.storedCache = result;
this.lock.unlock();
}
}
if (isObsolete()) {
if (!lock.isLocked()) {
lock.lock();
this.asyncUpgrade();
}
}
return storedCache;
}
@Async
public void asyncUpgrade() {
Index result = indexCalculator.calculateIndex();
synchronized (this) {
this.storedCache = result;
}
this.lock.unlock();
}
public boolean isObsolete() {
long currentTimestamp = indexCalculator.getCurrentTimestamp();
if (storedCache == null || storedCache.getTimestamp() < currentTimestamp) {
return true;
} else {
return false;
}
}
}
I.e. the first call is synchronized and you have to wait until the results are populated. Then, if the stored value is obsolete, the system performs an asynchronous update of the value, but the current thread receives the stored "cached" value.
I also introduced a reentrant lock to restrict the upgrade of the stored index to a single thread at a time.
I would use a Guava LoadingCache in your index service, like shown in the code sample below:
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
.maximumSize(1000)
.refreshAfterWrite(1, TimeUnit.MINUTES)
.build(
new CacheLoader<Key, Graph>() {
public Graph load(Key key) { // no checked exception
return getGraphFromDatabase(key);
}
public ListenableFuture<Graph> reload(final Key key, Graph prevGraph) {
if (neverNeedsRefresh(key)) {
return Futures.immediateFuture(prevGraph);
} else {
// asynchronous!
ListenableFutureTask<Graph> task = ListenableFutureTask.create(new Callable<Graph>() {
public Graph call() {
return getGraphFromDatabase(key);
}
});
executor.execute(task);
return task;
}
}
});
You can create an async reloading cache loader by calling Guava's method:
public abstract class CacheLoader<K, V> {
...
public static <K, V> CacheLoader<K, V> asyncReloading(
final CacheLoader<K, V> loader, final Executor executor) {
...
}
}
The trick is to run the reload operation in a separate thread, using a ThreadPoolExecutor for example:
On first call, the cache is populated by the load() method, thus it may take some time to answer,
On subsequent calls, when the value needs to be refreshed, it's being computed asynchronously while still serving the stale value. It will serve the updated value once the refresh has completed.
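For example, a sketch of wiring refreshAfterWrite together with an asynchronously reloading loader (the String key type and the loader body are placeholders; calculateIndex() is the service method from the question):
ExecutorService executor = Executors.newFixedThreadPool(1);
LoadingCache<String, Index> indexCache = CacheBuilder.newBuilder()
    .refreshAfterWrite(1, TimeUnit.MINUTES)
    .build(CacheLoader.asyncReloading(
        new CacheLoader<String, Index>() {
            @Override
            public Index load(String key) {
                return calculateIndex(); // blocks only the very first caller
            }
        },
        executor)); // reload() runs on the executor; the stale value is served meanwhile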
I think it can be something like
@Autowired
IndexService indexService; // self injection
@Cacheable(cacheNames = INDEX_CACHE_NAME)
@CacheEvict(cacheNames = INDEX_CACHE_NAME, condition = "target.isObsolete(#result) && #indexService.calculateIndexAsync()")
public Index getIndex() {
return calculateIndex();
}
public boolean calculateIndexAsync() {
someAsyncService.run(new Runable() {
public void run() {
indexService.updateIndex(); // require self reference to use Spring caching proxy
}
});
return true;
}
@CachePut(cacheNames = INDEX_CACHE_NAME)
public Index updateIndex() {
return calculateIndex();
}
The above code has a problem: if you call getIndex() again while the index is being updated, it will be calculated again. To prevent this, it is better not to use @CacheEvict and instead let @Cacheable return the obsolete value until the index has been recalculated.
@Autowired
IndexService indexService; // self injection
@Cacheable(cacheNames = INDEX_CACHE_NAME, condition = "!(target.isObsolete(#result) && #indexService.calculateIndexAsync())")
public Index getIndex() {
return calculateIndex();
}
public boolean calculateIndexAsync() {
if (!someThreadSafeService.isIndexBeingUpdated()) {
someAsyncService.run(new Runable() {
public void run() {
indexService.updateIndex(); // require self reference to use Spring caching proxy
}
});
}
return false;
}
@CachePut(cacheNames = INDEX_CACHE_NAME)
public Index updateIndex() {
return calculateIndex();
}
I would like to know the best mechanism to implement a multiple-producer / single-consumer scenario where I have to keep the current number of unprocessed requests up to date.
My first thought was to use ConcurrentLinkedQueue:
public class SomeQueueAbstraction {
private Queue<SomeObject> concurrentQueue = new ConcurrentLinkedQueue<>();
private int size;
public void add(Object request) {
SomeObject object = convertIncomingRequest(request);
concurrentQueue.add(object);
size++;
}
public SomeObject getHead() {
SomeObject object = concurrentQueue.poll();
size--;
return object;
}
// other methods
The problem with this is that I have to explicitly synchronize the add and the size++, as well as the poll and the size--, to always have an accurate size, which makes the ConcurrentLinkedQueue pointless to begin with.
What would be the best way to achieve as good as possible performance while maintaining data consistency ?
Should I use ArrayDeque instead and explicitly synchronize, or is there a better way to achieve this?
There is sort of similar question/answer here:
java.util.ConcurrentLinkedQueue
where it is discussed how composite operations on ConcurrentLinkedQueue are naturally not atomic, but there is no direct answer as to the best option for the given scenario.
Note: I am calculating size explicitly because the time complexity of the built-in size() method is O(n).
Note 2: I am also worried that the getSize() method, which I haven't explicitly written, will add even more contention overhead. It could be called relatively frequently.
I am looking for the most efficient way to handle multiple producers and a single consumer with frequent getSize() calls.
Alternative suggestion: if there were an elementId in the SomeObject structure, I could derive the current size from ConcurrentLinkedQueue.poll(), and locking would only be needed within the mechanism that generates such ids. Add and get could then be used without additional locking. How would this fare as an alternative?
So the requirement is to report an up-to-date current number of unprocessed requests, and this is requested often, which indeed makes ConcurrentLinkedQueue.size() unsuitable.
This can be done using an AtomicInteger: it is fast and is always as close to the current number of unprocessed requests as possible.
Here is an example, note some small updates to ensure that the reported size is accurate:
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
public class SomeQueueAbstraction {
private final Queue<SomeObject> concurrentQueue = new ConcurrentLinkedQueue<>();
private final AtomicInteger size = new AtomicInteger();
public boolean add(Object request) {
SomeObject object = convertIncomingRequest(request);
if (concurrentQueue.add(object)) {
size.incrementAndGet();
return true;
}
return false;
}
public SomeObject remove() {
SomeObject object = concurrentQueue.poll();
if (object != null) {
size.decrementAndGet();
}
return object;
}
public int getSize() { return size.get(); }
private SomeObject convertIncomingRequest(Object request) {
return new SomeObject(getSize());
}
class SomeObject {
int id;
SomeObject(int id) { this.id = id; }
}
}
You can use an explicit lock, which means you probably won't need a concurrent queue.
public class SomeQueueAbstraction {
private Queue<SomeObject> queue = new LinkedList<>();
private volatile int size;
private Object lock = new Object();
public void add(Object request) {
SomeObject object = convertIncomingRequest(request);
synchronized(lock) {
queue.add(object);
size++;
}
}
public SomeObject getHead() {
SomeObject object = null;
synchronized(lock) {
object = queue.poll();
size--;
}
return object;
}
public int getSize() {
synchronized(lock) {
return size;
}
}
// other methods
}
This way, adding/removing elements to/from the queue and updating the size will be done safely.
I want to display a list of records from the database in an output text field. I'm having a problem with the method that brings records from the database: it causes an infinite loop because it is called in the constructor of the managed bean class. Here is the code.
Constructor of managed bean class:
public InterViewDto() throws SQLException {
User u = getCurrentUser();
InterviewDao d = new InterviewDao();
List<InterViewDto> dao1 = d.getCall(u.getEmailAddress());
setDto(dao1);
}
The method that fetches records from the database:
public List<InterViewDto> getCall(String email) throws SQLException {
System.out.print("fyc");
List<InterViewDto> list = new ArrayList<InterViewDto>();
String job = null;
boolean exists = false;
Connection c = null;
try {
c = openConnection();
String query_check = "SELECT * FROM interviewcall WHERE useremail = '"+email+"' ";
Statement st = c.createStatement();
ResultSet rs = st.executeQuery(query_check);
while (rs.next()) {
InterViewDto dto = new InterViewDto();
dto.setDate( rs.getDate("time"));
dto.setJobtitle( rs.getString("jobtitle"));
dto.setJobtitle( rs.getString("useremail"));
list.add(dto);
System.out.print(list.get(0).getJobtitle());
} rs.close();
} catch (Exception e) {
System.out.println(e);
} finally {
c.close();
}
return list;
}
You have a circular dependency. Your constructor for the DTO reaches out to the database, which in turn creates a new DTO to represent the data loaded from the database, which goes to the database and back and forth until you overflow the call stack.
Quite simply, you have merged two complementary design approaches.
Either your InterViewDto constructor loads data from the DAO or the DAO constructs a new InterViewDto object. Pick one or the other.
In my opinion, it makes more sense for the DAO to create the DTO objects. If you want the DTO to delegate to the DAO as a matter of convenience, consider a static method.
public class InterViewDto {
public InterViewDto() {
}
...
public static List<InterViewDto> fromCurrentUser() throws SQLException {
return new InterviewDao().getCall(getCurrentUser().getEmailAddress());
}
}
Then change your constructor to be empty.
NOTE: Please ignore my use of MultivaluedMap instead of multiple varargs String... args.
Is there a standard way in java of doing this?
What I have is a resource that is returned from a remote server. But before each query the remote connection must be opened, and after the results are returned it must be closed.
So a natural way of doing this is something like:
Connection c = config.configureConnection();
c.open(); //open
List<Car> cars;
try{
cars = c.getCars();
}finally{
c.close(); //close
}
Now I want to implement something that operates on the level of the resources themselves, without worrying about connection, for example:
List<Car> cars = new CarResource().all(); //opens and closes connection
The way I am currently doing it is by having one abstract class, AbstractQueriable, that calls the abstract methods query(String ...args) and query(int id), which any class extending it must implement.
AbstractQueriable implements the Queriable interface, which exposes the three public-facing methods filter(String ...args), all() and get(int id).
Here is the Queriable interface:
public interface Queriable <T>{
public T get(String id);
/** Simply returns all resources */
public Collection<T> all();
public Collection<T> filter(MultivaluedMap<String, String> args);
}
here is the AbstractQueriable class that implements it:
public abstract class AbstractQueriable<T> implements Queriable<T> {
@Override
public final T get(String id) {
setup();
try {
return query(id);
} finally {
cleanup();
}
}
@Override
public final Collection<T> filter(MultivaluedMap<String, String> args) {
setup();
try {
return query(args);
} finally {
cleanup();
}
}
/**
* Returns all resources.
*
* This is a convenience method that is equivalent to passing an empty
* arguments list to the filter function.
*
* @return The collection of all resources if possible
*/
@Override
public final Collection<T> all() {
return filter(null);
}
/**
* Queries for a resource by id.
*
* @param id
* id of the resource to return
* @return
*/
protected abstract T query(String id);
/**
* Queries for a resource by given arguments.
*
* @param args
* Map of arguments, where each key is the argument name, and the
* corresponding values are the values
* @return The collection of resources found
*/
protected abstract Collection<T> query(MultivaluedMap<String, String> args);
private void cleanup() {
Repository.close();
}
private void setup() {
Repository.open();
}
}
and finally my resource, which I want to use in the code, must extend the AbstractQueriable class, for example (please note that the details of these methods are not important):
public class CarRepositoryResource extends AbstractQueriable<Car> {
@Override
protected Car query(String id) {
MultivaluedMap<String, String> params = new MultivaluedMapImpl();
params.add("CarID", id);
// Delegate the query to the parametarized version
Collection<Car> cars = query(params);
if (cars == null || cars.size() == 0) {
throw new WebApplicationException(Response.Status.NOT_FOUND);
}
if (cars.size() > 1) {
throw new WebApplicationException(Response.Status.NOT_FOUND);
}
return cars.iterator().next();
}
@Override
protected Collection<Car> query(MultivaluedMap<String, String> params) {
Collection<Car> cars = new ArrayList<Car>();
Response response = Repository.getConnection().doQuery("Car");
while (response.next()) {
Car returned = response.getResult();
if (returned != null) {
cars.add(returned);
}
}
return cars;
}
}
which finally, I can use in my code:
Collection<Car> cars = new CarRepositoryResource().all();
//... display cars to the client etc...
There are a few things I don't like about this kind of setup:
I must instantiate a new instance of my "CarRepositoryResource" every time I do a query.
The method names "query", while internal and private, are still confusing and clunky.
I am not sure if there is a better pattern or framework out there.
The connection that I am using does not support/implement the JDBC API and is not SQL-based.
You could use a variation of the (in)famous Open session in view pattern.
Basically it comes down to this:
Define a "context" in which connections are available
(usually the request in web applications)
Handle (possibly lazy) initialization and release of a connection when entering/exiting the context
Code your methods taking for granted they will only be used inside such a context
It is not difficult to implement (storing the connection in a static ThreadLocal to make it thread safe) and will definitely spare a few open/close calls (performance-wise that could be a big gain, depending on how heavy your connection is).
The context class could look something like (consider this pseudo-code);
public class MyContext{
private static final
ThreadLocal<Connection> connection = new ThreadLocal<Connection>();
public static void enter() {
connection.set(initializeConnection());
// this is eager initialization
// if you think it will often be the case that no connection is actually
// required inside a context, you can defer the actual initialization
// until the first call to get()
}
public static void exit() {
try { connection.get().close(); }
catch(Throwable t) { /* panic! */ }
finally { connection.remove(); }
}
public static Connection get() {
Connection c = connection.get();
if (c == null) throw new IllegalStateException("blah blah");
return c;
}
}
Then you would use connections like this:
MyContext.enter();
try {
// connections are available here:
// anything that calls MyContext.get()
// gets (the same) valid connection instance
} finally {
MyContext.exit();
}
This block can be put wherever you want (in webapps it usually wraps the processing of each request) - from the main method if you are coding a simple case when you want a single connection available for the whole lifespan of your application, to the finest methods in your API.
You might want to take a look at fluent interfaces (with an interesting example here) and its "Builder" pattern.
You would query like this:
cars().in(DB).where(id().isEqualTo(1234));
This way you can hide the connection/disconnection code in the outermost cars() method, for example.
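As a hedged sketch of that idea, reusing the open/close pattern from earlier in the question (cars() and CarResource are hypothetical names):
public class CarResource {
    // static entry point of the fluent chain
    public static CarResource cars() { return new CarResource(); }

    // terminal operation: hides the open/try/finally/close dance from the caller
    public List<Car> all() {
        Connection c = config.configureConnection();
        c.open();
        try {
            return c.getCars();
        } finally {
            c.close();
        }
    }
}

List<Car> cars = CarResource.cars().all();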
I need to fetch many records (10-20k) from an RDBMS in Java; my target system expects them to be available as a Java List. So I want to implement my code as a "virtual list" where I actually only fetch the records I need. I expect SQL like
SELECT * FROM CUSTOMER WHERE COUNTRY = 'Moldovia'
as a parameter and just return what is requested. Most likely the data will be requested in batches of 50. Any hints on how to do that?
Unless you expect your clients to randomly access the data, you're probably better off returning an Iterator. Also, take a look at ResultSet.setFetchSize: http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#setFetchSize(int)
So something like:
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Iterator;
public class FooResultSetIterator implements Iterator<Foo>
{
private final ResultSet resultSet;
private boolean hasNext;
FooResultSetIterator(final ResultSet resultSet, final int fetchSize) throws SQLException
{
this.resultSet = resultSet;
this.resultSet.setFetchSize(fetchSize);
this.hasNext = resultSet.next();
}
@Override
public boolean hasNext()
{
return hasNext;
}
@Override
public Foo next()
{
final Foo foo = new Foo(resultSet);
try
{
this.hasNext = resultSet.next();
}
catch (final SQLException e)
{
throw new RuntimeException(e);
}
return foo;
}
@Override
public void remove()
{
throw new UnsupportedOperationException("Cannot remove items from a ResultSetIterator");
}
}
class Foo
{
public Foo(ResultSet resultSet)
{
// TODO Auto-generated constructor stub
}
}
Use OFFSET and LIMIT in your query:
SELECT * FROM CUSTOMER WHERE
COUNTRY = 'Moldovia' LIMIT 50 OFFSET 50
Assuming of course that your SQL dialect allows it. The example will return rows 51-100.
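For the batches of 50, a hedged JDBC sketch (assuming an open java.sql.Connection and a dialect that accepts LIMIT/OFFSET with placeholders):
String sql = "SELECT * FROM CUSTOMER WHERE COUNTRY = 'Moldovia' LIMIT ? OFFSET ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setInt(1, 50);     // batch size
    ps.setInt(2, offset); // e.g. 50 yields rows 51-100
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // map the current row into the batch list
        }
    }
}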