I'm new to JNA and am trying to retrieve storage pool info using the libvirt C library and JNA.
There are two libvirt c functions, virConnectListAllStoragePools() and virStoragePoolGetInfo(), as shown below.
/**
* virConnectListAllStoragePools:
* @conn: Pointer to the hypervisor connection.
* @pools: Pointer to a variable to store the array containing storage pool
* objects or NULL if the list is not required (just returns number
* of pools).
* @flags: bitwise-OR of virConnectListAllStoragePoolsFlags.
*
* Collect the list of storage pools, and allocate an array to store those
* objects. This API solves the race inherent between
* virConnectListStoragePools and virConnectListDefinedStoragePools.
*
* Normally, all storage pools are returned; however, @flags can be used to
* filter the results for a smaller list of targeted pools. The valid
* flags are divided into groups, where each group contains bits that
* describe mutually exclusive attributes of a pool, and where all bits
* within a group describe all possible pools.
*
* The first group of @flags is VIR_CONNECT_LIST_STORAGE_POOLS_ACTIVE (online)
* and VIR_CONNECT_LIST_STORAGE_POOLS_INACTIVE (offline) to filter the pools
* by state.
*
* The second group of @flags is VIR_CONNECT_LIST_STORAGE_POOLS_PERSISTENT
* (defined) and VIR_CONNECT_LIST_STORAGE_POOLS_TRANSIENT (running but not
* defined), to filter the pools by whether they have persistent config or not.
*
* The third group of @flags is VIR_CONNECT_LIST_STORAGE_POOLS_AUTOSTART
* and VIR_CONNECT_LIST_STORAGE_POOLS_NO_AUTOSTART, to filter the pools by
* whether they are marked as autostart or not.
*
* The last group of @flags is provided to filter the pools by the types,
* the flags include:
* VIR_CONNECT_LIST_STORAGE_POOLS_DIR
* VIR_CONNECT_LIST_STORAGE_POOLS_FS
* VIR_CONNECT_LIST_STORAGE_POOLS_NETFS
* VIR_CONNECT_LIST_STORAGE_POOLS_LOGICAL
* VIR_CONNECT_LIST_STORAGE_POOLS_DISK
* VIR_CONNECT_LIST_STORAGE_POOLS_ISCSI
* VIR_CONNECT_LIST_STORAGE_POOLS_SCSI
* VIR_CONNECT_LIST_STORAGE_POOLS_MPATH
* VIR_CONNECT_LIST_STORAGE_POOLS_RBD
* VIR_CONNECT_LIST_STORAGE_POOLS_SHEEPDOG
* VIR_CONNECT_LIST_STORAGE_POOLS_GLUSTER
* VIR_CONNECT_LIST_STORAGE_POOLS_ZFS
* VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE
* VIR_CONNECT_LIST_STORAGE_POOLS_ISCSI_DIRECT
*
* Returns the number of storage pools found or -1 and sets @pools to
* NULL in case of error. On success, the array stored into @pools is
* guaranteed to have an extra allocated element set to NULL but not included
* in the return count, to make iteration easier. The caller is responsible
* for calling virStoragePoolFree() on each array element, then calling
* free() on @pools.
*/
int
virConnectListAllStoragePools(virConnectPtr conn,
                              virStoragePoolPtr **pools,
                              unsigned int flags)
{
    // logic that stores the allocated pool-pointer array into '**pools' and returns the number of pools
}
/**
* virStoragePoolGetInfo:
* @pool: pointer to storage pool
* @info: pointer at which to store info
*
* Get volatile information about the storage pool
* such as free space / usage summary
*
* Returns 0 on success, or -1 on failure.
*/
int
virStoragePoolGetInfo(virStoragePoolPtr pool,
                      virStoragePoolInfoPtr info)
{
    // logic that stores the pool info into the 'info' parameter
}
My Java code with JNA is:
1) StoragePoolPointer.java
import com.sun.jna.PointerType;
public class StoragePoolPointer extends PointerType {}
2) virStoragePoolInfo
import java.util.Arrays;
import java.util.List;
import com.sun.jna.Structure;

public class virStoragePoolInfo extends Structure {
    public int state;
    public long capacity;
    public long allocation;
    public long available;
    private static final List<String> fields = Arrays.asList("state", "capacity", "allocation", "available");
    protected List<String> getFieldOrder() {
        return fields;
    }
}
3) Libvirt.java interface
public interface Libvirt extends Library {
    int virConnectListAllStoragePools(ConnectionPointer VCP, StoragePoolPointer[] pPointers, int flags);
    int virStoragePoolGetInfo(StoragePoolPointer poolPointer, virStoragePoolInfo info);
}
4) JNA test code
public void listAllStoragePools(ConnectionPointer VCP) {
    Libvirt libvirt = (Libvirt) Native.loadLibrary("virt", Libvirt.class);
    StoragePoolPointer[] pPointerArr = new StoragePoolPointer[10];
    int result = libvirt.virConnectListAllStoragePools(VCP, pPointerArr, 64);
    // result is fine: I get the storage pool count, and I also get StoragePoolPointer objects inside the array.
    virStoragePoolInfo poolInfo = new virStoragePoolInfo();
    libvirt.virStoragePoolGetInfo(pPointerArr[0], poolInfo);
    // When I call libvirt.virStoragePoolGetInfo(), I get an error like this:
    // 'libvirt: Storage Driver error : invalid storage pool pointer in virStoragePoolGetInfo'
    // org.libvirt.LibvirtException: invalid storage pool pointer in virStoragePoolGetInfo
}
I think I defined the interface method virConnectListAllStoragePools() incorrectly and am passing the wrong argument for 'virStoragePoolPtr **pools'.
Native arrays are laid out in contiguous memory. In this case, virStoragePoolPtr **pools is a single pointer to the beginning of the array; and C uses its knowledge of pointer size to use offsets to find the remaining elements.
JNA does not translate (most) Java arrays to this native layout. In the cases where it does (primitive arrays like int[], or arrays of Structure whose size JNA can calculate and allocate with its toArray() method), it relies on clearly defined element sizes.
You have defined a Java-side array here:
StoragePoolPointer[] pPointerArr = new StoragePoolPointer[10];
But you have not allocated any Java objects to fill the array with, so it's simply an array of nulls. Further, even if you did fill the array by iterating and setting pPointerArr[i] = new StoragePoolPointer() you would have non-contiguous native memory attached to those pointers.
There are (at least) two approaches to solving this. One is directly allocating the native memory using JNA's Memory class, like this:
Pointer pPointerArr = new Memory(10 * Native.POINTER_SIZE);
After the method call to virConnectListAllStoragePools(), you'd have the pointers in this buffer and could iterate, e.g., Pointer p = pPointerArr.share(i * Native.POINTER_SIZE) would be each pointer as you incremented i.
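To make that offset arithmetic concrete, here is a plain-Java sketch (no libvirt or JNA needed) that simulates a contiguous array of ten 8-byte "pointers" in a ByteBuffer and reads one back by offset, the same way share(i * Native.POINTER_SIZE) does; the class name and fake addresses are illustrative only:

```java
import java.nio.ByteBuffer;

public class PointerArrayLayout {
    public static void main(String[] args) {
        // Simulate what the native side sees: ten 8-byte "pointers"
        // packed back-to-back in one contiguous buffer.
        final int PTR_SIZE = 8; // stand-in for Native.POINTER_SIZE on a 64-bit JVM
        ByteBuffer buf = ByteBuffer.allocate(10 * PTR_SIZE);
        for (int i = 0; i < 10; i++) {
            buf.putLong(i * PTR_SIZE, 0x1000L + i); // fake addresses
        }
        // Reading element i is pure offset arithmetic, just like
        // pPointerArr.share(i * Native.POINTER_SIZE) in JNA.
        long third = buf.getLong(2 * PTR_SIZE);
        System.out.println(Long.toHexString(third)); // element at index 2
    }
}
```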
Alternately, you could define a Structure containing a pointer, like this:
@FieldOrder({ "storagePool" })
public class StoragePoolPointer extends Structure {
public Pointer storagePool;
}
Then you could get both the Java array convenience and contiguous native memory with:
StoragePoolPointer[] pPointerArr =
(StoragePoolPointer[]) new StoragePoolPointer().toArray(10);
After populating the array, pPointerArr[i].storagePool would be the ith pointer.
Assuming the 10 you are using is just a test placeholder, the standard JNA idiom in this situation is to call virConnectListAllStoragePools() once with a null pointer to get the number of elements to use, then allocate the memory as I've described above and call it a second time with the properly sized buffer (or array of Structures backed by a buffer).
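That two-call idiom can be sketched in plain Java with a stand-in for the native function (the real call needs libvirt; listAllPools and its pool names here are hypothetical): call once with null to learn the count, size the buffer, then call again to fill it.

```java
import java.util.Arrays;

public class TwoCallIdiom {
    // Stand-in for the native call: fills 'out' if non-null, always returns the count.
    static int listAllPools(String[] out) {
        String[] pools = {"default", "images", "iso"};
        if (out != null) {
            System.arraycopy(pools, 0, out, 0, Math.min(out.length, pools.length));
        }
        return pools.length;
    }

    public static void main(String[] args) {
        int count = listAllPools(null);   // first call: just the count
        String[] buf = new String[count]; // allocate exactly enough room
        listAllPools(buf);                // second call: fill the buffer
        System.out.println(count + " " + Arrays.toString(buf));
    }
}
```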
We had the task of periodically checking the number of GDI objects a Java process is using on a Windows machine. We had a leak issue caused by some third-party library. When a certain limit is reached, the application more or less crashes. You can see the count in the Windows Task Manager when you add the GDI Objects column.
Since we didn't find an existing method we ended up using JNA-Platform-library to get access to the GetGuiResources API function of the user32.dll.
We did the following:
extended User32-interface and added a corresponding method
created an INSTANCE of the extended interface using Native.loadLibrary()
called the method of the INSTANCE by using Kernel32.INSTANCE.GetCurrentProcess() and DWORD(0)
The code now looks like this:
public class GdiTester {
private static interface ExtendedUser32 extends User32 {
/**
* Provides access to the user32.dll-API-Function GetGuiResources.
*
* @param hProcess the process
* @param uiFlags flags (e.g. 0 to get the number of GDI-objects)
* @return result of the API-function call (e.g. the number of GDI-objects)
*/
DWORD GetGuiResources(HANDLE hProcess, DWORD uiFlags);
}
private static ExtendedUser32 INSTANCE = Native.loadLibrary("user32", ExtendedUser32.class, W32APIOptions.DEFAULT_OPTIONS);
public static void main(String[] args) {
System.out.println("Number of GDI-objects: " + INSTANCE.GetGuiResources(Kernel32.INSTANCE.GetCurrentProcess(), new DWORD(0)).intValue());
}
}
Do you know an other (better) way to do this?
I'm working on a project that monitors a micro-service based system.
The mock micro-services I created produce data and upload it to Amazon
Kinesis; I use this code here from Amazon to produce to and consume from Kinesis. But I have failed to understand how I can add more processors
(workers) that will work on the same records list (possibly concurrently),
meaning I'm trying to figure out where and how to plug my code into the Amazon code I've included below.
I'm going to have two processors in my program:
Will save each record to a DB.
Will update a GUI that shows monitoring of the system, given that it can
compare a current transaction to a valid transaction. My valid transactions
will also be stored in a DB, meaning we will be able to see all of the data flow in the system and see how each request was handled from end to end.
I would really appreciate some guidance, as this is my first industry project and I'm also kind of new to AWS (though I have read about it a lot).
Thanks!
Here is the code from amazon taken from this link:
https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleConsumer.java
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Amazon Software License (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/asl/
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazonaws.services.kinesis.producer.sample;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownReason;
import com.amazonaws.services.kinesis.model.Record;
/**
* If you haven't looked at {@link SampleProducer}, do so first.
*
* <p>
* As mentioned in SampleProducer, we will check that all records are received
* correctly by the KCL by verifying that there are no gaps in the sequence
* numbers.
*
* <p>
* As the consumer runs, it will periodically log a message indicating the
* number of gaps it found in the sequence numbers. A gap is when the difference
* between two consecutive elements in the sorted list of seen sequence numbers
* is greater than 1.
*
* <p>
* Over time the number of gaps should converge to 0. You should also observe
* that the range of sequence numbers seen is equal to the number of records put
* by the SampleProducer.
*
* <p>
* If the stream contains data from multiple runs of SampleProducer, you should
* observe the SampleConsumer detecting this and resetting state to only count
* the latest run.
*
* <p>
* Note if you kill the SampleConsumer halfway and run it again, the number of
* gaps may never converge to 0. This is because checkpoints may have been made
* such that some records from the producer's latest run are not processed
* again. If you observe this, simply run the producer to completion again
* without terminating the consumer.
*
* <p>
* The consumer continues running until manually terminated, even if there are
* no more records to consume.
*
* @see SampleProducer
* @author chaodeng
*
*/
public class SampleConsumer implements IRecordProcessorFactory {
private static final Logger log = LoggerFactory.getLogger(SampleConsumer.class);
// All records from a run of the producer have the same timestamp in their
// partition keys. Since this value increases for each run, we can use it to
// determine which run is the latest and disregard data from earlier runs.
private final AtomicLong largestTimestamp = new AtomicLong(0);
// List of record sequence numbers we have seen so far.
private final List<Long> sequenceNumbers = new ArrayList<>();
// A mutex for largestTimestamp and sequenceNumbers. largestTimestamp is
// nevertheless an AtomicLong because we cannot capture non-final variables
// in the child class.
private final Object lock = new Object();
/**
* One instance of RecordProcessor is created for every shard in the stream.
* All instances of RecordProcessor share state by capturing variables from
* the enclosing SampleConsumer instance. This is a simple way to combine
* the data from multiple shards.
*/
private class RecordProcessor implements IRecordProcessor {
@Override
public void initialize(String shardId) {}
@Override
public void processRecords(List<Record> records, IRecordProcessorCheckpointer checkpointer) {
long timestamp = 0;
List<Long> seqNos = new ArrayList<>();
for (Record r : records) {
// Get the timestamp of this run from the partition key.
timestamp = Math.max(timestamp, Long.parseLong(r.getPartitionKey()));
// Extract the sequence number. It's encoded as a decimal
// string and placed at the beginning of the record data,
// followed by a space. The rest of the record data is padding
// that we will simply discard.
try {
byte[] b = new byte[r.getData().remaining()];
r.getData().get(b);
seqNos.add(Long.parseLong(new String(b, "UTF-8").split(" ")[0]));
} catch (Exception e) {
log.error("Error parsing record", e);
System.exit(1);
}
}
synchronized (lock) {
if (largestTimestamp.get() < timestamp) {
log.info(String.format(
"Found new larger timestamp: %d (was %d), clearing state",
timestamp, largestTimestamp.get()));
largestTimestamp.set(timestamp);
sequenceNumbers.clear();
}
// Only add to the shared list if our data is from the latest run.
if (largestTimestamp.get() == timestamp) {
sequenceNumbers.addAll(seqNos);
Collections.sort(sequenceNumbers);
}
}
try {
checkpointer.checkpoint();
} catch (Exception e) {
log.error("Error while trying to checkpoint during ProcessRecords", e);
}
}
@Override
public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {
log.info("Shutting down, reason: " + reason);
try {
checkpointer.checkpoint();
} catch (Exception e) {
log.error("Error while trying to checkpoint during Shutdown", e);
}
}
}
/**
* Log a message indicating the current state.
*/
public void logResults() {
synchronized (lock) {
if (largestTimestamp.get() == 0) {
return;
}
if (sequenceNumbers.size() == 0) {
log.info("No sequence numbers found for current run.");
return;
}
// The producer assigns sequence numbers starting from 1, so we
// start counting from one before that, i.e. 0.
long last = 0;
long gaps = 0;
for (long sn : sequenceNumbers) {
if (sn - last > 1) {
gaps++;
}
last = sn;
}
log.info(String.format(
"Found %d gaps in the sequence numbers. Lowest seen so far is %d, highest is %d",
gaps, sequenceNumbers.get(0), sequenceNumbers.get(sequenceNumbers.size() - 1)));
}
}
@Override
public IRecordProcessor createProcessor() {
return this.new RecordProcessor();
}
public static void main(String[] args) {
KinesisClientLibConfiguration config =
new KinesisClientLibConfiguration(
"KinesisProducerLibSampleConsumer",
SampleProducer.STREAM_NAME,
new DefaultAWSCredentialsProviderChain(),
"KinesisProducerLibSampleConsumer")
.withRegionName(SampleProducer.REGION)
.withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON);
final SampleConsumer consumer = new SampleConsumer();
Executors.newScheduledThreadPool(1).scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
consumer.logResults();
}
}, 10, 1, TimeUnit.SECONDS);
new Worker.Builder()
.recordProcessorFactory(consumer)
.config(config)
.build()
.run();
}
}
Your question is very broad, but here are some suggestions on Kinesis consumers hopefully relevant to your use case.
Each Kinesis stream is partitioned into one or more shards. There are limitations imposed per shard, like you can't write more than a MiB of data per second into a shard, and you can't initiate more than 5 GetRecords (which consumer's processRecords calls under the hood) requests per second to a single shard. (See full list of constraints here.) If you are working with amounts of data that come close to or exceed these constraints, you'd want to increase the number of shards in your stream.
When you have only one consumer application and one worker, it takes the responsibility of processing all shards of the corresponding stream. If there are multiple workers, they each assume responsibility for some subset of shards, so that each shard is assigned to one and only one worker (if you watch consumer logs, you can find this referenced as "taking leases" on shards).
If you'd like to have several processors that independently ingest Kinesis traffic and process records, you need to register two separate consumer applications. In the code you referenced above, the application name is the first parameter of KinesisClientLibConfiguration constructor. Note that even though they are separate consumer apps, the limit of total of 5 GetRecords per second still applies.
In other words, you need to have two separate processes, one will instantiate the consumer that talks to DB, the other will instantiate the consumer that updates GUI:
KinesisClientLibConfiguration databaseSaverKclConfig =
new KinesisClientLibConfiguration(
"DatabaseSaverKclApp",
"your-stream",
new DefaultAWSCredentialsProviderChain(),
// I believe worker ids don't need to be unique, but it's a good practice to make them unique so you can easily identify the workers
"unique-worker-id")
.withRegionName(SampleProducer.REGION)
// this only matters the very first time your consumer is launched, subsequent launches will read the checkpoint from the previous runs
.withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON);
final IRecordProcessorFactory databaseSaverConsumer = new DatabaseSaverConsumer();
KinesisClientLibConfiguration guiUpdaterKclConfig =
new KinesisClientLibConfiguration(
"GuiUpdaterKclApp",
"your-stream",
new DefaultAWSCredentialsProviderChain(),
"unique-worker-id")
.withRegionName(SampleProducer.REGION)
.withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON);
final IRecordProcessorFactory guiUpdaterConsumer = new GuiUpdaterConsumer();
What about the implementation of DatabaseSaverConsumer and GuiUpdaterConsumer? Each of them needs to implement custom logic in processRecords method. You need to make sure that each of them does the right amount of work inside this method, and that checkpoint logic is sound. Let's decipher these:
Let's say processRecords takes 10 seconds for 100 records, but the corresponding shard receives 500 records in 10 seconds. Every subsequent invocation of processRecords would be falling further behind the shard. That means that either some work needs to be extracted out of processRecords, or number of shards needs to be scaled up.
Conversely, if processRecords only takes 0.1 seconds, then processRecords will be called 10 times a second, exceeding the allotted 5 transactions per second per shard. If I understand/remember correctly, there is no way to add a pause between subsequent calls to processRecords in the KCL config, so you have to add a sleep inside your code.
Checkpointing: each worker needs to track its progress, so that if it's unexpectedly interrupted and another worker takes over the same shard, it knows where to continue from. It's usually done in one of two ways: at the beginning of processRecords, or at the end. In the former case, you are saying "I am okay with jumping over some records in the stream, but definitely don't want to process them twice"; in the latter, you are saying "I am okay processing some records twice, but definitely can't lose any of them". (When you need the best of both worlds, i.e. to process records once and only once, you need to keep the state in some datastore outside the workers.) In your case, the database writer most probably needs to checkpoint after processing; I am not so sure about the GUI.
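The trade-off is easy to see in a toy simulation (plain Java, no KCL; the class and its record values are made up): a worker takes a three-record batch, crashes after processing two records, and a replacement resumes from the last checkpoint. Checkpointing before the work skips the unprocessed record (at-most-once); checkpointing after it replays the whole batch (at-least-once).

```java
import java.util.ArrayList;
import java.util.List;

public class CheckpointDemo {
    // A worker takes a 3-record batch but crashes after processing 2 records;
    // a replacement worker then resumes from the last checkpoint.
    static List<Integer> runWithCrash(boolean checkpointAtStart) {
        int[] stream = {1, 2, 3, 4, 5, 6};
        List<Integer> processed = new ArrayList<>();
        int checkpoint = 0;
        if (checkpointAtStart) checkpoint = 3;                // checkpoint the batch up front
        for (int i = 0; i < 2; i++) processed.add(stream[i]); // ...then crash before record 3
        // (an end-of-batch checkpoint would go here, but is never reached)
        for (int i = checkpoint; i < stream.length; i++) processed.add(stream[i]);
        return processed;
    }

    public static void main(String[] args) {
        System.out.println("checkpoint first: " + runWithCrash(true));  // record 3 is lost
        System.out.println("checkpoint last:  " + runWithCrash(false)); // records 1 and 2 repeat
    }
}
```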
Speaking of the GUI, what do you use to display data, and why does a Kinesis consumer need to update it, rather than the GUI itself querying the underlying datastores?
Anyway, I hope this helps. Let me know if you have more specific questions.
I couldn't find how to do this anywhere else online, though I'm sure it's really easy to do. I'm primarily self-taught, and I'd like to start learning to document my code properly. This "yellow box" that pops up in Eclipse with information about a method - I want it to pop up for a custom object. In my example below I have a custom class called "SystemProperties" and a method called "getOs", but when I hover over that method, no information comes up. How do I add information to my object?
This picture shows the yellow box
This picture shows the lack of a "yellow box" on my object
and then finally my custom objects code...
public class SystemProperties {
private String os;
public SystemProperties() {
this.os = setOs();
}
private String setOs() {
String osName = System.getProperty("os.name");
if(osName.toLowerCase().contains("window"))
return "Windows";
else if(osName.toLowerCase().contains("mac"))
return "Mac";
else
return "Linux";
}
/**
* Method to grab the OS the user is running from
* @return String - the os
*/
public String getOs() {
return this.os;
}
}
Thank you in advance for your time and knowledge. :)
EDIT:
When I import the project of the custom object, it works just fine. It only doesn't work when I export the project of the custom class to a jar file and then use that instead. Do I have to click an option on the export screen?
Eclipse takes the info from the Javadoc comments above the methods in the built-in objects.
see this:
/**
* Returns <tt>true</tt> if this map contains a mapping for the specified
* key. More formally, returns <tt>true</tt> if and only if
* this map contains a mapping for a key <tt>k</tt> such that
* <tt>(key==null ? k==null : key.equals(k))</tt>. (There can be
* at most one such mapping.)
*
* @param key key whose presence in this map is to be tested
* @return <tt>true</tt> if this map contains a mapping for the specified
* key
* @throws ClassCastException if the key is of an inappropriate type for
* this map
* (optional)
* @throws NullPointerException if the specified key is null and this map
* does not permit null keys
* (optional)
*/
boolean containsKey(Object key);
You can do the same to the methods of your own objects.
I am having some problems with Javadoc. I have written documentation for the variables of a class, and I want to reuse that same Javadoc in the constructor. I didn't seem to be able to use @link or @see for this purpose (well, NetBeans didn't show the result I liked).
It seems like a hassle to copy-paste everything, so is there a tag/parameter to copy Javadoc?
Here is the example:
/**
* The id for identifying this specific detectionloop. It is assumed the
* Detectionloops are numbered in order, so Detectionloop '2' is always next to
* Detectionloop '1'.
*/
private int id;
/**
* Constructor for a detectionloop. Detectionloops are real-world sensors
* that register and identify a kart when it passes by. Please note that
* this class is still under heavy development and the parameters of the
* constructor may change along the way!
*
* @param id The id for identifying this specific detectionloop. It is assumed
* the Detectionloops are numbered in order, so Detectionloop '2' is always
* next to Detectionloop '1'.
* @param nextID The id of the next detectionloop in sequence.
* @param distanceToNext The distance in meters to the next detectionloop.
*/
DetectionLoop(int id, int nextID, int distanceToNext) {
this.distanceToNext = distanceToNext;
this.id = id;
if (Detectionloops.containsKey(id)) {
throw new IllegalArgumentException("Detectionloop " + this.id
+ " already exist, please use a unused identification!");
} else {
Detectionloops.put(this.id, this);
}
}
This is unfortunately impossible using standard Javadoc. As a workaround, you could use the #link tag to reference the field, and then people could click the link to get at its documentation. This would require a click, but at least you don't have to maintain redundant documentation:
/**
* ...
* @param id the value for {@link #id}
The only other way of solving this that I know of is to write a custom doclet, which would allow you to define your own tag for your purpose.
There is an interesting article here on maintaining backwards compatibility for Java. In the wrapper class section, I can't actually understand what the wrapper class accomplishes. In the following code from MyApp, WrapNewClass.checkAvailable() could be replaced by Class.forName("NewClass").
static {
try {
WrapNewClass.checkAvailable();
mNewClassAvailable = true;
} catch (Throwable ex) {
mNewClassAvailable = false;
}
}
Consider when NewClass is unavailable. In the code where we use the wrapper (see below), all we have done is replace a class that doesn't exist, with one that exists, but which can't be compiled as it uses a class that doesn't exist.
public void diddle() {
if (mNewClassAvailable) {
WrapNewClass.setGlobalDiv(4);
WrapNewClass wnc = new WrapNewClass(40);
System.out.println("newer API is available - " + wnc.doStuff(10));
}else {
System.out.println("newer API not available");
}
}
Can anyone explain why this makes a difference? I assume it has something to do with how Java compiles code - which I don't know much about.
The point of this is to have code which is compiled against some class which may not be available at runtime. WrapNewClass has to be present in the classpath of javac, or this thing can't be compiled. However, it can be absent from the classpath at runtime.
The code you quote avoids references to WrapNewClass if mNewClassAvailable is false. Thus, it will just print the 'new API not available' message.
However, I can't say that I'm impressed. In general, I've seen this sort of thing arranged with java.lang.reflect instead of trying to catch the exception. That, in passing, allows the class to be nowhere in sight even when compiled.
I have long had the need to support every JVM since 1.1 in JSE and have used these kind of wrapping techniques to compatibly support optional APIs - that is, APIs which make the application work better, but are not essential to it.
The two techniques I use seem to be (poorly?) described in the article you referenced. Rather than comment further on that, I will instead provide real examples of how I have done this.
Easiest - Static Wrapper Method
Need: To invoke an API if it is available, or otherwise do nothing. This can be compiled against any JVM version.
First, set up a static Method which has the reflected method, like so:
static private final java.lang.reflect.Method SET_ACCELERATION_PRIORITY;
static {
java.lang.reflect.Method mth=null;
try { mth=java.awt.Image.class.getMethod("setAccelerationPriority",new Class[]{Float.TYPE}); } catch(Throwable thr) { mth=null; }
SET_ACCELERATION_PRIORITY=mth;
}
and wrap the reflected method instead of using a direct call:
static public void setImageAcceleration(Image img, int accpty) {
if(accpty>0 && SET_ACCELERATION_PRIORITY!=null) {
try { SET_ACCELERATION_PRIORITY.invoke(img,new Object[]{new Float(accpty)}); }
catch(Throwable thr) { throw new RuntimeException(thr); } // exception will never happen, but don't swallow - that's bad practice
}
}
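The same pattern works for any optional method. As a self-contained illustration (not from the original code: it uses String.isBlank, which only exists on Java 11+, with a degraded-but-equivalent fallback for older JVMs):

```java
import java.lang.reflect.Method;

public class ReflectiveIsBlank {
    // Look the method up once; null means "not available on this JVM".
    static final Method IS_BLANK;
    static {
        Method m;
        try { m = String.class.getMethod("isBlank"); }
        catch (Throwable thr) { m = null; }
        IS_BLANK = m;
    }

    static boolean isBlank(String s) {
        if (IS_BLANK != null) {
            try { return (Boolean) IS_BLANK.invoke(s); }
            catch (Throwable thr) { throw new RuntimeException(thr); }
        }
        return s.trim().isEmpty(); // fallback for pre-11 JVMs
    }

    public static void main(String[] args) {
        System.out.println(isBlank("   ") + " " + isBlank("x"));
    }
}
```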
Harder - Static Wrapper Class
Need: To invoke an API if it is available, or otherwise invoke an older API for equivalent, but degraded, functionality. This must be compiled against the newer JVM version.
First set up a static wrapper class; this may be a static singleton wrapper, or you might need to wrap every instance creation. The example which follows uses a static singleton:
package xxx;
import java.io.*;
import java.util.*;
/**
* Masks direct use of select system methods to allow transparent use of facilities only
* available in Java 5+ JVM.
*
* Threading Design : [ ] Single Threaded [x] Threadsafe [ ] Immutable [ ] Isolated
*/
public class SysUtil
extends Object
{
/** Package protected to allow subclass SysUtil_J5 to invoke it. */
SysUtil() {
super();
}
/** Package protected to allow subclass SysUtil_J5 to override it. */
int availableProcessors() {
return 1;
}
/** Package protected to allow subclass SysUtil_J5 to override it. */
long milliTick() {
return System.currentTimeMillis();
}
/** Package protected to allow subclass SysUtil_J5 to override it. */
long nanoTick() {
return (System.currentTimeMillis()*1000000L);
}
// *****************************************************************************
// STATIC PROPERTIES
// *****************************************************************************
static private final SysUtil INSTANCE;
static {
SysUtil instance=null;
try { instance=(SysUtil)Class.forName("xxx.SysUtil_J5").newInstance(); } // can't use new SysUtil_J5() - compiler reports "class file has wrong version 49.0, should be 47.0"
catch(Throwable thr) { instance=new SysUtil(); }
INSTANCE=instance;
}
// *****************************************************************************
// STATIC METHODS
// *****************************************************************************
/**
* Returns the number of processors available to the Java virtual machine.
* <p>
* This value may change during a particular invocation of the virtual machine. Applications that are sensitive to the
* number of available processors should therefore occasionally poll this property and adjust their resource usage
* appropriately.
*/
static public int getAvailableProcessors() {
return INSTANCE.availableProcessors();
}
/**
* Returns the current value of the most precise available system timer, in milliseconds.
* <p>
* This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock
* time. The value returned represents milliseconds since some fixed but arbitrary time (perhaps in the future, so
* values may be negative). This method provides millisecond precision, but not necessarily millisecond accuracy. No
* guarantees are made about how frequently values change. Differences in successive calls that span greater than
* approximately 292,000 years will not accurately compute elapsed time due to numerical overflow.
* <p>
* For example, to measure how long some code takes to execute:
* <p><pre>
* long startTime = SysUtil.getMilliTick();
* // ... the code being measured ...
* long estimatedTime = SysUtil.getMilliTick() - startTime;
* </pre>
* <p>
* @return The current value of the system timer, in milliseconds.
*/
static public long getMilliTick() {
return INSTANCE.milliTick();
}
/**
* Returns the current value of the most precise available system timer, in nanoseconds.
* <p>
* This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock
* time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values
* may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees
* are made about how frequently values change. Differences in successive calls that span greater than approximately 292
* years will not accurately compute elapsed time due to numerical overflow.
* <p>
* For example, to measure how long some code takes to execute:
* <p><pre>
* long startTime = SysUtil.getNanoTick();
* // ... the code being measured ...
* long estimatedTime = SysUtil.getNanoTick() - startTime;
* </pre>
* <p>
* @return The current value of the system timer, in nanoseconds.
*/
static public long getNanoTick() {
return INSTANCE.nanoTick();
}
} // END PUBLIC CLASS
and create a subclass to provide the newer functionality when available:
package xxx;
import java.util.*;
class SysUtil_J5
extends SysUtil
{
private final Runtime runtime;
SysUtil_J5() {
super();
runtime=Runtime.getRuntime();
}
int availableProcessors() {
return runtime.availableProcessors();
}
long milliTick() {
return (System.nanoTime()/1000000);
}
long nanoTick() {
return System.nanoTime();
}
} // END PUBLIC CLASS
I've seen this behaviour in spring and richfaces. Spring, for example, does the following
has a compile-time dependency on JSF
declares a private static inner class where it references the JSF classes
wraps Class.forName(..) of a JSF class in a try/catch
if no exception is thrown, the inner class is referenced (and the spring context is obtained through the faces context)
if exception is thrown, the spring context is obtained from another source (the servlet context)
Note that inner classes are not loaded until they are referenced, so it is OK to have a dependency that is not met in it.
(The spring class is org.springframework.web.context.request.RequestContextHolder)
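A minimal runnable sketch of that Spring-style arrangement (the class names here are mine, and it guards on javax.xml.parsers, which does exist on a standard JDK, so the "available" branch runs; swap in any genuinely optional class to exercise the fallback):

```java
public class OptionalApiGuard {
    private static final boolean XML_AVAILABLE;
    static {
        boolean ok;
        try {
            Class.forName("javax.xml.parsers.DocumentBuilderFactory");
            ok = true;
        } catch (Throwable thr) {
            ok = false;
        }
        XML_AVAILABLE = ok;
    }

    // Inner classes are not loaded until first referenced, so this class may
    // safely mention the optional API even on JVMs where it is missing.
    private static class XmlHelper {
        static String describe() {
            return javax.xml.parsers.DocumentBuilderFactory.newInstance()
                    .getClass().getName();
        }
    }

    public static void main(String[] args) {
        if (XML_AVAILABLE) {
            System.out.println("optional API present: " + XmlHelper.describe());
        } else {
            System.out.println("optional API absent, using fallback");
        }
    }
}
```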