I tried this tutorial:
https://docs.aws.amazon.com/qldb/latest/developerguide/getting-started.java.step-2.html but I don't understand how to connect to QLDB with the Java SDK.
I only need to update a document, and this documentation is quite complex. Does anyone have an idea, or something more beginner-friendly?
public final class ConnectToLedger {
public static final Logger log = LoggerFactory.getLogger(ConnectToLedger.class);
public static AWSCredentialsProvider credentialsProvider;
public static String endpoint = null;
public static String ledgerName = Constants.LEDGER_NAME;
public static String region = null;
public static PooledQldbDriver driver = createQldbDriver();
private ConnectToLedger() { }
/**
* Create a pooled driver for creating sessions.
*
* @return The pooled driver for creating sessions.
*/
public static PooledQldbDriver createQldbDriver() {
AmazonQLDBSessionClientBuilder builder = AmazonQLDBSessionClientBuilder.standard();
if (null != endpoint && null != region) {
builder.setEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region));
}
if (null != credentialsProvider) {
builder.setCredentials(credentialsProvider);
}
return PooledQldbDriver.builder()
.withLedger(ledgerName)
.withRetryLimit(Constants.RETRY_LIMIT)
.withSessionClientBuilder(builder)
.build();
}
/**
* Connect to a ledger through a {@link QldbDriver}.
*
* @return {@link QldbSession}.
*/
public static QldbSession createQldbSession() {
return driver.getSession();
}
public static void main(final String... args) {
try (QldbSession qldbSession = createQldbSession()) {
log.info("Listing table names ");
for (String tableName : qldbSession.getTableNames()) {
log.info(tableName);
}
} catch (QldbClientException e) {
log.error("Unable to create session.", e);
}
}
}
I'm sorry the documentation is complex. Here is a minimal version of the code you referred to, with all the customization and options stripped out. It assumes your environment is set up to use the correct AWS region and credentials.
PooledQldbDriver driver = PooledQldbDriver.builder()
.withLedger("my-ledger-name")
.withSessionClientBuilder(AmazonQLDBSessionClientBuilder.standard())
.build();
try (QldbSession session = driver.getSession()) {
session.execute("UPDATE my-table SET my-field = ?", < Ion value here >);
}
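For the < Ion value here > placeholder, you can build the parameter with the Ion system (IonSystem / IonSystemBuilder from the com.amazon.ion packages), or with the IonValueMapper used in the official sample. A minimal sketch, assuming the new field value is a plain string and reusing the driver from above:
IonSystem ionSystem = IonSystemBuilder.standard().build();
// One Ion parameter per "?" in the statement
List<IonValue> parameters = Collections.<IonValue>singletonList(ionSystem.newString("new value"));
try (QldbSession session = driver.getSession()) {
    session.execute("UPDATE my-table SET my-field = ?", parameters);
}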
I'd love to help you further, but your question as it stands doesn't make it clear where you got stuck. For example, did you try running the above code and, if so, did you get an error? If you update your question with more information or respond to my answer in the comments, I'll check back in.
So I reduced the code, because the full example requires more experience with the QLDB Java SDK, and with Java.
public QldbSession getQldbSession(String ledgerName) {
final AmazonQLDBSessionClientBuilder builder = AmazonQLDBSessionClientBuilder.standard();
if (null != endpoint && null != region) {
builder.setEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region));
}
if (null != credentialsProvider) {
builder.setCredentials(credentialsProvider);
}
final PooledQldbDriver driver = PooledQldbDriver.builder().withLedger(ledgerName).withRetryLimit(4)
.withSessionClientBuilder(builder).build();
return driver.getSession();
}
public Result runStatement(final QldbSession qldbSession) { // hypothetical signature, restored so the fragment reads as a complete method
Result result = null;
try {
final String query = "<query here>";
final IonObjectMapper MAPPER = new IonValueMapper(IonSystemBuilder.standard().build());
final List<IonValue> parameters = new ArrayList<>();
parameters.add(MAPPER.writeValueAsIonValue("parameter"));
parameters.add(MAPPER.writeValueAsIonValue("parameter"));
parameters.add(MAPPER.writeValueAsIonValue("parameter"));
result = qldbSession.execute(query, parameters);
} catch (final QldbClientException e) {
System.out.println("Unable to create session.");
} catch (final IOException e) {
// Failed to convert a parameter to an IonValue
e.printStackTrace();
}
return result;
}
Related
I have a messages.properties file that contains all string messages used in my application.
I would like to bind these messages to a java class fields and use directly in other classes.
Can this be achieved without using NLS, or by some approach in JavaFX? I do not want to add an Eclipse dependency in the UI classes.
Java provides property-file reading capability right out of the box. You can adjust the following to suit your actual use case.
For example:
public final class Messages {
private Messages() {
loadFile();
}
private static final class ThreadSafeSingleton {
private static final Messages INSTANCE = new Messages();
}
public static Messages getInstance() {
return ThreadSafeSingleton.INSTANCE;
}
private final Properties props = new Properties();
private void loadFile() {
InputStream is = null;
try {
is = new FileInputStream("messages.properties");
props.load(is);
} catch (IOException ex) {
ex.printStackTrace();
} finally {
if (is != null) {
try {
is.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
public String getMessage(String key) {
if (key == null || key.isEmpty()) return "";
return props.getProperty(key);
}
}
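Other classes can then read a message directly, for example (assuming the file contains a "greeting" key):
String greeting = Messages.getInstance().getMessage("greeting");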
Edit
In order to use these values as if they were constants, you need to pretty much make everything static:
public final class Messages {
private Messages() {} // Not instantiable
private static final Properties props = loadFile(); // Make sure this static field is at the top
public static final String FOO = getMessage("foo");
public static final String BAR = getMessage("bar");
private static Properties loadFile() {
final Properties p = new Properties();
InputStream is = null;
try {
is = new FileInputStream("messages.properties");
p.load(is);
} catch (IOException ex) {
ex.printStackTrace();
} finally {
if (is != null) {
try {
is.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
return p;
}
public static String getMessage(String key) {
if (key == null || key.isEmpty()) return "";
return props.getProperty(key);
}
}
Be warned again: the Properties field must be the top-most field declared in the class, because static fields are initialized in textual order when the class is initialized, so props has to be assigned before getMessage() runs for FOO and BAR.
Another point: this example does not handle the case where the file is missing - it simply returns a Properties object with no values. A more defensive variant is sketched just below.
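For example, loading from the classpath with try-with-resources and failing fast if the file is absent (a sketch; the resource name is an assumption):
private static Properties loadFile() {
    final Properties p = new Properties();
    // Load from the classpath rather than the working directory, and close the stream automatically
    try (InputStream is = Messages.class.getResourceAsStream("/messages.properties")) {
        if (is == null) {
            throw new IllegalStateException("messages.properties not found on the classpath");
        }
        p.load(is);
    } catch (IOException ex) {
        throw new IllegalStateException("Could not read messages.properties", ex);
    }
    return p;
}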
Here is my problem:
I have two MySQL data directories and I want to use one after the other.
The only way I have actually found to switch from one database to the other is to shut down the MySQL daemon and start it again pointing at the second data directory.
Is there any other way to do this?
Thanks
EDIT:
My application manages "mission directories" that embed a database.
These missions are copied to a hard disk that is connected to an external device, which fills the database.
When the mission is done, we collect the mission and its database with the application to generate a report.
That is why we have multiple databases with the same schema in different places. These databases also need to be read by an external application, which is why only one database can be open at a time.
My question is not whether it is possible to run two databases from two different directories at the same time - I know that is possible - but how to switch from one database to another without killing the daemon.
PS: I'm working on a Java application and I do all of this through system calls from Java, like Runtime.getRuntime().exec(MY_CMD), not by choice. Maybe it would be better to use a Java library; I already use Hibernate.
Here is the code to switch:
new Thread(new Task<T>() {
@Override
protected T call() throws Exception {
// Close the previous database
if (isDaemonRunning()) {
close();
}
// try to open the new one
if (!open()) {
notifyConnectedStatus(false);
return null;
}
// create the hibernate session object
_session = HibernateUtil.getSessionFactory().openSession();
notifyConnectedStatus(true);
// no return is waiting, then return null
return null;
}
}).start();
Here are the called methods:
private boolean open() {
int exitVal = 0;
try {
Process p = Runtime.getRuntime().exec(getRunDaemonCmd());
p.waitFor(1, TimeUnit.SECONDS);
if (p.isAlive()) {
return true;
}
exitVal = p.exitValue();
} catch (Exception e) {
_logger.log(Level.SEVERE, e.getMessage(), e);
return false;
}
return (0 == exitVal);
}
private void close() {
do {
try {
if (null != _session) {
_session.close();
_session = null;
}
Process p = Runtime.getRuntime().exec(SHUTDOWN_CMD);
p.waitFor();
} catch (Exception e) {
_logger.log(Level.SEVERE, e.getMessage(), e);
return;
}
} while (isDaemonRunning());
_connected = false;
}
private String[] getRunDaemonCmd() {
return new String[] { MYSQLD, INI_FILE_PARAM + _myIniFile, DATADIR_PARAM + _databasePath };
}
private boolean isDaemonRunning() {
int exitVal = 0;
try {
Process p = Runtime.getRuntime().exec(PING_CMD);
p.waitFor();
exitVal = p.exitValue();
} catch (Exception e) {
_logger.log(Level.SEVERE, e.getMessage(), e);
}
return (0 == exitVal);
}
And here are the constants:
private static final String MYSQLD = "mysqld";
private static final String INI_FILE_PARAM = "--defaults-file=";
private static final String DATADIR_PARAM = "--datadir=";
private static final String MYSQLADMIN = "mysqladmin";
private static final String USER_PARAM = "-u";
private static final String PASSWORD_PARAM = "-p";
private static final String SHUTDOWN = "shutdown";
private static final String PING = "ping";
private static final String[] PING_CMD = new String[] { MYSQLADMIN, PING };
private static final String[] SHUTDOWN_CMD = new String[] { MYSQLADMIN, USER_PARAM + DatabaseSettings.getUser(),
PASSWORD_PARAM + DatabaseSettings.getPassword(), SHUTDOWN };
private String _myIniFile = DatabaseSettings.getDefaultIniFile();
So, if you use Hibernate, you can use multiple persistence units to connect to multiple data sources or databases.
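With plain Hibernate (which the question already uses via HibernateUtil), the same idea can be expressed as one SessionFactory per database. A sketch, where the JDBC URLs and the entity registration are placeholders:
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public final class SessionFactories {
    private SessionFactories() { }

    // Build a factory for one of the databases; DatabaseSettings is the class from the question
    public static SessionFactory build(String jdbcUrl) {
        Configuration cfg = new Configuration()
            .setProperty("hibernate.connection.driver_class", "com.mysql.jdbc.Driver")
            .setProperty("hibernate.connection.url", jdbcUrl)
            .setProperty("hibernate.connection.username", DatabaseSettings.getUser())
            .setProperty("hibernate.connection.password", DatabaseSettings.getPassword())
            .setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
        // cfg.addAnnotatedClass(Mission.class); // register your mapped entities here
        return cfg.buildSessionFactory();
    }
}
You would then build one factory per data source and open sessions from whichever one is currently active, rather than restarting the daemon.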
I'm using an asynchronous XML-RPC client (https://github.com/gturri/aXMLRPC) in my project and wrote some methods using the asynchronous callback methods of this client, like this:
public void xmlRpcMethod(final Object callbackSync) {
XMLRPCCallback listener = new XMLRPCCallback() {
public void onResponse(long id, final Object result) {
// Do something
if (callbackSync != null) {
synchronized (callbackSync) {
callbackSync.notify();
}
}
}
public void onError(long id, final XMLRPCException error) {
// Do something
if (callbackSync != null) {
synchronized (callbackSync) {
callbackSync.notify();
}
}
}
public void onServerError(long id, final XMLRPCServerException error) {
Log.e(TAG, error.getMessage());
if (callbackSync != null) {
synchronized (callbackSync) {
callbackSync.notifyAll();
}
}
}
};
XMLRPCClient client = new XMLRPCClient("<url>");
long id = client.callAsync(listener, "<method>");
}
In other methods I'd like to call this method (here "xmlRpcMethod") and wait until it has finished. I wrote methods like this:
public void testMethod(){
Object sync = new Object();
xmlRpcMethod(sync);
synchronized (sync){
try{
sync.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
// Do something after xmlRcpFinished
}
But this way of waiting and synchronizing gets ugly as the project grows larger and I need to wait for many requests to finish.
So is this the only possible / best way? Or does someone know a better solution?
My first shot to create blocking RPC calls would be:
// Little helper class:
class RPCResult<T>{
private final T result;
private final Exception ex;
private final long id;
public RPCResult( long id, T result, Exception ex ){
this.id = id;
this.result = result;
this.ex = ex;
}
public long getId(){ return this.id; }
public T getResult(){ return this.result; }
public Exception getError(){ return this.ex; }
public boolean hasError(){ return null != this.ex; }
}
public Object xmlRpcMethod() throws Exception {
final BlockingQueue<RPCResult<Object>> pipe = new ArrayBlockingQueue<RPCResult<Object>>(1);
XMLRPCCallback listener = new XMLRPCCallback() {
public void onResponse(long id, final Object result) {
// Do something
// offer() never blocks here: the queue has capacity 1 and each call triggers exactly one callback
pipe.offer( new RPCResult<Object>(id, result, null) );
}
public void onError(long id, final XMLRPCException error) {
// Do something
pipe.offer( new RPCResult<Object>(id, null, error) );
}
public void onServerError(long id, final XMLRPCServerException error) {
Log.e(TAG, error.getMessage());
pipe.offer(new RPCResult<Object>(id, null, error));
}
};
XMLRPCClient client = new XMLRPCClient("<url>");
long id = client.callAsync(listener, "<method>");
RPCResult<Object> result = pipe.take(); // blocks until an element is available; may throw InterruptedException
if( result.hasError() ) throw result.getError(); // Relay exceptions - do not swallow them!
return result.getResult();
}
Client:
public void testMethod() throws Exception {
Object result = xmlRpcMethod(); // blocks until the result is available or throws an exception
}
The next step would be to make a strongly typed version, public <T> T xmlRpcMethod().
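A possible sketch of that typed variant, building on the RPCResult helper above (the Class parameter and the cast are assumptions about how the raw result should be converted):
public <T> T xmlRpcMethod( final Class<T> type, String method, Object... params ) throws Exception {
    final BlockingQueue<RPCResult<T>> pipe = new ArrayBlockingQueue<RPCResult<T>>(1);
    XMLRPCCallback listener = new XMLRPCCallback() {
        public void onResponse(long id, Object result) {
            // Cast the raw result to the requested type before handing it back
            pipe.offer( new RPCResult<T>(id, type.cast(result), null) );
        }
        public void onError(long id, XMLRPCException error) {
            pipe.offer( new RPCResult<T>(id, null, error) );
        }
        public void onServerError(long id, XMLRPCServerException error) {
            pipe.offer( new RPCResult<T>(id, null, error) );
        }
    };
    new XMLRPCClient("<url>").callAsync( listener, method, params );
    RPCResult<T> result = pipe.take(); // blocks until the callback delivers a result
    if ( result.hasError() ) throw result.getError();
    return result.getResult();
}
Client code then becomes, for example, String s = xmlRpcMethod( String.class, "<method>" );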
I'm trying to do a multi-get on my redis data store which is distributed across multiple shards. However the keys I want to do this on do not belong to the same shard so I can't use redis' inbuilt multi-get.
Instead I'm trying to use futures to achieve this. But after checking the lookup times it almost seems like these cache calls are being made serially.
The request/sec on the server is about 1.5k with an average of 10 ms response time. Literature I've read told me that my threadpool size should be requests/sec * response time. Since I'm spawning 3 threads this becomes 1500 * 0.010 * 3 = 45. I've tried using threadpool sizes of 50,100,300. But this hasn't helped either.
I'm using Jedis as a client. I thought it could be an issue with exceeding Jedis' max total/idle connection limit. But even after increasing this from 8 to 24 I see no difference in lookup times.
I understand that some overhead will be there since there will be context switches and the overhead of spawning new threads.
Can anyone help me figure out what I'm missing? Let me know if you need more info.
for(String recordKey : pidArr) {
//Adding futures. Max 3
if(count >= 3) {
break;
}
count++;
Callable<String> a = new FeedCacheCaller(recordKey);
Future<String> future = feedThreadPool.submit(a);
futureList.add(future);
}
//Getting the data from the futures
for(Future<String> foo : futureList) {
try {
String data = foo.get();
logger.debug(data);
feedDataList.add(parseInfo(data));
} catch (Exception e) {
logger.error("somethings going wrong in retrieval",e);
}
}
Here's the Callable class
public class FeedCacheCaller implements Callable<String> {
String pid = null;
FeedCache feedCache;
public FeedCacheCaller(String pid) {
this.pid = pid;
this.feedCache = new FeedCache();
}
@Override
public String call() throws Exception {
return feedCache.get(pid);
}
}
Edit 1:
Here's the Jedis side code.
public class FeedCache {
private ShardedJedisPool feedClient = RedisPool.getPool("feed");
public String get(String key) {
ShardedJedis client = null;
String value = null;
try {
client = feedClient.getResource();
byte[] valueByteArray = client.get(key.getBytes(Constants.CHARSET));
if (valueByteArray != null) {
value = new String(CacheUtils.decompress(valueByteArray),
Constants.CHARSET);
}
} catch (JedisConnectionException e) {
if (client != null) {
feedClient.returnBrokenResource(client);
client = null;
}
logger.error(e.getMessage());
} finally {
if (client != null) {
feedClient.returnResource(client);
}
}
return value;
}
}
Here is the code that initializes the ShardedJedisPool
public class RedisPool {
private static final Logger logger = LoggerFactory.getLogger(
RedisPool.class);
private static ConcurrentHashMap<String, ShardedJedisPool> redisPools = new ConcurrentHashMap<String, ShardedJedisPool>();
public static void initializePool(String poolName) {
List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
ArrayList<String> servers = new ArrayList<String>(Arrays.asList(
Constants.config.getStringArray(
poolName + "_redis_servers")));
for (int i = 0; i < servers.size(); i++) {
JedisShardInfo shardInfo = new JedisShardInfo(servers.get(i).split(":")[0], Integer.parseInt(servers.get(i).split(":")[1]));
shards.add(shardInfo);
}
redisPools.putIfAbsent(poolName,
new ShardedJedisPool(new GenericObjectPoolConfig(), shards));
}
public static ShardedJedisPool getPool(String poolName) {
if (!redisPools.containsKey(poolName)) {
synchronized (RedisPool.class) {
if (!redisPools.containsKey(poolName)) {
initializePool(poolName);
}
}
}
return redisPools.get(poolName);
}
public static void shutdown(String poolName) {
ShardedJedisPool pool = getPool(poolName);
pool.destroy();
redisPools.remove(poolName);
}
public static void main(String args[]) {
initializePool("vizidtoud");
}
}
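Edit 2:
For reference, the pool above is created with a default GenericObjectPoolConfig, so the max total/idle limits mentioned earlier (raised from 8 to 24) are not reflected in this snippet. Setting them explicitly would look roughly like this (a sketch; buildPool is a hypothetical helper):
private static ShardedJedisPool buildPool(List<JedisShardInfo> shards) {
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxTotal(24); // default is 8; should be at least the number of threads borrowing concurrently
    poolConfig.setMaxIdle(24);  // default is 8
    return new ShardedJedisPool(poolConfig, shards);
}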
So, I'm working on a plugin at work and I've run into a situation where I could use a ContentProposalAdapter to my benefit. Basically, a person will start typing someone's name and a list of names matching the current query is returned in a type-ahead manner (a la Google). So, I created a class implementing IContentProposalProvider which, upon a call to its getProposals() method, fires off a thread that retrieves the proposals over HTTP in the background. The problem is that I run into a race condition: I try to read the proposals before they have actually been retrieved.
Now, I'm trying not to end up in thread hell, and that isn't getting me very far anyway. So, here is what I've done so far. Does anyone have any suggestions as to what I can do?
public class ProfilesProposalProvider implements IContentProposalProvider, PropertyChangeListener {
private IContentProposal[] props;
@Override
public IContentProposal[] getProposals(String arg0, int arg1) {
Display display = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getShell().getDisplay();
RunProfilesJobThread t1 = new RunProfilesJobThread(arg0, display);
t1.run();
return props;
}
@Override
public void propertyChange(PropertyChangeEvent arg0) {
if (arg0.getSource() instanceof RunProfilesJobThread){
RunProfilesJobThread thread = (RunProfilesJobThread)arg0.getSource();
props = thread.getProps();
}
}
}
public class RunProfilesJobThread extends Thread {
private ProfileProposal[] props;
private Display display;
private String query;
public RunProfilesJobThread(String query, Display display){
this.query = query;
}
@Override
public void run() {
if (!(query.equals(""))){
GetProfilesJob job = new GetProfilesJob("profiles", query);
job.schedule();
try {
job.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
GetProfilesJobInfoThread thread = new GetProfilesJobInfoThread(job.getResults());
try {
thread.join();
} catch (InterruptedException e) {
// Interrupted while waiting for the profile info thread
e.printStackTrace();
}
props = thread.getProps();
}
}
public ProfileProposal[] getProps(){
return props;
}
}
public class GetProfilesJobInfoThread extends Thread {
private ArrayList<String> names;
private ProfileProposal[] props;
public GetProfilesJobInfoThread(ArrayList<String> names){
this.names = names;
}
@Override
public void run() {
if (names != null){
props = new ProfileProposal[names.size()];
for (int i = 0; i < props.length - 1; i++){
ProfileProposal temp = new ProfileProposal(names.get(i), names.get(i));
props[i] = temp;
}
}
}
public ProfileProposal[] getProps(){
return props;
}
}
OK, I'll try it...
I haven't tried to run it, but it should work more or less. At least it's a good start. If you have any questions, feel free to ask.
public class ProfilesProposalProvider implements IContentProposalProvider {
private List<IContentProposal> proposals;
private String proposalQuery;
private Thread retrievalThread;
public void setProposals( List<IContentProposal> proposals, String query ) {
synchronized( this ) {
this.proposals = proposals;
this.proposalQuery = query;
}
}
public IContentProposal[] getProposals( String contents, int position ) {
// Synchronize incoming thread and retrieval thread, so that the proposal list
// is not replaced while we're processing it.
synchronized( this ) {
/**
* Get new proposals if the query is longer than one character and either no proposals have been retrieved yet
* or the current proposals were retrieved for a prefix that does not match the new query - and only if no
* retrieval thread is currently running.
*/
if ( retrievalThread == null && contents.length() > 1 && ( proposals == null || !contents.startsWith( proposalQuery ) ) ) {
getProposals( contents );
}
/**
* Select valid proposals from retrieved list.
*/
if ( proposals != null ) {
List<IContentProposal> validProposals = new ArrayList<IContentProposal>();
for ( IContentProposal prop : proposals ) {
if(prop == null) {
continue;
}
String propVal = prop.getContent();
if ( isProposalValid( propVal, contents )) {
validProposals.add( prop );
}
}
return validProposals.toArray( new IContentProposal[ validProposals.size() ] );
}
}
return new IContentProposal[0];
}
protected void getProposals( final String query ) {
retrievalThread = new Thread() {
@Override
public void run() {
GetProfilesJob job = new GetProfilesJob("profiles", query);
job.schedule();
try {
job.join();
ArrayList<String> names = job.getResults();
if (names != null){
List<IContentProposal> props = new ArrayList<IContentProposal>();
for ( String name : names ) {
props.add( new ProfileProposal( name, name ) );
}
setProposals( props, query );
}
} catch (InterruptedException e) {
e.printStackTrace();
}
retrievalThread = null;
}
};
retrievalThread.start();
}
protected boolean isProposalValid( String proposalValue, String contents ) {
return ( proposalValue.length() >= contents.length() && proposalValue.substring(0, contents.length()).equalsIgnoreCase(contents));
}
}
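To actually use the provider, you would attach it to a text field with a JFace ContentProposalAdapter (org.eclipse.jface.fieldassist; ParseException comes from org.eclipse.jface.bindings.keys). A sketch, where the Text widget and the activation key stroke are assumptions:
void attachNameProposals( Text nameField ) throws ParseException {
    ContentProposalAdapter adapter = new ContentProposalAdapter(
        nameField,                             // the control to decorate
        new TextContentAdapter(),              // reads/writes the text of that control
        new ProfilesProposalProvider(),        // the provider from above
        KeyStroke.getInstance( "Ctrl+Space" ), // explicit activation key (assumption)
        null );                                // no auto-activation characters
    // Replace the field contents with the selected proposal instead of inserting it
    adapter.setProposalAcceptanceStyle( ContentProposalAdapter.PROPOSAL_REPLACE );
}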