makePersistent failing with JDO - java

I have the following code deployed to an App Engine server (the only place where I can test JDO; unfortunately I cannot test it locally because I don't have a local BigTable implementation):
final class PMF {
private static final PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory("transactions-optional");
private PMF() { }
public static PersistenceManagerFactory get() { return pmf; }
}
@PersistenceCapable
class Data {
// ...
@Persistent
private static List<Store> stores = new ArrayList<Store>();
static List<Store> getStores() {
return stores;
}
}
...
Data.getStores().add(store);
writer.write("this line received OK by client.");
PMF.get().getPersistenceManager().makePersistent(Data.getStores());
writer.write("this line never received by client.");
As shown, the first line of output from the server is received by the client and the second one is not, which means makePersistent() is failing.
Anyone have any idea why this is happening?

Perhaps the simple fact that no standard persistence API for Java supports persisting static fields.

You can mimic BigTable on your local machine by running your code locally using Ant or the Eclipse App Engine plugin. The Eclipse plugin also runs DataNucleus in the background and will catch errors like this for you without having to upload to App Engine whenever you make a change.

How to resolve Arquillian static variable = null

Since I upgraded to WildFly 23 I have not been able to get ShrinkWrap/Arquillian to resolve classes correctly.
Here is the createDeployment function:
public static Archive<?> createBasicShrinkWrappedDeployment()
{
File[] mavenImports = Maven.resolver()
.loadPomFromFile("pom.xml")
.importRuntimeDependencies()
.resolve()
.withTransitivity()
.asFile();
return ShrinkWrap.create(WebArchive.class, "<project>-tests.war")
.addAsLibraries(mavenImports)
.addPackages(true, "org.<company>.crs")
.addAsResource("jbossas-managed/test-persistence.xml", "META-INF/persistence.xml")
.addAsResource("jbossas-managed/test-orm.xml", "META-INF/orm.xml")
.addAsResource("templates/email/template1.vm")
.addAsResource("templates/email/template2.vm")
.addAsResource("templates/email/template3.vm")
.addAsResource("templates/email/template4.vm")
.addAsResource("templates/pdf/template5.vm")
.addAsWebInfResource("beans.xml", "beans.xml");
}
My issue is that for testing we have some test data in org.<company>.crs.utils: a bunch of static data that our functional tests use to compare the expected database data to the static data in the application. Here is an example:
package org.<company>.crs.utils;
import java.util.UUID;
public class UserInfo {
public static class Id
{
public static UUID Steve = UUID.fromString("...");
public static UUID TestPerson = UUID.fromString("...");
public static UUID Anonymous = UUID.fromString("...");
}
... <more test classes like Id>
}
Now, when we run the tests we may run something like:
Assert.assertEquals(permission.getIdentityId(), UserInfo.Id.Steve);
However, UserInfo.Id.Steve is null. I am assuming this is a ShrinkWrap or Arquillian issue, since that data is statically defined and should never be null.
This had worked until we updated the application server from WF8 to WF23 (and made a bunch of other changes along the way). Does anyone know what caused this, or how to resolve it?
Further developments in the troubleshooting process have concluded that this is an issue with (I think) my IDE and not the testing framework. See the above comments for a link to the new question about the IDE issue.

Appropriate way to implement a CLI application which also uses the service profile with Micronaut

I have no problem creating a REST server or a Picocli CLI application.
But what if I want to have both in one application?
The thing is, I want an application which provides some business logic via a REST server (no problem there), but in some other cases I want to trigger the business logic via the CLI without starting the HTTP server (e.g. for CI/CD).
I'm not sure whether I will run into problems if I start the app via
PicocliRunner.run(Application.class, args) and, when a specific argument is given, start the server with Micronaut.run(Application.class), since the two create different contexts.
Does anyone know a proper way to achieve this?
This is how I solved it:
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.runtime.Micronaut;
import picocli.CommandLine.Command;
import picocli.CommandLine.Parameters;
@Command(
name = "RestAndCliExample",
description = "...",
mixinStandardHelpOptions = true
)
public class Application implements Runnable {
private enum Mode {serve, run}
@Parameters(index = "0", description = "Execution mode: ${COMPLETION-CANDIDATES}")
private Mode mode;
public static void main(String[] args) throws Exception {
args = new String[]{"run"}; // NOTE: this hardcodes the mode, overriding the real command-line args
PicocliRunner.run(Application.class, args);
}
public void run() {
if (Mode.serve.equals(mode)) {
// Start REST API
Micronaut.run(Application.class);
} else {
// TODO run code directly
}
}
}
One way to accomplish this is to @Inject the ApplicationContext into your @Command-annotated class. This allows your command to use the same application context instead of needing to start a separate one.
Then, in your run method, you can start the REST server by obtaining the EmbeddedServer from the application context and calling start on it, or you can execute the functionality directly without the REST server.
See also this answer for more detail: https://stackoverflow.com/a/56751733/1446916

How to test Azure ARM API using Mockito?

I am working on Azure ARM methods, like provisioning a VM and listing all VMs, using the Java SDK. I want to test my methods using Mockito. How can I do that without making real calls to Azure?
public class ListAllVM {
public static Azure azure = null;
public void listAllVM() {
azure = getAuthentication();
try {
int i = 0;
for (VirtualMachine VM : azure.virtualMachines().list()) {
System.out.println(++i +
" \n VM ID:-" + VM.id() +
" \n VM Name:-" + VM.name() +
"\n");
}
} catch (CloudException | IOException | IllegalArgumentException e) {
log.info("Listing vm failed");
}
}
}
I am facing a problem getting a mocked list of VMs. How do I mock an external API class?
Your problem is that you wrote hard-to-test code by using static and new. A quick suggestion on how to do things differently:
public class AzureUtils {
private final Azure azure;
public AzureUtils() { this(getAuthentication()); }
AzureUtils(Azure azure) { this.azure = azure; }
public List<VirtualMachine> getVms() {
return azure.virtualMachines().list();
}
}
In my version, you can use dependency injection to insert a mocked version of Azure. Now you can use any kind of mocking framework (like EasyMock or Mockito) to provide an Azure object that (again) returns mocked objects.
For the record: I am not sure where your code is getting getAuthentication() from, so unless it is a static import, something is wrong with your code in the first place.
In other words: you want to learn how to write testable code, for example by watching these videos.
On the other hand, one comment says that the Azure class is final and its constructor is private. Which is, well, perfectly fair: when you design an API, final is a powerful tool to express intent.
Coming from there, you are actually pretty limited to:
as written above: you create an abstraction like AzureUtils; this way you can at least shield your own code from the effects of Microsoft's design decisions
you enable the incubating feature in Mockito 2 that allows mocking final classes (see the note after this list)
you turn to PowerMock(ito) or JMockit

Use bolt driver and Java API of Neo4j simultaneously

Hello everybody!
I have developed a JavaFX application to support my scientific work (molecular biology/neuropharmacology), built on Neo4j, at the time version 2.x.
Now, since version 3 (using 3.1.0-M05) is out, I am switching over to Bolt protocol access to the database, with the Driver (1.1.0-M01) interface. Some functions of my application still require Java API access though, so I cannot completely abandon the old code. I am using a singleton GraphDatabaseService to start up the database, like so:
private static GraphDatabaseService instance;
private GraphDb() {
instance = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(new File(FilePaths.DATABASE_PATH))
.setConfig(ShellSettings.remote_shell_enabled, "true").newGraphDatabase();
}
public static synchronized GraphDatabaseService getInstance() {
return instance;
}
(Or just .newEmbeddedDatabase().) But now, since version 3, I also use a singleton Driver instance for the Bolt interaction, like so:
private static Driver instance;
private GraphDbDriver() {
startLocalDb();
instance = GraphDatabase.driver("bolt://localhost");
}
private static void startLocalDb() {
//start database here?
}
public static synchronized Driver getInstance() {
return instance;
}
My question now is this (since I gathered that using both at the same time can only breed complications): how do I use these two ways of communicating with the DB without them getting in the way of each other?
Can I somehow get the Driver to load "onto" the already created GraphDatabaseService singleton?
Thanks for reading!
So, for anybody who's interested: in Neo4j 3.x it is recommended to use user-defined procedures to implement Java API work (e.g. traversals) and then call them (via CALL) from Cypher.

Use ElasticSearch with Dropwizard

I'm trying to use the Elasticsearch Java API in a Dropwizard application.
I found the dropwizard-elasticsearch package: https://github.com/dropwizard/dropwizard-elasticsearch, that seems to be exactly what I need.
Unfortunately, it provides zero "useful" documentation, and no usage examples.
I still haven't understood how to connect to remote servers using the TransportClient because, with no documentation of the dropwizard-elasticsearch configuration, I would have to try "randomly" until I find the correct configuration keys...
Has anyone tried using dropwizard-elasticsearch? Or does someone have a real usage example of it?
Thanks in advance,
Unless you really need to join the Elasticsearch cluster, I would avoid using the Java classes provided by Elasticsearch. If you do connect to Elasticsearch this way, you will need to keep the JVM versions used by Elasticsearch and your application in sync.
Instead, you can connect to Elasticsearch using the Jest client found on GitHub. This will allow you to connect to Elasticsearch over the REST interface, just like all of the other client libraries.
You will need to create a simple configuration block for Elasticsearch, to specify the URL of the REST interface. Also, you will need to create a Manager for starting and stopping the JestClient.
Update: You can find the Dropwizard bundle that I use for connecting to Elasticsearch on GitHub. Here are some basic usage instructions for Java 8:
Include the dependency for the bundle in your project's POM.
<dependency>
<groupId>com.meltmedia.dropwizard</groupId>
<artifactId>dropwizard-jest</artifactId>
<version>0.1.0</version>
</dependency>
Define the JestConfiguration class somewhere in your application's configuration.
import com.meltmedia.dropwizard.jest.JestConfiguration;
...
@JsonProperty
protected JestConfiguration elasticsearch;
public JestConfiguration getElasticsearch() {
return elasticsearch;
}
Then include the bundle in the initialize method of your application.
import com.meltmedia.dropwizard.jest.JestBundle;
...
protected JestBundle jestBundle;
@Override
public void initialize(Bootstrap<ExampleConfiguration> bootstrap) {
bootstrap.addBundle(jestBundle = JestBundle.<ExampleConfiguration>builder()
.withConfiguration(ExampleConfiguration::getElasticsearch)
.build());
}
Finally, use the bundle to access the client supplier.
@Override
public void run(ExampleConfiguration config, Environment env) throws Exception {
JestClient client = jestBundle.getClientSupplier().get();
}
Too long for a comment.
Please check README.md -> "Usage" and "Configuration". If you want Dropwizard to create a managed TransportClient, your configuration settings should look something like this:
nodeClient: false
clusterName: dropwizard_elasticsearch_test
servers:
- 127.23.42.1:9300
- 127.23.42.2:9300
- 127.23.42.3
How to obtain the Dropwizard-managed TransportClient? See the example in public void transportClientShouldBeCreatedFromConfig():
@Override
public void run(DemoConfiguration config, Environment environment) {
final ManagedEsClient managedClient = new ManagedEsClient(config.getEsConfiguration());
Client client = managedClient.getClient();
((TransportClient) client).transportAddresses().size();
// [...]
}
There is also a sample blog application using Dropwizard and ElasticSearch. See "Acknowledgements" section in README.md.
I have used the Java API for Elasticsearch. I also explored the bundle you are using, but the documentation discouraged me from using it. Here is how you can use Elasticsearch without that bundle:
Define your Elasticsearch configs in your .yml file:
elasticsearchHost: 127.0.0.1
elasticPort: 9300
clusterName: elasticsearch
Now, in the configuration class (which in my case is mkApiConfiguration), create static functions which will act as the getter methods for these Elasticsearch configuration values:
@NotNull
private static String elasticsearchHost;
@NotNull
private static Integer elasticPort;
@NotNull
private static String clusterName;
@JsonProperty
public static String getElasticsearchHost() {
return elasticsearchHost;
}
//This function will be called while reading configurations from yml file
@JsonProperty
public void setElasticsearchHost(String elasticsearchHost) {
mkApiConfiguration.elasticsearchHost = elasticsearchHost;
}
@JsonProperty
public void setClusterName(String clusterName) {
mkApiConfiguration.clusterName = clusterName;
}
public void setElasticPort(Integer elasticPort) {
mkApiConfiguration.elasticPort = elasticPort;
}
@JsonProperty
public static String getClusterName() {
return clusterName;
}
@JsonProperty
public static Integer getElasticPort() {
return elasticPort;
}
Now create an ElasticFactory from which you can get a TransportClient. It is better to make it a singleton class, so that only one instance is created and shared. For the Elasticsearch configuration we can use the getter methods of the configuration class; as these are static methods, we don't need to create an object to access them. The code of this factory goes like this:
public class ElasticFactory {
//Private constructor
private ElasticFactory() {}
public static Client getElasticClient(){
try {
/*
* Creating Transport client Instance
*/
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(mkApiConfiguration.getElasticsearchHost()), mkApiConfiguration.getElasticPort()));
return client;
}
catch (Exception e){
e.printStackTrace();
return null;
}
}
}
Now you can call this factory method from any class, like below:
/* As getElasticClient is a static method, we can access it directly without creating an ElasticFactory object */
Client elasticInstance = ElasticFactory.getElasticClient();
