OrientDB Complete Embedded Cluster Test - java

I am trying to create a simple test that:
1. Activates a full embedded server instance (Embedded Server and Distributed Configuration)
2. Creates an initial test database in document mode during the first run (Create a Database)
3. Opens the test database (Open a Database)
4. Inserts a sample record
5. Fetches the sample record
6. Adds another node and repeats
I can roughly understand the steps individually but I am having some difficulty piecing together a simple test case. For example, the API documentation assumes a remote connection. I am not sure whether that is the applicable method here, and if so, what URL I should specify.
Once I have completed steps 1, 2 and 3 correctly, I should be able to just refer to the API documentation for steps 4 and 5.
As a novice user, I find it difficult to interpret the documentation in context. Any help or clarification would be appreciated.
I am trying to run this as a JUnit test. Here is what I have so far:
import java.io.File;
import java.net.URL;

import org.apache.log4j.Logger;
import org.junit.Test;

import com.orientechnologies.orient.server.OServer;
import com.orientechnologies.orient.server.OServerMain;

public class TestOrientDb {
    private static final Logger log = Logger.getLogger(TestOrientDb.class);

    @Test
    public void testFullEmbeddedServer() throws Exception {
        log.debug("connecting to database server...");

        // Set ORIENTDB_HOME to the test resources directory
        String orientdbHome = new File("src/test/resources").getAbsolutePath();
        log.debug("the orientdb home: " + orientdbHome);
        System.setProperty("ORIENTDB_HOME", orientdbHome);

        OServer server = OServerMain.create();
        URL configUrl = this.getClass().getResource("/orientdb-config.xml");
        server.startup(configUrl.openStream());
        server.activate();

        // HOW DO I CREATE A DATABASE HERE?
        // HOW DO I OPEN MY DATABASE TO USE THE API LIKE THIS: http://orientdb.com/docs/last/Document-Database.html
        // SHOULD I PAUSE THE THREAD TO KEEP THE SERVER ACTIVE?

        log.debug("shutting down orientdb...");
        server.shutdown();
    }
}
Here is orientdb-config.xml:
<orient-server>
  <users>
    <user name="root" password="password" resources="*"/>
  </users>
  <properties>
    <entry name="server.database.path" value="/etc/kwcn/databases"/>
    <entry name="log.console.level" value="fine"/>
  </properties>
  <handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
    <parameters>
      <!-- NODE-NAME. IF NOT SET, IT IS AUTO-GENERATED THE FIRST TIME THE SERVER RUNS -->
      <!-- <parameter name="nodeName" value="europe1" /> -->
      <parameter name="enabled" value="true"/>
      <parameter name="configuration.db.default" value="${ORIENTDB_HOME}/orientdb-config.json"/>
      <parameter name="configuration.hazelcast" value="${ORIENTDB_HOME}/hazelcast.xml"/>
    </parameters>
  </handler>
</orient-server>
Here is hazelcast.xml:
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <group>
    <name>orientdb</name>
    <password>orientdb</password>
  </group>
  <network>
    <port auto-increment="true">2434</port>
    <join>
      <multicast enabled="true">
        <multicast-group>235.1.1.1</multicast-group>
        <multicast-port>2434</multicast-port>
      </multicast>
    </join>
  </network>
  <executor-service>
    <pool-size>16</pool-size>
  </executor-service>
</hazelcast>
Here is orientdb-config.json:
{ "autoDeploy": true, "hotAlignment": false, "executionMode": "asynchronous", "readQuorum": 1, "writeQuorum": 2, "failureAvailableNodesLessQuorum": false, "readYourWrites": true, "servers": { "*": "master" }, "clusters": { "internal": { }, "index": { }, "*": { "servers": [ "<NEW_NODE>" ] } } }
Here is the output:
2016-02-07 16:02:17:867 INFO  OrientDB auto-config DISKCACHE=10,695MB (heap=3,641MB os=16,384MB disk=71,698MB) [orientechnologies]
2016-02-07 16:02:18:016 INFO  Loading configuration from input stream [OServerConfigurationLoaderXml]
2016-02-07 16:02:18:127 INFO  OrientDB Server v2.2.0-beta is starting up... [OServer]
2016-02-07 16:02:18:133 INFO  Databases directory: /etc/kwcn/databases [OServer]
2016-02-07 16:02:18:133 WARNI Network configuration was not found [OServer]
2016-02-07 16:02:18:133 WARNI Found ORIENTDB_ROOT_PASSWORD variable, using this value as root's password [OServer]
2016-02-07 16:02:18:523 INFO  OrientDB Server is active v2.2.0-beta. [OServer]
2016-02-07 16:02:18:523 INFO  OrientDB Server is shutting down... [OServer]
2016-02-07 16:02:18:523 INFO  Shutting down plugins: [OServerPluginManager]
DEBUG [ kwcn.TestOrientDb]: shutting down orientdb...
2016-02-07 16:02:18:524 INFO  Shutting down databases: [OServer]
2016-02-07 16:02:18:565 INFO  OrientDB Engine shutdown complete [Orient]
2016-02-07 16:02:18:566 INFO  OrientDB Server shutdown complete

I suggest you take a look at
https://github.com/orientechnologies/orientdb/blob/2.1.x/distributed/src/test/java/com/orientechnologies/orient/server/distributed/AbstractServerClusterTest.java
It's the base class of the OrientDB distributed tests. Its class hierarchy seems quite complex, but in the end it just instantiates multiple servers and delegates to subclasses to test operations against them.
You can also check
https://github.com/orientechnologies/orientdb/blob/2.1.x/distributed/src/test/java/com/orientechnologies/orient/server/distributed/HATest.java
which is one of its subclasses. You could simply copy or extend it and implement your own logic in the executeTest() method.
About your questions:
HOW DO I CREATE A DATABASE HERE?
Create it as a normal plocal db:
new ODatabaseDocumentTx("plocal:...").create()
or
new OrientGraph("plocal:...")
//HOW DO I OPEN MY DATABASE TO USE THE API LIKE THIS:
Same as above:
new ODatabaseDocumentTx("plocal:...").open("admin", "admin");
//SHOULD I PAUSE THE THREAD TO KEEP THE SERVER ACTIVE?
There is no need to pause the thread: the server creates some non-daemon threads, so it will remain active. Just make sure that someone invokes server.shutdown() at the end of the tests (even from another thread).
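Putting the pieces together, the middle of the test could look roughly like this (a sketch, not an official recipe: the database path, the Person class, and the admin/admin credentials are illustrative defaults, not from the question):
// Inside the test, after server.activate():
String dbUrl = "plocal:" + System.getProperty("ORIENTDB_HOME") + "/databases/test";
ODatabaseDocumentTx db = new ODatabaseDocumentTx(dbUrl);
if (!db.exists()) {
    db.create();               // first run: create the database
} else {
    db.open("admin", "admin"); // later runs: open it
}

// Insert a sample record...
ODocument doc = new ODocument("Person").field("name", "Jay");
doc.save();

// ...and fetch it back.
List<ODocument> result = db.query(new OSQLSynchQuery<ODocument>("select from Person"));
log.debug("fetched " + result.size() + " record(s)");

db.close();
server.shutdown();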

Related

How can I add a user that can connect to ActiveMQ Artemis in WildFly via Quarkus JMS?

I have a WildFly application with an embedded ActiveMQ Artemis and I can't seem to figure out how to debug it. It seems to connect fine, since I'm getting no errors in the log, but I'm unable to read any queues. There is no error, but nothing is happening.
I'm running the Quarkus JMS client 1.03, which uses org.apache.activemq » artemis-jms-client 2.19. The server is running WildFly 26.1 with the embedded ActiveMQ Artemis server 2.19.1, so the client and server should be compatible. However, I'm not sure if I've configured it correctly. I'm using the standalone-full.xml.
For testing I've added a user test1:
bash-4.4$ ./add-user.sh
What type of user do you wish to add?
a) Management User (mgmt-users.properties)
b) Application User (application-users.properties)
(a): b
Enter the details of the new user to add.
Using realm 'ApplicationRealm' as discovered from the existing property files.
Username : test1
Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.
- The password should be different from the username
- The password should not be one of the following restricted values {root, admin, administrator}
- The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
Password :
WFLYDM0098: The password should be different from the username
Are you sure you want to use the password entered yes/no? yes
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]: guest
About to add user 'test1' for realm 'ApplicationRealm'
Is this correct yes/no? yes
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server Jakarta Enterprise Beans calls.
yes/no? yes
To represent the user add the following to the server-identities definition <secret value="dGVzdDE=" />
This is the log for creating that user. In my standalone-full.xml I have the role guest for ActiveMQ Artemis and the queue prices set up:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:13.1">
  <server name="default">
    ...
    <security-setting name="#">
      <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
    </security-setting>
    ...
    <address-setting name="jms.queue.prices" expiry-address="jms.queue.ExpiryQueue" redelivery-delay="1000" max-delivery-attempts="0"/>
    ...
    <jms-queue name="prices" entries="java:/jms/queue/prices" durable="true"/>
    ...
  </server>
</subsystem>
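(Side note: with WildFly's embedded broker, remote JMS clients normally reach Artemis through the HTTP port via the HTTP-upgrade mechanism, which is why the client below targets port 8080. In a default standalone-full.xml the relevant acceptor/connector pair looks roughly like this; the exact names are an assumption and may vary by version:)
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-acceptor name="http-acceptor" http-listener="default"/>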
My Quarkus client tries to connect with the following:
quarkus.artemis.url=tcp://localhost:8080
quarkus.artemis.username=test1
quarkus.artemis.password=test1
It has a consumer:
@Override
public void run() {
    try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
        JMSConsumer consumer = context.createConsumer(context.createQueue("prices"));
        while (true) {
            Message message = consumer.receive();
            if (message == null) {
                // receive returns `null` if the JMSConsumer is closed
                return;
            }
            lastPrice = message.getBody(String.class);
        }
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
}
and a producer:
@Override
public void run() {
    try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
        context.createProducer().send(context.createQueue("prices"), Integer.toString(random.nextInt(100)));
    }
}
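For context, both snippets rely on an injected ConnectionFactory; with the Quarkus Artemis JMS extension this is typically plain CDI field injection (a sketch; the class name and the lastPrice field are illustrative, and the javax.* imports assume the pre-Jakarta Quarkus line used here):
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.jms.ConnectionFactory;

@ApplicationScoped
public class PriceConsumer implements Runnable {

    // Configured from the quarkus.artemis.* properties shown below
    @Inject
    ConnectionFactory connectionFactory;

    volatile String lastPrice;

    @Override
    public void run() {
        // consumer loop from the snippet above goes here
    }
}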
It seems to connect just fine. I can see in the Quarkus log:
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Trying reconnection attempt 0/1
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Trying to connect with connectorFactory=org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory#6096907d and currentConnectorConfig: TransportConfiguration(name=null, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=localhost
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.rem.imp.net.NettyConnector] (pool-10-thread-1) Connector NettyConnector [host=localhost, port=8080, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.client] (pool-10-thread-1) AMQ211002: Started EPOLL Netty Connector version 4.1.82.Final to localhost:8080
2022-12-08 14:29:59,597 DEBUG [org.apa.act.art.cor.rem.imp.net.NettyConnector] (pool-10-thread-1) Remote destination: localhost/127.0.0.1:8080
2022-12-08 14:29:59,598 DEBUG [org.apa.act.art.cor.rem.imp.net.NettyConnector] (Thread-1 (ActiveMQ-client-netty-threads)) Added ActiveMQClientChannelHandler to Channel with id = 6339db8b
2022-12-08 14:29:59,598 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Connected with the currentConnectorConfig=TransportConfiguration(name=null, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=localhost
2022-12-08 14:29:59,599 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Reconnection successful
So it seems to be working fine, but I can't seem to send or consume from the queue.
Answers to Justin's questions in the comments:
Have you tried adding ?httpUpgradeEnabled=true
Yes, no change:
quarkus.artemis.url=tcp://localhost:8080?httpUpgradeEnabled=true
Can you check the consumer-count?
Actually, there does not even seem to be an active session. I tried to close the connection for my test user, but it showed me a dialog saying "no connection for that user". So maybe it's not even connecting as I thought.
Why do you suspect the problem is with the user?
I thought it might have something to do with some permission error. I was not sure whether I was supposed to add the user to the "guest" group or not.
But I do have this in my config:
<security-setting name="#">
  <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
EDIT 2: Actually, I now see in the log that Quarkus seems to be missing the permission for creating a consumer. So I guess the "guest" role is not the correct one.
I assume that was corrected.

Hazelcast: cannot join cluster if using programmatic config

I'm trying to manually configure Hazelcast 2.5.1 instances through their programmatic API, but I find that it behaves differently when doing -- supposedly -- similar things.
So, my first approach is rather rudimentary, which is:
String confString = "<hazelcast><network><port auto-increment=\"true\">10555</port><join><multicast enabled=\"false\" /><tcp-ip enabled=\"true\"><interface>127.0.0.1</interface></tcp-ip></join><ssl enabled=\"false\" /></network></hazelcast>";
Config config = new InMemoryXmlConfig(confString);
Hazelcast.newHazelcastInstance(config);
This works, and starting different instances will allow them to join the cluster. For readability, here's the XML I'm building in memory:
<hazelcast>
  <network>
    <port auto-increment="true">10555</port>
    <join>
      <multicast enabled="false" />
      <tcp-ip enabled="true">
        <interface>127.0.0.1</interface>
      </tcp-ip>
    </join>
    <ssl enabled="false" />
  </network>
</hazelcast>
Starting different instances of this will make them join the cluster, which is the behavior that I want.
However, when I try to do this programmatically, Hazelcast won't allow new instances to join and complains with the following error:
Jul 09, 2015 9:39:33 AM com.hazelcast.impl.Node
WARNING: [127.0.0.1]:10556 [dev] Config seed port is 10555 and cluster size is 1. Some of the ports seem occupied!
This is the code that is supposed to do the same thing programmatically:
Config config = new Config();
config.setInstanceName("HazelcastService");
config.getNetworkConfig().setPortAutoIncrement(true);
config.getNetworkConfig().setPort(10555);
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
config.getNetworkConfig().getInterfaces().addInterface("127.0.0.1");
config.getNetworkConfig().getInterfaces().setEnabled(true);
SSLConfig sslConfig = new SSLConfig();
sslConfig.setEnabled(false);
config.getNetworkConfig().setSSLConfig(sslConfig);
Hazelcast.newHazelcastInstance(config);
What am I missing?
The interfaces you added in the Java code are not the same thing as the ones you added in the XML.
This is what you actually set in the Java code: http://docs.hazelcast.org/docs/2.5/manual/html-single/#ConfigSpecifyInterfaces
For your configuration to work, you should add this:
config.getNetworkConfig().getJoin().getTcpIpConfig().addMember("127.0.0.1");
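Putting it together, a sketch of the full programmatic config that should mirror the XML (same chained calls as the question, with addMember in place of addInterface):
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;

public class StartNode {
    public static void main(String[] args) {
        Config config = new Config();
        config.setInstanceName("HazelcastService");
        config.getNetworkConfig().setPortAutoIncrement(true);
        config.getNetworkConfig().setPort(10555);
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
        // Equivalent of <tcp-ip><interface>127.0.0.1</interface></tcp-ip>:
        // a member address to join, not a local bind interface.
        config.getNetworkConfig().getJoin().getTcpIpConfig().addMember("127.0.0.1");
        Hazelcast.newHazelcastInstance(config);
    }
}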

Checking if an Oracle BPEL Polling DB Adapter is working

I have deployed an Oracle SOA composite with a BPEL Polling DB Adapter from JDeveloper 11g to WebLogic 11g, and I am trying to tell whether it is working. I am looking in soa_server1-diagnostic.log and I see the following message:
[2014-10-08T14:53:02.753-05:00] [soa_server1] [NOTIFICATION] [] [oracle.soa.adapter] [tid: [ACTIVE].ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: weblogic] [ecid: b4de9447a6405836:356834d:148f023a253:-8000-00000000000002ad,1:21897] [APP: soa-infra] JCABinding=> [NotificationService.SugarCRM_Poll/2.0] :init Successfully initialized SugarCRM_Poll_db.jca
First, am I looking in the right log? And is this what I should see every time it runs?
The jca file for the polling DB Adapter looks like this:
<adapter-config name="SugarCRM_Poll" adapter="Database Adapter" wsdlLocation="SugarCRM_Poll.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/DB/SugarDbProd" UIConnectionName="SugarDbProd" adapterRef=""/>
  <endpoint-activation portType="SugarCRM_Poll_ptt" operation="receive">
    <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
      <property name="DescriptorName" value="SugarCRM_Poll.OpportunityStagingTable"/>
      <property name="QueryName" value="SugarCRM_PollSelect"/>
      <property name="MappingsMetaDataURL" value="SugarCRM_Poll-or-mappings.xml"/>
      <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
      <property name="MarkReadColumn" value="account_name_new"/>
      <property name="MarkReadValue" value="X"/>
      <property name="MarkUnreadValue" value="R"/>
      <property name="PollingInterval" value="5"/>
      <property name="MaxRaiseSize" value="1"/>
      <property name="MaxTransactionSize" value="10"/>
      <property name="NumberOfThreads" value="1"/>
      <property name="ReturnSingleResultSet" value="false"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
I am also seeing this Notification in the soa_server1-diagnostic.log:
[2014-10-10T07:31:05.328-05:00] [soa_server1] [NOTIFICATION] [] [oracle.soa.adapter] [tid: Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms\n] [userId: weblogic] [ecid: b4de9447a6405836:356834d:148f023a253:-8000-0000000000000708,1:19750] [APP: soa-infra] Database Adapter NotificationService <oracle.tip.adapter.db.InboundWork handleException> BINDING.JCA-11624[[
DBActivationSpec Polling Exception.
Query name: [SugarCRM_PollSelect], Descriptor name: [SugarCRM_Poll.OpportunityStagingTable]. Polling the database for events failed on this iteration.
Caused by com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed..
This exception is considered retriable, likely due to a communication failure. To classify it as non-retriable instead add property nonRetriableErrorCodes with value "0" to your deployment descriptor (i.e. weblogic-ra.xml). Polling will be attempted again next polling interval.
I am able to test the connection in the WebLogic 11g Admin Console and it works fine; I see the following message: "Test of TestSugarDataSource on server soa_server1 was successful." I was also able to use the netcat command to test connectivity, with success: "nc -vz xx.xx.xx.xx 3306" returns "Connection to xx.xx.xx.xx 3306 port [tcp/mysql] succeeded!" So it appears connectivity is not the issue.
Thanks,
Tom Henricksen
I was able to find the issue with the polling by changing the Log Configuration on oracle.soa to TRACE:32 (FINEST) logging. This allowed me to see the underlying query that the Polling DB Adapter was running and make corrections. The diagnostic log file gave me everything I needed once I made this change.
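(For anyone hitting the same thing: in Fusion Middleware the ODL logger levels live in the server's logging.xml, where TRACE:32 corresponds to FINEST. A rough sketch of the logger entry; the handler name is an assumption that depends on your setup:)
<logger name="oracle.soa" level="TRACE:32" useParentHandlers="false">
  <handler name="odl-handler"/> <!-- handler name is illustrative -->
</logger>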
Thanks,
Tom

From within Java, Liquibase update is hanging after applying a changeset

I am running migrations for unit tests with Liquibase. I use a class called ${projectName}Liquibase.java to hold two static functions:
public class ${projectName}Liquibase {
    ...
    public static void runMigrations(Connection conn, DB_TYPE dbType) {
        Liquibase liquibase;
        Database database = null;
        try {
            database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(conn));
            liquibase = new Liquibase(dbType.filePath, new FileSystemResourceAccessor(), database);
            liquibase.validate();
            liquibase.update(null); // null contexts = apply all changesets
        } catch (LiquibaseException e) {
            throw new RuntimeException("File at " + dbType.filePath + " Error: " + e.getMessage());
        }
    }

    public static void dropTables() {
        ...
    }
}
I build the dbType.filePath parameter from System.getProperty("user.dir") plus the rest of the path.
The file is read fine; however, the update only gets through the very first changeset and then hangs for the duration of the test, so the test does not run.
Tests run successfully from other files and submodules within my IntelliJ project. In particular, our integration test suite runs successfully using the same interface from a different submodule. All of the tests pass up until this one:
Running *.*.*.*.*.*DAOTest
2013-11-03 14:59:53,144 DEBUG [main] c.j.bonecp.BoneCPDataSource : JDBC URL = jdbc:hsqldb:mem:*, Username = SA, partitions = 2, max (per partition) = 5, min (per partition) = 5, helper threads = 3, idle max age = 60 min, idle test period = 240 min
INFO 11/3/13 2:59 PM:liquibase: Reading from PUBLIC.DATABASECHANGELOG
INFO 11/3/13 2:59 PM:liquibase: Successfully acquired change log lock
INFO 11/3/13 2:59 PM:liquibase: Reading from PUBLIC.DATABASECHANGELOG
INFO 11/3/13 2:59 PM:liquibase: /Users/davidgroff/repo/services/${projectName}/server/../core/src/main/java/com/*/*/liquibase/hsqldb.sql: 1::davidgroff: Custom SQL executed
INFO 11/3/13 2:59 PM:liquibase: /Users/davidgroff/repo/services/${projectName}/server/../core/src/main/java/com/*/*/liquibase/hsqldb.sql: 1::davidgroff: ChangeSet /Users/davidgroff/repo/services/*/*/../core/src/main/java/com/*/*/liquibase/hsqldb.sql::1::davidgroff ran successfully in 3ms
INFO 11/3/13 2:59 PM:liquibase: Successfully released change log lock
After this, the test just hangs, as if in some infinite loop.
I have the current setup:
<dependency>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-core</artifactId>
  <version>3.0.6</version>
</dependency>
<dependency>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-maven-plugin</artifactId>
  <version>3.0.6</version>
</dependency>
I'm using Java 7 on Maven 3.1.0.
It may be that a separate transaction has locked a row in your database and Liquibase is hanging while waiting for the other transaction to complete.
You said "the update only goes through the very first changeset and then hangs for the duration of the test". Does that mean the first changeSet runs successfully? If so, the lock is either a table lock on the DATABASECHANGELOG table that is preventing the INSERT INTO DATABASECHANGELOG from completing, or a problem with your second changeSet.
Assuming it is a problem with the DATABASECHANGELOG table, is there a separate thread or process that could have been trying to delete from that table?
The issue turned out to be that a connection was being created and used after the Liquibase changeset was applied, with the command
connection.createStatement(..."***SQL***"...);
and was never committed to the database, because a new connection was created or that connection was out of date. It is a mystery why this worked before we used Liquibase to run migrations. The fix is simply to commit the above statement by calling:
connection.commit();
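A minimal sketch of the corrected pattern (the helper is hypothetical, and "***SQL***" stands in for the real statement, which stays elided):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class PostMigrationSql {
    // Hypothetical helper: execute the follow-up SQL and commit it explicitly.
    static void run(Connection connection, String sql) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            stmt.execute(sql); // e.g. the "***SQL***" from the original code
        }
        // Without this, the work can sit in an open transaction whose locks
        // block Liquibase (or later tests).
        if (!connection.getAutoCommit()) {
            connection.commit();
        }
    }
}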

How to configure Camel 2.6 with Spring: Failed to create route route2 at:

I'm trying to upgrade from Camel 2.0 to 2.6
I have this in my applicationContext-camel.xml file...
<camel:route>
  <camel:from uri="transactionSaleBuffer" />
  <camel:policy ref="routeTransactionPolicy"/>
  <camel:transacted ref="transactionManagerETL" />
  <camel:to uri="detailFactProcessor" />
</camel:route>
By adding the two lines in the middle (policy and transacted), I get the exception...
Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route route2 at: >>> From[transactionSaleBuffer] <<< in route: Route[[From[transactionSaleBuffer]] -> [Transacted[ref:trans... because of Route route2 has no output processors. You need to add outputs to the route such as to("log:foo").
I can see this is because the Camel class RouteDefinition.java makes a call to ProcessorDefinitionHelper.hasOutputs(outputs, true).
This passes in an array of one Object ([Transacted[ref:transactionManagerETL]]).
This one object has two children:
[Transacted[ref:transactionManagerETL]]
CHILD-[Policy[ref:routeTransactionPolicy],
CHILD-To[detailFactProcessor]
The Policy child has no outputs, so the exception is thrown.
Yet I don't know how to add a child; my XML above matches the schema.
Maybe I'm missing something else?
My setup matches the example in Apache Camel: Book in One Page (see section: Camel 1.x - JMS Sample).
Can anyone please help me out.
Thanks!
Jeff Porter
Try as follows (nesting the to endpoint inside the policy element, since the policy/transacted definitions wrap the outputs that follow them):
<camel:route>
  <camel:from uri="transactionSaleBuffer" />
  <camel:transacted ref="transactionManagerETL" />
  <camel:policy ref="routeTransactionPolicy">
    <camel:to uri="detailFactProcessor" />
  </camel:policy>
</camel:route>
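If the XML nesting keeps fighting you, the equivalent Java DSL route may be easier to reason about (a sketch; it assumes the same endpoint and bean ids are resolvable from the registry):
import org.apache.camel.builder.RouteBuilder;

public class TransactionSaleRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("transactionSaleBuffer")
            .transacted("transactionManagerETL") // wraps everything that follows
            .to("detailFactProcessor");
    }
}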
