I have a Docker container running Gremlin Server.
It was started via:
./bin/gremlin-server.sh conf/gremlin-server/gremlin-server.yaml
From within a Docker container running this image:
https://hub.docker.com/r/janusgraph/janusgraph
The server is up and listening on port 8182:
$ docker ps
6019adda6081 janusgraph/janusgraph "docker-entrypoint.s…" 2 days ago Up 26 hours 0.0.0.0:8182->8182/tcp
I am interested in using a schema and indexes.
Janus offers this here: https://docs.janusgraph.org/basics/schema/
The following is the configuration I use to attempt to connect to the gremlin-server:
AbstractConfiguration config = new BaseConfiguration();
config.setListDelimiter('/');
// contents of conf/remote-graph.properties
config.setProperty("gremlin.remote.driver.sourceName", "g");
config.setProperty("gremlin.remote.remoteConnectionClass", "org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection");
// contents of conf/remote-objects.yaml:
config.setProperty("clusterConfiguration.hosts", databaseUrl);
config.setProperty("clusterConfiguration.port", 8182);
config.setProperty("clusterConfiguration.serializer.className", "org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0/");
config.setProperty("storage.backend", "cql");
config.setProperty("clusterConfiguration.serializer.config.ioRegistries", "org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry");
When I call
GraphTraversalSource g = traversal().withRemote(config);
I get a traversal source and everything seems fine. However, to use the management functionality that Janus provides, I seem to need a JanusGraphManagement object. I cannot take the generic Graph object above and cast it to a JanusGraph. The docs suggest using a JanusGraphFactory: https://docs.janusgraph.org/basics/configuration/#janusgraphfactory
So I call
JanusGraph janusGraph = JanusGraphFactory.open(config);
I get the following stack trace:
Exception in thread "main" java.lang.IllegalArgumentException: Could not find implementation class: org.janusgraph.diskstorage.cql.CQLStoreManager
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:60)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:440)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:411)
at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:50)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:161)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:132)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:112)
at com.activitystream.database.GraphMigration.migrateDatabase(GraphMigration.java:69)
at com.activitystream.runners.persistence.DataStores.migrateDatabase(DataStores.java:27)
at com.activitystream.runners.persistence.EntityPersistenceRunner.main(EntityPersistenceRunner.java:23)
Caused by: java.lang.ClassNotFoundException: org.janusgraph.diskstorage.cql.CQLStoreManager
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:56)
... 9 more
Is it possible to modify the schema over a remote connection?
If it is not possible, how can one modify the schema?
Any insight would be appreciated.
You basically have two choices - either:
Interact with your JanusGraphManagement object by way of scripts sent to Gremlin Server (typically by way of a session but I guess you could package an entire "management script" together and submit it as one request) or
Bypass Gremlin Server and instantiate your JanusGraphManagement object locally, as directed in the JanusGraph documentation.
There is no way to return a JanusGraphManagement object to your client, as it is not serializable and cannot be sent back from the server.
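For the first option, here is a minimal sketch of submitting a management script with the TinkerPop driver. The host, port, and property key are assumptions, and it presumes the server binds the graph to the variable graph, as the stock JanusGraph server configuration does:
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;

public class RemoteSchemaExample {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.build("localhost").port(8182).create();
        Client client = cluster.connect();
        // The whole management transaction travels as one script, so it
        // either commits on the server or fails as a unit.
        String script = String.join("\n",
                "mgmt = graph.openManagement()",
                "if (mgmt.getPropertyKey('name') == null) {",
                "    mgmt.makePropertyKey('name').dataType(String.class).make()",
                "}",
                "mgmt.commit()");
        client.submit(script).all().get(); // block until the server finishes
        client.close();
        cluster.close();
    }
}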
Related
I am using Temporal for running workflows. I have created a jar with my app and am running the below command from the terminal:
java -jar build/libs/app-0.0.1-SNAPSHOT.jar
I get the below error when trying to run the above command:
Exception in thread "main" io.grpc.StatusRuntimeException: UNKNOWN
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165)
at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.getSystemInfo(WorkflowServiceGrpc.java:4139)
at io.temporal.serviceclient.SystemInfoInterceptor.getServerCapabilitiesOrThrow(SystemInfoInterceptor.java:95)
at io.temporal.serviceclient.ChannelManager.lambda$getServerCapabilities$3(ChannelManager.java:330)
at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:60)
at io.temporal.serviceclient.ChannelManager.connect(ChannelManager.java:297)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.connect(WorkflowServiceStubsImpl.java:161)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.base/java.lang.reflect.Method.invoke(Method.java:577)
at io.temporal.internal.WorkflowThreadMarker.lambda$protectFromWorkflowThread$1(WorkflowThreadMarker.java:83)
at jdk.proxy1/jdk.proxy1.$Proxy0.connect(Unknown Source)
at io.temporal.worker.WorkerFactory.start(WorkerFactory.java:210)
at com.hok.furlenco.workflow.refundStatusSync.RefundStatusSyncSaga.createWorkFlow(RefundStatusSyncSaga.java:41)
at com.hok.furlenco.workflow.refundStatusSync.RefundStatusSyncSaga.main(RefundStatusSyncSaga.java:17)
Caused by: java.nio.channels.UnsupportedAddressTypeException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:146)
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157)
at java.base/sun.nio.ch.SocketChannelImpl.checkRemote(SocketChannelImpl.java:816)
at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:839)
at io.grpc.netty.shaded.io.netty.util.internal.SocketUtils$3.run(SocketUtils.java:91)
at io.grpc.netty.shaded.io.netty.util.internal.SocketUtils$3.run(SocketUtils.java:88)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:569)
at io.grpc.netty.shaded.io.netty.util.internal.SocketUtils.connect(SocketUtils.java:88)
at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:322)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:248)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1342)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:533)
at io.grpc.netty.shaded.io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:54)
at io.grpc.netty.shaded.io.grpc.netty.WriteBufferingAndExceptionHandler.connect(WriteBufferingAndExceptionHandler.java:157)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:61)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:538)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:833)
The app works fine when run from the IDE. The Temporal server is running as a Docker container on my local machine.
RefundStatusSyncSaga.java:
// gRPC stubs wrapper that talks to the local Docker instance of the Temporal service.
WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
// client that can be used to start and signal workflows
WorkflowClient client = WorkflowClient.newInstance(service);
// worker factory that can be used to create workers for specific task queues
WorkerFactory factory = WorkerFactory.newInstance(client);
// Worker that listens on a task queue and hosts both workflow and activity implementations.
Worker worker = factory.newWorker(TASK_QUEUE);
// Workflows are stateful. So you need a type to create instances.
worker.registerWorkflowImplementationTypes(RefundSyncWorkflowImpl.class);
// Activities are stateless and thread safe. So a shared instance is used.
RefundStatusActivities tripBookingActivities = new RefundStatusActivitiesImpl();
worker.registerActivitiesImplementations(tripBookingActivities);
// Start all workers created by this factory.
factory.start();
System.out.println("Worker started for task queue: " + TASK_QUEUE);
// now we can start running instances of our saga - its state will be persisted
WorkflowOptions options = WorkflowOptions.newBuilder()
        .setTaskQueue(TASK_QUEUE)
        .setWorkflowId("1")
        .setWorkflowIdReusePolicy(WorkflowIdReusePolicy.WORKFLOW_ID_REUSE_POLICY_REJECT_DUPLICATE)
        .setCronSchedule("* * * * *")
        .build();
RefundSyncWorkflow refundSyncWorkflow = client.newWorkflowStub(RefundSyncWorkflow.class, options);
refundSyncWorkflow.syncRefundStatus();
The complete code can be seen here -> https://github.com/iftekharkhan09/temporal-sample
I also came across this and dug into the jar while debugging. I found that in the check public static InetSocketAddress checkAddress(SocketAddress sa), the SocketAddress becomes /xxx:443 (my original address is xxx:443), and the validation check then fails. I still don't know how to solve it.
Update: one solution can be found here: https://community.temporal.io/t/unable-to-run-temporal-workflow-from-jar/6607 (if I read it right, the gist is that a fat-jar build needs to merge the META-INF/services entries so gRPC can register its name resolver).
I'm facing the following errors while connecting to an Oracle DB. I'm using Spring Boot's JdbcTemplate to connect to the database. The errors are below:
Exception in thread "main" java.lang.Exception: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at com.falabella.util.OracleDB.main(OracleDB.java:70)
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:392)
Caused by: java.net.UnknownHostException: NODE-01: nodename nor servname provided, or not known
Below are my findings. My database server host has a cluster with two nodes, like below:
Cluster (wood.cluster.com)
| NODE01 (wood-01)
| NODE02 (wood-02)
My connection string is like this: jdbc:oracle:thin:@wood-cluster.com:1531/service_name
When I'm using the cluster name in the connection string, I'm facing the below error
Caused by: java.net.UnknownHostException: wood-01: nodename nor servname provided, or not known
Whereas if I use either node name in the connection string, I am able to connect to the database without any issue. The working connection strings are like below:
jdbc:oracle:thin:@wood-01.com:1531/service_name or
jdbc:oracle:thin:@wood-02.com:1531/service_name
Since I need my DB requests to be load balanced, I need to use the cluster name instead of the individual node names.
I would like to know the root cause of this issue, since this is the kind of issue that hits production environments.
Could you please help me out with this?
You need to change your connect string to:
jdbc:oracle:thin:@(DESCRIPTION=(FAILOVER=ON)(LOAD_BALANCE=ON)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=wood-01.com)(PORT=1531))(ADDRESS=(PROTOCOL=TCP)(HOST=wood-02.com)(PORT=1531)))(CONNECT_DATA=(SERVICE_NAME=service_name)(FAILOVER_MODE=(TYPE=select)(METHOD=basic))))
We are running an Ignite cluster with 3 nodes, which pre-loads data from a 3rd-party database (using a custom cache store). When we try to connect to the cluster using the Java thin client and a request reaches the cluster before the data loading has completed, we get an "Unknown pair" exception and some unstable behavior.
Is there any way we can block client requests (TCP socket connections) till the data loading is complete?
I tried different lifecycle events (NODE_START_COMPLETED) but had no luck.
Stack trace
Caused by: org.apache.ignite.binary.BinaryInvalidTypeException: Unknown pair [platformId=0, typeId=-845247802]
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:707)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143)
at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:177)
at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:67)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:125)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1773)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1761)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:573)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.putAll(GridCacheStoreManagerAdapter.java:627)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.batchStoreCommit(IgniteTxAdapter.java:1507)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:589)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3646)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:475)
... 41 common frames omitted
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=-845247802]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:394)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:344)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:698)
... 56 common frames omitted
There is no way to forbid thin clients from connecting to a cluster using the Ignite API at the moment. I created a JIRA ticket for this improvement: https://issues.apache.org/jira/browse/IGNITE-12237
The unknown pair exception doesn't seem to be caused by thin clients connecting at the wrong time, though. Usually it's caused by a missing marshaller directory in the work path.
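If a missing or non-persistent work directory is indeed the culprit, a minimal sketch (the path is an assumption) of pinning it explicitly so the marshaller mappings written at runtime survive restarts and are visible on every node:
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StableWorkDirNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Keep marshaller/binary metadata under a known, writable path rather
        // than a temp dir that may be wiped between runs (path is an assumption).
        cfg.setWorkDirectory("/var/lib/ignite/work");
        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Node started: " + ignite.cluster().localNode().id());
        }
    }
}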
I've got a problem with a Spring web application that periodically runs into an error fetching a connection from my connection pool. Eventually in the logs I see entries like:
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only way I've found to recover once it hits this point is to restart Tomcat.
I think the most likely explanation is that I have some code somewhere that is not properly cleaning up its connection: not returning it to Hikari, leaving something open so Spring can't clean it up, etc.
To troubleshoot, I've set my Hikari leakDetectionThreshold config to 5000ms and enabled logging. After that, I see log entries like
2018-04-24 19:53:56 WARN ProxyLeakTask:87 - Connection leak detection triggered for org.postgresql.jdbc.PgConnection@664ec666, stack trace follows
java.lang.Exception: Apparent connection leak detected
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:99)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:129)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:47)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:146)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:172)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:148)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1940)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1909)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1887)
at org.hibernate.loader.Loader.doQuery(Loader.java:932)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:349)
at org.hibernate.loader.Loader.doList(Loader.java:2615)
at org.hibernate.loader.Loader.doList(Loader.java:2598)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2430)
at org.hibernate.loader.Loader.list(Loader.java:2425)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:335)
at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:2129)
at org.hibernate.internal.AbstractSharedSessionContract.list(AbstractSharedSessionContract.java:981)
at org.hibernate.query.internal.NativeQueryImpl.doList(NativeQueryImpl.java:147)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1398)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1444)
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.orm.jpa.SharedEntityManagerCreator$DeferredQueryInvocationHandler.invoke(SharedEntityManagerCreator.java:379)
at com.sun.proxy.$Proxy163.getSingleResult(Unknown Source)
at com.mycompany.web.jpa.util.DBHelper.getPagedMappedDbResults(DBHelper.java:76)
at com.mycompany.web.jpa.repository.TaskRepositoryImpl.findTaskDetailsByStepIdAndIdIn(TaskRepositoryImpl.java:245)
......
So it is detecting a possible leak. It could be a false positive, I suppose? But this is also the only class in my app that does database access outside of the standard service/repository pattern often used in Spring apps, so it seems like a likely culprit, and it's my best lead at the moment.
Anyway, the last piece of non-library code I see in the trace (i.e. stuff I wrote, so most likely to be the cause of the leak!) is my DBHelper::getPagedMappedDbResults method, the relevant bit of which is included here:
Query q = entityManager.createNativeQuery(countQueryText);
setQueryParameters(q, parameters);
long numActualResults = 0;
try {
    numActualResults = ((Number) q.getSingleResult()).longValue(); // line 76
} catch (Exception e) {
    System.out.println("just in case: " + e);
}
So basically I create a Query object from my EntityManager instance, set some parameters, and run it to get some results.
Is there something I need to be doing with a Query object when I'm done with it? q.cleanup()? I don't see anything like this from reading the docs, but am I not doing good housekeeping on this resource?
The entityManager itself is created from an @Autowired annotation. My understanding is that if I didn't "new" it to instantiate it and instead let the Spring framework autowire it, then Spring will do whatever cleanup is necessary. Is that right? Or do I need to be doing some cleanup after I use the entityManager?
Version details:
Tomcat 8 / Java 8
Spring 5.0.0.RELEASE
Spring Data Kay-RELEASE
Hibernate 5.2.3.Final
Hikari 2.4.5
Any advice or suggestions would be greatly appreciated, thanks!
What is the query? Is it heavy? Maybe you have a deadlock here? The connection management looks fine: you do not acquire the connection explicitly, so there is no need to release it. The query might be long-running, so that Hibernate is not able to complete it and release the connection.
Also, you can check the number of open connections on the DB side and do some analysis there as well.
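For the DB-side check, a minimal sketch against PostgreSQL (which the PgConnection in the leak trace suggests you are on); the JDBC URL and credentials are assumptions:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolUsageCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/mydb"; // assumption
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             // Connections stuck "idle in transaction" are a classic sign of
             // a borrowed-but-never-returned connection.
             ResultSet rs = st.executeQuery(
                     "SELECT state, count(*) FROM pg_stat_activity GROUP BY state")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getLong(2));
            }
        }
    }
}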
After running my application, I am getting this error after around 5 minutes.
Even though I am returning the resource after use, I keep getting this.
I have built jedis-2.2.2-SNAPSHOT.jar from the Jedis code base, since it's not released yet.
I set minIdle=100, maxIdle=200 and maxActive=200. At the time of this exception, the connection count from my application to Redis was 122.
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.util.Pool.getResource(Pool.java:42)
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:442)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:360)
at redis.clients.util.Pool.getResource(Pool.java:40)
... 6 more
Did you check that Redis is still up and running? If not, investigate why it died.
Try redis-cli in a terminal if you can; "info" would give you more details.
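For what it's worth, a minimal sketch of the return-path discipline for Jedis 2.x pools (host, port, and key are assumptions): a broken connection returned as if it were healthy can gradually poison the pool, which then times out exactly as in the trace above.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class JedisReturnDiscipline {
    public static void main(String[] args) throws Exception {
        JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost", 6379);
        Jedis jedis = null;
        boolean broken = false;
        try {
            jedis = pool.getResource();
            jedis.set("key", "value");
        } catch (Exception e) {
            // The connection state is unknown after a failure: discard it
            // instead of handing it back to the pool as idle.
            broken = true;
            if (jedis != null) {
                pool.returnBrokenResource(jedis);
            }
            throw e;
        } finally {
            if (jedis != null && !broken) {
                pool.returnResource(jedis); // give the healthy connection back
            }
        }
        pool.destroy();
    }
}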