I'm trying to set up a clustered server on WebLogic. I walk through the following steps, yet when I restart my server and then start the cluster, I get an exception. The exception is listed below.
Here are the steps I go through in the WebLogic Console.
Create Cluster
On the console page (mydomain.com:7002/console), in the tree on the left I click Environment to expand it, then click Clusters. The screen on the right changes; I fill in the Cluster Name and leave the messaging mode set to Unicast. The multicast address and port are already filled in but greyed out. I then click OK.
After this I configure the server. In the console navigation I click Servers, then Server Config. I fill in the Server Name (MyServer), enter the Server Listen Address, and leave the port at its default of 7001, which is available. For the question "Should this server belong to a cluster?" I answer "Yes, make this server a member of an existing cluster" and select the cluster I just created. I then click Finish.
After I activate the changes and reboot the server, starting the server instance produces the exception below in the error logs. Any help would be greatly appreciated.
####<Sep 20, 2016 1:57:25 PM CDT> <Debug> <ServerLifeCycle> <DIFDX> <WLS_RUN_MANAGER> <main> <<WLS Kernel>> <> <> <1474397845602> <BEA-000000> <calling halt on weblogic.nodemanager.NMService#3c3d22af>
####<Sep 20, 2016 1:57:25 PM CDT> <Debug> <DiagnosticContext> <> <> <weblogic.timers.TimerThread> <> <> <> <1474397845604> <BEA-000000> <Invoked DCM.initialValue() for thread id=15, name=weblogic.timers.TimerThread
java.lang.Exception
at weblogic.diagnostics.context.DiagnosticContextManager$1.initialValue(DiagnosticContextManager.java:267)
at weblogic.kernel.ResettableThreadLocal.initialValue(ResettableThreadLocal.java:117)
at weblogic.kernel.ResettableThreadLocal$ThreadStorage.get(ResettableThreadLocal.java:204)
at weblogic.kernel.ResettableThreadLocal.get(ResettableThreadLocal.java:74)
at weblogic.diagnostics.context.DiagnosticContextManager$WLSDiagnosticContextFactoryImpl.findOrCreateDiagnosticContext(DiagnosticContextManager.java:365)
at weblogic.diagnostics.context.DiagnosticContextFactory.findOrCreateDiagnosticContext(DiagnosticContextFactory.java:111)
at weblogic.diagnostics.context.DiagnosticContextFactory.findOrCreateDiagnosticContext(DiagnosticContextFactory.java:94)
at weblogic.diagnostics.context.DiagnosticContextHelper.getContextId(DiagnosticContextHelper.java:32)
at weblogic.logging.LogEntryInitializer.getCurrentDiagnosticContextId(LogEntryInitializer.java:117)
at weblogic.logging.LogEntryInitializer.initializeLogEntry(LogEntryInitializer.java:67)
at weblogic.logging.WLLogRecord.<init>(WLLogRecord.java:43)
at weblogic.logging.WLLogRecord.<init>(WLLogRecord.java:54)
at weblogic.logging.WLLogger.normalizeLogRecord(WLLogger.java:64)
at weblogic.logging.WLLogger.log(WLLogger.java:35)
at weblogic.diagnostics.debug.DebugLogger.log(DebugLogger.java:231)
at weblogic.diagnostics.debug.DebugLogger.debug(DebugLogger.java:204)
at weblogic.work.SelfTuningDebugLogger.debug(SelfTuningDebugLogger.java:18)
at weblogic.work.ServerWorkManagerImpl$1.log(ServerWorkManagerImpl.java:44)
at weblogic.work.SelfTuningWorkManagerImpl.debug(SelfTuningWorkManagerImpl.java:597)
at weblogic.work.RequestManager.log(RequestManager.java:1204)
at weblogic.work.RequestManager.addToCalendarQueue(RequestManager.java:315)
at weblogic.work.RequestManager.addToPriorityQueue(RequestManager.java:301)
at weblogic.work.RequestManager.executeIt(RequestManager.java:248)
at weblogic.work.SelfTuningWorkManagerImpl.scheduleInternal(SelfTuningWorkManagerImpl.java:164)
at weblogic.work.SelfTuningWorkManagerImpl.schedule(SelfTuningWorkManagerImpl.java:144)
at weblogic.timers.internal.TimerManagerFactoryImpl$WorkManagerExecutor.execute(TimerManagerFactoryImpl.java:132)
at weblogic.timers.internal.TimerManagerImpl.waitForStop(TimerManagerImpl.java:241)
at weblogic.timers.internal.TimerManagerImpl.stop(TimerManagerImpl.java:98)
at weblogic.timers.internal.TimerThread$Thread.run(TimerThread.java:250)
Solution: It ended up being a timeout issue. The server was taking so long to come up that the startup machinery assumed no response was coming back and concluded it couldn't get through. Once I increased the timeout, the server finally came up and worked.
Related
I am connecting to a remote Oracle DB from an AWS server over a Java Hibernate connection. While a fetch is in progress, if there is any network fluctuation, the query hangs.
Below is the query log:
23 Sep 2018 10:46:01 LiveSync INFO {Query----->Select count(rec_no) as count_rec from TBL_NAME where REC_NO='XXXX' Booking Id--->XXXXX}
23 Sep 2018 11:02:09 LiveSync INFO {SQL Error...could not execute query}
In the first line, the query was fired at 10:46:01 to count the matching records; due to network packet loss, the response only came back to the Java side at 11:02:09, about 15 minutes later.
Can anyone help me make the Java side respond immediately when the network breaks?
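Hibernate itself won't detect a dead TCP connection; a read on a broken socket simply blocks. A common remedy is to bound socket reads at the Oracle thin-driver level (oracle.jdbc.ReadTimeout, in milliseconds; hibernate.connection.* properties are handed through to the JDBC driver) and to set a per-query timeout. A minimal sketch, assuming Hibernate 3/4 configured via hibernate.cfg.xml and the table/column names from the log above:

import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class BoundedQueryExample {
    public static void main(String[] args) {
        Configuration cfg = new Configuration().configure();
        // Passed through to the JDBC driver: fail a socket read after 60s
        // instead of hanging indefinitely on a dead connection.
        cfg.setProperty("hibernate.connection.oracle.jdbc.ReadTimeout", "60000");
        // Bound the initial TCP connect as well (10s).
        cfg.setProperty("hibernate.connection.oracle.net.CONNECT_TIMEOUT", "10000");
        SessionFactory factory = cfg.buildSessionFactory();

        Session session = factory.openSession();
        try {
            Query query = session.createSQLQuery(
                    "select count(rec_no) as count_rec from TBL_NAME where REC_NO = :recNo")
                    .setParameter("recNo", "XXXX")
                    .setTimeout(60); // per-statement timeout, in seconds
            System.out.println("count_rec = " + query.uniqueResult());
        } finally {
            session.close();
            factory.close();
        }
    }
}

With those bounds in place, a network breakage surfaces as an exception within the timeout window rather than a 15-minute hang.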
I'm getting an exception when I try to connect to OrientDB using Java. Below is the exception I'm getting.
Jun 07, 2016 12:43:40 PM com.orientechnologies.common.log.OLogManager log
INFO: OrientDB auto-config DISKCACHE=891MB (heap=891MB direct=891MB os=4,006MB), assuming maximum direct memory size equals to maximum JVM heap size
Jun 07, 2016 12:43:40 PM com.orientechnologies.common.log.OLogManager log
WARNING: MaxDirectMemorySize JVM option is not set or has invalid value, that may cause out of memory errors. Please set the -XX:MaxDirectMemorySize=4006m option when you start the JVM.
Exception in thread "main" com.orientechnologies.orient.core.exception.OFileLockedByAnotherProcessException: File 'F:\Program Files\orientdb-community-2.2.0\databases\mydbo\database.ocf' is locked by another process, maybe the database is in use by another process. Use the remote mode with a OrientDB server to allow multiple access to the same database
at com.orientechnologies.orient.core.storage.fs.OFileClassic.lock(OFileClassic.java:756)
at com.orientechnologies.orient.core.storage.fs.OFileClassic.openChannel(OFileClassic.java:813)
at com.orientechnologies.orient.core.storage.fs.OFileClassic.open(OFileClassic.java:603)
at com.orientechnologies.orient.core.storage.impl.local.OSingleFileSegment.open(OSingleFileSegment.java:51)
at com.orientechnologies.orient.core.storage.impl.local.OStorageConfigurationSegment.load(OStorageConfigurationSegment.java:80)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:186)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:231)
at orient.insert.Insert.main(Insert.java:12)
This is the code that I tried:
ODatabaseDocumentTx db = new ODatabaseDocumentTx(
        "plocal:F:/Program Files/orientdb-community-2.2.0/databases/mydbo").open("admin", "admin");
ODocument doc = new ODocument("Person");
doc.field("name", "Luke");
doc.field("surname", "Skywalker");
doc.field("city", new ODocument("City").field("name", "Rome").field("country", "Italy"));
doc.save();
db.close();
I can't figure out what is causing this error.
You have a server running, and you are trying to open the database from another process in plocal mode.
Could you verify that no other OrientDB instance (the console or an external process) is accessing the database while you use plocal, and that you open only one plocal connection at a time?
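As the error message itself suggests, if the server must keep running, the usual alternative is to connect through it in remote mode instead of plocal. A minimal sketch against the database from the question (it assumes the server is up on localhost with its default binary port and the same admin/admin credentials):

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class RemoteInsert {
    public static void main(String[] args) {
        // "remote:" routes through the OrientDB server (binary port 2424 by
        // default), which multiplexes access so several processes can share
        // the same database; plocal allows exactly one process.
        ODatabaseDocumentTx db =
                new ODatabaseDocumentTx("remote:localhost/mydbo").open("admin", "admin");
        try {
            ODocument doc = new ODocument("Person");
            doc.field("name", "Luke");
            doc.save();
        } finally {
            db.close();
        }
    }
}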
I am using Hazelcast v2.5 with the Maven plugin in Eclipse. I tried running an example program that creates 3 map entries in the ABCD namespace. When I run my code in Eclipse, it shows this WARNING message:
Feb 07, 2013 12:08:23 PM com.hazelcast.impl.ConcurrentMapManager
WARNING: [192.168.1.36]:5702 [dev] Caller -> RedoLog{name=c:ABCD, redoType=REDO_TARGET_UNKNOWN, operation=CONCURRENT_MAP_MERGE, target=null / connected=false, redoCount=53, migrating=null
partition=Partition [227]{
}
}
This keeps repeating until the redo threshold is exceeded.
Feb 07, 2013 12:08:42 PM com.hazelcast.impl.LifecycleServiceImpl
WARNING: [192.168.1.36]:5702 [dev] [CONCURRENT_MAP_MERGE] Redo threshold[90] exceeded! Last redo cause: REDO_TARGET_UNKNOWN, Name: c:ABCD
com.hazelcast.core.OperationTimeoutException: [CONCURRENT_MAP_MERGE] Redo threshold[90] exceeded! Last redo cause: REDO_TARGET_UNKNOWN, Name: c:ABCD
at com.hazelcast.impl.BaseManager$ResponseQueueCall.getRedoAwareResult(BaseManager.java:640)
at com.hazelcast.impl.BaseManager$ResponseQueueCall.getResult(BaseManager.java:627)
at com.hazelcast.impl.BaseManager$RequestBasedCall.getResultAsBoolean(BaseManager.java:437)
at com.hazelcast.impl.BaseManager$ResponseQueueCall.getResultAsBoolean(BaseManager.java:544)
at com.hazelcast.impl.ConcurrentMapManager$MPut.mergeOne(ConcurrentMapManager.java:1758)
at com.hazelcast.impl.ConcurrentMapManager$MPut.merge(ConcurrentMapManager.java:1747)
at com.hazelcast.impl.LifecycleServiceImpl$1.run(LifecycleServiceImpl.java:143)
at com.hazelcast.impl.executor.ParallelExecutorService$ParallelExecutorImpl$ExecutionSegment.run(ParallelExecutorService.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
at com.hazelcast.impl.ExecutorThreadFactory$1.run(ExecutorThreadFactory.java:38)
Why does this occur? Please help!
Just to get started with a barebones Java class, you can do the following (I suggest going through a more detailed tutorial as it might be more useful; I have not covered accessing the Hazelcast service):
Specify the map in the hazelcast.xml file as follows:
<map name="testMap">
<backup-count>1</backup-count>
<eviction-policy>NONE</eviction-policy>
<max-size policy="cluster_wide_map_size">0</max-size>
<eviction-percentage>25</eviction-percentage>
<merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
<map-store enabled="true">
<class-name>models.test.StoreLoadTestMap</class-name>
<write-delay-seconds>5</write-delay-seconds>
</map-store>
<entry-listeners>
<entry-listener include-value="true" local="false">models.test.ListenerTestMap</entry-listener>
</entry-listeners>
</map>
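The map-store class named in the config has to implement Hazelcast's com.hazelcast.core.MapStore interface. Here is a minimal sketch of what models.test.StoreLoadTestMap could look like; the in-memory backing map is purely illustrative, standing in for a real database:

package models.test;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import com.hazelcast.core.MapStore;

public class StoreLoadTestMap implements MapStore<String, Object> {

    // Stand-in for a real datastore (illustrative only).
    private final Map<String, Object> backingStore = new ConcurrentHashMap<String, Object>();

    public void store(String key, Object value) {
        backingStore.put(key, value); // invoked after write-delay-seconds (write-behind)
    }

    public void storeAll(Map<String, Object> map) {
        backingStore.putAll(map);
    }

    public void delete(String key) {
        backingStore.remove(key);
    }

    public void deleteAll(Collection<String> keys) {
        for (String key : keys) {
            backingStore.remove(key);
        }
    }

    public Object load(String key) {
        return backingStore.get(key);
    }

    public Map<String, Object> loadAll(Collection<String> keys) {
        Map<String, Object> result = new HashMap<String, Object>();
        for (String key : keys) {
            result.put(key, backingStore.get(key));
        }
        return result;
    }

    public Set<String> loadAllKeys() {
        return backingStore.keySet();
    }
}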
Once done, you can simply call the following from your Java app:
IMap<String, TestObject> testMap = Hazelcast.getMap("testMap");
Now you should be able to put/get values to and from the map as needed, as in the sketch below. You can use TCP/IP or multicast for cluster join/discovery depending on your use case (prefer TCP/IP where possible), and read the entries back from the map on a second node to confirm data replication. Please also take the time to understand how data gets replicated across the cluster.
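A minimal sketch of the put/get round-trip, using the same static Hazelcast 2.x API as above (in Hazelcast 3+ you would obtain the map from a HazelcastInstance instead):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IMap;

public class TestMapClient {
    public static void main(String[] args) {
        // The static getMap uses the default instance, which picks up
        // hazelcast.xml from the classpath (Hazelcast 2.x API).
        IMap<String, String> testMap = Hazelcast.getMap("testMap");

        testMap.put("greeting", "hello");            // visible cluster-wide
        System.out.println(testMap.get("greeting")); // prints "hello"

        Hazelcast.shutdownAll(); // release the default instance when done
    }
}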
Hope it helps
We are using Hazelcast 1.9.4.4 with a cluster of 6 Tomcat servers. We restarted our cluster, and here is a fragment of the log:
14-Jul-2012 03:25:41 com.hazelcast.nio.InSelector
INFO: /10.152.41.105:5701 [cem-prod] 5701 accepted socket connection from /10.153.26.16:54604
14-Jul-2012 03:25:47 com.hazelcast.cluster.ClusterManager
INFO: /10.152.41.105:5701 [cem-prod]
Members [6] {
Member [10.152.41.101:5701]
Member [10.164.101.143:5701]
Member [10.152.41.103:5701]
Member [10.152.41.105:5701] this
Member [10.153.26.15:5701]
Member [10.153.26.16:5701]
}
We can see that 10.153.26.16 is connected to the cluster, but later in the log there is:
14-Jul-2012 03:28:50 com.hazelcast.impl.ConcurrentMapManager
INFO: /10.152.41.105:5701 [cem-prod] ======= 47: CONCURRENT_MAP_LOCK ========
thisAddress= Address[10.152.41.105:5701], target= Address[10.153.26.16:5701]
targetMember= Member [10.153.26.16:5701], targetConn=Connection [/10.153.26.16:54604 -> Address[10.153.26.16:5701]] live=true, client=false, type=MEMBER, targetBlock=Block [2] owner=Address[10.153.26.16:5701] migrationAddress=null
cemClientNotificationsLock Re-doing [20] times! c:__hz_Locks : null
14-Jul-2012 03:28:55 com.hazelcast.impl.ConcurrentMapManager
INFO: /10.152.41.105:5701 [cem-prod] ======= 57: CONCURRENT_MAP_LOCK ========
thisAddress= Address[10.152.41.105:5701], target= Address[10.153.26.16:5701]
targetMember= Member [10.153.26.16:5701], targetConn=Connection [/10.153.26.16:54604 -> Address[10.153.26.16:5701]] live=true, client=false, type=MEMBER, targetBlock=Block [2] owner=Address[10.153.26.16:5701] migrationAddress=null
cemClientNotificationsLock Re-doing [30] times! c:__hz_Locks : null
After several restarts of the servers (all together, stopping all and starting them one by one, etc.) we were able to get the system running.
Could you explain why Hazelcast fails to lock the map at that node if it is in the cluster, or, if the node was out of the cluster, why it is displayed as a member?
Also, are there any recommendations on how to restart a Tomcat cluster with distributed Hazelcast structures (stop all nodes and start them together, stop and start one by one, stop Hazelcast somehow before the server restart, etc.)?
Thanks!
Could you explain, why Hazelcast fails to lock map at the node if it is in cluster
The map can simply be locked by some other node at that moment.
There have also been lots of fixes and changes since 1.9.4.4; it is a pretty old version. You should try 2.1+.
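If the lock can legitimately be held elsewhere, it also helps not to block on it indefinitely. A minimal sketch using a bounded tryLock on the distributed lock named in the log (ILock implements java.util.concurrent.locks.Lock; the 10-second budget is an arbitrary illustration):

import java.util.concurrent.TimeUnit;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.ILock;

public class BoundedLocking {
    public static void main(String[] args) throws InterruptedException {
        ILock lock = Hazelcast.getLock("cemClientNotificationsLock");
        // Wait at most 10 seconds instead of redoing/blocking forever while
        // another node (or a node mid-restart) holds the lock.
        if (lock.tryLock(10, TimeUnit.SECONDS)) {
            try {
                // ... critical section ...
            } finally {
                lock.unlock();
            }
        } else {
            System.err.println("Lock busy; backing off instead of hanging");
        }
    }
}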
I'm transferring a file across a LAN using JxtaServerSocket (receiver side) and JxtaSocket (sender side). First I send the file name, then its size. After that I wait for an offset, to start sending the file from it. If I start both parts of the program locally (on one computer) it works fine, but across different computers it doesn't:
05.04.2011 13:59:03 net.jxta.logging.Logging logCheckedWarning
WARNING: Line 557 net.jxta.socket.JxtaServerSocket.pipeMsgEvent()
backlog queue full, connect request dropped
05.04.2011 13:59:03 net.jxta.logging.Logging logCheckedInfo
INFO: Line 115 net.jxta.impl.pipe.InputPipeImpl.<init>()
Creating InputPipe for urn:jxta:uuid-C5D686304E5A4916A943F1F4D0FD649892EAC01ED6644C4DA82ADA0A22F4C7B004 of type JxtaUnicast with listener
05.04.2011 13:59:03 net.jxta.logging.Logging logCheckedInfo
INFO: Line 356 net.jxta.socket.JxtaSocket.<init>()
New socket : net.jxta.socket.JxtaSocket#10175206[uuid-C5D686304E5A4916A943F1F4D0FD64988A0B5A3B55C14246824C9A3325E18D2204/uuid-C5D686304E5A4916A943F1F4D0FD649892EAC01ED6644C4DA82ADA0A22F4C7B004] OPEN : i R B C
05.04.2011 13:59:03 org.mopsproject.core.net.transfer.FileReceiver run
INFO: New socket connection accepted
05.04.2011 13:59:03 org.mosprpoject.core.net.transfer.FileReceiver.ConnectionHandler ConnectionHandler(JxtaSocket socket)
INFO: Method started.
05.04.2011 13:59:03 org.mopsproject.core.net.transfer.FileReceiver run
INFO: Waiting for connections
05.04.2011 13:59:03 org.mopsproject.core.net.transfer.FileReceiver getTargetFile
INFO: filename : 550e8400-e29b-41d4-a716-446655441234.xml
05.04.2011 13:59:03 net.jxta.logging.Logging logCheckedInfo
INFO: Line 1137 net.jxta.impl.util.pipe.reliable.ReliableOutputStream$Retransmitter.<init>()
STARTED Reliable Retransmitter, RTO = 60000
05.04.2011 13:59:04 org.mosprpoject.core.net.transfer.FileReceiver.ConnectionHandler sendAndReceiveData(JxtaSocket socket)
SEVERE: Read timeout reached
java.net.SocketTimeoutException: Read timeout reached
at net.jxta.impl.util.pipe.reliable.ReliableInputStream.dequeueMessage(ReliableInputStream.java:569)
at net.jxta.impl.util.pipe.reliable.ReliableInputStream.local_read(ReliableInputStream.java:702)
at net.jxta.impl.util.pipe.reliable.ReliableInputStream.read(ReliableInputStream.java:309)
at java.io.DataInputStream.readFully(Unknown Source)
at java.io.DataInputStream.readLong(Unknown Source)
at org.mopsproject.core.net.transfer.FileReceiver$ConnectionHandler.sendAndReceiveData(FileReceiver.java:310)
at org.mopsproject.core.net.transfer.FileReceiver$ConnectionHandler.run(FileReceiver.java:409)
at java.lang.Thread.run(Unknown Source)
One interesting thing is that the receiver gets the file name but nothing else.
I would like to know what the reasons for this strange behaviour could be.
On the other hand, it might be JXTA's fault, I suppose.
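For reference, the header exchange the stack trace points at boils down to framed reads on the socket's DataInputStream. Here is a sketch of the receiver side under stated assumptions (the sender wrote the name with writeUTF and the size with writeLong, and flushed; the class and method names are illustrative, not the poster's actual code). Since JxtaSocket extends java.net.Socket, an explicit soTimeout bounds how long a read on a dead pipe can hang:

import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

public class HeaderReader {
    static void readHeader(Socket socket) throws IOException {
        socket.setSoTimeout(30000); // fail within 30s rather than waiting on the RTO
        DataInputStream in = new DataInputStream(socket.getInputStream());
        String fileName = in.readUTF(); // assumes the sender used writeUTF
        long fileSize = in.readLong();  // the readLong seen in the stack trace
        System.out.println("Receiving " + fileName + " (" + fileSize + " bytes)");
    }
}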
I had a discussion with Eugene on the JXSE/JXTA mailing list. One of his stack traces seemed to indicate that he was using an ADHOC configuration (which only uses multicasting) in a scenario requiring TCP/HTTP connections. I advised him to use EDGE / RDV configurations instead. I have not received feedback since, so I am assuming this solved his issue.
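For completeness, switching a peer from ADHOC to EDGE with JXSE's NetworkManager looks roughly like this (the peer name and cache directory are illustrative; an EDGE peer discovers and uses rendezvous/relay peers over TCP/HTTP rather than relying on multicast alone):

import java.io.File;

import net.jxta.platform.NetworkManager;

public class EdgePeer {
    public static void main(String[] args) throws Exception {
        NetworkManager manager = new NetworkManager(
                NetworkManager.ConfigMode.EDGE,
                "FileTransferPeer",                 // illustrative peer name
                new File(".jxta_cache").toURI());   // illustrative config/cache dir
        manager.startNetwork(); // joins the network; returns the net peer group
        try {
            // ... create the JxtaServerSocket / JxtaSocket here as before ...
        } finally {
            manager.stopNetwork();
        }
    }
}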