I'm trying to create an application similar to the one at https://github.com/heroku/devcenter-java-quartz-rabbitmq, suggested by this Heroku article: https://devcenter.heroku.com/articles/scheduled-jobs-custom-clock-processes-java-quartz-rabbitmq.
The only difference is that, for the moment, my web app does nothing ("Hello World") and my scheduler app just prints the current time.
Unfortunately, after 30 minutes of not using the application, both the web and scheduler processes stop working:
2020-02-09T15:20:17.911457+00:00 app[scheduler.1]: 15:20:17.911 [SpringContextShutdownHook] INFO o.s.s.c.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor'
2020-02-09T15:20:18.151399+00:00 app[web.1]: 15:20:18.151 [SpringContextShutdownHook] INFO org.quartz.core.QuartzScheduler - Scheduler quartzScheduler_$_NON_CLUSTERED paused.
2020-02-09T15:20:18.151689+00:00 app[web.1]: 15:20:18.151 [SpringContextShutdownHook] INFO o.s.s.quartz.SchedulerFactoryBean - Shutting down Quartz Scheduler
2020-02-09T15:20:18.151775+00:00 app[web.1]: 15:20:18.151 [SpringContextShutdownHook] INFO org.quartz.core.QuartzScheduler - Scheduler quartzScheduler_$_NON_CLUSTERED shutting down.
2020-02-09T15:20:18.151840+00:00 app[web.1]: 15:20:18.151 [SpringContextShutdownHook] INFO org.quartz.core.QuartzScheduler - Scheduler quartzScheduler_$_NON_CLUSTERED paused.
2020-02-09T15:20:18.152244+00:00 app[web.1]: 15:20:18.152 [SpringContextShutdownHook] INFO org.quartz.core.QuartzScheduler - Scheduler quartzScheduler_$_NON_CLUSTERED shutdown complete.
2020-02-09T15:20:18.152871+00:00 app[web.1]: 15:20:18.152 [SpringContextShutdownHook] INFO o.s.s.c.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor'
2020-02-09T15:20:18.135436+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2020-02-09T15:20:18.247822+00:00 heroku[web.1]: Process exited with status 143
2020-02-09T15:20:18.123889+00:00 heroku[scheduler.1]: Process exited with status 143
The Procfile is pretty simple:
web: java $JAVA_OPTS -Dserver.port=$PORT -jar target/*.jar
scheduler: java $JAVA_OPTS -cp target/*.jar -Dloader.main=algotrading.app.scheduler.SchedulerApp org.springframework.boot.loader.PropertiesLauncher
Is there something I'm missing in the example?
I opened a support ticket as suggested by @codefinger and received immediate help. The answer to my question: when you use free dynos, once the web dyno becomes idle (due to dyno sleeping), all other dynos in the app become idle too.
This is the expected behaviour.
They will update the documentation as soon as they can.
I'm trying to submit a teragen job to YARN like this:
yarn jar $YARN_EXAMPLES/hadoop-mapreduce-examples-3.3.1.jar teragen 1000 /teragen
It all goes well until it errors out:
2021-11-04 23:45:20,540 INFO mapreduce.Job: Running job: job_1636069364859_0003
2021-11-04 23:45:25,629 INFO mapreduce.Job: Job job_1636069364859_0003 running in uber mode : false
2021-11-04 23:45:25,630 INFO mapreduce.Job: map 0% reduce 0%
2021-11-04 23:45:27,658 INFO mapreduce.Job: Task Id : attempt_1636069364859_0003_m_000000_0, Status : FAILED
[2021-11-04 23:45:26.200]Exception from container-launch.
Container id: container_1636069364859_0003_01_000002
Exit code: 127
[2021-11-04 23:45:26.201]Container exited with a non-zero exit code 127. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
/bin/bash: line 1: m: command not found
I have no clue what the problem is. I've tried looking into the logs, especially the prelaunch.err file, but it is empty. The stderr file contains:
/bin/bash: line 1: m: command not found
Checking the node manager logs, I found this:
2021-11-04 23:44:05,765 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: s3a-file-system metrics system started
2021-11-04 23:44:06,423 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1636069364859_0001_01_000002 transitioned from LOCALIZING to SCHEDULED
2021-11-04 23:44:06,423 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler: Starting container [container_1636069364859_0001_01_000002]
2021-11-04 23:44:06,453 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1636069364859_0001_01_000002 transitioned from SCHEDULED to RUNNING
2021-11-04 23:44:06,453 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1636069364859_0001_01_000002
2021-11-04 23:44:06,457 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /opt/yarn/local/usercache/vagrant/appcache/application_1636069364859_0001/container_1636069364859_0001_01_000002/default_container_executor.sh]
2021-11-04 23:44:06,477 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1636069364859_0001_01_000002 is : 127
2021-11-04 23:44:06,478 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1636069364859_0001_01_000002 and exit code: 127
ExitCodeException exitCode=127:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
at org.apache.hadoop.util.Shell.run(Shell.java:901)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:309)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:585)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:373)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:103)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-11-04 23:44:06,479 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2021-11-04 23:44:06,479 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_1636069364859_0001_01_000002
2021-11-04 23:44:06,479 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 127
2021-11-04 23:44:06,479 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container launch failed : Container exited with a non-zero exit code 127.
2021-11-04 23:44:06,501 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1636069364859_0001_01_000002 transitioned from RUNNING to EXITED_WITH_FAILURE
2021-11-04 23:44:06,503 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerCleanup: Cleaning up container container_1636069364859_0001_01_000002
2021-11-04 23:44:06,515 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping s3a-file-system metrics system...
2021-11-04 23:44:06,515 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: s3a-file-system metrics system stopped.
2021-11-04 23:44:06,515 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
2021-11-04 23:44:06,525 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/yarn/local/usercache/vagrant/appcache/application_1636069364859_0001/container_1636069364859_0001_01_000002
2021-11-04 23:44:06,526 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1636069364859_0001_01_000002 transitioned from EXITED_WITH_FAILURE to DONE
2021-11-04 23:44:06,526 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_1636069364859_0001_01_000002 from application application_1636069364859_0001
2021-11-04 23:44:06,526 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1636069364859_0001_01_000002
I've read other answers, and they mention that Java is missing or JAVA_HOME is not set. That's not my case: my JAVA_HOME is set to /usr/lib/jvm/java-8-openjdk-amd64.
Any idea what could be going on here? Thanks :)
The problem was the memory allocated to each container. Apparently, some containers were not living long enough to actually log the error.
After several attempts, I finally got an error that looked like this:
Error occurred during initialization of VM
Too small initial heap
For some reason, the memory configuration I was using for YARN and MapReduce jobs was incorrect. I ended up using Ambari's HDP yarn-utils.py script to compute appropriate values for my setup.
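For reference, these are the kinds of settings involved. This is a minimal sketch with placeholder values, not recommendations; compute real values for your hardware with yarn-utils.py. The key constraint is that the JVM heap set in the `*.java.opts` properties must fit inside the container size set in the matching `*.memory.mb` properties (conventionally around 80% of it), otherwise the JVM may refuse to start or the generated launch command can end up malformed:

```xml
<!-- mapred-site.xml: illustrative values only -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value> <!-- container size for map tasks -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value> <!-- JVM heap: ~80% of the container -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value> <!-- container size for reduce tasks -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>
```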
I've been struggling with this issue for a couple of days and couldn't find anything on SO, and Heroku's description of its error codes is too vague for me to figure anything out. I have created a basic app with no DB integration, using Spring Framework, Thymeleaf, and Spring Mail; there is no Spring Security. After successfully deploying it to Heroku, I try to start the application server, and this is what I get in the Heroku logs. The app works fine on localhost with no issues.
2020-08-05T08:03:50.144941+00:00 app[web.1]: 2020-08-05 08:03:50.144 INFO 4 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2020-08-05T08:03:50.458342+00:00 app[web.1]: 2020-08-05 08:03:50.458 INFO 4 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-05T08:03:50.471638+00:00 app[web.1]: 2020-08-05 08:03:50.471 INFO 4 --- [ main] c.e.P.MyWebWebsiteApplication : Started MyWebWebsiteApplication in 2.688 seconds (JVM running for 3.226)
2020-08-05T08:04:53.993643+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=myweb01.herokuapp.com request_id=0fb69db1-5c01-412e-b01f-451a8a2b7ad8 fwd="86.124.21.129" dyno= connect= service= status=503 bytes= protocol=https
2020-08-05T08:05:16.255402+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 90 seconds of launch
2020-08-05T08:05:16.271979+00:00 heroku[web.1]: Stopping process with SIGKILL
2020-08-05T08:05:16.349020+00:00 heroku[web.1]: Process exited with status 137
2020-08-05T08:05:16.389343+00:00 heroku[web.1]: State changed from starting to crashed
2020-08-05T08:05:16.392175+00:00 heroku[web.1]: State changed from crashed to starting
2020-08-05T08:05:23.662195+00:00 heroku[web.1]: Starting process with command `java $JAVA_OPTS -jar target/MyWebWebsite-0.0.1-SNAPSHOT.jar -Dserver.port=25487 $JAR_OPTS`
2020-08-05T08:05:25.791836+00:00 app[web.1]: Setting JAVA_TOOL_OPTIONS defaults based on dyno size. Custom settings will override them.
2020-08-05T08:05:25.796964+00:00 app[web.1]: Picked up JAVA_TOOL_OPTIONS: -Xmx300m -Xss512k -XX:CICompilerCount=2 -Dfile.encoding=UTF-8
This is my Procfile:
web: java $JAVA_OPTS -jar target/MyWebWebsite-0.0.1-SNAPSHOT.jar -Dserver.port=$PORT $JAR_OPTS
Reading the logs and finding Tomcat started on port(s): 8080, I suppose this could be the issue? I mean, Tomcat is starting on the local default port instead of the Heroku-assigned port. If this is the case, how can I fix it? If it's not, what could the issue be?
From the docs here, error code H20 means:
The router will enqueue requests for 75 seconds while waiting for starting processes to reach an "up" state. If after 75 seconds, no web dynos have reached an "up" state, the router logs H20 and serves a standard error page
If your application requires more time to boot, you may use the boot timeout tool to increase the limit. However, in general, slow boot times will make it harder to deploy your application and will make recovery from dyno failures slower, so this should be considered a temporary solution
Try setting the boot timeout to 120 seconds; some Spring Boot apps take some time to boot. The request can be made as shown here: https://devcenter.heroku.com/changelog-items/364
Also, the health check process is based on HTTP requests, so be sure that your app is served through normal Web ports (80, 8080, 443).
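One thing worth double-checking in the Procfile shown in the question (an assumption based on how the JVM parses arguments, not something the Heroku docs call out): anything placed after -jar is passed to the application as a program argument, not to the JVM, so -Dserver.port=$PORT in that position never becomes a system property and Tomcat falls back to 8080. Moving the property before -jar (or using Spring Boot's --server.port argument form) should make the app bind to the Heroku-assigned port:

```shell
# Procfile: JVM flags must come before -jar; anything after the jar is an app argument
web: java $JAVA_OPTS -Dserver.port=$PORT -jar target/MyWebWebsite-0.0.1-SNAPSHOT.jar

# Equivalent, using Spring Boot's command-line argument form:
# web: java $JAVA_OPTS -jar target/MyWebWebsite-0.0.1-SNAPSHOT.jar --server.port=$PORT
```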
I am running a Spring Boot application using the bootRun task in IntelliJ (13.1.5 on Linux), and I am trying to shut down a listener thread in a lifecycle callback. However, when I stop the app in IntelliJ, I find the listener thread is still running and logging.
In Console:
"Disconnected from the target VM, address: ..., transport: 'socket'
6:39:22 AM: External task execution finished 'bootRun'."
Then I did ps | grep, found the process, and killed it; only then does the console print:
"2016-03-28 06:39:59.606 INFO 7740 --- [ Thread-1] ationConfigEmbeddedWebApplicationContext : Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#657617e6: ...2016-03-28 06:39:59.611 INFO 7740 --- [ Thread-1] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown"
My questions:
Why does stopping the app in IntelliJ not close the application context?
According to the docs, a Spring web app automatically registers shutdown hooks. I have tried @PreDestroy, destroyMethod, and implementing Lifecycle, but why does none of them get called?
I have a Spark cluster set up with one master and 3 workers.
I use Vagrant and Docker to start the cluster.
I'm trying to submit a Spark job from my local Eclipse that connects to the master and lets me execute it. Here is the SparkConf:
SparkConf conf = new SparkConf().setAppName("Simple Application").setMaster("spark://scale1.docker:7077");
When I run my application from Eclipse, the master's UI shows one running application. All the workers are ALIVE, with 4/4 cores used and 512 MB allocated to the application.
The Eclipse console just keeps printing the same warning:
15/03/04 15:39:27 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/03/04 15:39:27 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:838
15/03/04 15:39:27 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[2] at mapToPair at CountLines.java:35)
15/03/04 15:39:27 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/03/04 15:39:42 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/03/04 15:39:57 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/1 is now EXITED (Command exited with code 1)
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Executor app-20150304143926-0001/1 removed: Command exited with code 1
15/03/04 15:40:04 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 1
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor added: app-20150304143926-0001/2 on worker-20150304140319-scale3.docker-55425 (scale3.docker:55425) with 4 cores
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150304143926-0001/2 on hostPort scale3.docker:55425 with 4 cores, 512.0 MB RAM
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/2 is now RUNNING
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/2 is now LOADING
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/0 is now EXITED (Command exited with code 1)
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Executor app-20150304143926-0001/0 removed: Command exited with code 1
15/03/04 15:40:04 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 0
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor added: app-20150304143926-0001/3 on worker-20150304140317-scale2.docker-60646 (scale2.docker:60646) with 4 cores
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150304143926-0001/3 on hostPort scale2.docker:60646 with 4 cores, 512.0 MB RAM
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/3 is now RUNNING
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/3 is now LOADING
Reading the Spark documentation, I found this:
Because the driver schedules tasks on the cluster, it should be run close to the worker nodes, preferably on the same local area network. If you'd like to send requests to the cluster remotely, it's better to open an RPC to the driver and have it submit operations from nearby than to run a driver far away from the worker nodes.
I think the problem is that the driver runs locally on my machine.
I am using Spark 1.2.0.
Is it possible to run the application in Eclipse and submit it to a remote cluster using a local driver? If so, what can I do?
Remote debugging is quite possible, and it works fine with the option below, executed on an edge node:
--driver-java-options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Debugging Spark Applications
You don't need to specify anything special beyond that. Here is a sample command:
spark-submit --master yarn-client --class org.hkt.spark.jstest.javascalawordcount.JavaWordCount --driver-java-options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 javascalawordcount-0.0.1-SNAPSHOT.jar
I am new to JBoss deployment. My environment is 32-bit Java on Unix with JBoss 6. While starting my application via the shell script X.sh, JBoss gets stuck at the Remoting service. I have spent a lot of time on this but haven't found any clue to resolve it. Please find the log below.
14:34:13,100 INFO [JMXKernel] Legacy JMX core initialized
14:34:24,603 INFO [AbstractServerConfig] JBoss Web Services - Native Server 3.4.1.GA
14:34:25,157 INFO [JSFImplManagementDeployer] Initialized 3 JSF configurations: [Mojarra-1.2, MyFaces-2.0, Mojarra-2.0]
14:34:32,683 WARNING [FileConfigurationParser] AIO wasn't located on this platform, it will fall back to using pure Java NIO. If your platform is Linux, install LibAIO to enable the AIO journal
14:34:37,911 INFO [mbean] Sleeping for 600 seconds
14:34:38,214 WARNING [FileConfigurationParser] AIO wasn't located on this platform, it will fall back to using pure Java NIO. If your platform is Linux, install LibAIO to enable the AIO journal
14:34:38,425 INFO [JMXConnector] starting JMXConnector on host 0.0.0.0:1090
14:34:38,560 INFO [MailService] Mail Service bound to java:/Mail
14:34:39,623 INFO [HornetQServerImpl] live server is starting..
14:34:39,705 INFO [JournalStorageManager] Using NIO Journal
14:34:39,730 WARNING [HornetQServerImpl] Security risk! It has been detected that the cluster admin user and password have not been changed from the installation default. Please see the HornetQ user guide, cluster chapter, for instructions on how to do this.
14:34:40,970 INFO [NettyAcceptor] Started Netty Acceptor version 3.2.1.Final-r2319 0.0.0.0:5455 for CORE protocol
14:34:40,971 INFO [NettyAcceptor] Started Netty Acceptor version 3.2.1.Final-r2319 0.0.0.0:5445 for CORE protocol
14:34:40,975 INFO [HornetQServerImpl] HornetQ Server version 2.1.2.Final (Colmeia, 120) started
14:34:41,040 INFO [WebService] Using RMI server codebase: http://esaxh036.hyd.lab.vignette.com:8083/
14:34:41,271 INFO [jbossatx] ARJUNA-32010 JBossTS Recovery Service (tag: JBOSSTS_4_14_0_Final) - JBoss Inc.
14:34:41,281 INFO [arjuna] ARJUNA-12324 Start RecoveryActivators
14:34:41,301 INFO [arjuna] ARJUNA-12296 ExpiredEntryMonitor running at Thu, 30 Oct 2014 14:34:41
14:34:41,323 INFO [arjuna] ARJUNA-12332 Failed to establish connection to server
14:34:41,348 INFO [arjuna] ARJUNA-12304 Removing old transaction status manager item 0:ffff0a601a3e:126a:5451fbf6:8
14:34:41,390 INFO [arjuna] ARJUNA-12310 Recovery manager listening on endpoint 0.0.0.0:4712
14:34:41,390 INFO [arjuna] ARJUNA-12344 RecoveryManagerImple is ready on port 4712
14:34:41,391 INFO [jbossatx] ARJUNA-32013 Starting transaction recovery manager
14:34:41,402 INFO [arjuna] ARJUNA-12163 Starting service com.arjuna.ats.arjuna.recovery.ActionStatusService on port 4713
14:34:41,403 INFO [arjuna] ARJUNA-12337 TransactionStatusManagerItem host: 0.0.0.0 port: 4713
14:34:41,425 INFO [arjuna] ARJUNA-12170 TransactionStatusManager started on port 4713 and host 0.0.0.0 with service com.arjuna.ats.arjuna.recovery.ActionStatusService
14:34:41,480 INFO [jbossatx] ARJUNA-32017 JBossTS Transaction Service (JTA version - tag: JBOSSTS_4_14_0_Final) - JBoss Inc.
14:34:41,549 INFO [arjuna] ARJUNA-12202 registering bean jboss.jta:type=ObjectStore.
14:34:41,764 INFO [AprLifecycleListener] The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /home/IWSTU/JBOSS/jboss-6.0.0.Final/bin/native/lib
14:34:41,922 INFO [ModClusterService] Initializing mod_cluster 1.1.0.Final
14:34:41,935 INFO [TomcatDeployment] deploy, ctxPath=/invoker
14:34:42,364 INFO [RARDeployment] Required license terms exist, view vfs:/home/IWSTU/JBOSS/jboss-6.0.0.Final/server/XDomain/deploy/jboss-local-jdbc.rar/META-INF/ra.xml
14:34:42,382 INFO [RARDeployment] Required license terms exist, view vfs:/home/IWSTU/JBOSS/jboss-6.0.0.Final/server/XDomain/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml
14:34:42,395 INFO [RARDeployment] Required license terms exist, view vfs:/home/IWSTU/JBOSS/jboss-6.0.0.Final/server/XDomain/deploy/jms-ra.rar/META-INF/ra.xml
14:34:42,410 INFO [HornetQResourceAdapter] HornetQ resource adaptor started
14:34:42,421 INFO [RARDeployment] Required license terms exist, view vfs:/home/IWSTU/JBOSS/jboss-6.0.0.Final/server/XDomain/deploy/mail-ra.rar/META-INF/ra.xml
14:34:42,439 INFO [RARDeployment] Required license terms exist, view vfs:/home/IWSTU/JBOSS/jboss-6.0.0.Final/server/XDomain/deploy/quartz-ra.rar/META-INF/ra.xml
14:34:42,544 INFO [SimpleThreadPool] Job execution threads will use class loader of thread: Thread-7
14:34:42,578 INFO [SchedulerSignalerImpl] Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
14:34:42,579 INFO [QuartzScheduler] Quartz Scheduler v.1.8.3 created.
14:34:42,582 INFO [RAMJobStore] RAMJobStore initialized.
14:34:42,585 INFO [QuartzScheduler] Scheduler meta-data: Quartz Scheduler (v1.8.3) 'JBossQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
14:34:42,585 INFO [StdSchedulerFactory] Quartz scheduler 'JBossQuartzScheduler' initialized from an externally opened InputStream.
14:34:42,586 INFO [StdSchedulerFactory] Quartz scheduler version: 1.8.3
14:34:42,586 INFO [QuartzScheduler] Scheduler JBossQuartzScheduler_$_NON_CLUSTERED started.
14:34:43,229 INFO [ConnectionFactoryBindingService] Bound
ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS'
14:34:43,422 INFO [TomcatDeployment] deploy, ctxPath=/juddi
14:34:43,488 INFO [RegistryServlet] Loading jUDDI configuration.
14:34:43,494 INFO [RegistryServlet] Resources loaded from: /WEB-INF/juddi.properties
14:34:43,494 INFO [RegistryServlet] Initializing jUDDI components.
14:34:43,688 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA'
14:34:43,738 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=OracleDS' to JNDI name 'java:OracleDS'
14:34:43,926 INFO [xnio] XNIO Version 2.1.0.CR2
14:34:43,937 INFO [nio] XNIO NIO Implementation Version 2.1.0.CR2
**14:34:44,170 INFO [remoting] JBoss Remoting version 3.1.0.Beta2** (stuck here)
14:44:37,912 INFO [TicketMap] Start:
14:44:37,913 INFO [TicketMap] Complete:
14:44:37,930 INFO [mbean] Sleeping for 600 seconds
14:54:37,932 INFO [TicketMap] Start:
14:54:37,932 INFO [TicketMap] Complete:
14:54:37,944 INFO [mbean] Sleeping for 600 seconds