Every time I create a new JBoss EAP 7 with MySQL (Persistent) application, I always get a "Deployment Failed" error for the MySQL pod. The log says this:
--> Scaling xxx-mysql-2 to 1
--> Waiting up to 10m0s for pods in deployment xxx-mysql-2 to become ready
W0326 15:34:23.420524 1 reflector.go:330] github.com/openshift/origin/pkg/deploy/strategy/support/lifecycle.go:468:
watch of *api.Pod ended with: too old resource version: 1042923611 (1042946447)
error: update acceptor rejected xxx-mysql-2: pods for deployment "xxx-mysql-2" took longer than 600 seconds to become ready
Is this an OpenShift bug, or have I perhaps made a mistake?
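For reference, the 600 seconds in that message is the deployment strategy timeout on the DeploymentConfig, so if the MySQL image is just slow to pull or schedule, raising it is a quick first check. A sketch of the relevant fragment, assuming the xxx-mysql name from the log and a Recreate strategy (verify both against your own DeploymentConfig):
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: xxx-mysql
spec:
  strategy:
    type: Recreate
    recreateParams:
      timeoutSeconds: 1200  # default is 600, which matches the error above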
Recently I ran into a strange WildFly bug where I can't start my server properly.
When I start the server empty, I get:
[org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full
14.0.0.Final (WildFly Core 6.0.1.Final) started in 3430ms - Started 306 of 527
services (321 services are lazy, passive or on-demand)
Which is pretty standard, but even though it says started in 3430ms, when I go to my Servers tab the server says [Starting, Synchronized] and the only option I have is to terminate it rather than restart it. It also doesn't update my web content directly, and I have to restart the server every single time I make the slightest change, which is extremely time-consuming. So far I've tried:
deleting .eclipse, deleting eclipse-workspace, installing a new Eclipse IDE, installing new JBoss Tools (4.9.Final), and using a new/different WildFly.
And none of these solved the problem.
the only option I have is to terminate it rather than restarting it
This feature was removed from JBoss Tools in version 4.5, when the direct toolbar buttons were removed. See https://issues.jboss.org/browse/JBIDE-24528
You can try downgrading JBoss Tools.
It also don't update my WEB content directly and I have to restart the
server every single time when I make a slight change
Check that you have added the resources to the server: right-click the server -> Add and Remove...
Once a resource is added, WildFly should publish your changes automatically; you can also right-click the resource and do a manual incremental or full publish.
Hope it helps.
After fixing "Port Offset" it works: instead of offset 0, the field contained the port number 8080. I had to recreate the server configuration in the Servers view (New / Server).
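For context, that field maps onto the port-offset of the socket binding group in standalone.xml, which is added to every port, so entering 8080 there would shift HTTP from 8080 to 16160. A sketch of the relevant default WildFly fragment:
<!-- standalone.xml: the offset is added to every socket binding below -->
<socket-binding-group name="standard-sockets" default-interface="public"
    port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="http" port="${jboss.http.port:8080}"/>
    <!-- ... other bindings ... -->
</socket-binding-group>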
I have an issue with my environment, I guess.
I use Eclipse Photon and WildFly 11 for Maven Projects.
The issue occurs when I start my WildFly server from Eclipse. I wait a few seconds, then I get the following message:
19:46:06,398 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 11.0.0.Final (WildFly Core 3.0.8.Final) started in 24044ms - Started 674 of 900 services (355 services are lazy, passive or on-demand)
But my server doesn't seem to work correctly. I'll explain:
After I start the server and the message is displayed, I can access localhost:8080/myproject, but when the server start timeout is reached (450 seconds by default) the server stops, as if it had never fully started.
Furthermore, if I go to Servers -> WildFly 11 -> Server Details, I see the mention "Not Connected" and nothing else, while I should see folders like Attributes, Core Services, Deployments, etc.
I tried reinstalling Eclipse and WildFly 11 and 13, and I tried with a project on the server, with an empty server, and with a new workspace... The issue is still there.
Has anyone ever run into this, or does anyone have leads to solve it?
My OpenShift production app is restarting automatically once a day due to a heap space error, even though I haven't configured a message broker in my application. Going through my logs, I found that ActiveMQ is trying to create a new thread and hitting the heap space error.
Should we explicitly disable ActiveMQ?
01:34:12,417 ERROR [org.apache.activemq.artemis.core.client] (Thread-114 (ActiveMQ-remoting-threads-ActiveMQServerImpl::serverUUID=54cb8ef9-d17a-11e5-b538-af749189a999-28800659-992791)) AMQ214017: Caught unexpected Throwable: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor.execute(OrderedExecutorFactory.java:85)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection.write(InVMConnection.java:163)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection.write(InVMConnection.java:151)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:259)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:201)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.doConfirmAndResponse(ServerSessionPacketHandler.java:579)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.access$000(ServerSessionPacketHandler.java:116)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler$1.done(ServerSessionPacketHandler.java:561)
at org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl.executeOnCompletion(OperationContextImpl.java:161)
at org.apache.activemq.artemis.core.persistence.impl.journal.JournalStorageManager.afterCompleteOperations(JournalStorageManager.java:666)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.sendResponse(ServerSessionPacketHandler.java:546)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.handlePacket(ServerSessionPacketHandler.java:531)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:567)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:349)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:331)
at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:605)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection$1.run(InVMConnection.java:171)
at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
You could take a look at this comment in the WildFly OpenShift cartridge project's open issues.
To summarize, some users have experienced memory and performance issues with this cartridge's default settings.
This is largely because the OpenShift WildFly cartridge enables the Java EE 7 Full Profile by default (with a custom standalone.xml configuration file, not the standalone-full.xml used in a default WildFly standalone configuration), which does not work well within the limits of small gears.
But many Java EE 7 applications actually only use the Web Profile and do not need all of the Full Profile specs enabled.
So by enabling only the Java EE 7 Web Profile features and disabling the Full Profile-specific ones, such as the messaging subsystem, you can make this cartridge work fine on a small gear; see the CLI sketch below.
See also this other comment for solution details, and this table, which lists the differences between Java EE 7 profiles.
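If you want to try that on a WildFly 10 instance you control, one way to disable the messaging subsystem is the JBoss CLI; a sketch, assuming the standalone-full configuration is in use (the subsystem and extension names below are the WildFly 10 ones, so adjust for other versions):
# connect first with: sh bin/jboss-cli.sh --connect
/subsystem=messaging-activemq:remove()
/extension=org.wildfly.extension.messaging-activemq:remove()
:reload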
If you are running this on a small gear, that is your issue. WildFly 10 uses quite a bit of memory just by itself, so you should run it on a medium or large gear. You can also try changing how much of the gear's memory is available to the JVM: https://developers.openshift.com/en/wildfly-jvm-memory.html
Try adding -Dactivemq.artemis.client.global.thread.pool.max.size=20.
The Artemis client's default global thread pool has 500 threads, but small gears have a limit of 250 threads. I had a similar problem when I started 8 WildFly instances on my Linux machine, which allowed 4096 threads per user: the next day there was always java.lang.OutOfMemoryError: unable to create new native thread. I observed that Artemis constantly creates new threads until it reaches 500.
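If you don't want to pass the flag on every start, the same system property can be appended in bin/standalone.conf instead; a sketch, using a value safely below the 250-thread gear limit mentioned above:
# bin/standalone.conf -- sourced by standalone.sh on every start
JAVA_OPTS="$JAVA_OPTS -Dactivemq.artemis.client.global.thread.pool.max.size=20"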
When you run WildFly with the standalone-full.xml configuration, the JMS subsystem is enabled. WildFly 10.0.0.Final initializes Artemis's default thread pool with 500 threads; in future versions this will be replaced with a custom thread pool.
With WildFly 10.0.0.Final, the simplest way to tell Artemis to initialize a smaller number of threads (as Andrsej Szywala says) is with a command-line parameter at startup, like this:
sh standalone.sh -c standalone-full-ha.xml -Dactivemq.artemis.client.global.thread.pool.max.size=30
You can read more in my post on the JBoss forum:
https://developer.jboss.org/thread/268397
I have a webapp running on Amazon EC2 on Tomcat with Hibernate and REST; my MySQL is a standalone instance on Amazon RDS.
Once I start my webapp, everything works fine, but I recently configured daily backups on my database and then started seeing problems with my webapp connecting to MySQL.
Basically, the problem only happens if my webapp was started before the MySQL instance was restarted (backed up): after the MySQL restart, for some reason, any connections to it from my webapp fail.
It all resolves once I restart my EC2 VM. (It might also resolve if I restarted Tomcat, but I haven't tried that.)
How can I make sure my webapp reconnects to MySQL after a MySQL restart?
This is what is being written to my log:
21-May-2015 11:42:27.857 WARN [http-nio-8080-exec-2] org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logExceptions SQL Error: 0, SQLState: 08S01
21-May-2015 11:42:27.857 ERROR [http-nio-8080-exec-2] org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logExceptions Communications link failure
The last packet successfully received from the server was 200,187 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.
Any suggestions on what to dig into?
You should use a connection pool. For Hibernate, you can use c3p0.
In your Hibernate properties, set the following:
hibernate.connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider
Then, in a c3p0.properties file, set these properties so that, while the database is down, c3p0 retries the connection indefinitely, every 3 seconds:
c3p0.acquireRetryAttempts = 0
c3p0.acquireRetryDelay = 3000
c3p0.breakAfterAcquireFailure = false
See this section for more details on how to recover from a database outage.
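For completeness, Hibernate also forwards basic pool sizing to c3p0 straight from the Hibernate properties, so a minimal combined setup can look like the sketch below (the sizes are illustrative; the retry settings above still belong in c3p0.properties):
# hibernate.properties -- enable c3p0 and size the pool
hibernate.connection.provider_class=org.hibernate.connection.C3P0ConnectionProvider
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=300
hibernate.c3p0.idle_test_period=60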
Hi, I have inherited a WebSphere 6.1 Community Edition server that hosts several applications, all of which use the same pooled DB connections to MySQL. Yesterday the connection pool would run out after about 2 hours, requiring a server restart... every 2 hours... not great. So tonight I have all the modules stopped and am going to add them back one by one to see which one is the culprit. However, this leads me to the problem in the subject: when the WebSphere server boots, it gives me this every 15 minutes:
ERROR [RecoveryController] Recovery error: com.microsoft.sqlserver.jdbc.SQLServerException: Could not find stored procedure 'master..xp_sqljdbc_xa_recover'.
As far as I know, no SQL Server database is used in any of the client apps. Is this something that comes with WebSphere?
How can I get rid of the error?
Extra points: the server.log file is also writing [INFO] entries; where can I turn those off?
It looks like some old transaction against an MSSQL database is still in the transaction logs and could not be recovered, and XA does not appear to be configured on that database server.
If you are still using that MSSQL server, try configuring XA support; if you don't use it any more, you can stop the WAS CE server and remove the old transaction logs, which should be in /var/txlog.
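A sketch of that clean-up route, assuming a standard layout where var/txlog lives under the WAS CE installation directory (<WASCE_HOME> is a placeholder, and script names can vary by version):
# stop the server, clear the stale transaction logs, start again
<WASCE_HOME>/bin/shutdown.sh
rm -rf <WASCE_HOME>/var/txlog/*
<WASCE_HOME>/bin/startup.sh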
For logging configuration, check these two links: Logging in WAS CE and Application logging in WebSphere Application Server Community Edition.