Jenkins and JBoss EAP 7.1.0.GA deployment Issue - java

I am having a problem executing the deployment at the final stage of my Jenkins job.
Caused by: org.jboss.as.cli.CommandFormatException: Undeploy failed: {"WFLYCTL0062:
Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-1"
=> "WFLYDC0043: Cannot remove deployment abc-web-1.0.101.war from the domain as it is
still used by server groups [abc-demo-latest]"}}
at org.jboss.as.cli.handlers.UndeployHandler.doHandle(UndeployHandler.java:231)
at org.jboss.as.cli.handlers.CommandHandlerWithHelp.handle(CommandHandlerWithHelp.java:86)
at org.jboss.as.cli.impl.CommandContextImpl.handle(CommandContextImpl.java:581)
I do not have access to the admin console at the moment, but I would appreciate any hints or help as to what could possibly be wrong with my configuration.
Below is the Gradle task that is failing:
task removeUnusedArtifactsFromJbossRepository(dependsOn: ['outputTenantSettings', 'ensureValidTenant']) << {
    confirmToProceed("This command will remove unused artifacts from the Jboss Repository. Although this won't affect running systems, it may make manual rollbacks more difficult. Are you sure you wish to proceed?");
    def serverGroup = getTenantProperties('config.properties').server_group_name;

    ModelNode node = new ModelNode();
    node.get(ClientConstants.OP).set(ClientConstants.READ_RESOURCE_OPERATION);
    node.get(ClientConstants.OP_ADDR).add("/deployment");
    ModelNode result = getCommandHelper().getModelControllerClient().execute(node);

    def deployments = result.get("result").get("deployment");
    if (deployments.getType() == ModelType.UNDEFINED) {
        println "No artifacts to remove"
        return;
    }

    for (def deployment : deployments.asList()) {
        def deploymentName = deployment.asProperty().name
        if (deploymentName.contains("abc-web")) {
            println "Attempting to remove Deployment '${deploymentName}'"
            try {
                executeCliCommand("undeploy ${deploymentName}")
                println "Deployment '${deploymentName}' removed"
            } catch (java.lang.IllegalArgumentException exception) {
                // JBAS014653 means that the file is deployed to servers and can't be removed;
                // it's OK to ignore this
                if (exception.getMessage().contains("JBAS014653")) {
                    println "Deployment '${deploymentName}' cannot be removed as it's currently in use"
                } else {
                    throw exception;
                }
            }
        }
    }
}
After further investigation, I think the script is trying to delete a deployment from another server group, i.e. abc-demo-latest, which I need to leave untouched and not undeploy.
Is there a way I can change the script to only undeploy from the newly created server group, before deploying the new release?
I have tried the following:
if (serverGroup.equals("abc-demo-latest")) {
    println("Undeploying customer-abc-latest server group deployment ${deploymentName}")
    executeCliCommand("undeploy ${deploymentName}")
} else {
    println("Undeploying customer-demo-ams-stable server group deployment ${deploymentName}")
    executeCliCommand("undeploy ${deploymentName} --server-groups=other-server-group --keep-content")
}
But got the following error:
Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:
deploy (default-deploy) on project abc-parent: Failed to deploy artifacts:
Could not transfer artifact abc-parent:pom:1.0.26 from/to deployment (http://x.y.z:8080/nexus/content/repositories/releases/):
Failed to transfer file: http://x.y.z:8080/nexus/content/repositories/releases/abc-parent/1.0.26/abc-parent-1.0.26.pom.
Return code is: 400, ReasonPhrase: Bad Request

The key is in the error you posted there:
WFLYDC0043: Cannot remove deployment abc-web-1.0.101.war from the
domain as it is still used by server groups [abc-demo-latest]
You might not have undeployed from all of the server groups, which is why you can't remove the deployment from the domain. See the docs for undeploying. There is a command with which you can undeploy from all of the relevant server groups:
undeploy * --all-relevant-server-groups
Or just one server group:
undeploy * --server-groups=other-server-group
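If you want to stay with the management API that the Gradle task already uses (ModelNode / ModelControllerClient), the per-server-group undeploy can also be expressed as plain management operations. A minimal sketch, assuming the standard WildFly/EAP domain operations "undeploy" and "remove" on the /server-group=<group>/deployment=<name> address; host, port and names below are placeholders:

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.as.controller.client.helpers.ClientConstants;
import org.jboss.dmr.ModelNode;

public class UndeployFromServerGroup {
    public static void main(String[] args) throws Exception {
        String serverGroup = "my-server-group";          // placeholder
        String deploymentName = "abc-web-1.0.101.war";   // placeholder
        try (ModelControllerClient client = ModelControllerClient.Factory.create("localhost", 9990)) {
            // Stop the deployment on the chosen server group only.
            ModelNode undeploy = new ModelNode();
            undeploy.get(ClientConstants.OP).set("undeploy");
            ModelNode undeployAddr = undeploy.get(ClientConstants.OP_ADDR);
            undeployAddr.add("server-group", serverGroup);
            undeployAddr.add("deployment", deploymentName);
            System.out.println(client.execute(undeploy));

            // Remove the deployment mapping from that server group only.
            ModelNode remove = new ModelNode();
            remove.get(ClientConstants.OP).set("remove");
            ModelNode removeAddr = remove.get(ClientConstants.OP_ADDR);
            removeAddr.add("server-group", serverGroup);
            removeAddr.add("deployment", deploymentName);
            System.out.println(client.execute(remove));
        }
    }
}

The second operation only removes the mapping for that group; the content itself stays in the domain repository until no server group references it, which is the condition WFLYDC0043 is complaining about.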

Related

Unable to run temporal workflow from terminal

I am using Temporal for running workflows. I have created a jar with my app and am running the command below from the terminal: java -jar build/libs/app-0.0.1-SNAPSHOT.jar
I get the following error when running that command:
Exception in thread "main" io.grpc.StatusRuntimeException: UNKNOWN
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165)
at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.getSystemInfo(WorkflowServiceGrpc.java:4139)
at io.temporal.serviceclient.SystemInfoInterceptor.getServerCapabilitiesOrThrow(SystemInfoInterceptor.java:95)
at io.temporal.serviceclient.ChannelManager.lambda$getServerCapabilities$3(ChannelManager.java:330)
at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:60)
at io.temporal.serviceclient.ChannelManager.connect(ChannelManager.java:297)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.connect(WorkflowServiceStubsImpl.java:161)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.base/java.lang.reflect.Method.invoke(Method.java:577)
at io.temporal.internal.WorkflowThreadMarker.lambda$protectFromWorkflowThread$1(WorkflowThreadMarker.java:83)
at jdk.proxy1/jdk.proxy1.$Proxy0.connect(Unknown Source)
at io.temporal.worker.WorkerFactory.start(WorkerFactory.java:210)
at com.hok.furlenco.workflow.refundStatusSync.RefundStatusSyncSaga.createWorkFlow(RefundStatusSyncSaga.java:41)
at com.hok.furlenco.workflow.refundStatusSync.RefundStatusSyncSaga.main(RefundStatusSyncSaga.java:17)
Caused by: java.nio.channels.UnsupportedAddressTypeException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:146)
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157)
at java.base/sun.nio.ch.SocketChannelImpl.checkRemote(SocketChannelImpl.java:816)
at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:839)
at io.grpc.netty.shaded.io.netty.util.internal.SocketUtils$3.run(SocketUtils.java:91)
at io.grpc.netty.shaded.io.netty.util.internal.SocketUtils$3.run(SocketUtils.java:88)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:569)
at io.grpc.netty.shaded.io.netty.util.internal.SocketUtils.connect(SocketUtils.java:88)
at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:322)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:248)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1342)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:533)
at io.grpc.netty.shaded.io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:54)
at io.grpc.netty.shaded.io.grpc.netty.WriteBufferingAndExceptionHandler.connect(WriteBufferingAndExceptionHandler.java:157)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:61)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:538)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:833)
The app works fine when run from the IDE. The Temporal server is running as a Docker container on my local machine.
RefundStatusSyncSaga.java:
// gRPC stubs wrapper that talks to the local docker instance of temporal service.
WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
// client that can be used to start and signal workflows
WorkflowClient client = WorkflowClient.newInstance(service);
// worker factory that can be used to create workers for specific task queues
WorkerFactory factory = WorkerFactory.newInstance(client);
// Worker that listens on a task queue and hosts both workflow and activity implementations.
Worker worker = factory.newWorker(TASK_QUEUE);
// Workflows are stateful. So you need a type to create instances.
worker.registerWorkflowImplementationTypes(RefundSyncWorkflowImpl.class);
// Activities are stateless and thread safe. So a shared instance is used.
RefundStatusActivities tripBookingActivities = new RefundStatusActivitiesImpl();
worker.registerActivitiesImplementations(tripBookingActivities);
// Start all workers created by this factory.
factory.start();
System.out.println("Worker started for task queue: " + TASK_QUEUE);
// now we can start running instances of our saga - its state will be persisted
WorkflowOptions options = WorkflowOptions.newBuilder().setTaskQueue(TASK_QUEUE)
.setWorkflowId("1")
.setWorkflowIdReusePolicy( WorkflowIdReusePolicy.WORKFLOW_ID_REUSE_POLICY_REJECT_DUPLICATE)
.setCronSchedule("* * * * *")
.build();
RefundSyncWorkflow refundSyncWorkflow = client.newWorkflowStub(RefundSyncWorkflow.class, options);
refundSyncWorkflow.syncRefundStatus();
The complete code can be seen here -> https://github.com/iftekharkhan09/temporal-sample
I also came across this, and I dug into the jar while debugging. I found that in the check public static InetSocketAddress checkAddress(SocketAddress sa), the SocketAddress becomes /xxx:443 (my original address is xxx:443), and the validation check then fails... I still don't know how to solve it.
Update: one solution can be found here: https://community.temporal.io/t/unable-to-run-temporal-workflow-from-jar/6607
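For reference, the target that ends up in that checkAddress call is whatever the service stubs were configured with. A minimal sketch of configuring it explicitly instead of relying on newLocalServiceStubs() (the endpoint below is a placeholder; per the linked thread, the jar-only failure is typically a packaging/service-loader problem rather than a wrong target, so this alone may not fix it):

import io.temporal.client.WorkflowClient;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

public class StubsConfigExample {
    public static void main(String[] args) {
        // Point the gRPC stubs at an explicit host:port instead of relying on
        // newLocalServiceStubs() and its 127.0.0.1:7233 default.
        WorkflowServiceStubsOptions options = WorkflowServiceStubsOptions.newBuilder()
                .setTarget("127.0.0.1:7233")   // placeholder endpoint
                .build();
        WorkflowServiceStubs service = WorkflowServiceStubs.newServiceStubs(options);
        WorkflowClient client = WorkflowClient.newInstance(service);
        System.out.println("Using namespace: " + client.getOptions().getNamespace());
    }
}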

SCDF: Error handling when pod failed to start

I'm working on a service that calls Spring Cloud Dataflow (SCDF) to spin up a new k8s pod for a Spring Batch job.
Map<String, String> properties = Map.of("testApp.cpu", cpu, "testApp.memory", memory);
LOGGER.info("Create task '{}' with definition '{}'", taskName, taskDefinition);
taskOperations.create(taskName, taskDefinition);
LOGGER.info("Launching task '{}' with properties {} and arguments '{}'", taskName, properties, args);
return taskOperations.launch(taskName, properties, args);
Everything works fine. The problem is that whenever we pull a non-existent image (e.g. due to some connection issue), the pod fails to start and we end up with pending tasks (with no batch jobs created at all).
For example, we will have tasks in the task_execution table (an SCDF table) with an empty end time,
but no related jobs in the batch_job_execution table.
It seems fine at first: since no pod is created, we don't consume any resources. But once the number of "pending jobs" reaches 20, we get the famous error:
Cannot launch task testApp. The maximum concurrent task executions is at its limit [20]
I'm trying to find a way to detect that the pod spin-up has failed (and hence we should mark the task as an error), but to no avail.
Is there a way to detect that the task launch has failed when that task launches a new k8s pod?
UPDATE
Not sure if it is relevant, but we are using SCDF 1.7.3.RELEASE.
Describe output for the failed pod:
Name: podname-lp2nyowgmm
Namespace: my-namespace
Priority: 1000
Priority Class Name: test-cluster-default
Node: some-ip.compute.internal/XX.XXX.XXX.XX
Start Time: Thu, 14 Jan 2021 18:47:52 +0700
Labels: role=spring-app
spring-app-id=podname-lp2nyowgmm
spring-deployment-id=podname-lp2nyowgmm
task-name=podname
Annotations: iam.amazonaws.com/role: arn:aws:iam::XXXXXXXXXXXX:role/svc-XXXX-XXX-XX-XXXX-X-XXX-XXX-XXXXXXXXXXXXXXXXXXXX
kubernetes.io/psp: eks.privileged
Status: Pending
IP: XX.XXX.XXX.XXX
IPs:
IP: XX.XXX.XXX.XXX
Containers:
podname-lp2nyowgmm:
Container ID:
Image: image_host:XXX/mysystem/myapp:notExist
Image ID:
Port: <none>
Host Port: <none>
Args:
--spring.datasource.username=postgres
--spring.cloud.task.name=podname
--spring.datasource.url=jdbc:postgresql://...
--spring.datasource.driverClassName=org.postgresql.Driver
--spring.datasource.password=XXXX
--fileId=XXXXXXXXXXX
--spring.application.name=app-name
--fileName=file_name.csv
...
--spring.cloud.task.executionid=3
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 8Gi
Requests:
cpu: 2
memory: 8Gi
Environment:
ELASTIC_SEARCH_PORT: 80
ELASTIC_SEARCH_PROTOCOL: http
SPRING_RABBITMQ_PORT: ${RABBITMQ_SERVICE_PORT}
ELASTIC_SEARCH_URL: elasticsearch
SPRING_PROFILES_ACTIVE: kubernetes
CLIENT_SECRET: ${CLIENT_SECRET}
SPRING_RABBITMQ_HOST: ${RABBITMQ_SERVICE_HOST}
RELEASE_ENV_NAME: QA_TEST
SPRING_CLOUD_APPLICATION_GUID: ${HOSTNAME}
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxxx(ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-xxxxx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xxxxx
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m22s default-scheduler Successfully assigned my-namespace/podname-lp2nyowgmm to some-ip.compute.internal
Normal Pulling 103s (x4 over 3m21s) kubelet Pulling image "image_host:XXX/mysystem/myapp:notExist"
Warning Failed 102s (x4 over 3m19s) kubelet Failed to pull image "image_host:XXX/mysystem/myapp:notExist": rpc error: code = Unknown desc = Error response from daemon: manifest for image_host:XXX/mysystem/myapp:notExist not found: manifest unknown: manifest unknown
Warning Failed 102s (x4 over 3m19s) kubelet Error: ErrImagePull
Normal BackOff 88s (x6 over 3m19s) kubelet Back-off pulling image "image_host:XXX/mysystem/myapp:notExist"
Warning Failed 73s (x7 over 3m19s) kubelet Error: ImagePullBackOff
1.7.3 is a very old release. We just released 2.7. The original logic used the task execution tables instead of the pod status. If the version you are using is subject to that, then it would explain what you are seeing. I strongly recommend an upgrade.
Thanks for the question. Looking at the source code, we don't include Pending pods when calculating the current number of executing tasks. It may be that something else is going on. 1) Could you run kubectl describe pod on a pod when it's in this state and post the result (status details)? 2) Is the deployer configured to create a job for each task? (false by default).
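For reference, one way to surface this kind of failure on the launcher side, independently of SCDF, is to inspect the launched pod's container statuses and treat image-pull errors as a failed launch. A minimal sketch using the fabric8 kubernetes-client; the namespace, pod name and the choice of ErrImagePull/ImagePullBackOff as fatal reasons are assumptions, not SCDF behaviour:

import io.fabric8.kubernetes.api.model.ContainerStatus;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class PodLaunchCheck {
    // Returns true if any container in the pod is stuck waiting on an image-pull error.
    static boolean imagePullFailed(KubernetesClient client, String namespace, String podName) {
        Pod pod = client.pods().inNamespace(namespace).withName(podName).get();
        if (pod == null || pod.getStatus() == null) {
            return false;
        }
        for (ContainerStatus cs : pod.getStatus().getContainerStatuses()) {
            if (cs.getState() != null && cs.getState().getWaiting() != null) {
                String reason = cs.getState().getWaiting().getReason();
                if ("ErrImagePull".equals(reason) || "ImagePullBackOff".equals(reason)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Placeholder names taken from the describe output above.
            System.out.println(imagePullFailed(client, "my-namespace", "podname-lp2nyowgmm"));
        }
    }
}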

Exception in thread "reader" java.lang.NoClassDefFoundError: org/bouncycastle/crypto/ec/CustomNamedCurves

I've used the 'net.schmizz.sshj.SSHClient' class to connect to a server.
Below is my code:
import java.io.IOException;

import net.schmizz.sshj.SSHClient;

public class ConnectToServer {
    String hostName = "10.250.176.6";
    int port = 22;

    public ConnectToServer(String hostName, int port) {
        this.hostName = hostName;
        this.port = port;
    }

    public void ssh() {
        SSHClient ssh = new SSHClient();
        String cmd = "ipconfig";
        try {
            ssh.connect(this.hostName, this.port);
            ssh.isConnected();
            // Note: this runs the command locally, not over the SSH connection.
            final Process process = Runtime.getRuntime().exec(cmd);
            ssh.disconnect();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
However, I get the following error: "Exception in thread "reader" java.lang.NoClassDefFoundError: org/bouncycastle/crypto/ec/CustomNamedCurves".
I added bcprov-jdk15on-1.49 and bouncycastle.jar to my classpath.
Please help me resolve this error.
Complete exception:
08:46:05.526 [main] DEBUG net.schmizz.concurrent.Promise - Awaiting <<kex done>>
08:46:05.528 [reader] DEBUG n.s.sshj.transport.KeyExchanger - Received SSH_MSG_KEXINIT
08:46:05.528 [reader] DEBUG n.s.sshj.transport.KeyExchanger - Negotiated algorithms: [ kex=curve25519-sha256#libssh.org; sig=ecdsa-sha2-nistp256; c2sCipher=aes128-ctr; s2cCipher=aes128-ctr; c2sMAC=hmac-sha1; s2cMAC=hmac-sha1; c2sComp=none; s2cComp=none ]
Exception in thread "reader" java.lang.NoClassDefFoundError: org/bouncycastle/crypto/ec/CustomNamedCurves
at net.schmizz.sshj.transport.kex.Curve25519DH.getCurve25519Params(Curve25519DH.java:60)
at net.schmizz.sshj.transport.kex.Curve25519SHA256.initDH(Curve25519SHA256.java:44)
at net.schmizz.sshj.transport.kex.AbstractDHG.init(AbstractDHG.java:46)
at net.schmizz.sshj.transport.KeyExchanger.gotKexInit(KeyExchanger.java:236)
at net.schmizz.sshj.transport.KeyExchanger.handle(KeyExchanger.java:356)
at net.schmizz.sshj.transport.TransportImpl.handle(TransportImpl.java:503)
at net.schmizz.sshj.transport.Decoder.decode(Decoder.java:102)
at net.schmizz.sshj.transport.Decoder.received(Decoder.java:170)
at net.schmizz.sshj.transport.Reader.run(Reader.java:59)
Caused by: java.lang.ClassNotFoundException: org.bouncycastle.crypto.ec.CustomNamedCurves
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
Your jar is probably missing its dependencies (or some of them). If it isn't already a Maven project, I suggest you switch to Maven.
A nice tutorial can be found here: Maven in 5 Minutes
I think the SSH client is missing org.bouncycastle.crypto as a library (dependency). A quick way to fix this is to get the jar for it too.
This issue might occur due to the use of different versions of BouncyCastle jars in the project.
The solution is to:
1. Find the different versions of BouncyCastle jars used, directly or indirectly, in the project.
2. Use one version of the BouncyCastle jars across the whole project.
3. Make any changes required by the version you have chosen, as code written against one version of a BouncyCastle jar may not work with another.
4. Clean your project or rebuild it.
If the problem is not solved, please post the complete exception so that we have more clarity.
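To narrow this down, it can help to check which (if any) bcprov jar the failing class is actually loaded from. A small diagnostic sketch (run it with the same classpath as the failing application; the class name is the one from the stack trace):

import java.security.CodeSource;

public class BcClasspathCheck {
    public static void main(String[] args) {
        try {
            // The class the SSHJ key exchange is failing to load.
            Class<?> c = Class.forName("org.bouncycastle.crypto.ec.CustomNamedCurves");
            CodeSource src = c.getProtectionDomain().getCodeSource();
            System.out.println("Loaded from: " + (src != null ? src.getLocation() : "bootstrap/unknown"));
        } catch (ClassNotFoundException e) {
            // Matches the NoClassDefFoundError cause above: the class is not on the classpath.
            System.out.println("org.bouncycastle.crypto.ec.CustomNamedCurves is not on the classpath");
        }
    }
}

If it reports the class as missing even though a bcprov jar is on the classpath, the bcprov version in use may simply be too old to contain that class.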

Hibernate connecting to DB2 at application startup on WAS Liberty on CICS

We're running a simple webapp on WebSphere Liberty that uses Hibernate as its persistence provider (included as a library in the WAR file).
When the application is starting up, Hibernate is initialized and opens a connection to DB2 to issue some SQL statements. However, this fails when running on CICS and using the JDBC Type 2 Driver DataSource. The following messages are logged (some extra line breaks added for readability):
WARN org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator -
HHH000342: Could not obtain connection to query metadata : [jcc][50053][12310][4.19.56]
T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
...
ERROR org.hibernate.hql.spi.id.IdTableHelper - Unable obtain JDBC Connection
com.ibm.db2.jcc.am.SqlException: [jcc][50053][12310][4.19.56] T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.t2zos.T2zosConnection.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(Unknown Source) ~[db2jcc4.jar:?]
at com.ibm.cics.wlp.jdbc.internal.CICSDataSource.getConnection(CICSDataSource.java:176) ~[?:?]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl$3.obtainConnection(SessionFactoryImpl.java:643) ~[our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.IdTableHelper.executeIdTableCreationStatements(IdTableHelper.java:67) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:125) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:42) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.AbstractMultiTableBulkIdStrategyImpl.prepare(AbstractMultiTableBulkIdStrategyImpl.java:88) [our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:451) [our-app.war:5.1.0.Final]
My current understanding is that when running on CICS and using JDBC Type 2 drivers, only some threads are capable of opening a DB2 connection: the application threads (the ones processing HTTP requests) as well as worker threads servicing the CICSExecutorService.
The current solution is to:
1. Disable the JDBC metadata lookup in JdbcEnvironmentInitiator by setting the hibernate.temp.use_jdbc_metadata_defaults property to false.
2. Wrap execution of IdTableHelper#executeIdTableCreationStatements in a Runnable and submit it to the CICSExecutorService.
Would you consider this solution to be sufficient and suitable for production? Or maybe you use some different approach?
Versions used:
CICS Transaction Server for z/OS 5.3.0
WebSphere Application Server 8.5.5.8
Hibernate 5.1.0
Update: Just to clarify, once our application is started, it can query DB2 with no problems (when servicing HTTP requests). The problem is only related to startup.
CICS TS v5.3 support for the JPA feature in Liberty was recently made available in a service-refresh (July 2016). Prior to that update, attempting to run JPA in applications would result in very similar problems to those you describe.
Although you are running Hibernate on a CICS-enabled thread, that thread does not have the API environment (which would allow the type 2 JDBC call to succeed). New detection logic was developed specifically (but not exclusively) for use with the DB2 JDBC type 2 driver and JPA. This update was shipped in a recent service refresh and might cure the issues you are seeing.
Try applying:
http://www-01.ibm.com/support/docview.wss?crawler=1&uid=swg1PI58375
The description says it is for 'Standard-mode Liberty' support, but it contains other developments as outlined above.
The following solution was tested and works OK.
The idea is to execute the SQL/DDL statements using CICSExecutorService#runAsCICS. The following extension is registered via the hibernate.hql.bulk_id_strategy property.
package org.hibernate.hql.spi.id.global;

import java.util.concurrent.*;

import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.engine.jdbc.connections.spi.JdbcConnectionAccess;
import org.hibernate.engine.jdbc.spi.JdbcServices;
import org.springframework.util.ClassUtils;

import com.ibm.cics.server.*;

public class CicsAwareGlobalTemporaryTableBulkIdStrategy extends GlobalTemporaryTableBulkIdStrategy {

    @Override
    protected void finishPreparation(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess, MetadataImplementor metadata, PreparationContextImpl context) {
        execute(() -> super.finishPreparation(jdbcServices, connectionAccess, metadata, context));
    }

    @Override
    public void release(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess) {
        execute(() -> super.release(jdbcServices, connectionAccess));
    }

    private void execute(Runnable runnable) {
        if (isCics() && IsCICS.getApiStatus() == IsCICS.CICS_REGION_BUT_API_DISALLOWED) {
            RunnableFuture<Void> task = new FutureTask<>(runnable, null);
            CICSExecutorService.runAsCICS(task);
            try {
                task.get();
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException("Failed to execute in a CICS API-enabled thread. " + e.getMessage(), e);
            }
        } else {
            runnable.run();
        }
    }

    private boolean isCics() {
        return ClassUtils.isPresent("com.ibm.cics.server.CICSExecutorService", null);
    }
}
Note that the newer JCICS API version has an overload of the runAsCICS method accepting a Callable, which might be useful to simplify the CICS branch of the execute method to something like this:
CICSExecutorService.runAsCICS(() -> { runnable.run(); return null; }).get();
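For completeness, a minimal sketch of registering the strategy programmatically via the hibernate.hql.bulk_id_strategy property mentioned above; the persistence unit name is a placeholder, and the same properties can equally be set in persistence.xml:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class HibernateBootstrap {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Register the CICS-aware strategy shown above.
        props.put("hibernate.hql.bulk_id_strategy",
                "org.hibernate.hql.spi.id.global.CicsAwareGlobalTemporaryTableBulkIdStrategy");
        // Part of the original workaround: skip the JDBC metadata lookup at startup.
        props.put("hibernate.temp.use_jdbc_metadata_defaults", "false");
        // "my-unit" is a placeholder persistence unit name.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);
        emf.close();
    }
}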
A few alternatives tried:
Wrapping just the connection acquisition action (org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl#getConnection) did not work as the connection was closed already when it was used in the main thread.
Wrapping the whole application startup (org.springframework.web.context.ContextLoaderListener#contextInitialized) led to classloading issues.
Edit: Eventually we went with a custom Hibernate MultiTableBulkIdStrategy implementation that does not run any SQL/DDL on startup (see the project page on GitHub).

Jetty + OSGi results in strange cache errors

I'm trying to run an embedded Jetty instance from an OSGi server.
When the server starts I can see the following in the log:
Started o.e.j.w.WebAppContext#1f437060{/browser,bundle://201.0:24/browser,AVAILABLE}
The first request is successful, but later requests result in a stack trace, e.g.
WARN | ResourceCache | Could not load bundle://201.0:24/browser/index.html true 1415275444922 java.nio.HeapByteBuffer[pos=0 lim=9 cap=9] java.nio.HeapByteBuffer[pos=0 lim=29 cap=29]
WARN | ResourceCache |
java.io.FileNotFoundException: \browser\index.html (Det går inte att hitta sökvägen, i.e. the path cannot be found)
at java.io.RandomAccessFile.open(Native Method)[:1.8.0_20]
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)[:1.8.0_20]
at org.eclipse.jetty.util.BufferUtil.readFrom(BufferUtil.java:408)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.ResourceCache.getIndirectBuffer(ResourceCache.java:296)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.ResourceCache$Content.getIndirectBuffer(ResourceCache.java:478)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.ResourceCache$Content.getInputStream(ResourceCache.java:525)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.HttpOutput.sendContent(HttpOutput.java:427)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.HttpOutput.sendContent(HttpOutput.java:345)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.DefaultServlet.sendData(DefaultServlet.java:887)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:493)[201:com.test.mybundle:1.0.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)[201:com.test.mybundle:1.0.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:698)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:505)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:138)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:582)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:213)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1096)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:432)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1030)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:261)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:101)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:546)[201:com.test.mybundle:1.0.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)[201:com.test.mybundle:1.0.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:698)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:505)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:138)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:564)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:213)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1096)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:432)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1030)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.Server.handle(Server.java:445)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:268)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:229)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:601)[201:com.test.mybundle:1.0.0]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:532)[201:com.test.mybundle:1.0.0]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_20]
My guess is that the path prefix bundle:// is not supported. Could this be the problem, and if so, how could this be resolved?
ServiceMix already has a built-in web feature. You just need to install the war or http feature, which will also install a Jetty server. That's all you need to do; there is nothing extra to install in a ServiceMix/Karaf container.
