How to embed broker? Missing broker.xml - java

Currently I am writing a class which shall start and configure an embedded JMS server and then mediate between producers and consumers.
I found this reference, and it says that it needs a broker.xml, but it doesn't supply any example. Can somebody tell me what I need to put into the file?
And also: will it work to start the BrokerServer as I imagine?
EDIT:
Now I use this code:
...
SecurityConfiguration securityConfig = new SecurityConfiguration();
securityConfig.addUser("guest", "guest");
securityConfig.addRole("guest", "guest");
securityConfig.setDefaultUser("guest");
ActiveMQJAASSecurityManager securityManager = new ActiveMQJAASSecurityManager(InVMLoginModule.class.getName(), securityConfig);

// Step 2. Create and start embedded broker.
ActiveMQServer server = null;
try {
    server = ActiveMQServers.newActiveMQServer("broker.xml", null, securityManager);
    server.start();
    System.out.println("Started Embedded Broker");
} catch (Exception e) {
    e.printStackTrace();
}
...
But I receive the error:
java.net.MalformedURLException: no protocol: broker.xml
The file is right next to the class, though. Where does the file have to be?
broker.xml
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:activemq" xsi:schemaLocation="urn:activemq /schema/artemis-server.xsd">
    <core xmlns="urn:activemq:core">
        <persistence-enabled>false</persistence-enabled>
        <acceptors>
            <acceptor name="in-vm">vm://0</acceptor>
        </acceptors>
        <security-settings>
            <security-setting match="#">
                <permission type="createAddress" roles="guest"/>
                <permission type="createDurableQueue" roles="guest"/>
                <permission type="deleteDurableQueue" roles="guest"/>
                <permission type="createNonDurableQueue" roles="guest"/>
                <permission type="deleteNonDurableQueue" roles="guest"/>
                <permission type="consume" roles="guest"/>
                <permission type="send" roles="guest"/>
            </security-setting>
        </security-settings>
    </core>
</configuration>

The documentation you cited actually covers two different ways to embed an instance of ActiveMQ Artemis. The first way uses a broker.xml on your classpath. The second way uses only the configuration API (i.e. programmatic configuration, no XML at all).
ActiveMQ Artemis ships with many examples in the examples directory demonstrating all kinds of ways to configure the broker via broker.xml. There are even two examples demonstrating the two ways to embed the broker discussed in the documentation. Check out examples/features/standard/embedded-simple for a demonstration of how to embed a broker configured with a broker.xml on the classpath, and examples/features/standard/embedded for a demonstration of how to embed a broker and configure it programmatically.
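As for the MalformedURLException itself: the String overload of ActiveMQServers.newActiveMQServer expects a URL, so a bare file name won't parse. Resolving the file from the classpath and passing its URL string should work, assuming broker.xml is packaged on the classpath. Both options are shown in this minimal sketch (class name and structure are illustrative, not the one true embedding recipe):

import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.ActiveMQServer;
import org.apache.activemq.artemis.core.server.ActiveMQServers;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        // Option 1: resolve broker.xml from the classpath and hand over a real URL string
        // (assumes the file is on the classpath; getResource returns null otherwise).
        String url = EmbeddedBroker.class.getClassLoader().getResource("broker.xml").toExternalForm();
        // server = ActiveMQServers.newActiveMQServer(url, null, securityManager);

        // Option 2: no XML at all - the programmatic equivalent of the broker.xml above.
        Configuration config = new ConfigurationImpl()
                .setPersistenceEnabled(false)
                .setSecurityEnabled(false) // quick local test; wire in a security manager for real use
                .addAcceptorConfiguration("in-vm", "vm://0");
        ActiveMQServer server = ActiveMQServers.newActiveMQServer(config);
        server.start();
        System.out.println("Started Embedded Broker");
    }
}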

Related

Why username and password are not respected in activemq createConnection

I created one and only one broker in ActiveMQ, and I am using the following code to produce and consume messages. I took this code from here.
public boolean runExample() throws Exception {
    Connection connection = null;
    InitialContext initialContext = null;
    try {
        Properties properties = new Properties();
        properties.put("java.naming.factory.initial", "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
        properties.put("connectionFactory.ConnectionFactory", "tcp://localhost:61616");
        properties.put("queue.queue/exampleQueue", "exampleQueue");
        initialContext = new InitialContext(properties);

        Queue queue = (Queue) initialContext.lookup("queue/exampleQueue");
        ConnectionFactory connectionFactory = (ConnectionFactory) initialContext.lookup("ConnectionFactory");
        connection = connectionFactory.createConnection("admin", "admin"); // brokerone
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        MessageProducer producer = session.createProducer(queue);
        TextMessage message = session.createTextMessage("This is a text message");
        System.out.println("Sent message: " + message.getText());
        producer.send(message);

        MessageConsumer messageConsumer = session.createConsumer(queue);
        connection.start();
        TextMessage messageReceived = (TextMessage) messageConsumer.receive(5000);
        System.out.println("Received message: " + messageReceived.getText());
        return true;
    } finally {
        if (initialContext != null) {
            initialContext.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}
Now, while creating the connection, if I put any random string as the password in the connectionFactory.createConnection method, it still creates the connection, and I can see the produced messages in the broker console. I looked up the documentation and here for more explanation, but it also just says that the strings passed to the createConnection method are the username and password.
So now, my question is: what is the purpose of the username and password if they are not used while creating the connection?
Edit1:
broker.xml (with the bulk of the commented lines removed)
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:activemq:core ">
        <name>0.0.0.0</name>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>NIO</journal-type>
        <paging-directory>data/paging</paging-directory>
        <bindings-directory>data/bindings</bindings-directory>
        <journal-directory>data/journal</journal-directory>
        <large-messages-directory>data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>10</journal-pool-files>
        <journal-device-block-size>4096</journal-device-block-size>
        <journal-file-size>10M</journal-file-size>
        <journal-buffer-timeout>1192000</journal-buffer-timeout>
        <!-- When using ASYNCIO, this will determine the writing queue depth for libaio. -->
        <journal-max-io>1</journal-max-io>
        <!-- how often we are looking for how many bytes are being used on the disk in ms -->
        <disk-scan-period>5000</disk-scan-period>
        <!-- once the disk hits this limit the system will block, or close the connection in certain protocols that won't support flow control. -->
        <max-disk-usage>90</max-disk-usage>
        <!-- should the broker detect dead locks and other issues -->
        <critical-analyzer>true</critical-analyzer>
        <critical-analyzer-timeout>120000</critical-analyzer-timeout>
        <critical-analyzer-check-period>60000</critical-analyzer-check-period>
        <critical-analyzer-policy>HALT</critical-analyzer-policy>
        <page-sync-timeout>1192000</page-sync-timeout>
        <acceptors>
            <!-- Acceptor for every supported protocol -->
            <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
            <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic. -->
            <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
            <!-- STOMP Acceptor. -->
            <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
            <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
            <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
            <!-- MQTT Acceptor -->
            <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
        </acceptors>
        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"/>
                <permission type="deleteNonDurableQueue" roles="amq"/>
                <permission type="createDurableQueue" roles="amq"/>
                <permission type="deleteDurableQueue" roles="amq"/>
                <permission type="createAddress" roles="amq"/>
                <permission type="deleteAddress" roles="amq"/>
                <permission type="consume" roles="amq"/>
                <permission type="browse" roles="amq"/>
                <permission type="send" roles="amq"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="amq"/>
            </security-setting>
        </security-settings>
        <address-settings>
            <!-- if you define auto-create on certain queues, management has to be auto-create -->
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <!-- default for catch all -->
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
        </address-settings>
        <addresses>
            <address name="DLQ">
                <anycast>
                    <queue name="DLQ" />
                </anycast>
            </address>
            <address name="ExpiryQueue">
                <anycast>
                    <queue name="ExpiryQueue" />
                </anycast>
            </address>
        </addresses>
    </core>
</configuration>
bootstrap.xml
<broker xmlns="http://activemq.org/schema">
    <jaas-security domain="activemq"/>
    <!-- artemis.URI.instance is parsed from artemis.instance by the CLI startup.
         This is to avoid situations where you could have spaces or special characters on this URI -->
    <server configuration="file:/C:/dev/artemis/apache-artemis-2.13.0/bin/brokerone/etc//broker.xml"/>
    <!-- The web server is only bound to localhost by default -->
    <web bind="http://localhost:8161" path="web">
        <app url="activemq-branding" war="activemq-branding.war"/>
        <app url="artemis-plugin" war="artemis-plugin.war"/>
        <app url="console" war="console.war"/>
    </web>
</broker>
login.config
activemq {
    org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient
        debug=false
        reload=true
        org.apache.activemq.jaas.properties.user="artemis-users.properties"
        org.apache.activemq.jaas.properties.role="artemis-roles.properties";

    org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient
        debug=false
        org.apache.activemq.jaas.guest.user="admin"
        org.apache.activemq.jaas.guest.role="amq";
};
The username and password are used when creating the connection. The behavior you're observing, where it doesn't matter what credentials you pass, is due to your configuration. You've specifically configured the broker to allow "guest" users (i.e. users with bad credentials or no credentials) via your login.config:
org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient
    debug=false
    org.apache.activemq.jaas.guest.user="admin"
    org.apache.activemq.jaas.guest.role="amq";
You can read more about this login module in the documentation.
If you don't want to allow "guest" users then you can change login.config to be:
activemq {
    org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
        debug=false
        reload=true
        org.apache.activemq.jaas.properties.user="artemis-users.properties"
        org.apache.activemq.jaas.properties.role="artemis-roles.properties";
};
When the client creates the session, the broker tries to authenticate the client with the passed username and password.
Your login.config file contains 2 login modules, PropertiesLoginModule and GuestLoginModule. If the PropertiesLoginModule fails the login because of a wrong username/password, the GuestLoginModule will log the user in as admin with the role amq, as defined in your login.config file.
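A quick way to verify the behavior once login.config contains only the required PropertiesLoginModule (a minimal sketch assuming a JMS 2.0 Artemis client, reusing the connectionFactory from the question's code; needs import javax.jms.JMSSecurityException):

try {
    // With the guest fallback gone, bad credentials should now be rejected.
    Connection bad = connectionFactory.createConnection("admin", "not-the-real-password");
    bad.createSession(false, Session.AUTO_ACKNOWLEDGE); // authentication happens by the time the session is created
    bad.close();
    System.out.println("Unexpectedly authenticated");
} catch (JMSSecurityException expected) {
    System.out.println("Rejected as expected: " + expected.getMessage());
}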
Considering a standard ActiveMQ installation (5.16.3), when you extract the archive you will find a conf folder.
Starting from there as a base for a Docker container, I had a hard time configuring ActiveMQ security, as there are multiple files involved, some of which did not work as expected.
I assume I did not configure everything correctly, as changing login.config had basically no effect.
The only way I got security working was to:
change jetty-realm.properties for admin web page access
change activemq.xml for the broker security config
(The solution for activemq.xml was found here.)
Example config snippet for broker security:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
    <plugins>
        <simpleAuthenticationPlugin anonymousAccessAllowed="false">
            <users>
                <authenticationUser username="admin" password="admin1234!" groups="admins,senders,receivers"/>
                <!--
                <authenticationUser username="user" password="password" groups="users"/>
                <authenticationUser username="guest" password="password" groups="guests"/>
                -->
            </users>
        </simpleAuthenticationPlugin>
        <authorizationPlugin>
            <map>
                <authorizationMap>
                    <authorizationEntries>
                        <authorizationEntry queue=">" write="senders" read="receivers" admin="admins" />
                        <authorizationEntry topic="ActiveMQ.Advisory.>" write="senders" read="receivers" admin="admins,senders,receivers" />
                    </authorizationEntries>
                </authorizationMap>
            </map>
        </authorizationPlugin>
    </plugins>
    ...
</broker>
This finally enabled security for queue access.
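To sanity-check a setup like this, the quickest test is a client connection attempt with and without valid credentials (a sketch using the ActiveMQ 5.x JMS client; the class name is illustrative, and the URL and credentials are the ones from the snippet above):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AuthSmokeTest {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        Connection ok = factory.createConnection("admin", "admin1234!");
        ok.start(); // succeeds: the user is listed in simpleAuthenticationPlugin
        System.out.println("Authenticated");
        ok.close();

        try {
            Connection bad = factory.createConnection("admin", "wrong");
            bad.start(); // with anonymousAccessAllowed="false" this should be refused
            bad.close();
        } catch (JMSException expected) {
            System.out.println("Rejected as expected: " + expected.getMessage());
        }
    }
}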

ActiveMQ compositeTopic's selective persistence

I'm using ActiveMQ's compositeTopic to fan out messages to multiple destinations, like this:
<broker>
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>
    ...
    <destinationInterceptors>
        <virtualDestinationInterceptor>
            <virtualDestinations>
                <compositeTopic name="fan-out" forwardOnly="true">
                    <forwardTo>
                        <queue physicalName="persistent"/>
                        <queue physicalName="ephemeral"/>
                    </forwardTo>
                </compositeTopic>
            </virtualDestinations>
        </virtualDestinationInterceptor>
    </destinationInterceptors>
</broker>
So, I want to forward messages to both the persistent and ephemeral queues at the same time. As you might guess from their names, I want messages in the persistent queue to be persistent, and I do not need persistence for the ephemeral queue. The problem is that ActiveMQ doesn't have a concept of persistence on a per-destination basis, does it? One can set persistence for a whole broker, or use persistent / non-persistent delivery modes. So, the question is: how can I disable persistence for the ephemeral queue in this case?
So, the solution that seems to work is to use Apache Camel with ActiveMQ. Just add a route that drains the ephemeral queue into another queue, setting the TTL / persistence mode in the process:
<broker>
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>
    ...
    <destinationInterceptors>
        <virtualDestinationInterceptor>
            <virtualDestinations>
                <compositeTopic name="fan-out" forwardOnly="true">
                    <forwardTo>
                        <queue physicalName="persistent"/>
                        <queue physicalName="ephemeral"/>
                    </forwardTo>
                </compositeTopic>
            </virtualDestinations>
        </virtualDestinationInterceptor>
    </destinationInterceptors>
</broker>

<camelContext xmlns="http://camel.apache.org/schema/spring" id="camel">
    <route>
        <from uri="activemq:queue:ephemeral"/>
        <to uri="activemq:queue:ephemeral-backend?timeToLive=10000"/>
    </route>
</camelContext>
timeToLive is the message's TTL in milliseconds. In the config above, messages are still persistent: after the TTL expires they are moved to the DLQ. If you want to throw them away instead, the config should also set deliveryPersistent to false (note that the & separating the two endpoint options has to be escaped as &amp; in XML):
<camelContext xmlns="http://camel.apache.org/schema/spring" id="camel">
    <route>
        <from uri="activemq:queue:ephemeral" />
        <to uri="activemq:queue:ephemeral-backend?timeToLive=10000&amp;deliveryPersistent=false" />
    </route>
</camelContext>
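For reference, the same route in Camel's Java DSL, which avoids the XML escaping issue entirely (a sketch; the class name is illustrative and the endpoint URIs match the XML above):

import org.apache.camel.builder.RouteBuilder;

public class EphemeralDrainRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Drain the ephemeral queue into the backend queue: 10 s TTL, non-persistent delivery.
        from("activemq:queue:ephemeral")
            .to("activemq:queue:ephemeral-backend?timeToLive=10000&deliveryPersistent=false");
    }
}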

Tomcat Cluster configuration issue: two nodes do not synchronize with each other

I want to set up a simple Tomcat cluster of two nodes.
I have two VMs which are members of the same local network and can see each other.
In both Tomcats, the cluster config section (in server.xml) is similar:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService" address="228.0.0.4" port="41166" frequency="500" dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" address="auto" port="4000" autoBind="100" selectorTimeout="5000" maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Valve className="org.jasig.cas.client.tomcat.v7.StaticUriLogoutValve" logoutUri="/j_spring_cas_security_logout"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
When I start both Tomcats, my application is up and running, but the session is not replicated (I have to log in twice).
Any ideas what I am doing wrong?
P.S.: I don't have any load balancer yet. It looks like a Tomcat cluster uses broadcasting and doesn't need any central node. Please correct me here if I'm wrong.
The Tomcat documentation states that you do indeed need a load balancer configured for sticky sessions. Please check the last bullet point: http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html#Cluster_Basics
It looks like the multicast "broadcasting" only handles membership discovery; it is not what actually replicates sessions across the nodes in a cluster.
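Another bullet from that same checklist worth double-checking: the web application's web.xml must contain the <distributable/> element (or the <Context> must be marked distributable="true"), otherwise Tomcat will not replicate sessions for that webapp at all.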

Service reference in OSGi/blueprint not working properly

I currently have two OSGi bundles (bundle1 and bundle2), both exposing services through a blueprint in an EBA. In bundle2's blueprint.xml I want to reference a service from bundle1 and inject it into the BuildService (code below), as BuildService will be used to call TicketService. This, however, results in a timeout exception (also below). It seems like the BuildService never gets registered with OSGi. How would I make something like this work?
blueprint.xml for bundle1:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:bptx="http://aries.apache.org/xmlns/transactions/v1.0.0">
    <bean id="TicketServiceBean" class="com.example.b2.impl.TicketServiceImpl">
        <bptx:transaction value="Required" method="*" />
    </bean>
    <service ranking="0" id="TicketService" interface="com.example.b2.service.TicketService" ref="TicketServiceBean">
        <service-properties>
            <entry key="service.exported.interfaces" value="*" />
        </service-properties>
    </service>
</blueprint>
blueprint.xml for bundle2
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="BuildServiceImplBean"
          class="com.example.b1.impl.BuildServiceImpl"
          activation="eager">
        <property name="ticketService" ref="TicketServiceRef" />
    </bean>
    <service id="BuildService"
             ref="BuildServiceImplBean"
             interface="com.example.b1.service.BuildService"
             activation="eager">
        <service-properties>
            <entry key="service.exported.interfaces" value="*" />
        </service-properties>
    </service>
    <reference id="TicketServiceRef"
               interface="com.example.b2.service.TicketService"
               availability="mandatory"
               activation="eager" />
</blueprint>
Implementation of the BuildService:
public class BuildServiceImpl implements BuildService {

    private TicketService ticketService;

    @Override
    public TicketBuildResponse ticketBuild(TicketBuildRequest ticketBuildRequest) throws BuildServiceException {
        // do stuff here
        return null; // placeholder so the skeleton compiles
    }

    public TicketService getTicketService() {
        return ticketService;
    }

    public void setTicketService(TicketService ticketService) {
        this.ticketService = ticketService;
    }
}
When starting up the application server (Websphere) I get the following exception:
BlueprintCont E org.apache.aries.blueprint.container.BlueprintContainerImpl$1 run Unable to start blueprint container for bundle com.example.b1.module due to unresolved dependencies [(objectClass=com.example.b2.service.TicketService)]
java.util.concurrent.TimeoutException
    at org.apache.aries.blueprint.container.BlueprintContainerImpl$1.run(BlueprintContainerImpl.java:273)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:453)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:315)
    at java.util.concurrent.FutureTask.run(FutureTask.java:150)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:736)
Here is the solution: the OSGi applications runtime treats remote services differently from local ones because of the different default invocation semantics (local pass-by-reference versus remote pass-by-value). To prevent an application from accidentally calling an exported service that is only designed for pass-by-value calls, such a service is hidden from local lookups.
The solution to this is to export the same bean twice: once for remote calls, and a second time for local ones. In other words, you add another <service /> element with the same configuration, but without the service.exported.interfaces property.
<service ranking="0" id="TicketServiceExport" interface="com.example.b2.service.TicketService" ref="TicketServiceBean">
    <service-properties>
        <entry key="service.exported.interfaces" value="*" />
    </service-properties>
</service>
<service ranking="0" id="TicketService" interface="com.example.b2.service.TicketService" ref="TicketServiceBean"/>
There is actually also an OSGi console in WebSphere; it can be found under [local websphere installation]/profiles/[profileName]/bin/osgiApplicationConsole.bat. Once launched, help() gives you a list of commands. To see the services imported from SCA, you first connect to your application (e.g. connect(2), where the number of the application is given in the results of the list() command). You can then run services("(service.imported=true)") to see the service proxies that have been added by SCA. The command services() will list all the services in the application.

Why do I get a bad_certificate error when using spring and CXF

I am using CXF generated code to connect to a remote web service over SSL and through a corporate proxy. The code works fine when the connection is established through the Java API and all SSL settings are set as system properties as follows.
System.setProperties("https.proxyHost", "myproxy.com");
System.setProperties("https.proxyPort", "8001");
System.setProperties("javax.net.ssl.keyStoreType", "pkcs12");
System.setProperties("javax.net.ssl.keyStore", "C:/keystore.p12");
System.setProperties("javax.net.ssl.keyStorePassword", "keypassword");
System.setProperties("javax.net.ssl.trustStore", "C:/cacerts");
System.setProperties("javax.net.ssl.trustStorePassword", "capassword");
MyWebService_Service ss = new MyWebService_Service(wsdlUrl, SERVICE_NAME);
MyWebService service = ss.getMyWebServicePort();
Using this code I can now call the service methods and everything works as expected. My problems occur when I try to set up the same configuration with Spring, which is our preferred approach since we are already using Spring extensively.
My Spring config:
<!-- relevant snippet from spring context -->
<import resource="classpath:META-INF/cxf/cxf.xml" />
<import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" />
<import resource="classpath:META-INF/cxf/cxf-extension-http.xml" />

<jaxws:client id="webservice" serviceName="myns:MyWebService" endpointName="myns:MyWebServicePort"
              address="https://bigserver.com:5012/blah/TheWebService"
              serviceClass="com.mycomp.MyWebService" />

<http:conduit name="{myns}MyWebServicePort.http-conduit">
    <http:tlsClientParameters disableCNCheck="true" secureSocketProtocol="TLS">
        <sec:trustManagers>
            <sec:keyStore type="JKS" password="capassword" file="c:/cacerts" />
        </sec:trustManagers>
        <sec:keyManagers>
            <sec:keyStore type="pkcs12" password="keypassword" file="c:/keystore.p12" />
        </sec:keyManagers>
    </http:tlsClientParameters>
    <http:client ProxyServer="myproxy.com" ProxyServerPort="8001" />
</http:conduit>
In both cases, the web service client is deployed within a web application. In the second case, access to the web service results in a
javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
Edit: I am using CXF version 2.2.
Have you tried adding the following property to your client parameters?
useHttpsURLConnectionDefaultHostnameVerifier="false"
It looks like this:
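Presumably the attribute goes on the tlsClientParameters element, i.e. something like this (everything else as in the config above):

<http:tlsClientParameters disableCNCheck="true" secureSocketProtocol="TLS"
                          useHttpsURLConnectionDefaultHostnameVerifier="false">
    ...
</http:tlsClientParameters>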
