What prevents Java from verifying signed jars with multiple signature algorithms

Quick background: We release a webstart application, which includes our own application jars and numerous third-party jars. Webstart requires that all distributed jars referred to by the jnlp file be signed by a single certificate. We therefore sign all jars (our jars and the third-party jars) using a self-signed certificate. Some third-party jars are already signed by the party which produced them, but we just sign them again, and this works fine. Until now.
Problem: We recently moved from Java 6 to Java 7, and suddenly webstart is refusing to load some jars, complaining: "Invalid SHA1 signature file digest". This only happens for some jars and not others, and the common thread among the jars that fail appears to be having multiple signatures.
After searching around on S.O. and the internet, it appears that the default digest algorithm used by Java's jarsigner changed between Java 6 and Java 7, from SHA1 to SHA256, and various people recommend using "jarsigner -digestalg SHA1" to work around verification issues. I tried that, and sure enough our multiply-signed jars now verify. So this appears to be a workaround for our issue.
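For reference, the workaround invocation looks like this (keystore path, jar name, and alias are placeholders):
jarsigner -digestalg SHA1 -keystore our-keystore.jks third-party-lib.jar our-alias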
From what I can gather, it appears that the third-party signature is a SHA1 signature, and we were signing with the default, SHA256, resulting in a mix of signatures. When I force SHA1 using the '-digestalg' switch, we have two signatures of the same type, and verification now works. So it seems the problem is caused by having multiple signatures with different algorithms? Or is there some other factor I'm missing?
Questions:
Why does it fail to verify with SHA1 + SHA256, but verifies with SHA1 + SHA1? Is there a technical reason? A security policy reason? Why can't it verify that both signatures are correct?
Is there any drawback to us using (continuing to use) SHA1 instead of the now-default SHA256?

Rather than re-signing the third-party jars yourself, you can create a separate JNLP file for each third-party signer that refers to the relevant jar files, then have your main JNLP depend on these using the <extension> element. The restriction that all JAR files must be signed by the same signer only applies within one JNLP; each extension can have a different signer.
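For illustration, a rough sketch of the two JNLP files (codebase, titles, and jar names are all hypothetical):
<!-- main.jnlp, signed with our certificate -->
<jnlp spec="1.0+" codebase="http://example.com/app" href="main.jnlp">
  <information>
    <title>Our Application</title>
    <vendor>Us</vendor>
  </information>
  <resources>
    <jar href="our-app.jar" main="true"/>
    <extension name="thirdparty" href="thirdparty.jnlp"/>
  </resources>
  <application-desc main-class="com.example.Main"/>
</jnlp>
<!-- thirdparty.jnlp, a component extension that keeps the third party's signature -->
<jnlp spec="1.0+" codebase="http://example.com/app" href="thirdparty.jnlp">
  <information>
    <title>Third-party libraries</title>
    <vendor>Third Party</vendor>
  </information>
  <resources>
    <jar href="thirdparty-lib.jar"/>
  </resources>
  <component-desc/>
</jnlp>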
Failing that, you could strip out the third-party signatures before adding your own (by repacking the jars without META-INF/*.{SF,DSA,RSA}).
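With the Info-ZIP command-line tool, for example, the signature files can be stripped in place (jar name is a placeholder):
zip -d thirdparty-lib.jar 'META-INF/*.SF' 'META-INF/*.DSA' 'META-INF/*.RSA'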

I know this is a bit late, but we are going through this now. Our problem was the "MD2withRSA" signing issue. I resolved the problem in a couple of steps:
1) Worked with Verisign to remove the 'old' algorithm from our certificate - so the MD2withRSA algorithm was no longer used to sign our jars.
2) We also have a pile of third-party jars, and we just re-sign them with our certificate. We encountered the 'not all jars signed with the same certificate' error when both the SHA1 and SHA-256 algorithms were listed in the MANIFEST.MF. This was just a small subset of the jars, so for those we removed the bottom half of the MANIFEST.MF file: the part with the Name: entries and the digest values. That data is re-generated in the last part of our process. We unzip, exclude the old signing info, and re-jar. The last step is to re-sign the jars. We found that in some cases, if the old Name: entry with the SHA1 digest was still in the MANIFEST.MF, signing did not replace it with SHA-256, so we handle those jars manually (for now). We are working on updating our Ant tasks to handle this.
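For reference, the per-entry manifest sections in question look like this (entry name is hypothetical, digest values elided); a signed jar has one such section per class or resource:
Name: com/example/SomeClass.class
SHA1-Digest: (base64-encoded SHA1 digest)
SHA-256-Digest: (base64-encoded SHA-256 digest)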
Sorry - can't speak to why web start doesn't handle/allow it - just figured out how to make it work!
Good Luck!

Seems like a bug in the JRE. Personally I'm assuming the old default signing algorithm (DSA with SHA1 digest) is less secure than the new one (RSA with SHA256 digest), so it's best not to use the "-digestalg SHA1" option.
I solved this problem by using a custom Ant task in my build script to 'unsign' my jars before signing them. That way there is only one signature for each jar.
Here's my Ant task:
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;
import org.apache.tools.ant.types.FileSet;
import org.apache.tools.ant.types.Path;
import org.apache.tools.ant.types.Resource;
import org.apache.tools.ant.types.resources.FileProvider;
import org.apache.tools.ant.types.resources.FileResource;
import org.apache.tools.ant.util.FileUtils;
import org.apache.tools.ant.util.ResourceUtils;

public class UnsignJar extends Task {

    protected List<FileSet> filesets = new ArrayList<FileSet>();
    protected File todir;

    public void addFileset(final FileSet set) {
        filesets.add(set);
    }

    public void setTodir(File todir) {
        this.todir = todir;
    }

    @Override
    public void execute() throws BuildException {
        if (todir == null) {
            throw new BuildException("todir attribute not specified");
        }
        if (filesets.isEmpty()) {
            throw new BuildException("no fileset specified");
        }
        Path path = new Path(getProject());
        for (FileSet fset : filesets) {
            path.addFileset(fset);
        }
        for (Resource r : path) {
            FileResource from = ResourceUtils.asFileResource(r.as(FileProvider.class));
            File destFile = new File(todir, from.getName());
            File fromFile = from.getFile();
            if (!isUpToDate(destFile, fromFile)) {
                unsign(destFile, fromFile);
            }
        }
    }

    private void unsign(File destFile, File fromFile) {
        log("Unsigning " + fromFile);
        try {
            ZipInputStream zin = new ZipInputStream(new FileInputStream(fromFile));
            ZipOutputStream zout = new ZipOutputStream(new FileOutputStream(destFile));
            ZipEntry entry = zin.getNextEntry();
            while (entry != null) {
                // Skip the entire META-INF directory: this drops the signature
                // files (*.SF, *.DSA, *.RSA) along with the manifest. Note that
                // any manifest attributes (e.g. Main-Class) are lost; jarsigner
                // creates a fresh manifest when the jar is re-signed.
                if (!entry.getName().startsWith("META-INF")) {
                    copyEntry(zin, zout, entry);
                }
                zin.closeEntry();
                entry = zin.getNextEntry();
            }
            zin.close();
            zout.close();
        } catch (IOException e) {
            throw new BuildException(e);
        }
    }

    private void copyEntry(ZipInputStream zin, ZipOutputStream zout,
            ZipEntry entry) throws IOException {
        zout.putNextEntry(entry);
        byte[] buffer = new byte[1024 * 16];
        int byteCount = zin.read(buffer);
        while (byteCount != -1) {
            zout.write(buffer, 0, byteCount);
            byteCount = zin.read(buffer);
        }
        zout.closeEntry();
    }

    private boolean isUpToDate(File destFile, File fromFile) {
        return FileUtils.getFileUtils().isUpToDate(fromFile, destFile);
    }
}
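For completeness, the task might be declared and invoked in a build file like this (paths and names are illustrative):
<taskdef name="unsignjar" classname="UnsignJar" classpath="build/taskclasses"/>
<unsignjar todir="build/unsigned-libs">
  <fileset dir="lib" includes="**/*.jar"/>
</unsignjar>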


Hadoop Hive UDF with external library

I'm trying to write a UDF for Hadoop Hive that parses user agents. The following code works fine on my local machine, but on Hadoop I'm getting:
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String MyUDF.evaluate(java.lang.String) throws org.apache.hadoop.hive.ql.metadata.HiveException on object MyUDF@64ca8bfb of class MyUDF with arguments {All Occupations:java.lang.String} of size 1',
Code:
import java.io.IOException;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.*;
import com.decibel.uasparser.OnlineUpdater;
import com.decibel.uasparser.UASparser;
import com.decibel.uasparser.UserAgentInfo;

public class MyUDF extends UDF {
    public String evaluate(String i) {
        UASparser parser = null;
        parser = new UASparser();
        String key = "";
        OnlineUpdater update = new OnlineUpdater(parser, key);
        UserAgentInfo info = null;
        info = parser.parse(i);
        return info.getDeviceType();
    }
}
Some facts that come to mind that I should mention:
I'm compiling in Eclipse with "export runnable jar file" and the "extract required libraries into generated jar" option
I'm uploading this "fat jar" file with Hue
Minimum working example I managed to run:
public String evaluate(String i) {
    return "hello" + i;
}
I guess the problem lies somewhere around that library (downloaded from https://udger.com) I'm using, but I have no idea where.
Any suggestions?
Thanks, Michal
It could be a few things. Best thing is to check the logs, but here's a list of a few quick things you can check in a minute.
The jar does not contain all dependencies. I am not sure how Eclipse builds a runnable jar, but it may not include all dependencies. You can run
jar tf your-udf-jar.jar
to see what was included. You should see stuff from com.decibel.uasparser. If not, you have to build the jar with the appropriate dependencies (usually you do that using Maven).
Different version of the JVM. If you compile with JDK 8 and the cluster runs JDK 7, it will also fail.
Hive version. Sometimes the Hive APIs change slightly, enough to be incompatible. Probably not the case here, but make sure to compile the UDF against the same versions of Hadoop and Hive that you have in the cluster.
You should always check whether info is null after the call to parse().
Looks like the library uses a key, meaning that it actually gets data from an online service (udger.com), so it may not work without an actual key. Even more importantly, the library updates online, contacting the online service for each record. This means, looking at the code, that it will create one update thread per record. You should change the code to do that only once, in the constructor. Here's how to change it:
public class MyUDF extends UDF {
    UASparser parser = new UASparser();

    public MyUDF() {
        super();
        String key = "PUT YOUR KEY HERE";
        // update only once, when the UDF is instantiated
        OnlineUpdater update = new OnlineUpdater(parser, key);
    }

    public String evaluate(String i) {
        UserAgentInfo info = parser.parse(i);
        if (info != null) return info.getDeviceType();
        // you want it to return null if it's unparseable
        // otherwise one bad record will stop your processing
        // with an exception
        else return null;
    }
}
But to know for sure, you have to look at the logs: the YARN logs, but also the Hive logs on the machine you submit the job from (probably in /var/log/hive, but it depends on your installation).
Such a problem can probably be solved with the following steps:
1. Override the method UDF.getRequiredJars(), making it return an HDFS file path list whose values are determined by where you put the xxx_lib folder in your HDFS (see the sketch after this list). Note that the list must contain each jar's full HDFS path string, such as hdfs://yourcluster/some_path/xxx_lib/some.jar.
2. Export your UDF code by following the "Runnable jar file exporting wizard" (choose "copy required libraries into a sub folder next to the generated jar"). This step will produce an xxx.jar and a lib folder xxx_lib next to xxx.jar.
3. Put xxx.jar and the folder xxx_lib into your HDFS filesystem, matching the paths you returned in step 1.
4. Create the UDF using: add jar ${the-xxx.jar-hdfs-path}; create function your_function as 'qualified.name.of.udf.class';
Try it; I tested this and it works.
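A minimal sketch of step 1 (the HDFS paths are placeholders and the parsing logic is elided):
import org.apache.hadoop.hive.ql.exec.UDF;

public class MyUDF extends UDF {

    // Tell Hive which dependency jars to ship with the job.
    @Override
    public String[] getRequiredJars() {
        return new String[] {
            "hdfs://yourcluster/some_path/xxx_lib/some.jar",
            "hdfs://yourcluster/some_path/xxx_lib/another.jar"
        };
    }

    public String evaluate(String i) {
        return i; // real parsing logic goes here
    }
}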

Query exist-db from Java

I want to query eXist-db from Java. I know there are samples, but where can I get the necessary packages to run the examples?
in the samples :
import javax.xml.transform.OutputKeys;
import org.exist.storage.serializers.EXistOutputKeys;
import org.exist.xmldb.EXistResource;
import org.xmldb.api.DatabaseManager;
import org.xmldb.api.base.Collection;
import org.xmldb.api.base.Database;
import org.xmldb.api.modules.XMLResource;
Where can I get these?
And what is the correct standard connection string for eXist-db (port number etc.)?
And yes, I have tried to read the eXist-db documentation, but it is not really understandable for beginners; it is confusing.
All I want to do is write a Java class in Eclipse that can connect to an eXist-db instance and query an XML document.
Your question is badly written, and I think you are really not explaining what you are trying to do very well.
If you want the JAR files as dependencies directly for some project, then you can download eXist and get them from there. This has already been covered several times here; which JAR files you need as dependencies is documented on the eXist website, and links to that documentation have already been posted in this thread.
I wanted to add that if you do want a series of simple Java examples that use Maven to resolve the dependencies (which takes away the hard work), then when we wrote the eXist book we provided just that in the Integration Chapter. It shows you how to use each of eXist's different APIs from Java for storing/querying/updating etc. You can find the code from that book chapter here: https://github.com/eXist-book/book-code/tree/master/chapters/integration. Included are the Maven project files to resolve all the dependencies and build and run the examples.
If the code is not enough for you, you might also want to consider purchasing the book and reading the Integration Chapter carefully; that should answer all of your questions.
I ended up with a Maven project and imported some missing jars (like ws.commons etc.) by manually installing them in Maven.
The missing jars I copied from the eXist-db installation path on my local system.
Then I got it to work.
from: http://exist-db.org/exist/apps/doc/devguide_xmldb.xml
There are several XML:DB examples provided in eXist's samples directory. To start an example, use the start.jar jar file and pass the name of the example class as the first parameter, for instance:
java -jar start.jar org.exist.examples.xmldb.Retrieve [- other options]
Example: Retrieving a Document with XML:DB
import org.xmldb.api.base.*;
import org.xmldb.api.modules.*;
import org.xmldb.api.*;
import javax.xml.transform.OutputKeys;
import org.exist.xmldb.EXistResource;

public class RetrieveExample {

    private static String URI = "xmldb:exist://localhost:8080/exist/xmlrpc";

    /**
     * args[0] Should be the name of the collection to access
     * args[1] Should be the name of the resource to read from the collection
     */
    public static void main(String args[]) throws Exception {
        final String driver = "org.exist.xmldb.DatabaseImpl";
        // initialize database driver
        Class cl = Class.forName(driver);
        Database database = (Database) cl.newInstance();
        database.setProperty("create-database", "true");
        DatabaseManager.registerDatabase(database);

        Collection col = null;
        XMLResource res = null;
        try {
            // get the collection
            col = DatabaseManager.getCollection(URI + args[0]);
            col.setProperty(OutputKeys.INDENT, "no");
            res = (XMLResource) col.getResource(args[1]);
            if (res == null) {
                System.out.println("document not found!");
            } else {
                System.out.println(res.getContent());
            }
        } finally {
            // don't forget to clean up!
            if (res != null) {
                try { ((EXistResource) res).freeResources(); } catch (XMLDBException xe) { xe.printStackTrace(); }
            }
            if (col != null) {
                try { col.close(); } catch (XMLDBException xe) { xe.printStackTrace(); }
            }
        }
    }
}
On the page http://exist-db.org/exist/apps/doc/deployment.xml#D2.2.6 a list of dependencies is included; unfortunately there is no link to this page from http://exist-db.org/exist/apps/doc/devguide_xmldb.xml (it should be added).
The latest xmldb.jar documentation can be found on http://xmldb.exist-db.org/
All the jar files can be retrieved by installing eXist-db from the installer jar; the files are all in EXIST_HOME/lib/core
If you work with a Maven project, try adding this to your pom.xml:
<dependency>
<groupId>xmldb</groupId>
<artifactId>xmldb-api</artifactId>
<version>20021118</version>
</dependency>
Be aware that the release date is 2002.
Otherwise you can query eXist-db via XML-RPC.

Use Jsch to implement scp and additionally not reinvent the wheel

"How to copy a file using Jsch?" was the question first in place. As using Jsch is complicated and error-prone and also works very low-level, you need to program several lines to get a simple scp working.
So, how do I implement a scp (or even sftp) with as few lines of code as possible in Java and not violate the DRY principle?
You can use the libraries used by the Ant scp task:
package org.example.scp;

import org.apache.tools.ant.Project;
import org.apache.tools.ant.taskdefs.optional.ssh.Scp;

public class ScpCopyExample {

    public void downloadFile(String remoteFilePath, String localFilePath) {
        Scp scp = new Scp();
        scp.setFile("username:password@host.example.org:" + remoteFilePath);
        scp.setLocalTofile(localFilePath);
        scp.setProject(new Project()); // prevent a NPE (Ant works with projects)
        scp.setTrust(true); // workaround for not supplying known hosts file
        scp.execute();
    }

    public static void main(String[] args) {
        ScpCopyExample scpDemo = new ScpCopyExample();
        scpDemo.downloadFile("~/test.txt", "testlocal.txt");
    }
}
I did this with the following jars in my classpath:
jsch-0.1.48.jar
ant-jsch-1.6.5.jar
ant-1.7.0.jar
ant-launcher-1.7.0.jar
This example can easily be extended to upload files or use SFTP instead.
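For instance, a rough sketch of the upload direction using the task's todir attribute (credentials, host, and paths are placeholders):
Scp scp = new Scp();
scp.setFile("testlocal.txt");
scp.setTodir("username:password@host.example.org:/home/username");
scp.setProject(new Project());
scp.setTrust(true);
scp.execute();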
As few lines as possible? Try this Groovy example, which leverages the Ant scp task.
@Grapes([
    @Grab(group='org.apache.ant', module='ant-jsch', version='1.8.4'),
    @GrabConfig(systemClassLoader=true)
])
def ant = new AntBuilder()
ant.scp(file:"helloworld.doc", todir:"mark@remotehost:/home/mark/docs", password:"sEcReT")
The Grape annotations will download the jar dependencies at run-time.

Java: CaptureDeviceManager#getDeviceList() is empty?

I am trying to print out all of the capture devices that are supported, using the #getDeviceList() method in the CaptureDeviceManager class, but the returned Vector has a size of 0.
Why is that? I have a webcam that works - so there should be at least one. I am running Mac OS X Lion - using JMF 2.1.1e.
Thanks!
CaptureDeviceManager.getDeviceList(Format format) does not detect devices. Instead, it reads from the JMF registry, which is the jmf.properties file; it searches for the jmf.properties file in the classpath.
If your JMF install has succeeded, then the classpath would have been configured to include all the relevant JMF jars and directories. The JMF install comes with a jmf.properties file included in the 'lib' folder under the JMF installation directory. This means the jmf.properties would be located by JMStudio and you would usually see the JMStudio application executing correctly. (If your JMF install is under 'C:\Program Files', then run as administrator to get around UAC)
When you create your own application to detect the devices, the problem you described above might occur. I have seen a few questions related to the same problem. This is because your application's classpath might be different and might not include the environment classpath. Check out your IDE's properties here. The problem is that CaptureDeviceManager cannot find the jmf.properties file because it is not there.
As you have found out correctly, you can copy the jmf.properties file from the JMF installation folder. It would contain the correct device list since JMF detects it during the install (Check it out just to make sure anyway).
If you want to do device detection yourself, then create an empty jmf.properties file and put it somewhere in your classpath (it might throw a java.io.EOFException initially during execution, but that's properly handled by the JMF classes). Then use the following code for detecting webcams...
import javax.media.*;
import java.util.*;

// The wrapper class name here is arbitrary; the snippet needs a class to compile.
public class DeviceLister {
    public static void main(String[] args) {
        // Populates the registry as a side effect (see the VFWAuto class below).
        VFWAuto vfwObj = new VFWAuto();
        Vector devices = CaptureDeviceManager.getDeviceList(null);
        Enumeration deviceEnum = devices.elements();
        System.out.println("Device count : " + devices.size());
        while (deviceEnum.hasMoreElements()) {
            CaptureDeviceInfo cdi = (CaptureDeviceInfo) deviceEnum.nextElement();
            System.out.println("Device : " + cdi.getName());
        }
    }
}
The code for the VFWAuto class is given below. This is part of the JMStudio source code. You can get a good idea on how the devices are detected and recorded in the registry. Put both classes in the same package when you test. Disregard the main method in the VFWAuto class.
import com.sun.media.protocol.vfw.VFWCapture;
import java.util.*;
import javax.media.*;

public class VFWAuto {

    public VFWAuto() {
        Vector devices = (Vector) CaptureDeviceManager.getDeviceList(null).clone();
        // The original JMStudio code used 'enum' as the variable name,
        // which has been a reserved word since Java 5, so it is renamed here.
        Enumeration deviceEnum = devices.elements();
        while (deviceEnum.hasMoreElements()) {
            CaptureDeviceInfo cdi = (CaptureDeviceInfo) deviceEnum.nextElement();
            String name = cdi.getName();
            if (name.startsWith("vfw:"))
                CaptureDeviceManager.removeDevice(cdi);
        }
        int nDevices = 0;
        for (int i = 0; i < 10; i++) {
            String name = VFWCapture.capGetDriverDescriptionName(i);
            if (name != null && name.length() > 1) {
                System.err.println("Found device " + name);
                System.err.println("Querying device. Please wait...");
                com.sun.media.protocol.vfw.VFWSourceStream.autoDetect(i);
                nDevices++;
            }
        }
    }

    public static void main(String[] args) {
        VFWAuto a = new VFWAuto();
        System.exit(0);
    }
}
Assuming you are on a Windows platform and you have a working web-cam, this code should detect the device and populate the jmf.properties file. On the next run you can comment out the VFWAuto section and its object references, and you can see that CaptureDeviceManager reads from the jmf.properties file.
The VFWAuto class is part of jmf.jar. You can also see the DirectSoundAuto and JavaSoundAuto classes for detecting audio devices in the JMStudio sample source code. Try it out the same way as you did for VFWAuto.
My configuration was Windows 7 64 bit + JMF 2.1.1e windows performance pack + a web-cam.
I had the same issue and I solved it by invoking flush() on my ObjectOutputStream object.
According to the API documentation for ObjectInputStream's constructor:
The stream header containing the magic number and version number are read from the stream and verified. This method will block until the corresponding ObjectOutputStream has written and flushed the header.
This is a very important point to be aware of when trying to send objects in both directions over a socket because opening the streams in the wrong order will cause deadlock.
Consider for example what would happen if both client and server tried to construct an ObjectInputStream from a socket's input stream, prior to either constructing the corresponding ObjectOutputStream. The ObjectInputStream constructor on the client would block, waiting for the magic number and version number to arrive over the connection, while at the same time the ObjectInputStream constructor on the server side would also block for the same reason. Hence, deadlock.
Because of this, you should always make it a practice in your code to open the ObjectOutputStream and flush it first, before you open the ObjectInputStream. The ObjectOutputStream constructor will not block, and invoking flush() will force the magic number and version number to travel over the wire. If you follow this practice in both your client and server, you shouldn't have a problem with deadlock.
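A minimal sketch of the safe ordering on either end of the socket (host and port are placeholders):
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;

public class StreamOrderExample {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 9000);
        // Output stream first: its constructor writes the stream header,
        // and flush() pushes that header onto the wire immediately.
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        out.flush();
        // Safe now: if the peer follows the same order, both headers are
        // already in flight, so neither constructor blocks indefinitely.
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
        out.writeObject("hello");
        System.out.println(in.readObject());
        socket.close();
    }
}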
Credit goes to Tim Rohaly and his explanation here.
Before calling CaptureDeviceManager.getDeviceList(), the available devices must first be loaded into memory.
You can do that manually by running JMFRegistry after installing JMF,
or do it programmatically with the help of the extension library FMJ (Free Media in Java). Here is the code:
import java.lang.reflect.Field;
import java.util.Vector;
import javax.media.*;
import javax.media.format.RGBFormat;
import net.sf.fmj.media.cdp.GlobalCaptureDevicePlugger;

public class FMJSandbox {

    static {
        // Point java.library.path at the FMJ native libraries and null out the
        // ClassLoader's cached sys_paths field so the new value is re-read.
        System.setProperty("java.library.path", "D:/fmj-sf/native/win32-x86/");
        try {
            final Field sysPathsField = ClassLoader.class.getDeclaredField("sys_paths");
            sysPathsField.setAccessible(true);
            sysPathsField.set(null, null);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        GlobalCaptureDevicePlugger.addCaptureDevices();
        Vector deviceInfo = CaptureDeviceManager.getDeviceList(new RGBFormat());
        System.out.println(deviceInfo.size());
        for (Object obj : deviceInfo) {
            System.out.println(obj);
        }
    }
}
Here is the output:
USB2.0 Camera : civil:\\?\usb#vid_5986&pid_02d3&mi_00#7&584a19f&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global
RGB, -1-bit, Masks=-1:-1:-1, PixelStride=-1, LineStride=-1

WSIT: JKS relative filepath

When creating a web service server using Netbeans, Maven, Metro and Tomcat, how can I use relative filepaths in the wsit configuration?
For example, I have this line inside the wsit file:
<sc:KeyStore wspp:visibility="private" location="SERVER_KeyStore.jks" type="JKS" storepass="*****" alias="*****"/>
Where should I put the jks file so it matches that location?
Finally, I found the answer.
when providing the keystore/truststore name and location in the wsit-*.xml files, please note that they'll be loaded as resources scanning the META-INF directory in your package (WEB-INF/classes/META-INF when using war packages on JBoss Application Server 5).
from JBossWS - Stack Metro User Guide
In my case that means adding a META-INF folder to my resources folder and adding <include>**/*.jks</include> to the pom file.
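A sketch of the corresponding pom fragment (assuming the standard src/main/resources layout; note that an explicit <includes> list restricts copying to the listed patterns):
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <includes>
        <include>**/*.jks</include>
        <include>**/*.xml</include>
      </includes>
    </resource>
  </resources>
</build>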
I have seen a number of questions about wsit security configuration; most of them deal with externalizing the SSL configuration rather than hardcoding it into the wsdl file, if only because there may be development and production environments, and all in all hardcoded configuration is bad anyway. I spent several days on this issue and found only scattered (often monstrous) hints here on Stack Overflow and on various other forums. But the solution turned out not to be so complex after all. I'll just leave it here for others (it also matches the original question, because it allows keeping the jks anywhere, and having an external config file as well).
Say you have a wsit policy like this in your wsdl files:
<wsp1:Policy wsu:Id="MyBinding_IWebServicePolicy">
  <wsp1:ExactlyOne>
    <wsp1:All>
      <sc:KeyStore wspp:visibility="private" type="JKS" storepass="pass" alias="some-alias" keypass="pass" location="keystore.jks"/>
      <sc:TrustStore wspp:visibility="private" type="JKS" peeralias="other-alias" storepass="pass" location="truststore.jks"/>
    </wsp1:All>
  </wsp1:ExactlyOne>
</wsp1:Policy>
You need to use CallbackHandler instead.
Adjusted policy:
<wsp1:Policy wsu:Id="MyBinding_IWebServicePolicy">
  <wsp1:ExactlyOne>
    <wsp1:All>
      <sc:KeyStore wspp:visibility="private" callbackHandler="com.my.KeyStoreHandler"/>
      <sc:TrustStore wspp:visibility="private" callbackHandler="com.my.TrustStoreHandler"/>
    </wsp1:All>
  </wsp1:ExactlyOne>
</wsp1:Policy>
And the handler might look like this (I use Scala, but you can translate it to Java easily):
import javax.security.auth.callback.{ CallbackHandler => ICallbackHandler, Callback }
import com.sun.xml.wss.impl.callback.{ KeyStoreCallback, PrivateKeyCallback }
import java.security.{ PrivateKey, KeyStore }
import java.io.FileInputStream

abstract class CallbackHandler extends ICallbackHandler {
  def conf: Config // getting external configuration

  def handle(callbacks: Array[Callback]): Unit = callbacks foreach {
    // loads the keystore
    case cb: KeyStoreCallback =>
      val ks = KeyStore.getInstance(conf.getString("type"))
      val is = new FileInputStream(conf.getString("file"))
      try ks.load(is, conf.getString("store-password").toCharArray) finally is.close()
      cb.setKeystore(ks)

    // loads the private key
    case cb: PrivateKeyCallback =>
      cb.setAlias(conf.getString("alias"))
      cb.setKey(cb.getKeystore.getKey(conf.getString("alias"), conf.getString("key-password").toCharArray).asInstanceOf[PrivateKey])

    // other things
    case cb => // I didn't need anything else, but just in case
  }
}

class TrustStoreHandler extends CallbackHandler {
  lazy val conf = getMyTrustStoreConfig
}

class KeyStoreHandler extends CallbackHandler {
  lazy val conf = getMyKeyStoreConfig
}
In Java, just use if (cb instanceof Class) instead of case cb: Class =>; the rest of the code is practically Java without semicolons.
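For reference, a rough Java equivalent of the KeyStoreHandler (configuration values are hardcoded placeholders; PrivateKeyCallback.getKeystore() is used the same way the Scala version uses it):
import java.io.FileInputStream;
import java.io.IOException;
import java.security.KeyStore;
import java.security.PrivateKey;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import com.sun.xml.wss.impl.callback.KeyStoreCallback;
import com.sun.xml.wss.impl.callback.PrivateKeyCallback;

public class KeyStoreHandler implements CallbackHandler {
    public void handle(Callback[] callbacks) throws IOException {
        for (Callback c : callbacks) {
            try {
                if (c instanceof KeyStoreCallback) {
                    // load the keystore from an external location
                    KeyStoreCallback cb = (KeyStoreCallback) c;
                    KeyStore ks = KeyStore.getInstance("JKS");
                    FileInputStream is = new FileInputStream("/path/to/keystore.jks"); // placeholder
                    try {
                        ks.load(is, "store-password".toCharArray());
                    } finally {
                        is.close();
                    }
                    cb.setKeystore(ks);
                } else if (c instanceof PrivateKeyCallback) {
                    // load the private key from the keystore set above
                    PrivateKeyCallback cb = (PrivateKeyCallback) c;
                    cb.setAlias("some-alias");
                    cb.setKey((PrivateKey) cb.getKeystore()
                            .getKey("some-alias", "key-password".toCharArray()));
                }
            } catch (java.security.GeneralSecurityException e) {
                throw new IOException(e);
            }
        }
    }
}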
