I'm trying to export from HDFS into MySQL and have only been able to find the following technique:
public static boolean exportHDFSToSQL() throws InstantiationException, IllegalAccessException, ClassNotFoundException {
    try {
        SqoopOptions options = new SqoopOptions();
        options.setConnectString("jdbc:mysql://localhost:3306/dbName");
        options.setUsername("user_name");
        options.setPassword("pwd");
        options.setExportDir("path of file to be exported from hdfs");
        options.setTableName("table_name");
        options.setInputFieldsTerminatedBy(',');
        options.setNumMappers(1);
        new ExportTool().run(options);
    } catch (Exception e) {
        return false;
    }
    return true;
}
The problem I have is with the ExportTool().run() method. I am using Sqoop 1.4.2, and this method has apparently been deprecated. What is the new way of achieving this? Or can you point me to a documented source that will assist?
Thanks
Sqoop currently does not expose any Java API, so such usage is not supported. It might work now, but future versions might break this behavior.
I would expect that you see the deprecation because you are using the ExportTool class from the package com.cloudera.sqoop.tool, whereas the functionality was moved to the package org.apache.sqoop.tool and the original class was left in place for backward compatibility. You can learn more about the namespace migration on the appropriate Sqoop wiki page.
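If you still want to drive Sqoop from Java despite that caveat, one option that avoids the com.cloudera classes entirely is to go through the same entry point the sqoop command-line script uses. This is only a minimal sketch, assuming Sqoop 1.4.x and the placeholder connection values from the question; it is still not a supported public API:

import org.apache.sqoop.Sqoop;

public class SqoopExportRunner {
    public static boolean exportHDFSToSQL() {
        // Same export as the SqoopOptions version, expressed as the
        // command-line arguments that the sqoop script itself would pass.
        String[] args = new String[] {
            "export",
            "--connect", "jdbc:mysql://localhost:3306/dbName",
            "--username", "user_name",
            "--password", "pwd",
            "--table", "table_name",
            "--export-dir", "path of file to be exported from hdfs",
            "--input-fields-terminated-by", ",",
            "--num-mappers", "1"
        };
        // runTool returns 0 on success, non-zero on failure
        return Sqoop.runTool(args) == 0;
    }
}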
I am currently working on implementing Drag & Drop from Outlook to Swing (on Windows) using a Swing DropTarget. Because Outlook drag and drop does not automatically work with Swing, I debugged it and found out it uses the FileNameW native format for the event. To support this I use this code:
private static final String nativeFileNameW = "FileNameW";
private static final DataFlavor fileNameWFlavor = new DataFlavor(InputStream.class, nativeFileNameW);

public void installFileNameWFlavorIfWindows(DropTarget dt) {
    FlavorMap fm = dt.getFlavorMap();
    if (!(fm instanceof SystemFlavorMap)) {
        fm = SystemFlavorMap.getDefaultFlavorMap();
    }
    if (fm instanceof SystemFlavorMap) {
        SystemFlavorMap sysFM = (SystemFlavorMap) fm;
        sysFM.addFlavorForUnencodedNative(nativeFileNameW, fileNameWFlavor);
        sysFM.addUnencodedNativeForFlavor(fileNameWFlavor, nativeFileNameW);
        dt.setFlavorMap(sysFM);
    }
}
It seems to work fine, but I am not sure if this is the correct approach, since I couldn't find any resources on this problem.
In the drop event I can now get an InputStream when an Outlook Email is dropped on the Swing Component. I use the following code in my drop method (the real method is more complex, because it also handles other DataFlavors, but this example here can reproduce the error):
public void drop(DropTargetDropEvent dtde) {
    Transferable transfer = dtde.getTransferable();
    boolean accepted = false;
    if (transfer.isDataFlavorSupported(fileNameWFlavor)) {
        accepted = true;
        dtde.acceptDrop(DnDConstants.ACTION_COPY);
        try (InputStream is = (InputStream) transfer.getTransferData(fileNameWFlavor)) {
            // Do something with InputStream
        } catch (UnsupportedFlavorException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    dtde.dropComplete(accepted);
}
I use a try-with-resources statement to ensure the stream is closed after the drop event. I want to close the stream to make sure no open file handles or similar native resources, which could be limited, are left behind after the drop is completed.
The InputStream for a drop from Outlook is an instance of WDropTargetContextPeerFileStream, and when the close method is called, it crashes in the native method freeStgMedium, which should free the native Windows data structure.
I do not get any error output on the command line.
The program terminates with error code -1073740940 (0xC0000374), which seems to indicate heap corruption.
Is there anything I am missing? Is this InputStream not supposed to be closed, or is there a bug earlier on?
I am using the JDK from Azul, Zulu 8.48.0.53 (Java 8u265).
I have also tried it with Zulu 11, Oracle Java 8 and a Red Hat build of OpenJDK 8; all fail the same way.
Update:
I think I have tracked the bug down to the JDK native code that gets the data.
The JDK code creates a STGMEDIUM object on the stack and passes a pointer to it to the Windows method IDataObject::GetData(), which writes its data back into the STGMEDIUM* parameter.
This should not be a problem, since all examples of this Windows function do it the same way. But it seems that Outlook does not initialize the member variable IUnknown *STGMEDIUM::pUnkForRelease and instead relies on the caller to zero-fill the data structure (or Outlook has a bug).
When the native resources are released by Java, it calls ReleaseStgMedium, which tries to call Release on the pUnkForRelease pointer if it isn't NULL, and that causes the error.
For now, I simply don't close the input stream and let a file handle leak, which is not optimal, but I don't see any other solution.
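For reference, the workaround currently looks roughly like the helper below (my own sketch, the name is made up): it drains the dropped data into memory and deliberately never calls close(), so the native handle leak described above remains.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class OutlookDropWorkaround {

    /**
     * Reads the dropped Outlook data fully without closing the stream.
     * Closing WDropTargetContextPeerFileStream triggers the freeStgMedium
     * crash described above, so the native handle is intentionally leaked.
     */
    public static byte[] readWithoutClosing(InputStream is) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int read;
        while ((read = is.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);
        }
        // Note: is.close() is deliberately NOT called here.
        return buffer.toByteArray();
    }
}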
If I find a real solution to this bug, I will post an update/answer here.
I'm trying to auto-generate a mapping file with Castor 1.3.2 using the program below.
But here is the exception I get -
java.lang.IllegalArgumentException: No enum const class org.exolab.castor.mapping.xml.types.BindXmlNodeType.element
This is a fairly basic test, what am I doing wrong?
public class CastorMapping {

    public CastorMapping()
    {
        try
        {
            MappingTool tool = new MappingTool();
            tool.setInternalContext(new org.castor.xml.BackwardCompatibilityContext());
            tool.addClass(TestRequest.class);
            OutputStream file = new FileOutputStream("gen_mapping.xml");
            Writer writer = new OutputStreamWriter(file);
            tool.write(writer);
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
        }
    }

    public static void main(String[] args)
    {
        new CastorMapping();
    }
}
Thanks!
I tried this myself and I believe you are doing everything correctly.
I browsed the Castor source code, and as far as I can tell, they broke the MappingTool somewhere between 1.3 and 1.3.2 when they redesigned BindXmlNodeType to be an enum instead of a regular class. There is some code that looks up BindXmlNodeType.element, but now that BindXmlNodeType is an enum, it needs to look up ELEMENT (in caps). But I digress...
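To illustrate why the error message looks the way it does, here is a tiny standalone example (my own enum, not Castor's actual class) showing that Enum.valueOf() is case-sensitive and produces exactly this kind of IllegalArgumentException:

public class EnumLookupDemo {

    enum BindXmlNodeType { ATTRIBUTE, ELEMENT, NAMESPACE, TEXT }

    public static void main(String[] args) {
        // Works: the constant name matches exactly
        System.out.println(Enum.valueOf(BindXmlNodeType.class, "ELEMENT"));

        // Throws IllegalArgumentException, e.g. on Java 6:
        // "No enum const class EnumLookupDemo$BindXmlNodeType.element"
        // (the exact wording depends on the JDK version)
        System.out.println(Enum.valueOf(BindXmlNodeType.class, "element"));
    }
}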
If you can afford to revert to Castor 1.3, everything should work.
BTW - I tried to upgrade to 1.3.3-rc1, but IntelliJ could not resolve the Maven dependencies. For example, castor-xml in 1.3.3-rc1 now depends on Spring! It's possible that this bug is fixed in a later version, but I am not hopeful.
I'm currently dealing with a particular issue with my paid application. Internally it contains a licensing check. Hackers patch the app by modifying its APK/JAR: they add a new class that helps bypass the licensing check.
My goal is to somehow check for this particular patch. If I find it I know my app has been compromised.
Any tips on how to know that something has been modified on the package? Doing a hash over the app is not really an option in my case.
I thought maybe checking if this class exists would help, but what if they change the name of the class? Another idea is to somehow check for unexpected includes added to the class.
Any of these possible? Any suggestions would help :)
Not sure about Android, but in the standard JDK you would do something like this:
try {
    Class.forName("your.fqdn.class.name");
} catch (ClassNotFoundException e) {
    // my class isn't there!
}
Here is what I used in Android - standard Java:
public boolean isClass(String className) {
    try {
        Class.forName(className);
        return true;
    } catch (ClassNotFoundException e) {
        return false;
    }
}
Implementation example:
if (isClass("android.app.ActionBar")) {
    Toast.makeText(getApplicationContext(), "YES", Toast.LENGTH_SHORT).show();
}
You can use
public static Class<?> forName (String className)
and check the ClassNotFoundException
http://developer.android.com/reference/java/lang/Class.html#forName%28java.lang.String%29
How does it get loaded if it's a random class in a random package?
That being said, see http://download.oracle.com/javase/6/docs/api/java/lang/System.html#getProperties%28%29 and java.class.path. For normal java apps, you have to walk the classpath and then search the entries (for jars) or directories (for .class files). But in a container-class-loader environment, this will fail to work (and I'm not sure how that applies to an android environment).
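As a rough illustration of what "walking the classpath" could look like for a plain Java app (my own sketch; it covers only the standard java.class.path, not container or Android class loaders), something along these lines would list the class entries:

import java.io.File;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ClasspathScanner {
    public static void main(String[] args) throws Exception {
        String[] entries = System.getProperty("java.class.path")
                                 .split(File.pathSeparator);
        for (String entry : entries) {
            File f = new File(entry);
            if (f.isFile() && entry.endsWith(".jar")) {
                // Jar entry: list the .class files inside the archive
                try (JarFile jar = new JarFile(f)) {
                    Enumeration<JarEntry> e = jar.entries();
                    while (e.hasMoreElements()) {
                        JarEntry je = e.nextElement();
                        if (je.getName().endsWith(".class")) {
                            System.out.println(entry + " -> " + je.getName());
                        }
                    }
                }
            } else if (f.isDirectory()) {
                // Directory entry: .class files would have to be walked recursively
                System.out.println("directory entry: " + entry);
            }
        }
    }
}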
I am wrapping a shared library (written in C) with Java using JNA. The shared library is written internally, but it uses functions from another, external library, which in turn depends on yet another external library. So the situation is something like this:
ext1 <- ext2 <- internal
I.e. the internal library uses the external library ext2, which in turn uses the external library ext1. What I have tried is:
System.loadLibrary("ext1");
System.loadLibrary("ext2");
Native.loadLibrary("internal", xxx.class);
This approach fails with "UnresolvedException" when loading the library "ext2"; the linker complains about symbols which are indeed present in the library "ext1". So it seems that the System.loadLibrary() function does not make the symbols from "ext1" globally available? When using the stdlib function dlopen() as:
handle = dlopen( lib_name , RTLD_GLOBAL );
All the symbols found in lib_name will be available for symbol resolution in subsequent loads; I guess what I would like is something similar for the Java equivalent, System.loadLibrary()?
Regards - Joakim Hove
It's an old question, but I've found an acceptable solution, which should also be portable, and I thought I should post an answer. The solution is to use JNA's NativeLibrary#getInstance(), because on Linux this will pass RTLD_GLOBAL to dlopen() (and on Windows this is not needed).
Now, if you are using this library to implement a Java native method, you will also need to call System.load() (or System.loadLibrary()) on the same library, after calling NativeLibrary#getInstance().
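A minimal sketch of how that could look for the ext1/ext2/internal setup from the question (the interface name and its contents are placeholders for your actual JNA mapping):

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.NativeLibrary;

public class InternalLibraryLoader {

    /** Hypothetical mapping of the internal library; replace with your real interface. */
    public interface InternalLibrary extends Library {
        // native function declarations go here
    }

    public static InternalLibrary load() {
        // Load the dependencies first; on Linux JNA opens them with RTLD_GLOBAL,
        // so their symbols become visible to libraries loaded afterwards.
        NativeLibrary.getInstance("ext1");
        NativeLibrary.getInstance("ext2");

        // Now the internal library can resolve symbols from ext1 and ext2.
        InternalLibrary lib =
                (InternalLibrary) Native.loadLibrary("internal", InternalLibrary.class);

        // If this library also backs Java native methods, additionally call
        // System.loadLibrary("internal") here, as described above.
        return lib;
    }
}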
First, a link to a JNA bug: JNA-61
A comment in there says that basically one should load dependencies before the actual library to use from within JNA, not the standard Java way. I'll just copy-paste my code, it's a typical scenario:
String libPath =
        "/path/to/my/lib:" +    // My library file
        "/usr/local/lib:" +     // Libraries lept and tesseract
        System.getProperty("java.library.path");
System.setProperty("jna.library.path", libPath);

NativeLibrary.getInstance("lept");
NativeLibrary.getInstance("tesseract");

OcrTesseractInterf ocrInstance = (OcrTesseractInterf)
        Native.loadLibrary(OcrTesseractInterf.JNA_LIBRARY_NAME, OcrTesseractInterf.class);
I've written a small library to provide OCR capability to my Java app using Tesseract. Tesseract depends on Leptonica, so to use my library, I need to load the libraries lept and tesseract first. Loading the libraries with the standard means (System.load() and System.loadLibrary()) doesn't do the trick, and neither does setting the properties jna.library.path or java.library.path. Obviously, JNA likes to load libraries its own way.
This works for me on Linux; I guess if one sets the proper library path, this should work on other OSs as well.
There is yet another solution for that. You can dlopen directly inside JNI code, like this:
void loadLibrary() {
    if (handle == NULL) {
        handle = dlopen("libname.so", RTLD_LAZY | RTLD_GLOBAL);
        if (!handle) {
            fprintf(stderr, "%s\n", dlerror());
            exit(EXIT_FAILURE);
        }
    }
}
...
...
loadLibrary();
This way, you will open library with RTLD_GLOBAL.
You can find detailed description here: http://www.owsiak.org/?p=3640
OK;
I have found an acceptable solution in the end, but not without a significant number of hoops. What I do is:
Use the normal JNA mechanism to map the dlopen() function from the dynamic linking library (libdl.so).
Use the dlopen() function mapped in with JNA to load the external libraries "ext1" and "ext2" with the option RTLD_GLOBAL set (see the sketch below).
It actually seems to work :-)
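A sketch of what that mapping could look like (my own naming; the flag values shown are the Linux/glibc ones and are platform-specific, so verify them against your dlfcn.h):

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;

public class GlobalDlopen {

    /** Minimal libdl mapping, just enough to call dlopen() with RTLD_GLOBAL. */
    public interface LibDL extends Library {
        LibDL INSTANCE = (LibDL) Native.loadLibrary("dl", LibDL.class);

        Pointer dlopen(String filename, int flags);
        String dlerror();
    }

    // Flag values for Linux/glibc; they differ per platform.
    public static final int RTLD_LAZY = 0x0001;
    public static final int RTLD_GLOBAL = 0x0100;

    public static void openGlobally(String libraryPath) {
        Pointer handle = LibDL.INSTANCE.dlopen(libraryPath, RTLD_LAZY | RTLD_GLOBAL);
        if (handle == null) {
            throw new UnsatisfiedLinkError("dlopen failed: " + LibDL.INSTANCE.dlerror());
        }
    }

    public static void main(String[] args) {
        // Load the dependencies globally, then load the internal library as usual.
        openGlobally("libext1.so");
        openGlobally("libext2.so");
    }
}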
As described at http://www.owsiak.org/?p=3640, an easy but crude solution on Linux is to use LD_PRELOAD.
If that's not acceptable, then I'd recommend the answer by Oo.oO: dlopen the library with RTLD_GLOBAL within JNI code.
Try this: add this function to your code and call it before you load your DLLs. For the parameter, use the location of your DLLs.
public boolean addDllLocationToPath(String dllLocation)
{
    try
    {
        System.setProperty("java.library.path",
                System.getProperty("java.library.path") + ";" + dllLocation);
        // Force the ClassLoader to re-read java.library.path on the next library load
        Field fieldSysPath = ClassLoader.class.getDeclaredField("sys_paths");
        fieldSysPath.setAccessible(true);
        fieldSysPath.set(null, null);
    }
    catch (Exception e)
    {
        System.err.println("Could not modify path");
        return false;
    }
    return true;
}
In order to fix your issue you can use this package: https://github.com/victor-paltz/global-load-library. It loads the libraries directly with the RTLD_GLOBAL flag.
Here is an example:
import com.globalload.LibraryLoaderJNI;

public class HelloWorldJNI {

    static {
        // Loaded with RTLD_GLOBAL flag
        try {
            LibraryLoaderJNI.loadLibrary("/path/to/my_native_lib_A");
        } catch (UnsatisfiedLinkError e) {
            System.err.println("Couldn't load my_native_lib_A");
            System.err.println(e.getMessage());
            e.printStackTrace();
        }

        // Not loaded with RTLD_GLOBAL flag
        try {
            System.load("/path/to/my_native_lib_B");
        } catch (UnsatisfiedLinkError e) {
            System.err.println("Couldn't load my_native_lib_B");
            System.err.println(e.getMessage());
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        new HelloWorldJNI().sayHello();
    }

    private native void sayHello();
}
It is using the same dlopen() trick as the previous answers, but it is packaged in a standalone code.
As was made clear in my recent question, Swing applications need to explicitly call System.exit() when they are run using the Sun Web Start launcher (at least as of Java SE 6).
I want to restrict this hack as much as possible and I am looking for a reliable way to detect whether the application is running under Webstart. Right now I am checking that the value of the system property "webstart.version" is not null, but I couldn't find any guarantees in the documentation that this property should be set by future versions/alternative implementations.
Are there any better ways (preferably ones that do not create a dependency on the Web Start API)?
When your code is launched via javaws, javaws.jar is loaded and the JNLP API classes that you don't want to depend on are available. Instead of testing for a system property that is not guaranteed to exist, you could instead see if a JNLP API class exists:
private boolean isRunningJavaWebStart() {
    boolean hasJNLP = false;
    try {
        Class.forName("javax.jnlp.ServiceManager");
        hasJNLP = true;
    } catch (ClassNotFoundException ex) {
        hasJNLP = false;
    }
    return hasJNLP;
}
This also avoids needing to include javaws.jar on your class path when compiling.
Alternatively you could switch to compiling with javaws.jar and catching NoClassDefFoundError instead:
private boolean isRunningJavaWebStart() {
    try {
        ServiceManager.getServiceNames();
        return true;
    } catch (NoClassDefFoundError e) {
        return false;
    }
}
Using ServiceManager.lookup(String) and UnavailableServiceException is trouble because both are part of the JNLP API. ServiceManager.getServiceNames() is not documented to throw; we are specifically calling it to check for a NoClassDefFoundError.
Use the javax.jnlp.ServiceManager to retrieve a webstart service.
If it is available, you are running under Webstart.
See http://download.java.net/jdk7/docs/jre/api/javaws/jnlp/index.html
As you mentioned, checking the System property as follows is probably the cleanest way:
private boolean isRunningJavaWebStart() {
    return System.getProperty("javawebstart.version", null) != null;
}
In a production system I have used the above technique for years.
You can also check whether there are any properties that start with "jnlpx.", but none of those are really "guaranteed" to be there either, as far as I know.
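For example, a quick (and equally unguaranteed) check along those lines could look like this:

private boolean hasJnlpxProperties() {
    // Heuristic only: jnlpx.* properties are set by the Sun/Oracle Web Start
    // launcher, but they are not part of any public contract, and reading all
    // properties may require permissions in a sandboxed app.
    for (Object key : System.getProperties().keySet()) {
        if (key.toString().startsWith("jnlpx.")) {
            return true;
        }
    }
    return false;
}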
An alternative could be to attempt to instantiate the DownloadService as suggested by Tom:
private boolean isRunningJavaWebStart() {
    try {
        DownloadService ds = (DownloadService) ServiceManager.lookup("javax.jnlp.DownloadService");
        return ds != null;
    } catch (UnavailableServiceException e) {
        return false;
    }
}
Of course that does have the downside of coupling your code to that API.
I have no real experience with Java web start other than looking at it a few years back.
How about starting your application with a parameter that you define and that is only set when the app is started via Java Web Start?
If you want to pass in arguments to your app, you have to add them to the start-up file (aka JNLP descriptor) using <argument> or <property> elements.
Then check to see if these properties are set.
Again, this is just a suggestion; I have not coded for JWS, and it may not be this easy.
You can check whether the current classloader is an instance of com.sun.jnlp.JNLPClassLoader (Java plugin 1) or sun.plugin2.applet.JNLP2ClassLoader (Java plugin 2). Despite the "applet" package, an applet using JNLP with the Java plugin 2 uses another classloader, sun.plugin2.applet.Applet2ClassLoader. It works with OpenJDK too.
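A small sketch of that check by class name (my own helper; comparing names avoids compiling against the internal Sun classes, but these are implementation details that may change between JRE versions):

private boolean isWebStartClassLoader() {
    ClassLoader loader = getClass().getClassLoader();
    if (loader == null) {
        return false;
    }
    // Compare by name so the internal classes are not needed on the classpath.
    String name = loader.getClass().getName();
    return name.equals("com.sun.jnlp.JNLPClassLoader")
            || name.equals("sun.plugin2.applet.JNLP2ClassLoader");
}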