I am working on calling a DLL API from C/C++ with JNA.
The function in the DLL is declared as short DKT_init(LPCSTR name). I made the corresponding Java method public short DKT_init(String name); but when I call it, the DLL API returns a parameter error. How should LPCSTR be mapped in JNA? LPCSTR is const char *, but String maps to char *.
String is the appropriate mapping for LPCSTR. JNA converts the String's UTF-16 characters into a NUL-terminated buffer of bytes using the default platform encoding.
You might try passing an explicit byte array instead (using an alternate method mapping that takes byte[]), which would rule out a potential encoding mismatch, e.g.
byte[] arg = { (byte)'f', (byte)'o', (byte)'o', (byte)0 };
You can alter the encoding used by setting the system property "jna.encoding".
You should also eliminate the possibility that LPCSTR is actually the wrong type: if the function expects a buffer it can write into, String will not work; and if the parameter is really LPCTSTR and you are building with UNICODE, you need to pass a WString instead.
Have you tried mapping it to a byte array, like this:
short DKT_init(byte[] nameAsByteArray);
//now you should be able to obtain it like this:
System.out.println(new String(nameAsByteArray).trim());
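If you go the byte[] route, a small helper keeps the NUL terminator from being forgotten. This is just a sketch on top of the standard library; the DKT_init DLL itself isn't involved, and US-ASCII is only an example charset:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CStrings {
    // Encode a Java String as a NUL-terminated C string in the given charset.
    static byte[] toCString(String s, Charset cs) {
        byte[] encoded = s.getBytes(cs);
        // copyOf pads with a trailing 0 byte, the C string terminator
        return Arrays.copyOf(encoded, encoded.length + 1);
    }

    public static void main(String[] args) {
        byte[] arg = toCString("foo", StandardCharsets.US_ASCII);
        System.out.println(arg.length);          // 'f', 'o', 'o', 0
        System.out.println(arg[arg.length - 1]); // the terminator
    }
}
```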
I have a JNA Java interface for a C function mpv_set_option_string defined as:
public interface MPV extends StdCallLibrary {
    MPV INSTANCE = Native.loadLibrary("lib/mpv-1.dll", MPV.class, W32APIOptions.DEFAULT_OPTIONS);

    long mpv_create();
    int mpv_initialize(long handle);
    int mpv_set_option_string(long handle, String name, String data);
}
When I call this like this:
System.setProperty("jna.encoding", "UTF8");
long handle = MPV.INSTANCE.mpv_create();
int error = MPV.INSTANCE.mpv_initialize(handle);
error = MPV.INSTANCE.mpv_set_option_string(handle, "keep-open", "always");
I get an error back (-5) from the last call, indicating the option (keep-open) is not found.
However, if I change the JNA function signature to:
int mpv_set_option_string(long handle, byte[] name, byte[] data);
...and then call it like this:
error = MPV.INSTANCE.mpv_set_option_string(
handle,
"keep-open\0".getBytes(StandardCharsets.UTF_8),
"always\0".getBytes(StandardCharsets.UTF_8)
);
...it returns no error (0) and works correctly (or so it seems).
What I don't get is that JNA is supposed to encode String as a NUL-terminated char * using the UTF-8 encoding I set (exactly what I do manually), yet I get different results.
Anyone able to shed some light on this?
You shouldn't be passing W32APIOptions to a library that isn't a Win32 API.
By default, JNA maps String to char*, so removing the options should fix the issue for you.
You should also be using an explicit native type for your handle instead of Java long. Pointer is probably correct in this case.
Looks like I found the issue, although I'm not 100% sure what is happening.
It seems that using W32APIOptions.DEFAULT_OPTIONS means the UNICODE settings are applied (because the w32.ascii property is false). That looked okay to me, as mpv-1.dll works with UTF-8 strings only, which is Unicode.
However, I now suspect that in this case JNA will call a wide-char version of the library function (and, if that doesn't exist, still call the original function), and that it encodes Strings with two bytes per character. This is because most Win32 libraries offer an ANSI and a WIDE version of every function that accepts strings, but nothing for UTF-8.
Since mpv-1.dll only accepts UTF-8 (and isn't really a Win32 API), strings should simply be encoded as UTF-8 bytes (basically, left alone). To let JNA know this, either do not pass a W32APIOptions map at all, or select ASCII_OPTIONS manually.
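To see why the wide-char (UNICODE) option breaks a UTF-8-only API, compare the bytes the two encodings produce for the same option name; this is plain JDK code, no JNA or mpv required:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String name = "keep-open";
        byte[] utf8  = name.getBytes(StandardCharsets.UTF_8);
        byte[] utf16 = name.getBytes(StandardCharsets.UTF_16LE);
        System.out.println(utf8.length);  // one byte per ASCII character
        System.out.println(utf16.length); // two bytes per character
        System.out.println(utf16[1]);     // embedded 0: a char*-reading API stops here
    }
}
```

So a function reading char* sees the UTF-16 buffer as a one-character string, which is why mpv rejects the option name.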
Given the following example:
String f="FF00000000000000";
byte[] bytes = DatatypeConverter.parseHexBinary(f);
String f2= new String (bytes);
I want the output to be FF00000000000000 but it's not working with this method.
You're currently trying to interpret the bytes as if they were text encoded using the platform default encoding (UTF-8, ISO-8859-1 or whatever). That's not what you actually want to do at all - you want to convert it back to hex.
For that, just look at the converter you're using for the parsing step, and look for similar methods which work in the opposite direction. In this case, you want printHexBinary:
String f2 = DatatypeConverter.printHexBinary(bytes);
The approach of "look for reverse operations near the original operation" is a useful one in general... but be aware that sometimes you need to look at a parallel type, e.g. DataInputStream / DataOutputStream. When you find yourself using completely different types for inverse operations, that's usually a bit of a warning sign. (It's not always wrong, it's just worth investigating other options.)
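One caveat: DatatypeConverter lives in javax.xml.bind, which was removed from the JDK in Java 11. If it isn't available, a minimal stand-in for both directions of the round trip might look like this (a sketch, not a drop-in replacement for the whole class):

```java
public class Hex {
    // Parse a hex string like "FF00" into the bytes it encodes.
    static byte[] parseHexBinary(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    // Format bytes back into an upper-case hex string.
    static String printHexBinary(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02X", b & 0xff));
        return sb.toString();
    }

    public static void main(String[] args) {
        String f = "FF00000000000000";
        System.out.println(printHexBinary(parseHexBinary(f)));
    }
}
```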
While trying to send the name of the window in which a key is currently being pressed from JNI C code to a Java method, the JVM crashes. I think it is due to passing an invalid argument. Please explain why the call fails and how I can send the argument.
Prototype of java method looks like :
public void storeNameOfForegroundWindow(String windowName) {
// store in the list
}
JNI C snippet :
jmethodID callBackToStoreWindowName = (*env)->GetMethodID(env,cls,"storeNameOfForegroundWindow","(Ljava/lang/String;)V");
TCHAR title[500];
GetWindowText(GetForegroundWindow(), title, 500);
jvalue windowName,*warr;
windowName.l = title;
warr = &title;
(*Env)->CallVoidMethodA(Env,object,callBackToStoreWindowName,warr);
The JVM crashes when it encounters the above snippet. I know that the JVM crashes because an invalid argument is passed to the Java function (via the C code). If that is so, please explain how I should send the argument; I need to pass the title of the current window to the Java function.
Since your method takes a String as its argument, you should give it a jstring instance; the JVM cannot understand what a TCHAR is. So you need to convert your chars to a Java string using:
(*env)->NewStringUTF(env, title);
EDIT: if TCHAR is wchar_t, i.e. is 16 bit and can be cast to a jchar, then you need to use NewString instead of NewStringUTF. You can read more here.
When I first saw TCHAR, I thought: "Magnificent! You can write one piece of code that works on both Win9x and WinNT, and calls the best platform functions, with just one definition: _UNICODE." But over time I have seen that it confuses many developers. There is no standalone TCHAR type; it is a typedef for char when _UNICODE is not defined and for wchar_t otherwise, so it changes with the project's settings. The Java method, on the other hand, expects exactly one of them (either char or wchar_t, though I don't know which). So if you define _UNICODE in your project (new IDEs do this automatically) while the Java method expects a char*, you are passing a wchar_t*, which is the wrong type, and the length of the string will be counted as one (since wchar_t is 2 bytes, most single-byte chars map to the char plus an extra '\0'). And if you pass a char* where the function expects a wchar_t*, it may produce an error (for example, an access violation) because:
TCHAR title[500]; // will be compiled as char title[500];
// now fill it; it may contain 'abcde\0 some junk data'
// Java wants to interpret this as wchar_t* and expects '\0\0' as the end of the string;
// since wchar_t is 2 bytes, it may never see that in your string and will read beyond
// the end of title, which is obviously an error!
What would be the algorithm/implementation of the C++ code C++functionX in the following flow chart:
(JavaString) --getBytes--> (bytes) --C++functionX--> (C++String)
JavaString contents should match C++String contents as far as possible (preferably 100% for all possible values of JavaString)
[EDIT] The endianness of bytes can be ignored as there are ways to handle that.
Java:
String original = new String("BANANAS");
byte[] utf8Bytes = original.getBytes("UTF8");
//save the length as a 32 bit integer, then utf8 Bytes to a file
C++:
int32_t tlength;
std::string utf8Bytes;
//load the tlength as a 32 bit integer, then the utf8 bytes from the file
//well, that's easy for UTF8
//to turn that into a UTF-16 string on Windows
int wlength = MultiByteToWideChar(CP_UTF8, 0, utf8Bytes.c_str(), utf8Bytes.size(), nullptr, 0);
std::wstring result(wlength, '\0');
MultiByteToWideChar(CP_UTF8, 0, utf8Bytes.c_str(), utf8Bytes.size(), &result[0], wlength);
//so that's not hard either
To do this in linux, one uses the iconv library, which is incredibly powerful, but more difficult to use. Here's a function that converts a std::string in UTF8 to a std::wstring in UTF32: http://coliru.stacked-crooked.com/view?id=986a4a07e391213559d4e65acaf231d5-e54ee7a04e4b807da0930236d4cc94dc
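For completeness, here is what the Java side of that flow could look like with the length prefix, plus a read-back in Java standing in for the C++ reader. Note that DataOutputStream writes the int big-endian, so the C++ side must agree on endianness (the question notes this can be handled separately):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class Utf8RoundTrip {
    public static void main(String[] args) throws IOException {
        String original = "BANANAS";

        // Write: a 32-bit length (big-endian), then the UTF-8 bytes
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
        out.writeInt(utf8Bytes.length);
        out.write(utf8Bytes);

        // Read it back the same way the C++ side would
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        byte[] back = new byte[in.readInt()];
        in.readFully(back);
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```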
There's no such thing as One True C++ String class. STL alone has std::string and std::wstring. That said, most string classes have a constructor that takes raw byte pointer as a parameter. The bytes come in the const char * form. So, a good example of your C++functionX is the constructor std::string::string(const char*, int).
Note the encoding issues. getBytes() takes an encoding as a parameter; you better match that on the C++ side, or you'll get jumble. If not sure, use UTF-8.
Depending on what kinds of Java strings you have, you might want to choose either regular or wide strings (e. g. std::wstring). The latter is a slightly better representation of what Java String offers.
C++, as far as the standard goes, doesn't know about encodings. Java does. So, to interface the two, make Java emit some well-defined encoding, such as UTF8:
byte[] utf8str = str.getBytes("UTF8");
In C++, use a library such as iconv() to transform the UTF8-string either into another string of a well-defined encoding (e.g. std::u32string with UTF-32, if you have C++11, or std::basic_string<uint32_t> or std::vector<uint32_t> otherwise), or, alternatively, convert it to WCHAR_T encoding, to be stored in a std::wstring, and proceed further to convert this to a multi-byte string via the standard function wcstombs() if you wish to interface with your environment.
The choice depends on what you need to do with the string. For serialization or text processing, go with the definite encoding (e.g. UTF-32). For writing to the standard output using the system's locale, use the multibyte conversion. (Here is a slightly longer discussion of encodings in C++.)
The C++ string should probably be a std::wstring instance, and you would also need to keep track of the encoding you use to transform the Java String into bytes.
This article will probably help you more:
std::wstring VS std::string
A legacy piece of software I'm rewriting in Java uses a custom encoding (similar to Win-1252) for its data storage. For the new system I'm building, I'd like to replace this with UTF-8.
So I need to convert those files to UTF-8 to feed my database. I know the character map used, but it's not any of the widely known ones. E.g., "A" is at position 0x0041 (as in Win-1252), but at 0x0042 there is a character which in Unicode is code point 0x0102, and so on. Is there an easy way to decode and convert those files with Java?
I've read many posts already, but they all dealt with industry-standard encodings of some kind, not with custom ones. I expect it's possible to create a custom java.nio.charset.CharsetDecoder or java.nio.charset.Charset to pass to java.io.InputStreamReader, as described in the first answer here?
Any suggestions welcome.
No need to make it complicated, just build an array of 256 chars:
static char[] map = { ... 'A', '\u0102', ... };
Then, for each byte b read from the source:
int index = 0xff & b; // mask to get an unsigned index
char c = map[index];
target.write(c);
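Putting the table-lookup idea into a complete, runnable form; the mapping table here is hypothetical (identity for every position except 0x42, which the question says maps to U+0102), and a real converter would fill in all 256 entries of the legacy encoding:

```java
import java.io.ByteArrayInputStream;
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.InputStream;

public class LegacyDecoder {
    // Hypothetical table: identity mapping plus the one remapping from
    // the question (legacy byte 0x42 -> U+0102).
    static final char[] MAP = new char[256];
    static {
        for (int i = 0; i < 256; i++) MAP[i] = (char) i;
        MAP[0x42] = '\u0102';
    }

    static String decode(InputStream in) throws IOException {
        CharArrayWriter out = new CharArrayWriter();
        int b;
        while ((b = in.read()) != -1) {
            out.write(MAP[b & 0xff]); // unsigned index into the table
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        byte[] legacy = { 0x41, 0x42 }; // 'A' plus the remapped byte
        String decoded = decode(new ByteArrayInputStream(legacy));
        System.out.println(decoded.charAt(0));
        System.out.println((int) decoded.charAt(1)); // code point 0x102 = 258
    }
}
```

From there, new String(...) values can be written out with an OutputStreamWriter configured for UTF-8.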