I want to create a request XML-RPC struct in Java; my code is like this:
public String xmlprc() throws XmlRpcException, MalformedURLException {
    ReqModelTest req = new ReqModelTest();
    String test = "";
    Object paramsR = new Object();
    Vector params = new Vector();
    req.setvalue1("value1");
    req.setvalue2("value2");
    req.setvalue3("value3");
    req.setvalue4("value4");
    req.setvalue5("value5");
    params.add(req);
    XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
    try {
        config.setServerURL(new URL("myurl"));
        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);
        paramsR = (Object) client.execute("mymethod", params);
    } catch (MalformedURLException | XmlRpcException e) {
        log.info(e);
    }
    log.info(paramsR.toString());
    test = paramsR.toString();
    return test;
}
But when I run it, it shows the error org.apache.xmlrpc.XmlRpcException: Failed to generate request: Unsupported Java type: com.model.ReqModelTest. Is there any way to do this? Thank you very much.
The most common error in these cases is not implementing the Serializable interface.
The documentation tells us to follow these guidelines:
All primitive Java types are supported, including long, byte, short, and double.
Calendar objects are supported. In particular, timezone settings, and milliseconds may be sent.
DOM nodes, or JAXB objects, can be transmitted. So are objects implementing the java.io.Serializable interface.
Both server and client can operate in a streaming mode, which preserves resources much better than the default mode, which is based on large internal byte arrays.
Please check you are following these guidelines.
http://ws.apache.org/xmlrpc/
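For illustration, the Serializable route could look roughly like this (a minimal sketch, assuming both the client and the server run Apache XML-RPC with vendor extensions enabled; buildClient is just a helper name chosen here):
import java.io.Serializable;
import java.net.URL;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

// The request bean implements Serializable so it can travel as an extension type.
public class ReqModelTest implements Serializable {
    private static final long serialVersionUID = 1L;

    private String value1;
    public void setvalue1(String value1) { this.value1 = value1; }
    // ... the remaining fields and setters as in the question

    // Client setup; extensions must also be enabled on the server side.
    public static XmlRpcClient buildClient(String url) throws Exception {
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL(url));
        config.setEnabledForExtensions(true); // allows Serializable payloads
        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);
        return client;
    }
}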
You need to create a custom TypeFactory to handle your own data types.
Please see this link
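For what it's worth, here is a minimal sketch of that approach. It assumes ReqModelTest exposes getters matching its setters (getvalue1() and so on, which the question does not show) and converts the bean into a Map so the stock struct serializer can do the actual writing:
import java.util.HashMap;
import java.util.Map;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.common.TypeFactoryImpl;
import org.apache.xmlrpc.common.XmlRpcStreamConfig;
import org.apache.xmlrpc.serializer.TypeSerializer;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;

public class ReqModelTypeFactory extends TypeFactoryImpl {

    public ReqModelTypeFactory(XmlRpcClient client) {
        super(client);
    }

    @Override
    public TypeSerializer getSerializer(XmlRpcStreamConfig pConfig, Object pObject)
            throws SAXException {
        if (!(pObject instanceof ReqModelTest)) {
            return super.getSerializer(pConfig, pObject);
        }
        // Represent the bean as an XML-RPC <struct> by converting it to a Map
        // and reusing the default Map serializer for the actual writing.
        ReqModelTest req = (ReqModelTest) pObject;
        final Map<String, Object> struct = new HashMap<String, Object>();
        struct.put("value1", req.getvalue1()); // hypothetical getters
        // ... put the remaining fields the same way
        final TypeSerializer mapSerializer = super.getSerializer(pConfig, struct);
        return new TypeSerializer() {
            public void write(ContentHandler pHandler, Object ignored) throws SAXException {
                mapSerializer.write(pHandler, struct); // write the Map, not the bean
            }
        };
    }
}

// Registered on the client before calling execute():
// client.setTypeFactory(new ReqModelTypeFactory(client));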
I'm trying to write some unit tests for Kafka Streams and have a number of quite complex schemas that I need to incorporate into my tests.
Instead of just creating objects from scratch each time, I would ideally like to instantiate using some real data and perform tests on that. We use Confluent with records in Avro format, and can extract both the schema and a text JSON-like representation from the Control Center application. The JSON is valid JSON, but it's not really in the form that you'd write it in if you were just writing JSON representations of the data, so I assume it's some representation of the underlying Avro in text form.
I've already used the schema to create a Java SpecificRecord class (price_assessment) and would like to use the JSON string copied from the Control Center message to populate a new instance of that class to feed into my unit test InputTopic.
The code I've tried so far is
var testAvroString = "{JSON copied from Control Center topic}";
Schema schema = price_assessment.getClassSchema();
DecoderFactory decoderFactory = new DecoderFactory();
Decoder decoder = null;
try {
    DatumReader<price_assessment> reader = new SpecificDatumReader<price_assessment>();
    decoder = decoderFactory.get().jsonDecoder(schema, testAvroString);
    return reader.read(null, decoder);
} catch (Exception e) {
    return null;
}
which is adapted from another SO answer that was using GenericRecords. When I try running this, though, I get the exception Cannot invoke "org.apache.avro.Schema.equals(Object)" because "writer" is null on the reader.read(...) step.
I'm not massively familiar with streams testing or Java, and I'm not sure what exactly I've done wrong. This is written in Java 17 with Kafka Streams 3.1.0, though I'm flexible on versions.
The solution that I've managed to come up with is the following, which seems to work:
private static <T> T avroStringToInstance(Schema classSchema, String testAvroString) {
    DecoderFactory decoderFactory = new DecoderFactory();
    GenericRecord genericRecord = null;
    try {
        // Decode the Control Center JSON into a GenericRecord using the class schema.
        Decoder decoder = decoderFactory.jsonDecoder(classSchema, testAvroString);
        DatumReader<GenericData.Record> reader = new GenericDatumReader<>(classSchema);
        genericRecord = reader.read(null, decoder);
    } catch (Exception e) {
        return null;
    }
    // Deep-copy the generic record into the generated SpecificRecord class.
    var specific = (T) SpecificData.get().deepCopy(genericRecord.getSchema(), genericRecord);
    return specific;
}
I'm relatively new to Riak development. I am using a C++ client to connect to a Riak Java client that in turn connects to the Riak cluster. The C++ client serializes a query as a string using Google Protocol Buffers in the form "GET":KEY, and the Riak Java client serializes the response as "OK":VALUE. How do I properly handle the case where the value was not found in the database, though?
This is the sample code on the Riak Java client side to retrieve the object from the db:
String key; // Contains the actual key
Namespace ns; // Initialized elsewhere
byte[] retVal = null;
Location location = new Location(ns, key);
try {
    FetchValue fv = new FetchValue.Builder(location).build();
    FetchValue.Response response = client.execute(fv);
    if (response.isNotFound()) {
        System.out.println("The key/value pair was not found");
    } else {
        RiakObject obj = response.getValue(RiakObject.class);
        retVal = obj.getValue().getValue();
    }
}
catch (...) {...}
return retVal;
}
If the object was not found, the byte[] array remains null.
This is the sample code on the Riak Java client side to serialize the reply:
ByteString valueBuf = ByteString.copyFrom(value);
// Generate reply message
reply = Request.RequestMessage.newBuilder().setCommand("OK").setValue(valueBuf).build().toByteArray();
However, the code throws a NullPointerException at the copyFrom line, since it tries to copy from a null array. Is there a way to do this more cleanly?
Thanks!
You should check to see if value == null before attempting the copyFrom() call.
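A minimal sketch of that guard, reusing the Request.RequestMessage builder from your question; the "NOT_FOUND" command string is an assumption about your own protocol, not anything Riak defines:
byte[] reply;
if (value == null) {
    // Key was absent: reply without a value payload.
    reply = Request.RequestMessage.newBuilder()
            .setCommand("NOT_FOUND")
            .build()
            .toByteArray();
} else {
    reply = Request.RequestMessage.newBuilder()
            .setCommand("OK")
            .setValue(ByteString.copyFrom(value))
            .build()
            .toByteArray();
}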
Also, you should consider using Go's ability to be called from C (cgo) rather than the Java client for integration. I put together a very, very basic demonstration of doing that here.
Is there any equivalent of TSerializer in the Thrift C# API?
I am trying to use Thrift serialization and then push the serialized object onto MQ, without using the Thrift transport mechanism. On the other end I'll deserialize it back to the actual message.
I can do it in Java but not in C#.
The Apache Thrift C# library doesn't have a TSerializer presently. However, it does have a TMemoryBuffer (essentially a transport that reads/writes memory), which works perfectly for this kind of thing. Create a TMemoryBuffer, construct a protocol (like TBinaryProtocol), and then serialize your messages and send them as blobs from the TMemoryBuffer.
For example:
TMemoryBuffer trans = new TMemoryBuffer(); //Transport
TProtocol proto = new TCompactProtocol(trans); //Protocol
PNWF.Trade trade = new PNWF.Trade(initStuff); //Message type (thrift struct)
trade.Write(proto); //Serialize the message to memory
byte[] bytes = trans.GetBuffer(); //Get the serialized message bytes
//SendAMQPMsg(bytes); //Send them!
To receive the message you just do the reverse. TMemoryBuffer has a constructor you can use to set the received bytes to read from.
public TMemoryBuffer(byte[] buf);
Then you just call your struct's Read() method on the read-side I/O stack.
This isn't much more code (maybe less) than using the Java TSerializer helper and it is a bit more universal across Apache Thrift language libraries. You may find TMemoryBuffer is the way to go everywhere!
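For comparison, the Java TSerializer helper mentioned above boils down to something like this (a hedged sketch; Trade stands in for whatever generated struct you actually use):
import org.apache.thrift.TDeserializer;
import org.apache.thrift.TException;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TBinaryProtocol;

public class ThriftRoundTrip {
    public static void main(String[] args) throws TException {
        Trade trade = new Trade(); // generated Thrift struct, stand-in name
        TSerializer serializer = new TSerializer(new TBinaryProtocol.Factory());
        byte[] payload = serializer.serialize(trade);   // bytes to push onto MQ

        TDeserializer deserializer = new TDeserializer(new TBinaryProtocol.Factory());
        Trade received = new Trade();
        deserializer.deserialize(received, payload);    // rebuild on the other end
    }
}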
Credit due to the other answer on this page, and from here:
http://www.markhneedham.com/blog/2008/08/29/c-thrift-examples/
Rather than expecting everyone to take the explanations and write their own functions, here are two functions to serialize and deserialize generalized thrift objects in C#:
public static byte[] serialize(TBase obj)
{
var stream = new MemoryStream();
TProtocol tProtocol = new TBinaryProtocol(new TStreamTransport(stream, stream));
obj.Write(tProtocol);
return stream.ToArray();
}
public static T deserialize<T>(byte[] data) where T : TBase, new()
{
T result = new T();
var buffer = new TMemoryBuffer(data);
TProtocol tProtocol = new TBinaryProtocol(buffer);
result.Read(tProtocol);
return result;
}
There is an RPC framework named "Thrifty" that uses the standard Thrift protocol. It has the same effect as using Thrift IDL to define the service; that is, Thrifty is compatible with code that uses Thrift IDL, and it includes a serializer:
[ThriftStruct]
public class LogEntry
{
[ThriftConstructor]
public LogEntry([ThriftField(1)]String category, [ThriftField(2)]String message)
{
this.Category = category;
this.Message = message;
}
[ThriftField(1)]
public String Category { get; }
[ThriftField(2)]
public String Message { get; }
}
ThriftSerializer serializer = new ThriftSerializer(ThriftSerializer.SerializeProtocol.Binary);
LogEntry entry = new LogEntry("category", "message");
byte[] bytes = serializer.Serialize<LogEntry>(entry);
LogEntry roundTripped = serializer.Deserialize<LogEntry>(bytes);
more detail: https://github.com/endink/Thrifty
I posted this question to the CXF list, without any luck. So here we go. I am trying to upload large files to a remote server (think of them as virtual machine disks). So I have a RESTful service that accepts upload requests. The handler for the upload looks like:
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("/doupload")
public Response receiveStream(MultipartBody multipart) {
    List<Attachment> allAttachments = multipart.getAllAttachments();
    Attachment att = null;
    for (Attachment b : allAttachments) {
        if (UPLOAD_FILE_DESCRIPTOR.equals(b.getContentId())) {
            att = b;
        }
    }
    Assert.notNull(att);
    DataHandler dh = att.getDataHandler();
    if (dh == null) {
        throw new WebApplicationException(HTTP_BAD_REQUEST);
    }
    try {
        InputStream is = dh.getInputStream();
        byte[] buf = new byte[65536];
        int n;
        OutputStream os = getOutputStream();
        while ((n = is.read(buf)) > 0) {
            os.write(buf, 0, n);
        }
        ResponseBuilder rb = Response.status(HTTP_CREATED);
        return rb.build();
    } catch (IOException e) {
        log.error("Got exception=", e);
        throw new WebApplicationException(HTTP_INTERNAL_ERROR);
    } catch (NoSuchAlgorithmException e) {
        log.error("Got exception=", e);
        throw new WebApplicationException(HTTP_INTERNAL_ERROR);
    } finally {}
}
The client for this code is fairly simple:
public void sendLargeFile(String filename) {
    WebClient wc = WebClient.create(targetUrl);
    InputStream is = new FileInputStream(new File(filename));
    Response r = wc.post(new Attachment(Constants.UPLOAD_FILE_DESCRIPTOR,
            MediaType.APPLICATION_OCTET_STREAM, is));
}
The code works fine in terms of functionality. In terms of performance, I noticed that before my handler (receiveStream() method) gets the first byte out of the stream, the whole stream actually gets persisted into a temporary file (using a CachedOutputStream). Unfortunately, this is not acceptable for my purposes.
My handler simply passes the incoming bytes to a backend storage system (virtual machine disk repository), and waiting for the whole disk to be written to a cache only to be read again takes a lot of time, tying up a lot of resources, and reducing throughput.
There is a cost associated with writing the blocks and reading them again, since the app is running in the cloud, and the cloud provider charges per block read/written.
Since every byte is written to the local disk, my service VM must have enough disk space to accommodate the total sizes of all the streams being uploaded (i.e., if I have 10 uploads of 100GB each, I must have 1TB of disk just to cache the content). That again is extra money, as the size of the service VM grows dramatically, and the cloud provider charges for the provisioned disk size as well.
Given all of this, I am looking for a way to use the HTTP InputStream (or as close to it as possible) to read the attachment directly from there and handle it afterwards. I guess the question translates into one of:
- Is there a way to tell CXF not to do caching?
- OR - is there a way to pass CXF an output stream (one I write) to use, rather than using CachedOutputStream?
I found a similar question here. The resolution says to use CXF 2.2.3 or later; I am using 2.4.4 (and tried 2.7.0) with no luck.
Thanks.
I think it's logically not possible (neither in CXF nor anywhere else). You're calling getAllAttachments(), which means that the server has to collect information about them from the HTTP input stream. That means the entire stream has to be buffered for MIME parsing before your code gets control.
In your case you should work directly with the stream, and do the MIME parsing yourself:
public Response receiveStream(InputStream input) {
Now you have full control of the input and can consume it byte by byte.
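A minimal sketch of that shape, reusing HTTP_CREATED and the getOutputStream() helper from your question; note that this copies the raw multipart body (boundaries included), so the MIME parsing mentioned above is still yours to do:
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("/doupload")
public Response receiveStream(InputStream input) throws IOException {
    byte[] buf = new byte[65536];
    int n;
    OutputStream os = getOutputStream(); // backend storage stream, as in the question
    while ((n = input.read(buf)) > 0) {
        os.write(buf, 0, n);
    }
    return Response.status(HTTP_CREATED).build();
}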
I ended up fixing the problem in an inelegant way, but it works, so I wanted to share my experience. Please do let me know if there are more "standard" or better ways.
Since I am writing the server side, I knew I was accessing all the attachments in the order they were sent, and processing them as they are streamed in. So, to reflect that behavior in the handler method (the receiveStream() method above), I created a new annotation on the server side called @SequentialAttachmentProcessing and annotated the above method with it.
I also wrote a subclass of Attachment, called SequentialAttachment, that acts like a linked list. It has a skip() method that skips over the current attachment, and when an attachment ends, a hasMore() method tells you whether there is another one.
Then I wrote a custom multipart/form-data provider which behaves as follows: if the target method is annotated as above, handle the attachment; otherwise, call the default provider to do the handling. When it is handled by my provider, it always returns at most one attachment. Hence it could be misleading to an unsuspecting handling method. However, I think this is acceptable, since the writer of the server must have annotated the method with @SequentialAttachmentProcessing and therefore must know what that entails.
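For illustration, the marker annotation itself can be as small as this (the name comes from my description above; the retention and target choices are the obvious ones, but pick whatever your provider needs):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation checked by the custom multipart/form-data provider.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SequentialAttachmentProcessing {
}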
As a result the implementation of the receiveStream() method is now something like:
@POST
@SequentialAttachmentProcessing
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("/doupload")
public Response receiveStream(MultipartBody multipart) {
    List<Attachment> allAttachments = multipart.getAllAttachments();
    Assert.isTrue(allAttachments.size() <= 1);
    if (allAttachments.size() > 0) {
        Attachment head = allAttachments.get(0);
        Assert.isTrue(head instanceof SequentialAttachment);
        SequentialAttachment att = (SequentialAttachment) head;
        while (att != null) {
            DataHandler dh = att.getDataHandler();
            InputStream is = dh.getInputStream();
            byte[] buf = new byte[65536];
            int n;
            OutputStream os = getOutputStream();
            while ((n = is.read(buf)) > 0) {
                os.write(buf, 0, n);
            }
            // Move to the next attachment in the stream, or stop when there is none.
            att = att.hasMore() ? att.next() : null;
        }
    }
    return Response.status(HTTP_CREATED).build();
}
While this solved my immediate problem, I still believe there has to be a standard way of doing this. I hope this helps someone.
I read the claims from Sun people about the wonderful space economy of not only using FastInfoSet, but using it with an external vocabulary. The code for this purpose is included in the most recent version (1.2.8), but it is not exactly fully documented.
For many files, this works just great for me. However, we've come up with an XML file which, when serialized from DOM with the vocabulary I created (using the generator in the FI library) and then read back into DOM, mismatches. The mismatches are all in PC-data.
I just call setVocabulary on the serializer and setExternalVocabulary with a map from URI to vocabulary on the reader.
I had to invent my own mechanism to actually serialize a vocabulary; there didn't seem to be one anywhere in the FI library.
One fiddly bit of business is that the org.jvnet.fastinfoset.Vocabulary class is what the generator gives you, but it's not what the parsers and serializers eat. I made arrangements to serialize these, and then use the code below to turn them into the needed objects:
private static void initializeAnalysis() {
    InputStream is = FastInfosetUtils.class.getResourceAsStream(ANALYSIS_VOCAB_CLASSPATH);
    try {
        ObjectInputStream ois = new ObjectInputStream(is);
        analysisJvnetVocab = (SerializableVocabulary) ois.readObject();
        ois.close();
    } catch (IOException e) {
        throw new RuntimeException(e);
    } catch (ClassNotFoundException e) {
        throw new RuntimeException(e);
    }
    analysisSerializerVocab = new SerializerVocabulary(analysisJvnetVocab.getVocabulary(), false);
    analysisParserVocab = new ParserVocabulary(analysisJvnetVocab.getVocabulary());
}
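For completeness, the "arrangements to serialize" mentioned above were nothing fancier than standard Java object serialization; here is a minimal sketch of the write side that matches the read code above (the method name and file target are just examples):
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Counterpart of initializeAnalysis(): writes the vocabulary wrapper that
// ois.readObject() above later reads back from the classpath resource.
private static void writeAnalysisVocabulary(SerializableVocabulary vocab, File target)
        throws IOException {
    ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(target));
    try {
        oos.writeObject(vocab);
    } finally {
        oos.close();
    }
}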
and then, to actually write a document:
SerializerVocabulary fullVocab = new SerializerVocabulary();
fullVocab.setExternalVocabulary(ANALYSIS_VOCAB_URI, analysisSerializerVocab, false);
// pass fullVocab to setVocabulary.
and to read:
Map<Object, Object> vocabMap = new HashMap<Object, Object>();
vocabMap.put(ANALYSIS_VOCAB_URI, analysisParserVocab);
// pass map into setExternalVocabulary
I could easily imagine that the recipe for creating serialization vocabularies is not right; it's not as if I was following a tutorial. Does anyone happen to know?
UPDATE
Since no one around here had anything to add to this question, I made a test case and filed a bug report. Somewhat to my surprise, it turned out that it was, in fact, a bug, and a fix has been made.