Is there any equivalent of TSerializer in the Thrift C# API?
I am trying to use Thrift serialization and then push the serialized object into MQ, without using the Thrift transport mechanism. On the other end I'll deserialize it back into the actual message.
I can do it in Java but not in C#.
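For reference, this is roughly what I do on the Java side today (the MyMessage struct type is just a placeholder):
import org.apache.thrift.TDeserializer;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TBinaryProtocol;

// Serialize a generated struct to bytes (may throw TException)
TSerializer serializer = new TSerializer(new TBinaryProtocol.Factory());
byte[] bytes = serializer.serialize(myMessage); // myMessage is any generated TBase struct

// ...push bytes to MQ, then on the consuming side:
TDeserializer deserializer = new TDeserializer(new TBinaryProtocol.Factory());
MyMessage received = new MyMessage(); // placeholder struct type
deserializer.deserialize(received, bytes);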
The Apache Thrift C# library doesn't have a TSerializer presently. However it does have a TMemoryBuffer (essentially a transport that reads/writes memory) which works perfectly for this kind of thing. Create a TMemoryBuffer, construct a protocol (like TBinaryProtocol) and then serialize your messages and send them as blobs from the TMemoryBuffer.
For example:
TMemoryBuffer trans = new TMemoryBuffer(); //Transport
TProtocol proto = new TCompactProtocol(trans); //Protocol
PNWF.Trade trade = new PNWF.Trade(initStuff); //Message type (thrift struct)
trade.Write(proto); //Serialize the message to memory
byte[] bytes = trans.GetBuffer(); //Get the serialized message bytes
//SendAMQPMsg(bytes); //Send them!
To receive the message you just do the reverse. TMemoryBuffer has a constructor you can use to set the received bytes to read from.
public TMemoryBuffer(byte[] buf);
Then you just call your struct's Read() method on the read-side I/O stack.
This isn't much more code (maybe less) than using the Java TSerializer helper and it is a bit more universal across Apache Thrift language libraries. You may find TMemoryBuffer is the way to go everywhere!
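For instance, a rough sketch of the same pattern on the Java side (the Trade struct name is hypothetical; note that for reading, the Java library pairs TMemoryBuffer with TMemoryInputTransport):
import java.util.Arrays;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TMemoryBuffer;
import org.apache.thrift.transport.TMemoryInputTransport;

// Write side: serialize the struct into memory (may throw TException)
TMemoryBuffer writeBuf = new TMemoryBuffer(1024);
trade.write(new TBinaryProtocol(writeBuf));
byte[] bytes = Arrays.copyOf(writeBuf.getArray(), writeBuf.length());

// Read side: deserialize from the received bytes
Trade received = new Trade();
received.read(new TBinaryProtocol(new TMemoryInputTransport(bytes)));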
Credit is due to the other answer on this page, and to this post:
http://www.markhneedham.com/blog/2008/08/29/c-thrift-examples/
Rather than expecting everyone to take the explanations and write their own functions, here are two functions to serialize and deserialize generalized thrift objects in C#:
public static byte[] serialize(TBase obj)
{
var stream = new MemoryStream();
TProtocol tProtocol = new TBinaryProtocol(new TStreamTransport(stream, stream));
obj.Write(tProtocol);
return stream.ToArray();
}
public static T deserialize<T>(byte[] data) where T : TBase, new()
{
T result = new T();
var buffer = new TMemoryBuffer(data);
TProtocol tProtocol = new TBinaryProtocol(buffer);
result.Read(tProtocol);
return result;
}
There is an RPC framework named "thrifty" that uses the standard Thrift protocol. It has the same effect as using Thrift IDL to define the service, i.e. Thrifty is compatible with code that uses Thrift IDL, and it includes a serializer:
[ThriftStruct]
public class LogEntry
{
[ThriftConstructor]
public LogEntry([ThriftField(1)]String category, [ThriftField(2)]String message)
{
this.Category = category;
this.Message = message;
}
[ThriftField(1)]
public String Category { get; }
[ThriftField(2)]
public String Message { get; }
}
ThriftSerializer serializer = new ThriftSerializer(ThriftSerializer.SerializeProtocol.Binary);
byte[] bytes = serializer.Serialize<LogEntry>(new LogEntry("category", "message"));
LogEntry entry = serializer.Deserialize<LogEntry>(bytes);
More detail: https://github.com/endink/Thrifty
I'm trying to write some unit tests for Kafka Streams and have a number of quite complex schemas that I need to incorporate into my tests.
Instead of just creating objects from scratch each time, I would ideally like to instantiate them using some real data and run tests on that. We use Confluent with records in Avro format, and can extract both the schema and a text JSON-like representation from the Control Center application. The JSON is valid JSON, but it's not really in the form you'd write if you were hand-writing JSON representations of the data, so I assume it's some text-form representation of the underlying Avro.
I've already used the schema to create a Java SpecificRecord class (price_assessment) and would like to use the JSON string copied from the Control Center message to populate a new instance of that class to feed into my unit test InputTopic.
The code I've tried so far is
var testAvroString = "{JSON copied from Control Center topic}";
Schema schema = price_assessment.getClassSchema();
DecoderFactory decoderFactory = new DecoderFactory();
Decoder decoder = null;
try {
DatumReader<price_assessment> reader = new SpecificDatumReader<price_assessment>();
decoder = decoderFactory.get().jsonDecoder(schema, testAvroString);
return reader.read(null, decoder);
} catch (Exception e)
{
return null;
}
which is adapted from another SO answer that was using GenericRecords. When I try running this, though, I get the exception Cannot invoke "org.apache.avro.Schema.equals(Object)" because "writer" is null at the reader.read(...) step.
I'm not massively familiar with streams testing or Java, and I'm not sure what exactly I've done wrong. This is written with Java 17 and Kafka Streams 3.1.0, though I'm flexible on versions.
The solution that I've managed to come up with is the following, which seems to work:
private static <T> T avroStringToInstance(Schema classSchema, String testAvroString) {
DecoderFactory decoderFactory = new DecoderFactory();
GenericRecord genericRecord = null;
try {
Decoder decoder = decoderFactory.jsonDecoder(classSchema, testAvroString);
DatumReader<GenericData.Record> reader =
new GenericDatumReader<>(classSchema);
genericRecord = reader.read(null, decoder);
} catch (Exception e)
{
return null;
}
var specific = (T) SpecificData.get().deepCopy(genericRecord.getSchema(), genericRecord);
return specific;
}
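Calling it then looks something like this, using the price_assessment class and JSON string from the question:
Schema schema = price_assessment.getClassSchema();
price_assessment record = avroStringToInstance(schema, testAvroString);
// record can then be fed into the test's InputTopic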
I want to create an XML-RPC request struct in Java; my code is like this:
public String xmlprc() throws XmlRpcException, MalformedURLException{
ReqModelTest req = new ReqModelTest();
String test="";
Object paramsR = new Object();
Vector params = new Vector();
req.setvalue1("value1");
req.setvalue2("value2");
req.setvalue3("value3");
req.setvalue4("value4");
req.setvalue5("value5");
params.add(req);
XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
try {
config.setServerURL(new URL("myurl"));
XmlRpcClient client = new XmlRpcClient();
client.setConfig(config);
paramsR = (Object)client.execute("mymethod", params);
} catch (MalformedURLException | XmlRpcException e) {
log.info(e);
}
log.info(paramsR.toString());
test = paramsR.toString();
return test;
}
But when I run it, it shows the error org.apache.xmlrpc.XmlRpcException: Failed to generate request: Unsupported Java type: com.model.ReqModelTest. Is there any way to do this? Thank you very much.
The most common error we have in these cases is not implementing the Serializable interface.
The documentation tells us the following:
All primitive Java types are supported, including long, byte, short, and double.
Calendar objects are supported. In particular, timezone settings, and milliseconds may be sent.
DOM nodes, or JAXB objects, can be transmitted. So are objects implementing the java.io.Serializable interface.
Both server and client can operate in a streaming mode, which preserves resources much better than the default mode, which is based on large internal byte arrays.
Please check you are following these guidelines.
http://ws.apache.org/xmlrpc/
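As a rough sketch, the model class from the question would then look something like this (fields assumed from the question). Note that sending Serializable objects is a vendor extension in Apache XML-RPC, so it also has to be enabled on both sides, e.g. config.setEnabledForExtensions(true) on the client:
import java.io.Serializable;

public class ReqModelTest implements Serializable {
    private static final long serialVersionUID = 1L;
    private String value1; // ...plus value2..value5 as in the question
    public void setvalue1(String v) { this.value1 = v; }
    // ...remaining getters/setters
}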
You need to create a custom TypeFactory to handle your own data types.
Please see this link
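A minimal skeleton of such a factory (the serializer body is left out; ReqModelTest is the type from the question):
import org.apache.xmlrpc.common.TypeFactoryImpl;
import org.apache.xmlrpc.common.XmlRpcController;
import org.apache.xmlrpc.common.XmlRpcStreamConfig;
import org.apache.xmlrpc.serializer.TypeSerializer;
import org.xml.sax.SAXException;

public class MyTypeFactory extends TypeFactoryImpl {
    public MyTypeFactory(XmlRpcController controller) {
        super(controller);
    }

    @Override
    public TypeSerializer getSerializer(XmlRpcStreamConfig config, Object object) throws SAXException {
        if (object instanceof ReqModelTest) {
            // return a TypeSerializer that writes the model's fields here
        }
        return super.getSerializer(config, object);
    }
}

// then, when building the client:
client.setTypeFactory(new MyTypeFactory(client));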
I have a hashMap that needs to travel from server to client over network. Now when the size increases beyond some limit provided at socket buffer the following exception is thrown.
Caused by: weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '3002880' bytes exceeds the configured maximum of: '3000000' bytes for protocol: 't3'
While googling I found suggestions to increase the socket buffer size, but that is not what I want, since it is not a good solution.
I am instead trying to compress the HashMap before sending, using DeflaterOutputStream/InflaterInputStream. The challenge here is that the ObjectOutputStream object is created by WebLogic classes, and the Deflater/Inflater streams would have to be embedded when the ObjectOutputStream is created for the compression to work.
Is there some way I can do this?
Also, could there be some way to simply enable compression in the t3 protocol used by WebLogic? I have done some research on whether the t3 protocol supports this, but it seems it does not. I am not sure whether some newer version of WebLogic supports it.
I was also thinking of breaking the HashMap into chunks of the socket buffer size, but that would require changing the existing design and is not preferred as of now.
Please share your thoughts on this.
If the HashMap might contain even more data in the future, compressing it will also only be a temporary solution. The way to resolve it permanently is to split the request into several requests if there are too many items in the map.
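A sketch of that approach: split the map into bounded chunks and send each in its own request (the entry count here is just a rough proxy for message size):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

static <K, V> List<Map<K, V>> chunk(Map<K, V> map, int maxEntriesPerChunk) {
    List<Map<K, V>> chunks = new ArrayList<>();
    Map<K, V> current = new HashMap<>();
    for (Map.Entry<K, V> e : map.entrySet()) {
        current.put(e.getKey(), e.getValue());
        if (current.size() == maxEntriesPerChunk) { // chunk full, start a new one
            chunks.add(current);
            current = new HashMap<>();
        }
    }
    if (!current.isEmpty()) {
        chunks.add(current); // last, partially filled chunk
    }
    return chunks;
}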
You could wrap the Map into another object, e.g. called Payload and make all communications be part of a compressed object.
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import org.apache.commons.lang3.BooleanUtils; // Apache Commons Lang

public class Payload<T extends Serializable> {
private T payload;
public Payload( T payload ) {
this.payload = payload;
}
public T get() { // public so callers can unwrap the payload
return payload;
}
public static final boolean ENABLE_COMPRESSION = BooleanUtils.toBooleanDefaultIfNull( BooleanUtils.toBooleanObject( System.getProperty( "serialization.gzip.enabled" ) ), true );
private void writeObject( ObjectOutputStream oos ) throws IOException {
if ( ENABLE_COMPRESSION ) {
GZIPOutputStream zos = new GZIPOutputStream( oos, 65536 );
ObjectOutputStream sender = new ObjectOutputStream( zos );
sender.writeObject( payload );
sender.flush();
zos.finish();
} else {
oos.defaultWriteObject();
}
}
@SuppressWarnings( "unchecked" )
private void readObject( ObjectInputStream ois ) throws ClassNotFoundException, IOException {
if ( ENABLE_COMPRESSION ) {
GZIPInputStream zis = new GZIPInputStream( ois, 65536 );
ObjectInputStream receiver = new ObjectInputStream( zis );
payload = (T) receiver.readObject();
} else {
ois.defaultReadObject();
}
}
}
Then when sending any object, it will be sent wrapped within a Payload object and therefore compressed.
public Payload<Map> someRemoteCall() {
Map map = new HashMap();
populate( map ); // do whatever needs to be done to fill up the map.
Payload<Map> payload = new Payload<Map>( map );
return payload;
}
Obviously it may involve some changes to interfaces which may be undesirable, but so far it's the best I've found for this.
Hope this helps.
I would like to simply open an AMQP 1.0 connection with a specific max_frame_size using the Apache Qpid Proton client library. This is inside a testsuite, not a real world application.
The Java library seems more advanced than the C library and its various bindings for other languages, so I started with the Java one. Unfortunately, I can't find a way to set this parameter, though there must be one: there is a Transport class which offers to get or set max_frame_size.
I first tried the Messenger API, then played with the Engine API, but I couldn't figure out how to access the transport instance. In the case of the Engine API, I see there is a Connection.getTransport() and tried that, but it returns null at the time I call it.
Here is my last test:
private void do_test_with_frame_size(int frame_size, int payload_size) {
Connection conn = Connection.Factory.create();
Transport transport = conn.getTransport();
transport.setMaxFrameSize(frame_size);
Session session = conn.session();
Sender sender = session.sender("sender");
conn.open();
session.open();
sender.open();
if (sender.getCredit() > 0) {
String uri = System.getProperty("broker_uri");
assertNotNull(uri);
String address = String.format("%s/fragmentation-%d-%d",
uri, frame_size, payload_size);
Message message = Proton.message();
message.setAddress(address);
message.setBody(new AmqpValue(new byte[payload_size]));
byte[] msgData = new byte[1024];
int length;
while(true) {
try {
length = message.encode(msgData, 0, msgData.length);
break;
} catch(BufferOverflowException e) {
msgData = new byte[msgData.length * 2];
}
}
byte[] tag = "0".getBytes();
Delivery delivery = sender.delivery(tag);
sender.send(msgData, 0, length);
delivery.settle();
sender.advance();
sender.close();
sender.getSession().close();
sender.getSession().getConnection().close();
}
}
I admit I have very limited knowledge of Java. Could you please confirm it is even possible to set this parameter and, if yes, tell me how to?
You need to create a Transport instance for the connection to use and then bind the transport to the connection instance. A created Connection does not have an implicit Transport bound to it, which is why you currently get null returned.
private final Transport protonTransport = Proton.transport();
private final Connection protonConnection = Proton.connection();
...
this.protonTransport.setMaxFrameSize(maxFrameSize);
this.protonTransport.setChannelMax(CHANNEL_MAX);
this.protonTransport.bind(this.protonConnection);
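Applied to the question's test, the beginning would become something like this (names taken from the question's code):
Connection conn = Connection.Factory.create();
Transport transport = Proton.transport();
transport.setMaxFrameSize(frame_size);
transport.bind(conn); // bind before using the connection
conn.open();
// conn.getTransport() now returns the bound transport instead of null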
Relatively new to Riak development here. I am using a C++ client that connects to a Riak Java client, which in turn connects to the Riak cluster. The C++ client serializes a query as a string using Google Protocol Buffers in the form "GET":KEY, and the Riak Java client serializes the response as "OK":VALUE. How do I properly handle the case where the value was not found in the database, though?
This is the sample code from the Riak java client to retrieve the object from the db:
String key; // Contains the actual key
Namespace ns;
byte[] retVal = null;
Location location = new Location(ns, key);
try {
FetchValue fv = new FetchValue.Builder(location).build();
FetchValue.Response response = client.execute(fv);
if (response.isNotFound()) {
System.out.println("The key/value pair was not found");
} else {
RiakObject obj = response.getValue(RiakObject.class);
retVal = obj.getValue().getValue();
}
}
catch (...) {...}
return retVal;
}
If the object was not found, the byte[] array remains null.
This is the sample code from the Riak java client to serialize the reply:
ByteString valueBuf = ByteString.copyFrom(value);
// Generate reply message
reply = Request.RequestMessage.newBuilder().setCommand("OK").setValue(valueBuf).build().toByteArray();
However, the code throws a NullPointerException at the copyFrom line, since it tries to copy from a null array. Is there a way to do this more cleanly?
Thanks!
You should check to see if value == null before attempting the copyFrom() call.
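For example, a sketch along these lines (the "NOT_FOUND" command is hypothetical; use whatever your protocol defines for a miss):
Request.RequestMessage.Builder builder = Request.RequestMessage.newBuilder();
if (value != null) {
    builder.setCommand("OK").setValue(ByteString.copyFrom(value));
} else {
    builder.setCommand("NOT_FOUND"); // signal a miss instead of copying from a null array
}
reply = builder.build().toByteArray();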
Also, you should consider using Go's ability to be called from C (cgo) rather than the Java client for integration. I put together a very, very basic demonstration of doing that here.