I need to write a List<MyClass> myList to a CSV file; in particular, I need to write the values. MyClass has the following implementation:
public class MyClass {
private Object[] values;
//...
@Override
public String toString()
{
String line = "";
for (int i=0; i<this.getSize(); i++) {
//if (this.values[i] != null) {
line = line + this.values[i] + " ";
//}
}
return line;
}
}
The code is the following:
private void saveSolutionToCSV(List<MyClass> solution) {
int columnSize = solution.get(0).getSize();
try {
FileWriter writer = new FileWriter("test.csv");
Iterator result = solution.iterator();
while(result.hasNext()) {
for(int i = 0; i < columnSize; i++) {
CharSequence element = (CharSequence)result.next();
writer.append(element);
if(i < columnSize - 1)
writer.append(',');
}
writer.append('\n');
}
}
catch(Exception e)
{
e.printStackTrace();
}
}
And the error message is the following:
java.lang.ClassCastException: myPackage.MyClass cannot be cast to java.lang.CharSequence
How can I solve this problem? Thanks.
Try:
String element = result.next().toString();
writer.append(element);
if(i < columnSize - 1)
writer.append(',');
toString() returns a String, which already implements CharSequence, so once you call it there is no need for the cast at all.
Try calling the toString() method explicitly, like:
CharSequence element = (CharSequence)result.next().toString();
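Note that the loop in the question also advances the iterator once per column, so each cell would come from a different list element. A corrected sketch of the whole method, assuming the usual java.io imports and a hypothetical getValues() accessor on MyClass (the field is private in the question), could look like this:
private void saveSolutionToCSV(List<MyClass> solution) {
    try (FileWriter writer = new FileWriter("test.csv")) {       // closed automatically
        for (MyClass row : solution) {                            // one list element per CSV row
            Object[] values = row.getValues();                    // hypothetical accessor
            for (int i = 0; i < values.length; i++) {
                writer.append(String.valueOf(values[i]));         // String implements CharSequence
                if (i < values.length - 1)
                    writer.append(',');
            }
            writer.append('\n');
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}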
CsvWriter.java
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.util.Collection;
/**
* Composes CSV data for a collection of objects. Relies on their {@link Object#toString toString()} implementation.<br>
* Can use a memory buffer for the data or write straight to the output stream.
* <p>
* Example 1:
*
* <pre>
* Object[][] data = ...
*
* CsvWriter writer = new CsvWriter();
* writer.write("Column 1", "Column 2", ... "Column N");
* for (Object[] values : data)
* writer.write(values);
* ...
* System.out.println(writer.toString());
* </pre>
* <p>
* Example 2:
*
* <pre>
* Object[][] data = ...
*
* CsvWriter writer = null;
* try {
* writer = new CsvWriter(new FileOutputStream(new File("data.csv")));
* writer.write("Column 1", "Column 2", ... "Column N");
* for (Object[] values : data)
* writer.write(values);
* } finally {
* writer.close();
* }
* </pre>
*
* @author Mykhaylo Adamovych
*/
public class CsvWriter implements Closeable {
public static final String NULL_MARK = "";
public static final String QUOTE = "\"";
private String nullMark = NULL_MARK;
private final PrintWriter pw;
private ByteArrayOutputStream baos;
/**
* Creates a temporary buffer in memory.<br>
* Use {@link #toString()} afterwards. No need to close.
*/
public CsvWriter() {
baos = new ByteArrayOutputStream();
pw = new PrintWriter(baos, true);
}
/**
* Doesn't consume memory for the CSV data; writes straight to the output stream. Just like {@link FilterOutputStream}, but deals with objects and relies on
* their {@link Object#toString toString()} implementation.<br>
* Close the writer afterwards.
*
* @param os
* output stream to write data.
*/
public CsvWriter(OutputStream os) {
pw = new PrintWriter(os, true);
}
protected String composeRecord(Object... values) {
if (values == null || values.length == 0)
return "";
final StringBuffer csvRecord = new StringBuffer();
csvRecord.append(QUOTE);
csvRecord.append(composeValue(values[0]));
csvRecord.append(QUOTE);
for (int i = 1; i < values.length; i++) {
csvRecord.append("," + QUOTE);
csvRecord.append(composeValue(values[i]));
csvRecord.append(QUOTE);
}
return csvRecord.toString();
}
protected String composeValue(Object value) {
if (value == null)
return nullMark;
return value.toString().replaceAll(QUOTE, QUOTE + QUOTE);
}
public String getNullMark() {
return nullMark;
}
public void setNullMark(String nullMarker) {
nullMark = nullMarker;
}
@Override
public String toString() {
if (baos == null)
throw new UnsupportedOperationException();
return baos.toString();
}
/**
* Writes a collection of objects as a CSV record.
*
* @param values
*/
public void write(Collection<?> values) {
write(values.toArray());
}
/**
* Writes objects as a CSV record.
*
* @param values
*/
public void write(Object... values) {
pw.println(composeRecord(values));
}
@Override
public void close() throws IOException {
if (baos != null)
throw new UnsupportedOperationException();
pw.close();
}
}
Example:
public class MyClass {
    private Object[][] data;
    //...
    public String toCsv() {
        CsvWriter writer = new CsvWriter();
        writer.write("Column1", "Column2");
        for (final Object[] values : data)
            writer.write(values);
        return writer.toString();
    }
}
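Applied to the original question, a minimal usage sketch (again assuming a hypothetical getValues() accessor on MyClass, since the values array is private):
private String solutionToCsv(List<MyClass> solution) {
    CsvWriter writer = new CsvWriter();      // in-memory buffer, no close needed
    for (MyClass item : solution)
        writer.write(item.getValues());      // each element becomes one CSV record
    return writer.toString();
}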
I'm currently learning to develop a simple blockchain program that reads sample data from a .txt file and creates a new block for every 10 transactions. If the given sample data is 23 lines of transactions, is there a way to make a final block that consists of the last 3 transactions?
Current Output
Block[header=Header[index=0,currHash=51aa6b7cf5fb821189d58b5c995b4308370888efcaac469d79ad0a5d94fb0432, prevHash=0, timestamp=1654785847112], tranx=null]
Block[header=Header[index=0,currHash=92b3582095e2403c68401448e8a34864e8465d0ea51c05f11c23810ec36b4868, prevHash=0, timestamp=1654785847385], tranx=Transaction [tranxLst=[alice|bob|credit|1.0, alice|bob|debit|2.0, alice|bob|debit|3.0, alice|bob|credit|4.0, alice|bob|debit|5.0, alice|bob|credit|6.0, alice|bob|debit|7.0, alice|bob|debit|8.0, alice|bob|debit|9.0, alice|bob|debit|10.0]]]
Block[header=Header[index=0,currHash=7488c600433d78e0fb8586e71a010b1d39a040cb101cc6e3418668d21b614519, prevHash=0, timestamp=1654785847386], tranx=Transaction [tranxLst=[alice|bob|credit|11.0, alice|bob|credit|12.0, alice|bob|debit|13.0, alice|bob|debit|14.0, alice|bob|credit|15.0, alice|bob|credit|16.0, alice|bob|credit|17.0, alice|bob|debit|18.0, alice|bob|credit|19.0, alice|bob|credit|20.0]]]
What I want
Block[header=Header[index=0,currHash=51aa6b7cf5fb821189d58b5c995b4308370888efcaac469d79ad0a5d94fb0432, prevHash=0, timestamp=1654785847112], tranx=null]
Block[header=Header[index=0,currHash=92b3582095e2403c68401448e8a34864e8465d0ea51c05f11c23810ec36b4868, prevHash=0, timestamp=1654785847385], tranx=Transaction [tranxLst=[alice|bob|credit|1.0, alice|bob|debit|2.0, alice|bob|debit|3.0, alice|bob|credit|4.0, alice|bob|debit|5.0, alice|bob|credit|6.0, alice|bob|debit|7.0, alice|bob|debit|8.0, alice|bob|debit|9.0, alice|bob|debit|10.0]]]
Block[header=Header[index=0,currHash=7488c600433d78e0fb8586e71a010b1d39a040cb101cc6e3418668d21b614519, prevHash=0, timestamp=1654785847386], tranx=Transaction [tranxLst=[alice|bob|credit|11.0, alice|bob|credit|12.0, alice|bob|debit|13.0, alice|bob|debit|14.0, alice|bob|credit|15.0, alice|bob|credit|16.0, alice|bob|credit|17.0, alice|bob|debit|18.0, alice|bob|credit|19.0, alice|bob|credit|20.0]]]
Block[header=Header[index=0,currHash=7488c600433d78e0fb8586e71a010b1d39a040cb101cc6e3418668d21b614520, prevHash=0, timestamp=1654785847387], tranx=Transaction [tranxLst=[alice|bob|credit|21.0, alice|bob|credit|22.0, alice|bob|debit|23.0]]]
my code:
Client app
public static void main(String[] args) throws IOException {
homework();
}
static void homework() throws IOException {
int count = 0;
Transaction tranxLst = new Transaction();
Block genesis = new Block("0");
System.out.println(genesis);
BufferedReader bf = new BufferedReader(new FileReader("dummytranx.txt"));
String line = bf.readLine();
while (line != null) {
tranxLst.add(line);
line = bf.readLine();
count++;
if (count % 10 == 0) {
Block newBlock = new Block(genesis.getHeader().getPrevHash());
newBlock.setTranx(tranxLst);
System.out.println(newBlock);
tranxLst.getTranxLst().clear();
}
}
bf.close();
}
Transaction class
public class Transaction implements Serializable {
public static final int SIZE = 10;
/**
* we will comeback to generate the merkle root ie., hash of merkle tree
* merkleRoot = hash
*/
private String merkleRoot = "9a0885f8cd8d94a57cd76150a9c4fa8a4fed2d04c244f259041d8166cdfeca1b8c237b2c4bca57e87acb52c8fa0777da";
// private String merkleRoot;
public String getMerkleRoot() {
return merkleRoot;
}
public void setMerkleRoot(String merkleRoot) {
this.merkleRoot = merkleRoot;
}
/**
* For the data collection, u may want to choose classic array or collection api
*/
private List<String> tranxLst;
public List<String> getTranxLst() {
return tranxLst;
}
public Transaction() {
tranxLst = new ArrayList<>(SIZE);
}
/**
* add()
*/
public void add(String tranx) {
tranxLst.add(tranx);
}
@Override
public String toString() {
return "Transaction [tranxLst=" + tranxLst + "]";
}
}
Block class
public class Block implements Serializable {
private Header header;
public Header getHeader() {
return header;
}
private Transaction tranx;
public Block(String previousHash) {
header = new Header();
header.setTimestamp(new Timestamp(System.currentTimeMillis()).getTime());
header.setPrevHash(previousHash);
String blockHash = Hasher.sha256(getBytes());
header.setCurrHash(blockHash);
}
/**
* getBytes of the Block object
*/
private byte[] getBytes() {
try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream out = new ObjectOutputStream(baos);) {
out.writeObject(this);
return baos.toByteArray();
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
public Transaction getTranx() {
return tranx;
}
/**
* aggregation rel
*/
public void setTranx(Transaction tranx) {
this.tranx = tranx;
}
/**
* composition rel
*/
public class Header implements Serializable {
private int index;
private String currHash, prevHash;
private long timestamp;
// getset methods
public String getCurrHash() {
return currHash;
}
public int getIndex() {
return index;
}
public void setIndex(int index) {
this.index = index;
}
public void setCurrHash(String currHash) {
this.currHash = currHash;
}
public String getPrevHash() {
return prevHash;
}
public void setPrevHash(String prevHash) {
this.prevHash = prevHash;
}
public long getTimestamp() {
return timestamp;
}
public void setTimestamp(long timestamp) {
this.timestamp = timestamp;
}
@Override
public String toString() {
return "Header [index=" + index + ", currHash=" + currHash + ", prevHash=" + prevHash + ", timestamp="
+ timestamp + "]";
}
}
@Override
public String toString() {
return "Block [header=" + header + ", tranx=" + tranx + "]";
}
}
Instead of using a counter in the conditional statement, try a for loop:
static void homework() throws IOException {
Transaction tranxLst = new Transaction();
Block genesis = new Block("0");
System.out.println(genesis);
BufferedReader bf = new BufferedReader(new FileReader("dummytranx.txt"));
String line = bf.readLine();
while (line != null) {
for (int i = 0; i < 10; i++) {
tranxLst.add(line);
line = bf.readLine();
if (line == null) {
break;
}
}
Block newBlock = new Block(genesis.getHeader().getPrevHash());
newBlock.setTranx(tranxLst);
System.out.println(newBlock);
tranxLst.getTranxLst().clear();
}
bf.close();
}
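One caveat if the blocks are stored somewhere (for example in a list) rather than only printed: every block above shares the single Transaction instance, so clearing it also empties the transactions held by earlier blocks. A variant that allocates a fresh Transaction per block (same classes and imports as the client app above) might look like this:
static void homework() throws IOException {
    Block genesis = new Block("0");
    System.out.println(genesis);
    try (BufferedReader bf = new BufferedReader(new FileReader("dummytranx.txt"))) {
        String line = bf.readLine();
        while (line != null) {
            Transaction tranxLst = new Transaction();  // fresh list for every block
            for (int i = 0; i < Transaction.SIZE && line != null; i++) {
                tranxLst.add(line);
                line = bf.readLine();
            }
            Block newBlock = new Block(genesis.getHeader().getPrevHash());
            newBlock.setTranx(tranxLst);
            System.out.println(newBlock);
        }
    }
}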
I have written the code below to read Avro schema records from a Kafka topic. I took the .avsc file, generated a class (pengine) using Maven, and I am reading records with SpecificAvroRecord. I am able to read these records successfully. Now I need to do some validation on these records and insert them into a table.
package com.example.consumer;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
public class PayKafkaSpecifcAvro {
public static void main(String[] args) {
// setting properties
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, "true");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");
//name topic
String topic = "pengine";
// create the consumer
KafkaConsumer<String, pengine> consumer = new KafkaConsumer<String, pengine>(props);
//subscribe to topic
consumer.subscribe(Collections.singleton(topic));
System.out.println("Waiting for the data...");
while (true) {
ConsumerRecords<String, pengine> records = consumer.poll(Duration.ofMillis(5000));
for (ConsumerRecord<String, pengine> record : records) {
System.out.println(record.value());
System.out.println((record.value().getVcp()));
consumer.commitSync();
}
}
}
}
Output:
The output is printed in JSON format. How can I convert it into a String and compare it? I need to compare the ACH and VCP values, and if they are the same, flag that row as an error. Converting these records to strings would also help me insert them into the database.
pengine class:
package com.example.consumer; /**
* Autogenerated by Avro
*
* DO NOT EDIT DIRECTLY
*/
import org.apache.avro.message.BinaryMessageDecoder;
import org.apache.avro.message.BinaryMessageEncoder;
import org.apache.avro.message.SchemaStore;
import org.apache.avro.specific.SpecificData;
import org.apache.avro.util.Utf8;
@org.apache.avro.specific.AvroGenerated
public class pengine extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
private static final long serialVersionUID = -3169039590588895557L;
public static final org.apache.avro.Schema SCHEMA$ = new org.apache.avro.Schema.Parser().parse("{\"type\":\"record\",\"name\":\"pengine\",\"fields\":[{\"name\":\"tin\",\"type\":\"string\"},{\"name\":\"ach\",\"type\":\"string\"},{\"name\":\"vcp\",\"type\":\"string\"}]}");
public static org.apache.avro.Schema getClassSchema() { return SCHEMA$; }
private static SpecificData MODEL$ = new SpecificData();
private static final BinaryMessageEncoder<pengine> ENCODER =
new BinaryMessageEncoder<pengine>(MODEL$, SCHEMA$);
private static final BinaryMessageDecoder<pengine> DECODER =
new BinaryMessageDecoder<pengine>(MODEL$, SCHEMA$);
/**
* Return the BinaryMessageEncoder instance used by this class.
* @return the message encoder used by this class
*/
public static BinaryMessageEncoder<pengine> getEncoder() {
return ENCODER;
}
/**
* Return the BinaryMessageDecoder instance used by this class.
* @return the message decoder used by this class
*/
public static BinaryMessageDecoder<pengine> getDecoder() {
return DECODER;
}
/**
* Create a new BinaryMessageDecoder instance for this class that uses the specified {@link SchemaStore}.
* @param resolver a {@link SchemaStore} used to find schemas by fingerprint
* @return a BinaryMessageDecoder instance for this class backed by the given SchemaStore
*/
public static BinaryMessageDecoder<pengine> createDecoder(SchemaStore resolver) {
return new BinaryMessageDecoder<pengine>(MODEL$, SCHEMA$, resolver);
}
/**
* Serializes this pengine to a ByteBuffer.
* @return a buffer holding the serialized data for this instance
* @throws java.io.IOException if this instance could not be serialized
*/
public java.nio.ByteBuffer toByteBuffer() throws java.io.IOException {
return ENCODER.encode(this);
}
/**
* Deserializes a pengine from a ByteBuffer.
* @param b a byte buffer holding serialized data for an instance of this class
* @return a pengine instance decoded from the given buffer
* @throws java.io.IOException if the given bytes could not be deserialized into an instance of this class
*/
public static pengine fromByteBuffer(
java.nio.ByteBuffer b) throws java.io.IOException {
return DECODER.decode(b);
}
private CharSequence tin;
private CharSequence ach;
private CharSequence vcp;
/**
* Default constructor. Note that this does not initialize fields
* to their default values from the schema. If that is desired then
* one should use <code>newBuilder()</code>.
*/
public pengine() {}
/**
* All-args constructor.
* @param tin The new value for tin
* @param ach The new value for ach
* @param vcp The new value for vcp
*/
public pengine(CharSequence tin, CharSequence ach, CharSequence vcp) {
this.tin = tin;
this.ach = ach;
this.vcp = vcp;
}
public SpecificData getSpecificData() { return MODEL$; }
public org.apache.avro.Schema getSchema() { return SCHEMA$; }
// Used by DatumWriter. Applications should not call.
public Object get(int field$) {
switch (field$) {
case 0: return tin;
case 1: return ach;
case 2: return vcp;
default: throw new org.apache.avro.AvroRuntimeException("Bad index");
}
}
// Used by DatumReader. Applications should not call.
@SuppressWarnings(value="unchecked")
public void put(int field$, Object value$) {
switch (field$) {
case 0: tin = (CharSequence)value$; break;
case 1: ach = (CharSequence)value$; break;
case 2: vcp = (CharSequence)value$; break;
default: throw new org.apache.avro.AvroRuntimeException("Bad index");
}
}
/**
* Gets the value of the 'tin' field.
* @return The value of the 'tin' field.
*/
public CharSequence getTin() {
return tin;
}
/**
* Sets the value of the 'tin' field.
* @param value the value to set.
*/
public void setTin(CharSequence value) {
this.tin = value;
}
/**
* Gets the value of the 'ach' field.
* @return The value of the 'ach' field.
*/
public CharSequence getAch() {
return ach;
}
/**
* Sets the value of the 'ach' field.
* @param value the value to set.
*/
public void setAch(CharSequence value) {
this.ach = value;
}
/**
* Gets the value of the 'vcp' field.
* @return The value of the 'vcp' field.
*/
public CharSequence getVcp() {
return vcp;
}
/**
* Sets the value of the 'vcp' field.
* @param value the value to set.
*/
public void setVcp(CharSequence value) {
this.vcp = value;
}
/**
* Creates a new pengine RecordBuilder.
* @return A new pengine RecordBuilder
*/
public static pengine.Builder newBuilder() {
return new pengine.Builder();
}
/**
* Creates a new pengine RecordBuilder by copying an existing Builder.
* @param other The existing builder to copy.
* @return A new pengine RecordBuilder
*/
public static pengine.Builder newBuilder(pengine.Builder other) {
if (other == null) {
return new pengine.Builder();
} else {
return new pengine.Builder(other);
}
}
/**
* Creates a new pengine RecordBuilder by copying an existing pengine instance.
* @param other The existing instance to copy.
* @return A new pengine RecordBuilder
*/
public static pengine.Builder newBuilder(pengine other) {
if (other == null) {
return new pengine.Builder();
} else {
return new pengine.Builder(other);
}
}
/**
* RecordBuilder for pengine instances.
*/
@org.apache.avro.specific.AvroGenerated
public static class Builder extends org.apache.avro.specific.SpecificRecordBuilderBase<pengine>
implements org.apache.avro.data.RecordBuilder<pengine> {
private CharSequence tin;
private CharSequence ach;
private CharSequence vcp;
/** Creates a new Builder */
private Builder() {
super(SCHEMA$);
}
/**
* Creates a Builder by copying an existing Builder.
* @param other The existing Builder to copy.
*/
private Builder(pengine.Builder other) {
super(other);
if (isValidValue(fields()[0], other.tin)) {
this.tin = data().deepCopy(fields()[0].schema(), other.tin);
fieldSetFlags()[0] = other.fieldSetFlags()[0];
}
if (isValidValue(fields()[1], other.ach)) {
this.ach = data().deepCopy(fields()[1].schema(), other.ach);
fieldSetFlags()[1] = other.fieldSetFlags()[1];
}
if (isValidValue(fields()[2], other.vcp)) {
this.vcp = data().deepCopy(fields()[2].schema(), other.vcp);
fieldSetFlags()[2] = other.fieldSetFlags()[2];
}
}
/**
* Creates a Builder by copying an existing pengine instance
* @param other The existing instance to copy.
*/
private Builder(pengine other) {
super(SCHEMA$);
if (isValidValue(fields()[0], other.tin)) {
this.tin = data().deepCopy(fields()[0].schema(), other.tin);
fieldSetFlags()[0] = true;
}
if (isValidValue(fields()[1], other.ach)) {
this.ach = data().deepCopy(fields()[1].schema(), other.ach);
fieldSetFlags()[1] = true;
}
if (isValidValue(fields()[2], other.vcp)) {
this.vcp = data().deepCopy(fields()[2].schema(), other.vcp);
fieldSetFlags()[2] = true;
}
}
/**
* Gets the value of the 'tin' field.
* @return The value.
*/
public CharSequence getTin() {
return tin;
}
/**
* Sets the value of the 'tin' field.
* @param value The value of 'tin'.
* @return This builder.
*/
public pengine.Builder setTin(CharSequence value) {
validate(fields()[0], value);
this.tin = value;
fieldSetFlags()[0] = true;
return this;
}
/**
* Checks whether the 'tin' field has been set.
* @return True if the 'tin' field has been set, false otherwise.
*/
public boolean hasTin() {
return fieldSetFlags()[0];
}
/**
* Clears the value of the 'tin' field.
* @return This builder.
*/
public pengine.Builder clearTin() {
tin = null;
fieldSetFlags()[0] = false;
return this;
}
/**
* Gets the value of the 'ach' field.
* @return The value.
*/
public CharSequence getAch() {
return ach;
}
/**
* Sets the value of the 'ach' field.
* @param value The value of 'ach'.
* @return This builder.
*/
public pengine.Builder setAch(CharSequence value) {
validate(fields()[1], value);
this.ach = value;
fieldSetFlags()[1] = true;
return this;
}
/**
* Checks whether the 'ach' field has been set.
* @return True if the 'ach' field has been set, false otherwise.
*/
public boolean hasAch() {
return fieldSetFlags()[1];
}
/**
* Clears the value of the 'ach' field.
* @return This builder.
*/
public pengine.Builder clearAch() {
ach = null;
fieldSetFlags()[1] = false;
return this;
}
/**
* Gets the value of the 'vcp' field.
* @return The value.
*/
public CharSequence getVcp() {
return vcp;
}
/**
* Sets the value of the 'vcp' field.
* @param value The value of 'vcp'.
* @return This builder.
*/
public pengine.Builder setVcp(CharSequence value) {
validate(fields()[2], value);
this.vcp = value;
fieldSetFlags()[2] = true;
return this;
}
/**
* Checks whether the 'vcp' field has been set.
* @return True if the 'vcp' field has been set, false otherwise.
*/
public boolean hasVcp() {
return fieldSetFlags()[2];
}
/**
* Clears the value of the 'vcp' field.
* @return This builder.
*/
public pengine.Builder clearVcp() {
vcp = null;
fieldSetFlags()[2] = false;
return this;
}
@Override
@SuppressWarnings("unchecked")
public pengine build() {
try {
pengine record = new pengine();
record.tin = fieldSetFlags()[0] ? this.tin : (CharSequence) defaultValue(fields()[0]);
record.ach = fieldSetFlags()[1] ? this.ach : (CharSequence) defaultValue(fields()[1]);
record.vcp = fieldSetFlags()[2] ? this.vcp : (CharSequence) defaultValue(fields()[2]);
return record;
} catch (org.apache.avro.AvroMissingFieldException e) {
throw e;
} catch (Exception e) {
throw new org.apache.avro.AvroRuntimeException(e);
}
}
}
@SuppressWarnings("unchecked")
private static final org.apache.avro.io.DatumWriter<pengine>
WRITER$ = (org.apache.avro.io.DatumWriter<pengine>)MODEL$.createDatumWriter(SCHEMA$);
@Override public void writeExternal(java.io.ObjectOutput out)
throws java.io.IOException {
WRITER$.write(this, SpecificData.getEncoder(out));
}
@SuppressWarnings("unchecked")
private static final org.apache.avro.io.DatumReader<pengine>
READER$ = (org.apache.avro.io.DatumReader<pengine>)MODEL$.createDatumReader(SCHEMA$);
@Override public void readExternal(java.io.ObjectInput in)
throws java.io.IOException {
READER$.read(this, SpecificData.getDecoder(in));
}
@Override protected boolean hasCustomCoders() { return true; }
@Override public void customEncode(org.apache.avro.io.Encoder out)
throws java.io.IOException
{
out.writeString(this.tin);
out.writeString(this.ach);
out.writeString(this.vcp);
}
@Override public void customDecode(org.apache.avro.io.ResolvingDecoder in)
throws java.io.IOException
{
org.apache.avro.Schema.Field[] fieldOrder = in.readFieldOrderIfDiff();
if (fieldOrder == null) {
this.tin = in.readString(this.tin instanceof Utf8 ? (Utf8)this.tin : null);
this.ach = in.readString(this.ach instanceof Utf8 ? (Utf8)this.ach : null);
this.vcp = in.readString(this.vcp instanceof Utf8 ? (Utf8)this.vcp : null);
} else {
for (int i = 0; i < 3; i++) {
switch (fieldOrder[i].pos()) {
case 0:
this.tin = in.readString(this.tin instanceof Utf8 ? (Utf8)this.tin : null);
break;
case 1:
this.ach = in.readString(this.ach instanceof Utf8 ? (Utf8)this.ach : null);
break;
case 2:
this.vcp = in.readString(this.vcp instanceof Utf8 ? (Utf8)this.vcp : null);
break;
default:
throw new java.io.IOException("Corrupt ResolvingDecoder.");
}
}
}
}
}
record.value() is a pengine object, so you can use record.value().getAch() to get the value of ach, and getVcp() for vcp. You can then compare them with the required values. You get the JSON-like text when printing because println automatically calls the object's toString() method.
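A minimal sketch of that check inside the existing poll loop, converting the Avro CharSequence fields (often Utf8 instances) to String before comparing:
for (ConsumerRecord<String, pengine> record : records) {
    pengine value = record.value();
    String ach = String.valueOf(value.getAch());   // CharSequence -> String
    String vcp = String.valueOf(value.getVcp());
    if (ach.equals(vcp)) {
        // same ACH and VCP: flag this row as an error
        System.out.println("Error row: " + value);
    }
    // the String values can also be bound to a JDBC INSERT here
    consumer.commitSync();
}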
I tried to save data as a JSON string in a .txt file using Gson and then restore it, also using Gson. Things go well when I do it in Eclipse, but when the program is packaged into a jar, Gson throws exceptions.
Here is the code for saving the file.
String gsonStr = gson.toJson(masterShips); // masterShips is ArrayList<Ship>
BufferedWriter writer = null;
try {
writer = new BufferedWriter(new FileWriter("D:\\master_ship.txt"));
writer.write(gsonStr);
} catch (IOException e) {
System.err.println(e);
} finally {
if (writer != null) {
try {
writer.close();
} catch (IOException e) {
System.err.println(e);
}
}
}
Then I read the file in Eclipse using this code (and it works):
Scanner in = new Scanner(new FileReader("D:\\master_ship.txt"));
String str = in.nextLine();
Log.toDebug(str);
in.close();
JsonParser parser = new JsonParser();
JsonElement je = parser.parse(str);
JsonArray ja = je.getAsJsonArray();
for (int i=0; i<ja.size(); ++i) {
...
}
But after packaging it into a jar and running it from cmd, an exception occurs:
Exception in thread "main" com.google.gson.JsonSyntaxException: com.google.gson.
stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malform
ed JSON at line 1 column 4
at com.google.gson.JsonParser.parse(JsonParser.java:65)
at com.google.gson.JsonParser.parse(JsonParser.java:45)
at kan.util.Master.loadMasterShip(Master.java:44)
at kan.util.Master.load(Master.java:27)
at kan.Main.main(Main.java:22)
Caused by: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLeni
ent(true) to accept malformed JSON at line 1 column 4
at com.google.gson.stream.JsonReader.syntaxError(JsonReader.java:1505)
at com.google.gson.stream.JsonReader.checkLenient(JsonReader.java:1386)
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:531)
at com.google.gson.stream.JsonReader.peek(JsonReader.java:414)
at com.google.gson.JsonParser.parse(JsonParser.java:60)
... 4 more
Following the hint in the exception, I changed my code, and it still works in Eclipse:
Scanner in = new Scanner(new FileReader("D:\\master_ship.txt"));
String str = in.nextLine();
in.close();
Reader reader = new StringReader(str);
JsonReader jr = new JsonReader(reader);
jr.setLenient(true);
JsonParser parser = new JsonParser();
JsonElement je = parser.parse(jr);
JsonArray ja = je.getAsJsonArray();
for (int i=0; i<ja.size(); ++i) {
...
}
But the jar still fails and throws:
Exception in thread "main" java.lang.IllegalStateException: This is not a JSON A
rray.
at com.google.gson.JsonElement.getAsJsonArray(JsonElement.java:106)
at kan.util.Master.loadMasterShip(Master.java:58)
at kan.util.Master.load(Master.java:30)
at kan.Main.main(Main.java:22)
As suggested by Sotirios, I cut the length of the ArrayList down; when I increase the number of ships to 4, things go wrong. Here is the JSON:
[{"id":1,"name":"睦月","type":2,"rank":2,"fuelMax":15,"bulletMax":15,"slotNum":2,"speed":10,"afterLv":20,"afterId":254,"range":1,"powerups":[1,1,0,0]},{"id":2,"name":"如月","type":2,"rank":1,"fuelMax":15,"bulletMax":15,"slotNum":2,"speed":10,"afterLv":20,"afterId":255,"range":1,"powerups":[0,1,0,0]},{"id":6,"name":"長月","type":2,"rank":1,"fuelMax":15,"bulletMax":15,"slotNum":2,"speed":10,"afterLv":20,"afterId":258,"range":1,"powerups":[0,1,0,0]},{"id":7,"name":"三日月","type":2,"rank":1,"fuelMax":15,"bulletMax":15,"slotNum":2,"speed":10,"afterLv":20,"afterId":260,"range":1,"powerups":[0,1,0,0]}]
↑ column 473
Exception in thread "main" com.google.gson.JsonSyntaxException: com.google.gson.
stream.MalformedJsonException: Unterminated object at line 1 column 473
at com.google.gson.internal.Streams.parse(Streams.java:56)
at com.google.gson.JsonParser.parse(JsonParser.java:84)
at kan.util.Master.loadMasterShip(Master.java:55)
at kan.util.Master.load(Master.java:30)
at kan.Main.main(Main.java:22)
Caused by: com.google.gson.stream.MalformedJsonException: Unterminated object at
line 1 column 473
at com.google.gson.stream.JsonReader.syntaxError(JsonReader.java:1505)
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:480)
at com.google.gson.stream.JsonReader.hasNext(JsonReader.java:403)
at com.google.gson.internal.bind.TypeAdapters$25.read(TypeAdapters.java:
666)
at com.google.gson.internal.bind.TypeAdapters$25.read(TypeAdapters.java:
659)
at com.google.gson.internal.bind.TypeAdapters$25.read(TypeAdapters.java:
642)
at com.google.gson.internal.Streams.parse(Streams.java:44)
... 4 more
Can anyone help me with this? It would be really appreciated!
Use this class:
import java.util.List;
public class GsonResponse
{
public int id;
public String name;
public int type;
public int rank;
public int fuelMax;
public int bulletMax;
public int slotNum;
public int speed;
public int afterLv;
public int afterId;
public int range;
public List<Integer> powerups;
/**
* @return the id
*/
public int getId() {
return id;
}
/**
* @param id the id to set
*/
public void setId(int id) {
this.id = id;
}
/**
* @return the name
*/
public String getName() {
return name;
}
/**
* @param name the name to set
*/
public void setName(String name) {
this.name = name;
}
/**
* @return the type
*/
public int getType() {
return type;
}
/**
* @param type the type to set
*/
public void setType(int type) {
this.type = type;
}
/**
* @return the rank
*/
public int getRank() {
return rank;
}
/**
* @param rank the rank to set
*/
public void setRank(int rank) {
this.rank = rank;
}
/**
* @return the fuelMax
*/
public int getFuelMax() {
return fuelMax;
}
/**
* @param fuelMax the fuelMax to set
*/
public void setFuelMax(int fuelMax) {
this.fuelMax = fuelMax;
}
/**
* @return the bulletMax
*/
public int getBulletMax() {
return bulletMax;
}
/**
* @param bulletMax the bulletMax to set
*/
public void setBulletMax(int bulletMax) {
this.bulletMax = bulletMax;
}
/**
* @return the slotNum
*/
public int getSlotNum() {
return slotNum;
}
/**
* @param slotNum the slotNum to set
*/
public void setSlotNum(int slotNum) {
this.slotNum = slotNum;
}
/**
* @return the speed
*/
public int getSpeed() {
return speed;
}
/**
* @param speed the speed to set
*/
public void setSpeed(int speed) {
this.speed = speed;
}
/**
* @return the afterLv
*/
public int getAfterLv() {
return afterLv;
}
/**
* @param afterLv the afterLv to set
*/
public void setAfterLv(int afterLv) {
this.afterLv = afterLv;
}
/**
* @return the afterId
*/
public int getAfterId() {
return afterId;
}
/**
* @param afterId the afterId to set
*/
public void setAfterId(int afterId) {
this.afterId = afterId;
}
/**
* @return the range
*/
public int getRange() {
return range;
}
/**
* @param range the range to set
*/
public void setRange(int range) {
this.range = range;
}
/**
* @return the powerups
*/
public List<Integer> getPowerups() {
return powerups;
}
/**
* @param powerups the powerups to set
*/
public void setPowerups(List<Integer> powerups) {
this.powerups = powerups;
}
}
Just add the code below where you parse. The JSON above is a top-level array, so parse it into a GsonResponse[] (or a List<GsonResponse>):
String strJson = "[{\"id\":1,\"name\":\"睦月\",\"type\":2,\"rank\":2,\"fuelMax\":15,\"bulletMax\":15,\"slotNum\":2,\"speed\":10,\"afterLv\":20,\"afterId\":254,\"range\":1,\"powerups\":[1,1,0,0]},{\"id\":2,\"name\":\"如月\",\"type\":2,\"rank\":1,\"fuelMax\":15,\"bulletMax\":15,\"slotNum\":2,\"speed\":10,\"afterLv\":20,\"afterId\":255,\"range\":1,\"powerups\":[0,1,0,0]},{\"id\":6,\"name\":\"長月\",\"type\":2,\"rank\":1,\"fuelMax\":15,\"bulletMax\":15,\"slotNum\":2,\"speed\":10,\"afterLv\":20,\"afterId\":258,\"range\":1,\"powerups\":[0,1,0,0]},{\"id\":7,\"name\":\"三日月\",\"type\":2,\"rank\":1,\"fuelMax\":15,\"bulletMax\":15,\"slotNum\":2,\"speed\":10,\"afterLv\":20,\"afterId\":260,\"range\":1,\"powerups\":[0,1,0,0]}]";
GsonResponse[] gsonResponse = null;
try {
    // the JSON is a top-level array, so deserialize into an array of GsonResponse
    gsonResponse = new Gson().fromJson(strJson, GsonResponse[].class);
} catch (Exception e) {
// TODO: handle exception
e.printStackTrace();
}
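If a List is preferred over an array, a sketch using Gson's TypeToken (part of the standard Gson API) would be:
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.List;

public class ShipParser {
    // Parses the JSON array above into a List of GsonResponse objects.
    public static List<GsonResponse> parse(String strJson) {
        Type listType = new TypeToken<List<GsonResponse>>() {}.getType();
        return new Gson().fromJson(strJson, listType);
    }
}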
I understand Linux can be installed on a machine with very little physical memory. Is there any way to read a 4 GB log file using 1 GB of RAM? Basically, I want to read a few lines from a large log file, and those lines can be anywhere in the file.
-If someone can help me understand how the Linux tail implementation works, I can try the same in Java.
-At least, which Java I/O package should I use to read lines on demand, rather than loading the complete file into physical memory and then reading it line by line?
I think you can use something like this:
/**
* TODO: Description goes here.
* @author Rajakrishna V. Reddy
* @version 1.0
*/
@InterfaceAudience.Public
@InterfaceStability.Evolving
public abstract class FileTailer extends AbstractFileNotificationListener
{
/** The logger to log the debugging messages as application runs. */
private final Logger logger;
/** The tail text. */
private final StringBuilder tailText = new StringBuilder(256);
/** The file watcher. */
private final FileWatcher fileWatcher;
/**
* Instantiates a new file tailer.
*
* @param logFileName
* the log file name
* @throws FileNotFoundException
* the file not found exception
*/
public FileTailer(String logFileName) throws FileNotFoundException
{
logger = Logger.getLogger(FileTailer.class);
fileWatcher = FileWatcher.getFileWatcher();
fileWatcher.registerFileNotificationListener(this, logFileName);
fileWatcher.start();
}
/*
* (non-Javadoc)
*
* @see
* com.mt.filewatcher.listener.AbstractFileNotificationListener#onModifyFile
* (com.mt.filewatcher.info.FileInfo, com.mt.filewatcher.info.FileInfo)
*/
@Override
public final synchronized void onModifyFile(FileInfo oldFileInfo, FileInfo newFileInfo)
{
tailText.setLength(0);
final String name = oldFileInfo.getAbsolutePath();
RandomAccessFile randomFile = null;
try
{
randomFile = new RandomAccessFile(new File(name), "r");
final long offset = getOffset(oldFileInfo, newFileInfo);
if (offset > 0)
{
randomFile.seek(offset-1); // use native seek method to skip bytes
}
byte[] bytes = new byte[1024];
int noOfRead = -1;
while ((noOfRead = randomFile.read(bytes)) > 0) // Best approach than reading line by line.
{
for (int i = 0; i < noOfRead; i++)
{
tailText.append((char)bytes[i]);
}
bytes = new byte[1024];
}
final String tailedText = tailText.toString();
tail(tailedText);
if (tailedText != null && tailedText.contains("\n"))
{
final String[] lines = tailedText.split("\n");
for (String line : lines)
{
tailLineByLine(line);
}
}
else
{
tailLineByLine(tailedText);
}
}
catch (IOException e)
{
logger.error("Exception in tailing: ", e);
}
finally
{
if (randomFile != null)
{
try
{
randomFile.close();
}
catch (IOException e)
{
logger.error("Exception in closing the file: ", e);
}
}
}
}
/**
* Gets the tailed text.
* @see #tailLineByLine(String)
*
* @param tailedText
* the tailed text
*/
public abstract void tail(String tailedText);
/**
* Gets the line by line of the tailed text of the file as it is appending
* into file.
* <br><b>Note: This line does not contain the new line character.</b>
*
* @param tailedLine
* the tailed line
*/
public void tailLineByLine(String tailedLine)
{
//TODO
}
/**
* Gets the offset of the given old FileInfo.
*
* @param oldFileInfo
* the old file info
* @param newFileInfo
* the new file info
* @return the offset
*/
protected long getOffset(FileInfo oldFileInfo, FileInfo newFileInfo)
{
if (newFileInfo.getSize() < oldFileInfo.getSize())
{
return -1;
}
return oldFileInfo.getSize();
}
/**
* The main method.
*
* @param args
* the args
* @throws FileNotFoundException
* the file not found exception
*/
public static void main(String[] args) throws FileNotFoundException
{
System.setProperty(LoggerConstants.LOG_CLASS_FQ_NAME, ConsoleLogger.class.getName());
//new FileTailer("/krishna/Project/Hyperic/hq-hq/tools/unit_tests").onModifyFile(new FileInfo("/krishna/Project/Hyperic/hq-hq/tools/unit_tests/Raja/hi.txt"), null);
}
}
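For the "read lines on demand" part of the question, java.io.RandomAccessFile alone is enough: it lets you seek near the end of the file and read only a bounded block, which is roughly what tail does. A minimal sketch (the file name is just an example; very long lines get truncated to the block size):
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class SimpleTail {
    // Prints up to the last 'n' lines of the file, reading only a bounded
    // block from the end instead of loading the whole file into memory.
    public static void tail(String fileName, int n, int maxBlockBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(fileName, "r")) {
            long fileLength = raf.length();
            int toRead = (int) Math.min(maxBlockBytes, fileLength);
            raf.seek(fileLength - toRead);          // jump near the end of the file
            byte[] buffer = new byte[toRead];
            raf.readFully(buffer);
            String text = new String(buffer, StandardCharsets.UTF_8);
            String[] lines = text.split("\n");
            int start = Math.max(0, lines.length - n);
            for (int i = start; i < lines.length; i++) {
                System.out.println(lines[i]);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        tail("application.log", 10, 64 * 1024);     // hypothetical file name
    }
}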
I have a library that contains a bunch of static .lib files, and I wish to access them from JNA (a Java library that allows one to call DLLs dynamically from Java code). Is there a way to magically turn a static lib into a DLL?
The code was compiled using Visual Studio (hope that is relevant), and I also have the appropriate header files.
I do not have access to the source code, and I would like to do it using only free (as in beer) tools.
I'm not aware of any tools that will do this automatically, but the process is to create a DLL project and add your libraries to the project. For each function in the header file:
int SomeLibFunc( int x, int y );
you would need to create and export your own function in the DLL:
int MyFunc( int x, int y ) {
return SomeLibFunc( x, y );
}
The process is quite mechanical, and you may be able to knock up a script using something like perl to create the DLL source files.
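Once the wrapper DLL is built, the JNA side is a plain interface mapping. A minimal sketch, assuming the wrapper is compiled as mylibwrap.dll and exports MyFunc as above (Native.load is the JNA 5 call; older versions use Native.loadLibrary):
import com.sun.jna.Library;
import com.sun.jna.Native;

public class MyLib {
    // Maps the exported C function: int MyFunc(int x, int y)
    public interface MyLibWrap extends Library {
        MyLibWrap INSTANCE = Native.load("mylibwrap", MyLibWrap.class); // mylibwrap.dll on the library path
        int MyFunc(int x, int y);
    }

    public static void main(String[] args) {
        System.out.println(MyLibWrap.INSTANCE.MyFunc(2, 3));
    }
}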
Assuming you don't have access to the source, you can simply create a wrapper DLL that exports the functions you need and delegates to the static library.
I did as anon suggested and wrote an automatic converter. (Someone suggested that just putting __declspec(dllexport) before each declaration and compiling the DLL with that header would work -- well, it didn't, or maybe I did something wrong -- I'm a plain old Java programmer ;).)
It basically parses header files and turns:
SADENTRY SadLoadedMidFiles( HMEM, USHORT usMaxMidFiles, VOID * );
to:
__declspec(dllexport) SADENTRY DLL_WRAPPER_SadLoadedMidFiles(HMEM param0,
USHORT usMaxMidFiles, VOID* param2){
return SadLoadedMidFiles(param0, usMaxMidFiles, param2);
}
Here is the code (most probably it's regex abuse, but it works); the GUI part depends on MigLayout:
package cx.ath.jbzdak.diesIrae.util.wrappergen;
import net.miginfocom.swing.MigLayout;
import javax.swing.*;
import static java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment;
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.*;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* Displays a window. In this window you have to specify two things:
* <p/>
* 1. Name of header file that you want to process.
* <p/>
* 2. Name of output files extension will be added automatically. We will override any existing files.
*
* <p/>
* Dependencies: MigLayout
* <p/>
* Actual wrapper generation is done inside WrapperGen class.
* <p/>
* KNOWN ISSUES:
* <p/>
* 1. Ignores the preprocessor, so it may extract function names that are inside <code>#if false</code>.
* <p/>
* 2. Ignores comments
* <p/>
* 3. May fail to parse weird parameter syntax...
*
* Created by IntelliJ IDEA.
* User: Jacek Bzdak
*/
public class WrapperGenerator {
public static final Charset charset = Charset.forName("UTF-8");
WrapperGen generator = new WrapperGen();
// GUI CODE:
File origHeader, targetHeader, targetCpp;
JTextField newHeaderFileName;
JFrame wrapperGeneratorFrame;
{
wrapperGeneratorFrame = new JFrame();
wrapperGeneratorFrame.setTitle("Zamknij mnie!"); // "Close me!" -- Wrapper generator
wrapperGeneratorFrame.setLayout( new MigLayout("wrap 2, fillx", "[fill,min!]"));
wrapperGeneratorFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
ActionListener buttonListener = new ActionListener() {
JFileChooser fileChooser = new JFileChooser();
{
fileChooser.setFileFilter(new javax.swing.filechooser.FileFilter() {
@Override
public boolean accept(File f) {
return f.isDirectory() || f.getName().matches(".*\\.h(?:pp)?");
}
@Override
public String getDescription() {
return "Header files";
}
});
fileChooser.setCurrentDirectory(new File("C:\\Documents and Settings\\jb\\My Documents\\Visual Studio 2008\\Projects\\dll\\dll"));
}
public void actionPerformed(ActionEvent e) {
if(JFileChooser.APPROVE_OPTION == fileChooser.showOpenDialog(wrapperGeneratorFrame)){
origHeader = fileChooser.getSelectedFile();
}
}
};
wrapperGeneratorFrame.add(new JLabel("Original header file"));
JButton jButton = new JButton("Select header file");
jButton.addActionListener(buttonListener);
wrapperGeneratorFrame.add(jButton);
wrapperGeneratorFrame.add(new JLabel("Result files prefix"));
newHeaderFileName = new JTextField("dll_wrapper");
wrapperGeneratorFrame.add(newHeaderFileName);
ActionListener doListener = new ActionListener() {
public void actionPerformed(ActionEvent e) {
targetHeader = new File(origHeader.getParentFile(), newHeaderFileName.getText() + ".h");
targetCpp = new File(origHeader.getParentFile(), newHeaderFileName.getText() + ".cpp");
try {
targetHeader.createNewFile();
targetCpp.createNewFile();
generator.reader = new InputStreamReader(new FileInputStream(origHeader),charset);
generator.cppWriter = new OutputStreamWriter(new FileOutputStream(targetCpp), charset);
generator.heaerWriter = new OutputStreamWriter(new FileOutputStream(targetHeader), charset);
generator.parseReader();
} catch (IOException e1) {
e1.printStackTrace();
JOptionPane.showMessageDialog(wrapperGeneratorFrame, "ERROR:" + e1.getMessage(), "Error", JOptionPane.ERROR_MESSAGE);
return;
}
}
};
JButton go = new JButton("go");
go.addActionListener(doListener);
wrapperGeneratorFrame.add(go, "skip 1");
}
public static void main(String []args){
SwingUtilities.invokeLater(new Runnable() {
public void run() {
WrapperGenerator wgen = new WrapperGenerator();
JFrame f = wgen.wrapperGeneratorFrame;
wgen.wrapperGeneratorFrame.pack();
Point p = getLocalGraphicsEnvironment().getCenterPoint();
wgen.wrapperGeneratorFrame.setLocation(p.x-f.getWidth()/2, p.y-f.getHeight()/2);
wgen.wrapperGeneratorFrame.setVisible(true);
}
});
}
}
/**
* Does the code parsing and generation
*/
class WrapperGen{
/**
* Matches method declarations, which basically have the syntax: <code>(anything apart from some special chars like #;) functionName(anything)</code>;
* Method declarations may span many lines.
*/
private static final Pattern METHOD_PATTERN =
//1 //2 //params
Pattern.compile("([^#;{}]*\\s+\\w[\\w0-9_]+)\\(([^\\)]*)\\);", Pattern.MULTILINE);
//1 - specifiers - including stuff like __dllspec(export)...
//2 - function name
//3 param list
/**
* Generated functions will have their names prefixed with #RESULT_PREFIX
*/
private static final String RESULT_PREFIX = "DLL_WRAPPER_";
/**
* Specifiers of result will be prefixed with #RESULT_SPECIFIER
*/
private static final String RESULT_SPECIFIER = "__declspec(dllexport) ";
Reader reader;
Writer heaerWriter;
Writer cppWriter;
public void parseReader() throws IOException {
StringWriter writer = new StringWriter();
int read;
while((read = reader.read())!=-1){
writer.write(read);
}
reader.close();
heaerWriter.append("#pragma once\n\n\n");
heaerWriter.append("#include \"stdafx.h\"\n\n\n"); //Standard Visual C++ import file.
cppWriter.append("#include \"stdafx.h\"\n\n\n");
Matcher m = METHOD_PATTERN.matcher(writer.getBuffer());
while(m.find()){
System.out.println(m.group());
handleMatch(m);
}
cppWriter.close();
heaerWriter.close();
}
public void handleMatch(Matcher m) throws IOException {
Method meth = new Method(m);
outputHeader(meth);
outputCPP(meth);
}
private void outputDeclaration(Method m, Writer writer) throws IOException {
//writer.append(RESULT_SPECIFIER);
writer.append(m.specifiers);
writer.append(" ");
writer.append(RESULT_PREFIX);
writer.append(m.name);
writer.append("(");
for (int ii = 0; ii < m.params.size(); ii++) {
Parameter p = m.params.get(ii);
writer.append(p.specifiers);
writer.append(" ");
writer.append(p.name);
if(ii!=m.params.size()-1){
writer.append(", ");
}
}
writer.append(")");
}
public void outputHeader(Method m) throws IOException {
outputDeclaration(m, heaerWriter);
heaerWriter.append(";\n\n");
}
public void outputCPP(Method m) throws IOException {
cppWriter.append(RESULT_SPECIFIER);
outputDeclaration(m, cppWriter);
cppWriter.append("{\n\t");
if (!m.specifiers.contains("void") || m.specifiers.matches(".*void\\s*\\*.*")) {
cppWriter.append("return ");
}
cppWriter.append(m.name);
cppWriter.append("(");
for (int ii = 0; ii < m.params.size(); ii++) {
Parameter p = m.params.get(ii);
cppWriter.append(p.name);
if(ii!=m.params.size()-1){
cppWriter.append(", ");
}
}
cppWriter.append(");\n");
cppWriter.append("}\n\n");
}
}
class Method{
private static final Pattern NAME_REGEXP =
//1 //2
Pattern.compile("\\s*(.*)\\s+(\\w[\\w0-9]+)\\s*", Pattern.MULTILINE);
//1 - all specifiers - including __declspec(dllexport) and such ;)
//2 - function name
public final List<Parameter> params;
public final String name;
public final String specifiers;
public Method(Matcher m) {
params = Collections.unmodifiableList(Parameter.parseParamList(m.group(2)));
Matcher nameMather = NAME_REGEXP.matcher(m.group(1));
System.out.println("ALL: " + m.group());
System.out.println("G1: " + m.group(1));
if(!nameMather.matches()){
throw new IllegalArgumentException("for string " + m.group(1));
}
// nameMather.find();
specifiers = nameMather.group(1);
name = nameMather.group(2);
}
}
class Parameter{
static final Pattern PARAMETER_PATTERN =
//1 //2
Pattern.compile("\\s*(?:(.*)\\s+)?([\\w\\*&]+[\\w0-9]*[\\*&]?)\\s*");
//1 - Most probably parameter type and specifiers, but may also be empty - in which case name is empty, and specifiers are in 2
//2 - Most probably parameter type, sometimes prefixed with ** or &* ;), also
// 'char *' will be parsed as group(1) == char, group(2) = *.
/**
* Used to check whether the group that represents the parameter name is in fact a param specifier like '*'.
*/
static final Pattern STAR_PATTERN =
Pattern.compile("\\s*([\\*&]?)+\\s*");
/**
* If
*/
static final Pattern NAME_PATTERN =
Pattern.compile("\\s*([\\*&]+)?(\\w[\\w0-9]*)\\s*");
public final String name;
public final String specifiers;
public Parameter(String param, int idx) {
System.out.println(param);
Matcher m = PARAMETER_PATTERN.matcher(param);
String name = null;
String specifiers = null;
if(!m.matches()){
throw new IllegalStateException(param);
}
name = m.group(2);
specifiers = m.group(1);
if(specifiers==null || specifiers.isEmpty()){ //Case that parameter has no name like 'int', or 'int**'
specifiers = name;
name = null;
}else if(STAR_PATTERN.matcher(name).matches()){ //Case that parameter has no name like 'int *'
specifiers += name;
name = null;
}else if(NAME_PATTERN.matcher(name).matches()){ //Checks if name contains part of type like '**ptrData', and extracts '**'
Matcher m2 = NAME_PATTERN.matcher(name);
m2.matches();
if(m2.group(1)!=null){
specifiers += m2.group(1);
name = m2.group(2);
}
}
if(name==null){
name = "param" + idx;
}
this.specifiers = specifiers;
this.name = name;
}
public static List<Parameter> parseParamList(String paramList){
List<Parameter> result = new ArrayList<Parameter>();
String[] params = paramList.split(",");
int idx = 0;
for(String param : params){
Parameter p = new Parameter(param, idx++);
result.add(p);
}
if(result.size()==1){
Parameter p = result.get(0);
if(p.specifiers.matches("\\s*void\\s*")){
return Collections.emptyList();
}
}
return result;
}
}