I am using Jackson 2.5.1 (com.fasterxml.jackson.core.JsonGenerator) to write a JSON document to an output stream. It looks like the API allows us to write invalid JSON to the stream. I thought that if we try to write elements in the wrong context, it is supposed to throw a JsonGenerationException or IOException. Here is the code snippet:
try {
    JsonFactory jfactory = new JsonFactory();
    ByteArrayOutputStream b = new ByteArrayOutputStream();
    JsonGenerator jsonGenerator = jfactory.createJsonGenerator(b, JsonEncoding.UTF8);
    jsonGenerator.writeStartObject();
    jsonGenerator.writeStringField("str1", "blahblah");
    jsonGenerator.writeNumber(1234);
    jsonGenerator.writeEndObject();
    jsonGenerator.close();
    System.out.println(b.toString());
} catch (Exception e) {
    e.printStackTrace();
}
The output is: {"str1":"blahblah":1234} and it is not valid JSON. Is this expected behavior, or am I missing something? I thought the API itself tracks whether the objects are written in the correct context. Does it need to be enforced by the application itself? It is not clear from the documentation:
http://fasterxml.github.io/jackson-core/javadoc/2.0.0/com/fasterxml/jackson/core/JsonGenerator.html
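For comparison, if I pair the number with a field name (which is what I assumed the generator would require inside an object), the output is valid; this is just my own variation of the snippet above, with a made-up field name "num":

jsonGenerator.writeStartObject();
jsonGenerator.writeStringField("str1", "blahblah");
jsonGenerator.writeNumberField("num", 1234); // field name + value, instead of a bare value
jsonGenerator.writeEndObject();
jsonGenerator.close();
System.out.println(b.toString()); // {"str1":"blahblah","num":1234}

So the question is really why the bare writeNumber(1234) directly after writeStringField is not rejected.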
In a Java/Spring application, I have a POJO/bean. Let's call it CourseRequest. It is converted/serialized into a string and sent out in a PUT REST request. The conversion is done using org.codehaus.jackson.map.ObjectMapper. The sending is done using org.scribe.model.OAuthRequest. See simplified code:
OAuthRequest request = new OAuthRequest(...);
CourseRequest courseRequest = new CourseRequest(...);
ObjectMapper mapper = new ObjectMapper();
String courseJson = null;
try {
    // convert to json.
    courseJson = mapper.writeValueAsString(courseRequest);
    request.addPayload(courseJson);
    request.send();
} catch (JsonGenerationException e) {
    logger.error(e);
} catch (JsonMappingException e) {
    logger.error(e);
} catch (IOException e) {
    logger.error(e);
}
I have run into a problem where I need to remove a property instructorId from the JSON/payload before sending the request. Obviously, I can't just naively remove the property in CourseRequest because the POJO/bean is used in other places too.
So, what is the best way to do it?
I can't use @JsonIgnore or similar annotations, because:
I want to remove the property only under some conditions;
There are other requests elsewhere in the application that may want to keep the property.
Some initial thoughts:
As a "hack", I can do a regex replace on the courseJson to remove the property after mapper.writeValueAsString(courseRequest) but I think that's not a very clean way to do it.
I can parse the courseJson back into some kind of map and then remove the property but that's just clumsy.
P.S. I am using Jackson v1.9.13
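To make the second idea concrete, here is a minimal sketch of the tree-model variant, assuming Jackson 1.9's org.codehaus.jackson classes (ObjectMapper, JsonNode, ObjectNode); the shouldHideInstructor flag is a hypothetical stand-in for whatever condition decides whether to drop the field:

ObjectMapper mapper = new ObjectMapper();
String courseJson = mapper.writeValueAsString(courseRequest);

// Parse the JSON back into a tree, conditionally drop the field, re-serialize.
JsonNode tree = mapper.readTree(courseJson);
if (shouldHideInstructor && tree.isObject()) {   // hypothetical condition
    ((ObjectNode) tree).remove("instructorId");
    courseJson = mapper.writeValueAsString(tree);
}
request.addPayload(courseJson);

It avoids regex on the serialized string, at the cost of one extra parse per request.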
I'm working on a Spring Boot API that receives very large objects and tries to save them in a MongoDB database. Because of this, the program sometimes throws the following error:
org.bson.BsonMaximumSizeExceededException: Payload document size is larger than maximum of 16793600.
I read that MongoDB only permits documents smaller than 16 MB, which is very inconvenient for my system because an object can easily exceed that limit. To solve this I read about GridFS, a technology that allows you to go beyond the 16 MB limit.
Now I'm trying to implement GridFS in my system, but so far I have only seen examples that store files in the database, something like this:
gridFsOperations.store(new FileInputStream("/Users/myuser/Desktop/text.txt"), "myText.txt", "text/plain", metaData);
But what I want is not to take the data from a file, but for the API to receive an object and save it, something like this:
@PostMapping
public String save(@RequestBody Object object){
    DBObject metaData = new BasicDBObject();
    metaData.put("type", "data");
    gridFsOperations.store(object, metaData);
    return "Stored successfully...";
}
Is there a possible way of doing this?
Get an InputStream from the request and pass it to a GridFSBucket. Here's a rough example:
In your controller:
@PostMapping
public ResponseEntity<String> uploadFile(MultipartHttpServletRequest request) throws IOException
{
    Iterator<String> iterator = request.getFileNames();
    String filename = iterator.next();
    MultipartFile mf = request.getFile(filename);
    // I always have a service layer between controller and repository but for purposes of this example...
    myDao.uploadFile(filename, mf.getInputStream());
    return ResponseEntity.ok(filename);
}
In your DAO/repository:
private GridFSBucket bucket;

@Autowired
void setMongoDatabase(MongoDatabase db)
{
    bucket = GridFSBuckets.create(db);
}

public ObjectId uploadFile(String filename, InputStream is)
{
    Document metadata = new Document("type", "data");
    GridFSUploadOptions opts = new GridFSUploadOptions().metadata(metadata);
    ObjectId oid = bucket.uploadFromStream(filename, is, opts);
    try
    {
        is.close();
    }
    catch (IOException ioe)
    {
        throw new UncheckedIOException(ioe);
    }
    return oid;
}
I paraphrased this from existing code, so it may not be perfect, but it should be good enough to point you in the right direction.
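If the goal is to persist an arbitrary request body (a POJO) rather than an uploaded file, one option is to serialize it to bytes first and stream those into GridFS via the same store(...) overload the question already uses. A rough sketch only, assuming Jackson is available for serialization and gridFsOperations is an injected GridFsOperations/GridFsTemplate; the "payload.json" filename is just an example:

@PostMapping
public String save(@RequestBody Map<String, Object> payload) throws IOException
{
    // Serialize the incoming object to JSON bytes, then stream them into GridFS.
    byte[] json = new ObjectMapper().writeValueAsBytes(payload);
    DBObject metaData = new BasicDBObject();
    metaData.put("type", "data");
    gridFsOperations.store(new ByteArrayInputStream(json), "payload.json", "application/json", metaData);
    return "Stored successfully...";
}

GridFS then splits the stored data into chunks itself, so the 16 MB document limit no longer applies to the payload.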
We are migrating from GSA to Solr and are looking to keep the existing GSA Connectors to scrape our ECM systems.
GSA Connectors construct XML documents as follows:
<gsafeed>
  <header>
    <datasource>source</datasource>
    <feedtype>incremental</feedtype>
  </header>
  <group>
    <record url="..." displayurl="http://url.com/a/b" action="add" ...>
      <metadata>
        <meta name="Author" content="author@company.com"/>
        <meta name="DocIcon" content="pdf"/>
        ... bunch of other meta fields ...
      </metadata>
      <content encoding="base64compressed">...</content>
    </record>
  </group>
</gsafeed>
The <content> is not text but the document byte stream, compressed and then encoded to base64.
What I need is for Solr to ingest this XML, which will obviously need to be modified first.
So I've come up with this process:
Code a custom request handler which GSA will send that XML to. This looks like a decent place to start: https://stackoverflow.com/a/40568514/482261
The custom handler will modify the incoming request body: (a) decode and then decompress the <content> node data, (b) construct an XML that Solr can ingest
Forward this modified SolrQueryRequest to the /update/extract (class="solr.extraction.ExtractingRequestHandler") handler for Tika extraction
I am trying to build the custom handler. Doing CRUD on the request parameters is easy enough, but I am lost on how to deal with content streams.
Edit 1:
Solution posted.
Edit 2:
I now have a follow-up question. The posted solution works when the GSA feed contains only a single document. With multiple documents, each with their own metadata, things get a bit murky. I haven't decided on a way of dealing with that yet; once I do, it will be posted as a new question.
Here is what I have come up with to address the original question. Hopefully it helps someone. I have extracted the relevant bits from my working code, so please treat this as pseudo-code.
public class MyCustomRequestHandler extends ExtractingRequestHandler {

    @Override
    public void handleRequestBody(SolrQueryRequest originalReq, SolrQueryResponse rsp) throws Exception {
        Iterable<ContentStream> streams = originalReq.getContentStreams();
        ContentStream theStream = streams.iterator().next();
        InputStream is = theStream.getStream();

        // The stream is an XML document, so parse it. I used the XOM library for this
        // ("parser" is a XOM Builder instance).
        Document doc = parser.build(is);

        // Process accordingly:
        // 1. Convert the <meta> tags to a Map<String, String>
        SolrParams extractedSolrParams = new MapSolrParams(/* Map<String, String> of all <meta> fields in the GSA feed */);
        // 2. Take <content> and pass it to decodeUncompress()
        byte[] decodedUncompressedContent = decodeUncompress(/* <content> from the GSA feed */);

        // Once the parsing and processing is complete, construct a new Solr request
        LocalSolrQueryRequest localRequest = new LocalSolrQueryRequest(originalReq.getCore(), extractedSolrParams);
        List<ContentStream> newContentStreams = new ArrayList<ContentStream>();
        newContentStreams.add(new ContentStreamBase.ByteArrayStream(decodedUncompressedContent, "GSA Feed <content>"));
        localRequest.setContentStreams(newContentStreams);

        super.handleRequestBody(localRequest, rsp);
    }

    private byte[] decodeUncompress(byte[] data) throws IOException {
        // Decode
        byte[] decodedBytes = Base64.getDecoder().decode(data);

        // Uncompress
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        Inflater decompresser = new Inflater(false);
        InflaterOutputStream inflaterOutputStream = new InflaterOutputStream(stream, decompresser);
        try {
            inflaterOutputStream.write(decodedBytes);
        } catch (IOException e) {
            throw e;
        } finally {
            try {
                inflaterOutputStream.close();
            } catch (IOException e) {
                // nothing more we can do if close fails
            }
        }
        return stream.toByteArray();
    }
}
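As a quick sanity check (not part of the handler itself), decodeUncompress() is meant to invert the connector's "compress, then base64-encode" step, so a small round-trip with made-up sample text should print the original string:

byte[] original = "hello gsafeed".getBytes(StandardCharsets.UTF_8);

// Compress with Deflater and base64-encode: the same "base64compressed" form GSA sends.
ByteArrayOutputStream compressed = new ByteArrayOutputStream();
DeflaterOutputStream dos = new DeflaterOutputStream(compressed, new Deflater());
dos.write(original);
dos.close();
byte[] encoded = Base64.getEncoder().encode(compressed.toByteArray());

// decodeUncompress() should restore the original bytes.
System.out.println(new String(decodeUncompress(encoded), StandardCharsets.UTF_8)); // hello gsafeed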
I am working on a JavaSE application from which I would like to connect to a Spring-MVC based server to get a List of objects, or the objects themselves. I looked it up on the net and came upon JSON. While it is working, it feels very inefficient, as I have to go through the 2 while loops, and it does not seem very sophisticated. For this reason I researched and found out I can use Spring remoting to achieve the task.
One thing I would like to do is send over objects directly, instead of converting them to JSON and sending that.
I am pasting my code below for what I have with JSON. I would appreciate knowing whether this approach is fine, or whether Spring remoting is more sophisticated in the long term. A replacement for the client-side code would be nice. Thanks.
Client code:
public void getCanvas(){
    JsonFactory jsonFactory = new JsonFactory();
    String canvas = "";
    try {
        JsonParser jsonParser = jsonFactory.createJsonParser(new URL(canvasURL));
        JsonToken token = jsonParser.nextToken();
        while (token != JsonToken.START_ARRAY && token != null){
            token = jsonParser.nextToken();
            if(token == null){ break; }
            System.out.println("Token is " + jsonParser.getText());
        }
        while (token != JsonToken.END_ARRAY){
            token = jsonParser.nextToken();
            if(token == JsonToken.START_OBJECT){
                canvas = jsonParser.toString();
                System.out.println("Canvas is " + canvas);
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println("Canvas is " + canvas);
}
Server code:
@RequestMapping(value = "/getcanvas", method = RequestMethod.GET)
public @ResponseBody String getCanvasforFX(){
    System.out.println("Canvas was requested");
    Canvas canvas = this.canvasService.getCanvasById(10650);
    canvas.setCanvasimage(null);
    ObjectMapper objectMapper = new ObjectMapper();
    try {
        System.out.println("Canvas value is " + objectMapper.writeValueAsString(canvas));
        return objectMapper.writeValueAsString(canvas);
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
In the client code I am getting the information, but then I still have to read the fields, set them on an object, and update the UI. Even though I am also programming the server, I want to directly receive an object and cut out the middle-man (JSON). Thanks.
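For what it's worth, if the Canvas class (or a matching DTO) is available on the client, the whole token loop could in principle collapse into a single data-binding call; a minimal sketch, assuming the /getcanvas endpoint above and Jackson's ObjectMapper on the client:

public Canvas getCanvas() {
    ObjectMapper mapper = new ObjectMapper();
    try {
        // Read the JSON returned by the server and bind it straight onto a Canvas instance.
        return mapper.readValue(new URL(canvasURL), Canvas.class);
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}

Whether that beats Spring remoting in the long run is still the open question.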
I am playing around with Jersey and would like to know how one should implement a "download" feature. For example let's say I have some resources under /files/ that I would like to be "downloaded" via a GET how should I do this? I already know the proper annotations and implementations for GET, PUT, POST, DELETE, but I'm not quite sure how one should treat binary data in this case. Could somebody please point me in the right direction, or show me a simple implementation? I've had a look at the jersey-samples-1.4, but I can't seem to be able to find what I am looking for.
Many thanks!
You should use the @Produces annotation to specify which media type the file is (pdf, zip, etc.). The Javadoc for this annotation is part of the JAX-RS (javax.ws.rs) API.
Your server should return the created file. For example, in plain Java you can do something like this:
@GET
@Produces(MediaType.APPLICATION_OCTET_STREAM)
@Path("path")
public StreamingOutput getFile() {
    return new StreamingOutput() {
        public void write(OutputStream out) throws IOException, WebApplicationException {
            try {
                FileInputStream in = new FileInputStream(my_file);
                byte[] buffer = new byte[4096];
                int length;
                while ((length = in.read(buffer)) > 0){
                    out.write(buffer, 0, length);
                }
                in.close();
            } catch (Exception e) {
                throw new WebApplicationException(e);
            }
        }
    };
}
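Not part of the answer above, but if you also want the client or browser to treat the response as a file download, a common variation is to wrap the stream in a Response and set a Content-Disposition header; a rough sketch, assuming JAX-RS 2.x (where StreamingOutput can be a lambda), with "report.pdf" as a made-up file name and my_file reused from the example:

@GET
@Produces(MediaType.APPLICATION_OCTET_STREAM)
@Path("download")
public Response downloadFile() {
    // Same streaming logic as above, wrapped in a Response so headers can be set.
    StreamingOutput stream = out -> {
        try (FileInputStream in = new FileInputStream(my_file)) {
            byte[] buffer = new byte[4096];
            int length;
            while ((length = in.read(buffer)) > 0) {
                out.write(buffer, 0, length);
            }
        }
    };
    return Response.ok(stream)
            .header("Content-Disposition", "attachment; filename=\"report.pdf\"")
            .build();
}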