I am trying to follow the Autodesk Forge Viewer tutorial:
https://developer.autodesk.com/en/docs/model-derivative/v2/tutorials/prepare-file-for-viewer/
I have successfully uploaded and downloaded a DWG file, but on the step where I convert it to SVF the job never seems to process; it fails with:
{"input":{"urn":"Safe Base64 encoded value of the output of the upload result"},"output":{"formats":[{"type":"svf","views":["2d","3d"]}]}}
HTTP/1.1 400 Bad Request
Result{"diagnostic":"Failed to trigger translation for this file."}
First question: do I need to remove the "urn:" prefix before Base64 encoding?
Second: is there any more verbose error result that I can see?
Note: I have also tried with an RVT file, and I have tried "type":"thumbnail"; nothing seems to work.
I feel my encoded URN is incorrect, but I am not sure why it would be.
On the tutorial page they seem to have a much longer raw URN; I am not sure if I should be appending something else to mine before encoding. Theirs has a version and some other number.
From the tutorial (raw):
"urn:adsk.a360betadev:fs.file:business.lmvtest.DS5a730QTbf1122d0751814909a776d191611?version=12"
Mine (raw):
"urn:adsk.objects:os.object:gregbimbucket/XXX"
EDIT:
This is what I get back from the upload of a DWG file:
HTTP/1.1 200 OK
Result{
"bucketKey" : "gregbimbucket",
"objectId" : "urn:adsk.objects:os.object:gregbimbucket/XXX",
"objectKey" : "XXX",
"sha1" : "xxxx",
"size" : 57544,
"contentType" : "application/octet-stream",
"location" : "https://developer.api.autodesk.com/oss/v2/buckets/gregbimbucket/objects/XXX"
}
This is what I send to convert the file:
{"input":{"urn":"dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6Z3JlZ2JpbWJ1Y2tldC9YWFg"},"output":{"formats":[{"type":"svf","views":["2d","3d"]}]}}
This is the error I get back:
HTTP/1.1 400 Bad Request
Result{"diagnostic":"Failed to trigger translation for this file."}
EDIT 2: SOLUTION
It looks like the objectId of an uploaded file has to end with the file extension, rather than a GUID or random set of characters, so that the service knows what file type it is and can convert it:
"objectId" : "urn:adsk.objects:os.object:gregbimbucket/Floor_sm.dwg",
We have Apache NiFi set up to write files to a local drive and then run a program that processes these files and writes its output to the "response" attribute. This output is a JSON string that we then deliver to an API to update records.
However, while we can successfully write, read and process the files, NiFi fails to understand non-English characters in the response text, which leads to names being corrupted when we send back the response. This only applies to the JSON string we receive from the program.
NiFi is running in a Windows 10 environment. When we run the program manually on the files output by NiFi, we get correct output; the issue only happens inside NiFi.
To give an example, the input JSON is:
{
"player" : "mörkö",
"target" : "goal",
"didhitin" : ""
}
This is stored in our program's work folder, and we call the program using ExecuteStreamCommand, giving our input JSON file as the parameter. The JSON is processed and our program outputs the following JSON, which is then stored into the "response" attribute of the flowfile:
{
"player" : "mörkö",
"target" : "goal",
"didhitin" : "true"
}
However, when this is read by NiFi into the "response" attribute of the flowfile, it becomes:
{
"player" : "m¤rk¤",
"target" : "goal",
"didhitin" : "true"
}
(Not the actual process, but close enough to demonstrate the issue.)
When we feed this into the API, it will either fail or corrupt the original name (in this case the value of player); neither is a desirable outcome.
So far we have figured out that this is most likely an encoding issue, but we have not found a way to change NiFi's encoding to fix the incorrectly read characters.
I managed to fix this issue by adding the following line to the start of the program:
Console.OutputEncoding = Encoding.UTF8;
This forces the program to output UTF-8, which is in line with the rest of the flow.
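For what it's worth, if the external program were a Java console application (ours is the .NET one fixed above, so this is just the analogous idea), the equivalent would be forcing stdout to UTF-8:

import java.io.PrintStream;

public class Utf8StdoutExample {
    public static void main(String[] args) throws Exception {
        // Force standard output to UTF-8 regardless of the Windows default
        // code page, so downstream readers such as NiFi see valid UTF-8.
        System.setOut(new PrintStream(System.out, true, "UTF-8"));
        System.out.println("{ \"player\" : \"mörkö\" }");
    }
}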
While uploading a doc file (for example test.doc) to the server (a Unix machine), I am using the Apache Commons upload jar, which gives me a FormFile instance on the server side holding all of the data as a byte array.
When I write that same byte array to the response output stream and send it to the browser to download the file, weird content is shown: I get a pop-up asking which encoding I would like to view the data in, and the doc shows garbled data. The content type is set as follows:
response.setContentType("application/msword");
response.setHeader("Content-Disposition", "attachment;filename=test.doc");
I think that while writing the data to the output stream, metadata related to the doc file is also written, which causes this issue.
Is there anything specific to the doc or docx file formats that needs to be done so the file comes out in the proper format and I can see the correct data I uploaded, or am I missing something?
Any help would be appreciated. Thanks in advance. Let me know if more info is required.
There is a known issue at Microsoft, with a documented workaround, for the encoding pop-up.
It may not be a fix for your problem, because I have not run any tests around it, but to check the correct MIME types please refer to this link:
https://technet.microsoft.com/en-us/library/ee309278(office.12).aspx
Updated:
You can set the response type to ArrayBuffer and wrap the content in a Blob:
var blob = new Blob([response], {type: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document'});
Or this could work:
response.setContentType("application/x-msdownload");
response.setHeader("Content-disposition", "attachment; filename="+ fileName);
I have been using a Python CGI script to fetch files from a database, and everything works fine, but for some reason there seems to be an extra byte added to the file. In the database the size is 10,265 bytes, but in the HTTP response the Content-Length is 10,266; the problem seems to come from the HTTP response itself. The files served are .jar files used by a Java application, which then cannot load them with its class loader because of this extra byte. The snippet used to serve the download from the server is:
import sqlite3

def printFileHeader():
    print 'Content-Type: text/plain;charset=utf-8'
    print

def downloadAddon(addon_id):
    dbConn = sqlite3.connect("addons.db")
    dbCursor = dbConn.cursor()
    dbCursor.execute("SELECT addon_file FROM uploaded WHERE id="+addon_id)
    blobl = dbCursor.fetchone()
    blobl = blobl[0]
    printFileHeader()
    print blobl
The downloadAddon() function is then called with the requested id, but no matter where I fetch the file from (a blob in the database or a direct file), the HTTP response always has that extra byte in the content, even though the file is fine on the server side. Any help is welcome.
PS: I know the header is not a proper file header; I put it this way for testing purposes.
I managed to "fix" the issue by providing the content length of the file in the header, so the code now looks like this:
def printFileHeader(size):
    print("Content-Disposition: attachment; filename=addon.jar")
    print("Content-Type: application/octet-stream")
    print("Content-Length: "+str(size))
    print

def downloadAddon(addon_id):
    dbCursor.execute("SELECT addon_file FROM uploaded WHERE id="+addon_id)
    blobl = dbCursor.fetchone()
    blobl = blobl[0]
    printFileHeader(len(blobl))
    print(blobl)
This solves the issue, but I still don't understand why, so explanations are still welcome. While checking the response before and after the fix, here are the last 6 bytes of the file:
Before (with extra byte): AAAAAK
After: AAAAA=
Any explanation as to why is appreciated.
I am getting a string from our database (a third-party tool), and I have trouble with one name: sometimes it comes back correctly as "Tarsøy" and everything runs smoothly, but sometimes it comes back as "Tars00F8y", and this ruins the process. I have tried to write a validator function using URLDecoder.decode(name, "UTF-8") that takes a string and returns a validated one, but it did not succeed.
This is how I get a string from our database:
Database.WIKI.get(index); // the index is the ID of the string; this is a NoSQL DB
Now, about "sometimes": it means that this code just behaves differently. =) I think it is connected with internal DB exceptions or something similar, so I am trying to do something like validate(Database.WIKI.get(index)).
Maybe I should try something like encoding the String to UTF-8.
In Java, JavaScript and (especially interesting here) JSON there exists the notation \u00F8 for ø. I think this was sent to the database, maybe from a specific browser on a specific computer locale; the \u disappeared, and voilà. Maybe it is still there as an invisible control character in the string. That would be convenient for repairs.
My guess is JSON data; however, JSON libraries normally parse u-escaped characters, so that is weird.
Check what happens when storing "x\\u00FDx". Is the char length 6 or maybe 7 (lucky)?
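If that check shows the backslash actually survived (i.e. the stored value is literally \u00F8, backslash included), a repair along these lines is possible (a sketch only; the names are mine):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UnicodeRepair {
    // Turn literal "\uXXXX" escape sequences that survived in the stored
    // string back into real characters. Only safe if the backslash is
    // actually present in the data.
    static String unescapeUnicode(String s) {
        Pattern p = Pattern.compile("\\\\u([0-9A-Fa-f]{4})");
        Matcher m = p.matcher(s);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            char c = (char) Integer.parseInt(m.group(1), 16);
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(unescapeUnicode("Tars\\u00F8y")); // prints Tarsøy
    }
}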
Some sanity checks, assuming you work in UTF-8, especially if the data arrives via HTML or JS:
Content-Type header text/html; charset=UTF-8
(Optional) meta tag with charset=UTF-8
<form action="..." accept-charset="UTF-8">
JSON: contentType: "application/json; charset=UTF-8"
I want to save an Arabic word into an Oracle database. A user types an Arabic word on the client side and submits it. On the client side I printed the word using an alert and it shows the Arabic text. But on the server side the word is shown in the Java console (using System.out.println) as شاØÙØ©, so it is stored in the DB as ????. I saw related posts, one of which discussed changing the 'text file encoding' to UTF-8 in Eclipse; I changed the 'text file encoding' to UTF-8, but there was no effect and it still shows the same characters as before, like شاØÙØ©. Then I changed the application's 'text file encoding' to UTF-8 and got the same output. I think the word is being sent to the DB in this form, which is why the DB shows ????. Is there any solution?
My code in Java:
vehicleServiceModel.setVehicleType(request.getParameter("vehicleType"));
System.out.println("vehicle Type : "+vehicleServiceModel.getVehicleType());
Client side:
jQuery.ajax({
    type: "GET",
    cache: false,
    url: "addvehicle.htm",
    data: {vehName: VehicleName, make: Make, model: Model, color: Color, plateNumber: PlateNumber, driverName: DriverName, vehicleType: VehicleType, vehTimeZone: vehTimeZone},
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: Success,
    error: Error
});

function Success(data, status) {
    //some code
}
I am answering this question myself; hopefully it will help someone else.
My issue was resolved by the following change in Java:
vehicleServiceModel.setVehicleType(request.getParameter("vehicleType"));
String str = vehicleServiceModel.getVehicleType();
// Re-interpret the ISO-8859-1-decoded parameter bytes as UTF-8
str = new String(str.getBytes("8859_1"), "UTF-8");
System.out.println("vehicle Type : " + str);
vehicleServiceModel.setVehicleType(str);
Now it is resolved and Arabic words are saved into the database.
For more details, please have a look at this.
On the command line, you need to set the code page to 1256 (Arabic). For storing Arabic text in a database, you need to set the column data to UTF-8. Also, make sure that your charset is set to UTF-8 (if you're building a web page).
I suggest you use UTF-8 all the way, i.e. in the web page, in Eclipse and your source code, and in the database (NLS_CHARACTERSET or define the column as NVARCHAR2). This way you will not need conversions.
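For example, one common way to declare UTF-8 up front on the web tier is a simple filter (a sketch; it assumes the standard servlet API, and note that GET query strings additionally follow the container's URI encoding, e.g. URIEncoding="UTF-8" on a Tomcat connector):

import java.io.IOException;
import javax.servlet.*;

public class Utf8Filter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Declare UTF-8 before parameters are read or the response is written,
        // so no byte-level re-encoding (getBytes / new String) is needed later.
        req.setCharacterEncoding("UTF-8");
        res.setCharacterEncoding("UTF-8");
        chain.doFilter(req, res);
    }
}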