Bearer token working in Postman but not on the server - Java

The API changed some of its security configuration tonight, but I have been making server-side calls for a few months, so I rule out this being a problem on the server.
This is my configuration in Postman, where the same call with the Bearer token works fine.
This is my API call in Java:
{Authorization=Bearer XXXXXXXXXXXXXXXX, headers={Content-Type=application/json}, params={limit=50, state=published, page=1}, url=https://app.tuotempo.com/api/v3/tt_portal_fiatc_test/catalog}
The error I get back from the HTTP request in Java is:
{"result":"ERROR","return":[],"msg":"ACCESS RIGHT DENIED","exception":"TUOTEMPO_SERVICE_NOT_ALLOWED","execution_time":"","debug":"You need a valid access right for the instance tt_portal_fiatc_test"}
What am I missing?
EDIT: Additional info: the GET call in Java.
EDIT 2: I already tried without the "Bearer" prefix, sending just Authorization: XXXXX.
{Authorization=XXXXXXXXXXXXXXXX, headers={Content-Type=application/json}, params={limit=50, state=published, page=1}, url=https://app.tuotempo.com/api/v3/tt_portal_fiatc_test/catalog}
RESPONSE: {headers={content-type=application/json, transfer-encoding=chunked, vary=Accept-Encoding, expires=Thu, 19 Nov 1981 08:52:00 GMT, cache-control=no-cache, pragma=no-cache, set-cookie=lang=es; expires=Sun, 25-Dec-2022 05:31:28 GMT; Max-Age=2592000; path=/; secure; HttpOnly, x-status-code=403, date=Fri, 25 Nov 2022 05:31:28 GMT, connection=close}, status_code=403, reason_phrase=Forbidden, content=[B#241dde53}
CONTENT: {"result":"ERROR","return":[],"msg":"ACCESS RIGHT DENIED","exception":"TUOTEMPO_SERVICE_NOT_ALLOWED","execution_time":"","debug":"You need a valid access right for the instance tt_portal_fiatc_test"}

Try adding a User-Agent header, like a normal browser would send:
User-Agent = Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36
Postman automatically adds extra headers, including a User-Agent. You can copy the User-Agent value from Postman as well; it is probably in the hidden headers section.
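For example, with a plain HttpURLConnection (assuming that is roughly what your server-side code uses; your client wrapper isn't shown, so the class below is only an illustrative sketch with the endpoint taken from the log above and a placeholder token):

import java.net.HttpURLConnection;
import java.net.URL;

public class UserAgentCheck {
    public static void main(String[] args) throws Exception {
        // Endpoint and query parameters taken from the question's log; the token is a placeholder
        URL url = new URL("https://app.tuotempo.com/api/v3/tt_portal_fiatc_test/catalog?limit=50&state=published&page=1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer XXXXXXXXXXXXXXXX");
        // Copy the User-Agent Postman sends (visible under its hidden headers)
        conn.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36");
        System.out.println("HTTP " + conn.getResponseCode());
    }
}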

It was a silly thing in the end. As you can see in the logs (EDIT 2, for example), the Authorization value sits OUTSIDE the headers map; if it is not in the headers, the API does not read it at all.
There was also a security problem in the API: this is how we discovered that the (third-party) API had effectively been public since its implementation, which is why we never suspected that our call was badly built.
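For anyone hitting the same thing, here is a minimal sketch with java.net.http.HttpClient (Java 11+); our real client wrapper is different, so treat the class and token as placeholders. The point is that Authorization is passed as a request header, next to Content-Type, rather than as a separate top-level field of the request object:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CatalogCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://app.tuotempo.com/api/v3/tt_portal_fiatc_test/catalog?limit=50&state=published&page=1"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer XXXXXXXXXXXXXXXX") // placeholder token; it must travel as a header
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}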

Related

OneDrive API download in parts

I am currently trying to develop a Java based app to access OneDrive.
Today I tried to implement the download as described here: https://dev.onedrive.com/items/download.htm
I wanted to use the range parameter to offer the user the ability to pause large downloads. But no matter how I send the parameter, be it within the HTTP request header or in the URL as a GET parameter, it always sends me the complete file.
Things I have tried so far:
https://api.onedrive.com/v1.0/drive/items/***/content?range=0-8388607
(OAuth via HTTP header)
https://api.onedrive.com/v1.0/drive/items/***/content:
Header: Authorization: ***
range: 0-8388607
https://api.onedrive.com/v1.0/drive/items/***/content:
Header: Authorization: ***
range: bytes=0-8388607
I also tried Content-Range and various upper- and lower-case variations, without success. Any reason why this does not work?
Requesting a byte range of the file is supported. You might want to use Fiddler or some other tool to check whether the original headers are being passed along after the 302 redirect is performed. Below are the HTTP requests and responses when I provide the Range header, which is passed on after the 302 redirect occurs; you'll notice that an HTTP 206 Partial Content response is returned. Additionally, to resume a download you can use "Range: bytes=1025-" (or whatever the last byte received was). I hope that helps.
GET https://api.onedrive.com/v1.0/drive/items/item-id/content HTTP/1.1
Authorization: Bearer
Range: bytes=0-1024
Host: api.onedrive.com
HTTP/1.1 302 Found
Content-Length: 0
Location: https://kplnyq.dm2302.livefilestore.com/edited_location
Other headers removed
GET https://kplnyq.dm2302.livefilestore.com/edited_location
Range: bytes=0-1024
Host: kplnyq.dm2302.livefilestore.com
HTTP/1.1 206 Partial Content
Cache-Control: public
Content-Length: 1025
Content-Type: audio/mpeg
Content-Location: https://kplnyq.dm2302.livefilestore.com/edited_location
Content-Range: bytes 0-1024/4842585
Expires: Tue, 11 Aug 2015 21:34:52 GMT
Last-Modified: Mon, 12 Dec 2011 21:33:41 GMT
Accept-Ranges: bytes
Server: Microsoft-HTTPAPI/2.0
Other headers removed
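If you end up handling the redirect yourself from Java, a rough sketch of that flow could look like the following: disable automatic redirect handling so the Range header can be re-sent explicitly against the pre-authenticated download URL from the Location header (the item id and token are placeholders):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangedDownload {
    public static void main(String[] args) throws Exception {
        String itemUrl = "https://api.onedrive.com/v1.0/drive/items/ITEM_ID/content"; // placeholder item id
        String token = "ACCESS_TOKEN";                                               // placeholder token

        // First request: expect a 302 with a Location header
        HttpURLConnection first = (HttpURLConnection) new URL(itemUrl).openConnection();
        first.setInstanceFollowRedirects(false);
        first.setRequestProperty("Authorization", "Bearer " + token);
        first.setRequestProperty("Range", "bytes=0-1024");
        String location = first.getHeaderField("Location");
        first.disconnect();

        // Second request: re-send the Range header to the download host
        HttpURLConnection second = (HttpURLConnection) new URL(location).openConnection();
        second.setRequestProperty("Range", "bytes=0-1024");
        System.out.println("Status: " + second.getResponseCode()); // expect 206 Partial Content

        try (InputStream in = second.getInputStream()) {
            byte[] chunk = in.readAllBytes();
            System.out.println("Received " + chunk.length + " bytes");
        }
    }
}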

JDOM2 - Follow Redirects (HTTP Error 301)

I'm currently working on a third-party program for a website, using its public XML API. I don't want to go into deeper matters about what the program actually does, because there seems to be a problem right at the beginning. The website's API expects a client to follow redirects and to set a proper user agent to identify the application, but the JDOM2 library, which I use for this project, doesn't seem to do either of these things. Neither the SAXBuilder (org.jdom2.input) integrated in the package nor the native HttpURLConnection (java.net) class seems to do a proper job.
I'm very confused and don't know where to start at all. Is there any way to make the JDOM2 library follow redirects, or am I just missing a simple method call?
JDOM uses the URL given to the SAXBuilder to create a URL Connection, and from that connection, it opens an input stream to read the XML content.
While I understand that the HTTP protocol has a redirect functionality, that is something that is handled by the client.... consider this:
# curl -i 'http://stackoverflow.com/questions/24913206'
HTTP/1.1 301 Moved Permanently
Cache-Control: public, no-cache="Set-Cookie", max-age=60
Content-Type: text/html; charset=utf-8
Expires: Wed, 23 Jul 2014 18:44:06 GMT
Last-Modified: Wed, 23 Jul 2014 18:43:06 GMT
Location: /questions/24913206/jdom2-follow-redirects-http-error-301
Vary: *
X-Frame-Options: SAMEORIGIN
Set-Cookie: prov=xxxx.yyyy.zzzz; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
Date: Wed, 23 Jul 2014 18:43:05 GMT
Content-Length: 174
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
The data that will be given to JDOM when it builds from the URL http://stackoverflow.com/questions/24913206 will be the redirect / HTTP-301 to http://stackoverflow.com/questions/24913206/jdom2-follow-redirects-http-error-301, and the HTML content that makes that human readable.
Now, the URL handling API for Java just returns the input stream for JDOM. What you are suggesting is that JDOM should interpret that stream, and automatically redirect.
There are a few problems with this.
JDOM does not even know it is an HTTP URL. It is often a File name, or an FTP URL, etc.
What if you did not want to follow the redirect?
etc.
The other issue is that this should be either supported natively by Java, or actively by the application.
What are the real solutions:
Tell all HTTP requests in your application to follow redirects using: HttpURLConnection.setFollowRedirects(true)
Don't give JDOM a raw URL to build from, but process it yourself:
URL httpurl = new URL(.....);
HttpURLConnection conn = (HttpURLConnection) httpurl.openConnection();
conn.setInstanceFollowRedirects(true); // follow redirects for this connection only
conn.connect();
Document doc = saxBuilder.build(conn.getInputStream());
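Since your API also wants a proper user agent, a slightly fuller sketch could look like this (the User-Agent string and the helper class are just illustrative, not part of your API's requirements; also note that HttpURLConnection will not automatically hop between http and https, so use the final protocol in the URL):

import java.net.HttpURLConnection;
import java.net.URL;

import org.jdom2.Document;
import org.jdom2.input.SAXBuilder;

public class XmlFetcher {
    public static Document fetch(String address) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
        conn.setInstanceFollowRedirects(true);                        // follow redirects on this connection only
        conn.setRequestProperty("User-Agent", "MyXmlApiClient/1.0");  // identify the application to the API
        conn.connect();
        // Give JDOM the already-opened stream instead of the raw URL
        return new SAXBuilder().build(conn.getInputStream());
    }
}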

Tumblr API Photo Post Returns 401 (Not Authorized)

I'm attempting to use the Tumblr API in an Android app to authorize users and make text and photo posts. I'm using the Scribe library. So far, I can successfully obtain an access token and use it to get user info. I can also make text posts without any issues. This tells me that I'm signing requests correctly.
However, I've spent the last week and a half attempting to make photo posts without success. I continuously receive 401 (Not Authorized) errors. I've read through many posts on the Tumblr support forum as well as here on Stack Overflow, but was unable to find a solution.
I'm reluctant to include the Jumblr library because I'm trying to keep my app as lean as possible. That said, I reviewed the Jumblr code and decided to mimic how photo posts are sent (https://github.com/tumblr/jumblr/blob/master/src/main/java/com/tumblr/jumblr/request/MultipartConverter.java). I'm still receiving the exact same error.
Below is an example of my multipart POST request and the response I receive. I've replaced the blog name, OAuth signature, consumer key, and token values, and removed the binary image data for brevity's sake. Everything else is untouched. I have a few questions...
1. Are there any other variables that should be included in the multipart section? A Stack Overflow user stated that placing the "oauth_" signature variables in there fixed his problem. I didn't have success with this, but maybe there was something I was missing.
2. The Jumblr app doesn't appear to do any encoding of the image data, although the Tumblr documentation states that it should be URL encoded. Right now I'm sending it as the Jumblr app appears to (raw binary). Is this correct?
3. Does anything else in my request look incorrect?
REQUEST:
NOTE: I learned that the OAuth signature should be generated WITHOUT the multipart form. My code takes that into account when building this request!
POST http://api.tumblr.com/v2/blog/**REMOVED**.tumblr.com/post HTTP/1.1
Content-Type: multipart/form-data, boundary=cbe6b79db1b3cbe6b79e104e
Authorization: OAuth oauth_signature="**REMOVED**", oauth_version="1.0", oauth_nonce="3181201716", oauth_signature_method="HMAC-SHA1", oauth_consumer_key="**REMOVED**", oauth_timestamp="1388791537", oauth_token="**REMOVED**"
Content-Length: 1001
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.3; SM-N900T Build/JSS15J)
Host: api.tumblr.com
Connection: Keep-Alive
Accept-Encoding: gzip
--cbe6b79db1b3cbe6b79e104e
Content-Disposition: form-data; name="type"
photo
--cbe6b79db1b3cbe6b79e104e
Content-Disposition: form-data; name="caption"
Another pic test...
--cbe6b79db1b3cbe6b79e104e
Content-Disposition: form-data; name="data[0]"; filename="postr_media_file_1388791537-1709648435.jpg"
Content-Type: image/jpeg
---- BINARY DATA REMOVED FOR BREVITY ----
RESPONSE:
HTTP/1.1 401 Not Authorized
Server: nginx
Date: Fri, 03 Jan 2014 23:25:39 GMT
Content-Type: application/json; charset=utf-8
Transfer-Encoding: chunked
Connection: close
Set-Cookie: tmgioct=52c746f34266840643527780; expires=Mon, 01-Jan-2024 23:25:39 GMT; path=/; httponly
P3P: CP="ALL ADM DEV PSAi COM OUR OTRo STP IND ONL"
3c
{"meta":{"status":401,"msg":"Not Authorized"},"response":[]}
I posted the answer in the "Tumblr API Discussion" Google Group. This is what I did:
The key to doing it correctly is NOT just signing without the multipart form!!! Here are the steps...
1. Add all fields EXCEPT the data field as regular URL-encoded POST body variables.
2. Sign the request.
3. Remove ALL of the POST variables you just added from the request.
4. Add the multipart form, including the data field this time.
Some things to consider...
The Content-Type in the header should be "multipart/form-data"
The Content-Disposition of all form parts should be "form-data" and, of course, include a valid "name" attribute (ie. type, caption, etc...)
The Content-Disposition of the data part should also include a "filename" attribute
The only form part that should contain a Content-Type is data, and it should be set to the mime type of the file you are uploading (ie. "image/jpeg")
I used "data[0]" as the name of the data field. I haven't tested this with just "data", but according to everything I've read it should work that way as well. If you are creating a photo set, I believe you simple add additional parts (ie. data1. data[2], etc...). Again, I haven't tested anything except "data[0]", so do your due diligence!!!
I did NOT encode the binary image data!!! I saw people spending considerable amount of time on this in other posts when adding the image as a POST body variable. If doing this as a multipart form, you can skip the encoding and send raw binary data! ;-)
I hope this helps someone! I've spent two weeks banging my head on random solid objects trying to figure this out. The implementation is very easy to do, but there is zero documentation available on how exactly to build POST requests for photos properly. The official docs really should include that. If I had know what I just posted above I could have completed this in minutes instead of weeks!!!
The last request I posted earlier is still valid, but here it is again. Just remember what I mentioned about the signature!!!
REQUEST:
POST http://api.tumblr.com/v2/blog/REMOVED.tumblr.com/post HTTP/1.1
Content-Type: multipart/form-data, boundary=c60f7c041c02c60f7c046e9b
Authorization: OAuth oauth_signature="***REMOVED***", oauth_version="1.0", oauth_nonce="315351812", oauth_signature_method="HMAC-SHA1", oauth_consumer_key="***REMOVED***", oauth_timestamp="1388785116", oauth_token="***REMOVED***"
Content-Length: 1001
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.3; SM-N900T Build/JSS15J)
Host: api.tumblr.com
Connection: Keep-Alive
Accept-Encoding: gzip
--c60f7c041c02c60f7c046e9b
Content-Disposition: form-data; name="type"
photo
--c60f7c041c02c60f7c046e9b
Content-Disposition: form-data; name="caption"
Another pic test...
--c60f7c041c02c60f7c046e9b
Content-Disposition: form-data; name="data[0]"; filename="postr_media_file_1388785116-1709648435.jpg"
Content-Type: image/jpeg
***** BINARY DATA REMOVED FOR BREVITY *****
--c60f7c041c02c60f7c046e9b--
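To make the multipart construction concrete, here is a minimal Java sketch of building the same body. The field names and boundary handling mirror the example request above; the filename is arbitrary, and the OAuth signing is assumed to happen beforehand over the URL-encoded parameters, as described in the steps:

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class TumblrMultipartBody {

    // Builds the multipart/form-data body; the Authorization header is computed separately.
    static byte[] build(String boundary, String caption, byte[] jpegBytes) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String crlf = "\r\n";

        // Plain text parts: Content-Disposition only, no Content-Type
        out.write(("--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"type\"" + crlf + crlf
                + "photo" + crlf).getBytes(StandardCharsets.UTF_8));
        out.write(("--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"caption\"" + crlf + crlf
                + caption + crlf).getBytes(StandardCharsets.UTF_8));

        // The data part: filename attribute, the image mime type, and raw (un-encoded) bytes
        out.write(("--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"data[0]\"; filename=\"photo.jpg\"" + crlf
                + "Content-Type: image/jpeg" + crlf + crlf).getBytes(StandardCharsets.UTF_8));
        out.write(jpegBytes);

        // Closing boundary
        out.write((crlf + "--" + boundary + "--" + crlf).getBytes(StandardCharsets.UTF_8));
        return out.toByteArray();
    }
}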

Is escaping the Accept header necessary?

I have a Jersey app that has been run through our corporation's website vulnerability tool. It came back with a vulnerability that is quite odd. If you send in this header:
"*/*'"!#$^*\/:;.,?{}[]`~-_<sCrIpT>alert(81363)</sCrIpT>"
you actually get it back in the response and it is not escaped. Anyone think this is a problem?
Here is the actual response:
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)
Accept: */*'"!#$^*\/:;.,?{}[]`~-_<sCrIpT>alert(81363)</sCrIpT>
Pragma: no-cache
...
And one more thing. I just upgraded to jersey 1.14 and it still does this...
You mean in the produced error message? Something like:
< HTTP/1.1 400 Bad Request
< Content-Type: text/plain
< Date: Thu, 27 Sep 2012 14:30:30 GMT
< Connection: close
< Transfer-Encoding: chunked
<
* Closing connection #0
The HTTP header field "Accept" with value "..." could not be parsed
?
If so, we can definitely do something about it, please report this as a new RFE at http://java.net/jira/browse/JERSEY. (I wasn't able to reproduce anything else related to this issue).
Thanks!

HTTP conditional GET does not give a chance to refresh headers without sending the body again

I don't know if this is a bug or a feature of the HTTP spec, or if I am simply not understanding things correctly.
I have a resource that changes at most once a week, at the week's beginning. If it didn't change, then the previous week's resource continues to be valid for the whole week.
(For all our tests we shortened the one-week period to five minutes, but I think our observations are still valid.)
First we send the resource with the header Expires: next Monday. For the whole week the browser serves it from the cache. If on Monday we have a new resource, it is retrieved with its new headers and everything is fine.
The problem occurs when the resource is not renewed. In response to the conditional GET, our app (Java + Tomcat) sends new headers with Expires: next Monday, but without the body. However, our front-end server (Apache) strips this header, because the spec says you should not send new headers if the resource did not change. So from now on (until the resource changes) the browser will keep sending conditional GETs when we would like it to keep serving straight from the cache.
Is there a spec compliant way to update the headers without updating the body? (or sending it again)
And subquestion: how to make apache pass along tomcat's headers?
Just an Expires header is not enough. According to RFC 2616 section 13.3.4, a server needs to respond with two headers, Last-Modified and ETag, to do conditional GET right:
In other words, the preferred behavior for an HTTP/1.1 origin server is to send both a strong entity tag and a Last-Modified value.
And if the client is HTTP/1.1 compliant, it should send If-Modified-Since. Then the server is supposed to respond as follows (quoted from Roy Fielding's proposal to add conditional GET):
If resource is inaccessible (for whatever reason), then the server should return a 4XX message just like it does now.
If resource no longer exists, the server should return a 404 Not Found response (i.e. same as now).
If resource is accessible but its last modification date is earlier (less than) or equal to the date passed, the server should return a 304 Not Modified message (with no body).
If resource is accessible and its last modification date is later than the date passed, the server should return a 200 OK message (i.e. same as now) with body.
So, I guess you don't need to configure Apache and/or Tomcat the way you described. You need to make your application HTTP/1.1 compliant.
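As a rough illustration of what that compliance could look like on the Tomcat side, here is a sketch of a servlet that answers conditional GETs with 304 while the weekly resource is unchanged (the UTC week boundary and the JSON placeholder are assumptions, not details from the question):

import java.io.IOException;
import java.time.DayOfWeek;
import java.time.Duration;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.TemporalAdjusters;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class WeeklyResourceServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The resource is considered modified at the start of the current week (Monday, UTC)
        ZonedDateTime weekStart = ZonedDateTime.now(ZoneOffset.UTC)
                .with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY))
                .toLocalDate().atStartOfDay(ZoneOffset.UTC);
        long lastModified = weekStart.toInstant().toEpochMilli();
        long maxAge = Duration.between(ZonedDateTime.now(ZoneOffset.UTC), weekStart.plusWeeks(1)).getSeconds();

        resp.setDateHeader("Last-Modified", lastModified);
        resp.setHeader("Cache-Control", "max-age=" + maxAge + ", must-revalidate");

        long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 when the header is absent
        if (ifModifiedSince != -1 && ifModifiedSince >= lastModified) {
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);        // 304: headers only, no body
            return;
        }
        resp.setContentType("application/json");
        resp.getWriter().write(loadWeeklyResource());
    }

    private String loadWeeklyResource() {
        return "{}"; // stand-in; the real application returns the weekly payload here
    }
}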
Try sending a valid HTTP-Date for the Expires header?
One way to solve the problem is using separate URIs for each week. The canonical url redirects to the appropriate url for the week, and instructs the browser to cache the redirect for a week. Also, URLs that have a date in them will instruct the browser to cache forever.
Canonical URL : /path/to/resource
Status Code : 301
Location : /path/to/resource/12-dec or /path/to/resource/19-dec
Expires : Next Monday
Week 1 : /path/to/resource/12-dec
Status code : 200
Expires : Never
Week 2 : /path/to/resource/19-dec
Status code : 200
Expires : Never
When the cache expires on Monday, you just send a redirect response. You either send last week's URL or this week's, but you never send the entire response body.
With this approach, you have eliminated conditional gets. You have also made your resources "unmodifiable-once-published", and you also get versioned resources.
The only caveat: redirects aren't cached by all browsers, even though the HTTP spec requires them to do so; notably, IE8 and below don't cache them. For details, look at the "cache browser redirects" column in Browserscope.
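A sketch of the redirecting endpoint as a servlet; the path layout and date format are placeholders taken from the examples above, and caching the redirect until next Monday is the assumption:

import java.io.IOException;
import java.time.DayOfWeek;
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.TemporalAdjusters;
import java.util.Locale;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CanonicalRedirectServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Dated URL for the current week, e.g. /path/to/resource/19-dec
        LocalDate monday = LocalDate.now().with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
        String target = "/path/to/resource/"
                + monday.format(DateTimeFormatter.ofPattern("dd-MMM", Locale.ENGLISH)).toLowerCase(Locale.ENGLISH);

        // Let the browser cache the redirect itself until next Monday
        long maxAge = Duration.between(LocalDateTime.now(), monday.plusWeeks(1).atStartOfDay()).getSeconds();

        resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY); // 301
        resp.setHeader("Location", target);
        resp.setHeader("Cache-Control", "max-age=" + maxAge);
    }
}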
The Expires header was basically deprecated with HTTP 1.1; use Cache-Control: max-age instead.
Make sure you are including Last-Modified.
It's optional, but you may also want to specify Cache-Control: must-revalidate, to make sure intermediate proxies don't deliver potentially stale content.
You don't need to set ETag.
Example request:
GET http://localhost/images/logo.png HTTP/1.1
Accept: image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5
Referer: http://localhost/default.aspx
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Host: localhost
Connection: Keep-Alive
The response includes the requested content:
HTTP/1.1 200 OK
Cache-Control: max-age=10
Content-Type: image/png
Last-Modified: Sat, 21 Feb 2009 11:28:18 GMT
Accept-Ranges: bytes
Date: Sun, 18 Dec 2011 05:48:34 GMT
Content-Length: 2245
Requests made before the 10 second timeout are resolved from cache, with no HTTP request. After the timeout:
GET http://localhost/images/logo.png HTTP/1.1
Accept: image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5
Referer: http://localhost/default.aspx
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
If-Modified-Since: Sat, 21 Feb 2009 11:28:18 GMT
Host: localhost
The response is just headers, without content:
HTTP/1.1 304 Not Modified
Cache-Control: max-age=10
Last-Modified: Sat, 21 Feb 2009 11:28:18 GMT
Accept-Ranges: bytes
Date: Sun, 18 Dec 2011 05:49:04 GMT
Subsequent requests are again resolved from the browser's cache until the specified cache expiration time.
