I have doubts about what the correct scheme should be for the following technical solution.
I need to authenticate a user in a mobile application by reading a QR code, where the user has previously been authenticated in a web application.
The use case is that the user works in a web application located on an intranet, but needs to be able to upload images from a mobile device that is connected to the internet.
The mobile application will consume a public API exposed on the internet through an API Gateway.
The API Gateway will connect to the backend to upload the images.
As a requirement, when the user needs to use the mobile device to capture and upload images, they should not have to authenticate again, since they already have an open session in the web application; they simply use a QR code to authenticate the device. Naturally, the QR code will not contain the user's credentials.
My idea is to make use of OAuth 2.0 with the following flow to authenticate the mobile device:
1. The web application asks the API Gateway to generate an authorization token, and the API Gateway responds with a UUID.
2. The web application displays the authorization token received from the API Gateway as a QR code.
3. The mobile device reads the QR code and requests an access token from the API Gateway, presenting the authorization token (see the sketch after this list).
4. The API Gateway validates the authorization token and generates an access token, which is returned to the mobile device.
5. The mobile device makes calls to the public API (API Gateway) using the access token.
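For illustration only, here is a minimal sketch of steps 3-5 from the mobile device's point of view. Everything in it is hypothetical: the gateway base URL, the endpoint paths (/oauth/token, /images) and the parameter names would be whatever the actual API Gateway defines.

import requests

GATEWAY = "https://gateway.example.com"  # hypothetical API Gateway base URL

# Step 3: the authorization token (a UUID) scanned from the QR code.
authorization_token = "7f9c2ba4-33e1-4c2b-9a6d-2f51e1a0c9d3"  # would come from the QR scanner

# Steps 3-4: exchange the authorization token for an access token.
token_response = requests.post(
    f"{GATEWAY}/oauth/token",
    data={"grant_type": "authorization_code", "code": authorization_token},
)
access_token = token_response.json()["access_token"]

# Step 5: call the public API with the access token.
with open("photo.jpg", "rb") as image:
    requests.post(
        f"{GATEWAY}/images",
        headers={"Authorization": f"Bearer {access_token}"},
        files={"image": image},
    )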
My question is whether this is the correct scheme, or whether there is another, more standard solution.
Thanks!!
Your scheme will work, BUT it is not reaching its full security potential, given that you can transfer a newly generated authorization token from an already-authorized device directly to another one (via a QR code read by the camera). This makes steps 3 and 4 an unnecessary vulnerability (and a redundancy: there is already a token, so why get another one?).
The following alternative, combined with good cryptography, makes it very hard to intrude on the newly authorized device's connection. The idea is to add a symmetric encryption layer before sending the data in step 5, using a key that is exchanged over another medium (between the already-authorized device and the server), so that the encrypted data is never exposed.
Step 3 replacement: read the authorization token.
Step 4 replacement: check a secure hash derivation of the authorization token (instead of the token itself) with the server to see if it is valid:
token0 = read_auth_token_from_camera()                     // step 3: scan the QR code
public_token = hash_function(token0)                       // the "useless" exposed token
if (check_token_with_server_for_authenticity(public_token))
    continue_to_step_5()                                   // it is authorized
else
    handle_the_scenario()
Step 5 replacement: encrypt your request together with another hash derivation of the authorization token, using the authorization token itself as the key, then make calls to the server API:
token2 = another_hash_function(token0)                     // a second derivation, known to the server
request = "i am top secret data"
encryption_key = token0                                    // token0 itself never leaves the device
encrypted_request = encryption_function(token2 + request, encryption_key)
send_to_server(public_token + encrypted_request)
// Note: token2 is unknown to an intruder because it is encrypted, but it is known to the
// server; hence the server can check the authenticity of each request.
How it is more secure: in this alternative the actual authorization token is never exchanged between the server and the new client. So even if an intruder could hypothetically break the SSL/TLS layer and capture the public token, they would not be able to send requests on your behalf or modify the data in your requests.
If this scheme fulfills your desired application flow, then it is correct. There is no standard way to authenticate Android devices if you are going for self-made authentication.
One thing I think you need to add is a check that the client is actually your Android application and not some other application using a similar authentication flow. If you can cater for this, then it is good to go.
You can use Firebase ML Kit as a good solution; it can also add a lot of AI-based functions to your apps.
You can check it from here:
https://firebase.google.com
I am working on a solution to read log files from GCP for an internal process. However, I am having a difficult time trying to generate an auth token for the request to grab the logs needed. This is more of a flow/context question than a "what's wrong with my code" one.
The key issue I am having is that I do not want to prompt for web-browser authentication. I want to be able to do this all through API requests with no user interaction. Everywhere I have looked, and in every implementation I have tried, I am prompted for user interaction in some way, and that is just not feasible for this solution.
How can this be achieved?
We do not have IAM enabled, so I cannot generate a JWT token.
I am trying to do this using a Service Account created with a client ID and client secret.
I have tried getting a "code" to pass into a request to generate an authorization token, but that prompts me for user authorization in the browser, which will not work, even when I set the query parameter 'prompt' or 'approval_prompt' to none or force.
I feel like I am missing one crucial piece to achieve this flow, and any help/guidance will be greatly appreciated.
There are several ways to authenticate API calls. If you want to do it without user interaction, you will need to use a Service Account (more info here). The process would be the following:
You use the client ID and one private key to create a signed JWT and construct an access-token request in the appropriate format. Your application then sends the token request to the Google OAuth 2.0 Authorization Server, which returns an access token. The application uses the token to access a Google API. When the token expires, the application repeats the process.
For this, you can use the client libraries, or you can do it manually with direct HTTP requests. There is a guide in the docs for doing so.
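As an illustrative sketch only (not your exact setup), this is what the flow looks like with the Python google-auth client library and a downloaded service-account key file; the file name and the Cloud Logging scope are assumptions:

from google.oauth2 import service_account
from google.auth.transport.requests import Request

# Assumed key-file name and scope; substitute your own.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/logging.read"],
)

# The library builds the signed JWT, posts it to Google's OAuth 2.0 token
# endpoint, and stores the resulting access token on the credentials object.
credentials.refresh(Request())
print(credentials.token)  # bearer token usable against the Logging API

No browser interaction is involved, because the service account authenticates with its private key rather than with a user's consent.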
I have a pressing question to begin with, though it is not about the code level; it is just for me to gauge my understanding. Correct me if I am wrong: nowadays mobile apps, by design, use the session-token method for requests.
Currently, a first-time user passes in their deviceID, PIN, and a timestamp, and receives an encrypted value from the back-end. Subsequently, the front-end system sends this parameter with each request and gets a success response. The token has an expiration of 30 minutes.
In a recent incident, hackers were able to manipulate the request from a script by spoofing, imitating the same behavior as the mobile app. How do we cater for this kind of scenario? Is there a way to block these kinds of requests?
Token Method
Correct me if I am wrong: nowadays mobile apps, by design, use the session-token method for requests.
I think you are referring to OAUTH2 or OpenID tokens.
OAUTH2
OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.
OpenID
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
OpenID Connect allows clients of all types, including Web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users. The specification suite is extensible, allowing participants to use optional features such as encryption of identity data, discovery of OpenID Providers, and session management, when it makes sense for them.
Mobile App Identification to the API
Currently, a first-time user passes in their deviceID, PIN, and a timestamp, and receives an encrypted value from the back-end. Subsequently, the front-end system sends this parameter with each request and gets a success response. The token has an expiration of 30 minutes.
Normally developers identify the mobile app to the API server with a secret that is usually called an api-key or some kind of *-token. Whatever naming convention is used, this identifier is always a secret; sometimes it is a simple unique string to identify the mobile app, and sometimes it is more sophisticated, as in your case.
In a recent incident, hackers were able to manipulate the request from a script by spoofing.
The problem is that anything running on the client side can be easily reverse engineered by an attacker on a device he controls. He will use introspection frameworks like Frida or xPosed to intercept the running code of the mobile app at runtime, or a proxy tool like mitmproxy to watch the communication between the mobile app and the API server. Normally the first step in reverse engineering a mobile app is to use the Mobile Security Framework on its binary to extract all static secrets and identify attack vectors.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
Mobile Security Framework
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
mitmproxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
So after using all these tools, it is possible for an attacker to replay all the steps your mobile app performs to get the token from the API server, and then use it to automate attacks against the API server.
Possible Solution
So the hackers are imitating the same behavior as the mobile app. How do we cater for this kind of scenario? Is there a way to block these kinds of requests?
Anything that runs on the client side and needs some secret to access an API can be abused in different ways, and you can learn more in this series of articles about mobile API security techniques. These articles explain how API keys, user access tokens, HMAC and TLS pinning can be used to protect the API, and how each of them can be bypassed.
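To make the HMAC idea concrete, here is a minimal request-signing sketch; the header names and the secret are hypothetical, and, as those articles point out, the scheme falls apart once an attacker extracts the secret embedded in the app:

import hashlib
import hmac
import time

# Hypothetical shared secret shipped inside the app (and therefore extractable).
API_SECRET = b"secret-embedded-in-the-app"

def sign_request(method: str, path: str, body: bytes) -> dict:
    """Return headers proving the request was built by a holder of API_SECRET."""
    timestamp = str(int(time.time()))
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    signature = hmac.new(API_SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

# Example: headers for a POST to /api/v1/payments with a JSON body.
headers = sign_request("POST", "/api/v1/payments", b'{"amount": 10}')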
A better solution is to use a Mobile App Attestation service, which lets the API server know that it is receiving requests only from a genuine mobile app.
A Mobile App Attestation service guarantees at run-time that your mobile app has not been tampered with and is not running on a rooted/jailbroken device. It does this by running an SDK in the background (without impacting the user experience) that communicates with a service running in the cloud to attest to the integrity of the mobile app and of the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the mobile app attestation fails, the JWT is signed with a secret that the API server does not know.
The app must now send the JWT in the headers of every API call. This allows the API server to serve only the requests for which it can verify the signature and expiration time of the JWT, and to refuse those that fail verification.
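A minimal sketch of that server-side check, assuming an HS256-signed token and the PyJWT library; the header name and the shared secret below are placeholders:

import jwt  # PyJWT

ATTESTATION_SECRET = "secret-shared-only-with-the-attestation-service"  # placeholder

def request_is_attested(headers: dict) -> bool:
    """Serve the request only if the attestation JWT verifies and has not expired."""
    token = headers.get("Approov-Token", "")  # placeholder header name
    try:
        jwt.decode(token, ATTESTATION_SECRET, algorithms=["HS256"])  # also checks "exp"
        return True
    except jwt.InvalidTokenError:  # bad signature, expired, malformed, ...
        return False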
Because the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work there), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT issued by the cloud service. This check is necessary for the API server to decide which requests to serve and which to deny. Your current token could be carried as a payload claim in the Approov token, allowing the API server to continue using it with the guarantee that it has not been tampered with.
I'm having a hard time wrapping my head around how to authenticate a user in my REST service. I plan to use Google Sign-In (on Android, namely). I can't quite figure out how to authenticate users on my server. I do not want any authorization logic (other than validating the identity of the user); all I want to do, when I receive a request, is validate that the user is who he (or she) says he is.
My understanding is that the user will log in, get some sort of token from Google, then send that token along with his request to my server, which I will use to validate his identity. However, from what I read, the user will encode their requests in a JWT (JSON web token), which I will then use to validate their identity without ever talking to the Google server directly. Did I understand properly?
On Google's documentation, it says
If you do not require offline access, you can retrieve the access token and send it to your server over a secure connection. You can obtain the access token directly using GoogleAuthUtil.getToken() by specifying the scopes without your server's OAuth 2.0 client ID.
But it does not say what the server should do with the token.
You have an Android app that enables the user to log in via Google+ Sign-In, and this Android app will then call your REST API. What you want to know is how your service should authenticate this request. The Android client will send a request to your service with a token, and you need to validate this token for authentication. Is my understanding right?
If so, you need to validate the token sent to your service. The reference you mentioned is for Google API calls; in your case it is a call to your own service's API. For the Android side, just follow the reference; on your service side you can use tokeninfo validation to authenticate users.
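As a rough sketch of that server-side check, the access token can be sent to Google's tokeninfo endpoint; the expected client ID below is a placeholder, and for production you would typically prefer verifying an ID token to avoid the extra network call:

import requests

EXPECTED_CLIENT_ID = "1234567890.apps.googleusercontent.com"  # placeholder

def validate_google_token(access_token: str):
    """Return the token info if Google confirms the token and it was issued to our app."""
    resp = requests.get(
        "https://www.googleapis.com/oauth2/v3/tokeninfo",
        params={"access_token": access_token},
    )
    if resp.status_code != 200:
        return None  # invalid or expired token
    info = resp.json()
    if info.get("aud") != EXPECTED_CLIENT_ID:
        return None  # token was issued for a different application
    return info  # the Google user id is in the "sub" field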
My company is building a RESTful API that will return moderately sensitive information (i.e. financial information, but not account numbers). I have control over the RESTful API code/server and also am building the Android app. I've set up the API to use OAuth 2 with the authorization code grant flow (with client ID and secret), and I auto-approve users without them having to approve the client, since we own both client and provider. We use CAS for SSO, and I am using this as the authorization server in the OAuth 2 process when the user logs in to retrieve the token.
I am contemplating various ways to secure the data in the Android app. I've concluded that storing the client ID and secret on the device is definitely not going to happen, but am thinking that storing the auth token might work, since it is only a risk to the individual user (and really only if they happen to have a rooted phone).
Here are two options I have thought of. They both require me to have a sort of proxy server that is CAS protected, does the dance with the API server, and returns the auth token. This gets rid of the need for storing the client id and secret in the app code.
Here is what I've come up with:
1) Require the user to enter their password to access data each time they start up the app. This is definitely the most foolproof method. If this were done, I'd probably want to save the userID for convenience, but in that case I couldn't use the CAS login (since it's web-based). I might be able to use a headless browser on the backend to log the user into CAS and retrieve the token based on what they enter in the Android form, but this seems hacky. Saving the userID is similar to what the Chase app does (if you happen to use that one): it saves the userID but not your password between sessions.
2) Store the auth token on the Android device. This is a little less secure, but almost foolproof. When the user starts the app for the first time, open the webpage to the CAS login of the proxy server that returns the token (similar to https://developers.google.com/accounts/docs/MobileApps). After the user logs in and the token is returned to the app, encrypt it and store it private to the application. Also, use ProGuard to obfuscate the code, making the encryption algorithm more difficult to reverse engineer. I could also work in a token refresh, but I think this would be more of a false sense of security.
3) Don't use CAS but come up with another way to get an auth token for the service.
Any advice on how others have implemented similar scenarios (if it's been done)?
Thanks.
Well, the reason standards like OAuth are developed is so that not everyone has to rethink the same attack vectors again and again. So most often your best choice is to stick to something already available instead of baking your own thing.
The first problem with clients that are not capable of secretly storing data is that the user's data could be accessed by some attacker. As it is technically not possible to prevent this (code obfuscation won't help you against an expert attacker), the access token in OAuth 2 typically expires after a short time and doesn't give an attacker full access (it is bounded by scope). Certainly you shouldn't store any refresh token on such a device.
The second problem is client impersonation. An attacker could steal your client secret and access your API from his own (maybe malicious) app. The user would still have to log in there himself. The OAuth draft requires the server to do everything it can to prevent this, but it is really hard.
The authorization server MUST authenticate the client whenever possible. If the authorization server cannot authenticate the client due to the client's nature, the authorization server MUST require the registration of any redirection URI used for receiving authorization responses, and SHOULD utilize other means to protect resource owners from such potentially malicious clients. For example, the authorization server can engage the resource owner to assist in identifying the client and its origin.
I think Google are the first to try another approach to authenticate a client on such devices, by checking the signature of the application, but they are not yet ready for prime time. If you want more insight into that approach, see my answer here.
For now, your best bet is to stay on the OAuth path, i.e. having the access token, client ID and client secret (when using the authorization code grant flow) on the device, and configure your server to do additional checks. If you feel more secure obfuscating these, just do it, but always think of it as if these values were publicly available.
I am developing a Java application that needs to access a user's personal Google Data. The development is currently in NetBeans on my localhost. I am implementing 3-legged OAuth. When I send the grant request, it returns an unauthorized request token and then redirects to the callback URL.
When I try to obtain the access token, it gives me the error "Error Getting HTTP Response". Now, the Google documentation states: "If the application is not registered, Google uses the oauth_callback URL, if set; if it is not set, Google uses the string "anonymous"." Does this mean that I must register my application on Google App Engine before granting authorization and accessing the request? Please help.
For reference: OAuth for Web Applications, OAuth in the Google Data Protocol Client Libraries
Based on your question, it's probably not the registration piece that's causing you trouble. It sounds like you just haven't implemented OAuth correctly — not that doing so is easy. The OAuth process is roughly as follows:
Get a request token. You must pass in a bunch of stuff that declares what kind of stuff you want access to and where you want Google to send the user when they're done granting you access to that data. This is where you pass in your consumer key, which you get by registering. The consumer key will be the string anonymous if you are developing an installed application (i.e., mobile app, desktop app, etc). This is a work-around; the alternative would be to embed your client secret or RSA private key within the application itself, which is a very, very bad idea. If you use 'anonymous', you should absolutely be setting the xoauth_displayname parameter. (Actually, everyone should set this parameter, but it's especially important if you're using anonymous.)
Once you have a request token, you then redirect the user to the special authorization endpoint, passing along the request token key in the query string. Assuming the user grants access, Google will redirect the user back to the callback URL that you associated with your request token. The request token is now authorized, but it can't be used directly just yet.
Once the request token is authorized, you can exchange it for an access token key/secret pair. The access token key/secret can then be used to sign requests for protected resources, such as the private data in the API you're trying to access.
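As a rough sketch of those three legs (using Python's requests_oauthlib purely for illustration, since your code is Java): the 'anonymous'/'anonymous' consumer key/secret is the unregistered-installed-application case described above, and the endpoint URLs are the historical Google OAuth 1.0 ones, which Google has since deprecated:

from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    "anonymous",                                    # consumer key for unregistered apps
    client_secret="anonymous",
    callback_uri="http://localhost:8080/callback",  # hypothetical callback URL
)

# Leg 1: get a request token, declaring which data you want access to.
oauth.fetch_request_token(
    "https://www.google.com/accounts/OAuthGetRequestToken"
    "?scope=https://www.google.com/calendar/feeds/"
)

# Leg 2: send the user to the authorization page; Google redirects them back
# to the callback URL once they grant access.
print(oauth.authorization_url("https://www.google.com/accounts/OAuthAuthorizeToken"))

# Leg 3: exchange the now-authorized request token (plus the oauth_verifier
# returned to the callback) for an access token, which then signs API requests.
verifier = input("oauth_verifier from the callback URL: ")
access_token = oauth.fetch_access_token(
    "https://www.google.com/accounts/OAuthGetAccessToken", verifier=verifier
)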
For web applications, registering is almost always a good idea. It makes it much easier for users to manage their access tokens and revoke them if your application misbehaves or if they don't want you to have access anymore. If you don't register, your application will probably show up as a fairly scary-looking 'anonymous' in that list. It's really only installed applications that you wouldn't want to register for. You probably also want to register for an API key. An API key will dramatically increase your rate limit and it will also allow Google to get in touch with you if your application starts to malfunction.
I'd link to the OAuth docs, but you've already found them. Hope my explanation helps!
If you're developing on your local machine, you'll continue to get the same result as above.
For more interesting tests, then yes, you'll have to register your app and push it to App Engine.
Google will check whether the domain name of the return URL is registered. You could also modify your DNS/hosts file to point the domain name you're using at localhost.