Security problems for Google App Engine web services - Java

I've written a Google web service in Java to encrypt and decrypt messages. I'm just wondering: if I deploy this project to App Engine, how can I make sure my Java classes are safe on the server side? There are secret keys inside the code.
Do I need to add some identification step before the web service can be used?

It's as secure as any other hosted service: trusted people (Google admins) have access to the servers and to your compiled code. If you are not OK with that, you should run your own servers.
You may want to disable code download, in case your Google account gets compromised.
Also, access to your Google account is an attack vector for gaining admin access to App Engine, so make sure anybody with developer access to your GAE app has two-step authentication enabled.
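As a side note, a common mitigation (a minimal sketch, assuming you can configure environment variables for your App Engine service; the variable name below is just an example) is to keep the key out of the compiled code and read it from the runtime environment instead:

    // Read the secret key from the environment instead of hardcoding it in a class.
    String secretKey = System.getenv("MESSAGE_SECRET_KEY"); // example variable name
    if (secretKey == null || secretKey.isEmpty()) {
        throw new IllegalStateException("MESSAGE_SECRET_KEY is not configured");
    }

This keeps the key out of the downloadable compiled artifact, although, as noted above, anyone with admin access to the project can still read it.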

Related

MobileApps Security Issue

I have a fairly broad question to begin with. It is not at the code level; it is just for me to gauge my understanding. Correct me if I'm wrong: nowadays mobile apps, by design, use a session token for each request.
Currently, on first use, the app passes the device ID, PIN and timestamp to the back-end and receives an encrypted value in return. Subsequently, the front-end sends this value with each request and gets a successful response. The token expires after 30 minutes.
In a recent incident, attackers were able to manipulate requests from a script by spoofing, so they are imitating the same behaviour as the mobile app. How do we cater for this kind of scenario? Is there a way to block these kinds of requests?
Token Method
Correct me if I'm wrong: nowadays mobile apps, by design, use a session token for each request.
I think you are referring to OAuth2 or OpenID Connect tokens.
OAUTH2
OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.
OpenID
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
OpenID Connect allows clients of all types, including Web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users. The specification suite is extensible, allowing participants to use optional features such as encryption of identity data, discovery of OpenID Providers, and session management, when it makes sense for them.
Mobile App Identification to the API
Currently, on first use, the app passes the device ID, PIN and timestamp to the back-end and receives an encrypted value in return. Subsequently, the front-end sends this value with each request and gets a successful response. The token expires after 30 minutes.
Normally developers identify the mobile app to the API server with a secret, usually named api-key or some kind of *-token. Whatever naming convention is used, this identifier is always a secret; sometimes it is a simple unique string that identifies the mobile app, and sometimes it is more sophisticated, as in your case.
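For example, a minimal sketch of how a mobile app commonly attaches such a secret to each request, here with OkHttp on Android; the header name, URL and key are placeholders:

    import java.io.IOException;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    public class ApiClient {
        // Hardcoded here only for illustration; this is exactly the kind of static
        // secret an attacker can extract from the app binary.
        private static final String API_KEY = "placeholder-api-key";
        private static final OkHttpClient CLIENT = new OkHttpClient();

        public static String fetchData() throws IOException {
            Request request = new Request.Builder()
                    .url("https://api.example.com/v1/data") // placeholder endpoint
                    .header("api-key", API_KEY)             // the secret identifying the app
                    .build();
            try (Response response = CLIENT.newCall(request).execute()) {
                return response.body() != null ? response.body().string() : "";
            }
        }
    }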
In a recent incident, attackers were able to manipulate requests from a script by spoofing.
The problem is that anything running on the client side can be easily reverse engineered by an attacker on a device he controls. He will use instrumentation frameworks like Frida or Xposed to intercept the running code of the mobile app at runtime, or a proxy tool like mitmproxy to watch the communication between the mobile app and the API server. Normally the first step in reverse engineering a mobile app is to use the Mobile Security Framework to analyse the binary of your mobile app, extract all static secrets, and identify attack vectors.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Xposed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
Mobile Security Framework
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
mitmproxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
So after using all these tools, it is possible for an attacker to replay all the steps your mobile app performs to get the token from the API server, and then use that token to automate attacks against the API server.
Possible Solution
So they are imitating the same behaviour as the mobile app. How do we cater for this kind of scenario? Is there a way to block these kinds of requests?
Anything that runs on the client side and needs a secret to access an API can be abused in different ways, and you can learn more in this series of articles about mobile API security techniques. The articles will teach you how API keys, user access tokens, HMAC and TLS pinning can be used to protect the API, and how they can be bypassed.
A better solution is to use a Mobile App Attestation solution so that the API server can know it is receiving requests only from a genuine mobile app.
A Mobile App Attestation service guarantees at run-time that your mobile app has not been tampered with and is not running on a rooted/jailbroken device. It does this by running an SDK in the background (without impacting the user experience) that communicates with a service running in the cloud to attest the integrity of the mobile app and of the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the attestation fails, the JWT token is signed with a secret that the API server does not know.
The app must then send the JWT token in the headers of every API call. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT token, and to refuse them when that verification fails.
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny. Your current token could be used as a payload claim in the Approov token, allowing the API server to keep using it with the guarantee that it has not been tampered with.
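As an illustration only (not Approov's actual integration), a minimal sketch of such a check on a Java API server using the jjwt library; the environment variable name and the way the token is obtained from the request are assumptions:

    import io.jsonwebtoken.Claims;
    import io.jsonwebtoken.Jws;
    import io.jsonwebtoken.JwtException;
    import io.jsonwebtoken.Jwts;
    import io.jsonwebtoken.security.Keys;
    import java.nio.charset.StandardCharsets;

    public class AttestationTokenCheck {

        // Shared secret known only to the API server and the attestation service;
        // it must be long enough for HMAC-SHA256 (at least 256 bits).
        private static final byte[] SECRET =
                System.getenv("ATTESTATION_JWT_SECRET").getBytes(StandardCharsets.UTF_8);

        // Returns true when the token's signature and expiry are valid, false otherwise.
        public static boolean isRequestFromGenuineApp(String jwtFromHeader) {
            try {
                Jws<Claims> jws = Jwts.parserBuilder()
                        .setSigningKey(Keys.hmacShaKeyFor(SECRET))
                        .build()
                        .parseClaimsJws(jwtFromHeader); // throws on bad signature or expired token
                return jws.getBody().getExpiration() != null;
            } catch (JwtException | IllegalArgumentException e) {
                return false; // deny the request, e.g. with a 401 response
            }
        }
    }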

Rest API security for mobile apps

I'm in the process of designing a mobile application that will need to connect to a server-side process for its business logic and data transactions. I'm writing my server-side code in Java using Spring Boot and I intend to create a Rest API in order for the mobile app to connect to the server.
I'm just doing some research at the moment for the best way to secure the connection between mobile app and server. What I'd like to do is allow the user on the mobile app to log in once they open the app and for them to use the app for as long as they like and for their access to time out after a period of inactivity.
Can anyone recommend any very simple reading on this? I've looked at OAuth2 but that appears to be for logging into web services using another account (like Google, GitHub, etc).
Would it be acceptable to log in over HTTPS (SSL) by passing the username and password to a REST endpoint and returning some sort of token (a GUID?), and then have the client (mobile app) pass that token with each subsequent call so the server can verify it? Is it better to just do everything over SSL in this scenario?
I have done a fair bit of research but I don't seem to be able to find anything that quite matches what I'm trying to do.
Hope someone can help
Thanks
OAUTH2 IS NOT ONLY FOR WEB
Can anyone recommend any very simple reading on this?
I've looked at OAuth2 but that appears to be for logging into web services using another account (like Google, GitHub, etc).
No, OAuth2 is not only for web apps; it is also for mobile apps. You can read this article for a more in-depth explanation, but I will leave you with the article's introduction:
Like single-page apps, mobile apps also cannot maintain the confidentiality of a client secret. Because of this, mobile apps must also use an OAuth flow that does not require a client secret. The current best practice is to use the Authorization Flow along with launching an external browser, in order to ensure the native app cannot modify the browser window or inspect the contents. If the service supports PKCE, then that adds a layer of security to the mobile and native app flow.
The linked article is very brief, you will need to follow the next chapters to get the full picture.
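For illustration, a minimal sketch (in plain Java, following RFC 7636 rather than any particular OAuth2 library) of how a native app can generate the PKCE code_verifier and code_challenge mentioned above:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class Pkce {
        public static void main(String[] args) throws NoSuchAlgorithmException {
            // 1. Random, high-entropy code_verifier kept on the device.
            byte[] random = new byte[32];
            new SecureRandom().nextBytes(random);
            String codeVerifier = Base64.getUrlEncoder().withoutPadding().encodeToString(random);

            // 2. code_challenge = BASE64URL(SHA-256(code_verifier)), sent with the
            //    authorization request together with code_challenge_method=S256.
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(codeVerifier.getBytes(StandardCharsets.US_ASCII));
            String codeChallenge = Base64.getUrlEncoder().withoutPadding().encodeToString(digest);

            // 3. The code_verifier is later sent with the token request so the
            //    authorization server can verify the exchange came from this app.
            System.out.println(codeVerifier + " -> " + codeChallenge);
        }
    }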
DO NOT ROLL YOUR OWN AUTHENTICATION / AUTHORIZATION SOLUTION
Would it be acceptable to login over https (SSL) by passing username and password to a rest endpoint and returning some sort of token (a GUID?). Then have the client (mobile app) pass that GUID with each subsequent call so the server can verify the call?
While you can do that, I strongly advise you to use an already established OAuth2 or OpenID Connect solution, because they are developed and maintained by experts in the field and battle-tested by the millions of web and mobile apps that use them. This allows security issues to be identified and fixed much more quickly than anyone could manage in an in-house solution.
OAuth2
OAuth 2.0 is the industry-standard protocol for authorization. OAuth 2.0 supersedes the work done on the original OAuth protocol created in 2006. OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices. This specification and its extensions are being developed within the IETF OAuth Working Group.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
OpenID Connect performs many of the same tasks as OpenID 2.0, but does so in a way that is API-friendly, and usable by native and mobile applications. OpenID Connect defines optional mechanisms for robust signing and encryption. Whereas integration of OAuth 1.0a and OpenID 2.0 required an extension, in OpenID Connect, OAuth 2.0 capabilities are integrated with the protocol itself.
So for your authentication/authorization needs I would recommend going with an OpenID Connect solution, which leverages OAuth2 under the hood.
IS SSL ALWAYS NECESSARY?
Is it better to just do everything over SSL in this scenario?
SSL must always be used, for everything; plain HTTP must not be used in any situation, because as soon as you allow an HTTP request you are vulnerable to a man-in-the-middle attack. I strongly recommend reading this article from a well-known security researcher, Troy Hunt, on why even a static website must use HTTPS; he goes to great lengths to explain why and names important attack vectors that can harm an application not using HTTPS, such as WiFi hot-spot hijacking, DNS hijacking, router exploits, China's Great Cannon, and others.
IMPROVE SSL WITH CERTIFICATE PINNING
Communicating over HTTPS is the way to go for any kind of application, but developers must be aware that an attacker in control of the device where the application is installed can spy on the HTTPS traffic by performing a man-in-the-middle attack with a custom certificate installed on that device. This lets him understand how the mobile app communicates with the API server, in order to mount automated attacks against that same API.
Certificate Pinning
Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taking from Jon Larimer and Kenny Root Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
You can read this article, with a code sample, to see how easy it is to implement certificate pinning, how it can be difficult to maintain operationally, and, with a video, how an attacker can bypass certificate pinning on the client side using the Xposed framework.
Xposed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
EDIT:
Nowadays, you can use the Mobile Certificate Pinning Generator to help you implement certificate pinning in your mobile app; it will give you a ready-to-use pinning configuration for Android and iOS.
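As an illustration (this is not the generator's output), a minimal sketch of pinning a host with OkHttp on Android; the hostname and the pin hash below are placeholders you would replace with your own values:

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;

    // Requests to the pinned host fail unless the server's certificate chain contains
    // a public key whose SHA-256 hash matches one of the pinned values.
    CertificatePinner pinner = new CertificatePinner.Builder()
            .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") // placeholder pin
            .build();

    OkHttpClient client = new OkHttpClient.Builder()
            .certificatePinner(pinner)
            .build();

It is usually recommended to pin more than one key (a backup pin) so that a certificate rotation does not lock legitimate users out.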
RESEARCHING FOR A SOLUTION
Before I point you to a possible solution, I would like to make clear the distinction between two concepts that developers are frequently unaware of, or treat as the same thing...
The Difference Between WHO and WHAT is Accessing the API Server
The WHO is the user of the mobile app, whom you can authenticate, authorize and identify in several ways, for example with OpenID Connect or OAuth2 flows.
Now you need a way to identify WHAT is calling your API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server: is it really your genuine mobile app, or is it a bot, an automated script, or an attacker manually poking around your API server with a tool like Postman?
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code into the mobile app. Some go the extra mile and compute it at run-time in the mobile app, making it a dynamic secret, in contrast to the former approach where a static secret is embedded in the code.
Some Mobile API Security Techniques
I have done a fair bit of research but I don't seem to be able to find anything that quite matches what I'm trying to do.
You can start by reading this series of articles about mobile API security techniques to understand how HTTPS, certificate pinning, API keys, HMAC, OAuth2 and other techniques can be used to protect the communication channel between your mobile app and the API server, and how they can be bypassed.
To address the problem of WHAT is accessing your API server, you can use one or all of the solutions mentioned in that series of articles, while accepting that they can only make unauthorized access to your API server harder, not impossible.
A better solution is to use a Mobile App Attestation solution that enables the API server to know it is receiving requests only from a genuine mobile app.
A POSSIBLE BETTER SOLUTION
The use of a Mobile App Attestation solution enables the API server to know WHAT is sending the requests, thus allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with and is not running on a rooted device. It does this by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and of the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the attestation fails, the JWT token is signed with a secret that the API server does not know.
The app must then send the JWT token in the headers of every API call. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT token, and to refuse them when that verification fails.
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
This is a positive model in which false positives do not occur, so the API server is able to deny requests with confidence that it is not blocking legitimate users of your mobile app.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.
CONCLUSION
Properly securing a mobile app and the API server is a task composed of several layers of defense that you must put together in order to protect it.
How many layers to use will depend on the data you are protecting, its value to the business, the damage it can cause if leaked in a data breach, and how much you may be penalized under regulations such as the GDPR in Europe.
What I usually do is craft a JSON Web Token (https://jwt.io/) and handle the sessions on my own.
JWT is really nice, since you only need to define a secret key on the server side. As long as your clients are able to pass along the string you crafted (inside the headers, for example), and as long as nobody is able to retrieve your secret key, you can be sure that every piece of data you put into the token when creating it was generated by you. (Don't hesitate to use a strong signing algorithm.)
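As a minimal sketch of that approach (using the jjwt library with an HMAC key; the claim values, the environment variable name and the 30-minute lifetime are just examples):

    import io.jsonwebtoken.Jwts;
    import io.jsonwebtoken.security.Keys;
    import java.nio.charset.StandardCharsets;
    import java.util.Date;
    import javax.crypto.SecretKey;

    // The signing key lives only on the server; it must be at least 256 bits for HS256.
    SecretKey key = Keys.hmacShaKeyFor(
            System.getenv("SESSION_JWT_SECRET").getBytes(StandardCharsets.UTF_8));

    String token = Jwts.builder()
            .setSubject("user-id-123")                                              // example claim
            .setIssuedAt(new Date())
            .setExpiration(new Date(System.currentTimeMillis() + 30 * 60 * 1000L)) // example lifetime
            .signWith(key)
            .compact();

    // The client sends this string back (e.g. in an Authorization header) and the
    // server verifies the signature and expiry with the same key on every call.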
For a secure connection, use HTTPS with TLS 1.2 or higher. Then pin the server certificate in the app; that will prevent MITM attacks.
It is then safe to pass the username and password. You can return a time-limited token if further authentication is needed or desired.
With HTTPS, everything except the host address is encrypted in transit, including the path and query string. But be careful with the query string: it may end up in the server logs.
Thanks again for these replies. I've been setting my service up to run under HTTPS using the server.ssl.key-store parameters and it looks like it's working okay. I used keytool to create a trust store and I run my Spring Boot app (with embedded Tomcat) using that trust store. I can open a browser to my REST endpoint (using https this time, not http), the browser asks for authentication, and when I enter my user details it matches them against my database user and lets me see the response from the server.
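For reference, a minimal sketch of the kind of application.properties configuration being described here; the file name, type, alias, password and port are placeholders:

    # HTTPS for the embedded Tomcat in Spring Boot
    server.port=8443
    server.ssl.key-store=classpath:keystore.p12
    server.ssl.key-store-type=PKCS12
    server.ssl.key-store-password=changeit
    server.ssl.key-alias=myserver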
One question though: what's the point of having a trust store on the server side (Java) if I can just access the REST endpoint using any old browser and enter my username and password? Eventually, this REST endpoint won't be accessed via a browser; it'll be accessed programmatically by a mobile app, so I assume I'll be logging in from it with a username and password over HTTPS. I thought I'd need a certificate of some sort on the client side in order to communicate, or does it not work like that?
Thanks again

Secure way to store and use Amazon Access key for Android

I am trying to build an application for Android that uses Amazon SimpleDB. I have looked at the source code of the example provided by Amazon. However, in the demo the credentials are just stored in a Constants.java, and I believe this approach is not secure at all, as people could decompile the APK and expose the credentials even with ProGuard enabled.
Therefore I went to read Amazon's article on this, and I could not quite understand it, as I am not very familiar with cryptography in Android/Java.
How am I supposed to actually allow access to Amazon SimpleDB from my application while keeping the access key safe from external parties?
Edit 1:
I want the application to retrieve data from SimpleDB and show it in a ListView, for example simple food reviews, so that users can retrieve the reviews that other users posted. If a user wants to post a review, they might be required to sign up for an account and log in.
AWS offers a couple of solutions for delivering credentials to the device outside of hard coding them, one or both may meet your specific needs:
Token Vending Machine. AWS offers example TVMs for both Anonymous and User Authentication that can be customized to meet your needs.
Web identity federation which uses identities from Facebook, Google, and Amazon.
Our samples repository includes samples for integrating with both of these technologies, though not specifically in the SimpleDB example.
There is no foolproof way to do this. Whichever way you do it, somebody who takes your APK can potentially reverse engineer it and extract the credentials. (You can make it harder by obfuscating the code, but that only makes it more difficult; it is not foolproof.)
If your app requires users to log in (with credentials from your backend or using OpenID), then use that to let users access your server. In the server code you can then obtain AWS credentials via IAM roles (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
So access to your web (REST) API is granted using the user-provided password, and your server code gets AWS access through IAM roles. This is the most secure way.
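As a rough sketch of that server-side pattern, assuming the AWS SDK for Java v1 and an endpoint that only your authenticated users can call; the domain name and query are placeholders:

    import com.amazonaws.services.simpledb.AmazonSimpleDB;
    import com.amazonaws.services.simpledb.AmazonSimpleDBClientBuilder;
    import com.amazonaws.services.simpledb.model.Item;
    import com.amazonaws.services.simpledb.model.SelectRequest;
    import java.util.List;

    public class ReviewService {
        // No access keys anywhere: on EC2 the default credentials chain picks up the
        // temporary credentials of the instance's IAM role automatically.
        private final AmazonSimpleDB simpleDb = AmazonSimpleDBClientBuilder.defaultClient();

        // Called from your REST layer only after the user has been authenticated.
        public List<Item> listReviews() {
            SelectRequest request = new SelectRequest("select * from `reviews` limit 20"); // placeholder domain
            return simpleDb.select(request).getItems();
        }
    }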
If you don't want to have your own server/backend, then there is no truly foolproof way.
The answer from #Bob above describes precisely how to achieve this:
https://stackoverflow.com/a/21839911/2959100

How to do the Android Sync Provider server-side implementation?

I have a running J2EE-based web application for point of sale hosted in the cloud, with a PostgreSQL database. Now I am building an Android app for the point of sale. For data sync I have read a lot about how to create a SyncAdapter etc., but very little about the server-side development. My questions:
How will the authentication token be created? Who creates this AuthToken - my server-side RESTful web service or the device's SyncAdapter itself? Who should initiate expiry of the AuthToken - the device or the server web service?
Currently my web application has many user-level permissions. When data comes from a device to sync, I need to check the user's permissions before syncing. Do I need to write this permission checking myself inside my server-side web services?
In short, the server is responsible for creating the AuthToken. For a better understanding you should read the OAuth2 specification. Although you could write your own simpler solution, I recommend using OAuth2 to avoid some pitfalls. In your case the "Resource Owner Password Credentials" grant type seems most appropriate. Apache Oltu is a framework for implementing OAuth2 in Java, but you still have to handle some things, such as persistence of tokens, on your own.
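As an illustration of that grant type at the HTTP level (independent of Apache Oltu; the token endpoint URL and client_id are placeholders), the app would exchange the user's credentials for a token roughly like this:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class TokenClient {
        // "Resource Owner Password Credentials" grant (RFC 6749, section 4.3).
        public static String requestToken(String username, String password) throws Exception {
            String form = "grant_type=password"
                    + "&username=" + URLEncoder.encode(username, StandardCharsets.UTF_8)
                    + "&password=" + URLEncoder.encode(password, StandardCharsets.UTF_8)
                    + "&client_id=pos-android-app";                      // placeholder client id

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/oauth/token"))  // placeholder token endpoint
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // The JSON body contains access_token, token_type, expires_in and optionally
            // refresh_token; the server decides when tokens expire.
            return response.body();
        }
    }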
Yes, you have to check the permissions on the server side. Don't trust the client! Besides that, you may restrict the apps more strictly by using different token types. For example, when the web service is accessed both from within your server application and from Android apps, you could differentiate permissions based on the token type (Android vs. internal). Or you could give your app's users an option to restrict the app's access to read operations only.

Is using AD credentials entered into form fields as opposed to the browser integrated auth window bad practice?

I’m looking for a bit of feedback on the practice of requesting users to authenticate to an intranet based web app by entering their AD credentials directly in form fields. For example, using domain\username and password fields as opposed to using the native browser based challenge window for integrated authentication. In the form based example, credentials are passed to the application in plain text and it’s essentially up to the integrity of the application to handle the data appropriately. It seems to me this is the equivalent of entering my Open ID credentials directly into a host app on the Internet.
So my questions are:
Is there any best practice guidance on authenticating to a custom web app (assume predominantly .NET / Java stacks) in an AD environment?
Can you think of any legitimate circumstances where this is really necessary?
Is this a legitimate concern or am I just being paranoid?!
In a highly secure environment, users would be encouraged to only enter their credentials when using the Secure Attention Sequence CTRL-ALT-DEL, which is designed so that it can't be intercepted by applications.
So in such an environment, even the browser challenge window for authentication would be suspect. Instead you would log on locally using the same AD credentials as you need to access the website, and would be authenticated without needing to be prompted.
I'd say entering AD credentials in form fields is extremely suspect if the credentials can also be used for access to other sensitive resources. Even if the app developers are well-intentioned, it is an unnecessary security hole. For example, anyone who has write access to the web directory can easily replace the login form and capture credentials.
If it's a browser-based application, why wouldn't you just enable Windows authentication in your web.config (not sure what the equivalent is in the Java world, sorry) and let the browser handle authentication?
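For completeness, a minimal sketch of what that looks like in an ASP.NET web.config; the authorization rule is just an example:

    <!-- Integrated Windows authentication; anonymous users are denied. -->
    <configuration>
      <system.web>
        <authentication mode="Windows" />
        <authorization>
          <deny users="?" />
        </authorization>
      </system.web>
    </configuration>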
Otherwise, I'd say if you do this over a secure transport (SSL) then you should be ok. Microsoft's own products often use form fields to submit AD credentials (I know Outlook Web Access and Internet Security & Acceleration Server both do this).
The best approach is to use Kerberos tokens instead of an encrypted username/password.
This open-source library, http://spnego.sourceforge.net, will allow your Java web apps to perform integrated Windows authentication using Kerberos tokens.
The library is installed as a servlet filter so you will not have to write any code.
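Roughly, the wiring looks like the sketch below in web.xml; note that the real filter also needs several init-params (Kerberos configuration, a pre-authentication account, and so on) that are described in the project's documentation and omitted here, and the class name should be checked against that documentation:

    <!-- Minimal sketch only; see the SPNEGO project docs for the required init-params. -->
    <filter>
      <filter-name>SpnegoHttpFilter</filter-name>
      <filter-class>net.sourceforge.spnego.SpnegoHttpFilter</filter-class>
    </filter>
    <filter-mapping>
      <filter-name>SpnegoHttpFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>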
