The previous version of reCAPTCHA provided the option to make a global key which would work on any domain. Now, in version 2, that option is gone, and the reCAPTCHA site claims that "Global Keys are not supported in the V2 API."
I'm working with a large number of domain names that can change frequently without my intervention, and I don't want to have to add each new domain to the key.
Is there a way to get reCAPTCHA to work on any domain without specifically authorizing each one?
It is possible to implement reCAPTCHA Version 2.0 without verifying each domain: https://developers.google.com/recaptcha/docs/domain_validation
To do so, visit the admin console and click the API key in question under "Your reCAPTCHA Sites". Then under "Advanced Settings", uncheck "Verify the origin of reCAPTCHA solutions".
Security Warning
Per Google, doing this creates a security risk that then requires you to check the hostname yourself.
Turning off this protection by itself poses a large security risk - your key could be taken and used by anyone, as there are no restrictions as to the site it's on. For this reason, when verifying a solution, you are required to check the hostname field and reject any solutions that are coming from unexpected sources.
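For example, a hostname check on the server side might look something like this (a Node.js sketch, assuming Node 18+ for the global fetch; the ALLOWED_HOSTNAMES list and the secret handling are placeholders you would adapt to however your server tracks its domains):

// Sketch only: verify the reCAPTCHA response and reject unexpected hostnames.
const ALLOWED_HOSTNAMES = new Set(['example.com', 'www.example.com']); // placeholder list

async function verifyRecaptcha(responseToken, secret) {
  const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: new URLSearchParams({ secret, response: responseToken }),
  });
  const data = await res.json();
  // With origin verification disabled, checking `hostname` yourself is mandatory.
  return data.success === true && ALLOWED_HOSTNAMES.has(data.hostname);
}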
Related Link: (from "Stack Exchange Information Security")
- Why bother validating the hostname for a Google Recaptcha response?
NOTE: This applies to a previous version of the reCAPTCHA API. See the other answer for an updated solution.
This doesn't seem to be well-known, but reCAPTCHA's documentation mentions that a Secure Token can be used to have one key working on a large number of domains. This feature seems to be exactly designed for this type of situation.
It's created by encrypting a JSON string with your site secret, but the documentation doesn't say exactly what encryption method to use. Here's some PHP code I've used to get it working in one of my projects. This should help with whatever language you're working with.
$token = json_encode(array(
    'session_id' => bin2hex(openssl_random_pseudo_bytes(16)), // Random ID; no special format
    'ts_ms'      => intval(round(microtime(true) * 1000))     // Time in milliseconds
));

$secret_key = '{reCAPTCHA secret key}';
$secret_key_hash = substr(hash('sha1', $secret_key, true), 0, 16);

$stoken_bin = openssl_encrypt(
    $token,
    'AES-128-ECB',   // Encryption method
    $secret_key_hash,
    OPENSSL_RAW_DATA // Give me the raw binary
);

// URL-safe Base64 encode; change + to -, / to _, and remove =
$stoken = strtr(base64_encode($stoken_bin), array('+' => '-', '/' => '_', '=' => ''));
For a while now I've been using dropbox-sdk-js in a Meteor application without any trouble.
My Meteor app simply uses Dropbox to fetch images to be used in product cards. These files are synced now and then, and that's it. By "synced" I mean they are scanned, shared links are created or obtained, and some info is then saved in Mongo (name, extension, path, public link).
End users do not remove nor add files, nor are the files related to an end user specific account.
To achieve this, I created (in the far past) an App in the Dropbox App Console, generated a permanent token, and used that token in my Meteor app to handle all the syncing.
Now I've tried to replicate that very same thing in a new similar project, but found that the permanent tokens have been recently deprecated and are no longer an option.
Now, checking Dropbox's Authentication Types it seems to me like "App Authentication"
"This type only uses the app's own app key and secret, and doesn't
identify a specific user or team".
is what I'm after. I can safely provide the app key and secret on the server exclusively, as the client will never need those. The question is: how do I achieve this kind of authentication? Or, for that matter, how do I achieve an equivalent of the long-lived token for my app, ultimately meaning that end users don't actually need to know Dropbox is behind the scenes in any way (and they surely don't need Dropbox accounts to use this app, nor should they be prompted with any Dropbox authentication page)?
In the js-sdk examples repo, I only found this example using app key and secret. Yet afterwards it goes through the OAuth process in the browser anyway. If I don't do the OAuth part, I get this error:
"error": {
"name": "DropboxResponseError",
"status": 409,
"headers": {},
"error": {
"error_summary": "path/unsupported_content_type/...",
"error": {
".tag": "path",
"path": {
".tag": "unsupported_content_type"
}
}
}
}
as a result of calling
dbx.filesListFolder({ path: '', recursive: true });
If I replace the initialization of the dbx object with a generated token, everything works out. However, eventually the token expires and I'm back to square one.
Any ideas what may I be missing?
The short answer is:
You need to obtain a refresh token. You can then use this token for as long as you want. But in order to get it, it is necessary to go through at least one OAuth flow in the browser, then capture the generated refresh token in the backend, then store it and use it to initialize the API. So it's kind of "hacky" (IMO).
For example, you can use the mentioned example code, and log/store the obtained refresh token in this line (as per Greg's accepted answer in the forum). Then use that value as a constant to immediately call the setRefreshToken method (as done in that very same line) upon initialization.
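To make the short answer concrete, here is a minimal sketch of initializing the SDK once a refresh token has been captured (this assumes a recent dropbox-sdk-js whose constructor accepts clientId, clientSecret and refreshToken; the environment variable names are placeholders):

const { Dropbox } = require('dropbox');

// The refresh token was obtained once via the browser OAuth flow and stored server-side.
const dbx = new Dropbox({
  clientId: process.env.DROPBOX_APP_KEY,
  clientSecret: process.env.DROPBOX_APP_SECRET,
  refreshToken: process.env.DROPBOX_REFRESH_TOKEN,
});

// The SDK exchanges the refresh token for short-lived access tokens as needed.
dbx.filesListFolder({ path: '', recursive: true })
  .then((res) => console.log(res.result.entries.length, 'entries'))
  .catch((err) => console.error(err));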
The long answer is:
- ClientId + client secret are not enough to programmatically generate a refresh token.
- Going through the OAuth flow at least once is mandatory to obtain a refresh token.
- If you want to hide such a flow from your clients, you'll need to do what the short answer says.
- The intended usage flow according to Dropbox is: each user accesses his own files. Having several users access a single folder is not officially supported.
The longer answer is:
Check out the conversation we had in the dropbox forum
I suggested replacing the "Generate Access Token" button in the console with a "Generate Refresh Token" button instead. At least it made sense to me according to what we discussed. Maybe if it gets some likes... ;)
The Firebase Web-App guide states I should put the given apiKey in my Html to initialize Firebase:
// TODO: Replace with your project's customized code snippet
<script src="https://www.gstatic.com/firebasejs/3.0.2/firebase.js"></script>
<script>
  // Initialize Firebase
  var config = {
    apiKey: '<your-api-key>',
    authDomain: '<your-auth-domain>',
    databaseURL: '<your-database-url>',
    storageBucket: '<your-storage-bucket>'
  };
  firebase.initializeApp(config);
</script>
By doing so, the apiKey is exposed to every visitor.
What is the purpose of that key and is it really meant to be public?
The apiKey in this configuration snippet just identifies your Firebase project on the Google servers. It is not a security risk for someone to know it. In fact, it is necessary for them to know it, in order for them to interact with your Firebase project. This same configuration data is also included in every iOS and Android app that uses Firebase as its backend.
In that sense it is very similar to the database URL that identifies the back-end database associated with your project in the same snippet: https://<app-id>.firebaseio.com. See this question on why this is not a security risk: How to restrict Firebase data modification?, including the use of Firebase's server side security rules to ensure only authorized users can access the backend services.
If you want to learn how to ensure that all data access to your Firebase backend services is authorized, read up on the documentation on Firebase security rules. These rules control access to file storage and database access, and are enforced on the Firebase servers. So no matter if it's your code, or somebody else's code that uses your configuration data, it can only do what the security rules allow it to do.
For another explanation of what Firebase uses these values for, and for which of them you can set quotas, see the Firebase documentation on using and managing API keys.
If you'd like to reduce the risk of committing this configuration data to version control, consider using the SDK auto-configuration of Firebase Hosting. While the keys will still end up in the browser in the same format, they won't be hard-coded into your code anymore with that.
Update (May 2021): Thanks to the new feature called Firebase App Check, it is now actually possible to limit access to the backend services in your Firebase project to only those coming from iOS, Android and Web apps that are registered in that specific project.
You'll typically want to combine this with the user authentication based security described above, so that you have another shield against abusive users that do use your app.
By combining App Check with security rules you have both broad protection against abuse, and fine-grained control over what data each user can access, while still allowing direct access to the database from your client-side application code.
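As a rough illustration of what enabling App Check looks like in a web app (this uses the modular v9+ SDK syntax rather than the v3 snippet from the question, and the reCAPTCHA v3 site key is a placeholder):

import { initializeApp } from 'firebase/app';
import { initializeAppCheck, ReCaptchaV3Provider } from 'firebase/app-check';

// The same public configuration data discussed above.
const app = initializeApp({ /* apiKey, authDomain, ... */ });

initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider('<your-recaptcha-v3-site-key>'),
  isTokenAutoRefreshEnabled: true, // keep the App Check token fresh in the background
});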
Building on the answers of prufrofro and Frank van Puffelen here, I put together this setup that doesn't prevent scraping, but can make it slightly harder to use your API key.
Warning: To get your data, even with this method, one can for example simply open the JS console in Chrome and type:
firebase.database().ref("/get/all/the/data").once("value", function (data) {
  console.log(data.val());
});
Only the database security rules can protect your data.
Nevertheless, I restricted my production API key use to my domain name like this:
- Go to https://console.developers.google.com/apis
- Select your Firebase project
- Go to Credentials
- Under "API keys", pick your Browser key. It should look like this: "Browser key (auto created by Google Service)"
- In "Accept requests from these HTTP referrers (web sites)", add the URL of your app (example: projectname.firebaseapp.com/*)

Now the app will only work on this specific domain name. So I created another API key that will be private, for localhost development:

- Click Create credentials > API key
By default, as mentioned by Emmanuel Campos, Firebase only whitelists localhost and your Firebase hosting domain.
In order to make sure I don't publish the wrong API key by mistake, I use one of the following methods to automatically use the more restricted one in production.
Setup for Create-React-App
In /.env.development:
REACT_APP_API_KEY=###dev-key###
In /.env.production:
REACT_APP_API_KEY=###public-key###
In /src/index.js
const firebaseConfig = {
  apiKey: process.env.REACT_APP_API_KEY,
  // ...
};
I am not convinced it is a good idea to expose security/config keys to the client. I would not call it secure, not because someone can steal all private information from day one, but because someone can make excessive requests, drain your quota, and leave you owing Google a lot of money.
You need to think about many concepts, from restricting people from accessing where they are not supposed to be, to DoS attacks, etc.
I would prefer that the client first hit your web server; there you can put whatever first-hand firewall, captcha, Cloudflare, or custom security you like between the client and the server, or between the server and Firebase, and you are good to go. At least you can stop suspicious activity before it reaches Firebase, and you will have much more flexibility.
I only see one good usage scenario for a client-based config: internal usage. For example, if you have an internal domain and you are pretty sure outsiders cannot access it, you can set up a browser -> Firebase style environment.
The API key exposure creates a vulnerability when user/password sign up is enabled. There is an open API endpoint that takes the API key and allows anyone to create a new user account. They then can use this new account to log in to your Firebase Auth protected app or use the SDK to auth with user/pass and run queries.
I've reported this to Google but they say it's working as intended.
If you can't disable user/password accounts you should do the following:
Create a cloud function to auto disable new users onCreate and create a new DB entry to manage their access.
Ex: MyUsers/{userId}/Access: 0
exports.addUser = functions.auth.user().onCreate(onAddUser);
exports.deleteUser = functions.auth.user().onDelete(onDeleteUser);
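For illustration, the handlers referenced above could look roughly like this (a sketch using firebase-admin; the MyUsers/{userId}/Access path mirrors the example entry, and the exact implementation is an assumption, not the answer author's code):

const admin = require('firebase-admin');
admin.initializeApp();

async function onAddUser(user) {
  // Disable the account immediately; an admin can re-enable it later.
  await admin.auth().updateUser(user.uid, { disabled: true });
  // Track the user's access level in the database (0 = no access yet).
  await admin.database().ref(`MyUsers/${user.uid}/Access`).set(0);
}

async function onDeleteUser(user) {
  // Clean up the access entry when the account is removed.
  await admin.database().ref(`MyUsers/${user.uid}`).remove();
}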
Update your rules to only allow reads for users with access > 1.
On the off chance the listener function doesn't disable the account fast enough then the read rules will prevent them from reading any data.
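A sketch of what such a read rule could look like, reusing the MyUsers/{userId}/Access entry from above (the someData path is just a placeholder):

{
  "rules": {
    "someData": {
      ".read": "auth != null && root.child('MyUsers/' + auth.uid + '/Access').val() > 1"
    }
  }
}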
I believe that once the database rules are written accurately, they will be enough to protect your data. Moreover, there are guidelines one can follow to structure the database accordingly. For example, making a UID node under users and putting all user information under it. After that, you will need to implement a simple database rule as below:
"rules": {
"users": {
"$uid": {
".read": "auth != null && auth.uid == $uid",
".write": "auth != null && auth.uid == $uid"
}
}
}
}
No other user will be able to read other users' data; moreover, the domain policy will restrict requests coming from other domains.
One can read more about it in the Firebase Security Rules documentation.
While the original question was answered (the API key can be exposed; the protection of the data must come from the DB rules), I was also looking for a solution to restrict access to specific parts of the DB.
So after reading this and some personal research about the possibilities, I came up with a slightly different approach to restrict data usage for unauthorised users:
I save my users in my DB too, under the same uid (and save the profile data in there). So I just set the DB rules like this:
".read": "auth != null && root.child('/userdata/'+auth.uid+'/userRole').exists()",
".write": "auth != null && root.child('/userdata/'+auth.uid+'/userRole').exists()"
This way only a previously saved user can add new users to the DB, so there is no way anyone without an account can perform operations on the DB.
Also, adding new users is possible only if the user has a special role, and editing is allowed only by an admin or by that user itself (something like this):
"userdata": {
"$userId": {
".write": "$userId === auth.uid || root.child('/userdata/'+auth.uid+'/userRole').val() === 'superadmin'",
...
EXPOSURE OF API KEYS ISN'T A SECURITY RISK BUT ANYONE CAN PUT YOUR CREDENTIALS ON THEIR SITE.
Open API keys lead to attacks that can use a lot of resources on Firebase, which will definitely cost you hard-earned money.
You can always restrict your Firebase project keys to domains/IPs.
https://console.cloud.google.com/apis/credentials/key
Select your project ID and key, and restrict it to your Android/iOS/web app.
It is okay to include them; special care is required only for Firebase ML or when using Firebase Authentication.
API keys for Firebase are different from typical API keys:
Unlike how API keys are typically used, API keys for Firebase services are not used to control access to backend resources; that can only be done with Firebase Security Rules. Usually, you need to fastidiously guard API keys (for example, by using a vault service or setting the keys as environment variables); however, API keys for Firebase services are ok to include in code or checked-in config files.
Although API keys for Firebase services are safe to include in code, there are a few specific cases when you should enforce limits for your API key; for example, if you're using Firebase ML or using Firebase Authentication with the email/password sign-in method. Learn more about these cases later on this page.
For more information, check the official docs.
I am making a blog website on GitHub Pages. I got an idea to embed comments at the end of every blog page. I understand how Firebase gets and gives you data.
I have tested many times with a project and even using the console. I totally disagree with the claim that it is vulnerable.
Believe me, there is no issue with showing your API key publicly if you have followed the privacy steps recommended by Firebase.
Go to https://console.developers.google.com/apis
and perform a security setup.
You should not expose this info in public, especially API keys.
It may lead to a privacy leak.
Before making the website public, you should hide it. You can do it in two or more ways:
- Complex coding/hiding
- Simply put the Firebase SDK code at the bottom of your website or app, so Firebase automatically does all the work. You don't need to put API keys anywhere.
I have been reading about securing REST APIs and have read about OAuth and JWTs. Both are really great approaches, but from what I have understood, they both work after a user is authenticated, or in other words "logged in". That is, OAuth tokens and JWTs are generated based on user credentials, and once the OAuth token or JWT is obtained, the user can perform all actions they are authorized for.
But my question is, what about the login and sign-up APIs? How does one secure them? If somebody reads my JavaScript files to see my AJAX calls, they can easily find out the endpoints and the parameters passed, and they could hit them multiple times through some REST client. More severely, they could code a program that hits my sign-up API, say, a thousand times, which would create a thousand spam users, or they could even brute-force the login API. So how does one secure them?
I am writing my API in yii2.
The Yii 2.0 framework has a built-in filter called yii\filters\RateLimiter that implements a rate limiting algorithm based on the leaky bucket algorithm. It will allow you to limit the maximum number of accepted requests in a certain interval of time. As an example, you may limit both the login and signup endpoints to accept at most 100 API calls within a 10-minute interval. When that limit is exceeded, a yii\web\TooManyRequestsHttpException exception (429 status code) will be thrown.
You can read more about it in the Yii2 RESTful API related documentation or within this SO post.
I didn't use it myself so far, but from what I read about it in the official docs, I mean this:
Note that RateLimiter requires $user to implement the yii\filters\RateLimitInterface. RateLimiter will do nothing if $user is not set or does not implement yii\filters\RateLimitInterface.
I guess it was designed to work with logged-in users only, maybe by using the user-related database table (the default one introduced within the advanced template). I'm not sure about it, but I know it needs to store the number of allowed requests and the related timestamp to some persistent storage within the saveAllowance method that you'll need to define in the user class. So I think you will have to track your guest users by IP address, as @LajosArpad suggested, then maybe redesign your user class to hold their identities so you can enable it.
A quick Google search led me to this extension, yii2-ip-ratelimiter, which you may also want to have a look at.
Your URLs will easily be determined. You should have a blacklist of IP addresses, and when an IP address acts suspiciously, just add it to the blacklist. You define what suspicious is, but if you are not sure, you can start with the following:
Create something like a database table with this schema:
ip_addresses(ip, is_suspicious, login_attempts, register_attempts)
Where is_suspicious means it is blacklisted. login_attempts and register_attempts should be JSON values, showing the history of that IP address trying to log in/register. If the last 20 attempts were unsuccessful and were within a minute, then the IP address should be blacklisted. Blacklisted IP addresses should receive a response that they are blacklisted, whatever their request was. So if they deny your services or try to hack things, then you deny your services to them.
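As a rough, framework-agnostic sketch of that bookkeeping (in-memory JavaScript, for illustration only; in practice you would back it with the ip_addresses table described above):

const attemptsByIp = new Map(); // ip -> array of failed-attempt timestamps (ms)
const blacklist = new Set();

function recordFailedAttempt(ip, now = Date.now()) {
  const attempts = attemptsByIp.get(ip) || [];
  attempts.push(now);
  // Keep only the most recent 20 failures.
  while (attempts.length > 20) attempts.shift();
  attemptsByIp.set(ip, attempts);

  // Blacklist if the last 20 failures all happened within the last minute.
  if (attempts.length === 20 && now - attempts[0] <= 60 * 1000) {
    blacklist.add(ip);
  }
}

function isBlacklisted(ip) {
  return blacklist.has(ip);
}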
Secure passwords using sha1, for example. That algorithm is secure enough and it is quicker than sha256, for instance, which might be overkill. If your API involves bank accounts or something extremely important like that, important enough for the bad guys to use server farms to hack it, then force the users to create very long passwords, including numbers, special characters, and uppercase and lowercase letters.
For JavaScript you should use the OAuth 2.0 Implicit Grant flow, as Google or Facebook do.
Login and signup use two basic web pages. Don't forget to add a captcha to them.
For some special clients, such as a mobile app or a web server:
If you are sure that your binary file is secure, you can create a custom login API for it. In this API you must try to verify your client.
A simple solution you can refer to:
- Use an encryption algorithm such as AES or 3DES to encrypt the password from the client, using a secret key (only the client and server know about it).
- Use a hash algorithm such as sha256 to hash (username + client time + another secret key). The client will send both the client time and the hash string to the server. The server will reject the request if the client time is too different from the server's, or if the hash string is not correct.
Eg:
api/login?user=user1&password=AES('password',$secret_key1)&time=1449570208&hash=sha256('user1'+'|'+'1449570208'+'|'+$secret_key2)
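A hedged sketch of the server-side check described above (Node.js crypto; the SECRET_KEY_2 variable and the 5-minute clock-skew window are assumptions for illustration):

const crypto = require('crypto');

const SECRET_KEY_2 = process.env.SECRET_KEY_2; // the second shared secret
const MAX_SKEW_MS = 5 * 60 * 1000;             // how much client/server clock drift to tolerate

function verifyLoginRequest({ user, time, hash }) {
  // Reject requests whose client time drifts too far from server time.
  if (Math.abs(Date.now() - Number(time) * 1000) > MAX_SKEW_MS) return false;

  // Recompute sha256(user + '|' + time + '|' + secret) and compare with what the client sent.
  const expected = crypto
    .createHash('sha256')
    .update(`${user}|${time}|${SECRET_KEY_2}`)
    .digest('hex');
  return expected === String(hash); // prefer a constant-time comparison in production
}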
Note: In any case, the server should use a captcha to avoid brute-force attacks. Do not rely on any other filter.
Regarding captchas for REST APIs, we can create a captcha based on a token.
E.g., for the sign-up action, you must call two APIs:
- /getSignupToken: to get the captcha image URL and a signup token.
- /signup: to post the sign-up data (including the signup token and the captcha typed by the user).
For the login action, we can require a captcha by counting failed logins based on the username.
Follow my API module here for reference. I manage user authentication by access token. At login, I generate a token; on subsequent access, the client needs to send the token and the server will check it.
Yii2 Starter Kit lite
I'd like to embed some of my code on GitHub into my blog. The best way I've found so far for this is to use http://www.jamesward.com/2012/06/15/dynamically-rendering-github-files-in-web-pages (with a small modification to fix the base64 decoding) and then do some custom syntax highlighting on it.
However, without authentication, this is subject to a 60 requests/hour rate limit enforced by GitHub. It's not clear to me how authentication could work in this case, since any auth token I might use would need to be part of the JavaScript on my blog, so it would basically be public...
And also, even if I could somehow authenticate this usage (by perhaps connecting my Origin domain with my GitHub user account?), won't that mean that all readers of my blog will count against this shared rate limit, vs. the unauthenticated case where every reader is counted against his own 60/hour limit?
To answer the second question first: yes, that is what would happen. When authenticated, you have a single quota shared between users. When unauthenticated, the quotas are "distributed" between users (based on IP address, I guess).
Regarding authenticated communication with GitHub's API from JavaScript -- yes, you would have to put the token (or username and password) into your script and make it public. Which you obviously do not want to do. The way you are "expected" to solve this problem is to have a server side. The JavaScript executing in the browser would communicate with your server (for which there is no rate limit and you can secure it however you want), and the server would communicate with GitHub's API and return the results to your JS script. Since nobody can see into your server's code, the credentials for authenticating are not public.
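As a minimal sketch of that server-side approach (Node.js with Express, assuming Node 18+ for the global fetch; the /github/file route, the OWNER/REPO placeholders, and the environment variable are assumptions, and you would add your own caching and input validation):

const express = require('express');

const app = express();
const GITHUB_TOKEN = process.env.GITHUB_TOKEN; // stays on the server, never sent to the browser

app.get('/github/file', async (req, res) => {
  const { path } = req.query; // e.g. /github/file?path=src/example.js
  const ghRes = await fetch(`https://api.github.com/repos/OWNER/REPO/contents/${path}`, {
    headers: {
      Authorization: `Bearer ${GITHUB_TOKEN}`,
      Accept: 'application/vnd.github.v3.raw', // ask GitHub for the raw file content
    },
  });
  res.status(ghRes.status).send(await ghRes.text());
});

app.listen(3000);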
I am attempting to create a login system for my website that permits both authentication via Google's API and access to any of the OAuth-supported Google Data APIs while ideally only showing the user one prompt ever, no matter if he's creating an account or logging into his existing one. I want to minimize the number of times he's asked for approval.
I am aware that Google provides Hybrid OpenID/OAuth for this purpose, but the issue is that every time I add OAuth extensions to my OpenID request, it never remembers the user's approval for that request. Is there any way for the approval to be remembered when I am doing Hybrid OpenID/OAuth? If I just do OpenID without OAuth extensions, everything is remembered just fine and it doesn't keep bugging the user with the prompt.
Here are the pertinent extensions I'm sending in addition to my OpenID request, which result in me getting an OAuth request token (good) but cause the approval to never get remembered (bad).
PHP syntax:
$args["openid.ns.ext2"] = 'http://specs.openid.net/extensions/oauth/1.0';
$args["openid.ext2.consumer"] = 'www.MYSITE.com';
$args["openid.ext2.scope"] = 'http://www-opensocial.googleusercontent.com/api/people/';
$args["openid.mode"] = 'checkid_setup';
$args["openid.realm"] = 'http://www.MYSITE.com/';
Is it normal for Hybrid OpenID/OAuth to act this way (not remembering the last OAuth authorization)? What is the best way to get around this? I have thought of storing cookies on the user's computer to link to somewhere in my database so I could use the last access token again, etc... (the issue here being I don't know whose token to look up unless I know who the user is...a circular problem). And doing an OpenID-only request to get his user ID to see if he has an account in order to look up his access token, followed by an OpenID+OAuth request (if an access token for him isn't stored) would result in two prompts, which really wouldn't help.
It also seems like Hybrid only supports OAuth 1.0, which I think is fine until 2015, so it's not an issue right now for me. I am assuming they will support OAuth 2.0 in the future.
Is checkid_immediate relevant to this in any way? I'm just not sure how to use this to accomplish what I want.
I would suggest using OAuth 2.0. This supports getting both identity and access to APIs -- so accomplishes the same end goal, but is much easier than OAuth 1 Hybrid.
Take a look here:
https://developers.google.com/accounts/docs/OAuth2Login
The scopes you're trying to access are included in the URL (see "Forming the URL"). The referenced doc lists the scopes required for getting identity/profile information. You can simply add additional scopes to the string, comma-delimited in order to request access to other APIs. The resulting access token will access both the APIs and identity information (via the UserInfo API endpoint mentioned).
That said, what you're trying to do with OpenID 2.0/OAuth 1 hybrid should work-- and the user should see a checkbox for "remembering" the authorization. If you really want to debug this further, it'd be helpful to have a webpage you can point to which kicks off this authentication+authorization flow so we can see what's happening.
I figured out that checkid_immediate (and x-has-session, not sure if that's needed or even working) is allowing me to determine whether or not a user is logged in without prompting him, and if he is, it gives me a claimed_id by which I can identify the user. That's exactly what I needed. The original question is solved, but I do want to figure out how to use identity with OAuth 2.0, because I have already implemented that.
Furthermore, I've noticed that when using OpenID/OAuth that the user still gets asked to authorize OAuth even after he's authorized OpenID. I can't see the advantage to the hybrid approach from the user's perspective.
If the user is logged out of Google, that's a total of three prompts just to sign up for my website and grab his name and profile image.
If anyone wondered, here are the steps necessary to get Hybrid OpenID/OAuth completely working (an overview). I was confused thoroughly throughout this process, so I hope this helps someone.
Do normal OpenID handshake and add on AX extensions for OAuth 1.0.
Use 'checkid_immediate' to permit probing for an active Google session without prompting the user. Use *claimed_id* as a unique identifier to link the user to your database.
If 'setup_needed' is returned, use 'checkid_setup' so the user is prompted and verified before continuing.
This leaves you with two possibilities. *checkid_immediate* returning immediately giving you a claimed_id, or a claimed_id coming through after *checkid_setup* (basically sign-up) succeeds.
Hybrid OpenID/OAuth 1.0 will give you an authorized request token.
Use the authorized request token to get an access token (you only need to call OAuthGetAccessToken)
Use that OAuth 1.0 access token to do whatever you want.
I was successful in using OAuthGetAccessToken to get an access token from the authorized request token after my Hybrid OAuth dance, omitting the 'oauth_verifier' parameter (irrelevant to Hybrid).
In a PHP/Zend environment:
$config = array(
    'accessTokenUrl' => 'https://www.google.com/accounts/OAuthGetAccessToken',
    'consumerKey'    => $consumer_key,
    'consumerSecret' => $consumer_secret
);

$consumer = new Zend_Oauth_Consumer($config);
$zendRToken = new Zend_Oauth_Token_Request(); // create class from request token we already have
$zendRToken->setToken($requestToken);

try {
    $accessToken = $consumer->getAccessToken(array(
        'oauth_token' => $requestToken,
        // 'oauth_verifier' => '', // unneeded for Hybrid
        'oauth_timestamp' => time(),
        'oauth_nonce' => md5(microtime() . mt_rand()),
        'oauth_version' => '1.0'
    ), $zendRToken);
} catch (Zend_Oauth_Exception $e) {
    echo $e->getMessage() . PHP_EOL;
    exit;
}

echo "OAuth Token: {$accessToken->getToken()}" . PHP_EOL;
echo "OAuth Secret: {$accessToken->getTokenSecret()}" . PHP_EOL;