If I make a site open source, how can I save some data that should not be accessible by everyone? For example, I'd like to store some secret keys for APIs, while making the site available for others to fork and view.
One solution would be to maintain an open-source copy of the website and keep the live site closed source, but it is a bit cumbersome to always update the open-source one. And dangerous too: I might accidentally leak some sensitive keys.
I'm not sure how you could make use of the API keys, as there is currently no way to make HTTP requests from the Boomla server to the outside world. You could only use them in the client, but then they're not secret any more.
There is an experimental solution for this. You can create a branch named db-fj9h9wdw. You will be able to access it like you would access a DB (from any of the website's branches).
EDIT: I have created a demo: secret-keys.boomla.net
Let's say we have a requirement that every file users upload to our ASP.NET Core MVC application must be stored in encrypted form in a shared folder, and these files must remain there until the user deletes them (so possibly for a long time). The shared-folder part is not hard, but what's the "proper" way to encrypt uploaded files? What we get when the user uploads a file is a string with the original file name and a stream with the contents of the file, so that's the starting point, but I don't know how to proceed from there.
There are plenty of Google results, but all of them use cryptographic primitives directly on the stream, and I suspect there are some glaring security holes in doing that, so there must be a proper way.
"Proper" is a funny word. It really depends how secure you want to be and what you mean by it.
If you want to make sure the file is encrypted over the network, you should look into securing your application with TLS certificates. I assume, though, that you're talking about encrypting the file once you receive it. There are a large number of ways to do that, and the right one depends on your situation. A better question is who has access to these files and how you can prevent unauthorized users from accessing someone else's files. If you're giving users direct access to these files on your server, you may want to rethink your approach altogether instead of trying to individually encrypt each file for each user.
This link might be helpful.
Anyway, one option is symmetric encryption. It's not the most secure option, but it is simple to implement: it requires a key (or keys) to encrypt and decrypt. You could use one set of keys for every user, but if those keys are compromised, every user's files are compromised. On the other hand, you could use a separate set of keys for each user, but that's more to maintain. One of the most popular symmetric encryption algorithms is the Advanced Encryption Standard (AES), and .NET Core has Microsoft's libraries for it.
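To make that concrete, here is a minimal sketch of the streaming approach in C#, assuming a recent .NET version. The FileEncryptor class and its method names are made up for illustration, and a production version should also add an integrity check, such as an HMAC over the ciphertext or an authenticated mode like AES-GCM:

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Threading.Tasks;

    // Hypothetical helper: encrypts an uploaded stream to a file using AES-CBC
    // with a fresh random IV prepended to the output file.
    public static class FileEncryptor
    {
        public static async Task EncryptToFileAsync(Stream input, string outputPath, byte[] key)
        {
            using var aes = Aes.Create();      // AES-256 with a 32-byte key
            aes.Key = key;
            aes.GenerateIV();                  // never reuse an IV across files

            using var output = File.Create(outputPath);
            await output.WriteAsync(aes.IV);   // store the IV next to the ciphertext

            using var encryptor = aes.CreateEncryptor();
            using var crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write);
            await input.CopyToAsync(crypto);   // stream the upload straight into the cipher
        }

        public static async Task DecryptToStreamAsync(string inputPath, Stream output, byte[] key)
        {
            using var aes = Aes.Create();
            aes.Key = key;

            using var input = File.OpenRead(inputPath);
            var iv = new byte[aes.BlockSize / 8];
            await input.ReadExactlyAsync(iv);  // read back the IV we prepended
            aes.IV = iv;

            using var decryptor = aes.CreateDecryptor();
            using var crypto = new CryptoStream(input, decryptor, CryptoStreamMode.Read);
            await crypto.CopyToAsync(output);
        }
    }

The key itself should come from a secret store or key-management service rather than the source tree; per-user keys can be derived from a master key if maintaining many keys is a concern.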
I made a styled map in Google Maps and I needed an API key to make it work. I did that, and it works fine if the API key is not restricted. If I restrict it to a particular IP or domain, it doesn't work, but that is not the problem I want to discuss here. I was wondering: why not leave my key unrestricted? So I searched online, and people said that a key shouldn't be left unrestricted for security reasons; they suggest to
store them in environment variables or in files outside of your application's source tree
I then asked myself: even if I put my API key in an external file and fetch it with PHP or whatever, wouldn't it be shown in the HTML source anyway? I mean, the how-to page says to write this code
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap">
That's fine, but however I set YOUR_API_KEY, whether it comes from an external file or not, it will appear in my HTML source anyway.
So,
does anyone know what they mean when they say to put the API key in an external file?
If I find a way to put it in an external file, can I leave it unrestricted? If so, what could happen, speaking from a security point of view?
Leaving an API key completely unrestricted is awful practice and should be avoided in almost every situation. All someone with malicious intent needs to do is find your key, and suddenly they have access to every function that the key gives you access to. Always restrict keys to the bare minimum.
Since you're going to be embedding the map in an iframe, that script runs client side, meaning there isn't much you can do to hide the API key. But if you restrict the key to simply viewing maps (and any other functions that will be needed on the user's side), there won't be any need to hide it, and you can leave it in there as the docs show you.
The document you read on security best practices for APIs applies more to uses of the Google API where it handles sensitive data, or where the key has access to functions that could seriously compromise your application's security if accessed. That doesn't apply to the maps scenario you described.
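For completeness, "putting the API key in an external file" just means keeping it out of your committed source tree; the page the browser receives will still contain the key. A minimal sketch of the idea, written as an ASP.NET Core endpoint purely for illustration (the stack, the /map route, and the MAPS_API_KEY variable name are all assumptions):

    // Hypothetical minimal API: the key lives in an environment variable rather
    // than in committed source, but it is still interpolated into the HTML that
    // every visitor can view, which is why restriction, not hiding, is the fix.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/map", () =>
    {
        var apiKey = Environment.GetEnvironmentVariable("MAPS_API_KEY")
                     ?? throw new InvalidOperationException("MAPS_API_KEY not set");

        var html = "<script src=\"https://maps.googleapis.com/maps/api/js?key="
                   + apiKey + "&callback=initMap\" async defer></script>";
        return Results.Content(html, "text/html");
    });

    app.Run();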
I'm using basic auth in nginx, no issue there, but I would like to limit the number of distinct locations from which a user can be authenticated.
The end goal is to prevent users from sharing their access data for the website. Since the website does real-time "monitoring" of some data, I want that, if the same user/pass combination is used from another IP, either both users stop getting data or one of them does.
I don't think that is a good idea, because a user may log in from a PC and a mobile phone at the same time and have two different IP addresses that way. Also, HTTP auth isn't designed to do what you want: it would have to remember the IP address and somehow expire it when the user leaves without logging out, and altogether it would be difficult to guess how long a session should stay valid. Another problem is that most users don't have static IPs and get disconnected by their providers every 24 hours. What happens if that occurs after a valid login?
The most popular way to deal with this kind of problem is session cookies. A session cookie can be described as a one-time password that you can keep valid for as long as you want, or until it expires. Session IDs are usually saved in some kind of database, and making each session unique to one client would not be a big deal, so this may be what you want. Luckily, ngx_http_auth_request_module would allow you to implement only this missing part, and it brings you as close as you can get without developing your own nginx module (see https://www.nginx.com/resources/wiki/modules/ for available modules).
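As a sketch of that missing part, here is a hypothetical auth backend for ngx_http_auth_request_module, written as an ASP.NET Core minimal API purely for illustration (the stack, the cookie name, and the X-Real-IP header are assumptions; nginx would point auth_request at /auth and deny the request on any non-2xx response):

    using System.Collections.Concurrent;

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // session id -> IP address that first presented it (a real setup would use
    // a database with expiry instead of an in-memory map)
    var sessions = new ConcurrentDictionary<string, string>();

    app.MapGet("/auth", (HttpRequest request) =>
    {
        var sessionId = request.Cookies["session"];
        if (string.IsNullOrEmpty(sessionId))
            return Results.Unauthorized();

        // assumes nginx is configured with: proxy_set_header X-Real-IP $remote_addr;
        var clientIp = request.Headers["X-Real-IP"].ToString();

        // bind the session to the first IP that uses it; reject every other IP
        var boundIp = sessions.GetOrAdd(sessionId, clientIp);
        return boundIp == clientIp ? Results.Ok() : Results.Unauthorized();
    });

    app.Run();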
On the other hand: don't do that. Seriously. If you care about security, do not try to reinvent the wheel; use something that has already proven itself. For example, ngx_http_auth_jwt_module allows you to use OpenID, which also frees you from storing sensitive user data on your server (because nobody wants to store passwords unless it is absolutely necessary).
Both of these methods require nginx modules, which may not be installed on your server. If you don't have the permissions to build them, I would suggest adding that to your question so that others can suggest solutions for non-root servers.
If you want to keep it simpler, you could also consider generating a fresh download link every time and saving the IP address and the link address in a database. Delete the entry when the user downloads the file and you are done. For that to work you can use the
Content-Disposition: attachment; filename=FILENAME HTTP header, so that the browser doesn't save the file under a name like download.php.
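A minimal sketch of that one-time-link idea, again as an ASP.NET Core endpoint for illustration (the stack, route, and token scheme are assumptions; a real setup would keep the links in a database rather than in memory):

    using System.Collections.Concurrent;

    var app = WebApplication.CreateBuilder(args).Build();

    // token -> (IP the link was issued to, path of the file on disk)
    var links = new ConcurrentDictionary<string, (string Ip, string Path)>();

    app.MapGet("/download/{token}", (string token, HttpContext ctx) =>
    {
        // each link is valid once, and only for the IP it was issued to
        if (!links.TryRemove(token, out var entry) ||
            entry.Ip != ctx.Connection.RemoteIpAddress?.ToString())
            return Results.NotFound();

        // Results.File sets Content-Disposition: attachment; filename=...
        return Results.File(entry.Path, "application/octet-stream",
                            fileDownloadName: Path.GetFileName(entry.Path));
    });

    app.Run();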
Maybe you can also find some JavaScript to replace ngx_http_auth_jwt_module and use OpenID with HTTP auth. That can work, because it is possible to do the authentication with AJAX as well.
Last but not least: if you still want to do HTTP auth, also use HTTPS, because this auth method does not encrypt your passwords by default.
What you want to do is unusual, so you will need to write a lot of the logic to handle the process yourself.
Your code will need to store a user ID and IP address pair for each logged-in user and validate each attempted login against it. As the previous answer pointed out, you will also need to expire logins and so on. Basically, you need to roll your own session handler.
This is not impossible or even particularly difficult, but you need to write it yourself in one of the scripting languages available to Nginx: either Perl, which is not recommended due to its limited ecosystem in Nginx, or Lua, which is highly recommended due to the massive Nginx Lua ecosystem (used by Cloudflare, for instance).
You will need to compile in the third-party Nginx Lua module and associated modules, or just uninstall Nginx and use the OpenResty bundle instead, which already has everything you will need included ... including Redis for storage if you need to scale up.
Here are some tools you can use as your building blocks:
Openresty Session Library
Openresty Redis Session Library
Openresty Encrypted Session Module
Note that you can implement OpenResty stuff directly in Nginx if you wish, without having to run OpenResty, as it is just a convenient bundle of Nginx and useful modules.
Like most people, we're pretty impressed with BigQuery. We're willing to put up with it being based on proprietary "Dremel" in exchange for not having to configure a ton of servers in our LAN, on EC2, or anywhere else.
The REST API is excellent, and we're incorporating that into our apps, but we still find ourselves using the BQ Browser interface as well. We'd like to incorporate something like a 'generic SQL window' into our app, without divulging that the backend is BQ or that data is stored in Google at all, for that matter. Does Google provide a way to use their BQ browser tool in a white-label manner?
Note also, that even extending access to the existing browser tool is problematic. It relies on user-accounts existing in one's own domain - something that can't be done, in our case, with a customer's email address. The REST interface solves this with service-level accounts, but that doesn't get you to the SQL window/browser tool.
If the folks at Google are listening (and I know that you are), consider the benefits of white-labeling the browser tool: I think you'd find a lot of software companies integrating it into their suites of products and then running circles around any Hadoop/CDH/EMR/Impala/Hive combination.
So, to summarize: how does a software developer import or emulate the BQ browser tool (with all its autocompletes, query histories, etc.) in their own web-based app?
The initial version of the BigQuery web interface was considered just an 'example' UI that anyone could create themselves. It uses only the public BigQuery API to talk to BigQuery.
There are a couple of Google-internal things we've added since then, such as the current design of 'saved queries', and an auth shortcut so that users don't have to explicitly grant permission to the UI to access BigQuery data. But it is still mostly plain ol' JavaScript talking to BigQuery via the REST API the same way anybody else does.
The JavaScript is obfuscated, but my understanding is that this is just for compression purposes, so that it downloads more quickly.
The SQL highlighting is done by CodeMirror with special configuration for the BigQuery SQL variant.
I'll talk to the other members of the BigQuery team about open-sourcing the JavaScript code in the Web UI and update this thread. It may be difficult to do at this point, but it doesn't hurt to have a conversation about it. The most likely answer will be "we'll think about it", but hopefully we can actually think about it and start working on it too :-)
Let me know if that sounds like it would meet your needs. It might not solve the auth problems you mention, since your users likely won't have BigQuery accounts, but you may be able to solve that by proxying OAuth2 access tokens.
I recently finished one of my first AgilityJS projects, which is a web-based file browser that lets you create and manage folders and files, and navigate around the folder tree. I followed the various AgilityJS recommendations regarding the design and ended up with all my HTML and Javascript in a single Javascript file.
Now, I would like to provide a "read-only" version of this app that does not have the ability to add, edit, or remove files and folders. I'd like to have two user types on the website: one that can only read the files and folders, and another that can administer them.
My question is: how do I propagate these permission differences to my AgilityJS app? I know how to secure my endpoints and operations on the server side, but I'm wondering about the best way to do this on the client side. Should I create a separate version of the app with a limited set of functionality? Should I simply hide certain buttons and features? Are there theories, frameworks, etc. that deal with this issue? Any pointer in the right direction would be helpful.
LOL - probably one could write books about that topic. Some very basic ideas:
I would start with the philosophical debate around MVC. Some people argue, with the help of MVC, that no piece of code and no piece of the data model should ever be implemented twice: business logic and model belong on the server. The opposite view focuses on serving users at any cost, even if that means maintaining code or the model twice for the sake of avoiding extra round trips. The middle way defines a master source for business code and model and makes sure that all other places follow that leading master (the master is changed first). Take your pick. Your answer to that question sets the boundaries for what the user interface can, and has to, look like.
You need to think very hard about a permissions concept. Looking at Microsoft, I would assume they invested a couple of dozen man-years across their applications to work out their permission concepts. The ideal permission concept depends very much on your application, so it is close to impossible to work this out without knowing at least a little about it. In any case, the concept has to come up with policies deciding on roles, groups, access rights, access levels, context-driven permissions (e.g. based on IP address), and permission black- or whitelisting (the permissions each user has at creation). An example from Microsoft: http://office.microsoft.com/en-us/windows-sharepoint-services-help/permission-levels-and-permissions-HA010100149.aspx
Data on the client is not secure! Whatever you do on the client, be it data hiding, encryption, compression..., there are ways to read the data or to revert the manipulation (even by simply disabling it). Somebody can also send data to your server for which the client never even offered an update form; hackers can craft such requests themselves. So as soon as you start to implement permissions, make sure that users are permitted to read all the data you send to clients, and include a permission check every time you add or update data in the database.
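To illustrate that last point, a minimal sketch in C# (the server stack and the CanRead helper are assumptions; the point is only that both the read path and the write path are checked on the server, so hiding buttons in the AgilityJS client is purely cosmetic):

    using System.Security.Claims;

    var app = WebApplication.CreateBuilder(args).Build();

    // reads: the server decides what the caller may see; the client only renders it
    app.MapGet("/files/{id:int}", (int id, ClaimsPrincipal user) =>
        CanRead(user, id) ? Results.Ok($"contents of file {id}")
                          : Results.StatusCode(StatusCodes.Status403Forbidden));

    // writes: checked again on the server, no matter which buttons the
    // read-only client hides
    app.MapPost("/files/{id:int}", (int id, ClaimsPrincipal user) =>
        user.IsInRole("admin") ? Results.Ok()
                               : Results.StatusCode(StatusCodes.Status403Forbidden));

    app.Run();

    // hypothetical permission helper: swap in your real role/permission model
    static bool CanRead(ClaimsPrincipal user, int id) =>
        user.Identity?.IsAuthenticated == true;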