I have a rather simple question but could not find an answer on the internet so far:
Is there any way an attacker can download a CGI script (Perl, Python, whatever) from a webpage?
Or in other words: is it a security risk to have sensitive information (e.g. access keys to another service) within a CGI script?
Thanks!
Is there any way an attacker can download a CGI script (Perl, Python, whatever) from a webpage?
Web servers are designed to prevent this, but web applications should be designed assuming it can happen. In other words, it's not how things are supposed to work, but it's not at all uncommon. Whether it's possible in your case depends on why you're asking the question.
There may be a way for a particular attacker to download a script from your webpage, if they have an attack vector that makes it worth their time.
This answer to another question (it relates to PHP, but the principles are the same) explains in a good amount of detail some of the factors that can lead to such an exploit.
Is it a security risk to have sensitive information (e.g. access keys to another service) within a CGI script?
Yes. While there's always risk, and the acceptability of that risk depends on your application and specific tradeoffs, this particular scenario has two major flaws (if not more):
If anything goes wrong with your server or CGI handler, you could end up serving the script as a public-facing text file, secrets and all.
It's just so easy not to put the private key there in the first place.
Even if the private key only accessed a VM on a free tier which did nothing but return the weather in your local city, you should at least employ a model where private keys are not accessible by the web server user (www-data, etc.). For example, you could have your CGI handler simply pass the parameters/arguments to another local service, which parses them and invokes the actions required.
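To make that split concrete, here's a rough sketch in Python (every name here, the socket path, the "weather" action, the broker, is hypothetical): the CGI handler runs as the web server user, holds no key at all, and just relays the request parameters over a local Unix socket to a separate service that owns the key.

```python
#!/usr/bin/env python3
# Hypothetical CGI handler (run as www-data) that holds no credentials itself.
# It only relays the request parameters over a local Unix socket to a separate
# service running as a different user, which owns the API key and does the real work.
import json
import os
import socket
from urllib.parse import parse_qs

SOCKET_PATH = "/run/weather-broker.sock"  # socket owned by the privileged service

def main():
    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    city = query.get("city", [""])[0][:64]   # minimal input limiting

    request = json.dumps({"action": "weather", "city": city}).encode()

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(request)
        sock.shutdown(socket.SHUT_WR)
        reply = sock.recv(65536)  # the broker used the key; only the result comes back

    # CGI response: header block, blank line, body
    print("Content-Type: application/json")
    print()
    print(reply.decode())

if __name__ == "__main__":
    main()
```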
Related
If I make a site open source, how can I save some data that should not be accessible by everyone? For example, I'd like to store some secret keys for APIs, while making the site available for others to fork and view.
One solution would be to maintain a copy of the website which is open source, and keep the live site closed source, but it is a bit cumbersome to always update the open-source one. And dangerous too, I might accidentally leak some sensitive keys.
I'm not sure how you could make use of the API keys, as currently there is no way to make HTTP requests from the Boomla server to the outside world. You could only use them in the client - but then they're not secret any more.
There is an experimental solution for this. You can create a branch named db-fj9h9wdw. You will be able to access it like you would access a DB (from any of the website's branches).
EDIT: I have created a demo: secret-keys.boomla.net
I'm using basic auth in nginx with no issues there, but I would like to limit the number of distinct locations from which a user can be authenticated.
The end goal is to prevent users from sharing their access credentials to the website. Since the website does real-time "monitoring" of some data, I want that if the same user/pass combination is used from another IP, either both users stop getting data, or one of them stops getting data.
I don't think that is a good idea, because a user may log in via PC and mobile phone at the same time and have two different IP addresses that way. Also, HTTP auth isn't designed to do what you want it to: it would have to remember the IP address and make it expire somehow when the user leaves without logging out. Altogether, it would be difficult to guess for how long the session should stay valid. Another problem is that most users don't have static IPs and get disconnected by their providers every 24 hours. What happens if that occurs after a valid login?
The most popular method to deal with this kind of problem is session cookies. These can be described as a one-time password, and you can use one for as long as you want or until it expires. Session IDs are usually saved in some kind of database, and making those sessions unique per client would not be a big deal, so this may well be what you want. Luckily, the ngx_http_auth_request_module would allow you to implement only this missing part, and it would bring you as close as you can get without developing your own nginx module (see https://www.nginx.com/resources/wiki/modules/ for available modules).
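To sketch what that missing part could look like (the port, cookie name and the hard-coded session table are purely illustrative): auth_request makes nginx fire an internal subrequest for every protected request, and a small local backend answers 200 if the session cookie is known and bound to the requesting IP, 401 otherwise.

```python
#!/usr/bin/env python3
# Hypothetical auth backend for ngx_http_auth_request_module.
# nginx side (sketch only):
#   location = /auth { internal; proxy_pass http://127.0.0.1:9000;
#                      proxy_set_header X-Real-IP $remote_addr; }
#   location /protected/ { auth_request /auth; ... }
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie

# session id -> IP it was issued to; in real use this would live in a database/Redis
SESSIONS = {"k3yF0rDemoOnly": "203.0.113.7"}

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        sid = cookies["sid"].value if "sid" in cookies else None
        client_ip = self.headers.get("X-Real-IP", self.client_address[0])

        # 200 lets the original request through; 401 blocks it
        if sid and SESSIONS.get(sid) == client_ip:
            self.send_response(200)
        else:
            self.send_response(401)
        self.end_headers()

    # cover other methods too, in case the subrequest mirrors the original method
    do_POST = do_HEAD = do_GET

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9000), AuthHandler).serve_forever()
```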
On the other hand: don't do that. Seriously. If you care about security, do not try to reinvent the wheel; use something that has already proven itself. E.g. ngx_http_auth_jwt_module allows you to use OpenID, which also frees you from storing sensitive user data on your server (because nobody wants to store passwords unless it is absolutely necessary).
Both of these methods require nginx modules, which may not be installed on your server. If you don't have the permissions to build them, I would suggest adding that to your question, so that others can suggest solutions for non-root servers.
If you want to keep it simpler, you could also consider generating a download link each and every time, saving the IP address and the download link in a database, and deleting the entry when the user downloads that file. For that to work you can use the Content-Disposition: attachment; filename=FILENAME HTTP header, so that the download isn't saved under a name like download.php.
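A minimal sketch of that idea in Python (the storage, names and paths are illustrative; a real setup would keep the table in a database behind your web app):

```python
#!/usr/bin/env python3
# Sketch of one-time download links bound to an IP. Names are illustrative;
# the web layer that actually sends the response is left out.
import os
import secrets

# token -> (client_ip, path on disk); one row per generated link
DOWNLOAD_LINKS = {}

def create_link(client_ip, path):
    """Generate an unguessable token for one client and remember what it points to."""
    token = secrets.token_urlsafe(32)
    DOWNLOAD_LINKS[token] = (client_ip, path)
    return f"/download?token={token}"

def serve_download(token, client_ip):
    """Return (headers, body) once, then forget the token so the link dies after use."""
    entry = DOWNLOAD_LINKS.pop(token, None)          # delete on first use
    if entry is None or entry[0] != client_ip:       # unknown token or wrong IP
        return {"Status": "403 Forbidden"}, b""
    _, path = entry
    filename = os.path.basename(path)
    headers = {
        "Status": "200 OK",
        # forces a download under a sensible name instead of e.g. download.php
        "Content-Disposition": f'attachment; filename="{filename}"',
        "Content-Type": "application/octet-stream",
    }
    with open(path, "rb") as fh:
        return headers, fh.read()
```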
Maybe you can also find some kind of JavaScript to replace ngx_http_auth_jwt_module and use OpenID with HTTP auth. That can work, because it is possible to do the authentication with AJAX as well.
Last but not least: if you still want to do HTTP auth, also use HTTPS, because this auth method does not encrypt your passwords by default.
What you want to do is unusual so you will need to write a lot of the logic to handle the process.
Your code will need to store a user ID and IP address pair for each logged-in user and validate each attempted login against this. As the previous answer pointed out, you will also need to expire logins, etc. Basically, you need to roll your own session handler.
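Purely to illustrate that bookkeeping (the real implementation would be in Lua, as described next), here is a rough Python sketch; the timeout and data layout are assumptions, not a recommendation:

```python
#!/usr/bin/env python3
# Illustrative bookkeeping for a "one IP per user" session handler.
# In practice this logic would live in Lua inside nginx/OpenResty,
# with the table kept in Redis; the names and timeout are assumptions.
import time

SESSION_TTL = 30 * 60          # seconds of inactivity before a login expires
ACTIVE = {}                    # username -> (ip_address, last_seen timestamp)

def try_login(user, ip, now=None):
    """Allow the login unless the user is already active from a different IP."""
    now = now or time.time()
    entry = ACTIVE.get(user)
    if entry is not None:
        held_ip, last_seen = entry
        if held_ip != ip and now - last_seen < SESSION_TTL:
            return False       # credentials in use elsewhere: refuse (or kick the other side)
    ACTIVE[user] = (ip, now)   # first login, same IP, or the old session expired
    return True

def touch(user, ip):
    """Call on every request: refresh the timestamp only for the IP that holds the session."""
    entry = ACTIVE.get(user)
    if entry is None or entry[0] != ip:
        return False
    ACTIVE[user] = (ip, time.time())
    return True
```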
This is not impossible or particularly difficult, but you need to write it yourself in one of the scripting languages available to nginx: either Perl, which is not recommended due to its limited ecosystem in nginx, or Lua, which is highly recommended due to the massive nginx Lua ecosystem (used by Cloudflare, for instance).
You will need to compile in the third-party nginx Lua module and associated modules, or just uninstall nginx and use the OpenResty bundle instead, which already has everything you will need included ... including Redis for storage if you need to scale up.
Here are some tools you can use as your building blocks:
Openresty Session Library
Openresty Redis Session Library
Openresty Encrypted Session Module
Note that you can implement the OpenResty stuff directly in nginx if you wish, without having to run OpenResty, as it is just a convenient bundle of nginx and useful modules.
I recently finished one of my first AgilityJS projects, which is a web-based file browser that lets you create and manage folders and files, and navigate around the folder tree. I followed the various AgilityJS recommendations regarding the design and ended up with all my HTML and Javascript in a single Javascript file.
Now, I would like to provide a "read-only" version of this app which does not have the ability to add/edit/remove files and folders. I'd like to have 2 user types on the website, one type which can only read the files and folders, and another user type who can administer.
My question is, how do I propagate these permission differences to my AgilityJS app? I know how to secure my endpoints and operations on the server side, but I'm wondering about the best way to do this on the client side. Should I create a separate version of the app with a limited set of functionality? Should I simply hide certain buttons/features? Are there theories, frameworks, etc. which deal with this issue? Any pointer in the right direction would be helpful.
LOL - probably one could write books about that topic. Some very basic ideas:
I would start with the philosophical debate around MVC. Some people argue, with the help of MVC, that no piece of code and no piece of the data model should ever be implemented twice: business logic and the model belong on the server. The opposite view focuses on serving users at any cost, even if that means maintaining code or the model twice for the sake of avoiding extra round trips. The way in between defines a master source for business code and model and makes sure all other places follow that leading master (the master is changed first). Take your pick. Your answer to that question sets the boundaries for how the user interface can, or has to, look for the user.
You need to think hard about a permissions concept. Looking at Microsoft, I would assume they invested a couple of dozen man-years across their applications to work out their permission concepts. The ideal permission concept very much depends on your application, so it is close to impossible to work this out without knowing at least a little about yours. However, the permission concept has to come up with policies covering roles, groups, access rights, access levels, context-driven permissions (e.g. based on IP address), and permission black- or whitelisting (the permissions each user has at creation). An example from Microsoft: http://office.microsoft.com/en-us/windows-sharepoint-services-help/permission-levels-and-permissions-HA010100149.aspx
Data on the client is not secure!!! Whatever you do on the client - be it data hiding, encryption, compression... - if it is done on the client, there are ways to read the data (even with data manipulation disabled) or to revert those protections. Somebody can also send data to your server for which the client was never even given an update form; hackers can build that themselves. So as soon as you start to implement permissions, make sure that users are permitted to read all the data you send to clients, and include permission checks every time you add or update data in the database.
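To make that last point concrete, here is a minimal server-side sketch in Python; the role names, the decorator and the delete_folder endpoint are invented for illustration. Hiding the button in the AgilityJS UI is cosmetic; a check like this on the server is what actually protects the data.

```python
#!/usr/bin/env python3
# Sketch of server-side permission enforcement; role names, session handling
# and the endpoint are hypothetical.
from functools import wraps

class Forbidden(Exception):
    """Raised when the caller lacks the required role."""

def require_role(role):
    """Decorator for handlers: the first argument is a dict describing the caller."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(current_user, *args, **kwargs):
            if role not in current_user.get("roles", ()):
                raise Forbidden(f"{current_user.get('name')} lacks role {role!r}")
            return handler(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_folder(current_user, folder_id):
    # ... actual deletion happens only after the role check above ...
    return f"folder {folder_id} deleted by {current_user['name']}"

if __name__ == "__main__":
    admin = {"name": "alice", "roles": ["admin", "reader"]}
    reader = {"name": "bob", "roles": ["reader"]}
    print(delete_folder(admin, 42))        # allowed
    try:
        delete_folder(reader, 42)          # refused even if the UI button were visible
    except Forbidden as exc:
        print("blocked:", exc)
```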
I've seen many different ways I can build a function/service to generate short URLs which I can then control via my own domain.
This sounds like a great idea; however, as I look at the advantages, such as being able to control these URLs long term, adjusting the end location if needed, and having more tracking over where they wind up, I'm wondering if there is already a service out there that provides this level of control without my needing to build/host/support the solution myself.
The exact features desired are as follows:
Control of where the URL points to AFTER it's generated (the underlying URL needs to change due to legal/regulatory issues)
More robust tracking of where the URL is used, as opposed to just doing a Google search for the tiny URL
The advantage would be that you own the links and are not dependent on a service that may go out of business. Also, if the shortened URL still uses your domain, it has SEO advantages for page rank. Another thing is that it reduces friction when your users click the link: when you use another domain for shortening, you are also dependent on the trust the user has in that organization.
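As a rough sketch of what "owning the links" buys you (table and column names are made up, and the web layer that issues the actual 301/302 is left out): the short code maps to a target you can repoint at any time, and every resolution is logged for your own tracking.

```python
#!/usr/bin/env python3
# Sketch of a self-hosted short-link table: code -> target that can be edited
# after the link is published, plus a click log for tracking.
import sqlite3
import time

db = sqlite3.connect("shortlinks.db")
db.executescript("""
    CREATE TABLE IF NOT EXISTS links  (code TEXT PRIMARY KEY, target TEXT NOT NULL);
    CREATE TABLE IF NOT EXISTS clicks (code TEXT, ts REAL, referrer TEXT);
""")

def set_target(code, target):
    """Create the short link, or repoint it later (e.g. for legal/regulatory changes)."""
    db.execute("INSERT INTO links(code, target) VALUES(?, ?) "
               "ON CONFLICT(code) DO UPDATE SET target = excluded.target", (code, target))
    db.commit()

def resolve(code, referrer=""):
    """Look up where to redirect and record the click for later reporting."""
    row = db.execute("SELECT target FROM links WHERE code = ?", (code,)).fetchone()
    if row is None:
        return None
    db.execute("INSERT INTO clicks VALUES(?, ?, ?)", (code, time.time(), referrer))
    db.commit()
    return row[0]   # the web layer would issue a 301/302 to this URL

if __name__ == "__main__":
    set_target("promo", "https://example.com/terms-v1")
    set_target("promo", "https://example.com/terms-v2")   # edited after publication
    print(resolve("promo", referrer="newsletter"))
```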
They are not necessarily optimised for shortness (typically they look like http://purl.org/net/kin), but PURLs might be what you want:
PURLs (Persistent Uniform Resource Locators) are Web addresses that act as permanent identifiers in the face of a dynamic and changing Web infrastructure. Instead of resolving directly to Web resources, PURLs provide a level of indirection that allows the underlying Web addresses of resources to change over time without negatively affecting systems that depend on them. This capability provides continuity of references to network resources that may migrate from machine to machine for business, social or technical reasons.
purl.org
You have complete control over where they point if you use the centralised server; you can also download their software to run your own.
I'm building a utility that will hopefully keep my wife in tune with how much money we have available.
I need a simple secure way of logging into my bank account and retrieving the balance.
Something like mechanize is the only method I can think of. I'm not even sure if that would work given the properly authenticated https that banks use.
Any ideas?
Write a Perl script using LWP::UserAgent. It supports HTTPS connections. The only issue might be if the site requires JavaScript.
Web Client Programming with Perl has a few examples to get you started if you're not too familiar with perl.
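Purely as an illustration of the same flow (the answer above would use LWP::UserAgent in Perl; the URL, form fields and regex here are invented placeholders), here is what the request side looks like in Python's standard library. It will still break on any site that requires JavaScript.

```python
#!/usr/bin/env python3
# Illustrative log-in-and-fetch flow; the bank URL, form fields and the balance
# regex are placeholders, and a real bank login is far more involved (2FA, JS, etc.).
import re
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar

# a cookie jar so the session cookie from the login survives into the next request
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(CookieJar()))

login_form = urllib.parse.urlencode({
    "username": "me",
    "password": "secret",
}).encode()

# POST the login form over HTTPS, then GET the balance page with the same cookies
opener.open("https://bank.example.com/login", data=login_form)
html = opener.open("https://bank.example.com/accounts/summary").read().decode()

match = re.search(r"Available balance:\s*\$([\d,\.]+)", html)
print(match.group(1) if match else "balance not found")
```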
If you really want to go there, get these extensions for Firefox: Live HTTP Headers, Firebug, FireCookie, and HttpFox. Also download cURL and a scripting language that can run cURL command-line tasks (or a scripting language like PHP or Perl that has access to cURL libraries directly).
I've started down this road for some idempotent GET tasks like getting PDFs of the S&P reports (of the stocks I track) from my online brokerage, and downloading the check images for my bank account. Both are repetitive, slow ways of downloading data to my computer, and the financial institutions don't provide any easier way of getting it.
Here's why you shouldn't: (as a shortcut I'm going to call the archetypal large bank, brokerage, or other financial institution "BloatBank")
BloatBank is not likely to make public their API for accessing this kind of information. So it can change any time and all your hard work will be for naught. Whenever they change their mechanism, you'll have to adapt.
If BloatBank finds out you've been using automatic scripting to try to access your account information, they may ban you because you've violated their terms of service.
You might screw up, and the interaction between the hodgepodge of scripts on BloatBank's server, and your scripts that access your account, might cause a Bad Thing like closing your account. Testing this kind of script is tremendously difficult because you don't have any documentation about how their online service works, and you don't have a test account you can mess with.
(a variant of the above) You think you're safe because you're issuing GET requests. But BloatBank is just a crazy bank that doesn't know anything about REST, so there are some GET requests that can mess up your account.
If someone else does use your script to maliciously sniff your online password or mess with your account, any liability coverage from BloatBank may disappear because you've opened a security hole.
Why don't you teach your wife how to login to the bank herself? Or use Quicken (or Mint, etc) and teach her how to use the auto-download feature?
Have you checked out Watir? It is fantastic for automating web-browser actions. And since it's written in Ruby, you can take the results and store them in a DB (or email them to yourself) if needed.
If you are open to AIR, I'd say build an AIR app. I have worked with mechanize and I think it's cool. AIR gives you similar features with a richer GUI (see HTMLLoader and DOM manipulation of webpage).
If I were you, I'd simply pull the page and manipulate the DOM to suit my visual needs.
Please, if you find this easy to do for your bank please post your bank's name. If I have the same one I'll be closing my account.
More to your question: the process of loading a web page inside your code rather than in a browser can be a black art, especially if there is any JavaScript involved. Your best bet would probably be embedding the IE web browser control in your app and then simulating keystrokes and mouse clicks to arrive at your balance page. Then scrape the HTML for the balance.
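If you get as far as the "scrape the HTML for the balance" step, the parsing side can be sketched like this in Python; the element id is an assumption about the page's markup, and the rendered HTML would come from whatever loaded the page (e.g. the embedded browser control).

```python
#!/usr/bin/env python3
# Sketch of scraping a balance out of rendered HTML with only the standard library.
# The id "available-balance" is a hypothetical placeholder for the real markup.
from html.parser import HTMLParser

class BalanceScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.inside_balance = False
        self.balance = None

    def handle_starttag(self, tag, attrs):
        # flag when we enter the element that carries the balance
        if dict(attrs).get("id") == "available-balance":
            self.inside_balance = True

    def handle_data(self, data):
        if self.inside_balance and data.strip():
            self.balance = data.strip()
            self.inside_balance = False

if __name__ == "__main__":
    sample = '<html><body><span id="available-balance">$1,234.56</span></body></html>'
    scraper = BalanceScraper()
    scraper.feed(sample)
    print(scraper.balance)   # -> $1,234.56
```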
I could try paying for Quicken and letting it do the balance downloading. Then I'd just need to find a way to get the number out of the software automatically.
This way I'm not violating any terms of service and I'm also reducing security risk since all "hacking" goes on locally.