How do I address the following security items with Azure Storage?

Does anybody have a clue how to address the following security issues that come with scanning Azure blob storage through ZAP:
Remote OS Command Injection:
https://<xxx>.blob.core.windows.net/00d36000000tnwaeaa/06836000000kUsRAAU?sv=2014-02-14&sr=b&sig=<yyy>%3D&se=2016-10-07T07%3A25%3A13Z&sp=rw%3Bstart-sleep+-s+5.
Azure requires the 'sp' parameter; however, it could be hijacked with the insertion of OS commands, as shown above. Is there a way to address this in Azure?
I haven't found any.
How do I set up the following with Azure Blob storage:
X-Frame-Options Header Not Set, Incomplete or No Cache-control and Pragma HTTP Header Set, Web Browser XSS Protection Not Enabled
Please help me as my Azure web service is failing the above ZAP scans.

For #1, it is not possible to tamper with the components of the SAS, because they are used to generate the signature (the sig= parameter), which is then validated by the service. If the parameters do not match the values used to generate the original signature, the request is rejected.
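The mechanism can be sketched in a few lines. This is a simplified illustration in Python using only the standard library, not Azure's exact string-to-sign format (the real format is defined in the SAS documentation); the key and parameter layout here are made up for the demo:

```python
import base64
import hashlib
import hmac

def sign_sas_params(account_key_b64: str, string_to_sign: str) -> str:
    """HMAC-SHA256 the string-to-sign with the decoded account key, base64-encode."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

demo_key = base64.b64encode(b"demo-account-key").decode()

# A simplified string-to-sign built from the SAS parameters, including sp.
original = sign_sas_params(demo_key, "rw\n2016-10-07T07:25:13Z\n/blob/acct/container")

# Appending ";start-sleep -s 5" to sp changes the string-to-sign, so the
# recomputed signature no longer matches the sig= parameter in the URL.
tampered = sign_sas_params(demo_key, "rw;start-sleep -s 5\n2016-10-07T07:25:13Z\n/blob/acct/container")
assert original != tampered
```

Since the service recomputes the signature over the parameters it actually received, any injected command text in sp simply produces a signature mismatch and a 403.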
For #2, Azure Storage does not allow modifying the response headers it returns, but it does allow configuring CORS. See https://msdn.microsoft.com/en-us/library/azure/hh452235.aspx

Related

ImageFlow.NET server accessing private Azure Blob Storage containers

I want to make sure I understand how ImageFlow.NET server works with images stored on a private Azure Blob Storage container.
Currently, we access images directly from Azure Blob Storage and we need to create a SAS token for images to be available in our frontend apps -- including mobile apps.
Our primary interest in ImageFlow.NET server is resizing images on demand. Would we still need to generate a SAS token for each image if we use ImageFlow.NET server to handle images for us?
For example, if we were to request a downsized version of image myimage.jpg, which is stored on Azure Blob Storage, do we still need to generate a SAS token or will ImageFlow server simply pull the image and send it to the requesting app without a SAS token?
Imageflow.NET Server has an easy API if you need to change this or hook up a different blob storage provider or design.
In the default Azure plugin setup, Imageflow authenticates with Azure using the configured credentials to access protected blobs, but clients themselves do not need a SAS token. Imageflow's own access can be restricted via Azure and by configuring the allowed buckets list.
Often, you need to have authorization for client/browser access as well as for Imageflow getting to blob storage. You can use any of the existing ASP.NET systems and libraries for this as if you're protecting static files or pages, or you can use Imageflow's built-in signing system that is actually quite similar to SAS tokens.
You can configure Imageflow to require a signature be appended to URLs. There's a utility method for generating those.
Then it's on you to only give those URLs to users who are allowed to access them.
Essentially, Imageflow supports any client authentication/authorization system you want to add to the app.
If you need something customized between Imageflow and Azure, that's also easy to do (in fact, there's a single-file adapter in the example project that implements a different approach for cases where you don't want to limit which buckets Imageflow accesses).
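The built-in signing system mentioned above works on the same principle as SAS tokens: an HMAC over the URL that the server can re-verify. Here is a language-agnostic sketch in Python (this is not Imageflow's actual API; the key name and query layout are hypothetical):

```python
import hashlib
import hmac
from urllib.parse import urlencode

SIGNING_KEY = b"app-secret"  # hypothetical shared secret, kept server-side

def sign_url(path: str, query: dict) -> str:
    """Append an HMAC signature so the server can verify the URL was issued by us."""
    qs = urlencode(sorted(query.items()))
    sig = hmac.new(SIGNING_KEY, f"{path}?{qs}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?{qs}&signature={sig}"

def verify_url(path: str, query: dict, signature: str) -> bool:
    """Recompute the signature from the path and query and compare."""
    expected = sign_url(path, query).rsplit("signature=", 1)[1]
    return hmac.compare_digest(expected, signature)

# Issue a signed URL for a 200px-wide rendition; hand it only to authorized users.
url = sign_url("/myimage.jpg", {"width": "200"})
```

Anyone who changes the path or the resize parameters without knowing the key produces a URL whose signature no longer verifies, which is exactly the property that lets you hand out per-image, per-transformation URLs.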

Is there a way to make APIs temporarily unavailable on Azure?

I'd like to implement a 503 feature for my APIs whenever the system is in maintenance, so that users get the appropriate HTTP response message. As of now, the way I'm doing this is to store a flag in the database and check it with every API request to see if I should be issuing a 503 error. However, I'm thinking there could be a different way to do this on Azure. Is there any setting I could turn on and off in the portal for this purpose, so that I don't need to add another lookup trip to the database? I'm using Azure Functions for my services.
You can simply introduce an App Setting that you check in your function, and then return an HTTP 503. Additionally, you can add a timestamp for when your API will be back and return it in the Retry-After HTTP header with your 503.
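The check itself is trivial. A minimal sketch in Python (App Settings surface as environment variables in Azure Functions; the setting name MAINTENANCE_MODE and the one-hour retry window are assumptions for the example):

```python
import os
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def maintenance_response(env=os.environ):
    """Return (status, headers) for an incoming request, honoring a
    hypothetical MAINTENANCE_MODE app setting."""
    if env.get("MAINTENANCE_MODE", "").lower() == "true":
        # Tell clients when to come back, in the HTTP-date form Retry-After expects.
        back_at = datetime.now(timezone.utc) + timedelta(hours=1)
        return 503, {"Retry-After": format_datetime(back_at, usegmt=True)}
    return 200, {}
```

Flipping the setting in the portal restarts nothing on the database side: the flag is read from the process environment, so the per-request database lookup disappears.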

Access-Control-Allow-Origin issue on BulkSMS

I am using Angular 5 to send a POST request to send SMS through BulkSMS: http://bulksms.com/
When making the request from Angular (client), I am facing this issue :
Origin http://TTTT:4200 is not allowed by Access-Control-Allow-Origin.
How can I correct this issue in BulkSMS?
Your browser's same-origin policy is restricting your Javascript code from accessing a third party (i.e. api.bulksms.com in this case) in the way in which you hoped to do it - and CORS (Cross-Origin Resource Sharing), which is a mechanism to relax those restrictions, is not relaxed enough to allow these requests (from you as an untrusted third party) either.
Wikipedia Same-origin policy : "Under the [same-origin] policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. An origin is defined as a combination of URI scheme, host name, and port number. This policy prevents a malicious script on one page from obtaining access to sensitive data on another web page". The Wikipedia page contains some good examples of the sorts of malicious Javascript code uses that the same-origin policy tries to limit.
It is important to note that these restrictions are only enforced by browsers: HTTP client code that is not running under a browser typically doesn't care about any of this.
For development purposes, there are some tools that can make your life easier - for example, you could use live-server to run a simple HTTP server which serves up your static files, while also using its --proxy option to route requests to api.bulksms.com and solve your same-origin policy problem in the process.
For production, a typical solution is to route your AJAX requests, which are destined for the third party service, via your own server (the one serving up your Javascript files to your browser), or a reverse proxy (which would front both your own and the third party service). If there is a server side to your application, you can make the HTTP requests to api.bulksms.com from there, using an HTTP client, and then have your Javascript code talk to your own server, to indirectly make the requests to bulksms.com. This also gives you the opportunity to add authentication headers on your server side, without your Javascript code ever having to know them (e.g. if you have one bulksms.com account, and many users able to use that account via your Angular app, but who should not know your credentials). Similarly, you could impose limits on what your Angular users can do in this way (e.g. to limit the number of SMSs they could each send per day).
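The server-side leg of that proxy arrangement can be sketched like this. The base URL, endpoint path and credentials below are assumptions for illustration, not BulkSMS's documented API surface:

```python
import urllib.request

BULKSMS_BASE = "https://api.bulksms.com/v1"  # assumed base URL
SERVER_SIDE_AUTH = "Basic dXNlcjpwYXNz"      # hypothetical credentials, never sent to the browser

def build_upstream_request(path: str, body: bytes) -> urllib.request.Request:
    """Translate a request received from the Angular app into a server-to-server
    request to BulkSMS, attaching the Authorization header on the server side."""
    req = urllib.request.Request(BULKSMS_BASE + path, data=body, method="POST")
    req.add_header("Authorization", SERVER_SIDE_AUTH)
    req.add_header("Content-Type", "application/json")
    return req

req = build_upstream_request("/messages", b'{"to": "+15550100", "body": "hello"}')
# urllib.request.urlopen(req) would then perform the actual call, server side,
# where the browser's same-origin policy does not apply.
```

Because the Authorization header is attached here rather than in the Angular code, your account credentials never reach the client, and this is also the natural place to enforce per-user sending limits.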

How to get a Signature for Blob Storage in Windows Azure?

https://configuat.blob.core.windows.net/b2c/Motorist/uat/files/ar/es/config_AR_es.json
This is my web service address.
As I read here, there is a way to update Blob storage, but I could not work out how to get the Signature.
My account is: configuat.
When using Shared Key authentication, the Authorization header must be in a specific format and must be able to authenticate your request with a matching signature. For more information on this topic, please see our MSDN article Authentication for the Windows Azure Storage Services.
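The shape of that header can be sketched as follows. The string-to-sign below is a simplified stand-in; the real one is assembled from the HTTP verb, the standard headers and the canonicalized resource exactly as the MSDN article specifies, and the key here is a demo value:

```python
import base64
import hashlib
import hmac

def shared_key_header(account: str, account_key_b64: str, string_to_sign: str) -> str:
    """Produce an Authorization header of the form
    'SharedKey <account>:<base64(HMAC-SHA256(key, string-to-sign))>'."""
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return f"SharedKey {account}:{sig}"

demo_key = base64.b64encode(b"demo-key").decode()  # the real key comes from the portal
header = shared_key_header("configuat", demo_key, "GET\n\n/configuat/b2c/Motorist")
```

The signature the service expects is recomputed on its side from the request it receives, so every byte of the string-to-sign has to match the canonical format or the request is rejected with 403.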

How to set different web authentication mode for different database in Lotus Domino

Disclaimer: I'm not a Notes admin, I just wrote the application :), and I try to help our client to use it.
We provide a simple database with one agent to accept and process HTTP POST messages from Internet.
The Domino server where this database is going to be installed is configured for Single SignOn authentication for web access.
Is there a way to set only our database to use a different type of authentication - i.e. Basic Authentication - so we can POST messages to the agent like this:
http://username:password@my.domino.server/mydb.nsf/myagent
I thought about another approach as well - to remove any form of auth and pass the credentials in the POSTed data itself. Then the agent will take care to process the data or not, based on whether the creds are OK. But this will most probably require some form of "impersonation" - i.e. somehow mapping the anonymous user to one which has the rights to execute the agent. So a valid answer to this question may be advice on how to set that up.
Additionally - we are looking at the web service approach (available in Domino 7.0+), but it will require changes on both sides - the sender (our publisher service) and the receiving agent. And it will most probably lead back to the original question of how to authenticate the sender.
Any advice in that regard (even changing the approach) will be highly appreciated.
Cheers
Since Domino 7.0.2 there is a new kind of web site rule, "Override Session Authentication", that allows you to specify, for a specific URL pattern (e.g. /folder/myapp.nsf/myagent?*), that BASIC auth should be used even if the whole server is configured for session-based auth.
This was initially introduced for RSS readers (that cannot handle sessions).
More information here :
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/index.jsp?topic=/com.ibm.help.domino.admin.doc/DOC/H_OVERRIDING_SESSION_AUTHENTICATION_8847_STEPS.html
Although it's horribly insecure to allow this, it is possible using web site documents on the server.
Create a web site document that uses basic authentication for your database (it will need its own domain name), and then everyone else can access the server through the default web site document, which uses session authentication.
I'd suggest adding Anonymous to the ACL of the database, with No access and nothing but Read public documents checked. Then, you can grant access to the agent by checking Allow Public Access users to view and run this agent in the Agent properties.
I don't know if it is possible to get the Authorization header into the agent to check the authentication. If there are only two parties communicating I would compute a hash of the message, a timestamp and a shared secret and use that to check access.
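That hash-plus-timestamp scheme is straightforward to implement. A Domino agent would do this in LotusScript or Java; the sketch below is Python purely to show the logic, with a made-up shared secret and a 300-second freshness window:

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"secret-shared-with-publisher"  # hypothetical shared secret

def token_for(message: bytes, timestamp: int) -> str:
    """HMAC over the timestamp and message; the sender POSTs this alongside them."""
    return hmac.new(SHARED_SECRET, b"%d:%s" % (timestamp, message),
                    hashlib.sha256).hexdigest()

def accept(message: bytes, timestamp: int, token: str,
           max_age: int = 300, now: int = None) -> bool:
    """Reject stale or forged messages: the token must match and the
    timestamp must be less than max_age seconds old."""
    now = int(time.time()) if now is None else now
    if now - timestamp > max_age:
        return False  # replayed or delayed message
    return hmac.compare_digest(token_for(message, timestamp), token)
```

Including the timestamp in the HMAC (rather than sending it unauthenticated) means an attacker can neither forge a fresh token nor replay an old message with a new timestamp.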
Edit
You won't be able to parse the Authorization header manually. Domino (at least 7.0.3) tries to do a session authentication if your request contains an authorization header, regardless of access settings on the object you request.
Here, put this URL in your Favorites toolbar:
http://www-01.ibm.com/support/knowledgecenter/SSKTMJ_8.5.3/welcome_Domino_8_5_3.html
Also, did you know that your Notes client and Domino server come with help databases full of very adequate documents? Try the Help menu for starters.
Said help databases are usually in the aptly named "help" folder. Open them.