What is the main difference between BeyondTrust Password Safe and DevOps Secrets Safe? My understanding is that Password Safe can already be used to store the code and tool passwords from the DevOps process, so why would I need DevOps Secrets Safe?
TL;DR = It depends on the problems you are looking to solve.
It mainly comes down to three criteria.
DevOps Secrets Safe is designed from the ground up for the high-volume, elastically scaling space that DevOps normally operates in. It's also built on the same technologies, e.g., Kubernetes, so it fits easily into DevOps environments.
Password Safe is designed primarily for user interactions, although it does have a very robust, efficient, and broad API for use in automation and application-to-application authentication.
The first criterion is the size of the secret you are looking to store: Password Safe maxes out at around 2 KB, which is not quite enough for certificates in many cases. The second criterion is the working environment for the solution. The last is the volume of requests. DevOps Secrets Safe, as an API-first solution, is always the first choice when you are working with high volumes of requests for machine-only identities. While Password Safe could be used, you have to consider the volume of requests and the impact that potentially has on the user experience of Password Safe.
If you run a DevOps environment, I'd heartily recommend DevOps Secrets Safe. If you run a mainly user-oriented environment, Password Safe is your choice. If you have a mixed environment, then it's either Password Safe or both. I personally prefer to keep the identities used in DevOps separate from those used by users and applications. By applications, I mean identities used by applications for database access and other connections that are typically established and pooled, rather than the extremely high-volume, high-frequency connections that are created and closed quickly. There are options for that with Password Safe, but DevOps Secrets Safe offers a cleaner solution.
I am building a service to handle a large number of devices, for a large number of users.
We have a complex schema of access roles assigned to each entity. Some data entries can be written to by certain users, while some users can only read from some entities (but can write to others).
This is a cloud service: there are more devices and users than can be handled by a single server machine (we are using non-relational cloud databases for this).
I was wondering if there is an established cloud-scale user/role management backend system which I could integrate to enforce the access rules, instead of writing my own. This tech should preferably be cloud-agnostic, so I would prefer not to use a SaaS solution but to deploy my own.
I am looking for a system which can scale to millions of users and billions of data entities.
I think authentication is not going to be a big issue; there are very robust cloud-based solutions available for storing identities and authenticating millions of users. Authorization will be trickier, and will depend a lot on how granular you want it to be. You could look at Apigee, for example, as a very scalable proxy that might help you implement this. So getting to the point where you have a token with which you can verify the user's identity, and which might contain some scopes, is not going to be hard in my opinion. If that is enough for you, then I would just look at Auth0, Okta, and the native IDM solution of whatever cloud platform you are using (Cognito, Cloud Identity, etc.).
I think you will find that more features come with a very hefty price tag. Auth0 is far superior to Cognito, but Cognito still has enough features for basic use cases and will end up costing a fraction of Auth0 in large deployments. So everything comes with pros and cons. If you have very complex requirements, such as a bunch of big legacy repositories that you need to integrate, then products like Auth0 rapidly start looking more attractive.
Personally I would look at Auth0, Cognito, and Apigee, and my decision would depend massively on parameters that you haven't mentioned in your question. Obviously these are all SaaS solutions, which I think you should definitely be using anyway. I would not host this myself unless I had no other choice; going that route will radically limit your choices and probably increase expenses. All the cool stuff is happening in the cloud.
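To make the token-plus-scopes part concrete, here is a minimal sketch (assuming the IdP issues RS256-signed JWT access tokens and publishes a JWKS endpoint; the URL, audience, and scope name are placeholders, and the PyJWT library is used purely for illustration):

    # Verify an IdP-issued JWT and do coarse-grained authorization on its "scope" claim.
    # JWKS_URL and AUDIENCE are placeholders, not values from any specific provider.
    import jwt                      # PyJWT
    from jwt import PyJWKClient

    JWKS_URL = "https://YOUR_IDP_DOMAIN/.well-known/jwks.json"
    AUDIENCE = "https://api.example.com"

    def authorize(token: str, required_scope: str) -> dict:
        # Fetch the signing key matching the token's "kid" header from the JWKS endpoint.
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        # Verify signature, expiry, and audience in one call.
        claims = jwt.decode(token, signing_key.key, algorithms=["RS256"], audience=AUDIENCE)
        # Check the space-separated "scope" claim for the permission we need.
        if required_scope not in claims.get("scope", "").split():
            raise PermissionError(f"missing scope: {required_scope}")
        return claims

Anything more granular than that (per-entity read/write rules over billions of entities) still has to live in your own data model or in a dedicated policy engine.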
I have a unique use case. I want to create a front-end system to manage employee pay. Each employee will have a profile, with their hourly rate stored for viewing and updating in the future.
With user permissions, we can block certain people from seeing pay in the frontend.
My challenge is that I want to keep developers from opening up the database and viewing pay.
An initial thought was to hash the pay against my password. I'm sure there is some reverse engineering that could be used to recover the pay, but it wouldn't be as easy.
Open to thoughts on how this might be possible.
This is by no means a comprehensive answer, but I wanted at least to point out a couple of things:
In this case, you need to control security at the server level. Trying to control security at the browser level, using JavaScript (or any similar framework like React), is fighting a losing battle. It will always be insecure, since anyone (given the necessary time and resources) will eventually find out how to break it, and will see (and maybe even modify) the whole database.
Also, if you need a secure environment, you'll need to separate developers from the Production environment. They can play in the Development environment, and maybe in the Quality Assurance environment, but by no means in the Production environment. Not even read-only access. A separate team controls Production (access, passwords, firewalls, etc.) and deploys to it -- using instructions provided by the developers.
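One thing worth adding (my own illustration, not part of the answer above): hashing is one-way, so if the application has to display the pay again, what you are really describing is encryption. A minimal sketch of application-layer encryption with the Python cryptography library, where the key lives outside the database (environment variable, vault, etc.) so that raw database access alone reveals nothing; the key name is a placeholder:

    # Illustrative sketch only: encrypt the pay field in the application so that
    # reading the database directly does not expose it. PAYROLL_ENC_KEY is a
    # placeholder; generate the key once with Fernet.generate_key() and keep it
    # out of the database and out of the developers' reach.
    import os
    from cryptography.fernet import Fernet

    fernet = Fernet(os.environ["PAYROLL_ENC_KEY"])

    def encrypt_pay(hourly_rate: str) -> bytes:
        return fernet.encrypt(hourly_rate.encode())    # store this ciphertext in the DB

    def decrypt_pay(ciphertext: bytes) -> str:
        return fernet.decrypt(ciphertext).decode()     # done server-side, behind the permission check

Note that anyone who can read the application's key (or the running application's memory) can still decrypt, which is exactly why the separation-of-duties point above matters.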
I am working on an application in which I am using AWS Cognito to store user data. I am trying to understand how to manage the backup and disaster recovery scenarios for Cognito.
Following are the main queries I have:
What is the availability of this stored user data?
What are the possible scenarios with Cognito that I need to take care of before we go into production?
AWS does not have any published SLA for AWS Cognito, so there is no official guarantee for your data stored in Cognito. As to how durable your data is, AWS Cognito uses other AWS services under the hood (DynamoDB, I think), and data in those services is replicated across Availability Zones.
I guess you are asking about disaster recovery scenarios. There is not much you can do on your end. If you use user pools, there is no feature to export user data as of now. Although you can do so by writing a custom script, a built-in backup feature would be much more efficient and reliable. If you use Federated Identities, there is no way to export and re-use identities. If you use datasets provided by Cognito Sync, you can use Cognito Streams to capture dataset changes. Not exactly a stellar way to back up your data.
In short, there is no official word on availability and no official backup or DR feature. I have heard there are feature requests for these, but who knows when they will be released, and there is not much you can do beyond writing custom code and following best practices. The only thing I can think of is to periodically back up your user pool's user data with a custom script using the AdminGetUser API. But again, there are rate limits on how many times you can call this API, so a backup done this way can take a long time.
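For what it's worth, a rough sketch of such a periodic export with boto3 (this uses ListUsers for bulk export rather than per-user AdminGetUser calls; the user pool ID is a placeholder, and both APIs are rate-limited, so some throttling/retry handling is assumed):

    # Export all users of a Cognito user pool to a JSON file.
    import json
    import boto3

    cognito = boto3.client("cognito-idp")

    def export_user_pool(user_pool_id):
        users, token = [], None
        while True:
            kwargs = {"UserPoolId": user_pool_id, "Limit": 60}
            if token:
                kwargs["PaginationToken"] = token
            page = cognito.list_users(**kwargs)
            users.extend(page["Users"])
            token = page.get("PaginationToken")
            if not token:
                break
        return users

    with open("userpool-backup.json", "w") as f:
        # default=str handles datetime fields such as UserCreateDate
        json.dump(export_user_pool("us-east-1_EXAMPLE"), f, default=str)

Remember that this captures attributes only; passwords can never be exported.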
AWS now offers an SLA for Cognito. In the event they are unable to meet their availability target (99.9% at the time of writing), you will receive service credits.
Even though there are a couple of third-party solutions available, when restoring a user pool the users are created through the admin flow (they are not restored so much as re-created by an administrator), and they end up in "Force Change Password" status. So the users will be forced to change the password using the temporary password, and that has to be facilitated from the front end of the application.
More info: https://docs.amazonaws.cn/en_us/cognito/latest/developerguide/signing-up-users-in-your-app.html
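To illustrate that admin flow, a hedged sketch of the restore side with boto3 (the attribute filtering and the suppressed invitation email are assumptions; adjust them to your pool's schema):

    # Re-create a user from an exported record with AdminCreateUser. The user
    # comes back in FORCE_CHANGE_PASSWORD status and must set a new password,
    # as described above.
    import boto3

    cognito = boto3.client("cognito-idp")

    SKIP = {"sub"}  # generated by Cognito, cannot be set on create

    def restore_user(user_pool_id, exported_user):
        attrs = [a for a in exported_user["Attributes"] if a["Name"] not in SKIP]
        cognito.admin_create_user(
            UserPoolId=user_pool_id,
            Username=exported_user["Username"],
            UserAttributes=attrs,
            MessageAction="SUPPRESS",   # don't send the default invitation email
        )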
Tools available:
https://www.npmjs.com/package/cognito-backup
https://github.com/mifi/cognito-backup
https://github.com/rahulpsd18/cognito-backup-restore
https://github.com/serverless-projects/cognito-tool
Please bear in mind that some of these tools are outdated and cannot be used. I have tested cognito-backup-restore and it works as expected.
Also, you have to think about how to secure the user information output by these tools. They usually create a JSON file containing all the user information (except the passwords, as passwords cannot be backed up), and this file is not encrypted.
The best solution so far is to prevent accidental deletion of user pools with AWS SCPs (service control policies).
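As a sketch of that SCP approach (the policy name, the second denied action, and the target OU ID are assumptions on my part; this requires the AWS Organizations management account):

    # Create and attach a service control policy that denies user pool deletion.
    import json
    import boto3

    org = boto3.client("organizations")

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["cognito-idp:DeleteUserPool", "cognito-idp:DeleteUserPoolDomain"],
            "Resource": "*",
        }],
    }

    policy = org.create_policy(
        Name="DenyCognitoUserPoolDeletion",
        Description="Prevent accidental deletion of Cognito user pools",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                      TargetId="ou-xxxx-xxxxxxxx")  # placeholder OU or account ID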
I've been needing a new VM host for some time now, and from working with/on AWS at work, "The Cloud" seems to be a good idea.
I've done some math, and no matter how I count it, it's going to be cheaper to do it myself than to colo or anything else. Plus, I really like lots of blinking lights :D
A year or so ago, I heard about OpenStack and have been looking at it cursorily since then. It seems big and complex (and scary!), and some friends who have been trying to deploy it at work for a year and still haven't quite finished/succeeded indicate that it is what it seems :)
However, I like tormenting myself, so I've decided I'm going to give it a try. It does provide all the functionality, and then some, that I need. Theoretically, I could go with Vagrant, but that's not quite halfway to what I want/need.
So, I've been looking at https://en.wikipedia.org/wiki/OpenStack#Components and from that I came to the following conclusion:
Required: (Nova, Glance, Horizon, Cinder)
These seem to be the "core" services. I need all of them.
Nova: Compute fabric controller
Glance: Image service (for templates)
Horizon: Dashboard
Cinder: Block storage devices (can work with ZoL via a third-party driver)
Less important: (Barbican, Trove, Designate)
I really don't need any of these; they're more "could be nice to have at some point".
Barbican: REST API designed for the secure storage, provisioning, and management of secrets
Trove: Database-as-a-service provisioning relational and non-relational database engines
Designate: DNS as a Service
Possibly not needed: (Neutron, Keystone)
These I'm not sure I need. I have DHCP, VLAN, VPN, DNS, LDAP, and Kerberos services on the network that work just fine, and I'm not replacing them!
Neutron (previously Quantum): Network management (DHCP, VLAN)
Keystone: Identity service (can work with existing LDAP servers)
Not needed: (Swift, Ceilometer, Ironic, Zaqar, Searchlight, Sahara, Heat, Manila)
Meh! I'm doing this for me, for my basement, and for my own development and enjoyment, so I don't need these. It would be nice to go with fully object-based storage, but that's not feasible for me at this time.
Swift: Object storage system
Ceilometer: Telemetry service (billing)
Ironic: Bare-metal provisioning instead of virtual machines
Zaqar: Multi-tenant cloud messaging service for web developers (similar to SQS)
Searchlight: Advanced and consistent search capabilities across various OpenStack cloud services
Sahara: Easily and rapidly provision Hadoop clusters (storing and managing vast amounts of data cheaply and efficiently)
Heat: Orchestration layer (store the requirements of a cloud application in a file that defines what resources are necessary for that application)
Manila: Shared file system service (manage shares in a vendor-agnostic framework)
If we don't count storage (I already have my own block storage, which I can use with Cinder and some 3rd party plugins/modules) and compute nodes (everything that's left over will become compute nodes), can I run all this on one machine? With a hot standby/failover?
Everything is going to be connected to the same power jack, same rack, and same [outgoing] network cable, so more redundancy than that is overkill. I don't even need that, but "why not" :)
The basic recommendation I've heard is four to six machines, and after a lot of pestering of the people who said that, it turns out they mean "two storage, two controller, two compute". Which is what I was thinking as well: running this on two machines should be enough. They're basically only going to run Glance, Horizon, and Cinder, and possibly Neutron and Keystone.
Neither of them seems to be very resource-heavy.
Is there something I'm missing?
Oh, and none of this is going to face the 'Net! It's all just for me.
Though it is theoretically possible to bring up OpenStack without Keystone, it is practically almost impossible and makes the system pretty inconvenient to use.
You can definitely run a full OpenStack on one machine (or even in a VM). Check out DevStack (http://docs.openstack.org/developer/devstack/) -- you just run a shell script to bring up a full working OpenStack setup.
As long as you are not worried about availability and your workload is minimal, a single-node deployment is a pretty good start to get your feet wet.
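For reference, a single-node DevStack bring-up is roughly the following (run as a regular user with sudo; the passwords and host IP below are placeholders, and the exact service mix can be tuned further in local.conf):

    git clone https://opendev.org/openstack/devstack
    cd devstack
    cat > local.conf <<'EOF'
    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    HOST_IP=192.168.1.10
    EOF
    ./stack.sh

Bear in mind that DevStack is intended for development and testing, not as a long-lived production deployment.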
I worked on a project a while back where the architect decided to use LDAP for managing authentication/authorization, rather than a traditional database approach.
The application was expected to grow rapidly by approximately 500-1,000 users a day, and then plateau at around 200k users. Beyond that, there was nothing special about this application.
I didn't ask at the time, but I'm curious around why we would've used LDAP here.
As I understand it, the real strengths of LDAP lie in organizations where users are required to authenticate against several disconnected systems, and LDAP provides a single auth provider.
Are there additional benefits that make it a good fit for certain applications?
Actually, scaling is one of them: LDAP is easily distributed across multiple servers.
The other reason, whether your architect mentioned it or not, is that never in the history of the world has there been a single application that stayed single. Someone will have a new idea, and now there's a unified single sign-on technique already available and standard.
LDAP is incredibly fast for read operations compared with your average RDBMS. Auth operations are read-intensive and generally change data infrequently, which plays well to LDAP's strengths. On the flip side, write operations are generally much slower than their database counterparts.
So, while LDAP would most likely not be an adequate alternative for your general data storage needs, it is a strong choice for auth.
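As a small illustration of that read-heavy pattern, here is a hedged sketch using the Python ldap3 library (the server URL, base DN, and uid filter are placeholders, and an anonymous bind is assumed for the lookup):

    # Classic LDAP authentication: one read to find the user's DN, then a bind
    # as that DN to verify the password.
    from ldap3 import Server, Connection, ALL

    LDAP_URL = "ldaps://ldap.example.com"
    BASE_DN = "ou=people,dc=example,dc=com"

    def authenticate(username, password):
        server = Server(LDAP_URL, get_info=ALL)
        # Read side: cheap, cacheable, and easy to spread across replicas.
        # In real code, escape `username` (e.g. ldap3.utils.conv.escape_filter_chars).
        with Connection(server, auto_bind=True) as conn:   # anonymous bind; use a service account if needed
            if not conn.search(BASE_DN, f"(uid={username})", attributes=[]):
                return False
            user_dn = conn.entries[0].entry_dn
        # Auth side: binding as the user succeeds only with the correct password.
        return Connection(server, user=user_dn, password=password).bind()

The writes (password changes, new accounts) are comparatively rare, which is exactly the profile LDAP is optimized for.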
I wouldn't call the database solution for users 'traditional': in fact I would call LDAP traditional in this case. It is much better as a user registry in all sorts of ways, including performance, standardisation, availability of APIs, availability of browser clients, ease of scaling/federation, security, ...