Restrict commits in git stash by allowing only a few users to commit to master (locking)

Currently we have TFS and use its feature of only allowing certain users to check in after review and testing. Our company is adopting a DevOps model and moving towards Atlassian Stash, and this tool doesn't have this feature readily available. Has anyone implemented it?

I'm assuming you mean Atlassian Stash, not git stash?
If so, you can use branch permissions to enforce a workflow that only allows certain users to write to a branch (such as master), and/or to allow changes only via pull requests.
Combined with pull request settings that require a minimum number of approvals you can achieve a strict change management workflow as desired.
Now, what I just said does require slightly different thinking. Your wording implied that review and testing should happen before check-in. With git, people can and should work on branches that are committed and pushed to the central repository. It is before the merge that review and testing take place.
As an aside, these features aren't exactly pre-requisites for a "devops model" (some may in fact argue the opposite), but I can see how judicious use of workflow settings (branch permissions and pull request settings) could play a part.
Disclosure: I work for Atlassian
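For context, on a plain git server without Stash you would approximate branch permissions with a server-side pre-receive hook. Below is a minimal sketch of just the allow/deny decision such a hook would make per pushed ref; the user names and protected branch are hypothetical, and a real hook would read "<old-sha> <new-sha> <ref>" lines from stdin and take the pusher from the authenticated user.

```python
# Sketch: the per-ref decision logic of a pre-receive hook that restricts
# pushes to master to an approved set of users. Names are hypothetical.
ALLOWED_PUSHERS = {"alice", "bob"}      # reviewers allowed to update master
PROTECTED_REF = "refs/heads/master"

def is_push_allowed(ref: str, pusher: str) -> bool:
    """Anyone may push to other branches; only ALLOWED_PUSHERS to master."""
    return ref != PROTECTED_REF or pusher in ALLOWED_PUSHERS

print(is_push_allowed("refs/heads/master", "carol"))     # False
print(is_push_allowed("refs/heads/feature-x", "carol"))  # True
```

Stash's built-in branch permissions do the same job without custom hooks, which is why they are the recommended route here.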

Related

Keeping information private, even from database users

I have a unique use case. I want to create a front-end system to manage employee pay. I will have a profile for each employee and their hourly rate stored for viewing/updates in the future.
With user permissions, we can block certain people from seeing pay in the frontend.
My challenge is that I want to keep developers from opening up the database and viewing pay.
An initial thought was to hash the pay against my password. I'm sure there is some reverse engineering that could be used to recover the pay, but it wouldn't be as easy.
Open to thoughts on how this might be possible.
This is by no means a comprehensive answer, but I wanted at least to point out a couple of things:
In this case, you need to control security at the server level. Trying to control security at the browser level, using JavaScript (or any similar framework like React), is fighting a losing battle. It will always be insecure, since anyone (given the necessary time and resources) will eventually find out how to break it, and will see (and maybe even modify) the whole database.
Also, if you need an environment with security, you'll need to separate developers from the Production environment. They can play in the Development environment, and maybe in the Quality Assurance environment, but by no means in the Production environment. Not even read-only access. A separate team controls Production (access, passwords, firewalls, etc.) and deploys to it -- using instructions provided by the developers.
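To see why the "hash the pay against my password" idea offers little protection: hourly rates occupy a tiny value space, so once the secret leaks (or if the hash is unkeyed), anyone with database access can enumerate every plausible value in milliseconds. A sketch, with a hypothetical rate and secret:

```python
import hashlib

def hash_pay(pay_cents: int, secret: str) -> str:
    # "Hash the pay against my password", as the question suggests.
    return hashlib.sha256(f"{secret}:{pay_cents}".encode()).hexdigest()

stored = hash_pay(4250, "hunter2")  # $42.50/hour; secret is hypothetical

# A developer who obtains (or guesses) the secret just enumerates the
# tiny space of plausible hourly rates:
recovered = next(
    p for p in range(0, 100_000)        # $0.00 .. $999.99 per hour
    if hash_pay(p, "hunter2") == stored
)
print(recovered)  # 4250
```

This is why access control has to live at the server and environment level rather than in how the value is stored.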

How to work with AWS Cognito in production environment?

I am working on an application in which I am using AWS Cognito to store user data. I am trying to understand how to manage the backup and disaster recovery scenarios for Cognito.
Following are the main queries I have:
What is the availability of this stored user data?
What are the possible scenarios with Cognito which I need to take care of before we go to production?
AWS does not have any published SLA for AWS Cognito, so there is no official guarantee for your data stored in Cognito. As to how safe your data is: AWS Cognito is built on other AWS services (for example, DynamoDB, I think), and data in those services is replicated across Availability Zones.
I guess you are asking for Disaster Recovery scenarios. There is not much you can do on your end. If you use Userpools, there is no feature to export user data, as of now. Although you can do so by writing a custom script, a built-in backup feature would be much more efficient & reliable. If you use Federated Identities, there is no way to export & re-use Identities. If you use Datasets provided by Cognito Sync, you can use Cognito Streams to capture dataset changes. Not exactly a stellar way to backup your data.
In short, there is no official word on availability, and no official backup or DR feature. I have heard that there are feature requests for these, but who knows when they will be released. There is not much you can do beyond writing custom code and following best practices. The only thing I can think of is to periodically back up your user pool's user data with a custom script using the AdminGetUser API. But again, there are rate limits on how many times you can call this API, so a backup using this method can take a long time.
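The custom-script approach can be sketched as follows, using the paginated ListUsers call (bulk-friendlier than per-user AdminGetUser, but still rate-limited). The pool ID and delay are assumptions; the boto3 cognito-idp client is passed in as a parameter so the function can be exercised without AWS credentials:

```python
import json
import time

def backup_user_pool(client, pool_id: str, out_path: str, delay: float = 0.1) -> int:
    """Page through a Cognito user pool with ListUsers and dump the users
    to a JSON file. Passwords cannot be exported, so they never appear in
    the output. `delay` crudely spaces out calls to respect rate limits.
    `client` is assumed to be a boto3 cognito-idp client."""
    users, token = [], None
    while True:
        kwargs = {"UserPoolId": pool_id}
        if token:
            kwargs["PaginationToken"] = token
        page = client.list_users(**kwargs)
        users.extend(page["Users"])
        token = page.get("PaginationToken")
        if not token:
            break
        time.sleep(delay)
    with open(out_path, "w") as f:
        json.dump(users, f, default=str)  # datetimes serialised as strings
    return len(users)
```

Note that the resulting JSON file contains user attributes in the clear, so it needs the same access controls as the pool itself.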
AWS now offers an SLA for Cognito. In the event they are unable to meet their availability target (99.9% at the time of writing), you will receive service credits.
Even though there are a couple of third-party solutions available, when restoring a user pool the users are created via the admin flow (users are not restored so much as re-created by an admin), and they end up in the "Force Change Password" status. The users will therefore be forced to change their password using the temporary password, and that has to be facilitated from the front end of the application.
More info: https://docs.amazonaws.cn/en_us/cognito/latest/developerguide/signing-up-users-in-your-app.html
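The admin-flow restore described above can be sketched like this; the client is again passed in for testability, and the handling of attributes is an assumption (the server-assigned "sub" attribute cannot be re-imported):

```python
def restore_user(client, pool_id: str, user: dict) -> None:
    """Re-create one backed-up user via the admin flow. The account comes
    back in FORCE_CHANGE_PASSWORD state with a temporary password, which
    is why the application front end must support the change-password
    step. `client` is assumed to be a boto3 cognito-idp client."""
    client.admin_create_user(
        UserPoolId=pool_id,
        Username=user["Username"],
        UserAttributes=[a for a in user.get("Attributes", [])
                        if a["Name"] != "sub"],  # "sub" is server-assigned
        MessageAction="SUPPRESS",  # don't email; the app delivers the temp password
    )
```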
Tools available:
https://www.npmjs.com/package/cognito-backup
https://github.com/mifi/cognito-backup
https://github.com/rahulpsd18/cognito-backup-restore
https://github.com/serverless-projects/cognito-tool
Please bear in mind that some of these tools are outdated and cannot be used. I have tested "cognito-backup-restore" and it is working as expected.
Also, you have to think about how to secure the user information output by these tools. Usually they create a JSON file containing all the user information (except the passwords, as passwords cannot be backed up), and this file is not encrypted.
The best solution so far is to prevent accidental deletion of user pools with AWS SCPs.

What is the purpose of Agent User Id in configuring Replication Agent in AEM?

As per my understanding, it is the user who has access to publish the specific page/resource.
Documentation goes like this:
Depending on the environment, the agent will use this user account to:
collect and package the content from the author environment
create and write the content on the publish environment
Leave this field empty to use the system user account (the account defined in sling as the administrator user; by default this is admin).
Does this mean the replication agent comes into action only when replicating content from the Package Manager (by clicking Replicate for a specific package), or also when activating a page/resource from Site Admin?
The Agent User ID property is used to manage what part of the content tree will be replicated using the given replication queue. This has nothing to do with actual package creation - it applies to the whole replication process.
Multitenant use case
For a complicated infrastructure it may happen that a multi-tenant architecture involves some sharding approach. Imagine a geo-spread architecture with no CDN involved, where each brand site should be quickly accessible from its local region. Due to technical limitations, pushing the whole content set (all sites) around the world might not be acceptable.
Dedicated DAM environment use case
When DAM storage is shared across multiple AEM implementations, it is often desirable to detach it from regular authoring by creating a separate DAM-only instance. On such a platform the replication agents should be configured to have read access to /content/dam only, in order not to interfere with other content trees.
Solution
In this case, the Agent User ID can be configured to use a dedicated user permission scheme. All the changes the preconfigured user can see will be replicated to the corresponding endpoint. There are technical alternatives, like implementing a transport handler (see https://github.com/Cognifide/CQ-Transport-Handler/blob/master/README.md).
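For illustration, the Agent User Id typically ends up as the userId property on the replication agent's configuration node. A hedged sketch of a DAM-only agent follows; the node path, host name, and user names are hypothetical, and exact property names may vary by AEM version:

```xml
<!-- Sketch: /etc/replication/agents.author/dam_only/.content.xml -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
          xmlns:cq="http://www.day.com/jcr/cq/1.0"
          xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          jcr:primaryType="cq:Page">
    <jcr:content
        jcr:primaryType="nt:unstructured"
        sling:resourceType="cq/replication/components/agent"
        enabled="{Boolean}true"
        transportUri="http://dam-publish.example:4503/bin/receive?sling:authRequestLogin=1"
        transportUser="transport-user"
        userId="dam-replication-user"/>
</jcr:root>
```

Here dam-replication-user would be granted read access only below /content/dam, so the agent can never replicate anything outside that tree.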

In a continuous delivery process, is there a proper way to automatically move data from production to development?

In a common continuous-delivery process, the code is moving from a development instance to a staging instance to production instance.
For development purposes (reproducing bugs, testing performance with a full data set), developers most of the time fetch data from the production database into their development environment. See, for example, this question.
In my company, we use three instances beside production in our continuous delivery process:
latest: synced every night with our SCM trunk
staging: with the last released version before deployment to production
stable: with the exact same version of the software deployed in production (useful to reproduce bugs found on production)
The problem is that on the stable instance, for reproducing bugs we would like to have the exact same data set that is on production. So we would like to sync databases on a nightly basis.
Is it a good practice? How should we implement it? Any pitfalls?
Depending on the data you have in production, you may not want to replicate it back to non-production environments. (Or may not even be allowed to under certain regulations.) If you have customer data, personally identifiable information (PII), regulated data, financial data, credit card data, health data, SSN, or any other type of sensitive data, if you replicate it you need the full controls you have (or should have) in production - which you probably don't, and probably don't want.
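If data must flow back to non-production despite containing sensitive fields, it is usually masked or pseudonymised on the way out of production. A minimal sketch of one common approach (the field names and salt are hypothetical):

```python
import hashlib

PII_FIELDS = {"name", "email", "ssn"}  # hypothetical sensitive columns

def mask_row(row: dict, salt: str) -> dict:
    """Replace PII values with stable pseudonyms before copying a row to a
    non-production database. Salted hashing keeps referential integrity
    (the same email always maps to the same token) without exposing the
    real value."""
    return {
        key: (hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()[:12]
              if key in PII_FIELDS else value)
        for key, value in row.items()
    }

row = {"id": 7, "email": "jane@example.com", "plan": "pro"}
masked = mask_row(row, salt="rotate-me-quarterly")
print(masked["id"], masked["plan"])  # 7 pro  (non-PII columns untouched)
```

Note that salted hashing like this is pseudonymisation, not anonymisation; under some regulations the masked copy may still count as personal data.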
There are several virtual database (VDB) solutions which I recommend you look into.
One of them is Delphix.
Windocks supports containers with integrated database cloning, and is used for just the use case described. Full disclosure, I work for Windocks.

Perforce: Any side-effects to sharing Login accounts / Client-Specs among multiple users?

I am currently working on a file system application in C# that requires users to login to a Perforce server.
During our analysis, we figured that having unique P4 login accounts per user is not really beneficial and would require us to purchase more licenses.
Considering that these users are contractual and will only use the system for a predefined amount of time, it's hard to justify purchasing licenses for each new contractual user.
With that said, are there any disadvantages to having a group of users share one common login account to a Perforce server? For example, we'd have X groups who share X logins.
From a client-spec point of view, will Perforce be able to detect that even though someone synced to head, a newly logged-in user (who's on another machine) also needs to sync to head? Or are all files flagged as synced to head since someone else synced already?
Thanks
The client specs are per machine, and so will work in the scenario you give.
However, Perforce licenses are strictly per person, so you would be breaking the license agreement and using the software illegally. I really would not advocate that.
In addition to the 'real' people you need licenses for, you can ask for a couple of free 'robot' accounts to support things like automatic build services, admin etc.
Perforce have had arrangements in the past for licensing temporary users such as interns, so what I would recommend is that you contact them and ask what they can do for you in your situation.
Greg has an excellent answer and you should follow his directions first. But I would like to make a point on the technical side of sharing clients across multiple machines: this is generally a bad idea. Perforce keeps track of the contents of each client by client name only. So if you sync a client on one machine and then try to sync the same client on another machine, the other machine will only get the "recently" changed files and none of the changes that were synced on the first machine.
The result is that you have to do a lot of force syncing, or keep track of the changelists you sync to and do some flushing and then syncing.