With a scope-based repository workspace, will all the members be able to check in code? - rtc

With a scope-based repository workspace, will all the members be able to check in code?
Or is only delivering to the stream possible, as with a public repository workspace?

With a scope-based repository workspace, will all the members be able to check in code?
No: only the owner of a repo workspace can check in code, or accept change sets from other flow targets.
Or is only delivering to the stream possible, as with a public repository workspace?
No: only the owner of a repo workspace can initiate a deliver.
If you want to deliver change sets from another repo workspace, you have to accept them into your own repo workspace first (by modifying its flow target to temporarily point to that other repo workspace), and then, once accepted, you can deliver them to the stream.

Related

Hyperledger private data dynamic access

Is it possible to dynamically set access to private data in Hyperledger fabric 1.4? Unlike the collections file where we have to add the organizations that can have access to a particular "collection", is it possible to add access through chaincode?
Had to do some research on this myself, but since Fabric v1.4 it is possible to dynamically add peers to private data collections. Private data reconciliation ensures that all private data in that collection that was created before the peer joined will be delivered to the new peer.
In more detail: with the collections file you specify an initial endorsement policy. This endorsement policy can be updated later through the SetPrivateDataValidationParameter API exposed on the chaincode stub (shim). After this update takes place, new private data key-value pairs will be validated according to the new endorsement policy.
Additionally, if you want to update the collection definition itself, you can specify a new collections file when upgrading the chaincode. The collection definition specifies which organizations' peers are allowed to see the data, so in order to change that, you need to upgrade your chaincode.
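To make that concrete, here is a minimal, hedged sketch of how a key-level endorsement policy on a private data key could be set from Go chaincode with the Fabric 1.4 shim. The collection name, key, and MSP IDs are placeholders, and the import paths assume the pre-2.0 shim packages:

package main

import (
    "github.com/hyperledger/fabric/core/chaincode/shim"
    "github.com/hyperledger/fabric/core/chaincode/shim/ext/statebased"
    pb "github.com/hyperledger/fabric/protos/peer"
)

type PrivateDataCC struct{}

func (c *PrivateDataCC) Init(stub shim.ChaincodeStubInterface) pb.Response {
    return shim.Success(nil)
}

// Invoke attaches a key-level endorsement policy to one private data key,
// requiring endorsement from peers of the listed organizations (placeholders).
func (c *PrivateDataCC) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
    ep, err := statebased.NewStateEP(nil)
    if err != nil {
        return shim.Error(err.Error())
    }
    if err := ep.AddOrgs(statebased.RoleTypePeer, "Org1MSP", "Org2MSP"); err != nil {
        return shim.Error(err.Error())
    }
    policy, err := ep.Policy()
    if err != nil {
        return shim.Error(err.Error())
    }
    // "collectionMarbles" and "asset1" are hypothetical collection/key names.
    if err := stub.SetPrivateDataValidationParameter("collectionMarbles", "asset1", policy); err != nil {
        return shim.Error(err.Error())
    }
    return shim.Success(nil)
}

func main() {
    if err := shim.Start(new(PrivateDataCC)); err != nil {
        panic(err)
    }
}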

What is the best way to handle multiple AWS accounts as environments in Terraform?

We want to have each of our terraform environments in a separate AWS account in a way that will make it hard for accidental deployments to production to occur. How is this best accomplished?
We are assuming that an account is dedicated to Production, another to PreProduction and potentially other sandbox environments also have unique accounts, perhaps on a per-admin basis. One other assumption is that you have an S3 bucket in each AWS account that is specific to your environment. Also, we expect your AWS account credentials to be managed in ~/.aws/credentials (or with an IAM role perhaps).
Terraform Backend Configuration
There are two kinds of state to handle: the primary state stored in the backend, and the remote states read from other configurations. For the primary state we're using the concept of Partial Configuration. We can't pass variables into the backend config through modules or other means, because it is read before those are determined.
Terraform Config Setup
This means that we declare the backend with some details missing and then provide them as arguments to terraform init. Once initialized, it stays set up until the .terraform directory is removed.
terraform {
  backend "s3" {
    encrypt = true
    key     = "name/function/terraform.tfstate"
  }
}
Workflow Considerations
We only need to change how we initialize: the missing parts of the configuration are supplied via -backend-config arguments to terraform init. I'm providing all of them through bash aliases in my ~/.bash_profile, like this.
alias terrainit='terraform init \
  -backend-config "bucket=s3-state-bucket-name" \
  -backend-config "dynamodb_table=table-name" \
  -backend-config "region=region-name"'
Accidental Misconfiguration Results
If the required -backend-config arguments are left off, initialization will prompt you for them. If one is provided incorrectly, it will likely cause a failure for permissions reasons. Also, the remote state must be configured to match, or it will also fail. Multiple mistakes in identifying the appropriate account environment must occur in order to deploy to Production.
Terraform Remote State
The next problem is that the remote states also need to change per account, and they can't pull their configuration from the backend config; they can, however, be set through variables.
Module Setup
To ease switching accounts, we've set up a really simple module which takes in a single variable, aws-account, and returns a bunch of outputs with the appropriate values for the remote states to use. We also include other things that are environment/account specific. The module is a simple main.tf with map variables keyed by aws-account, whose values are specific to that account. Then we have a bunch of outputs that do a simple lookup of the map variable, like this.
variable "aws-region" {
description = "aws region for the environment"
type = "map"
default = {
Production = "us-west-2"
PP = "us-east-2"
}
}
output "aws-region" {
description = “The aws region for the account
value = "${lookup(var.aws-region, var.aws-account, "invalid AWS account specified")}"
}
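The remote state configuration further down also references an s3-state-bucket-name output from this module. A sketch of what that map/output pair might look like (the bucket names are placeholders):

variable "s3-state-bucket-name" {
  description = "s3 state bucket for the environment"
  type        = "map"
  default = {
    Production = "prod-terraform-state"
    PP         = "pp-terraform-state"
  }
}

output "s3-state-bucket-name" {
  description = "The s3 state bucket for the account"
  value       = "${lookup(var.s3-state-bucket-name, var.aws-account, "invalid AWS account specified")}"
}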
Terraform Config Setup
First, we must pass the aws-account to the module. This will probably be near the top of main.tf.
module "environment" {
source = "./aws-account"
aws-account = "${var.aws-account}"
}
Then add a variable declaration to your variables.tf.
variable "aws-account" {
description = "The environment name used to identify appropriate AWS account resources used to configure remote states. Pre-Production should be identified by the string PP. Production should be identified by the string Production. Other values may be added for other accounts later."
}
Now that we have account specific variables output from the module, they can be used in the remote state declarations like this.
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
key = "name/vpc/terraform.tfstate"
region = "${module.environment.aws-region}"
bucket = "${module.environment.s3-state-bucket-name}"
}
}
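Once that data source is in place, its outputs can be consumed like any other value. For example, assuming the vpc state exposes a vpc_id output (a hypothetical name for illustration):

resource "aws_security_group" "app" {
  name   = "app"
  vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"
}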
Workflow Considerations
If the workflow is not changed at all after this setup, the user will be prompted for the value of the aws-account variable whenever a plan/apply or the like is performed, via a prompt like the one below. The content of the prompt is the description of the variable in variables.tf.
$ terraform plan
var.aws-account
The environment name used to identify appropriate AWS account
resources used to configure remote states. Pre-Production should be
identified by the string PP. Production should be identified by the
string Production. Other values may be added for other accounts later.
Enter a value:
You can skip the prompt by providing the variable on the command line, like this:
terraform plan -var="aws-account=PP"
Accidental Misconfiguration Results
If the aws-account variable isn't specified, it will be requested. If an invalid value is provided that the aws-account module isn't aware of, it will return errors containing the string "invalid AWS account specified" several times, because that is the default value of the lookup. If the aws-account is passed correctly but doesn't match up with the values identified in terraform init, it will fail because the AWS credentials being used won't have access to the S3 bucket being identified.
We faced a similar problem and solved it (partially) by creating pipelines in Jenkins or any other CI tool.
We had 3 different envs (dev, staging and prod): same code, different tfvars, different AWS accounts.
When Terraform code is merged to master it can be applied to staging, and only when staging is green can production be executed.
Nobody runs Terraform manually in prod; AWS credentials are stored in the CI tool.
This setup can prevent an accident like you described, but it also prevents different users from applying different local code.
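As a rough sketch of what each pipeline stage might run (the file names are illustrative, and each stage is assumed to run in a clean checkout), the staging commands run automatically on merge, while the production commands only run after staging is green and a manual approval:

# staging stage: triggered automatically on merge to master
terraform init -backend-config=staging.backend.tfvars
terraform plan -var-file=staging.tfvars -out=staging.plan
terraform apply staging.plan

# production stage: gated behind a green staging run and manual approval
terraform init -backend-config=production.backend.tfvars
terraform plan -var-file=production.tfvars -out=production.plan
terraform apply production.plan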

S3 notification when a file is overwritten or deleted

We store our log files on S3, and to meet PCI requirements we have to be notified when someone tampers with the log files.
How can I be notified every time a put request is placed that replaces an existing object, or when an existing object is deleted? The alert should not fire if a new object is created, unless it replaces an existing one.
S3 does not currently provide overwrite-specific notifications. Deletion notifications were added after the initial launch of the notification feature and can notify you when an object is deleted, but they do not notify you when an object is implicitly deleted by an overwrite.
However, S3 does have functionality to accomplish what you need, in a way that seems superior to what you are contemplating: object versioning and multi-factor authentication for deletion, both discussed here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
With versioning enabled on the bucket, an overwrite of a file doesn't remove the old version of the file. Instead, each version of the file is identified by an opaque version ID string assigned by S3.
If someone overwrites a file, you would then have two versions of the same file in the bucket -- the original one and the new one -- so you not only have evidence of tampering, you also have the original file, undisturbed. Any object with more than one version in the bucket has, by definition, been overwritten at some point.
If you also enable Multi-Factor Authentication (MFA) Delete, then none of the versions of any object can be removed without access to the hardware or virtual MFA device.
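For reference, both settings can be enabled with the AWS CLI along these lines. The bucket name is a placeholder, and MFA Delete can only be enabled with the bucket owner's root credentials and MFA device:

# enable versioning on the bucket
aws s3api put-bucket-versioning \
  --bucket my-log-bucket \
  --versioning-configuration Status=Enabled

# additionally enable MFA Delete (root credentials + MFA token required)
aws s3api put-bucket-versioning \
  --bucket my-log-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"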
As a developer of AWS utilities, tools, and libraries (3rd party; I'm not affiliated with Amazon), I am highly impressed by Amazon's implementation of object versioning in S3, because it works in such a way that client utilities unaware of versioning, or of the fact that versioning is enabled on the bucket, should not be affected in any way. This means you should be able to activate versioning on a bucket without changing anything in your existing code. For example:
fetching an object without an accompanying version id in the request simply fetches the newest version of the object
objects in versioned buckets aren't really deleted unless you explicitly delete a particular version; however, you can still "delete an object," and get the expected response back. Subsequently fetching the "deleted" object without specifying an accompanying version id still returns a 404 Not Found, as in the non-versioned environment, with the addition of an unobtrusive x-amz-delete-marker: header included in the response to indicate that the "latest version" of the object is in fact a delete marker placeholder. The individual versions of the "deleted" object remain accessible to version-aware code, unless purged.
other operations that are unrelated to versioning, which work on non-versioned buckets, continue to work the same way they did before versioning was enabled on the bucket.
But, again... with code that is version-aware, including the AWS console (two new buttons appear when you're looking at a versioned bucket -- you can choose to view it with a versioning-aware console view or versioning-unaware console view) you can iterate through the different versions of an object and fetch any version that has not been permanently removed... but preventing unauthorized removal of objects is the point of MFA delete.
Additionally, of course, there's bucket logging, which is typically only delayed by a few minutes from real-time and could be used to detect unusual activity... the history of which would be preserved by the bucket versioning.
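Server access logging can likewise be switched on via the CLI, roughly like this (the target bucket and prefix are placeholders; the target bucket should be a separate bucket with the appropriate log-delivery permissions):

aws s3api put-bucket-logging \
  --bucket my-log-bucket \
  --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "my-audit-bucket", "TargetPrefix": "s3-access/"}}'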

How to find the Transport Request with my custom objects?

I've copied two Function Modules QM06_SEND_PAPER_STEP2 and QM06_FM_TASK_CLAIM_SEND_PAPER to similar Z* Function Modules. I've put these FMs into a ZQM06 Function Group which was created by another developer.
I want to use Transaction SCC1 to move my developments from one client to another. In transaction SE01 Transport Organizer I don't find the names of my 2 function modules anywhere.
How can I find out the change request with my work?
I copied the FM in order to modify functionality and I know FMs are client independent.
Function modules, like other ABAP workbench entities, are client-independent. That is, you do not need to copy them between clients on the same instance.
However, you can find the transport request that contains your changes by going to transaction SE37, entering the name of your function module, and then choosing Utilities -> Versions -> Version Management from the menu.
Provided you did not put the changes into a local package (like $TMP), the system will have asked you for a transport request when you saved or activated your changes, unless the function group is already in a modifiable transport request, in which case it will have created a new task for your user under that request; that task will contain your changes. To check the package, use Goto -> Object Directory Entry from the menu in SE37.
Function modules are often added to transports under the function group name, especially if they're new.
The easiest way to find the transport is to go to SE37, display the function module, and then go to Version Management.
The answer from mydoghasworms is correct. Alternatively you can also use transaction SE03 -> Search for Objects in Requests/Tasks (top of the transaction screen) -> check the box next to "R3TR FUGR" and type in your function group name.

Identify 'Current' open workspace through TFS API?

Is there a way to programmatically determine the current workspace of the open sln/proj in Visual Studio using the TFS API? I've seen how the VersionControlServer can retrieve all of the known workspaces, but is there anything I can use to tie that to what the user currently has (or doesn't have) open?
There is another overload of the GetWorkspace method on an instantiated VersionControlServer object. You can call GetWorkspace with the local path like Bernhard states, but you can also call it with the workspace name and workspace owner. Since the workspace name defaults to the local computer name, you can usually get away with using Environment.MachineName, but there is always going to be that developer who changes the workspace name.
Example:
TeamFoundationServer _tfs = TeamFoundationServerFactory.GetServer(server);
_tfs.EnsureAuthenticated();
VersionControlServer _vcs = (VersionControlServer)_tfs.GetService(typeof(VersionControlServer));
Workspace _ws = _vcs.GetWorkspace(Environment.MachineName, Environment.UserName);
If you can determine the physical path of the solution or project file, then you can query that file in TFS and you should see which workspace has been mapped to that local file location.
The problem with the approach from Dave Teply is that it assumes you already have an instance of a VersionControlServer or at least the TeamFoundationServerUri.
There is a more powerful way though, using the Workstation class. Ricci Gian Maria has written a quite extensive blog post about this topic. The snippet below contains the essentials of that post:
Use the Workstation class to get the WorkspaceInfo for the path you're looking for; this will search the workspaces of all TFS servers registered on that workstation to see if there's a match:
Workstation workstation = Workstation.Current;
WorkspaceInfo info = workstation.GetLocalWorkspaceInfo(path);
Now that you have the WorkspaceInfo, you can use it to connect to TFS; the workspace info contains the collection URI for that specific team project collection, and from that you can get the Workspace instance:
TfsTeamProjectCollection collection = new TfsTeamProjectCollection(info.ServerUri);
Workspace workspace = info.GetWorkspace(collection);
Use the collection or the workspace to then get access to the VersionControlServer:
VersionControlServer versionControlServer = collection.GetService<VersionControlServer>();
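Putting the pieces together, a minimal sketch for the original question might look like the following. It assumes you already have the path of the open solution (for example from DTE.Solution.FullName inside a Visual Studio extension); the path used here is a placeholder:

// Placeholder: in a real VS extension this would come from DTE.Solution.FullName.
string solutionPath = @"C:\src\MySolution\MySolution.sln";

WorkspaceInfo info = Workstation.Current.GetLocalWorkspaceInfo(solutionPath);
if (info != null)
{
    // Connect to the collection that owns the mapping and resolve the workspace.
    TfsTeamProjectCollection collection = new TfsTeamProjectCollection(info.ServerUri);
    Workspace workspace = info.GetWorkspace(collection);
    Console.WriteLine("Workspace: {0} (owner: {1})", workspace.Name, workspace.OwnerName);
}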