Adding a TFS server group to access levels via command line - tfs-security

I am creating a group of users within TFS 2013 and I want to add them to a non-default access level (e.g. the Full access group), but I noticed I am only able to do this through the web interface by adding a TFS group under that level. I am wondering if there is a way to do this via the developer tool (command line), as everything I am doing is done in a batch script.
Any input would be appreciated. Thanks!

Create 3 TFS server groups and add these groups to the different access levels (e.g. TFS_ACCESS_LEVEL_(NONE|STANDARD|FULL)). Now use the TFSSecurity command-line tool to add groups to these existing, mapped groups (tfssecurity /g+ TFS_ACCESS_LEVEL_NONE GroupYouWantToHaveThisAccessLevel). There is no other way to directly add people to the access levels, except perhaps through the Object Model using C#.

For the record, tfssecurity may require the URI, which can be obtained via the API. This is easy to do in PowerShell; here is how to create a TFS group:
# get-tfs is a helper function from the full script linked below; it returns a TFS connection object for the given collection URL.
[psobject] $tfs = get-tfs -serverName $collection
# Look up the team project URI via the Common Structure Service (CSS).
$projectUri = ($tfs.CSS.ListAllProjects() | where { $_.Name -eq $project }).Uri
# $TFSSecurity points at TFSSecurity.exe; /gc creates a group at the given scope.
& $TFSSecurity /gc $projectUri $groupName $groupDescription /collection:$collection
Full script at TfsSecurity wrapper.

Related

Unable to upgrade/import a dacpac file

ORIGINAL QUESTION:
I'm trying to upgrade a blank database created in a test VM using a .dacpac file, but I get the following error message:
Error SQL72014: .Net SqlClient Data Provider: Msg 15401, Level 16, State 1, Line 1 Windows NT user or group 'SOURCE_DOMAIN\SOURCE SQL Readers' not found. Check the name again.
Error SQL72045: Script execution error. The executed script:
CREATE LOGIN [SOURCE_DOMAIN\SOURCE SQL Readers]
FROM WINDOWS WITH DEFAULT_LANGUAGE = [us_english];
(Microsoft.SqlServer.Dac)
------------------------------
Program Location:
at Microsoft.SqlServer.Dac.DeployOperation.ThrowIfErrorManagerHasErrors()
at Microsoft.SqlServer.Dac.DeployOperation.<>c__DisplayClass14.<>c__DisplayClass16.<CreatePlanExecutionOperation>b__13()
at Microsoft.Data.Tools.Schema.Sql.Dac.OperationLogger.Capture(Action action)
at Microsoft.SqlServer.Dac.DeployOperation.<>c__DisplayClass14.<CreatePlanExecutionOperation>b__12(Object operation, CancellationToken token)
at Microsoft.SqlServer.Dac.Operation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.ReportMessageOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.CompositeOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.CompositeOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.DeployOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.Execute(IOperation operation, DacLoggingContext loggingContext, CancellationToken cancellationToken)
at Microsoft.SqlServer.Dac.DacServices.InternalDeploy(IPackageSource packageSource, Boolean isDacpac, String targetDatabaseName, DacDeployOptions options, CancellationToken cancellationToken, DacLoggingContext loggingContext, Action`3 reportPlanOperation, Boolean executePlan)
at Microsoft.SqlServer.Dac.DacServices.Deploy(DacPackage package, String targetDatabaseName, Boolean upgradeExisting, DacDeployOptions options, Nullable`1 cancellationToken)
at Microsoft.SqlServer.Management.Dac.DacWizard.UpgradeModel.RunAction()
at Microsoft.SqlServer.Management.Dac.DacWizard.ExecuteDacPage.backgroundWorker1_DoWork(Object sender, DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
I'm assuming that user existed in the source, but not in the destination. Will creating that user on the VM fix this issue, or will I need to use a different approach to get the schema from the source re-created in a VM destination for testing purposes?
UPDATE TO QUESTION 1:
The .dacpac file is generated on a server which is on a totally different domain and it will not be possible for the test VM to ever be on the same domain. With that in mind, how do I get the .dacpac file to work on the test VM?
If you still have access to the source database, you could generate the .dacpac again, this time ignoring the logins. Depending on which tool you use, you should have an option like "Include User Login Mapping".
Visual Studio has the most robust options; see "How to create DACPAC file?" by Kamil Nowinski:
Image source: https://sqlplayer.net/wp-content/uploads/2018/10/visual-studio-extract-dacpac-options.png
You could recreate the proper logins and users afterwards with your own SQL script.
Related: Using Publish Profiles to Deploy a DACPAC Database Without User Accounts
The solution to this problem lies in defining an appropriate publish profile for your DACPAC, which then instructs your chosen deployment tool (SqlPackage.exe, Visual Studio, or Azure DevOps) on how to carry out the deployment.
The profile is defined as an XML file.
ExcludeUsers
ExcludeLogins
ExcludeDatabaseRoles
Setting these options to True within the publish profile means that creation or modification of these objects is skipped entirely during any database deployment.
One more option is to use dbatools.io - Export-DbaDacPackage
Key point here is:
$exportProperties = "/p:IgnorePermissions=True /p:IgnoreUserLoginMappings=True" # Ignore
and publish.xml:
...
<ExcludeLogins>True</ExcludeLogins>
<IgnorePermissions>True</IgnorePermissions>
<IgnoreLoginSids>True</IgnoreLoginSids>
<IgnoreRoleMembership>True</IgnoreRoleMembership>
Summary:
create a dacpac without logins
create a publish.xml file that ignores permissions
Creating the user inside the VM is one way to solve this issue, but you will need to change 'SOURCE_DOMAIN' to the VM hostname, as the user will be part of the local user database.
Probably the best solution is to fix the VM's communication with the Domain Controller, so that authentication works and the domain user accounts are actually visible within the VM.
Take a look at this:
This error usually occurs because of the COMPATIBILITY_LEVEL.
I would recommend trying this query out:
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = 130;
Hope it helps!
If a dacpac contains users or groups that aren’t on the domain where the dacpac is being deployed, then one way to deploy it is using the SqlPackage command line tool, as this allows you to explicitly list the object types you want to exclude.
To exclude users and groups, the PowerShell command would be something like this:
.\SqlPackage.exe `
/a:Publish `
/tsn:"(localdb)\mssqllocaldb" `
/tdn:YourDatabaseName `
/p:ExcludeObjectTypes="Users;RoleMembership;Logins;ServerRoles;ServerRoleMembership;Permissions" `
/sf:YourFile.dacpac
This command uses the following switches:
/a (Action): the action to run, in this case Publish
/tsn (TargetServerName): the name of the server to deploy to
/tdn (TargetDatabaseName): the name of the database to deploy to
/p (Properties): name/value pairs of action-specific properties, in this case:
ExcludeObjectTypes: a semicolon-delimited list of object types that should be ignored
/sf (SourceFile): the dacpac file to deploy
More details of the syntax for Publish (including a list of the object types that can be excluded) are available in the docs for the publish action.

Is it possible to use service accounts to schedule queries in BigQuery's "Schedule Query" feature?

We are using the Beta Scheduled query feature of BigQuery.
Details: https://cloud.google.com/bigquery/docs/scheduling-queries
We have a few ETL scheduled queries running overnight to optimize the aggregations and reduce query cost. They work well and there haven't been many issues.
The problem arises when the person who scheduled the query using their own credentials leaves the organization. I know we can do "update credential" in such cases.
I read through the documentation and gave it a try, but couldn't really find out whether we can use a service account instead of individual accounts to schedule queries.
Service accounts are cleaner, tie into the rest of the IAM framework, and are not dependent on a single user.
So if you have any additional information regarding scheduled queries and service accounts, please share.
Thanks for taking time to read the question and respond to it.
Regards
BigQuery Scheduled Query now does support creating a scheduled query with a service account and updating a scheduled query with a service account. Will these work for you?
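For the update-credentials case specifically, here is a minimal sketch using a recent version of the google-cloud-bigquery-datatransfer Python client; the transfer config name and service account email below are placeholders, and it assumes the caller has permission to act as that service account:
from google.cloud import bigquery_datatransfer
from google.protobuf import field_mask_pb2

# Placeholders: replace with your own transfer config resource name and
# service account email.
transfer_config_name = "projects/1234/locations/us/transferConfigs/abcd1234"
service_account_name = "etl-scheduler@my-project.iam.gserviceaccount.com"

client = bigquery_datatransfer.DataTransferServiceClient()

# Only the credentials are changed; the query text and schedule stay as-is.
transfer_config = bigquery_datatransfer.TransferConfig(name=transfer_config_name)
updated = client.update_transfer_config(
    {
        "transfer_config": transfer_config,
        "update_mask": field_mask_pb2.FieldMask(paths=["service_account_name"]),
        "service_account_name": service_account_name,
    }
)
print("Updated scheduled query:", updated.name)
The create request accepts a service_account_name field as well, so a scheduled query can be created under the service account from the start.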
While it's not supported in the BigQuery UI, it's possible to create a transfer (including a scheduled query) using the Python GCP SDK for DTS, or from the bq CLI.
The following is an example using Python SDK:
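(Note: the sample below targets an older release of the google-cloud-bigquery-datatransfer client library; in newer releases client.project_path was replaced by common_project_path and the client methods take keyword or request-object arguments.)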
r"""Example of creating TransferConfig using service account.
Usage Example:
1. Install GCP BQ python client library.
2. If it has not been done, please grant p4 service account with
iam.serviceAccounts.getAccessToken permission on your project.
$ gcloud projects add-iam-policy-binding {user_project_id} \
--member='serviceAccount:service-{user_project_number}@'\
'gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com' \
--role='roles/iam.serviceAccountTokenCreator'
where {user_project_id} and {user_project_number} are the user project's
project id and project number, respectively. E.g.,
$ gcloud projects add-iam-policy-binding my-test-proj \
--member='serviceAccount:service-123456789@'\
'gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com'\
--role='roles/iam.serviceAccountTokenCreator'
3. Set environment var PROJECT_ID to your user project id, and
GOOGLE_APPLICATION_CREDENTIALS to the service account key path. E.g.,
$ export PROJECT_ID='my_project_id'
$ export GOOGLE_APPLICATION_CREDENTIALS='./serviceacct-creds.json'
4. $ python3 ./create_transfer_config.py
"""
import os
from google.cloud import bigquery_datatransfer
from google.oauth2 import service_account
from google.protobuf.struct_pb2 import Struct

PROJECT = os.environ["PROJECT_ID"]
SA_KEY_PATH = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]

credentials = (
    service_account.Credentials.from_service_account_file(SA_KEY_PATH))
client = bigquery_datatransfer.DataTransferServiceClient(
    credentials=credentials)

# Get full path to project
parent_base = client.project_path(PROJECT)

params = Struct()
params["query"] = "SELECT CURRENT_DATE() as date, RAND() as val"

transfer_config = {
    "destination_dataset_id": "my_data_set",
    "display_name": "scheduled_query_test",
    "data_source_id": "scheduled_query",
    "params": params,
}

parent = parent_base + "/locations/us"
response = client.create_transfer_config(parent, transfer_config)
print(response)
As far as I know, unfortunately you can't use a service account to directly schedule queries yet. Maybe a Googler will correct me, but the BigQuery docs implicitly state this:
https://cloud.google.com/bigquery/docs/scheduling-queries#quotas
A scheduled query is executed with the creator's credentials and
project, as if you were executing the query yourself
If you need to use a service account (which is great practice BTW), then there are a few workarounds listed here. I've raised a FR here for posterity.
This question is quite old, but I came across this thread while searching for the same thing.
Yes, it is possible to use a service account to schedule BigQuery jobs.
While creating the scheduled query, click on "Advanced options" and you will get the option to select a service account.
By default it uses the credentials of the requesting user.
(Image: the BigQuery "create scheduled query" screen.)

How do I make a Bigquery dataset public using command line tool or Python?

I'm making an open data website powered by BigQuery. How do I make a Bigquery dataset public using command line tool or Python?
Note: I tried to make every dataset in my project public but got an unexplained error. In the project permission settings via the web UI, under "Add members" I put allAuthenticatedUsers and gave it the Data Viewer permission. The error was: "Error: Sorry, there's a problem. If you entered information, check it and try again. Otherwise, the problem might clear up on its own, so check back later."
I wasn't able to find any command line examples for updating permissions. I also can't find a JSON string to pass to https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/update
To achieve this programmatically, you need to use a datasets patch request and add a specialGroup entry with the value allAuthenticatedUsers, like so:
{
  "datasetReference": {
    "projectId": "<removed>",
    "datasetId": "<removed>"
  },
  "access": [
    ... //other access roles
    {
      "specialGroup": "allAuthenticatedUsers",
      "role": "READER"
    }
  ]
}
Note: You should use a read-modify-write cycle as described here & here:
Note about arrays: Patch requests that contain arrays replace the existing array with the one you provide. You cannot modify, add, or delete items in an array in a piecemeal fashion.
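If you prefer Python over raw REST calls, here is a minimal sketch of that read-modify-write cycle using the google-cloud-bigquery client library; the project and dataset IDs are placeholders:
from google.cloud import bigquery

client = bigquery.Client()

# Read: fetch the current dataset definition (placeholder IDs).
dataset = client.get_dataset("my-project.my_dataset")

# Modify: append allAuthenticatedUsers as a READER to the existing access list.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="specialGroup",
        entity_id="allAuthenticatedUsers",
    )
)
dataset.access_entries = entries

# Write: update only the access list, leaving everything else untouched.
dataset = client.update_dataset(dataset, ["access_entries"])
print("Dataset access entries:", dataset.access_entries)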

Get the JDBC Providers for the Cell using wsadmin

I am trying to list the JDBC providers at cell scope, but the command also lists the providers at node and server scope. How do I get rid of the node- and server-scoped providers in the list?
AdminConfig.list('JDBCProvider', AdminConfig.getid( '/Cell:CellV70A/'))
output:
'"DB2 Universal JDBC Driver Provider(cells/CellV70A/nodes/nodename|resources.xml#JDBCProvider_1302300228086)"\n"DB2 Universal JDBC Driver Provider(cells/CellV70A|resources.xml#JDBCProvider_1263590015775)"\n"WebSphere embedded ConnectJDBC driver for MS SQL Server(cells/CellV70A|resources.xml#JDBCProvider_1272027151294)"'
If you look at the help for the AdminConfig.list command:
wsadmin>print AdminConfig.help('list')
WASX7056I: Method: list
...
Method: list
Arguments: type, scope
Description: Lists all the configuration objects of the type named
by "type" within the scope of the configuration object named by "scope."
...
It says "within the scope". Since node and server-scoped JDBCProviders are within the scope of the cell, they are returned by your command. If you list all JDBCProviders at cell scope using the Admin Console and then look at the Command Assistance, you'll see something like:
Note that scripting list commands may generate more information than is displayed by the administrative console because the console generally filters with respect to scope, templates, and built-in entries. AdminConfig.list('JDBCProvider', AdminConfig.getid('/Cell:MyCell/'))
So you'll need to filter your return list similarly. You could throw together a very simple script to do so:
# Split the returned string into one entry per provider
jdbcProviders = AdminConfig.list('JDBCProvider', AdminConfig.getid('/Cell:MyCell')).splitlines()
for jdbcProvider in jdbcProviders:
    # Skip providers defined at node or server scope
    if "/nodes/" in jdbcProvider or "/servers/" in jdbcProvider:
        continue
    print jdbcProvider

Creating a document in Domino Data Service(REST api)

I am trying to create a document in the Reservation form using the Domino Data Service (REST API). The response shows that the document is created, but when I try to access that reservation (xyz's) using the Notes client, it does not show up.
URL : http://server/Conf.nsf/api/data/documents?form=Reservation
Payload:
{
  "#authors": ["server", ""],
  "#form": "Reservation",
  "From": "xyz",
  "AltFrom": "xyz",
  "Chair": "xyz",
  "AltChair": "xyz",
  "Principal": "xyz",
  "SequenceNum": 1,
  "ORGState": "5",
  "ResourceType": "1",
  "ResourceName": "BELLA VISTA/Building15",
  "Room": "BELLA VISTA/Building15",
  "Capacity": 2,
  "_ViewIcon": 133,
  "AppointmentType": "3",
  "StartTimeZone": "Z=-3005$DO=0$ZN=India",
  "EndTimeZone": "Z=-3005$DO=0$ZN=India",
  "Topic": "Test Meeting",
  "SendTo": "CN=BELLA VISTA/O=Building15",
  "Encrypt": "0",
  "Categories": "",
  "RouteServers": "server",
  "StartDate": "2015-03-28T06:30:00Z",
  "StartTime": "2015-03-28T07:30:00Z",
  "StartDateTime": "2015-03-28T06:30:00Z",
  "EndDate": "2015-03-28T07:30:00Z",
  "EndTime": "2015-03-28T07:30:00Z",
  "EndDateTime": "2015-03-28T07:30:00Z",
  "UpdateSeq": 1,
  "Author": "xyz",
  "ResourceOwner": "",
  "ReservedFor": "xyz",
  "ReservedBy": "xyz",
  "RQStatus": "A",
  "Purpose": "Test from REST",
  "NoticeType": "A",
  "Step": 3,
  "Site": "Building15",
  "ReserveDate": "2015-03-28T06:30:00Z"
}
If you're successfully creating the document but unable to see it, that suggests that you have a problem with a Reader Names or Author Names field that is denying you access to the document.
Do you have Manager access to the conf.nsf database, and Full Access Administrator rights on the server? If so, activate your full access rights via Domino Administrator before opening conf.nsf, see if you can find your document, then check the document properties and examine all fields with SUMMARY READ-ACCESS NAMES or SUMMARY READ/WRITE-ACCESS NAMES types to determine what you've put there. Compare to manually created documents to see what should be there.
(If you lack the necessary permissions for this, either work with an administrator who has the rights, or set up a test server, as AFAIK there's no way to test the Domino Data Service with a local replica.)