Find the TFS path of a merged branch - API

Using TFS, I have a trunk $/project/trunk and a branch $/project/dev/feature/new_one.
I have merged my branch back to trunk as follows:
C33 ($/project/trunk)
|  \
|   \
|    C32 ($/project/dev/feature/new_one)
|    |
|    |
|    |
...
I use the TFS API and can find the merge changeset C33. With the method QueryMerges(), I'm able to find the parent changeset C32 with all the changes to the files, but not the information I need :(
Is there a way, using the TFS API, to find the repository path of the merged branch $/project/dev/feature/new_one?
With the changeset C32, I'm only able to get the paths of the modified files, like $/project/dev/feature/new_one/path/to/file.txt, but I'm unable to extract the branch path from the full path of a file :(
PS: A solution working since TFS 2008 would be best, but if it only works from 2010 onwards, that should be good...
PS2: Solving this problem will help with managing merge changesets in git-tfs, which I develop...

Unfortunately there is no API method to get the branch for a given item path, which you would think would be a fairly common use case.
From TFS 2010 onwards you can use VersionControlServer.QueryRootBranchObjects to query all branches in version control. Using RecursionType.Full as the parameter to this method, you get a BranchObject array of all branches with no parent and all of their descendants. You can then determine the branch for a given file path as follows:
var collection = new TfsTeamProjectCollection(new Uri("http://tfsuri"));
var versionControl = collection.GetService<VersionControlServer>();
var branchObjects = versionControl.QueryRootBranchObjects(RecursionType.Full);
var mergeFilePath = "$/project/dev/feature/new_one/path/to/file.txt";
var branch = branchObjects.SingleOrDefault(b =>
{
    var branchPath = b.Properties.RootItem.Item;
    return mergeFilePath.StartsWith(branchPath.EndsWith("/") ? branchPath : branchPath + "/");
});
Console.WriteLine(branch.Properties.RootItem.Item);
As shown, the path to the branch is at BranchObject.Properties.RootItem.Item. I believe it is safe to find the relevant BranchObject in the array simply by checking which branch's path is contained in the merge file's path, given that it can match at most one branch: TFS enforces that only one branch can exist in a given folder hierarchy.
Just be aware: I have been burned by this Connect issue when using QueryRootBranchObjects in TFS 2012. The cause was some spurious branches that had apostrophes in the branch name.
The workaround is to use VersionControlServer.QueryBranchObjects. However, this takes an item identifier, which is the exact path to the branch. Clearly you don't know the branch path at this point, as all you have is a file path, so you have to recurse up the directories of the file path, calling QueryBranchObjects each time until you get a match.
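A minimal sketch of that recursion, reusing the versionControl instance from the snippet above (the helper name and the null result for "no branch found" are illustrative assumptions):
static string FindBranchPath(VersionControlServer versionControl, string mergeFilePath)
{
    var path = mergeFilePath;
    int slash;
    // Drop one path segment per iteration: ".../path/to/file.txt" -> ".../path/to" -> ...
    while ((slash = path.LastIndexOf('/')) > 1) // index 1 is the slash in "$/", so stop there
    {
        path = path.Substring(0, slash);
        var branches = versionControl.QueryBranchObjects(new ItemIdentifier(path), RecursionType.None);
        if (branches.Length > 0)
            return branches[0].Properties.RootItem.Item; // this folder is a branch root
    }
    return null; // no branch found above the file
}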

Related

Read specific file names in ADF pipeline

I have got a requirement saying that blob storage has multiple files with the names file_1.csv, file_2.csv, file_3.csv, file_4.csv, file_5.csv, file_6.csv, file_7.csv. From these I have to read only the files numbered 5 to 7.
How can we achieve this in an ADF/Synapse pipeline?
I have repro'd this in my lab; please see the repro steps below.
ADF:
Using the Get Metadata activity, get a list of all files.
(Parameterize the source file name in the source dataset to pass '*' in the dataset parameters to get all files; see the sketch after these steps.)
Pass the Get Metadata output child items to a ForEach activity:
@activity('Get Metadata1').output.childItems
Add an If Condition activity inside the ForEach and add the true case expression to copy only the required files to the sink. For names like file_5.csv, the file number is the character at (zero-based) index 5:
@and(greater(int(substring(item().name,5,1)),4),lessOrEquals(int(substring(item().name,5,1)),7))
When the If Condition is true, add a Copy data activity to copy the current item (file) to the sink.
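As mentioned above, a minimal sketch of the dataset parameterization (the parameter name sourceFileName is an illustrative assumption): add a string parameter to the source dataset, use it as the file name in the dataset's file path, and pass * for it from the Get Metadata activity's dataset properties so the activity lists every file in the folder.
File name in the dataset: @dataset().sourceFileName
Value passed by the activity: *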
I took a slightly different approach, using a Filter activity and the endsWith function.
The filter expression is:
@or(or(endsWith(item().name, '_5.csv'),endsWith(item().name, '_6.csv')),endsWith(item().name, '_7.csv'))
Slightly different approaches, similar results; it depends on what you need.
You can always do what @NiharikaMoola-MT suggested. But since you already know the range of the files (5 to 7), I suggest:
Declare two parameters as an upper and lower range.
Create a ForEach loop and pass the parameters to create a range [lowerlimit, upperlimit].
Create a parameterized dataset for the source.
Use the file number from the ForEach loop to create a dynamic expression like:
@concat('file_',item(),'.csv')
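For the ForEach items, a sketch of the range expression (the parameter names lowerlimit and upperlimit are illustrative; note that ADF's range() takes a start index and a count, not an end value):
@range(int(pipeline().parameters.lowerlimit), add(sub(int(pipeline().parameters.upperlimit), int(pipeline().parameters.lowerlimit)), 1))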

Terraform: how to call a module 100 times with separate values

I have a problem with a Terraform configuration which I really don't know how to resolve. I have written a module for policy assignments; the module takes as a parameter an object with 5 attributes. The question is whether it is possible to split the tfvars file into a folder structure, e.g.:
I have a main folder subscriptions -> folder_subscription_name -> some number of tfvars files with the configuration for each of the policy assignments.
An example of each of the files:
testmap = {
  var1 = "test1"
  var2 = "test2"
  var3 = "test3"
  var4 = "test4"
  var5 = "test5"
}
In the module I would like to iterate over all of the maps combined into a list of maps. Is that a good approach? How do I achieve it, or should I use something else, like Terragrunt?
Please give me some tips on the best way to achieve this. Basically, the goal is that I don't want one insanely big tfvars file with a list of 100 maps; I want it split into 100 configuration files, one for each assignment.
Based on the comment on the original question:
The question is pretty simple: how to keep the input variables for each resource in a separate file instead of keeping all of them in one very big file
I would say you aim to have different .tfvars files. For example, you could have a dev.tfvars and a prod.tfvars file.
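For this to work, the root module needs a variable declaration matching the shape of those maps; a minimal sketch based on the testmap example from the question (the attribute types are assumptions):
variable "testmap" {
  type = object({
    var1 = string
    var2 = string
    var3 = string
    var4 = string
    var5 = string
  })
}
Each .tfvars file then supplies its own value for testmap.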
To plan your deployment, you can pass those files with:
terraform plan --var-file=dev.tfvars -out planoutput
To apply those changes:
terraform apply planoutput

Need to get issuelink types with Jira Python

I am writing a script with Jira Python and I have encountered a big obstacle here.
I need to access one of the issue links under "is duplicated by", but I don't have any idea about the attributes I can use.
I can get to the issuelinks field, but I can't go further from there.
This is what I've got so far:
issue = jira.issue(ISSUE_NUM)  # this is the issue I am handling
link = issue.fields.issuelinks  # I've accessed the issuelinks field
if hasattr(link, "inwardIssue"):
    inwardIssue = link.inwardIssue
and from here I want to do this:
if str(inwardIssue.type(?)) == "is duplicated by":
Inward issues can be:
is cloned by
is duplicated by
and so on.
How can I get the type of the inward issues?
There seem to be a few types of issue links. So far I've seen: Blocker, Cause, Duplicate and Reference.
In order to identify the type of an issue link, you can do the following:
issue = jira.issue(ISSUE_NUM)
all_issue_links = issue.fields.issuelinks
for link in all_issue_links:
    # A Duplicate link can point either way; guard for the inward side
    # ("is duplicated by") before touching link.inwardIssue.
    if link.type.name == 'Duplicate' and hasattr(link, 'inwardIssue'):
        inward_issue = link.inwardIssue
        # Do something with inward_issue

Error: Not found: Dataset my-project-name:domain_public was not found in location US

I need to run a query against a dataset provided by a public project. I created my own project and added their dataset to my project. There is a table named domain_public. When I query this table I get this error:
Query Failed
Error: Not found: Dataset my-project-name:domain_public was not found in location US
Job ID: my-project-name:US.bquijob_xxxx
I am from a non-US country. What is the issue and how do I fix it, please?
EDIT 1:
I changed the processing location to asia-northeast1 (I am based in Singapore), but I get the same error:
Error: Not found: Dataset censys-my-projectname:domain_public was not found in location asia-northeast1
Here is a view of my project and the public project censys-io (screenshot omitted).
Please advise.
EDIT 2:
The query I used to type, based on the Censys tutorial, is:
#standardSQL
SELECT domain, alexa_rank
FROM domain_public.current
WHERE p443.https.tls.cipher_suite = 'some_cipher_suite_goes_here';
When I changed the FROM clause to:
FROM `censys-io.domain_public.current`
And the last line to:
WHERE p443.https.tls.cipher_suite.name = 'some_cipher_suite_goes_here';
It worked. Should I understand that I should always include projectname.dataset.table (if I'm using the correct terms) and report the typo to Censys? Or is this a special case for this project for some reason?
BigQuery can't find your data
How to fix it
Make sure your FROM location contains 3 parts
A project (e.g. bigquery-public-data)
A database (e.g. hacker_news)
A table (e.g. stories)
Like so
`bigquery-public-data.hacker_news.stories`
*note the backticks
Examples
Wrong
SELECT *
FROM `stories`
Wrong
SELECT *
FROM `hacker_news.stories`
Correct
SELECT *
FROM `bigquery-public-data.hacker_news.stories`
In the Web UI, click the Show Options button and then select your location under "Processing Location"!
Specify the location in which the query will execute. Queries that run in a specific location may only reference data in that location. For data in US/EU, you may choose Unspecified to run the query in the location where the data resides. For data in other locations, you must specify the query location explicitly.
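The same location can be forced from the bq command-line tool; a sketch, assuming the dataset's data lives in the US (the query itself is illustrative):
bq --location=US query --use_legacy_sql=false 'SELECT domain, alexa_rank FROM `censys-io.domain_public.current` LIMIT 10'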
Update
As stated above, queries that run in a specific location may only reference data in that location.
Assuming that the censys-io.domain_public dataset has its data in the US, you need to specify US for the Processing Location.
The problem turned out to be due to a wrong table name in the FROM clause.
The right FROM clause should be:
FROM `censys-io.domain_public.current`
While I was typing:
FROM domain_public.current
So the project name is required in the FROM, and the backticks are required because of the - in the project name.
Make sure your FROM location contains 3 parts, as @stevec mentioned:
A project (e.g. bigquery-public-data)
A database (e.g. hacker_news)
A table (e.g. stories)
But in my case, I was using legacy SQL within the Google Apps Script editor, so you need to set useLegacySql to false, for example:
var projectId = 'xxxxxxx';
var request = {
  query: 'select * from project.database.table',
  useLegacySql: false
};
var queryResults = BigQuery.Jobs.query(request, projectId);
Check the exact case (upper or lower) and spelling of the table or view name, and copy it from the table definition; your problem will be solved.
I was using FPL009_Year_Categorization instead of FPL009_Year_categorization (C instead of c) and getting the error "not found in location asia-south1". I copied the name with the exact case and the problem was resolved.
On your BigQuery console, go to the Data Explorer in the left pane, click the small three dots, then select the query option from the list. This step confirms you have chosen the correct project and dataset. Then you can edit the query in the query pane on the right.
Maybe the dataset location was changed in the Create dataset options; it should be US or the default location.

Query repository file contents from GitLab

I want to retrieve the commit id of a file readmeTest.txt through Invantive SQL, like so:
select * from repository_files(29, file-path, 'master')
But for this to work I need a project-id, a file-path and a ref.
I know my project-id (I got it from select * from projects) and my ref (the master branch), but I don't know where I can find the path to the file I want to retrieve information about.
So where can I find the values of file-path and ref?
This is my repository directory tree, where I can see the files exist (screenshot omitted).
You need to join several entities in GitLab to get the information you need.
The fields of your repository_files table function and their meaning:
project-id can be found as id in the projects entity, as you already knew;
the file path can be found as name in repositories;
ref is the name of a branch, a tag or a commit, so let's assume you want master for now.
Given this information, you need the query below to get all repository files and their content in a project (I narrowed it down to a single project for now):
select pjt.name project_name
, rpe.name repository_name
, rpf.content file
from projects pjt
join repositories(pjt.id) rpe
on 1=1
and rpe.name like '%.%'
join repository_files(pjt.id, rpe.name, 'master') rpf
on 1=1
where pjt.id = 1