Forcing a TFS checkin of a file via C#

How can I specify that I ALWAYS want the local file to replace the server copy even if the TFS copy is newer?
if (pendingChanges.GetUpperBound(0) > -1)
    ChangeSetNumber = workspace.CheckIn(pendingChanges, filename);
I can see from IntelliSense that I can specify CheckinOptions as a parameter of the CheckIn method; I just cannot find what I need to pass so that it always checks in and ignores any conflict I might run into.
Thanks in advance.
EDIT: I found the command TF RESOLVE "item" /auto:AcceptYours /recursive, so I guess my revised question is: is there a programmatic equivalent to the /auto:AcceptYours switch?
NecroEDIT: process the conflicts before doing the checkin:
Conflict[] conflicts = workspace.QueryConflicts(new string[] { TFSProject }, true);
foreach (Conflict conflict in conflicts)
{
    conflict.Resolution = Resolution.AcceptTheirs;
    workspace.ResolveConflict(conflict);
}

Checkins are atomic: either they all succeed or they all fail. If there are any conflicts that need to be resolved before the check-in, the check-in operation will throw an exception. (Documentation)
You have to evaluate the check-in for conflicts and then resolve them with the Workspace.ResolveConflict method.
The result of Workspace.EvaluateCheckin (a CheckinEvaluationResult) includes the CheckinConflicts that tell you which items still need resolving.
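A minimal sketch of that flow, assuming the TFS client API: the EvaluateCheckin overload and CheckinEvaluationOptions.All are recalled from memory rather than taken from the question, so verify them against your SDK version. The conflict handling mirrors the NecroEDIT above, except that Resolution.AcceptYours keeps the local copy, which is what the question asks for.
// Hedged sketch, not verified against a live TFS server; workspace and TFSProject come from the snippets above.
PendingChange[] pendingChanges = workspace.GetPendingChanges();

// Assumption: this EvaluateCheckin overload and CheckinEvaluationOptions.All exist in your SDK version.
CheckinEvaluationResult evaluation = workspace.EvaluateCheckin(
    CheckinEvaluationOptions.All,
    pendingChanges, pendingChanges,
    "Force local copy over server copy", null, null);

if (evaluation.Conflicts.Length > 0)
{
    // Same calls as the NecroEDIT above, but AcceptYours keeps the local version
    // (AcceptTheirs would keep the server's), i.e. the counterpart of /auto:AcceptYours.
    Conflict[] conflicts = workspace.QueryConflicts(new string[] { TFSProject }, true);
    foreach (Conflict conflict in conflicts)
    {
        conflict.Resolution = Resolution.AcceptYours;
        workspace.ResolveConflict(conflict);
    }
}

int changeSetNumber = workspace.CheckIn(pendingChanges, "Force local copy over server copy");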
This page may help.
Note: CheckinOptions is not related to what you are asking.
Hope this helps.


Detect the first time a VS Code extension version is loaded

I'd like to take an action the first time a user loads a new version of my VS Code extension. This is different from merely detecting first run, as described in "How to run vscode extension command just right after installation?", because I don't want to detect "just right after installation"; I want to detect the first run of each new version, which is a totally different problem.
Mike Lischke's answer to that question doesn't actually answer that question; it answers this one. That doesn't make this a duplicate question; it means the response to the other question doesn't answer what was actually asked there. Since, unlike many people, I actually read the question, I didn't bother to read the answers, because answers to that question are not what I seek. Frankly, I'm tempted to delete the question myself just to spite Stack Overflow because I'm fed up with this crap. Do whatever you like.
Searching the net turned up sample code:
export function activate(context: vscode.ExtensionContext) {
    if (context.firstTimeUse) {
        // do the one-time-per-version-update thing
    }
}
but ExtensionContext doesn't seem to have this property, at least not any more.
So how do you do it now?
I could record the version in a file and compare to the file before updating it, but if there's baked in support I'd rather do it the supported way.
There is no supported mechanism.
Since each update of the extension is installed into a new folder, you don't need to log a timestamp; just probe for a marker file. If it exists, this isn't the first run. If it doesn't exist, it is the first run, so create the file and do the other first-run things.
This is so simple and straightforward that there probably won't ever be a supported mechanism. Thanks to Lex Li in the comments for confirming that this is the standard solution.
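A minimal sketch of that marker-file approach; the firstrun file name is arbitrary, and it assumes the extension's install folder is writable, which it normally is:
import * as fs from "fs";
import * as path from "path";
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
    // Each version of the extension is installed into its own folder, so a
    // marker file created here disappears automatically on every update.
    const marker = path.join(context.extensionPath, "firstrun");
    if (!fs.existsSync(marker)) {
        fs.writeFileSync(marker, ""); // remember that this version has run
        // ...do the once-per-version work here...
    }
}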
If you need to differentiate major, minor and maintenance releases, the simplest solution is to store the version string in context.globalState. You begin by trying to fetch it from context.globalState: absence means the first run ever, and if it's present, an exact match for the current version means no change. For a non-match you can parse out the major and minor version numbers, compare them, and then update context.globalState with the current version.
const currentVersion = context.extension.packageJSON.version as string;
const lastVersion = (context.globalState.get("version") as string) ?? "0.0.0";
if (lastVersion !== currentVersion) {
    logger.warn(`Updated to ${currentVersion}`);
    const lastVersionPart = lastVersion.split(".");
    const currVersionPart = currentVersion.split(".");
    if (lastVersionPart[0] !== currVersionPart[0]) {
        // major version change
        advertiseWalkthrough();
    } else if (lastVersionPart[1] !== currVersionPart[1]) {
        // minor version change
        launchWhatsNew();
    } else {
        // it's a maintenance version change so don't pester the user
    }
    context.globalState.update("version", currentVersion);
}

Kotlin BuildType 'XYZ': id 'XYZ' is already used in BuildType(uuid='', id='XYZ', name='Deploy to envr') error

I was trying to refactor my Kotlin file that contains the configuration for a TeamCity pipeline. However, I keep getting the following error:
BuildType 'KotlinExperiments_DeployToEnvironment': id 'KotlinExperiments_DeployToEnvironment' is already used in BuildType(uuid='', id='KotlinExperiments_DeployToEnvironment', name='Deploy to test')
I tried to dynamically assign an ID, but that doesn't seem to work. Here are the links to the relevant files:
.teamcity/settings.kts
.teamcity/KotlinExperiments.kt
.teamcity/_buildTypes/DeployToEnvironment.kt
What am I missing?
It appears that there was an extra } in this line, which is an invalid character for an ID, so TeamCity didn't really provide an accurate error. After deleting the whole thing and recreating it, TeamCity provided a much better error, which led me to this finding.
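For what it's worth, a hedged sketch (hypothetical function and id values, and the DSL package version may differ) of giving each generated BuildType an explicit, sanitised id so that stray characters or duplicate ids cannot slip in:
import jetbrains.buildServer.configs.kotlin.v2019_2.BuildType

// Hypothetical helper: derive a unique, valid id from the environment name so
// that characters like '}' can never end up in the generated BuildType id.
fun deployToEnvironment(environment: String): BuildType = BuildType {
    val safeSuffix = environment.filter { it.isLetterOrDigit() || it == '_' }
    id("KotlinExperiments_DeployTo$safeSuffix")
    name = "Deploy to $environment"
    // ...steps, triggers, and so on...
}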

Issues pulling change log using python

I am trying to query and pull changelog details using python.
The below code returns the list of issues in the project.
issued = jira.search_issues('project= proj_a', maxResults=5)
for issue in issued:
    print(issue)
I am trying to pass the issue values obtained above into the following:
issues = jira.issue(issue, expand='changelog')
changelog = issues.changelog
projects = jira.project(project)
I get the below error on trying the above:
JIRAError: JiraError HTTP 404 url: https://abc.atlassian.net/rest/api/2/issue/issue?expand=changelog
text: Issue does not exist or you do not have permission to see it.
Could anyone advise where I am going wrong, or what permissions I need?
Please note that if I pass a specific issue_id in the above code it works just fine, but I am trying to pass a list of issue_ids.
You can already receive all the changelog data from the search_issues() method, so you don't have to fetch the changelog by iterating over each issue and making another API call per issue. Check out the code below for examples of how to work with the changelog.
issues = jira.search_issues('project= proj_a', maxResults=5, expand='changelog')
for issue in issues:
    print(f"Changes from issue: {issue.key} {issue.fields.summary}")
    print(f"Number of Changelog entries found: {issue.changelog.total}")  # number of changelog entries (careful, each entry can have multiple field changes)
    for history in issue.changelog.histories:
        print(f"Author: {history.author}")  # person who did the change
        print(f"Timestamp: {history.created}")  # when did the change happen?
        print("\nListing all items that changed:")
        for item in history.items:
            print(f"Field name: {item.field}")  # field to which the change happened
            print(f"Changed to: {item.toString}")  # new value, item.to might be better in some cases depending on your needs.
            print(f"Changed from: {item.fromString}")  # old value, item.from might be better in some cases depending on your needs.
        print()
    print()
Just to explain what you did wrong before when iterating over each issue: you have to use the issue.key, not the issue-resource itself. When you simply pass the issue, it won't be handled correctly as a parameter in jira.issue(). Instead, pass issue.key:
for issue in issues:
    print(issue.key)
    myIssue = jira.issue(issue.key, expand='changelog')

Create a repository on a remote server with RDF4J

I've been trying to create a new repository on a remote GraphDB server using RDF4J, but I'm having problems.
This runs, but is seemingly not correct
HTTPRepositoryConfig implConfig = new HTTPRepositoryConfig(address);
RepositoryConfig repoConfig = new RepositoryConfig("test", "test", implConfig);
Model m = new
However, based on the info I get from "edit repository" in the workbench, the result doesn't look right. All the values are empty, except for id and title.
This fails
I tried to copy the settings from an existing repository that I created on the workbench, but that failed with:
org.eclipse.rdf4j.repository.config.RepositoryConfigException:
Unsupported repository type: owlim:MonitorRepository
The code for that attempt is inspired by the one found here, except that the config file is based on an existing repo, as explained above. I also tried the config file provided in the example, but that failed as well:
org.eclipse.rdf4j.repository.config.RepositoryConfigException:
Unsupported Sail type: graphdb:FreeSail
Anyone got any tips?
UPDATE
As Henriette Harmse correctly pointed out, I should have provided my code, not simply linked to it. That way I might have discovered that I hadn't done a complete copy after all, but had changed the important first bits that she points out in her answer. Full code below:
String address = "serveradr";
RemoteRepositoryManager repositoryManager = new RemoteRepositoryManager(address);
repositoryManager.initialize();
// Instantiate a repository graph model
TreeModel graph = new TreeModel();
InputStream config = Rdf4jHelper.class.getResourceAsStream("/repoconf2.ttl");
RDFParser rdfParser = Rio.createParser(RDFFormat.TURTLE);
rdfParser.setRDFHandler(new StatementCollector(graph));
rdfParser.parse(config, RepositoryConfigSchema.NAMESPACE);
config.close();
// Retrieve the repository node as a resource
Resource repositoryNode = graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY).subjects().iterator().next();
// Create a repository configuration object and add it to the repositoryManager
RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
It fails on the last line.
ANSWERED: @HenrietteHarmse gives the correct method in her answer below. The error is caused by missing dependencies: instead of using RDF4J directly, I should have used the graphdb-free-runtime.
There are a number of issues here:
(1) RepositoryManager repositoryManager = new LocalRepositoryManager(new File(".")); will create a repository wherever your Java application is running from.
(2) Changing to new LocalRepositoryManager(new File("$GraphDBInstall/data/repositories")) will cause the repository to be created under the control of GraphDB (assuming you have a local GraphDB instance) only if GraphDB is not running. If you start GraphDB after running your program, you will be able to see the repository in GraphDB workbench.
(3) What you need to do is get the repository manager of the remote GraphDB, which can be done with RepositoryManager repositoryManager = RepositoryProvider.getRepositoryManager("http://IPAddressOfGraphDB:7200");.
(4) In the way you have specified the config, you cause the RDF graph config to be lost. The correct way to specify it is:
RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
repositoryManager.addRepositoryConfig(repositoryConfig);
(5) A minor issue is that GraphUtil.getUniqueSubject(...) has been deprecated, for which you can use something like the following:
Model model = graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY);
Iterator<Statement> iterator = model.iterator();
if (!iterator.hasNext())
    throw new RuntimeException("Oops, no <http://www.openrdf.org/config/repository#> subject found!");
Statement statement = iterator.next();
Resource repositoryNode = statement.getSubject();
EDIT on 20180408:
(5) Or you can use the compact option as @JeenBroekstra suggested in the comments:
Resource repositoryNode = Models.subject(
        graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY))
    .orElseThrow(() -> new RuntimeException("Oops, no <http://www.openrdf.org/config/repository#> subject found!"));
EDIT on 20180409:
For convenience I have added the complete code example here.
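A hedged consolidation of the pieces above, with the server URL and the config file name as placeholders; the config must describe a repository type supported by a runtime on the classpath (for GraphDB types that is graphdb-free-runtime, see the edit below):
import java.io.InputStream;

import org.eclipse.rdf4j.model.Resource;
import org.eclipse.rdf4j.model.impl.TreeModel;
import org.eclipse.rdf4j.model.util.Models;
import org.eclipse.rdf4j.model.vocabulary.RDF;
import org.eclipse.rdf4j.repository.config.RepositoryConfig;
import org.eclipse.rdf4j.repository.config.RepositoryConfigSchema;
import org.eclipse.rdf4j.repository.manager.RepositoryManager;
import org.eclipse.rdf4j.repository.manager.RepositoryProvider;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.RDFParser;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.helpers.StatementCollector;

public class CreateRemoteRepository {
    public static void main(String[] args) throws Exception {
        // Repository manager of the *remote* GraphDB instance (point 3); the URL is a placeholder.
        RepositoryManager repositoryManager =
                RepositoryProvider.getRepositoryManager("http://IPAddressOfGraphDB:7200");
        repositoryManager.initialize();

        // Parse the Turtle repository configuration into an RDF model.
        TreeModel graph = new TreeModel();
        try (InputStream config = CreateRemoteRepository.class.getResourceAsStream("/repoconfig.ttl")) {
            RDFParser rdfParser = Rio.createParser(RDFFormat.TURTLE);
            rdfParser.setRDFHandler(new StatementCollector(graph));
            rdfParser.parse(config, RepositoryConfigSchema.NAMESPACE);
        }

        // Locate the repository node and register the configuration with the remote manager (points 4 and 5).
        Resource repositoryNode = Models.subject(
                graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY))
            .orElseThrow(() -> new RuntimeException("No repository subject found in the config"));
        RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
        repositoryManager.addRepositoryConfig(repositoryConfig);
    }
}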
EDIT on 20180410:
So the actual culprit turned out to be an incorrect pom.xml. The correct version is as below:
<dependency>
    <groupId>com.ontotext.graphdb</groupId>
    <artifactId>graphdb-free-runtime</artifactId>
    <version>8.4.1</version>
</dependency>
I believe I just had the same issue. I used the example code from GraphDB Free for running with RDF4J as a remote service and ran into the same exception as you (Unsupported Sail type: graphdb:FreeSail). Henriette Harmse's answer does not directly address this issue, but one should follow the suggestions given there to avoid running into issues later. In addition, based on a look into the RDF4J code, you need the following dependency in your pom.xml file (assuming GraphDB 8.5):
<dependency>
    <groupId>com.ontotext.graphdb</groupId>
    <artifactId>graphdb-free-runtime</artifactId>
    <version>8.5.0</version>
</dependency>
This seems to be because there is some kind of service loading going on with META-INF, which I frankly am not familiar with. Maybe someone can provide more details in the comments. The requirement to add this dependency also seems to be absent from the instructions, so if this works for you, please let me know. Others who followed the same steps we did should then be able to resolve this issue as well.

Using System.Reflection and resources in Phalanger

I need to embed some resources in a pure compiled DLL written in PHP using Phalanger.
These are .txt files that I set in Visual Studio as "Embedded Resource".
My problem is that I cannot use the Assembly class to get the resource using GetManifestResourceStream.
I tried code like this:
use System\Reflection\Assembly;
$asm = Assembly::GetExecutingAssembly(); // this gives me mscorlib instead of my dll
$str = $asm->GetManifestResourceStream("name");
My question is: how do I get access to embedded resources in phalanger?
Many thanks
I'm not sure why Assembly::GetExecutingAssembly() returns an incorrect value. Anyway, to work around the $asm value, use the following code:
$MyType = CLRTypeOf MyProgram;
$asm = $MyType->Assembly;
Then you can access embedded resources as you posted:
$asm->GetManifestResourceStream("TextFile1.txt");
or you can include a standard resource file (.resx) in your project and use \System\Resources\ResourceManager:
$this->manager = new \System\Resources\ResourceManager("",$asm);
$this->manager->GetObject("String1",null);
Just note that currently there can be only one .resx file within a Phalanger project.
This question is old, but the part of the Phalanger code responsible for this (the Php.Core.Emit.AddResourceFile() method) hasn't changed since this was asked. I faced the same problem and solved it in an (almost) non-hacky way. You have to provide an alternative name (/res:/path/to/filename,alternative-name) for this to work, though.
$asm = clr_typeof('self')->Assembly;
$resourceStream = $asm->GetManifestResourceStream("filename");
$reader = new \System\Resources\ResourceReader($resourceStream);
$type = $data = null;
$reader->GetResourceData("alternative-name", $type, $data);
// and still there are 4 excess bytes
// representing the length of the resource
$data = \substr($data, 4);
$stream = new IO\MemoryStream($data);
// after this $stream is usable as you would expect
A straightforward GetManifestResourceStream() call (as suggested by Jakub) does not work because Phalanger does not use System.Reflection.Emit.ModuleBuilder.DefineManifestResource() (as I think it should when supplied with an unrecognized file format). It uses ModuleBuilder.DefineResource(), which returns a ResourceWriter instead, and that is only really suited for .resources files. This is what dictates the requirement to use ResourceReader when you need to read your resource.
Note: This answer applies to Phalanger master branch at the time of writing and prior versions since circa 2011. Noted because it looks like a bug (especially the need to use both original and alternative names).