Azure DevOps Server: Migrating Test Cases - Expecting end of string error - azure-devops-migration-tools

I am attempting my first migration between two different collections on Azure DevOps Server 2019.
The new collection has a custom inheritance process model.
Ultimately I would like to migrate test cases, test suites, and test plans, but for now I am trying to migrate test cases only.
I have added the configuration as I understand it, but the migration keeps failing.
Error message:
migration.exe Warning: 0 : [EXCEPTION] Microsoft.TeamFoundation.WorkItemTracking.Client.ValidationException: Expecting end of string. The error is caused by «BY».
at Microsoft.TeamFoundation.WorkItemTracking.Client.Query.Initialize(WorkItemStore store, String wiql, IDictionary context, Int32[] ids, Int32[] revs, Boolean dayPrecision)
The redacted Processors section from my config file is below.
Thanks!
"Processors": [
{
"ObjectType": "VstsSyncMigrator.Engine.Configuration.Processing.NodeStructuresMigrationConfig",
"PrefixProjectToNodes": false,
"Enabled": false,
"BasePaths": [
"****\\Market Regulation\\Market Surveillance - Bonds",
"****\\Trading Value Stream"
]
},
{
"ObjectType": "VstsSyncMigrator.Engine.Configuration.Processing.WorkItemMigrationConfig",
"ReplayRevisions": true,
"PrefixProjectToNodes": false,
"UpdateCreatedDate": true,
"UpdateCreatedBy": true,
"UpdateSourceReflectedId": false,
"BuildFieldTable": false,
"AppendMigrationToolSignatureFooter": false,
"QueryBit": "AND [System.AreaPath] = '****\\Market Regulation\\Market Surveillance – Bonds' AND [System.WorkItemType] = 'Test Case' ",
"OrderBit": "ORDER BY [System.Id]",
"Enabled": true,
"LinkMigration": true,
"AttachmentMigration": true,
"AttachmentWorkingPath": "c:\\temp\\WorkItemAttachmentWorkingFolder\\",
"FixHtmlAttachmentLinks": false,
"SkipToFinalRevisedWorkItemType": false,
"WorkItemCreateRetryLimit": 5,
"FilterWorkItemsThatAlreadyExistInTarget": true,
"PauseAfterEachWorkItem": false,
"AttachmentMazSize": 480000000,
"CollapseRevisions": false
}
]

I believe your "OrderBit" syntax is wrong; it should look like this:
"OrderBit": "[System.ChangedDate] desc"
Just adapt it to your situation.
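The tool appears to prepend ORDER BY itself when it builds the WIQL query, so a value of "ORDER BY [System.Id]" most likely ends up as "... ORDER BY ORDER BY [System.Id]", which would explain the "Expecting end of string ... «BY»" error. Assuming you still want to sort by ID, the corrected line in your processor would be:
"OrderBit": "[System.Id]"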

Related

Create a oauth-token for integration tests

We want to create a "long lasting" token for integration testing purposes. We normally use Keycloak for creating tokens, but I don't see a way to create tokens with no expiration, so that recurring integration tests on the dev stage can run without interruption.
What would you suggest for such automatically repeated testing with OAuth?
All access tokens should expire at some point; that is why the RFC details the use of refresh tokens, which can be used indefinitely to keep your service running. The basic idea is that when you request an access token, you get an access token plus a refresh token, and when the access token expires, you send the refresh token to the Keycloak server and it will generate a new access token and a new refresh token.
In practice, you should use your access token as long as it is valid (since you know its lifespan in advance), and when it is about to expire you resend the request using the refresh token.
Source: the OAuth 2.0 RFC (RFC 6749)
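With Keycloak, for example, that refresh is a single POST to the token endpoint. This is only a sketch: the host, realm, client ID/secret and token value are placeholders, and older Keycloak versions prefix the path with /auth:
curl -X POST "https://keycloak.example.com/realms/my-realm/protocol/openid-connect/token" \
  -d "grant_type=refresh_token" \
  -d "client_id=my-client" \
  -d "client_secret=my-client-secret" \
  -d "refresh_token=<your-refresh-token>"
The JSON response contains a new access_token and a new refresh_token.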
How about extending the access token lifetime (to a couple of days) before starting the integration tests, and returning it to the default (5 minutes) after they finish?
These are my demo test steps:
1. Get a master realm access token and assign it to a token variable.
2. Get my-realm's realm data.
This is the default my-realm setting data:
{
"id": "my-realm",
"realm": "my-realm",
"notBefore": 0,
"defaultSignatureAlgorithm": "RS256",
"revokeRefreshToken": false,
"refreshTokenMaxReuse": 0,
"accessTokenLifespan": 300,
"accessTokenLifespanForImplicitFlow": 900,
"ssoSessionIdleTimeout": 1800,
"ssoSessionMaxLifespan": 36000,
"ssoSessionIdleTimeoutRememberMe": 0,
"ssoSessionMaxLifespanRememberMe": 0,
"offlineSessionIdleTimeout": 2592000,
"offlineSessionMaxLifespanEnabled": false,
"offlineSessionMaxLifespan": 5184000,
"clientSessionIdleTimeout": 0,
"clientSessionMaxLifespan": 0,
"clientOfflineSessionIdleTimeout": 0,
"clientOfflineSessionMaxLifespan": 0,
"accessCodeLifespan": 60,
"accessCodeLifespanUserAction": 300,
"accessCodeLifespanLogin": 1800,
"actionTokenGeneratedByAdminLifespan": 43200,
"actionTokenGeneratedByUserLifespan": 300,
"oauth2DeviceCodeLifespan": 600,
"oauth2DevicePollingInterval": 5,
"enabled": true,
"sslRequired": "external",
"registrationAllowed": false,
"registrationEmailAsUsername": false,
"rememberMe": false,
"verifyEmail": false,
"loginWithEmailAllowed": true,
"duplicateEmailsAllowed": false,
"resetPasswordAllowed": false,
"editUsernameAllowed": false,
"bruteForceProtected": false,
"permanentLockout": false,
"maxFailureWaitSeconds": 900,
"minimumQuickLoginWaitSeconds": 60,
"waitIncrementSeconds": 60,
"quickLoginCheckMilliSeconds": 1000,
"maxDeltaTimeSeconds": 43200,
"failureFactor": 30,
"defaultRole": {
"id": "3798f9f6-3383-474e-997e-123d9b534ae4",
"name": "default-roles-my-realm",
"description": "${role_default-roles}",
"composite": true,
"clientRole": false,
"containerId": "my-realm"
},
"requiredCredentials": [
"password"
],
"otpPolicyType": "totp",
"otpPolicyAlgorithm": "HmacSHA1",
"otpPolicyInitialCounter": 0,
"otpPolicyDigits": 6,
"otpPolicyLookAheadWindow": 1,
"otpPolicyPeriod": 30,
"otpSupportedApplications": [
"FreeOTP",
"Google Authenticator"
],
"webAuthnPolicyRpEntityName": "keycloak",
"webAuthnPolicySignatureAlgorithms": [
"ES256"
],
"webAuthnPolicyRpId": "",
"webAuthnPolicyAttestationConveyancePreference": "not specified",
"webAuthnPolicyAuthenticatorAttachment": "not specified",
"webAuthnPolicyRequireResidentKey": "not specified",
"webAuthnPolicyUserVerificationRequirement": "not specified",
"webAuthnPolicyCreateTimeout": 0,
"webAuthnPolicyAvoidSameAuthenticatorRegister": false,
"webAuthnPolicyAcceptableAaguids": [],
"webAuthnPolicyPasswordlessRpEntityName": "keycloak",
"webAuthnPolicyPasswordlessSignatureAlgorithms": [
"ES256"
],
"webAuthnPolicyPasswordlessRpId": "",
"webAuthnPolicyPasswordlessAttestationConveyancePreference": "not specified",
"webAuthnPolicyPasswordlessAuthenticatorAttachment": "not specified",
"webAuthnPolicyPasswordlessRequireResidentKey": "not specified",
"webAuthnPolicyPasswordlessUserVerificationRequirement": "not specified",
"webAuthnPolicyPasswordlessCreateTimeout": 0,
"webAuthnPolicyPasswordlessAvoidSameAuthenticatorRegister": false,
"webAuthnPolicyPasswordlessAcceptableAaguids": [],
"browserSecurityHeaders": {
"contentSecurityPolicyReportOnly": "",
"xContentTypeOptions": "nosniff",
"xRobotsTag": "none",
"xFrameOptions": "SAMEORIGIN",
"contentSecurityPolicy": "frame-src 'self'; frame-ancestors 'self'; object-src 'none';",
"xXSSProtection": "1; mode=block",
"strictTransportSecurity": "max-age=31536000; includeSubDomains"
},
"smtpServer": {},
"eventsEnabled": false,
"eventsListeners": [
"jboss-logging"
],
"enabledEventTypes": [],
"adminEventsEnabled": false,
"adminEventsDetailsEnabled": false,
"identityProviders": [],
"identityProviderMappers": [],
"internationalizationEnabled": false,
"supportedLocales": [],
"browserFlow": "browser",
"registrationFlow": "registration",
"directGrantFlow": "direct grant",
"resetCredentialsFlow": "reset credentials",
"clientAuthenticationFlow": "clients",
"dockerAuthenticationFlow": "docker auth",
"attributes": {
"cibaBackchannelTokenDeliveryMode": "poll",
"cibaExpiresIn": "120",
"cibaAuthRequestedUserHint": "login_hint",
"oauth2DeviceCodeLifespan": "600",
"oauth2DevicePollingInterval": "5",
"parRequestUriLifespan": "60",
"cibaInterval": "5"
},
"userManagedAccessAllowed": false,
"clientProfiles": {
"profiles": []
},
"clientPolicies": {
"policies": []
}
}
3. Extend the access token lifetime to a longer period (2 days).
I changed the accessTokenLifespan value from 300 to 172800 (= 3600 * 24 * 2) seconds.
4. Use the PUT method to update the realm data.
In the Keycloak UI, the Access Token Lifespan will now show 2 days.
The API call should return status 204 (No Content).
5. Run your integration tests.
6. Return to the default (or previous) lifetime from step 2.
A rough curl sketch of steps 1-4 follows.
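This is only an illustration under assumptions: the host, admin credentials and realm name are placeholders, jq is used for the JSON editing, and older Keycloak versions prefix both paths with /auth.
# 1. Get the master realm admin token
TOKEN=$(curl -s -X POST "https://keycloak.example.com/realms/master/protocol/openid-connect/token" \
  -d "grant_type=password" -d "client_id=admin-cli" \
  -d "username=admin" -d "password=admin-password" | jq -r '.access_token')

# 2. Get the current realm representation (keep realm.json so you can restore it in step 6)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://keycloak.example.com/admin/realms/my-realm" > realm.json

# 3. + 4. Set accessTokenLifespan to 2 days and PUT the realm back (should print 204)
jq '.accessTokenLifespan = 172800' realm.json > realm-updated.json
curl -s -o /dev/null -w "%{http_code}\n" -X PUT \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d @realm-updated.json \
  "https://keycloak.example.com/admin/realms/my-realm"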

Ansible 'find' command - print only filenames

I am trying to use the Ansible find module to delete files matching a given pattern. Before executing the delete part, I want to list the files that will be deleted - only the filenames, including the path. The default debug output prints a lot of information:
- name: Ansible delete old files from pathslist
  find:
    paths: "{{ pathslist }}"
    patterns:
      - "authlog.*"
      - "server.log.*"
  register: var_log_files_to_delete

- name: get the complete path
  set_fact:
    files_found_path: "{{ var_log_files_to_delete.files }}"

- debug:
    var: files_found_path
This outputs an entry like the one below for each file:
{
"atime": 1607759761.7751443,
"ctime": 1615192802.0948966,
"dev": 66308,
"gid": 0,
"gr_name": "root",
"inode": 158570,
"isblk": false,
"ischr": false,
"isdir": false,
"isfifo": false,
"isgid": false,
"islnk": false,
"isreg": true,
"issock": false,
"isuid": false,
"mode": "0640",
"mtime": 1607675101.0750349,
"nlink": 1,
"path": "/var/log/authlog.87",
"pw_name": "root",
"rgrp": true,
"roth": false,
"rusr": true,
"size": 335501,
"uid": 0,
"wgrp": false,
"woth": false,
"wusr": true,
"xgrp": false,
"xoth": false,
"xusr": false
}
I tried files_found_path: "{{ var_log_files_to_delete.files['path'] }}" but it generates an error.
How can I print only the paths?
Thank you
The Jinja2 map filter with the attribute parameter transforms a list of dicts into a list of one specific attribute of each element (https://jinja.palletsprojects.com/en/2.11.x/templates/#map):
- name: get the complete path
  set_fact:
    files_found_path: "{{ var_log_files_to_delete.files | map(attribute='path') | list }}"
For more complex data extraction, there is the json_query filter (https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#selecting-json-data-json-queries)
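Once the listed paths look right, the delete step can loop over the same expression. A minimal sketch using the file module (the task name is just illustrative):
- name: Delete the matched log files
  file:
    path: "{{ item }}"
    state: absent
  loop: "{{ var_log_files_to_delete.files | map(attribute='path') | list }}"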

Migrating WorkItems whose history references 'Team Projects' that no longer exist fails

While migrating a project across two distinct Azure DevOps Organizations with the following config for the processor:
{
"ObjectType": "VstsSyncMigrator.Engine.Configuration.Processing.WorkItemMigrationConfig",
"Enabled": true,
"PrefixProjectToNodes": false,
"UpdateCreatedDate": true,
"UpdateCreatedBy": true,
"AppendMigrationToolSignatureFooter": false,
"LinkMigration": true,
"AttachmentMigration": true,
"AttachmentMazSize": 480000000,
"AttachmentWorkingPath": "c:\\temp\\WorkItemAttachmentWorkingFolder\\",
"FixHtmlAttachmentLinks": true,
"SkipToFinalRevisedWorkItemType": false,
"WorkItemCreateRetryLimit": 5,
"PauseAfterEachWorkItem": false,
"FilterWorkItemsThatAlreadyExistInTarget": false,
"BuildFieldTable": false,
"QueryBit": "AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan') ",
"OrderBit": "[System.ChangedDate] desc",
"ReplayRevisions": true,
"CollapseRevisions": false
}
The process fails on items that, in any of their revisions, reference Team Projects that no longer exist - those items were moved across projects within the same Organization in the past, and the originating project was deleted in the meantime.
[ User Story][Complete: 18/3284][sid:1471 |Rev:64 ][tid:null | Microsoft.TeamFoundation.WorkItemTracking.Client.DeniedOrNotExistException: TF26192: The team project specified by the ID 129 does not exist. Check the team project ID and try again.
at Microsoft.TeamFoundation.WorkItemTracking.Client.ProjectCollection.GetById(Int32 projectId)
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem.CheckUpdateCachedData(Boolean projectChanged, Boolean typeChanged)
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem.get_Type()
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemFieldData.Microsoft.TeamFoundation.WorkItemTracking.Internals.IWorkItemOpenFieldDataHelper.SetLatestData(Dictionary`2 latestData)
at Microsoft.TeamFoundation.WorkItemTracking.Internals.WorkItemHelper.LoadWorkItemFieldData(IRowSetCollectionHelper tables, IWorkItemOpenFieldDataHelper helper)
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem.LoadWorkItemFromRowSetInternal(Int32 rev, Nullable`1 asof, IWorkItemRowSets witem)
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem..ctor(WorkItemStore store, Int32 id, Int32 revision)
at VstsSyncMigrator.Engine.WorkItemStoreContext.GetRevision(WorkItem workItem, Int32 revision) in D:\a\1\s\src\VstsSyncMigrator.Core\Execution\ComponentContext\WorkItemStoreContext.cs:line 202
at VstsSyncMigrator.Engine.WorkItemMigrationContext.ReplayRevisions(List`1 revisionsToMigrate, WorkItem sourceWorkItem, WorkItem targetWorkItem, Project destProject, WorkItemStoreContext sourceStore, Int32 current, WorkItemStoreContext targetStore) in D:\a\1\s\src\VstsSyncMigrator.Core\Execution\MigrationContext\WorkItemMigrationContext.cs:line 361
Is there any way to overcome this while maintaining the full revision history?
The issue is overcome if I migrate only the tip revisions.

During migration commit links are "Fixed", "Removed" but not "Added"

When I run a migration with azure-devops-migration-tools, the log shows "Fixing" and "Removing" of commit links, but no "Adding".
As a result, commit links do not get migrated.
However, this does not reproduce in every project - in most projects it works fine and commit links are also added.
My WorkItemMigrationConfig:
{
"ObjectType": "VstsSyncMigrator.Engine.Configuration.Processing.WorkItemMigrationConfig",
"ReplayRevisions": true,
"PrefixProjectToNodes": false,
"UpdateCreatedDate": true,
"UpdateCreatedBy": true,
"UpdateSourceReflectedId": false,
"BuildFieldTable": false,
"AppendMigrationToolSignatureFooter": false,
"QueryBit": "AND [System.ID] = 41128",
"OrderBit": "[System.ChangedDate] desc",
"Enabled": true,
"LinkMigration": true,
"AttachmentMigration": true,
"AttachmentWorkingPath": "c:\\temp\\WorkItemAttachmentWorkingFolder\\",
"FixHtmlAttachmentLinks": false,
"SkipToFinalRevisedWorkItemType": false,
"WorkItemCreateRetryLimit": 5,
"FilterWorkItemsThatAlreadyExistInTarget": true,
"PauseAfterEachWorkItem": false,
"AttachmentMazSize": 480000000,
"CollapseRevisions": false
}
I suspect this may be project-specific, but currently I have no idea what is causing the issue. What may be the reason for this?
//edit
After some research in the tool's source code, I found that commit links are only added if the commit link URIs differ (https://github.com/nkdAgility/azure-devops-migration-tools/blob/9ef6ee4fd863de30d8a2179450bc86cb5cfafeb5/src/VstsSyncMigrator.Core/Execution/OMatics/RepoOMatic.cs#L137).
In my case these links are the same, because the TFS / Azure DevOps project IDs are the same: the destination project is the result of cloning the source project's collection.
In order for this to work, the Git repository must exist in the target first! If you have changed the name of the Git repository, you must add it to the mapping.
FixGitCommitLinks - Allows you to fix the migrated Git commit hooks (and thus external links) to point to the new repository in the target project. If the source and target repository names are the same, this will work out of the box. If the target repository has a different name, you can specify that name via the "TargetRepository" property.
https://nkdagility.github.io/azure-devops-migration-tools/
This is slightly out of date; you need to use a "GitRepoMapping" entry, similar to the "WorkItemTypeDefinition" element in the configuration, that maps all of the old names to the new names. Again, it is only needed when you use a different name in the target than in the source.
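If I read the current documentation correctly, that is a top-level dictionary in the configuration file mapping source repository names to target names; the names below are placeholders:
"GitRepoMapping": {
  "OldRepositoryName": "NewRepositoryName"
}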

Storing account settings in a single row with complex data

I need to store account settings for each account profile. I decided to use a SQL database for this, but I am not sure whether I should go with complex data (JSON/XML).
I found these answers:
Using a Single Row configuration table in SQL Server database. Bad idea?
https://softwareengineering.stackexchange.com/questions/163606/configuration-data-single-row-table-vs-name-value-pair-table
but none of them discusses a single-row approach containing complex data.
The complex data would be stored in the following DB table
AccountID int
AccountSettings nvarchar(max)
and would contain AccountSettings data such as
"settings": {
"branding": {
"header_color": "1A00C3",
"page_background_color": "333333",
"tab_background_color": "3915A2",
"text_color": "FFFFFF",
"header_logo_url": "/path/to/header_logo.png",
"favicon_url": "/path/to/favicon.png",
},
"apps": {
"use": true,
"create_private": false,
"create_public": true
},
"tickets": {
"comments_public_by_default": true,
"list_newest_comments_first": true,
"collaboration": true,
"private_attachments": true,
"agent_collision": true
"list_empty_views": true,
"maximum_personal_views_to_list": 12,
"tagging": true,
"markdown_ticket_comments": false
},
"chat": {
"maximum_request_count": 5,
"welcome_message": "Hello, how may I help you?",
"enabled": true
},
"voice": {
"enabled": true,
"maintenance": false,
"logging": true
},
"twitter": {
"shorten_url": "optional"
},
"users": {
"tagging": true,
"time_zone_selection": true,
"language_selection": true
},
"billing": {
"backend": 'internal'
},
"brands": {
"default_brand_id": 47
},
"active_features": {
"on_hold_status": true,
"user_tagging": true,
"ticket_tagging": true,
"topic_suggestion": true,
"voice": true,
"business_hours": true,
"facebook_login": true,
"google_login": true,
"twitter_login": true,
"forum_analytics": true,
"agent_forwarding": true,
"chat": true,
"chat_about_my_ticket": true,
"customer_satisfaction": true,
"csat_reason_code": true,
"screencasts": true,
"markdown": true,
"language_detection": true,
"bcc_archiving": true,
"allow_ccs": true,
"advanced_analytics": true,
"sandbox": true,
"suspended_ticket_notification": true,
"twitter": true,
"facebook": true,
"feedback_tabs": true,
"dynamic_contents": true,
"light_agents": true
},
"ticket_sharing_partners": [
"foo#example.com"
]
}
The other solution is the widely used name/value pair approach, such as
AccountID int
SettingsName nvarchar(max)
SettingsValue nvarchar(max)
that could hold data such as
AccountID SettingsName SettingsValue
1 Branding.Header_Color 1A00C3
1 Branding.Page_Background_Color 333333
1 Apps.Use true
......
I assume both solutions are valid and the choice depends on application needs, but I would REALLY like to know: is there an issue I am not seeing when using the single-row approach with complex data to store application settings?
One concern: if you have a busy, large database, you cannot do ONLINE re-indexing (even with the Enterprise edition) on XML, text, and varchar(max) type fields. This causes grief on 2008 R2. Newer versions of MS SQL Server can re-index varchar(max) fields online, so it depends on which version you are running.
Also, you won't be able to query or index the settings if you need to search for specific records, unless you go with the SettingsName/SettingsValue type of table, which I've used a lot. That also avoids the online re-index issue (if it applies to your situation) and lets you index the fields for quick queries.
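A minimal T-SQL sketch of that name/value variant (the table and column names follow the question's example; SettingsName is sized at 200 characters because nvarchar(max) columns cannot be part of an index key):
CREATE TABLE dbo.AccountSettings
(
    AccountID     int            NOT NULL,
    SettingsName  nvarchar(200)  NOT NULL,
    SettingsValue nvarchar(max)  NULL,
    CONSTRAINT PK_AccountSettings PRIMARY KEY (AccountID, SettingsName)
);

-- Indexed lookup of a single setting for an account
SELECT SettingsValue
FROM dbo.AccountSettings
WHERE AccountID = 1
  AND SettingsName = N'Branding.Header_Color';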