I am trying to migrate an Azure DevOps project from one organization to another organization. I get the following message in the console output.
[08:22:35 INF] Found target project as myTestProject
[08:22:35 WRN] ValidatingRequiredField: Epic does not contain Custom.ReflectedWorkItemId
Does this mean that the custom process used in the source has to be used in the target project?
If so, is there a method to export processes in Azure DevOps?
The message actually only means that the Epic work item type is missing the ReflectedWorkItemId field (see documentation and documentation 2).
The field is used to store the state of the migration, so each affected work item type must have it in both the source and the target. Depending on the process model type, you add it either with the old tooling (witadmin) or the new tooling (web access).
"Custom" actually only means a derived process template. With the inheritance model you cannot change the template directly, but must derive from the Microsoft original.
I am creating a C# program and want to execute it from a Custom Activity in Azure Data Factory. However, I cannot work out the steps I should follow.
I have followed a Microsoft article on this, but the steps are not clear. Please help.
The deployment happens at runtime. Basically, Data Factory passes the executable to the Batch service. If you haven't already done so, create an Azure Batch Linked Service to your Batch Account and reference it in the Custom Activity's "Azure Batch" tab.
You will need to upload the executable package to a folder in Azure Blob Storage. Make sure to include the EXE and any dependent DLLs. In the "Settings" tab, do the following:
Reference the Blob Storage Linked Service
Reference the folder path that holds the executable(s).
Specify the command to execute (which should be the ConsoleAppName.exe).
Here is a screen shot of the Settings:
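For reference, the same settings expressed in the pipeline JSON would look roughly like this (assuming ADF v2; the activity name, linked service names, and folder path are placeholders):

    {
        "name": "RunConsoleApp",
        "type": "Custom",
        "linkedServiceName": {
            "referenceName": "AzureBatchLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "command": "ConsoleAppName.exe",
            "resourceLinkedService": {
                "referenceName": "BlobStorageLinkedService",
                "type": "LinkedServiceReference"
            },
            "folderPath": "customactivity/consoleapp"
        }
    }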
If you need to pass parameters from ADF to Batch, they are called "Extended properties", and are handled differently in your Console app than typical parameters. More information can be found at this answer.
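For example, a minimal console-app sketch (assuming ADF v2 behaviour, where the service drops an activity.json file into the Batch task's working directory, and assuming a hypothetical extended property named myParam defined in the pipeline):

    // Reads an ADF "extended property" from activity.json, which (in ADF v2)
    // is written to the Batch task's working directory alongside the executable.
    using System;
    using System.IO;
    using Newtonsoft.Json.Linq;

    class Program
    {
        static void Main()
        {
            var activity = JObject.Parse(File.ReadAllText("activity.json"));
            // "myParam" is a hypothetical property under "extendedProperties" in the pipeline definition.
            var myParam = (string)activity["typeProperties"]["extendedProperties"]["myParam"];
            Console.WriteLine($"myParam = {myParam}");
        }
    }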
With Azure DevOps Server 2019 RC it is possible to enable the inherited process model on new collections (see the release notes). Is there any way to use the inherited process model for existing collections as well, where no customization of the process has been made?
The inherited process model is currently only supported for new collections created with Azure DevOps Server 2019, not for existing collections.
See this Developer Community entry, which asks for it.
I added a set of comments there describing how I hacked my way from an existing XML collection with a set of projects to the inherited type:
https://developercommunity.visualstudio.com/content/idea/614232/bring-inherited-process-to-existing-projects-for-a.html
This works as long as a vanilla workflow is applied to the existing XML collection before doing the voodoo.
Not exactly an answer to your question, but we recently had the same task and I want to share how we handled it. We also wanted to move to the inherited model and we did not want to do any hacking. So we decided to create a new collection on our Azure DevOps Server 2020 with the inherited model and also migrate our TFVC repository to Git.
Create the new collection. Documentation
Use git-tfs to create a local Git repository from our TFVC repository and push it (a command sketch follows after this list)
Use azure-devops-migration-tools to copy all work items from the old collection to the new collection
In the old collection, add the ReflectedWorkItemId field to every work item type (look here)
In the new collection, add the ReflectedWorkItemId field to every work item type by using the process editor
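A rough sketch of the git-tfs step (the server URLs, collection names, and branch path are placeholders, not the exact commands we ran):

    git tfs clone http://tfsserver:8080/tfs/OldCollection $/MyProject/Main --branches=all
    cd Main
    git remote add origin http://devopsserver/NewCollection/MyProject/_git/MyRepo
    git push --all origin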
Pro tip: create a full backup of the new collection so you can revert to this state easily. I went through multiple try-error-restore cycles.
You can't migrate shared steps or shared parameters like this, because you can't edit these work item types in the new collection. There is a workaround.
We used the WorkItemTrackingProcessor to migrate all Epics/Features/Product Backlog Items/Bugs/Tasks/Test Cases, then ran the same processor again with the mentioned workaround for Shared Steps and Shared Parameters.
This processor also migrates the iterations and area paths.
Finally, we used the TestPlansAndSuitesMigration to migrate the test plans and suites.
To speed up the migration, you can chunk the work items (for example by date or ID) and start the migration multiple times.
Our build and release pipelines and task groups were migrated manually by export and import.
We migrated the variable groups by using the REST API (a sketch follows at the end of this answer).
The teams were created manually, and we also added the default area paths by hand.
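For the variable groups, a minimal sketch of reading them via the REST API. The collection URL, project name, and api-version are assumptions and may need adjusting for your server version; the returned payload can then be used to re-create the groups in the target collection:

    // Minimal sketch: list the variable groups of a project via the Azure DevOps REST API.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class VariableGroupReader
    {
        static async Task Main()
        {
            // Personal access token, read from an environment variable here.
            var pat = Environment.GetEnvironmentVariable("AZDO_PAT");
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

                // Collection URL, project name, and api-version are placeholders; adjust for your server.
                var url = "http://devopsserver/DefaultCollection/MyProject/_apis/distributedtask/variablegroups?api-version=5.1";
                var json = await client.GetStringAsync(url);
                Console.WriteLine(json); // use this payload to re-create the groups in the target collection
            }
        }
    }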
I need to change the package for ~250 SAP development objects (ABAP classes, data elements, tables, etc.). I'm getting error message TR242 (Object already exported, no package change is possible) when I try to make the change via the SE24/SE80 transactions or via the RSWBO052 report.
The SAP help docs say that the object must be copied under a new name, the old one deleted, and the new one renamed back to the old name. However, that is not a good way for 250 objects.
Is there any way to do a mass package change other than call transaction/LSMW for this case?
The problem occurred because I was trying to move the development objects to a non-transportable package, as @vwegert mentioned above. The target package was marked as non-transportable because it was marked as a legacy package. This happened because the target package had been moved from a system with a basis level lower than the current system's basis level. The following steps are necessary to resolve the issue:
1. The legacy package must be migrated via report RS_MIGRATE_PACKAGES (see note 1711900). The 'legacy package' mark will be removed, but the package will still be non-transportable. However, you will be able to recreate the package after the migration.
2. Delete the non-transportable target package and create a new one as a copy of the non-TMS package.
3. Assign all necessary objects to the package created in step 2 using the RSWBO052 report.
This message occurs if you try to move objects from a transport-enabled package to a non-transportable package like $TMP. The rationale behind this is:
The object once was in a transportable package, so it must have been added to at least one transport request.
The transport request might have been transported to another system (directly or via ToC), so the other system might have that object.
The current system is the original system of the object, so it is responsible for notifying the other systems (via transport) when the object is to be deleted.
Moving the object to a non-transportable package is semantically equivalent to deleting it for the rest of the system landscape.
Since that process happens very infrequently, it's usually sufficient to direct the developer to copy and delete the object.
Let me explain in more detail:
First: I'm running Endeca 3.1, so "Endeca Server" here refers to 3.0's Data Domain.
I'm required to use an Endeca Server currently present on Endeca (downloaded as a demo VM). All the info on it, including groups, attributes and data, must be merged into our Endeca Server. (It can also be the other way around; I could merge my Endeca Server into this one.)
So far, I've tried to do the following:
1) Clone the Endeca Server.
2) Use the putCollection sconfig operation to create a collection on it with the same name I have on mine.
3) Load configurations using the LoadCollection & LoadAttributes graphs from the OEID POC Template 3.1. I point to the new collection in the Configuration.xls file.
This is where I encounter an issue. The LoadAttributes graph gets a timeout message from the server's web service. Then the config WSDL becomes inaccessible for a while. I can't get beyond this point.
I've been able to load data into the collection, but I need to load the attributes first.
Thanks in advance for your replies.
Regards
There are a few techniques.
Have you tried exporting the data domain and then importing it?
You can use the endeca-cmd tools to export to a file, and then import from that file. This would enable you to add two datastores into one server.
If you want to combine two datastores, then that is a different question.
The simplest approach in 3.1, if the data collections are small: extract them as CSV (via a data table), convert to XLS, and add them via self-provisioning into separate collections within a single data store. If you are running in the VM, this is potentially the easiest approach.
This can also be done using Integrator.
You don't need to load the attributes unless you are using multi-value types. You can call against the conversation web service to extract data and then load it using 'bulk load'. I would not worry too much about creating the attributes unless this becomes essential due to their type or complexity. If you cannot call against the conversation web service, then again extract as CSV and load using Integrator.
I have a solution that I created with the new modeler tools. This gave me two full "endpoints" in a single solution.
Now when I run them through my automated build, I have two DLLs in the same folder that implement IConfigureThisEndpoint.
If I just run NServiceBus.Host.exe /install (to get a Windows service), it gives me the (expected) error that there is more than one class that can be used.
I did some searching and Udi states here: http://tech.groups.yahoo.com/group/nservicebus/message/3937 that "You can specify which class you want loaded and avoid these issues - as the server project in the pub/sub sample shows".
I looked at the pub/sub sample and I can't see how to specify my class (at least not at the command line).
Is there a way to get around having to modify my build to put the files in separate folders? (Not really an easy task for me.)
Add a config entry to your app settings with the key EndpointConfigurationType and the value set to the assembly-qualified name of the type.
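For example, in the app.config of the host (assuming a hypothetical endpoint configuration class MyEndpoint.EndpointConfig living in the assembly MyEndpoint):

    <appSettings>
        <add key="EndpointConfigurationType"
             value="MyEndpoint.EndpointConfig, MyEndpoint" />
    </appSettings>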