Issue while migrating Classic pipeline using AzurePipelineProcessor - azure-devops-migration-tools

When I try to migrate a classic build pipeline from one Azure DevOps organization to another, I get an error because the service connection IDs do not match.
Error Message
[07:39:31 ERR] Error migrating BuildDefinition: RnL_Modernization. Please migrate it manually.
Url: POST https://dev.azure.com/Unisys-Sandbox//BPS-Remittance-and-Lockbox-NextGen/_apis/build/definitions/
{"$id":"1","innerException":null,"message":"The pipeline is not valid. Job Phase_1: Step input SonarQube references service connection c5aeac57-e3ce-45ac-a0d7-9ff582d26635 which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz. Job Phase_1: Step input SonarQube references service connection c5aeac57-e3ce-45ac-a0d7-9ff582d26635 which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz.","typeName":"Microsoft.TeamFoundation.DistributedTask.Pipelines.PipelineValidationException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"PipelineValidationException","errorCode":0,"eventId":3000}
[07:39:31 INF] 0 of 1 BuildDefinition(s) got migrated..
My config file
{
  "$type": "AzureDevOpsPipelineProcessorOptions",
  "Enabled": true,
  "MigrateBuildPipelines": true,
  "MigrateReleasePipelines": true,
  "MigrateTaskGroups": true,
  "MigrateVariableGroups": true,
  "MigrateServiceConnections": true,
  "BuildPipelines": [ "RnL_Modernization" ],
  "ReleasePipelines": null,
  "ProcessorEnrichers": null,
  "RefName": null,
  "SourceName": "AzurePipelineSource",
  "TargetName": "AzurePipelineTarget",
  "RepositoryNameMaps": {
    "eShopOnWeb": "eShopOnWeb",
    "AKS": "AKS"
  }
}
The error appears to be caused by a mismatch between the service connection IDs in the source and target projects. When the service connection is created in the target project, its ID is auto-generated, so it does not match the ID referenced by the source definition.
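For what it's worth, the regenerated ID on the target side can be confirmed by listing the target project's service connections through the Service Endpoints REST API (a sketch; the organization, project, and AZDO_PAT values are placeholders):

# List service connections in the target project to find the regenerated
# SonarQube connection ID (api-version may differ on your server).
curl -s -u ":$AZDO_PAT" \
  "https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints?api-version=7.0" \
  | jq '.value[] | {name, id, type}'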
Any help on this is appreciated

Related

How to get .Net Core 3.1 Azure WebJob to read the AzureWebJobsStorage connection string from the Connected Services setup?

I'm building a WebJob for Azure to run in an App Service using .Net Core 3.1.
The WebJob will be triggered via Timers (it's basically a cronjob).
Timer triggers require the AzureWebJobsStorage connection string as storage is required for Timer events.
When deployed to Azure App Service, I want the WebJob to read the AzureWebJobsStorage value from the properties on the App Service.
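For context, the function in question is just a timer callback, roughly like this (a minimal sketch; the schedule and names are illustrative):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Functions
{
    // Fires on a six-field CRON schedule (here: the top of every hour).
    // The Timers extension persists its schedule state in blob storage,
    // which is why AzureWebJobsStorage must resolve to a real account.
    public static void Run([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo, ILogger logger)
    {
        logger.LogInformation($"Cron job fired at {DateTime.UtcNow:O}");
    }
}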
I have a Resource Manager template that deploys my infrastructure and sets the connection string on my App Service resource:
"connectionStrings": [
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('_StoreAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('_StoreAccountName')), '2019-04-01').keys[0].value,';EndpointSuffix=core.windows.net')]"
}
],
When testing my WebJob locally, I need to set that AzureWebJobsStorage value so that my local builds can connect to storage.
Since I re-deploy the infrastructure all the time as I make tweaks and changes to it, I do not want to manually maintain the long connection string in my appsettings.json or a local.settings.json file.
In theory, I can add a Service Dependency to the project in Visual Studio for Azure Storage, and that will store the connection string in my local Secrets.json file. Then, when I redeploy the infrastructure, I can use the Visual Studio UI to edit the connection and re-connect it to the newly deployed storage account (i.e. it will create and update the connection string without me having to do it manually).
When I add Azure Storage as a connected service, Visual Studio adds a line like this in my Secrets.json file:
"ConnectionStrings:<LABEL>": "DefaultEndpointsProtocol=https;AccountName=<LABEL>;AccountKey=_____________;BlobEndpoint=https://<LABEL>.blob.core.windows.net/;TableEndpoint=https://<LABEL>.table.core.windows.net/;QueueEndpoint=https://<LABEL>.queue.core.windows.net/;FileEndpoint=https://<LABEL>.file.core.windows.net/",
and this in my ServiceDependencies/serviceDependencies.local.json:
"storage1": {
"resourceId": "/subscriptions/[parameters('subscriptionId')]/resourceGroups/[parameters('resourceGroupName')]/providers/Microsoft.Storage/storageAccounts/<LABEL>",
"type": "storage.azure",
"connectionId": "<LABEL>",
"secretStore": "LocalSecretsFile"
}
and this in my ServiceDependencies/serviceDependencies.json:
"storage1": {
"type": "storage",
"connectionId": "<LABEL>"
}
Where <LABEL> is the name of the Storage Account (in both JSON snippets).
When I run the WebJob locally, it loads the appsettings.json, appsettings.Development.json, secrets.json, and Environment Variables into the IConfiguration.
However, when I run the WebJob locally it dies with:
Microsoft.Azure.WebJobs.Host.Listeners.FunctionListenerException: The listener for function 'Functions.Run' was unable to start.
---> System.ArgumentNullException: Value cannot be null. (Parameter 'connectionString')
at Microsoft.Azure.Storage.CloudStorageAccount.Parse(String connectionString)
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.get_TimerStatusDirectory() in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Scheduling\StorageScheduleMonitor.cs:line 77
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.GetStatusBlobReference(String timerName) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Scheduling\StorageScheduleMonitor.cs:line 144
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.GetStatusAsync(String timerName) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Scheduling\StorageScheduleMonitor.cs:line 93
at Microsoft.Azure.WebJobs.Extensions.Timers.Listeners.TimerListener.StartAsync(CancellationToken cancellationToken) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Listener\TimerListener.cs:line 99
at Microsoft.Azure.WebJobs.Host.Listeners.SingletonListener.StartAsync(CancellationToken cancellationToken) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Singleton\SingletonListener.cs:line 72
at Microsoft.Azure.WebJobs.Host.Listeners.FunctionListener.StartAsync(CancellationToken cancellationToken, Boolean allowRetry) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Listeners\FunctionListener.cs:line 69
I have confirmed that if I add the ConnectionStrings:AzureWebJobsStorage value to my appsettings.json then the program runs fine.
So I know it's an issue with the loading of the AzureWebJobsStorage value.
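For reference, the appsettings.json entry that makes it run looks like this (the account name and key are placeholders):

{
  "ConnectionStrings": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<LABEL>;AccountKey=...;EndpointSuffix=core.windows.net"
  }
}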
Has anyone figured out how to get an Azure WebJob, running locally, to properly read the connection string that Visual Studio configures when adding the Azure Storage as a Connected Service?
What's the point of adding the Connected Service to the WebJob if it won't read the connection string?
(note: I realize the WebJobs docs at https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#webjobs-sdk-versions state that "Because version 3.x uses the default .NET Core configuration APIs, there is no API to change connection string names", but it's unclear to me whether that means the underlying WebJobs code also refuses to look at the Connected Services setup, or if I'm just missing something)
I found a workaround, but I don't like it: at the end of my ConfigureAppConfiguration code, I check whether a ConnectionStrings:AzureWebJobsStorage value exists and, if not, read the one Visual Studio wrote to secrets.json and register it as ConnectionStrings:AzureWebJobsStorage.
private const string baseAppSettingsFilename = "appsettings.json";
private const string defaultStorageAccountName = "<LABEL>";
...
IHostBuilder builder = new HostBuilder();
...
builder.ConfigureAppConfiguration(c =>
{
    // Load the environment-specific settings file, e.g. appsettings.Development.json.
    c.AddJsonFile(
        path: baseAppSettingsFilename.Replace(".json", $".{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")}.json"),
        optional: true,
        reloadOnChange: true);
    if (Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") == "Development")
    {
        c.AddUserSecrets<Program>();
    }
    // Add Environment Variables even though they are already added, because we
    // want them to take priority over anything set in JSON files.
    c.AddEnvironmentVariables();
    IConfiguration config = c.Build();
    if (string.IsNullOrWhiteSpace(config["ConnectionStrings:AzureWebJobsStorage"]))
    {
        // Fall back to the connection string Visual Studio wrote for the Connected Service.
        string storageConnectionString = config[$"ConnectionStrings:{defaultStorageAccountName}"];
        if (string.IsNullOrWhiteSpace(storageConnectionString))
        {
            throw new ConfigurationErrorsException($"Could not find a ConnectionString for Azure Storage account in ConnectionStrings:AzureWebJobsStorage or ConnectionStrings:{defaultStorageAccountName}");
        }
        // Re-register the value under the key the WebJobs SDK actually reads.
        c.AddInMemoryCollection(new Dictionary<string, string>() {
            { "ConnectionStrings:AzureWebJobsStorage", storageConnectionString }
        });
    }
});
This seems exceedingly dumb, but even looking at the Azure SDK source code, I'm thinking it's just hard-coded to a single key name and the Service Configuration in Visual Studio is simply not supported: https://github.com/Azure/azure-webjobs-sdk-extensions/blob/afb81d66749eb7bc93ef71c7304abfee8dbed875/src/WebJobs.Extensions/Extensions/Timers/Scheduling/StorageScheduleMonitor.cs#L77
I just ran into a similar problem where VS2019 automatically configured Function and Function1 with Connection = "ConnectionStrings:AzureWebJobsStorage" and it couldn't find that. Simply changing it to Connection = "AzureWebJobsStorage" worked like a charm.
FYI, I also had to change BlobTrigger("Path/{name}"... to BlobTrigger("path/{name}"... to resolve "Microsoft.Azure.StorageException: The specified resource name contains invalid characters".
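Putting both fixes together, the trigger ends up looking something like this (a sketch; the method and parameter names are illustrative):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Functions
{
    // Use the bare connection name (no "ConnectionStrings:" prefix) and a
    // lowercase container path, since Azure Storage container names must be lowercase.
    public static void ProcessBlob(
        [BlobTrigger("path/{name}", Connection = "AzureWebJobsStorage")] Stream blob,
        string name,
        ILogger logger)
    {
        logger.LogInformation($"Blob trigger fired for: {name}");
    }
}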

What does "unable to get source run" mean when connecting to the Wercker API?

While trying to trigger a Wercker CI/CD pipeline via API, I got the following error:
{
  "error": "unable to get source run",
  "message": "unable to get source run",
  "code": 13
}
My command is:
POST https://app.wercker.com/api/v3/runs
It turned out I was pointing to a pipeline that is not connected to the version control system, but sits one step further down the workflow. The Wercker API only allows you to execute the "root" pipeline.
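For reference, a run request that targets the root pipeline looks roughly like this (a sketch from memory of the v3 API; the token and pipeline ID are placeholders):

POST https://app.wercker.com/api/v3/runs
Authorization: Bearer <API_TOKEN>
Content-Type: application/json

{
  "pipelineId": "<ROOT_PIPELINE_ID>"
}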

RabbitMQ - ACCESS_REFUSED - Login was refused

I'm using rabbitmq-server and fetch messages from it using a consumer written in Scala. This had been working like a charm, but since I migrated my RabbitMQ server from one machine to another, I get the following error when trying to connect to it:
com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
In addition, the rabbitmq-server logs:
=INFO REPORT==== 18-Jul-2018::15:28:05 ===
accepting AMQP connection <0.7107.0> (127.0.0.1:42632 -> 127.0.0.1:5672)
=ERROR REPORT==== 18-Jul-2018::15:28:05 ===
Error on AMQP connection <0.7107.0> (127.0.0.1:42632 -> 127.0.0.1:5672, state: starting):
PLAIN login refused: user 'my_personal_user' - invalid credentials
=INFO REPORT==== 18-Jul-2018::15:28:05 ===
closing AMQP connection <0.7107.0> (127.0.0.1:42632 -> 127.0.0.1:5672)
I went through every SO question about authentication problems and found the following leads:
My credentials are wrong
I'm trying to connect with guest from remote
My RabbitMQ version is not compatible with the consumer
None of those leads helped me. My credentials are good, I'm not connecting with guest but with a privileged user I created with full access and admin rights, and my RabbitMQ version did not change through the migration.
NB: I migrated my RabbitMQ server from a separate machine to the same one as my consumer, so the consumer is now fetching from localhost. I don't know the consequences, but I figured it could help you help me.
So I just had a similar problem and googled for solutions, which is how I found this page. I didn't find a direct answer to my question, but I ended up discovering that RabbitMQ has two different sets of rights to configure that don't exactly overlap with each other; in my case I had zero rights for one set and admin rights for the other. I wonder if you could be running into a similar scenario.
Seeing the config will make the two sets of rights make more sense, but first some background context:
My RMQ is hosted on Kubernetes, where stuff is ephemeral, and I needed some usernames and passwords to ship preloaded with a fresh RabbitMQ instance. In Kubernetes there's an option to inject a preconfigured broker definition on first startup. (By broker definition I'm referring to that spot in the management web GUI where there's an option to import and export broker definitions, i.e. back up or replace your live RMQ configuration.)
Here's a shortened version of my config with sensitive stuff removed:
{
  "vhosts": [
    { "name": "/" }
  ],
  "policies": [
    {
      "name": "ha",
      "vhost": "/",
      "pattern": ".*",
      "definition": {
        "ha-mode": "all",
        "ha-sync-mode": "automatic",
        "ha-sync-batch-size": 2
      }
    }
  ],
  "users": [
    {
      "name": "guest",
      "password": "guest",
      "tags": "management"
    },
    {
      "name": "admin",
      "password": "PASSWORD",
      "tags": "administrator"
    }
  ],
  "permissions": [
    {
      "user": "guest",
      "vhost": "/",
      "configure": "^$",
      "write": "^$",
      "read": "^$"
    },
    {
      "user": "admin",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ]
}
OK, so when I originally saw that tags attribute, I assumed it was an arbitrary value and put a self-documenting tag there. That was equivalent to "", which resulted in me having zero rights to the web management GUI/REST API, while below it I had ".*" everywhere, so that set of rights had full admin access. It was really confusing, because I was getting a false error message saying I was supplying invalid credentials; the credentials were correct, I just didn't have access.
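In CLI terms, those two separate sets of rights come from two different commands (the user name and vhost here are placeholders):

# The management GUI/REST API access level is controlled by user tags...
rabbitmqctl set_user_tags admin administrator
# ...while configure/write/read rights on a vhost are a separate grant.
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"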
If it's not that, then there's also a configuration setting by which guest is limited to localhost access by default, but you can override it.
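The override is a one-line setting in the newer rabbitmq.conf format (not recommended outside development):

# Allow the guest user to connect from non-loopback interfaces:
loopback_users = none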
We were facing a similar problem with a different tech stack. In our case the stack was:
RabbitMQ deployed in Kubernetes (AKS) using the Bitnami package in HA mode
A consumer and producer in a microservice built with Java 8 and Spring Boot, using Apache Camel, running in the same Kubernetes cluster
We verified the points below:
The user and password are correct
The user is associated with the required vhost
The required permissions are granted (administrator tag)
The user is able to log in from the RabbitMQ web console
Connectivity on the host and port exists from the microservice pod to the RabbitMQ service (checked with tools like telnet)
All code and configuration are exactly the same (the same configuration works correctly in a lower environment)
We were still getting this error:
com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
After much investigation and troubleshooting, we found that the username was longer than the consumer API supported.
For example, we used the username 'productionappuser'. This user was able to log in to the management web console but failed from the microservice.
We changed to a new username with 8 characters and it started working.
This looks very weird, since the same user was able to log in; sharing the findings in case it helps.

I am getting the error "AccessRules: Account does not have the right to perform the operation" when using Postman to hit the register API of ejabberd

What version of ejabberd are you using?
17.04
What operating system (version) are you using?
ubuntu 16.04
How did you install ejabberd (source, package, distribution)?
package
What did not work as expected? Are there error messages in the log? What was the unexpected behavior? What was the expected result?
I used Postman to make an HTTP request to the ejabberd register API. ejabberd is set up and the admin console is running properly at http://localhost:5280/admin.
The URL of the HTTP request is http://localhost:5280/api/register
Body:
{
  "user": "bob",
  "host": "example.com",
  "password": "SomEPass44"
}
Header:
[{ "key": "Content-Type", "value": "application/json", "description": "" }]
Response:
{
  "status": "error",
  "code": 32,
  "message": "AccessRules: Account does not have the right to perform the operation."
}
I searched a lot and figured out that it will require some changes in the ejabberd.yml file. My yml file is available at the link attached.
THIS LINK CONTAINS YML FILE
Any help will be great.
In the config file /opt/ejabberd/conf/ejabberd.yml, find api_permissions.
Change the who and what values of "public commands". Compare your configuration with the sketch below.
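A rough illustration of the shape to aim for (exact syntax varies by ejabberd version, and in production you should restrict who more tightly than this):

api_permissions:
  "public commands":
    who:
      ip: "127.0.0.1/8"
    what:
      - "register"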
see this post:
http://www.centerofcode.com/configure-ejabberd-api-permissions-solve-account-not-right-perform-operation-issue/

Datasource not working in JBoss 7.2

When I create a datasource, a service restart is required to make it work, regardless of the method used to create it (standalone.xml, JBoss CLI, JBoss Administration Console). Attached is the procedure I have written for my team (exported from our Wiki space). The datasource gets created successfully, but when I test the connection, I get this:
From JBoss Administration Console
Unknown error
Unexpected HTTP response: 500
Request
{
    "address" => [
        ("subsystem" => "datasources"),
        ("data-source" => "dsMyApp")
    ],
    "operation" => "test-connection-in-pool"
}
Response
Internal Server Error
{
    "outcome" => "failed",
    "failure-description" => "JBAS010440: failed to invoke operation: JBAS010442: failed to match pool. Check JndiName: java:/dsMyApp",
    "rolled-back" => true,
    "response-headers" => {"process-state" => "reload-required"}
}
From JBoss CLI
JBAS010440: failed to invoke operation: JBAS010442: failed to match pool. Check JndiName: java:/dsMyApp
If I restart the JBoss server, the datasource works fine (server, port, username and password are all correct).
Any thoughts?
Thank you
The quick answer: yes, restarting performs a reload, and the reload is what activates the datasource.
I suggest doing a reload with jboss-cli (it's the quickest way).
I've created all my datasources with jboss-cli and I always need to perform this action to get them working. After the reload, the datasource connection can be tested:
/opt/wildfly/bin/jboss-cli.sh --connect --controller=192.168.119.116:9990 --commands="reload --host=master"
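For a standalone (non-domain) server, the equivalent is simply:

/opt/wildfly/bin/jboss-cli.sh --connect --command="reload"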
Hope it helps