Has anyone used the File System Endpoint to export or import Work Item Data? - azure-devops-migration-tools

I've been trying to get the File System Endpoint to work for a migration I'm working on, and so far I have been unsuccessful in getting the configuration right.
Situation: We have an Azure DevOps Services instance that contains our work items, and we have now been directed to migrate them to an Azure DevOps Server instance in an "air-gapped" environment. The plan is to export the work items via the FileSystemEndpoint and then run the reverse on the private environment.
My question: has anyone used the FileSystemEndpoint for either export or import outside of the Tests?

Related

Unable to Modify Azure DevOps Project "Process". I just provisioned a free cloud server instance. I am an admin

I just created a new Azure DevOps cloud instance. I am the project admin, but I am unable to update Organization Settings -> "Process". When I try to add a custom field to a work item type or try to change the "States", it seems I don't have permission to do so.
Please help.
I figured it out. I created a Process that inherited the default Process. Then I changed the Process for the Project. Poof, now I can make changes to my Project's Process, e.g. add new work item types.

What is the correct Cloud SQL connection string syntax for dotnetcore app with Cloud Run?

I want to set up a .NET Core web application on Cloud Run with a Google Cloud SQL database. I easily deployed the database, which has a public IP, on Cloud SQL, and my web application as a Docker container on Cloud Run. I can access the database with SQL Server Management Studio without any difficulties, and the web app is up and running as expected. The only piece missing is the link between them that allows them to connect.
In my web app, I have a connection string in this format:
Data Source=***;Initial Catalog=***;User ID=***;Password=***;Pooling=true;Trusted_Connection=false;Connection Timeout=60;Integrated Security=false;Persist Security Info={0};Encrypt=true;TrustServerCertificate=true;MultipleActiveResultSets=true;
Now that I have the public IP and the connection name from Cloud SQL, what exactly should the connection string be, and/or what are the next steps?
Furthermore, in the Connections tab of the Cloud Run service, I added the Cloud SQL connection, which is supposed to configure a Cloud SQL proxy for me.
In order to connect to Cloud SQL from Cloud Run, you must follow this guide
You have already made some of the configuration in the Connections tab, as described in the Configuring Cloud Run section. Since you configured your instance with a public IP, check the Public IP steps in the guide to be sure that all of them were followed.
Briefly, the steps are:
Configure the service account for your service. Make sure that the service account has the appropriate Cloud SQL roles and permissions to connect to Cloud SQL.
The service account for your service needs one of the following IAM roles:
Cloud SQL Client (preferred)
Cloud SQL Admin
If the authorizing service account belongs to a different project than the Cloud SQL instance, the Cloud SQL Admin API and IAM permissions will need to be added for both projects.
Like any configuration change, setting a new configuration for the Cloud SQL connection leads to the creation of a new Cloud Run revision. Subsequent revisions will also automatically get this Cloud SQL connection, unless you make explicit updates to change it.
Go to Cloud Run.
Configure the service. If you are adding Cloud SQL connections to an existing service:
  Click on the service name.
  Click on the Connections tab.
  Click Deploy.
Enable connecting to a Cloud SQL instance:
  Click Advanced Settings.
  Click on the Connections tab.
  If you are adding a connection to a Cloud SQL instance in your project, select the desired Cloud SQL instance from the dropdown menu.
  If you are deleting a connection, hover your cursor to the right of the connection to display the Trash icon, and click it.
Click Create or Deploy.
After you've double checked the steps above, you could continue with the section Connecting to Cloud SQL. You can follow the steps on the Public IP tab.
Connect with Unix sockets
Once correctly configured, you can connect your service to your Cloud SQL instance's Unix domain socket accessed on the environment's filesystem at the following path: /cloudsql/INSTANCE_CONNECTION_NAME.
The INSTANCE_CONNECTION_NAME can be found on the Overview page for your instance in the Google Cloud Console or by running the following command:
gcloud sql instances describe [INSTANCE_NAME]
These connections are automatically encrypted without any additional configuration.
The code samples shown below are extracts from more complete examples on the GitHub site. To see this snippet in the context of a web application, view the README on GitHub.
// Equivalent connection string:
// "Server=<dbSocketDir>/<INSTANCE_CONNECTION_NAME>;Uid=<DB_USER>;Pwd=<DB_PASS>;Database=<DB_NAME>;Protocol=unix"
// Requires a MySQL ADO.NET provider (e.g. the MySqlConnector or MySql.Data NuGet package).
String dbSocketDir = Environment.GetEnvironmentVariable("DB_SOCKET_PATH") ?? "/cloudsql";
String instanceConnectionName = Environment.GetEnvironmentVariable("INSTANCE_CONNECTION_NAME");
var connectionString = new MySqlConnectionStringBuilder()
{
    // The Cloud SQL proxy provides encryption between the proxy and instance.
    SslMode = MySqlSslMode.None,
    // Remember - storing secrets in plain text is potentially unsafe. Consider using
    // something like https://cloud.google.com/secret-manager/docs/overview to help keep
    // secrets secret.
    Server = String.Format("{0}/{1}", dbSocketDir, instanceConnectionName),
    UserID = Environment.GetEnvironmentVariable("DB_USER"),     // e.g. 'my-db-user'
    Password = Environment.GetEnvironmentVariable("DB_PASS"),   // e.g. 'my-db-password'
    Database = Environment.GetEnvironmentVariable("DB_NAME"),   // e.g. 'my-database'
    ConnectionProtocol = MySqlConnectionProtocol.UnixSocket
};
connectionString.Pooling = true;
// Specify additional properties here.
return connectionString;
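The sample above targets Cloud SQL for MySQL. Since the database in the question is a Cloud SQL for SQL Server instance with a public IP, a rough TCP-based equivalent using SqlConnectionStringBuilder might look like the sketch below. This is only a sketch under assumptions: the DB_HOST/DB_USER/DB_PASS/DB_NAME environment variable names are made up, and it assumes the instance is reachable from the service over TCP (how you make it reachable, e.g. public IP with authorized networks or private IP through a VPC connector, is outside this sketch).

// Sketch: Cloud SQL for SQL Server reached over TCP (public or private IP).
// Requires: using Microsoft.Data.SqlClient; (or System.Data.SqlClient)
var builder = new SqlConnectionStringBuilder()
{
    DataSource = Environment.GetEnvironmentVariable("DB_HOST"),      // e.g. the instance's public IP
    InitialCatalog = Environment.GetEnvironmentVariable("DB_NAME"),  // e.g. 'my-database'
    UserID = Environment.GetEnvironmentVariable("DB_USER"),          // e.g. 'sqlserver'
    Password = Environment.GetEnvironmentVariable("DB_PASS"),
    Encrypt = true,
    TrustServerCertificate = true,
    Pooling = true,
    ConnectTimeout = 60
};

using var connection = new SqlConnection(builder.ConnectionString);
connection.Open();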
Google recommends that you use Secret Manager to store sensitive information such as SQL credentials. You can pass secrets as environment variables or mount them as a volume with Cloud Run.
After creating a secret in Secret Manager, update an existing service with the following command:
gcloud run services update SERVICE_NAME \
  --add-cloudsql-instances=INSTANCE_CONNECTION_NAME \
  --update-env-vars=INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME_SECRET \
  --update-secrets=DB_USER=DB_USER_SECRET:latest \
  --update-secrets=DB_PASS=DB_PASS_SECRET:latest \
  --update-secrets=DB_NAME=DB_NAME_SECRET:latest
See also:
GoogleCloudPlatform/dotnet-docs-samples on GitHub

SSIS flat file folder permission error when NOT running from SQL Server Agent

Setup: A pretty standard data export SSIS package (SQL Server 2016 compatible), created in VS2019/Data Tools and deployed using the SSIS Project Deployment model to the Integration Services Catalog of a SQL Server 2016 instance. The package creates files in a network folder before sending the file out via FTP and putting a copy of the file in a Sent folder.
The project requirements include having the package running on a schedule using "default" parameter values, as well as allowing users to manually run the package using "non-default" parameter values from within a stand-alone application.
Current behavior: the package behaves correctly when run from a SQL Server Agent Job that is configured with a SQL proxy and credentials mapped to a domain login with the proper permissions for the network folder.
Problem: the Data Flow task fails to create the file with a "Cannot open the datafile" error when running the package directly using any of the following methods (even when the "current" session is using the same credentials as the SQL Server Credentials/Proxy used by the SQL Server Agent Job):
Using SSMS to right-click on the package and selecting Execute
Using the DTEXEC command-line utility
Using the SSISDB.catalog.start_execution SQL Server stored procedure
As far as I'm aware, these are the only methods capable of starting an SSIS package and changing the package's parameter values. I either need to get one of the latter two methods to work, find another option that allows changing the parameter values while launching the package, or use one of the two techniques I'm aware of (detailed below), both of which would add yet another failure point to the process as well as other potential issues.
Note: If the process is changed to initially create the file on the SQL Server's local hard drive, then the Data Flow task succeeds, but the later copy-to-Sent-folder task fails with a very similar permissions error.
Alternative #1: this technique requires creating a new table, loading the parameter values into the table, and changing the package to check the table and potentially set its parameters/variables based on what it finds. The package can then be launched from a SQL Server Agent Job (for which there are multiple ways to start one manually), and if the calling object has correctly populated the table, the package behaves as if its parameters were changed at runtime; otherwise it runs with the default values.
Alternative #2: Change all folders used by the package to point to folders local to the SQL Server instance and then create a separate scheduled task/application/whatever, with the valid credentials, that would synchronize or move the files to their proper network folders.
even when the "current" session is using the same credentials as the SQL Server Credentials/Proxy used by the SQL Server Agent Job
This is probably because the account is not logged on locally at the SQL Server, and so it's a Double-Hop Impersonation scenario, and would require Kerberos Constrained Delegation to be configured.
And you are correct in assessing the options. The general solution is to invoke catalog.start_execution from a session running on the SQL Server, and an Agent Job is the simplest built-in way to do this (the others being xp_cmdshell, Service Broker Activation, or SQL CLR).
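If Alternative #1 is the route taken, the calling application only needs to stage the parameter values and then start the existing Agent job, so the package itself still runs under the job's proxy credentials. A minimal sketch of that calling side, with a hypothetical staging table, parameter, folder path, and job name (none of these names come from the original post):

// Sketch: stage runtime parameter values, then start the Agent job that runs the package.
// Requires: using System.Data; and using Microsoft.Data.SqlClient; (or System.Data.SqlClient)
// dbo.PackageRunParameters, the parameter values, and the job name are all hypothetical.
var connectionString = "Server=YourSqlServer;Database=YourHelperDb;Integrated Security=true;"; // placeholder

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // 1. Write the non-default values the package will read at startup.
    using (var insert = new SqlCommand(
        "INSERT INTO dbo.PackageRunParameters (ParameterName, ParameterValue) VALUES (@name, @value)", conn))
    {
        insert.Parameters.AddWithValue("@name", "OutputFolder");
        insert.Parameters.AddWithValue("@value", @"\\server\share\adhoc");
        insert.ExecuteNonQuery();
    }

    // 2. Start the existing Agent job; the package runs under the job's proxy credentials,
    //    so the network folder permissions behave exactly as they do on the schedule.
    using (var startJob = new SqlCommand("msdb.dbo.sp_start_job", conn))
    {
        startJob.CommandType = CommandType.StoredProcedure;
        startJob.Parameters.AddWithValue("@job_name", "Export Package - Manual Run");
        startJob.ExecuteNonQuery();
    }
}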

Real-time log mechanism while importing large files with a Blazor Server app

I'm using a Blazor Server app and importing large files. During the import operation, the service logs errors and warnings. Users want me to show the logs in real time. Should I use SignalR? I only need one-way updates.
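Blazor Server already pushes UI updates to the browser over its built-in SignalR circuit, so for one-way log streaming you usually don't need to add your own hub: let the import code raise log lines as events and have the component re-render via InvokeAsync(StateHasChanged). A minimal sketch, assuming a per-circuit (scoped) ImportLogService that the import code writes to; the service and its members are made up for illustration:

// Registered in Program.cs, e.g. builder.Services.AddScoped<ImportLogService>(); (assumption)
public class ImportLogService
{
    public event Action<string>? LineLogged;
    public void Log(string line) => LineLogged?.Invoke(line);
}

ImportLog.razor:

@inject ImportLogService LogService
@implements IDisposable

<ul>
    @foreach (var line in _lines)
    {
        <li>@line</li>
    }
</ul>

@code {
    private readonly List<string> _lines = new();

    protected override void OnInitialized() => LogService.LineLogged += OnLineLogged;

    private void OnLineLogged(string line)
    {
        // The import runs off the UI synchronization context, so marshal back before re-rendering.
        _ = InvokeAsync(() =>
        {
            _lines.Add(line);
            StateHasChanged();
        });
    }

    public void Dispose() => LogService.LineLogged -= OnLineLogged;
}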

Import data from CouchDB to SQL Server

Is there a way to export data from CouchDB to SQL Server?
We tried SSIS using the HTTP connection manager, a JSON source component, etc. The problem is that CouchDB has some security in place and we can't retrieve the data. We tried installing their certificates, with no success.
Any other suggestions?
CouchDB has a number of .NET client libraries you can use. I would get that working in a console app, write the data out to a CSV on disk, and use SSIS to import it.
If you don't want the additional step, you could write a .NET script source component that uses the .NET client to retrieve the data, but get it working in a console app first so you know it is solid before bringing SSIS into the equation.
Getting started with CouchDB and C#:
https://wiki.apache.org/couchdb/Getting_started_with_C%23
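If you go the console-app route, you don't necessarily need a dedicated client library: CouchDB exposes a plain HTTP/JSON API. A sketch along these lines (the server URL, credentials, database name, and CSV columns are all assumptions) can pull the documents and write a CSV for SSIS to import:

// Sketch: dump a CouchDB database to CSV via CouchDB's HTTP API (_all_docs?include_docs=true).
// Assumes basic authentication; adjust for whatever security the instance actually uses.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class CouchDbToCsv
{
    static async Task Main()
    {
        var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("admin:password"));       // hypothetical
        using var http = new HttpClient { BaseAddress = new Uri("http://couchdb-host:5984/") };    // hypothetical
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

        // _all_docs with include_docs=true returns every document in the database.
        var json = await http.GetStringAsync("mydatabase/_all_docs?include_docs=true");
        using var parsed = JsonDocument.Parse(json);

        // No CSV escaping here; a real export should quote/escape field values.
        using var csv = new StreamWriter(@"C:\temp\couch_export.csv");
        csv.WriteLine("id,name"); // hypothetical columns
        foreach (var row in parsed.RootElement.GetProperty("rows").EnumerateArray())
        {
            var doc = row.GetProperty("doc");
            var id = doc.GetProperty("_id").GetString();
            var name = doc.TryGetProperty("name", out var n) ? n.GetString() : "";
            csv.WriteLine($"{id},{name}");
        }
    }
}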