ASP.NET Core Data Protection Key stored to ContentRootPath does not work on different machines - asp.net-core

I have set up a database for development that is available on the local network.
I implemented the Data Protection API to encrypt some of the sensitive information in my models (Entity Framework) before saving it to the database.
In Startup I configured it like this:
var keysfolder = Path.Combine(Environment.ContentRootPath, "Keys");
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(keysfolder));
The key is in the folder, not protected, and shared in the repository because it is only for test data.
I can access the data in my app on 2 different Linux machines, but on one Windows PC I get an Invalid Payload exception.
They share the same commit and use the same purpose strings.
So I must have failed to understand it. I thought I could back up the keys and the database in production and redeploy, if necessary, on a different machine without losing the data.
Can anybody explain why I can't use the key on the Windows PC?

I have a solution to this problem.
I found the answer here: https://techcommunity.microsoft.com/t5/iis-support-blog/system-security-cryptography-cryptographicexception-the-payload/ba-p/1919096
You have to call SetApplicationName when registering Data Protection:
services.AddDataProtection()
    .SetApplicationName("MyApp")
    .PersistKeysToFileSystem(new DirectoryInfo(keysfolder));
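The likely cause, based on how Data Protection works in general rather than on anything specific to this setup: when no application name is set, the application discriminator is derived from the content root path, so an app sitting at a different path (for example a C:\ path on the Windows PC versus the checkout path on the Linux machines) cannot unprotect payloads created elsewhere even though it shares the same key ring. With a fixed application name and matching purpose strings, any machine that has the Keys folder can read the data. A minimal sketch of the consuming side (the class and purpose names here are illustrative, not from the original code):
using Microsoft.AspNetCore.DataProtection;

public class ModelFieldProtector
{
    private readonly IDataProtector _protector;

    public ModelFieldProtector(IDataProtectionProvider provider)
    {
        // The purpose string must be identical on every machine that
        // protects or unprotects these values.
        _protector = provider.CreateProtector("MyApp.Models.SensitiveFields");
    }

    // Protect before saving to the database, Unprotect after loading.
    public string Protect(string plaintext) => _protector.Protect(plaintext);
    public string Unprotect(string protectedPayload) => _protector.Unprotect(protectedPayload);
}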

Related

Release pipeline conflict with integration runtime

This question relates to how to propagate a data factory through CI (in VSTS) if there is a self-hosted Integration Runtime defined in the Data Factory.
I have 3 environments set up - Dev / UAT / Prod, each with its own data factory.
Dev hosts the master collaboration branch. I am using VSTS to retrieve the artifacts from the adf_publish branch and deploying the template to UAT (Prod will be done later). I followed much of what is in this guide here.
When deploying to a blank UAT with a self-hosted integration runtime (IR), the IR that is deployed in UAT is a copy of the shared IR from Dev (not a linked type), and this causes an error since the credentials used by the IR will not be correct. I expect this, since we are really just deploying an exact copy of the resource group template with just the factory name overridden; however, the IR will not work without being re-credentialed against the self-hosted IR VMs.
If I pre-register a linked IR in the UAT environment (linked to the Dev IR), then the deployment fails with a conflict because an IR in the resource group template has the same name as the one I just created in UAT. If it is a different name, there is no conflict, but the linked services will be pointing to the template IR and not the one I created for UAT.
The docs have a note that says the IR should be the same across all the platforms, but I do not think this can be true - one of them (presumably the source/Dev one) must be a shared type and the others linked and authorized.
One option I could see (untested) is to have each environment's IR reference be a separate connection to an actual IR, but then there needs to be some way of overriding the linked services to point to the current environment's IR reference (by template parameter override?). In this scenario, we need to block the template's IR from being deployed, as it won't be needed and won't work.
Has anyone had success in getting CI working in this situation? My sense is the doc was written with a globally shared IR in mind. Either that, or I need to better understand the aim of the Auto Integration setting in the linked services definition.
Many thanks.
Mark.
Update
I think there are a couple of bugs in the service, so I'm not expecting an answer. I'll post updates here if I see a resolution from the bug report I have posted here for the dev group.
In a nutshell, this only affects you if
you have a self-hosted integration runtime (IR),
you are trying to deploy a template to a new data factory from an existing data factory (as you would in Dev->UAT->Prod), and
you have a Data Lake (ADL) linked service defined and using the self-hosted IR.
If you have a self-hosted IR in the template, the newly deployed copy will not be registered with any server (either linked or unique to the new ADF), as the template only records an IR; it does not instantiate one.
While this can be fixed in post-deployment config or scripting, what it can't fix is the dependency in ADL. This is because the ADL linked service wants to encrypt the service principal with the IR, but the IR does not exist at the time of template deployment (i.e. it is not configured on a server and not active).
It is no better if you select Managed Service Identity as the auth on the ADL linked service instead of a service principal: the template then fails to deploy because there are no credentials to encrypt and it looks like the resource is expecting to encrypt something.
The workaround right now is to use the Azure-hosted IR for Data Lake connections. Unfortunately for us this causes a security problem because shared IRs cannot be whitelisted in our ADL Gen 1.
I'll keep you posted.

How to call Apache NMS from in a sandbox?

I'm trying to call Apache ActiveMQ NMS Version 1.6.0 from my code ('IntPub') that must run in a sandbox in a .NET 4.0 environment for security reasons. The program that creates the sandbox makes my code 'partially trusted' and therefore 'security-transparent' which seems to mean that it can't create a ConnectionFactory (see error log below) because NMS seems to be 'security-critical'. Here's the code that's causing this error:
connecturi = new Uri("tcp://my.server.com:61616");
var connectionFactory = new ConnectionFactory(connecturi);
I also tried this instead with similar results:
connecturi = new Uri("activemq:tcp://my.server.com:61616");
var connectionFactory = NMSConnectionFactory.CreateConnectionFactory(connecturi);
Since I can't change the security level of my assembly (the sandbox prevents it), is there a way to make NMS run as 'safe-critical' so it can be called by 'security-transparent' code? Would I have to recompile it to do so, or does NMS do some operation that would never be considered 'safe-critical'?
I appreciate any help or suggestions...
Assembly 'IntPub, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6fa620743b8dc60a' is partially trusted, which causes the CLR to make it entirely security transparent regardless of any transparency annotations in the assembly itself. In order to access security critical code, this assembly must be fully trusted. Detail:
<OrganizationServiceFault xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
<ErrorCode>-2147220956</ErrorCode>
<ErrorDetails xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Collections.Generic" />
<Message>Unexpected exception from plug-in (Execute): Test.Client: System.MethodAccessException: Attempt by security transparent method 'Test.Client.Execute(System.IServiceProvider)' to access security critical method 'Apache.NMS.ActiveMQ.ConnectionFactory..ctor(System.Uri)' failed.
From the error message attributes, it looks like you're running a Dynamics CRM 2011 plugin in sandbox mode, which has some very specific rules about what you can and can't do. In particular, you're only allowed to make network connections via HTTP and HTTPS, so attempting raw TCP sockets will definitely fail.
Take a look at this MSDN page on Plug-in Isolation, Trusts, and Statistics. It looks like there may be a way to relax the network restrictions by modifying a system registry entry to include tcp, etc, in the regex value. Below is an excerpt from the page. Note: I have not done this myself, so can't say for sure it'll work.
Sandboxed plug-ins and custom workflow activities can access the network through the HTTP and HTTPS protocols. This capability provides support for accessing popular web resources like social sites, news feeds, web services, and more. The following web access restrictions apply to this sandbox capability.
Only the HTTP and HTTPS protocols are allowed.
Access to localhost (loopback) is not permitted.
IP addresses cannot be used. You must use a named web address that requires DNS name resolution.
Anonymous authentication is supported and recommended. There is no provision for prompting the logged on user for credentials or saving those credentials.
These default web access restrictions are defined in a registry key on the server that is running the Microsoft.Crm.Sandbox.HostService.exe process. The value of the registry key can be changed by the System Administrator according to business and security needs. The registry key path on the server is:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\SandboxWorkerOutboundUriPattern
The key value is a regular expression string that defines the web access restrictions.
The default key value is:
"^http[s]?://(?!((localhost[:/])|([.])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+)).){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+";*
By changing this registry key value, you can change the web access for sandboxed plug-ins.
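For illustration only (untested, and simply the default value above with the scheme alternation widened so that tcp:// addresses are also admitted), a relaxed pattern might look like:
^(http[s]?|tcp)://(?!((localhost[:/])|([.])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+)).){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+
Whether the sandbox worker actually allows raw TCP sockets once the URI pattern permits them is something you would have to verify yourself.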

Windows Azure Console for Worker Role Cloud Service

I have a worker role cloud service that I have recently developed on my local machine. The service exposes a WCF interface that receives a file as a byte array, recompiles the file, converts it to the appropriate format, then stores it in Azure Storage. I managed to get everything working using the Azure Compute Emulator on my machine and published the service to Azure and... nothing. Running it on my machine again, it works as expected. When I was working on it on my computer, the Azure Compute Emulator's console output was essential in getting the application running.
Is there similar functionality that can be tapped into on the Cloud Service via RDP? Such as starting/restarting the role at the command prompt or in PowerShell? If not, what is the best way to debug/log what the worker role is doing (without using IntelliTrace)? I have diagnostics enabled in the project, but it doesn't seem to be giving me the same level of detail as the Compute Emulator console. I've rerun the role and corresponding .NET application again on localhost and was unable to find any possible errors in the console.
Edit: The Next Best Thing
Falling back to manual logging, I implemented a class that would feed text files into my Azure Storage account. Here's the code:
public class EventLogger
{
    public static void Log(string message)
    {
        // Resolve the "errors" container in the storage account configured
        // for the role, creating it on first use.
        CloudBlobContainer cbc;
        cbc = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("StorageClientAccount"))
            .CreateCloudBlobClient()
            .GetContainerReference("errors");
        cbc.CreateIfNotExist();

        // Write the message to a new blob named after the role instance and a timestamp.
        cbc.GetBlobReference(string.Format("event-{0}-{1}.txt", RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).UploadText(message);
    }
}
Calling EventLogger.Log() will create a new text file and record whatever message you put in there. I found an example in the answer below.
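For completeness, a hedged usage sketch (DoWork is a placeholder, not part of the original code; the point is just to funnel anything that would have gone to the emulator console into blob storage):
public override void Run()
{
    while (true)
    {
        try
        {
            DoWork(); // placeholder for the role's actual processing
        }
        catch (Exception ex)
        {
            // Exceptions end up as text blobs in the "errors" container.
            EventLogger.Log(ex.ToString());
        }

        Thread.Sleep(1000);
    }
}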
There is no console for worker roles that I'm aware of. If diagnostics isn't giving you any help, then you need to get a little hacky. Try tracing out messages and errors to blob storage yourself. Steve Marx has a good example of this here http://blog.smarx.com/posts/printf-here-in-the-cloud
As he notes in the article, this is not for production, just to help you find your problem.

Web Deploy API (deploy .zip package) Clarification

I'm using the Web Deploy API to programmatically roll a web package (a .zip file created by MSDeploy.exe) out to a server (we need to do some other things before we release the package, which is why we're not doing it all in one go using MSDeploy.exe).
Here's the code I have. My question is really to clarify what is happening when this is executed. In the package parameters XML file I have the application name specified ("Default Web Site"), but that's about it; no other params are specified in there. From testing, the package appears to get deployed to the server successfully, but my question is: are any other settings on the server I'm deploying to getting changed without my knowledge, are any default settings published, etc.? Things like security settings, directory browsing, etc. that I might not be aware of? The code here seems to deploy the package, but I'm anxious about using this on a production environment when I'm so unsure of how this API works. The MS documentation is not helpful (more like non-existent, actually).
DeploymentChangeSummary changes;
string packageToDeploy = "C:/MyPackageLocation.zip";
string packageParametersFile = "C:/MyPackageLocation.SetParameters.xml";

DeploymentBaseOptions destinationOptions = new DeploymentBaseOptions()
{
    UserName = "MyUsername",
    Password = "MyPassword",
    ComputerName = "localhost"
};

using (DeploymentObject deploymentObject = DeploymentManager.CreateObject(DeploymentWellKnownProvider.Package,
    packageToDeploy))
{
    deploymentObject.SyncParameters.Load(packageParametersFile);

    DeploymentSyncOptions syncOptions = new DeploymentSyncOptions();
    syncOptions.WhatIf = false;

    // Deploy the package to the server.
    changes = deploymentObject.SyncTo(destinationOptions, syncOptions);
}
If anyone could confirm that this snippet should deploy a package to a web site application on a server without changing any existing server settings (unless specified in the SetParameters.xml file), that would be really helpful. Any good resources on using the API or an explanation of how web deployment works behind the scenes would also be much appreciated!
The SetParameters file just controls the values for the parameters defined in the package, but a package might be doing much more than that. Web Deploy has a concept of providers, and any given package can have one or more providers.
If you want to make sure that the package is not changing server-side settings, the best approach is to keep using the API but deploy the packages via the Web Management Service (WMSvc); a sketch of this follows the two points below. This will give you two benefits:
You can control what providers you allow through.
You can add users and give them restricted permissions to deploy to their own site or folder, etc.
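A hedged sketch of what that looks like with the code above: the only change is pointing DeploymentBaseOptions at the WMSvc endpoint instead of the local host (the URL, site name, and credentials here are illustrative):
DeploymentBaseOptions destinationOptions = new DeploymentBaseOptions()
{
    // Deploy through the Web Management Service so its delegation rules
    // control which providers are allowed to run.
    ComputerName = "https://deployserver.example.com:8172/msdeploy.axd?site=Default%20Web%20Site",
    UserName = "DeployUser",
    Password = "DeployPassword",
    AuthenticationType = "Basic"
};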
The alternate approach is to:
Manually look at the archive.xml inside the package and check which providers it uses. As long as you don't see any providers that can change server settings, such as appHostConfig, webServer, or regKey (this is not a comprehensive list), you should be good. runCommand is a provider that allows you to execute batch scripts or commands; while it is a useful provider for admins themselves, you need to consider whether you want to allow packages with such providers to run.
You can do the above-mentioned inspection in code by calling GetChildren on the deployment object you create from the package and inspecting the providers and the provider paths.
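A rough sketch of that inspection (hedged: the blocked-provider list is illustrative, and the exact property names may differ slightly between Web Deploy versions):
string[] blockedProviders = { "appHostConfig", "webServer", "regKey", "runCommand" };

using (DeploymentObject package = DeploymentManager.CreateObject(DeploymentWellKnownProvider.Package, packageToDeploy))
{
    foreach (DeploymentObject child in package.GetChildren())
    {
        // child.Name is the provider name, child.AbsolutePath its provider path.
        Console.WriteLine("{0} -> {1}", child.Name, child.AbsolutePath);

        foreach (string blocked in blockedProviders)
        {
            if (string.Equals(blocked, child.Name, StringComparison.OrdinalIgnoreCase))
                throw new InvalidOperationException("Package contains a provider that can change server settings: " + child.Name);
        }
    }
}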

FluentNHibernate blows up in Windows Service but not website

I've got a class library doing all my NHibernate stuff. It also handles all the mapping using Fluent NHibernate - no mapping files to deploy.
This class library is consumed by a number of apps, including a Windows Service running on my computer. Although it works fine in all my web apps, the Windows Service gets this when it tries to use NHibernate:
An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.
at FluentNHibernate.Cfg.FluentConfiguration.BuildSessionFactory()
at Kctc.NHibernate.KctcSessionFactory.get_SessionFactory() in C:\Kctc\Trunk\Kctc.NHibernate\KctcSessionFactory.cs:line 28
...more stack trace...
I have checked for an InnerException and there doesn't appear to be one. I have no idea what the PotentialReasons collection is, and Google doesn't seem to be forthcoming either.
This is my dev machine, so when I'm working on my web apps they run locally (i.e. using the web server in Visual Studio). The fact that the Windows Service and my dev web apps are running on this same machine suggests it's not to do with trust settings or what have you.
Can anyone suggest what I should try? This is one of those problems where I'm so stumped I can't even think of how to get more information about it.
Just a wild guess: NHibernate picks up the hibernate.cfg.xml file from the execution directory. Did you configure the execution directory of the service so that it can find this file?
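If that turns out to be the cause, a minimal sketch of forcing the working directory (assuming a standard ServiceBase-derived service; Windows services start in %WinDir%\System32, so relative file lookups miss files deployed next to the exe):
protected override void OnStart(string[] args)
{
    // Make relative lookups (e.g. hibernate.cfg.xml) resolve against the
    // service's own folder instead of System32. Requires System.IO.
    Directory.SetCurrentDirectory(AppDomain.CurrentDomain.BaseDirectory);

    // ... then continue with normal start-up, e.g. building the session factory.
}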
I've found out what the problem is. The Service did not deploy with the required NHibernate.ByteCode.LinFu.dll.
I appear to have an ongoing problem with the Visual Studio compiler not always copying indirect dependencies (i.e. DLLs required by class libraries required by the app) into the output folder during the build. I should have thought of this sooner, really.
Thanks for racking your brains on my behalf, guys.
I bet the name of the connection string is missing from the app.config. For me that message is almost exclusively a missing connection string.
Are you targeting the same database or could it be some sort of schema mismatch between databases?
Could it be authentication issues on the service like you use windows authentication where it can't be used (or the sql authentication that doesn't work)?
It's hard to tell when there is no code, just an exception!
EDIT: Are you ever using HttpContext, HostingEnvironment, or anything else specific to the "web"?