This question relates to how to propagate a data factory through CI (in VSTS) when there is a self-hosted Integration Runtime defined in the Data Factory.
I have 3 environments set up - Dev / UAT / Prod - each with its own data factory.
The Dev factory hosts the master collaboration branch. I am using VSTS to retrieve the artifacts from the adf_publish branch and deploy the template to UAT (Prod will be done later). I followed much of what is in the guide here.
When deploying to a blank UAT factory with a self-hosted integration runtime (IR), the IR that is deployed in UAT is a copy of the shared IR from Dev (not a linked type), and this causes an error since the credentials used by the IR will not be correct. I expect this, since we are really just deploying an exact copy of the Resource Group template with only the factory name overridden; however, the IR will not work without being re-credentialled against the self-hosted IR VMs.
If I pre-register a linked IR with the UAT environment (linked to the Dev IR), then the deployment fails with a conflict because an IR in the resource group template has the same name as the one I just created in UAT. If it has a different name there is no conflict, but the linked services will point to the template's IR and not the one I created for UAT.
The docs have a note that says the IR should be the same across all the platforms, but I do not think this can be true - one of them (presumably the source/Dev one) must be a shared type and the others linked and authorized.
One option I could see (untested) is to have each environment's IR reference be a separate connection to an actual IR, but then there needs to be some way of overriding the linked services to point to the current environment's IR reference (by template parameter override?). In this scenario, we need to block the template's IR from being deployed, as it won't be needed and won't work.
Has anyone had success in getting CI working in this situation? My sense is the doc was written with the globally shared IR in mind. Either that, or I need to better understand the aim of the Auto Integration setting in the linked services definition.
Many thanks.
Mark.
Update
I think there are a couple of bugs in the service, so I am not expecting an answer. I'll post updates here if I see a resolution from the bug report I have posted here for the dev group.
In a nutshell, this only affects you if:
- you have a self-hosted integration runtime (IR),
- you are trying to deploy a template to a new data factory from an existing data factory (as you would in Dev -> UAT -> Prod), and
- you have a Data Lake (ADL) linked service defined that uses the self-hosted IR.
If you have a self-hosted IR in the template, the newly deployed copy will not be registered with any server (either linked or unique to the new ADF), because the template only records an IR; it does not instantiate one.
While this can be fixed with post-deployment config or scripting, what that can't fix is the dependency in ADL. This is because the ADL linked service wants to encrypt the service principal with the IR... but the IR does not exist at the time of template deployment (i.e. it is not configured on a server and is not active).
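For reference, the post-deployment scripting part (which sorts out the IR registration, but not the ADL encryption issue above) could look something like the sketch below. It is untested and assumes the Microsoft.Azure.Management.DataFactory .NET SDK; every name, resource ID and the "SelfHostedIR" IR name are placeholders, not values from my setup, and the UAT factory would first need permission on the Dev IR for RBAC sharing to work.

using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;
using Microsoft.Rest.Azure.Authentication;

// Placeholder service principal / subscription values.
string tenantId = "<tenant-id>", clientId = "<client-id>", clientSecret = "<client-secret>";
string subscriptionId = "<subscription-id>";

var credentials = ApplicationTokenProvider.LoginSilentAsync(tenantId, clientId, clientSecret)
    .GetAwaiter().GetResult();
var client = new DataFactoryManagementClient(credentials) { SubscriptionId = subscriptionId };

// Resource ID of the shared self-hosted IR in the Dev factory (placeholder).
string devIrResourceId = "/subscriptions/<sub>/resourceGroups/rg-dev/providers/Microsoft.DataFactory/factories/adf-dev/integrationruntimes/SelfHostedIR";

// Overwrite the copied IR in the UAT factory with a *linked* IR that shares
// the Dev factory's registered self-hosted IR.
var linkedIr = new IntegrationRuntimeResource(
    new SelfHostedIntegrationRuntime
    {
        Description = "Linked to the shared self-hosted IR in the Dev factory",
        LinkedInfo = new LinkedIntegrationRuntimeRbacAuthorization(devIrResourceId)
    });

client.IntegrationRuntimes.CreateOrUpdate("rg-uat", "adf-uat", "SelfHostedIR", linkedIr);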
It is no better if you select Managed Service Identity as the authentication on the ADL linked service instead of a service principal: then the template fails to deploy because there are no credentials to encrypt, and it looks like the resource is expecting to encrypt something.
The workaround right now is to use the Azure-hosted IR for Data Lake connections. Unfortunately for us this causes a security problem, because shared IRs cannot be whitelisted in our ADL Gen 1.
I'll keep you posted.
Related
Does anyone have an idea how to "undeploy" an API proxy from an "archive" type Apigee-x environment? It seems like it can't be done from the Apigee UI; it throws an error:
"This operation is not supported. The Environment DeploymentType is ARCHIVE. The required Environment DeploymentType is PROXY".
The environment type can't be changed. The available CLI commands are "delete", "deploy", "describe", "list", and "update" (no "undeploy" command found); "delete" doesn't work, as it can't delete an active deployment. The final goal is to be able to delete the environment, which requires removing/undeploying all API proxies from it first.
I found a solution. The "undeploy" feature I was looking for is not included in the current Apigee-x release. On the Apigee community, Google staff stated that they are looking into implementing it at some point. Until then there is a workaround, where one can deploy an archive with no deployments defined to the environment. Once this is done the proxy is "undeployed" and the environment can be deleted. Here is the step-by-step process for doing it.
I'm doing an ASP.NET MVC integration with Dynamics AX using web services.
We have 3 environments:
DEV
QA (Quality environment)
PROD
We've been developing it for a while on DEV and the services always worked.
Now we are promoting it to the Quality environment (QA, before PROD). I changed the config to point to QA instead of DEV and we discovered a problem.
There are 2 services that return ~90% of their properties as NULL; the other 10% are ok.
We tried:
- Creating a console app that saves the XML received from AX.
- Reviewing the web.config.
- Re-adding the web services.
We discovered that if we just add the service to the project using the QA URL, it works, but if we add it using the DEV URL and afterwards change the config to the QA URL, it doesn't work.
So, it's not about the code, but something on the DEV > QA merge of AX, right?
Any ideas?
Fix
I found a fix for this.
I discovered that if the column order is different from server to server (WCF server), this kind of thing will happen.
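For illustration only (a hypothetical contract, not one of our actual AX proxy types): DataContractSerializer matches XML elements to members in a fixed order, so if the server emits elements in a different order than the client proxy expects, the out-of-order members are silently skipped and left at their defaults - which is exactly the "most properties come back NULL" symptom.

using System.Runtime.Serialization;

// Hypothetical contract, for illustration. If the client proxy expects <Name>
// before <Address> but the QA server sends them the other way round,
// DataContractSerializer skips the out-of-order elements and the properties
// stay null instead of throwing an error.
[DataContract]
public class CustomerDto
{
    [DataMember(Order = 1)]
    public string Name { get; set; }

    [DataMember(Order = 2)]
    public string Address { get; set; }
}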
I fixed it by downloading the WSDL and updating the reference from within Visual Studio.
- Downloaded the WSDL (URL: http://?singleWsdl)
- Saved it to a folder on my PC
- Configured the URL in VS (changed it to the file on my PC instead of the service URL)
- Updated the service reference
It's a bit tedious because you have to change this every time you want to compile for a different environment, but until I find a permanent solution it will work this way.
There are scripts that build the admin server and then create clusters, managed servers, machines, etc. When this domain is built, an additional phantom server, osb_server1 on port 8011, gets created that isn't attached to any cluster or machine.
This is created when wlsb.jar is referenced by one of the scripts.
Once the admin server is up and running and we have the other managed servers as well, I was trying to remove osb_server1 and this error creeps up:
weblogic.management.configuration.AppDeploymentMBeanImpl.isCacheInAppDirectorySet()
Errors must be corrected before proceeding
There are around 120 default deployments on OSB that are targeted to osb_server1. I was trying to retarget them to another server, but that also throws an error ...
Any ideas?
That's due to a weird behaviour/bug of the standard OSB domain template. There is a discussion here: http://theheat.dk/blog/?p=1255.
I didn't follow the steps given by Oracle (as in the URL). What I did was:
I keep the default osb_server1 and make it part of the cluster during domain creation (i.e. it's the first server). Once the domain is created, I reset osb_server1 to the desired name and listen port. That way the singleton services will still be deployed to the first server and the others to the cluster. Using WLST:
# Open the domain offline for editing; domain_name, osb1_listen_port and
# osb1_name are variables defined earlier in the script.
readDomain(domain_name)
# Rename the default osb_server1 and give it the desired listen port.
cd('/Servers/osb_server1')
set('ListenPort', osb1_listen_port)
set('Name', osb1_name)
# The ServerDiagnosticConfig child keeps the old name, so rename it too.
cd('/Servers/' + osb1_name + '/ServerDiagnosticConfig/osb_server1')
set('Name', osb1_name)
# Save the changes and close the domain.
updateDomain()
closeDomain()
I'm using the Web Deploy API to programmatically roll a web package (a .zip file created by MSDeploy.exe) out to a server (we need to do some other things before we release the package, which is why we're not doing it all in one go using MSDeploy.exe).
Here's the code I have. My question is really to clarify what is happening when this is executed. In the package parameters XML file I have the application name specified ("Default Web Site"), but that's about it; no other params are specified in there. From testing, the package appears to get deployed to the server successfully, but my question is: are any other settings on the server I'm deploying to getting changed without my knowledge, are any default settings published, etc.? Things like security settings, directory browsing, etc. that I might not be aware of? The code here seems to deploy the package, but I'm anxious about using this on a production environment when I'm so unsure of how this API works. The MS documentation is not helpful (more like non-existent, actually).
DeploymentChangeSummary changes;
string packageToDeploy = "C:/MyPackageLocation.zip";
string packageParametersFile = "C:/MyPackageLocation.SetParameters.xml";

DeploymentBaseOptions destinationOptions = new DeploymentBaseOptions()
{
    UserName = "MyUsername",
    Password = "MyPassword",
    ComputerName = "localhost"
};

using (DeploymentObject deploymentObject = DeploymentManager.CreateObject(DeploymentWellKnownProvider.Package,
    packageToDeploy))
{
    deploymentObject.SyncParameters.Load(packageParametersFile);

    DeploymentSyncOptions syncOptions = new DeploymentSyncOptions();
    syncOptions.WhatIf = false;

    //Deploy the package to the server.
    changes = deploymentObject.SyncTo(destinationOptions, syncOptions);
}
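As a side note, the same call can be made as a dry run first: with WhatIf set to true, SyncTo reports what it would change without applying anything. A short sketch (the TotalChanges property on the summary is assumed here):

// Dry-run sketch: run this inside the same using block, before the real sync.
// WhatIf = true makes SyncTo report changes without applying them.
var dryRunOptions = new DeploymentSyncOptions { WhatIf = true };
DeploymentChangeSummary preview = deploymentObject.SyncTo(destinationOptions, dryRunOptions);
// TotalChanges is assumed to be available on DeploymentChangeSummary.
Console.WriteLine("Changes the sync would make: " + preview.TotalChanges);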
If anyone could confirm that this snippet should deploy a package to a web site application on a server without changing any existing server settings (unless specified in the SetParameters.xml file), that would be really helpful. Any good resources on using the API or an explanation of how web deployment works behind the scenes would also be much appreciated!
The SetParameters file just controls the values for the parameters defined in the package. A package might be doing much more than that. Web Deploy has a concept of providers, and any given package can have one or more providers.
If you want to make sure that the package is not changing server-side settings, the best approach you can take is to use the API but have the packages deployed via the Web Management Service. This will give you two benefits:
You can control what providers you allow through.
You can add users and give them restricted permissions to deploy to their own site or folder, etc.
The alternate approach is to:
In the package, manually look at the archive.xml and check which providers the package uses. As long as you don't see any providers that can cause server settings to change, such as appHostConfig, webServer or regKey (this is not a comprehensive list), you should be good. runCommand is a provider that allows you to execute batch scripts or commands; while it is a good provider for admins themselves, you need to consider whether you want to allow packages with such providers to run.
You can do the above-mentioned inspection in code by calling GetChildren on the deployment object you create from the package and inspecting the providers and the provider paths.
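A minimal sketch of that inspection: the GetChildren usage and the Name property on each child are taken from the description above and assumed to behave this way, and the blocked provider list is illustrative, not complete.

using System;
using System.Collections.Generic;
using Microsoft.Web.Deployment;

// Illustrative list only - not a complete set of providers that can change server settings.
var blockedProviders = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
{
    "appHostConfig", "webServer", "regKey", "runCommand"
};

// Same package path as in the question's snippet.
string packageToDeploy = "C:/MyPackageLocation.zip";

using (DeploymentObject package = DeploymentManager.CreateObject(
    DeploymentWellKnownProvider.Package, packageToDeploy))
{
    // Each child is assumed to expose its provider via Name.
    foreach (DeploymentObject child in package.GetChildren())
    {
        if (blockedProviders.Contains(child.Name))
        {
            Console.WriteLine("Package uses a provider that can change server settings: " + child.Name);
        }
    }
}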
I have created a WCF Service Web Role project. I can consume the service locally, but I am having issues trying to deploy the service to the Azure cloud. After starting the web role, it just keeps going in a loop where it initializes and then stops. I have not made any changes to the default WebRole class that was added automatically. Can anybody point me to some samples or examples of WCF being deployed to Azure?
The behaviour you're seeing occurs when the instance errors in OnStart or Run. The usual diagnostics error trapping hasn't had a chance to start yet, so this is a difficult problem to debug. You might try adding error trapping inside these functions that writes the error details out to either a blob or a queue so that you can see what is actually happening.
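A minimal sketch of that kind of trap, assuming the classic Microsoft.WindowsAzure.StorageClient library and a connection string setting named "DiagnosticsConnectionString" (both assumptions - adjust to whatever your role actually uses):

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        try
        {
            // Normal startup work goes here.
            return base.OnStart();
        }
        catch (Exception ex)
        {
            // Diagnostics isn't running yet, so write the failure somewhere
            // visible, then rethrow so the role still reports the failure.
            TryLogStartupError(ex);
            throw;
        }
    }

    private static void TryLogStartupError(Exception ex)
    {
        try
        {
            // "DiagnosticsConnectionString" is an assumed setting name.
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("DiagnosticsConnectionString"));
            var container = account.CreateCloudBlobClient().GetContainerReference("startup-errors");
            container.CreateIfNotExist();
            container.GetBlobReference("webrole-onstart.txt").UploadText(ex.ToString());
        }
        catch
        {
            // Never let the error logging itself bring OnStart down.
        }
    }
}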
Having said that, with code that works in the dev fabric but continues to cycle when deployed live, the first thing to check is that all of the references have the appropriate "Copy Local" property set. Anything that is part of the framework or Microsoft.WindowsAzure.ServiceRuntime will need to have Copy Local set to false; everything else should be set to true (third-party assemblies and the like). If this is a web role and you're using MVC, you'll need to check that System.Web.Mvc has Copy Local set to true as well, as this is not included as part of the standard framework deployed in Azure.
Have you looked at the Known Issues information on the WCF Azure code page? There's a patch that's needed, as well as a tweak to the service behavior. Hopefully this will help you.
I just found the root of the problem: it was caused by one of my projects having the target platform set to x86. It seems Azure does not support x86-built assemblies, which can be a problem.