How to get the Deployment method in Azure for Self-Hosted Integration runtime? - azure-powershell

I want to get the deployment method of a Self-Hosted Integration Runtime using PowerShell.
The requirement is to track whether the SHIR was installed using PowerShell commands or created through the Azure Portal by a human user via manual steps.
Is there any flag in Azure which indicates the method of resource deployment, i.e. whether it was a PowerShell, ARM template, or Portal deployment?

A Self-Hosted Integration Runtime is a service installed on a Windows machine (the only OS supported at the moment), either in the cloud (on a VM) or on-premises. There is no flag on the Azure resource that records how it was deployed; the closest you can get to its details is under Applications and Services Logs in the machine's Event Viewer, where you can filter logs by timestamp between the start and end of your ADF pipeline run.
Additionally, if you feel this is not helpful, you can share your feedback so the product team can look into this idea. ✌
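
As a rough illustration, here is a hedged PowerShell sketch. The resource group, factory, and runtime names are hypothetical placeholders, and the event log name is an assumption to verify on your own SHIR host; the point is that the properties returned for the runtime describe the runtime itself, not how it was deployed:

    # Inspect the SHIR resource; none of the returned properties records
    # whether it was created via PowerShell, an ARM template, or the portal.
    $ir = Get-AzDataFactoryV2IntegrationRuntime `
        -ResourceGroupName 'my-rg' `
        -DataFactoryName 'my-adf' `
        -Name 'my-shir'
    $ir | Format-List *

    # On the SHIR host itself, filter the Event Viewer logs for the window
    # between the start and end of your pipeline run. The log name below is
    # an assumption; check Applications and Services Logs for the exact name.
    Get-WinEvent -FilterHashtable @{
        LogName   = 'Connectors - Integration Runtime'
        StartTime = (Get-Date).AddHours(-2)   # example pipeline start
        EndTime   = Get-Date                  # example pipeline end
    } | Select-Object TimeCreated, Id, Message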

Related

How to share Azure Datafactory self-hosted integration runtime for CI/CD managed ADF instances

The Azure Data Factory tutorial (1) states the following:
In CI/CD scenarios, the integration runtime (IR) type in different environments must be the same. For example, if you have a self-hosted IR in the development environment, the same IR must also be of type self-hosted in other environments, such as test and production. Similarly, if you're sharing integration runtimes across multiple stages, you have to configure the integration runtimes as linked self-hosted in all environments, such as development, test, and production.
If I use dev/test/prod environments as described in the tutorial with a self-hosted integration runtime (SHIR), do I need to create an extra Azure Data Factory that serves the SHIR to the CI/CD-managed environments as a linked integration runtime?
(1) https://learn.microsoft.com/en-us/learn/modules/operationalize-azure-data-factory-pipelines/4-continuous-integration-deployment
Yes, a separate data factory is needed whose role is to host the SHIR and share it with the other data factories. This is from a Microsoft support ticket I had with the same question.
In my case I had two ADFs provisioned, prod and non-prod, and wanted to share a multi-node SHIR with both. I was unable to share the one created on the non-prod ADF with the prod ADF as I was expecting. Microsoft said I needed a third data factory to host the SHIR and share it with both the prod and non-prod ADFs.
The same tutorial gives the following additional best practice:
Integration runtimes and sharing. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type.
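As a hedged PowerShell sketch of that linking step (all resource names are hypothetical, and the exact role to grant is an assumption drawn from the documented sharing flow, so verify it against the current docs):

    # Look up the physical SHIR in the dedicated "shared" factory.
    $sharedIr = Get-AzDataFactoryV2IntegrationRuntime `
        -ResourceGroupName 'shared-rg' `
        -DataFactoryName 'shared-adf' `
        -Name 'shared-shir'

    # Grant the consuming factory's managed identity access to the shared IR.
    $prodFactory = Get-AzDataFactoryV2 -ResourceGroupName 'prod-rg' -Name 'prod-adf'
    New-AzRoleAssignment `
        -ObjectId $prodFactory.Identity.PrincipalId `
        -RoleDefinitionName 'Contributor' `
        -Scope $sharedIr.Id

    # Create the linked self-hosted IR in the consuming (e.g. prod) factory.
    Set-AzDataFactoryV2IntegrationRuntime `
        -ResourceGroupName 'prod-rg' `
        -DataFactoryName 'prod-adf' `
        -Name 'linked-shir' `
        -Type SelfHosted `
        -SharedIntegrationRuntimeResourceId $sharedIr.Id

Repeating the last two steps for the non-prod factory gives both environments a linked IR backed by the same physical nodes.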
Other integration runtime related notes
Default Integration runtime
It should also be noted that on the adf_publish branch only some of the Microsoft.DataFactory/factories/integrationRuntimes resources are visible.
For example, AutoResolveIntegrationRuntime deployed without VNet support is not visible, but Self-Hosted Integration Runtimes are.
Integration Runtime with VNet support
The current portal-driven deployment template uses a vNetEnabled parameter to control whether the following resources are created:
Microsoft.DataFactory/factories/managedVirtualNetworks
Microsoft.DataFactory/factories/integrationRuntimes with the name AutoResolveIntegrationRuntime and a managedVirtualNetwork config
A VNet-enabled integration runtime is present in the adf_publish/ARM export.
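
If you redeploy the exported template yourself, the same switch can be passed from PowerShell. A minimal sketch, assuming a hypothetical template file name (vNetEnabled is the parameter the portal template uses):

    # Deploy the exported factory template with managed VNet resources enabled.
    New-AzResourceGroupDeployment `
        -ResourceGroupName 'my-rg' `
        -TemplateFile './arm_template.json' `
        -TemplateParameterObject @{ vNetEnabled = $true }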

How to package and deploy cumulocity server-side agents?

We are creating a server-side agent which periodically fetches data from nodes and maps this data to Cumulocity measurements and events.
What is an elegant approach for hosting and/or packaging such a server-side agent?
We are hosting our own instance of the Cumulocity platform.
It's preferable to keep this server-side agent as 'close' to the core platform as possible, e.g. share some core agent framework dependencies.
We'd like to limit the amount of additional environments or containers (e.g. Tomcat) that we need to set up.
Cumulocity uses Karaf; would it make any sense to deploy the server-side agent into Karaf as a bundle?
Is there any recommended approach for hosting server-side agents? Does the cumulocity platform offer an alternative to deploying the agent to some "own environment"?
The Cumulocity examples repository contains the "tracker-agent" server-side agent example, which is an embedded Tomcat Java application. There is little information about the intended deployment location.
I don't recommend deploying agents/microservices directly into the core Karaf server, since that endangers the resources available to the core APIs and is not supported (i.e., it will likely be overwritten with the next upgrade...).
Typically, people just provision an additional VM or Docker container next to Cumulocity to host their agents/microservices. On top of that, we, for example, often use Spring Boot, so the effort is pretty low (java -jar ...).
We do have a hosting system for agents/microservices and will make it generally available for others to use in Q1/2018. Follow the announcement channel at https://support.cumulocity.com to stay posted...

Visual Studio Team Services Test Running

Apologies if something similar has been asked before; I couldn't seem to find anything, so just point me in the right direction if so.
I'm brand new to test automation. I will be writing Selenium tests against a third-party website hosted on an internal network. Our source control is provided by Visual Studio Team Services, although it is possible for me to install TFS on-premises.
Eventually I need to schedule test runs. I believe all of this can be done with Team Services; I've seen some demos, all good.
I will be using a URL to access the system under test, which is on our internal network. If Team Services tries to run a Selenium test and connect to that URL, I imagine it will fail, as it's running from wherever Microsoft is holding the code and building it.
I don't think there is any chance we would allow Team Services access to our internal network, even if that were possible.
So the question is: what are my options? Can the build be moved from VS Team Services onto a local machine to run the tests against the internal URL? If so, is that a good idea? Am I relying too much on the internet for testing on our internal network, and is this a risk?
I have spent a bit of time on "the google" but am struggling to find a great deal of information; it's possible I am asking the wrong questions.
Any help is greatly appreciated. Links to articles are fine; I don't mind doing the legwork, I just need some pointers.
Many thanks for your help, and apologies if any of that makes no sense.
You have a few options:
Install a VSTS build agent on-premise and connect it to VSTS (see the sketch after this list). The agent connects to VSTS using an outbound connection, and it will be able to execute build and release pipelines and from there orchestrate the execution of tests. You can either put this agent in a specific agent pool or agent queue, or you can add a capability to it (e.g. "onprem"). By setting the build definition to use the specified pool/queue, the agent will be selected; or, by adding the demand "onprem" to your build definition, you ensure that it always requires that capability of any agent.
Use TFS 2015u3 or TFS 2017 with the same agent, but that would mean you lose all the goodness that VSTS brings with regard to licensing, "free upgrades" and all.
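For the first option, registering the agent is a one-time command run from the unpacked agent folder. A minimal sketch, assuming hypothetical account, pool, and agent names (the exact flags vary slightly between agent versions, so treat them as indicative):

    # Register an on-premise agent against a VSTS account using a PAT,
    # place it in a dedicated pool, and run it as a Windows service.
    .\config.cmd --unattended `
        --url https://your-account.visualstudio.com `
        --auth pat --token <personal-access-token> `
        --pool onprem-pool `
        --agent build-agent-01 `
        --runAsService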
With regards to security.
Adding an agent to your network that executes commands queued on a cloud service adds a risk. You can minimize that risk by configuring the build agent with a limited account, using Active Directory to limit the machines this user can run processes on or log on to, and limiting access to this agent through permissions on the queue and pool as well. You can ensure that the users who have access to this pool, and all your VSTS administrators, have configured two-factor authentication on their AAD accounts, and if needed add IP access control to these accounts as well. It's recommended that users who administer such agent pools/queues do not have alternate credentials configured, and that the Personal Access Token used to register the agent is scoped to just the permissions required to do that.
With these extra measures in place you'll have a pretty secure setup. And it beats the hassle of having to install, back up, and maintain a couple of TFS servers on-premise.

How to deploy an auto-updating WCF windows service?

I have a WCF Windows service that is used locally only. I need to deploy it at multiple sites, and I need the option to auto-update it: when an update is released, the service has to be able to get the new version and update itself.
The service will run on Windows 7, so the permission issue needs to be taken into account somehow.
I have no experience with services and their deployment, feel free to explain thoroughly.
Edit
I've been considering ClickOnce, since another application I'm writing is deployed using it. The thing is, ClickOnce only checks for updates on startup, and Windows services are supposed to stay up and running.
Is it possible to use the ClickOnce detection in my other app and then update the service (permission-wise)?
Can ClickOnce start and stop the service?
Can the update be silent?
You have to take into account the expected availability of your service and the update policy for your application.
Besides that, you might want to take a look at:
Is there a way to check if a ClickOnce application is running the latest version
http://madprops.org/blog/Updating-ClickOnce-Application-Programatically/
http://msdn.microsoft.com/en-us/library/xc3tc5xx.aspx
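Whatever triggers the update, the core of the flow the question asks about is stopping the service, swapping the binaries, and restarting. A rough PowerShell sketch, with a hypothetical service name and paths (this must run elevated, which is exactly the Windows 7 permission issue raised above):

    $serviceName = 'MyWcfService'                          # hypothetical
    $updateDrop  = '\\server\releases\MyWcfService\latest' # hypothetical
    $installDir  = 'C:\Services\MyWcfService'              # hypothetical

    # Stop the service, copy the new binaries over the old ones, restart.
    Stop-Service -Name $serviceName
    Copy-Item -Path (Join-Path $updateDrop '*') -Destination $installDir -Recurse -Force
    Start-Service -Name $serviceName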
Cheers,

Can Azure be inter-operable with Amazon?

I have a question about whether cloud vendors have an interoperable mechanism. For example, I am developing a WCF service and hosting it in Azure successfully. After a prolonged time using Azure, can I use the same code to deploy it on AWS? Will that be possible? Do the deployment APIs of both match? If not, what extra care is needed to host the same service when switching over to other cloud vendors like Salesforce.com, OpenStack, etc.?
In general, you can't just take what you develop for one Cloud platform and put it on another: they have different functionality sets and expose different APIs. However, the more low-level you make your code, the more likely it is that you'll find another vendor with a very similar API, since virtualizing infrastructure is simpler (and closer to standardized) than virtualizing a CMS application.
If you're using just IaaS, you can probably port fairly rapidly, but you have to do more work to build your application in the first place. If you're using PaaS (or SaaS!) then you're more locked in, but you get more support for developing rapidly: it's that support platform which is both the value-add and the lock-in, and you won't get one without the other.
If you're using an Azure web role to host your WCF service, then from a deployment point of view you will not have many problems with AWS. You'll simply use the facilities offered by the AWS SDK for .NET (aka Publish to AWS CloudFormation). You'll certainly have to change the logging part if you've used Azure Diagnostics, and replace all Azure services with the related AWS services. We did this multiple times in the last year and it works.
For worker roles it's not so simple, because in Azure they are deployed as easily as web roles, but in AWS you don't have direct deployment from Visual Studio, so you have to do some manual work using Windows services or something else.
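As one concrete shape of that manual work, a hedged sketch of installing the worker-role equivalent as a plain Windows service on the AWS instance (executable name and path are hypothetical):

    # Register the worker executable as a Windows service and start it.
    New-Service -Name 'MyWorker' `
        -BinaryPathName 'C:\apps\MyWorker\MyWorker.exe' `
        -StartupType Automatic
    Start-Service -Name 'MyWorker'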