I have an active Azure Container Instance that is already running. How can I add it to my workspace using the Azure ML SDK?
I'm afraid you misunderstand how Azure ML relates to ACI. As far as I know, Azure ML just uses ACI as a deployment target for a Docker image (if you choose to deploy to ACI). Once the deployment finishes, the ACI is a standalone Azure resource with no remaining relationship to the ML workspace, and it makes no difference how the ACI was created. So there is no way, and no need, to add an already-running ACI to the ML workspace.
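For reference, the supported direction is the other way around: you deploy through the workspace and Azure ML creates the container instance for you. A minimal sketch, assuming the v1 azure-cli-ml extension; the service, model, config file, workspace, and resource group names are all hypothetical:

    # Deploy a registered model; Azure ML creates the ACI as part of the deployment.
    # inferenceConfig.json and deploymentConfig.json are hypothetical local files.
    az ml model deploy \
        --name my-aci-service \
        --model my-model:1 \
        --ic inferenceConfig.json \
        --dc deploymentConfig.json \
        --workspace-name my-workspace \
        --resource-group my-rg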
Related
We have a web application developed with Angular and .NET, which is deployed on an Azure cloud platform, let's say External A-Cloud.
We need to lift the same application and host it in a different internal cloud platform, let's say Internal B-Cloud.
How can we achieve this? Please share some thoughts on the groundwork needed to start the process.
Warm Regards
KdM
Migrate an externally hosted cloud-based application to our cloud platform.
We have both AWS and Azure, but the externally hosted application is on the Azure cloud platform.
We can move from any cloud to any cloud, but we need to understand a few points first.
How are the Angular and .NET apps hosted in Azure?
If they are hosted on simple virtual machines: we can create a virtual machine in AWS and migrate or host the apps there (yes, we definitely need the AWS foundations in place first, such as VPCs, the AWS equivalent of Azure VNets; hopefully that is already done).
If the Angular and .NET apps hosted in Azure are Kubernetes- and Docker-based:
We need to create an EKS cluster in AWS; since the workloads are Docker-based, the same manifest files and so on would work in EKS as well with minor changes (see the sketch after this list).
We can also look at migration tools if the apps are Windows VM-based.
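As a rough sketch of the Kubernetes path, assuming an EKS cluster named my-cluster already exists and the manifests from the Azure deployment live in a k8s/ directory (both names hypothetical):

    # Point kubectl at the new EKS cluster
    aws eks update-kubeconfig --region us-east-1 --name my-cluster

    # Re-apply the same manifests used on Azure; container image references
    # (for example ACR URLs) usually need to be repointed to ECR first
    kubectl apply -f k8s/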
How can I create a new Azure Machine Learning workspace when creating a new Azure Container Instance from Azure Cloud Shell?
Here is a sample of the command I am using to create the ACI.
az container create --name dev-container --resource-group XXX --location eastus --image mcr.microsoft.com/XXX --cpu 2 --memory 6 --environment-variables WORKSPACE_NAME=XXX
Thanks
I think you're approaching the problem from the opposite direction to the one the Azure ML product group intends. My understanding is that when you make an Azure ML workspace and deploy from it, an Azure Container Instance is spun up automatically and is inherently tied to that workspace. Check out a similar question another user had this week.
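If the underlying goal is just to create the workspace itself from Cloud Shell, a minimal sketch assuming the v1 azure-cli-ml extension is installed (the XXX placeholders match your command above):

    # Create the Azure ML workspace first; ACI deployments then go through it
    az ml workspace create \
        --workspace-name XXX \
        --resource-group XXX \
        --location eastus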
More generally, Azure ML has a core feature called Environments which provides a simple interface for creating custom Docker/Conda environments.
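As an illustration only, here is roughly what registering a custom Conda environment looks like with the newer v2 "ml" CLI extension; the flag names and base image are assumptions on my part, so check the current docs:

    # Write a minimal Conda spec, then register it as an Azure ML environment
    cat > conda.yml <<'EOF'
    channels:
      - conda-forge
    dependencies:
      - python=3.9
      - pip
    EOF
    # --conda-file and --image flags are assumed from the v2 ml extension
    az ml environment create --name my-env \
        --conda-file conda.yml \
        --image mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04 \
        --workspace-name XXX --resource-group XXX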
Say I am working on one computer, and I do a serverless deploy. And let's say, for argument's sake, that I toss my computer out the window in anger and buy another. Is there any way, when I start working on my serverless project again, to connect to the existing deployed version?
Sure, as long as you still have access to the serverless.yml and any source code (because you checked it into source control or backed it up somewhere), and you still have credentials for the AWS account the serverless app is deployed to.
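A sketch of what that looks like on the replacement machine, assuming a Node-based Serverless Framework project in a Git repo (the repo URL and stage name are hypothetical):

    # Restore the project and credentials on the new machine
    git clone https://github.com/you/my-service.git && cd my-service
    npm ci           # reinstall dependencies from the lockfile
    aws configure    # re-enter the AWS credentials

    # This updates the SAME deployment, because the CloudFormation stack is
    # keyed off the service name and stage in serverless.yml
    npx serverless deploy --stage prod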
I want to deploy Spinnaker components to a private cloud (PCF). I want to know whether the following procedure works or not: download spring-cloud-spinnaker-1.0.0.BUILD-SNAPSHOT.jar (mentioned at https://cloud.spring.io/spring-cloud-spinnaker) and run it (on a Linux machine), then deploy the Spinnaker components to the required space from localhost.
If this procedure works, what are the requirements for my system? If not, please describe the right way to deploy.
Yes, Spring Cloud Spinnaker is the proper way to install Spinnaker components into a PCF setup.
Each Spinnaker module is installed with custom settings, some of which include resource requirements (for example, clouddriver needs 4 GB RAM + 2 GB disk space), and Spring Cloud Spinnaker applies those for you.
Spring Cloud Spinnaker itself needs 8 GB RAM + 4 GB disk to operate properly, as cited at https://github.com/spring-cloud/spring-cloud-spinnaker#running-spring-cloud-spinnaker. When run locally, that probably won't be a problem; should you install it into PCF itself, those become critical settings.
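If you do push the installer into PCF rather than running it locally, a minimal sketch that encodes those cited requirements (the app name is hypothetical):

    # -m sets memory and -k sets disk quota, matching the cited 8 GB / 4 GB
    cf push spinnaker-installer \
        -p spring-cloud-spinnaker-1.0.0.BUILD-SNAPSHOT.jar \
        -m 8G -k 4G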
If you run into issues with the installer, you can reach out for assistance at http://join.spinnaker.io/ on the #general channel.
We are currently following the default deployment instructions for Spinnaker, which recommend m4.xlarge as the instance type.
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We made an unsuccessful attempt to deploy it to an m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are four core services that you need: gate, deck, orca, and clouddriver. You can shut the others off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images on about 8 GB of RAM, and it works. Using S3 instead of Cassandra as the storage backend also helps here.
You can play around with the settings in the baked Spinnaker image, but for internal demos and whatnot, I've been able to just spin up a VM, install Docker, and run the Docker Compose config successfully on an m4.large (sketched below).
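A rough sketch of that demo setup, assuming the experimental docker-compose config that shipped in the Spinnaker repo at the time (the directory path is an assumption; check the repo layout):

    # On a fresh VM with Docker and Docker Compose installed
    git clone https://github.com/spinnaker/spinnaker.git
    cd spinnaker/experimental/docker-compose   # path is an assumption
    # Bring up only the four core services named above
    docker-compose up -d clouddriver orca deck gate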