I'm using the Create Virtual Machine Deployment method of the Azure REST API: http://msdn.microsoft.com/en-us/library/windowsazure/jj157194.aspx
I'm trying to use an image sourced from the VM Depot, with a path such as this:
http://vmdepotwestus.blob.core.windows.net/linux-community-store/community-4-d803ca0a-5d98-4be8-8895-2a9d15ec3974-1.vhd
I am currently getting the following error:
The virtual machine image source is not valid.
I am assuming there is some process that first needs to be completed in order to make that image available to the specific API user, but I can't work out what it is.
You can't deploy directly from VM Depot. You must first copy the image to your own storage account. There are instructions on the VM Depot help page for doing this via the Azure Management portal (see http://vmdepot.msopentech.com/Help/Help.cshtml#deployingUsingAUX). It can also be done via the CLI tools, see http://www.windowsazure.com/en-us/manage/install-and-configure-cli/#use
It is a bit more involved: you have to copy the VHD from its VM Depot link into your own storage account, register it as an image, and then provision the machine from that image.
The VM Depot command-line tool that does this is written in Node, so you can easily read its source to see how it works.
I also do this in IaaS Management Studio, so you could take a look with Reflector to see how I did it.
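If you want to script the copy step yourself rather than use the portal or the CLI, here is a rough sketch using the modern azure-storage-blob Python SDK (a different tool than the ones mentioned above); the destination account, container, blob name and key are placeholders:

# Rough sketch: server-side copy of the public VM Depot VHD into your own
# storage account. Assumes azure-storage-blob >= 12; all destination names
# and the key below are placeholders.
from azure.storage.blob import BlobClient

source_url = (
    "http://vmdepotwestus.blob.core.windows.net/linux-community-store/"
    "community-4-d803ca0a-5d98-4be8-8895-2a9d15ec3974-1.vhd"
)

dest = BlobClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    container_name="vhds",
    blob_name="community-image.vhd",
    credential="<your-storage-account-key>",
)

# Kicks off an asynchronous server-side copy inside the datacenter.
copy = dest.start_copy_from_url(source_url)
print(copy["copy_status"])  # "pending" until the copy completes

Once the VHD is sitting in your own storage account, register it as an OS image and point the Create Virtual Machine Deployment request at that image.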
I am trying to create a Pool using Azure Batch. I have uploaded content to Azure Storage using file shares.
I would like my Pool to mount this Azure File Share as virtual file system (ref: https://learn.microsoft.com/en-us/azure/batch/virtual-file-mount#mount-a-virtual-file-system-on-a-pool ).
I am creating AzureFileShareConfiguration object using code:
mount_configuration = batchmodels.MountConfiguration(
    azure_file_share_configuration=batchmodels.AzureFileShareConfiguration(
        account_name="mystorage",
        azure_file_url="https://mystorage.file.core.windows.net/my-share1",
        account_key="mystorage/key==",
        relative_mount_path="S"
    )
)
Using this, I get "CMDKEY: Credentials added successfully" in fsmounts. But when I RDP to the node in the pool, the S drive appears "Disconnected".
My Azure batch package versions are:
azure-batch==8.0.0
azure-common==1.1.24
Can you please help diagnose the issue or suggest the right usage?
Thanks in Advance!
I think this is a Windows VM you are using? Just guessing from the drive letter :).
The key issue here is that RDP permissions are different from the Batch-level model under which your code runs and mounts the share.
At the Batch level, the drive is mounted under Batch's own identity: if you can see it from your start task, the mount is working. When you RDP into the node, you are logged in as a different user, and that user does not automatically see the mapped drive. If you want to see it from your RDP session, re-run the mount command under your RDP login so that user also has the key for the drive.
Having said that, try it with /persistent:Yes as mount_options.
The best test is to mount the drive and, from your start task, access the mounted directory (e.g. read S:\Whatever_file.txt, or just dir it); the result will end up in the node's stdout.txt. A sketch of such a start task follows.
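Here is a rough sketch of that test under the same azure-batch==8.0.0 SDK as in the question; the pool id, VM size and image reference are placeholder values I'm assuming for illustration, and mount_configuration is the object from the question:

# Rough sketch: pool with the file-share mount plus a start task that lists
# the mounted share so the result shows up in the node's stdout.txt.
import azure.batch.models as batchmodels

start_task = batchmodels.StartTask(
    # dir both the S: drive and the Batch mounts directory; the output lands
    # in the start task's stdout.txt on the node.
    command_line='cmd /c "dir S: & dir %AZ_BATCH_NODE_MOUNTS_DIR%"',
    wait_for_success=True,
)

pool = batchmodels.PoolAddParameter(
    id="mount-test-pool",                       # hypothetical pool id
    vm_size="standard_d2_v3",                   # placeholder VM size
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="MicrosoftWindowsServer",
            offer="WindowsServer",
            sku="2019-datacenter-core",
            version="latest",
        ),
        node_agent_sku_id="batch.node.windows amd64",
    ),
    target_dedicated_nodes=1,
    mount_configuration=[mount_configuration],  # the object from the question
    start_task=start_task,
)
# batch_client.pool.add(pool)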
The rest below is extra detail.
Try it with this mount_options value.
Also, specifically, this will help with the various SMB version requirements: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows and I think you already know this one: https://learn.microsoft.com/en-us/azure/batch/virtual-file-mount#azure-files-share
In order to use an Azure file share outside of the Azure region it is
hosted in, such as on-premises or in a different Azure region, the OS
must support SMB 3.0.
So add this to your API call and give it a try:
MountOptions = "/persistent:Yes", i.e. mount_options="/persistent:Yes" in the Python SDK.
Also: the key needs to be the storage account key, i.e. it should not literally start with "mystorage/key" :) but you may just be redacting it, so this is only a mention, FYI.
Sample code (I think the SDK you have is Python?):
mount_configuration = batchmodels.MountConfiguration(
    azure_file_share_configuration=batchmodels.AzureFileShareConfiguration(
        account_name="mystorage",
        azure_file_url="https://mystorage.file.core.windows.net/my-share1",
        account_key="mystorage/key==",
        relative_mount_path="S",
        mount_options="/persistent:Yes"
    )
)
hope this helps!
relative_mount_path: The relative path on the compute node where the file system will be mounted. All file systems are mounted relative to the Batch mounts directory, accessible via the AZ_BATCH_NODE_MOUNTS_DIR environment variable.
Azure Files is the standard Azure cloud file system offering. To learn more about how to get any of the parameters in the mount configuration code sample, see Use an Azure Files share.
Thanks for getting back to me, and sorry for the late reply; it was bed-time here. I need to connect the Cloud SQL database that I have created to my application in App Engine. I tried to follow the online tutorials, but when I apply them and then run gcloud app deploy, it returns a connection error. Please help. Also, please clarify this: when I execute the gcloud app deploy command, I assume it takes my local files to Google Cloud, where I should see the entire folder structure of my project for the project I was deploying, but I am seeing the old version of my project even though locally it has changed to the latest version. One last thing: how can I link a domain name from http://domain.google.com to my app on http://cloud.google.com? Please help, I am dying of stress; I have been trying for a while here.
Given that you haven't provided any information as to what settings you are using or what error was returned, it is impossible to know what kind of problem you are running into.
I suggest taking a look at the "Connecting to App Engine" page here. It should answer a lot of your questions around connecting from an App Engine app.
I see two questions here.
1.
I need to connect the Cloud SQL database that I have created to my application in App Engine. I tried to follow the online tutorials, but when I apply them and then run gcloud app deploy, it returns a connection error. Please help. Also, please clarify this: when I execute the gcloud app deploy command, I assume it takes my local files to Google Cloud, where I should see the entire folder structure of my project, but I am seeing the old version of my project even though locally it has changed to the latest version.
I see your problem here as being with Cloud SQL and GAE connectivity. Depending on whether you use GAE Standard or Flex, and Cloud SQL MySQL or Postgres, the steps vary, but the documentation is quite clear on this.
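For example, for GAE Standard with Cloud SQL (MySQL), the documented pattern is to connect through the /cloudsql Unix socket; here is a minimal sketch with PyMySQL, where the instance connection name, user, password and database are placeholders:

# Minimal sketch for App Engine Standard + Cloud SQL (MySQL), assuming PyMySQL
# is listed in requirements.txt; every name below is a placeholder.
import pymysql

connection = pymysql.connect(
    # App Engine exposes the instance as a Unix socket at /cloudsql/<INSTANCE_CONNECTION_NAME>.
    unix_socket="/cloudsql/my-project:us-central1:my-instance",
    user="my-db-user",
    password="my-db-password",
    db="my-database",
)

with connection.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchone())

If the instance connection name here does not match your actual Cloud SQL instance, you will get a connection error like the one you describe.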
2.
One last thing: how can I link a domain name from http://domain.google.com to my app on http://cloud.google.com? Please help, I am dying of stress; I have been trying for a while here.
This is going to be super simple: go to the GCP Cloud Console, navigate to App Engine --> Settings --> Custom Domains, click "Add a custom domain" and enter the domain name you want to link. When you click continue you will be shown the steps for verifying domain ownership and pointing your DNS at GAE.
Documented properly by GCP folks at https://cloud.google.com/appengine/docs/standard/python/mapping-custom-domains
If you are using GAE Standard or Flex, a possible result of the command gcloud app deploy is:
"An app.yaml (or appengine-web.xml) file is required to deploy this directory as an App Engine App"; check these links:
https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml
https://cloud.google.com/appengine/docs/flexible/python/writing-application-logs
Mysql and Postgres connection:
https://cloud.google.com/sql/docs/mysql/connect-app-engine
https://cloud.google.com/sql/docs/postgres/connect-app-engine
It is sometimes easiest to share your app.yaml so the app can be replicated correctly.
VMs are up in the eu-gb region, which is great.
However, Horizon does not appear to be installed there, which is fine, as I use the CLI most of the time. But the CLI rc file that I download from the Bluemix console is not correct.
It is missing the OS_TENANT_ID property, and I cannot connect to my OpenStack tenant without it. Where can I get the tenant ID from?
According to the Bluemix VM documentation
https://www.ng.bluemix.net/docs/virtualmachines/vm_index.html#vm_setup_cli
you should log in to your region and org and download the rc file once you are about to create your first VM.
The rc file downloaded by following these steps will contain all the information you need to access and manage your VMs on Bluemix using the OpenStack client.
If you have already downloaded your rc file following these steps, I suggest generating a new one and checking that it contains all the information you need (consider that this environment is still in beta, so this kind of issue can be expected).
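For reference, the OS_* variables in that rc file (including OS_TENANT_ID) are exactly what the OpenStack clients read. A rough sketch of how they feed a Python session, assuming keystoneauth1 and python-novaclient are installed and the rc file has been sourced into the environment:

# Rough sketch: build a nova client from the sourced rc file variables.
import os
from keystoneauth1.identity import v2
from keystoneauth1 import session
from novaclient import client

auth = v2.Password(
    auth_url=os.environ["OS_AUTH_URL"],
    username=os.environ["OS_USERNAME"],
    password=os.environ["OS_PASSWORD"],
    tenant_id=os.environ["OS_TENANT_ID"],  # the value the broken rc file is missing
)
nova = client.Client("2", session=session.Session(auth=auth))
print([server.name for server in nova.servers.list()])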
As in the title.
I am asking because I cannot find this information anywhere. Currently I am using a virtual machine (Linux) on my vCenter which is cloned, and then a special shell script is run on the freshly cloned machine to set up the environment, IP addresses, etc.
Maybe I could benefit from templates this way.
I think this will be helpful
https://www.robertparten.com/virtualization/vmware-difference-between-clone-and-template/
A few differences, in my opinion:
A virtual machine is a running instance, while a template is a compact copy of a VM (with baseline and factory settings) that can be stored anywhere.
You need to deploy a template to get a running VM.
You can create a copy from both a VM and a template, but a VM is cloned while a template is deployed.
Moving between different setups is easier with a template.
The rest is already covered in the link provided.
But first search on your own, and only ask if you still have doubts; that is how we all learn.
Happy Learning!
Looking at these two scenarios:
Create a template from your active VM, then deploy from the template.
Deploy from the active VM directly.
As far as I know, there will be no difference in the end result between these scenarios if you run them in the near future. You'll still have to run a script to get your IPs set up, etc.
So what's the difference?
If you mess stuff up with your active VM, change things around or whatever, you lose the ability to deploy from the (good) setup you had.
Once you make a template from your active VM, that configuration is saved as a file on the ESX host (or on the storage, I'm not 100% sure) and can be re-deployed in the future.
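If it helps to see the two scenarios in code: in pyVmomi (which I'm assuming here purely for illustration) deploying from a template and cloning a live VM go through the same Clone() call; the vCenter host, credentials, names and placement below are all placeholders.

# Rough sketch with pyVmomi: the only real difference between the scenarios
# is whether the source object is a template or a live VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

source = find_by_name(content, vim.VirtualMachine, "my-template")  # template or live VM
folder = find_by_name(content, vim.Folder, "my-vm-folder")
pool = find_by_name(content, vim.ResourcePool, "Resources")

spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(pool=pool),
    powerOn=True,
    template=False,
)
task = source.Clone(folder=folder, name="new-vm-from-template", spec=spec)
Disconnect(si)

Either way, your post-clone shell script for IPs and environment setup still has to run; the template just freezes a known-good source to deploy from.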
I have a very simple application built in MVC4. It allows users to upload a file, and the application generates an output.
The app works great locally, but when I publish it to Azure (by right-click -> Publish), I get a not very descriptive error. I've figured out that the error occurs because the code accesses a server-relative path, which is not possible in Azure. I found a way to solve that in this link, which says that I should use LocalResource rather than Server.MapPath. That makes sense to me, but so far I'm struggling with the suggested line.
LocalResource localResource = RoleEnvironment.GetLocalResource("DownloadedTemplates");
I'm not able to get it working, and I also can't get a proper error. BTW, I'm not sure how to enable error logging in Azure :(
So, after digging deeper into MSDN, I've seen that I should configure Local Storage Resources, but since I created a plain local MVC4 project, I can't find where to configure this.
I need to be able to store a temporary file in the application (hosted in Azure).
Has anyone faced this problem?
Does anybody know how to enable a Local Storage Resource in a project like this?
TIA!
Milton Rodríguez
Well, after struggling for a while, I ended up using the Windows Azure Tools.
The steps:
Add a new project
Under the Cloud category, select Windows Azure Cloud Service. Note that if you don't have this option, an option to install the needed SDK will be shown; install it first.
Name it properly :)
The New Windows Azure Cloud Service window will appear; select the role that fits your needs. In my case, I chose ASP.NET MVC4 and then removed it. Note that you can edit the name of the created role on the right.
In the Roles folder of your new project, select Add, and then Web Role Project in solution. Your existing project will appear as an option to add.
You can remove the other role in the folder, the web project created in step 4, and also the folder ending in Content (i.e. WebRole1Content). Basically, you can remove all of the created assets except the Azure service itself, and link the service to your existing project.
You're almost done. Follow this link to configure your local storage :)
Now you're done!