Get all image instances - Azure REST API

I searched the Azure REST API documentation but didn't find anything.
I created several virtual machines from an image and I want to retrieve all of these virtual machines with the Azure REST API.
I'm wondering if there is a URI I can call to get all instances of a virtual machine image?

The ListDisks operation will give you all the Azure disks present in your account.
The result set contains objects with properties like AttachedTo, which you can use to identify the VM (if any) to which a disk is attached, and SourceImageName, which you can use to identify the source image.
There is no direct API call to identify which images are in use. You have to make two calls - the first to get all disks and the second to get all images - and then match them up. (Or try filtering the images by the names you are interested in, but I am not sure whether the REST API supports filtering.)
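For illustration, here is a minimal Python sketch of that approach against the classic Service Management endpoint, assuming certificate authentication; the subscription id, certificate path and API version are placeholders you would adjust:

```python
# Hedged sketch: call the classic ListDisks operation, then group the
# attached disks (i.e. running VMs) by the image they were created from.
# Subscription id, certificate path and x-ms-version are placeholders.
import requests
import xml.etree.ElementTree as ET

SUBSCRIPTION = "<subscription-id>"
CERT = "management-cert.pem"   # management certificate for the subscription
NS = "{http://schemas.microsoft.com/windowsazure}"

resp = requests.get(
    f"https://management.core.windows.net/{SUBSCRIPTION}/services/disks",
    headers={"x-ms-version": "2014-06-01"},
    cert=CERT,
)
resp.raise_for_status()

vms_by_image = {}
for disk in ET.fromstring(resp.content).iter(f"{NS}Disk"):
    attached = disk.find(f"{NS}AttachedTo")
    image = disk.findtext(f"{NS}SourceImageName")
    if attached is not None and image:
        vms_by_image.setdefault(image, []).append(
            attached.findtext(f"{NS}RoleName"))

print(vms_by_image)  # e.g. {"my-base-image": ["vm1", "vm2"]}
```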

Related

Different backend endpoints in APIs depending on Products in Azure API Management

I'm an absolute newbie in Azure API Management and I have a question about how to manage Products and APIs.
Let's imagine this scenario:
I create 3 different Products: one representing my Development environment (DEV), a second representing my Preproduction environment (PRE) and a third representing my Production environment (PRO).
I create several APIs which I want to publish in my DEV environment and later promote to the others. So I need every API in every Product to point to a different backend service, as my backend services differ per environment.
For example:
I have 3 different versions of my backend service: ServiceDEV, ServicePRE and ServicePRO. As I develop my API, I use the one named ServiceDEV as the backend service, so my API is assigned to the Product DEV. Later I want to keep this DEV version of my API, but I also want to "deploy" that API in the Product PRE to make it act as a façade for ServicePRE, and the same would happen when promoting it to PRO.
The problem with this approach is that I need to clone the APIs and change their settings to make them point to the correct backend endpoint every time I want to promote one of them from one environment to another, thus losing all the versioning for that API, as the cloning operation only clones the current version of the API.
I don't know whether policies would meet my needs here.
I hope you see what I'm trying to say...
How can I manage this situation?
Am I focusing this subject in a wrong way?
Any idea about how to overcome this?
Thank you!
If you follow this approach then you could indeed use policies to manage different backends for different products. You could create the APIs without specifying a backend service URL at all, and later use the set-backend-service policy at product level to direct calls to the proper endpoint.
One limiting factor of this approach is that whatever changes you make to an API in the dev environment (think changing the signature of an operation, or a policy) will be immediately visible in the other environments as well, since it is a single API in all of them. If this is an issue, then consider having duplicate (triplicate) APIs - one per environment - and later move their configuration between them via the Azure API.
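As a rough sketch, such a product-level policy could be pushed through the Azure Resource Manager REST API; the resource names, api-version, token acquisition and backend URL below are placeholders, not a definitive recipe:

```python
# Hedged sketch: PUT a set-backend-service policy at product scope so every
# API in the DEV product is routed to the DEV backend. All names are
# placeholders; the bearer token would come from Azure AD.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
SERVICE = "<apim-service-name>"
PRODUCT = "dev"                 # hypothetical product id
TOKEN = "<bearer-token>"        # e.g. from `az account get-access-token`

policy_xml = """<policies>
  <inbound>
    <base />
    <set-backend-service base-url="https://servicedev.example.com" />
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
</policies>"""

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
       f"/resourceGroups/{RESOURCE_GROUP}"
       f"/providers/Microsoft.ApiManagement/service/{SERVICE}"
       f"/products/{PRODUCT}/policies/policy?api-version=2021-08-01")

resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"properties": {"format": "xml", "value": policy_xml}},
)
resp.raise_for_status()
```

Repeating this with a different base-url for the PRE and PRO products would give each product its own backend without touching the API definition itself.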

Would a Google Cloud Platform machine with many different CPUs allow me to run API requests through several different IP addresses?

I am trying to query public utility data from an API (oasis.caiso.com) with a threaded script in R. Apparently this API will refuse requests from certain IP addresses if too many are made. Therefore I need to run many different API requests in parallel across different IP addresses, and am wondering whether a machine with many CPUs on Google Cloud Platform will allow this?
I was looking at the n1-highcpu-96 option from this page: https://cloud.google.com/compute/docs/machine-types
If this is a poor solution can anyone suggest another distributed computing solution that can scale to allow dozens or even hundreds of API queries simultaneously from different IPs?
If I needed multiple IPs to perform "light" API calls I would not scale vertically (with a single 96-core machine). I would create an instance group of 50, 100 or n Debian micro or small preemptible instances, with the size depending on the kind of computation you need to perform.
You can set up a startup script, loaded via the instance metadata or baked into a custom image, that connects to the API server, does what it has to do and saves the result to a bucket. If an instance gets an "API refused" response, I would simply have it kill itself automatically, letting the instance group create a new one for me, possibly with a new IP.
This, I think, is one easy way to achieve what you want, but I guess there are multiple solutions.
I am not sure what you are trying to achieve, though, and I think you first need to check that it is legal and that the owner of the API agrees.
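To make that concrete, here is a hedged sketch of the per-instance worker such a startup script could run; the OASIS query parameters, the status codes treated as "refused" and the bucket name are illustrative assumptions, not exact values:

```python
# worker.py - hedged sketch. On an "API refused" response the instance
# deletes itself; the managed instance group recreates it, usually with a
# fresh ephemeral external IP. Query parameters and bucket are illustrative.
import subprocess
import requests

METADATA = "http://metadata.google.internal/computeMetadata/v1/instance"
MD_HEADERS = {"Metadata-Flavor": "Google"}

def self_destruct():
    """Delete this VM so the instance group replaces it."""
    name = requests.get(f"{METADATA}/name", headers=MD_HEADERS).text
    zone = requests.get(f"{METADATA}/zone",
                        headers=MD_HEADERS).text.rsplit("/", 1)[-1]
    subprocess.run(["gcloud", "compute", "instances", "delete", name,
                    "--zone", zone, "--quiet"], check=True)

resp = requests.get("http://oasis.caiso.com/oasisapi/SingleZip",
                    params={"queryname": "PRC_LMP", "version": "1"},  # illustrative
                    timeout=120)
if resp.status_code in (403, 429):       # treated here as "API refused"
    self_destruct()
else:
    with open("/tmp/result.zip", "wb") as f:
        f.write(resp.content)
    # Persist the result so it survives the instance being recycled.
    subprocess.run(["gsutil", "cp", "/tmp/result.zip",
                    "gs://my-results-bucket/"], check=True)
```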

how to build google gadget with persistent storage

I'm trying to make a Google gadget that stores some data (say, statistics of users' actions) in a persistent way (i.e. statistics accumulate over time and across multiple users). Also, I want these data to live on Google's free hosting, possibly together with the gadget itself.
Any ideas on how to do that?
I know, Google gadgets API has tools for working with remote data, but then the question is where to host it. Google Wave seemed to be an option, but it is no longer supported.
You should get a server and host it there.
You then have the best control over the code, the performance and the data itself.
There are several hosting providers out there who offer hosting for a reasonable price.
To name some: Hostgator.com (US), Hetzner.de (DE), http://swedendedicated.com (SE; never used it, just a quick search on the internet).
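As a minimal sketch of what "host it there" could look like for accumulating statistics, here is a tiny self-hosted endpoint the gadget could POST to; the file path, port and payload shape are illustrative assumptions:

```python
# Hedged sketch: a tiny stats endpoint persisting counts to a JSON file.
# The gadget would POST bodies like {"action": "click"}; names are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STORE = "stats.json"   # counts accumulate here across users and restarts

class StatsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        try:
            with open(STORE) as f:
                stats = json.load(f)
        except FileNotFoundError:
            stats = {}
        # Increment the counter for the reported action and persist it.
        stats[body["action"]] = stats.get(body["action"], 0) + 1
        with open(STORE, "w") as f:
            json.dump(stats, f)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps(stats).encode())

HTTPServer(("", 8080), StatsHandler).serve_forever()
```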

No projects listed with Maps Engine Lite API

I've set up API service account access and that seems to be authenticating and connecting OK using the provided sample code (https://developers.google.com/maps-engine/documentation/oauth/serviceaccount).
I've shared my map with the provided service account email address in the Google Maps Engine UI.
Accessing the API method https://www.googleapis.com/mapsengine/v1/projects I expected to see my map in a returned list of projects visible to the service account. Instead, only an empty projects array is returned.
Ultimately my goal is to access place name and geodata stored within a layer on the map I have created in Maps Engine Lite. Is there a step I have missed or something I have misunderstood about granting API access to a Maps Engine Lite map?
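For reference, a minimal sketch of the call being made, using a service account with the google-auth library; the scope and endpoint are as they were documented for Maps Engine at the time, and the key file name is a placeholder:

```python
# Hedged sketch: list the Maps Engine projects visible to a service account.
# The mapsengine scope/endpoint are historical; the key file is a placeholder.
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

SCOPES = ["https://www.googleapis.com/auth/mapsengine.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)

session = AuthorizedSession(creds)
resp = session.get("https://www.googleapis.com/mapsengine/v1/projects")
resp.raise_for_status()
print(resp.json())  # for a Lite map this comes back as an empty projects list
```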
Did you make any progress with your question? I got one project in the list, but only because I signed up for a free Google Maps Engine account, which allows you to create just one project.
But I was looking for a way to access "my places" maps.
It turns out certain features, such as the ones I was looking to use, are only available in the (paid-for) Maps Engine, not in "Maps Engine Lite". The API is different.
When you mention that they are only available in the Maps Engine Pro (paid-for) version, do you mean that the user who owns the maps has to upgrade, or that you, as the developer who wants to access users' maps, do?
I wouldn't mind paying (a reasonable price) for it in order to get my app working again, but I don't think most of my app's users would.
Could you check whether it worked that way?

Possible to get image from Amazon S3 but create it if it doesn't exist

I'm not sure how to word the question but here is what I am looking to do.
I have a site that uses custom map tile overlays on a google map.
The javascript calls a php file on my server that checks to see if an existing map tile exists for the given x, y, and zoom level.
If it exists, it displays that image using file_get_contents.
If it doesn't exist, it creates the new tile then displays it.
I would like to utilize Amazon S3 to store and serve the images, since there could end up being a lot of them and my server is slow. If I have my script check whether the image exists on Amazon and then display it, I am guessing I am not getting the speed benefits of Amazon's CDN. Is there a way to do this?
Or is there a way to try to pull the file from Amazon first, then set up something on Amazon to redirect to my script if the file's not there?
Maybe host the script on another of Amazon's services? The tile generation is also quite slow in some cases.
Thanks
Ideas:
1 - Use CloudFront, but point it to a cluster of tile-generation machines. This way, you can generate the tiles on demand, and any future requests are served straight from CloudFront.
2 - Use CloudFront, but back it with an S3 store of generated tiles. Turn on logging for the S3 bucket so you can detect failed requests. Consume those logs on a schedule and generate the missing tiles (a sketch of the underlying check-and-store flow follows this list). This is a cheaper way of generating tiles, but means that when a tile is missing the user gets nothing.
3 - Just pre-generate all the tiles. Throw tasks into an SQS queue, then spin up a collection of EC2 instances to generate the tiles. This will cost the most up front, but all users get a fast experience.
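For concreteness, here is a hedged boto3 sketch of that check-and-store flow (essentially the question's PHP approach moved to S3); the bucket name and the render_tile() helper are hypothetical:

```python
# Hedged sketch: serve a tile from S3, generating and storing it on a miss
# so CloudFront can serve all future requests. The bucket and render_tile()
# are hypothetical placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-map-tiles"  # hypothetical bucket fronted by CloudFront

def get_tile(x: int, y: int, zoom: int) -> bytes:
    key = f"tiles/{zoom}/{x}/{y}.png"
    try:
        # Fast path: the tile already exists in S3.
        return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchKey":
            raise
    png = render_tile(x, y, zoom)  # hypothetical (slow) tile generator
    # Store the new tile so every future request is served from S3/CloudFront.
    s3.put_object(Bucket=BUCKET, Key=key, Body=png, ContentType="image/png")
    return png
```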
I've written a blog post with a strategy for dealing with this. It's designed to make intelligent and thrifty use of CloudFront, maximize caching and deal with new versions of existing images. You may find the technique described there helpful. The example code shows how to handle different dimensions (i.e. thumbnails) of images. You could modify it to handle different zoom levels.
I need to update that post to support CloudFront custom origins, and I think that for your application you might be better off skipping S3 and using a custom origin. The advantage of a custom origin is simply that it's probably going to be easier to manage all of your images on your local filesystem compared to managing them on S3.