OpenStack: create volume via Nova API

I'm trying to build a small webapp that will manage our development environments hosted on an OpenStack infrastructure (version 2012.2.2-dev, bundled with Ubuntu 12.04), and I need to create some volumes using the API (I decided to use the OpenStack REST API). I'm able to start machines and perform some other operations (everything is built based on this: http://api.openstack.org/api-ref.html). If I send the request to create a volume as explained in the API reference, I get a 404. I tried different API versions (v1), but still no success.
Thank you in advance.

What language are you coding in? You could just use an SDK for this and skip trying to talk to the API directly. See
https://wiki.openstack.org/wiki/SDKs

In newer releases of OpenStack it is preferable to use the Cinder API rather than the Nova API.
In Folsom, Cinder exposes an API identical to Nova's volume-related API, because Folsom was the first release to split volume management out into Cinder as a standalone project. The volume API is still reachable through Nova in Folsom, but it is not the default and not the preferred way to make volume REST queries.
Check out:
http://docs.openstack.org/developer/cinder/
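
As a rough illustration, here is a minimal sketch of creating a volume against the Cinder v1 REST API with Python and the requests library. The endpoint, tenant ID and token below are placeholders; in practice you get them from the Keystone service catalog after authenticating.

    import json
    import requests

    # Placeholders: obtain these from Keystone after authenticating.
    VOLUME_ENDPOINT = "http://controller:8776/v1/<tenant_id>"  # Cinder (volume) endpoint, not the Nova one
    AUTH_TOKEN = "<keystone-token>"

    # Request body per the Cinder v1 "create volume" call.
    payload = {"volume": {"size": 1, "display_name": "dev-volume"}}

    resp = requests.post(
        VOLUME_ENDPOINT + "/volumes",
        headers={"X-Auth-Token": AUTH_TOKEN, "Content-Type": "application/json"},
        data=json.dumps(payload),
    )
    resp.raise_for_status()
    print(resp.json()["volume"]["id"])  # id of the newly created volume

If you keep getting a 404, double-check that the base URL is the volume endpoint from the service catalog (port 8776 by default) rather than the Nova compute endpoint.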

Related

Google App Engine Flex - ASP.NET Core 2.1

Need your help. I just want to locate the published files (the physical files that were published) of my .NET Core 2.1 app on the App Engine server. I used the Google plugin tool to publish my site and everything is done automatically.
I'm using simple app.yaml file:
runtime: aspnetcore
env: flex
I tried to scan some folders on the App Engine server but I could not locate my site. I also wonder whether it is because Google uses Docker (I don't have experience with Docker either) and those files are inside the Docker container. I'm not really sure.
In your project's Cloud Storage Buckets list, you will find a Bucket named like artifacts.[project-id].appspot.com. There you will find the container images that were deployed to App Engine.
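
If you prefer to inspect that bucket programmatically instead of through the console, a small sketch with the google-cloud-storage Python client could look like this (the project ID, and therefore the bucket name, are placeholders following the pattern above):

    from google.cloud import storage

    PROJECT_ID = "my-project"  # placeholder
    BUCKET = "artifacts.{}.appspot.com".format(PROJECT_ID)

    client = storage.Client(project=PROJECT_ID)

    # List the container image objects App Engine stored for your deployments.
    for blob in client.list_blobs(BUCKET):
        print(blob.name)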
However, App Engine provides On Demand server provisioning and scaling. This means that App Engine instances will be created when requests start coming in and more instances will be created if traffic increases.
Each instance will load your app's image individually.
In this type of environment you should not store any relevant information in the app's directory, because all of it will be erased when the instance is killed due to lack of traffic, and data stored on one instance will not be available on other instances. See how instances are managed for more info.
If you want your app to store data in a SQL database, have a look at Cloud SQL; alternatively, Cloud Firestore, a NoSQL database, may suit your needs. Here is a list of GCP databases.

Deploy a Vue.js app on Google Compute Engine

I created a simple Vue.js application. I am trying to deploy it on Google Compute Engine (not App Engine or any other service), but I can't find an appropriate solution. Can anyone help me?
Have a look at App Engine. It'll be the easiest way to deploy your app without thinking about infrastructure management.
If you really want/need to use Compute Engine you should decide on your own which OS to use and then install and configure all the required software manually.
Meanwhile, I should mention Cloud Run, a managed compute platform that automatically scales stateless containers running your code.
Please update your question if you need more details.
Deployed using an Express server, Nginx and pm2.

Anypoint CLI vs ARM REST services (which one is preferred for automated deployment?) for Cloud Console on-premises deployment

I want to automate the Cloud Console on-premises deployment process. I see two options to deploy the services: anypoint-cli or the REST API. Can someone please tell me what the differences between them are and which one I should choose (in terms of long-term support)?
Anypoint CLI is a command-line tool that interacts with the REST API. It might not provide access to every endpoint of the API.
Using the API directly requires that you make the API requests in some programming or scripting language.
You should choose the one that makes more sense to you and your use case. That cannot be determined here.
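
To give a feel for the REST option, here is a rough Python sketch. The login call is the commonly used Anypoint Platform authentication endpoint, but the deployment endpoint, headers and payload below are placeholders only; take the exact resource and fields for on-premises deployments from the Runtime Manager (ARM) API reference.

    import requests

    BASE = "https://anypoint.mulesoft.com"

    # Authenticate and obtain a bearer token (credentials are placeholders).
    login = requests.post(BASE + "/accounts/login",
                          json={"username": "my-user", "password": "my-password"})
    login.raise_for_status()
    token = login.json()["access_token"]

    headers = {"Authorization": "Bearer " + token}

    # Placeholder call: the real Runtime Manager resource, extra headers (org/environment IDs)
    # and body for deploying to an on-premises server must be taken from the ARM API docs.
    resp = requests.post(BASE + "/hybrid/api/v1/applications", headers=headers,
                         json={"applicationName": "my-app", "targetId": "<server-id>"})
    print(resp.status_code, resp.text)

Whichever option you pick, the CLI ultimately calls these same endpoints, so the choice is mostly about whether a shell-friendly tool or direct HTTP calls fit your automation pipeline better.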

How to deploy multiple versions of an application in production for a microservice-based application

Is it possible to have multiple versions of a service (or services) deployed in production at the same time? From my understanding, this should be a pretty common pattern for microservice/API-based projects or mobile projects. I want to know how you do it and what the common patterns in industry are for this kind of problem. It would be helpful if your answers were framed around an AWS or Kubernetes environment.
Thanks in Advance.
Is it possible to have multiple versions of service(s) deployed in production at the same time
Yes, it is possible. The idea is to keep all microservice versions that are still in use (v1, v2, ...) running in production at the same time and to bring down the versions that are no longer used. For this, you need some way to know when a version is no longer used.
AFAIK, you have two options:
For every new version you create a new endpoint (like /v2/someApiCall) that is connected to the same (now upgraded) microservice, and you gradually instruct clients to use the new endpoint; when the old endpoint is no longer used, you delete it. This is the preferred way.
For every new version you create a new microservice that shares the same persistence with the old microservice; you should avoid this solution. Netflix uses this strategy on rare occasions, when the cost of changing old consumers is too high.
You can read more on page 62 of Building Microservices by Sam Newman.
With AWS API Gateway you could deploy multiple versions of your code and switch between them from the mapping templates, as explained here. You might also want to look into stage variables.
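
As a hedged illustration (the IDs are placeholders), creating two stages of the same API with different stage variables via boto3 might look like this:

    import boto3

    apigw = boto3.client("apigateway")

    REST_API_ID = "abc123"        # placeholder API id
    DEPLOYMENT_V1 = "dep-v1-id"   # placeholder deployment ids
    DEPLOYMENT_V2 = "dep-v2-id"

    # Each stage points at a different deployment and carries its own stage variables,
    # which mapping templates and integration URIs can then reference.
    for stage, deployment in [("v1", DEPLOYMENT_V1), ("v2", DEPLOYMENT_V2)]:
        apigw.create_stage(
            restApiId=REST_API_ID,
            stageName=stage,
            deploymentId=deployment,
            variables={"apiVersion": stage},
        )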
Assuming you are exposing services over an HTTP REST API, the general standard is to always prefix your service URLs with a version.
E.g.:
/v1/account/getUserInfo
If you need to release a new version, expose it over:
/v2/account/getUserInfo
Where v2 can run on a different branch of the codebase.
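
As a simplified sketch of that layout (shown in Python/Flask purely for illustration; the same pattern applies to any framework), both versions of the endpoint can be served side by side:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # v1 keeps the original response shape for existing clients.
    @app.route("/v1/account/getUserInfo")
    def get_user_info_v1():
        return jsonify({"name": "Jane Doe"})

    # v2 may change the contract (new fields, renamed keys, ...) without breaking v1 callers.
    @app.route("/v2/account/getUserInfo")
    def get_user_info_v2():
        return jsonify({"user": {"firstName": "Jane", "lastName": "Doe"}})

    if __name__ == "__main__":
        app.run(port=8080)

In practice the two versions usually come from different branches or builds, fronted by a router or gateway that dispatches on the path prefix.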
I have blogged about this: Multi-version Service Discovery using Spring Cloud Netflix Eureka and Ribbon, focused on Spring Cloud Netflix components/libraries though.
But the idea is to deploy a new version of the artifact/binary on a new host/VPS/container and have the service register with a registry server (Eureka, Consul, ...), including metadata about the API versions it supports (v1, v2, ...). Client apps then discover which host/container serves the API version they need.
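
A rough, framework-agnostic sketch of that discovery step (the registry snapshot below is made up for illustration; Eureka or Consul expose this information through their own clients and REST APIs):

    # Hypothetical registry snapshot: each instance advertises the API versions it supports
    # in its metadata (e.g. what you would put into eureka.instance.metadata-map).
    registry = [
        {"host": "10.0.0.11", "port": 8080, "metadata": {"api-versions": "v1"}},
        {"host": "10.0.0.12", "port": 8080, "metadata": {"api-versions": "v1,v2"}},
    ]

    def instances_supporting(version):
        """Return the instances whose metadata says they serve the requested API version."""
        return [i for i in registry
                if version in i["metadata"]["api-versions"].split(",")]

    # A client that needs v2 would only load-balance across these instances.
    print(instances_supporting("v2"))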

Can Azure be inter-operable with Amazon?

I have a question about whether cloud vendors have an interoperable mechanism. For example, I am developing a WCF service and hosting it in Azure successfully. After a prolonged time using Azure, can I use the same code to deploy it on AWS? Will that be possible? Do the deployment APIs of both match? If not, what extra care is needed for hosting the same service when switching over to other cloud vendors like Salesforce.com, OpenStack, etc.?
In general, you can't just take what you develop for one Cloud platform and put it on another: they have different functionality sets and expose different APIs. However, the more low-level you make your code, the more likely it is that you'll find another vendor with a very similar API, since virtualizing infrastructure is simpler (and closer to standardized) than virtualizing a CMS application.
If you're using just IaaS, you can probably port fairly rapidly, but you have to do more work to build your application. If you're using PaaS (or SaaS!) then you're more locked in, but you get more support for developing rapidly: it's that support platform which is both the value-add and the lock-in, and you won't get one without the other.
If you're using an Azure web role to host your WCF service, then from a deployment point of view you will not have many problems with AWS. You'll simply use the facilities offered by the AWS SDK for .NET (aka Publish to AWS CloudFormation). You will certainly have to change the logging part if you've used Azure Diagnostics, and replace the other Azure services with the related AWS services. We did this multiple times in the last year and it works.
For worker roles it's not so simple: in Azure they are deployed as easily as web roles, but in AWS there is no direct deployment from Visual Studio, so you have to do some manual work using Windows Services or something else.