I have very little experience with VM provisioning. As part of my current PoC, I want to provision a VM through the API of either vCenter (VMware) or oVirt (https://www.ovirt.org/). What is the easiest way to set up the ecosystem?
Talking about oVirt, you have different methods to provision your VMs:
With Ansible, using the official oVirt roles and modules;
With Terraform, using the Terraform oVirt Provider plugin;
Writing your own code, using one of the dedicated SDKs (Python, Java, Ruby, Go) or the plain REST API without any SDK.
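If you go the SDK route, a minimal sketch with the Python SDK (ovirtsdk4) could look like the following; the engine URL, credentials, cluster, and template names are placeholders you would replace with your own:

    # Minimal sketch: create a VM from a template via the oVirt Python SDK.
    # The engine URL, credentials, cluster, and template names are placeholders.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',  # the engine's CA certificate
    )
    try:
        vms_service = connection.system_service().vms_service()
        vms_service.add(
            types.Vm(
                name='myvm',
                cluster=types.Cluster(name='Default'),
                template=types.Template(name='Blank'),
            ),
        )
    finally:
        connection.close()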
I have an OpenShift 4.7 cluster in IBM Cloud that runs many infrastructure tools (Jenkins, Gitea, JFrog, SonarQube, Wiki.js, etc.). I want to be able to log in to these tools using my OpenShift credentials. Do you have any working way to use OpenShift's integrated OAuth server to do so? Or any other idea?
I know Jenkins already has a plugin for this, but what about the rest? Is an auth proxy the best way? Most of my tools have been installed with Helm charts or Operators, and I am not sure how easy it is going to be to configure something like that.
Thank you in advance.
You can probably use the product Keycloak:
https://www.keycloak.org/gettin.../getting-started-openshift
In my opinion, Keycloak is the most suitable option here.
In addition, you can consider the IBM Cloud product App ID. You can find it in the IBM Cloud catalog: https://cloud.ibm.com/catalog/services/app-id.
Did you follow the official documentation on this?
Configuring OAuth client
Configuring the internal OAuth server
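If you do use the integrated OAuth server, each tool has to be registered as an OAuthClient and then run a standard OAuth2 authorization-code flow against the cluster's OAuth endpoints. Here is a minimal sketch of that flow from a tool's point of view, assuming the default OpenShift 4.x OAuth route naming; the client registration, secret, and callback URL below are made up:

    # Sketch of the OAuth2 authorization-code flow against OpenShift's
    # integrated OAuth server. The oauth-openshift route host and the
    # "my-tool" OAuthClient registration are assumptions, not cluster facts.
    from requests_oauthlib import OAuth2Session

    OAUTH_HOST = "https://oauth-openshift.apps.cluster.example.com"  # placeholder
    CLIENT_ID = "my-tool"        # must match an OAuthClient object you created
    CLIENT_SECRET = "change-me"  # secret from that same OAuthClient
    REDIRECT_URI = "https://my-tool.example.com/oauth/callback"

    session = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI,
                            scope=["user:info"])

    # Step 1: send the user's browser here to log in with OpenShift credentials.
    authorization_url, state = session.authorization_url(
        OAUTH_HOST + "/oauth/authorize")
    print("Visit:", authorization_url)

    # Step 2: after the redirect back, exchange the code for an access token.
    redirect_response = input("Paste the full callback URL: ")
    token = session.fetch_token(OAUTH_HOST + "/oauth/token",
                                client_secret=CLIENT_SECRET,
                                authorization_response=redirect_response)
    print("Access token:", token["access_token"])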
Is it possible to have multiple versions of service(s) deployed in production at the same time? From my understanding, this should be a pretty common pattern for microservice/API-based projects or mobile projects. I want to know how you do it and what the common patterns in industry are for this kind of problem. It would be helpful if your answers were framed around an AWS or Kubernetes environment.
Thanks in Advance.
Is it possible to have multiple versions of service(s) deployed in production at the same time
Yes, it is possible. The idea is to keep all the microservice versions that are still in use (v1, v2, ...) running in production at the same time, and to bring down the versions that are no longer used. For this, you need some way of knowing when a version is no longer used.
AFAIK, you have two options:
For every new version you make a new endpoint (like /v2/someApiCall) that is connected to the same (now upgraded) microservice, and you gradually instruct clients to use the new endpoint; when the old endpoint is no longer used, you delete it. This is the preferred way.
For every new version you make a new microservice that shares the same persistence store as the old one; you should avoid this solution. Netflix uses this strategy in rare cases where the cost of changing old consumers is too high.
You can read more on page 62 of Building Microservices by Sam Newman.
With AWS API Gateway you could deploy multiple versions of your code and switch between them from the mapping templates, as explained here. You might also want to look into stage variables.
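As a rough sketch of the stage-variable side with boto3; the REST API ID, stage name, and the backendVersion variable are invented placeholders, not API Gateway defaults:

    # Sketch: publish a deployment to a stage and set a stage variable that
    # mapping templates can read as ${stageVariables.backendVersion}.
    # The API ID and values are placeholders.
    import boto3

    apigw = boto3.client("apigateway")

    # Publish the current state of the API as a new deployment on stage "v2".
    apigw.create_deployment(restApiId="abc123", stageName="v2")

    # Attach a stage variable to that stage.
    apigw.update_stage(
        restApiId="abc123",
        stageName="v2",
        patchOperations=[
            {"op": "replace",
             "path": "/variables/backendVersion",
             "value": "v2"},
        ],
    )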
Assuming you are exposing services over an HTTP REST API, the general standard is to always prefix your service URLs with a version, e.g.:
/v1/account/getUserInfo
If you need to release a new version, expose it over:
/v2/account/getUserInfo
Where v2 can run over a different branch of the codebase.
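To make that concrete, here is a minimal sketch of one service exposing both versions side by side; Flask and the payload shapes are illustrative choices, not part of any standard:

    # Sketch: serving two API versions side by side from one service.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/v1/account/getUserInfo")
    def get_user_info_v1():
        # Old contract: flat fields. Kept alive until clients migrate.
        return jsonify({"name": "Ada Lovelace", "email": "ada@example.com"})

    @app.route("/v2/account/getUserInfo")
    def get_user_info_v2():
        # New contract: nested user object; v1 stays untouched.
        return jsonify({"user": {"name": "Ada Lovelace",
                                 "email": "ada@example.com"}})

    if __name__ == "__main__":
        app.run(port=8080)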
I have blogged about this: Multi-version Service Discovery using Spring Cloud Netflix Eureka and Ribbon, though it is focused on Spring Cloud Netflix components / libraries.
But the idea is to deploy a new version of the artifact / binary in a new host / VPS / Container and have the service register with a registry server (Eureka, Consul, ....) and include metadata about the API versions it supports (v1, v2, ...). Client apps would discover which host / container / ... serves the API version needed.
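As a toy illustration of the metadata filtering (the registry list below is a hand-rolled stand-in for a real Eureka/Consul response, and the apiVersions field name is invented):

    # Toy sketch of version-aware discovery: pick registry instances by the
    # API versions they advertise in metadata. The registry data is a
    # hard-coded stand-in; real code would query Eureka or Consul.
    REGISTRY = [
        {"host": "10.0.0.11:8080", "metadata": {"apiVersions": "v1"}},
        {"host": "10.0.0.12:8080", "metadata": {"apiVersions": "v1,v2"}},
    ]

    def instances_for(version):
        """Return hosts that advertise support for the given API version."""
        return [i["host"] for i in REGISTRY
                if version in i["metadata"]["apiVersions"].split(",")]

    print(instances_for("v2"))  # -> ['10.0.0.12:8080']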
Is there a difference between using salt-proxy over SSH and using salt-ssh directly? I'm interested because, according to the documentation, both aim to run remote commands without installing an agent on the end machine.
You can't simply run salt-ssh against a proxy-minion target, because the device behind the proxy minion may not support salt-ssh; you would have to write your own custom SSH interface to the remote system.
How to choose between salt-ssh and salt-proxy depends entirely on the type of minion system.
As stated in the SaltStack documentation: https://docs.saltstack.com/en/latest/topics/ssh/index.html and
https://docs.saltstack.com/en/latest/topics/proxyminion/index.html
For salt-ssh to be used, the remote system must have Python installed; that is one of the criteria. An example would be controlling Ubuntu from CentOS.
As stated in the salt-proxy doc,
Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.
Is it possible to deploy VMs using a CSV or something similar? I want to automate the installation of about 100 servers. The only option I have found is PowerShell. I would really like some other options, though. Thanks.
vCenter exposes an API, the documentation for which can be found here:
http://www.vmware.com/support/pubs/sdk_pubs.html
Armed with that API and a template, you should be able to do what you want to do in the language of your choice.
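For instance, a rough sketch in Python with the community pyVmomi bindings could read a CSV and clone one VM per row from a template; the vCenter host, credentials, inventory paths, and CSV column are all placeholders:

    # Sketch: clone one VM per CSV row from a vSphere template via pyVmomi.
    # Host, credentials, inventory paths, and the CSV layout are placeholders.
    import csv
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certs in prod
    si = SmartConnect(host="vcenter.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        search = content.searchIndex
        template = search.FindByInventoryPath(
            "datacenter1/vm/templates/base-template")
        cluster = search.FindByInventoryPath("datacenter1/host/cluster1")
        folder = template.parent  # deploy next to the template

        with open("servers.csv", newline="") as f:
            for row in csv.DictReader(f):  # expects a 'name' column
                spec = vim.vm.CloneSpec(
                    # Templates have no resource pool, so use the cluster's.
                    location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
                    powerOn=True,
                )
                template.Clone(folder=folder, name=row["name"], spec=spec)
    finally:
        Disconnect(si)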
Templates can be customized with a CustomizationSpec directly from the vSphere client as well, which means you can deploy directly from the template in the client.
Other options: VMware's Orchestrator or Microsoft Orchestrator with the VMware plugin.
However, is there really any reason not to use PowerShell? PowerCLI can do what you want to do, and very easily. I've deployed thousands of servers with it.
I'm trying to build a small web app that will manage our development environments on an OpenStack infrastructure (version 2012.2.2-dev, bundled with Ubuntu 12.04), and I need to create some volumes using the API (I decided to use the OpenStack REST API). I'm able to start machines and do some other operations (everything is built based on this: http://api.openstack.org/api-ref.html). If I send the request to create a volume as explained in the API reference, I get a 404. I tried different API versions (v1), but still with no success.
Thank you in advance.
What language are you coding in? You could just use an SDK for this and skip trying to talk to the API directly. See
https://wiki.openstack.org/wiki/SDKs
In newer releases of OpenStack, it is preferable to use the Cinder API rather than the Nova API.
In Folsom, Cinder uses an API identical to Nova's volume-related API set, because Folsom was the first release to separate volume management out into Cinder as a stand-alone project. While the Nova volume API remains available in Folsom, it is not the default, and it is not the preferred method for making volume REST queries.
Check out:
http://docs.openstack.org/developer/cinder/
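For reference, a volume-create call against the Cinder v1 endpoint could be sketched like this; the endpoint, tenant ID, and token are placeholders you would normally obtain from Keystone first:

    # Sketch: create a volume through the Cinder v1 REST API with requests.
    # The endpoint, tenant ID, and token are placeholders from Keystone.
    import requests

    CINDER = "http://controller:8776/v1/<tenant_id>"  # placeholder
    TOKEN = "<keystone-token>"                        # placeholder

    resp = requests.post(
        CINDER + "/volumes",
        headers={"X-Auth-Token": TOKEN,
                 "Content-Type": "application/json"},
        json={"volume": {"size": 1, "display_name": "dev-volume"}},
    )
    resp.raise_for_status()  # a 404 here usually means a wrong endpoint/version
    print(resp.json())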