How to create a VM using the vSphere Java API? - virtual-machine

I want to write some Java code to create a VM, install an ISO (or copy an existing VM setup if installing from an ISO is not possible), assign disk space, and create a login for the new VM.
I looked at the vSphere API examples at http://vijava.svn.sourceforge.net/viewvc/vijava/trunk/src/com/vmware/vim25/mo/samples/; they show how to power an installed VM on and off, but I could not figure out how to create one with the API. I have two questions:
What are the steps to create a VM using the API?
Which API or objects should be used to create a VM programmatically?
Appreciate your help.

I know I am about a year late, but when you download the SDK you will find a sample showing how to create a VM disk. Understand the code and then you can do it your own way :)
The link to the SDK.zip file:
http://communities.vmware.com/community/vmtn/developer/forums/java_toolkit
and inside the SDK, the VMDisk sample:
\SDK\vsphere-ws\java\JAXWS\samples\com\vmware\vm

You'll want to keep the VMware Web Services SDK documentation handy - unfortunately they changed formats recently, so I'm not sure how good the deep links I can give you will be. The specific method I've used is CreateVM_Task (you'll have to scroll down to find it on the Folder object). Alternatively, if you're using a resource pool, CreateChildVM_Task may be more applicable (again, scroll down to find it).
There is also a section of documentation on creating VMs that has some incomplete example code.
As far as where in the hierarchy to create the VM, that's up to you. Each host or cluster will have a vmfolder property that you can use to create VMs, or any other folder may work. Good luck!
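Since the question mentions the VI Java API, here is a minimal sketch of that flow using vijava's managed-object wrappers (the same CreateVM_Task call, exposed as Folder.createVM_Task). The vCenter URL, credentials, datacenter name "DC1", and datastore "[datastore1]" are placeholders, and the spec is deliberately bare -- a usable VM would also need disk, controller, and NIC devices added as VirtualDeviceConfigSpec entries, which the CreateVM.java sample in the vijava tree demonstrates.

import java.net.URL;
import com.vmware.vim25.VirtualMachineConfigSpec;
import com.vmware.vim25.VirtualMachineFileInfo;
import com.vmware.vim25.mo.*;

public class CreateVmSketch {
    public static void main(String[] args) throws Exception {
        // Connect to vCenter (placeholder URL and credentials; 'true' ignores the SSL cert).
        ServiceInstance si = new ServiceInstance(
                new URL("https://vcenter.example.com/sdk"), "user", "password", true);

        // Locate the datacenter and pick a resource pool to own the new VM.
        Folder rootFolder = si.getRootFolder();
        Datacenter dc = (Datacenter) new InventoryNavigator(rootFolder)
                .searchManagedEntity("Datacenter", "DC1");
        ResourcePool rp = (ResourcePool) new InventoryNavigator(dc)
                .searchManagedEntities("ResourcePool")[0];

        // Minimal config: name, CPU/memory, guest OS id, and a datastore for the VM files.
        VirtualMachineConfigSpec spec = new VirtualMachineConfigSpec();
        spec.setName("myNewVm");
        spec.setNumCPUs(1);
        spec.setMemoryMB(512L);
        spec.setGuestId("otherGuest");
        VirtualMachineFileInfo fileInfo = new VirtualMachineFileInfo();
        fileInfo.setVmPathName("[datastore1]");
        spec.setFiles(fileInfo);

        // CreateVM_Task is called on a Folder -- here the datacenter's vmFolder.
        Task task = dc.getVmFolder().createVM_Task(spec, rp, null);
        System.out.println("CreateVM_Task result: " + task.waitForTask());

        si.getServerConnection().logout();
    }
}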


Have you found a way of persisting the Odoo core modules in v14 other than with a volume? And so, is it possible to deploy Odoo on Cloud Run?

I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30€/month) + Cloud Run, but after a few minutes the Odoo interface shows me a white screen, with many log lines in the console similar to this:
GET 404 1.04 KB24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the static web files seem to be stored in the core module, this information is lost once the container is killed. As I've spent a month looking for a solution, before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than with a volume? And so, is it possible to deploy Odoo on Cloud Run?
Here I listed all the ideas that I tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store this session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated in the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. Although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate assets so that they are stored in the db. Although I did that, the problem persisted.
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimal and standard approach, and therefore the most likely to work, although it would make GitOps harder to implement; containers could still be run on the VM through Docker Compose, which would help with deploying updates. GKE Anthos seems to support GitOps and to persist volumes, but its description says it is stateless. Finally, there is deploying to a Kubernetes cluster, which uses containers and allows autoscaling, unlike Docker Compose on a VM, but it seems more expensive and harder to set up. On cost, small worker-node machines could keep the bill low during the night; on deployment difficulty, GitOps is desired, so Argo or something similar would probably have to be added. I also heard that GKE Autopilot has a good free tier and is easier to deploy to.
Thanks in advance :)
Cloud Run isn't a good solution for this. If the Werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance each time, and can therefore lose the files even in the middle of a session.
The best solution is to use a VM with a sticky-session configuration. You can use old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. It's more or less the same cost if you have only one cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. GKE itself, however, can handle stateful deployments, which is what you need for Odoo.

Create network file share from an API

I am exploring the possibilities of exposing an EMC Documentum folder, and the files/folders within, as a network file share.
The reason is so we can enable another application to read and write files to what it thinks is a standard UNC path, but really the repository is in Documentum.
That Documentum product doesn't seem to offer this, but it does expose an API.
A few thoughts here were a bespoke 'driver' for Samba, or possibly something using WebDAV, but I haven't really investigated these much yet, so both may be unviable.
Basically, how can I wrap an API up to look like a network drive?
I'll keep exploring this myself, but hopefully someone can provide some leads here too?
Update: we are using FUSE on Linux.
Documentum "folder" as you see it is not something like Windows folder. It is a database record of object with its related properties. Nothing else.
Documentum "documents" are somehow more related to Windows documents but still are only database record of objects with related properties and specific content stored somewhere in storage. Storage can be something like:
file share on Windows / Linux OS
a specialized storage solution such as Centera
a specialized cloud storage solution
So what you call a Documentum folder is not quite what you think it is. Your requirement can still be achieved in some way, that's for sure.
For example, you could integrate a Windows folder with Documentum via the Spring Integration framework (SI) on the Windows-folder side, and on the Documentum side implement listeners that hook into SI and BOF (Business Object Framework) services that process the events coming from SI. This is just one of the options.
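To make the point concrete that a Documentum folder is just an object you read through the API, here is a rough sketch, assuming the Documentum Foundation Classes (DFC) Java library, of how any wrapper (FUSE, WebDAV, an SI listener, ...) would enumerate a folder's children. The repository name, path, and credentials are placeholders.

import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.IDfLoginInfo;

public class ListFolderSketch {
    public static void main(String[] args) throws Exception {
        // Build a session manager with placeholder credentials and repository name.
        IDfClientX clientX = new DfClientX();
        IDfClient client = clientX.getLocalClient();
        IDfSessionManager sessions = client.newSessionManager();
        IDfLoginInfo login = clientX.getLoginInfo();
        login.setUser("dmadmin");
        login.setPassword("secret");
        sessions.setIdentity("myrepo", login);

        IDfSession session = sessions.getSession("myrepo");
        try {
            // The "folder" is an object; its children come back as query rows,
            // which a file-share wrapper would translate into directory entries.
            IDfFolder folder = session.getFolderByPath("/MyCabinet/MyFolder");
            IDfCollection children = folder.getContents("r_object_id, object_name");
            try {
                while (children.next()) {
                    System.out.println(children.getString("object_name"));
                }
            } finally {
                children.close();
            }
        } finally {
            sessions.release(session);
        }
    }
}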
Technically it is possible to create an interface to a Documentum repository using any standard (SMB, CIFS, WebDAV, IMAP, ...) that can represent a document.
The fun task / hard part is mapping Documentum functionality to your chosen standard.
For example: back in 2013 I wrote a basic proof-of-concept WebDAV interface to a Documentum repository. I used the Milton WebDAV Java library (http://milton.io).
With a WebDav interface, the Documentum Repository was exposed to a Windows computer as a drive using Add Network Location.
We identified that we can use FUSE on Linux.

NetApp 7-Mode simulator CIFS share creation

What are the different ways to create a CIFS share on the NetApp 7-Mode simulator? I have created a share using the command line and using NetApp OnCommand System Manager. Is there any other way to do the same thing?
Well, everything creates the share in basically the same way. System Manager uses the API; you can write your own scripts or apps against the API, or you can use Workflow Automation (WFA) to do the work, but again, that is just calling the API. Once the 7-Mode shares are set up, they can also be managed with the Windows MMC, though.
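If you do want to call the API from your own code, here is a rough sketch, assuming the NetApp Manageability SDK (NMSDK) Java bindings and a 7-Mode filer or simulator; the hostname, credentials, share name, and volume path are placeholders, and the ONTAPI version passed to NaServer may need adjusting for your Data ONTAP release.

import netapp.manage.NaElement;
import netapp.manage.NaServer;

public class CifsShareAddSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the 7-Mode filer (or simulator) over the ONTAPI interface.
        NaServer server = new NaServer("filer.example.com", 1, 15);
        server.setServerType(NaServer.SERVER_TYPE_FILER);
        server.setStyle(NaServer.STYLE_LOGIN_PASSWORD);
        server.setAdminUser("root", "password");

        // Build the cifs-share-add call -- the same operation System Manager performs.
        NaElement api = new NaElement("cifs-share-add");
        api.addNewChild("share-name", "myshare");
        api.addNewChild("path", "/vol/vol1/myshare");
        api.addNewChild("comment", "created via ONTAPI");

        server.invokeElem(api);
        System.out.println("Share created.");
    }
}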
Regarding 7-Mode systems:
It is always better to work with the CLI. Just one example:
If you create a volume on a 7-Mode system, you have no chance to define the security style at creation time; you have to change it after the volume is created, and that is often forgotten.
I only work with the CLI.

Can I create new S3 users and add IAM policies from the Linux command line?

Is there any good way of creating and managing S3 policies and users from the command line of my Raspberry Pi?
The AWS Universal Command Line Tools are newer and better supported. They rely on Python, so if you can get Python for Raspberry Pi, you should be set.
I have no experience using it myself, but I found a tool for interacting with Amazon IAM, the access control service for AWS, in a manner that might work for you:
IAM Command Line Toolkit (note: last updated September 2010)
There may be more usable stuff under the IAM Resources section.
If you are unfamiliar with IAM, the documentation is one place to start. Although, knowing the general style of AWS documentation, there may be better resources and tutorials to be found elsewhere.

How would I create a flexible EC2 Windows 2008 boot script?

If you look at the Linux ecosystem (especially the Ubuntu and Alestic EC2 images) there is a common technique where the VMs are pre-configured to look at the EC2 user-data and use it as a boot script. The nice thing about this approach is that you can write a boot script that further provisions your machine, allowing you to avoid making a new image every time your software that runs on the machine changes.
I want to do the same thing for Windows, but given that I'm a Mac and Linux guy, I'm a bit lost on where to start. My requirements are:
This must run on Windows Server 2008
A bootstrap script needs to start when the machine boots up and read the user-data by pulling down the contents of http://169.254.169.254/1.0/user-data
The bootstrap script then needs to run the contents of that file as if it were a script
The script embedded in the user-data needs to run in such a way that it has access to the desktop environment (i.e. it can launch a browser, etc.).
I'm not quite sure how services work in Windows or if I need to enable auto-login, so any advice here would be appreciated. The ultimate goal is to run a Java program that launches some custom software that in turn launches a web browser (IE, Firefox, etc) and is capable of taking screenshots.
The screenshot part is interesting, because in the past when I've tried this the only way I could get something other than a black screen was to have UltraVNC or RealVNC boot up as a service, though I don't know why that helped.
I'm looking for answers to three specific questions, as well as any general advice:
Should I be focussing on a Windows service or auto-login + bat file in the "Startup" folder?
If I use a Windows service, is there anything special that I need to do to make sure desktop access and/or screenshots are available?
Do you recommend any tools for common Linux commands, like curl or wget? Last time I used Windows I used Cygwin a lot, but is there something more appropriate to use here?
I have not tried auto-login on Windows instances in EC2, but here's the support document on how to enable it.
We bootstrap our Windows instances using a custom AMI with a custom Windows 'install' service pre-installed. The bootstrap installer reads a URL from the user-data at startup. The URL points to a ZIP file stored in S3. The installer then downloads, unzips, and executes the actual application installer -- in our case a simple CMD file.
This setup allows us to have one base AMI and easily overlay 15+ different application configurations (without having to rebuild the AMI). If you only have one application configuration this may be overkill for your situation.
The only trouble we ran into was our installer service starting too early -- changing the service startup mode to "Automatic (Delayed Start)" fixed that issue.
We wrote our boot-strap installer in Java, launched via YAJSW, because we're comfortable with it. If you just want a few simple Unix tools, most are available pre-compiled for Windows, for example wget.
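A stripped-down sketch of that bootstrap flow in Java is shown below. It assumes the user-data is simply a single URL pointing to a ZIP in S3; the metadata path and the local working directory are placeholders, and real code would add error handling plus the unzip-and-execute step.

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class BootstrapInstallerSketch {
    public static void main(String[] args) throws Exception {
        // 1. Read the instance user-data from the EC2 metadata service.
        URL userDataUrl = new URL("http://169.254.169.254/1.0/user-data");
        String installerUrl;
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(userDataUrl.openStream()))) {
            installerUrl = reader.readLine().trim(); // assumed: a single URL to a ZIP in S3
        }

        // 2. Download the installer package to a local working directory.
        Path target = Paths.get("C:/bootstrap/installer.zip");
        Files.createDirectories(target.getParent());
        try (InputStream in = new URL(installerUrl).openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }

        // 3. Unzip and run the contained installer (e.g. a CMD file) -- omitted here.
        System.out.println("Downloaded " + installerUrl + " to " + target);
    }
}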
For something completely different, you could try PsExec to configure the instance after it has booted.
You can try using RightScale's free developer account to create plain Powershell scripts and associate them with your Windows instances to run at boot time. The RightScale dashboard solves exactly the problems you are trying to solve above.
DISCLAIMER: I work for RightScale.
As for screen capture CutyCapt is a simple tool you can point at a URL and generate an image from.
UnxUtils is a great solution for those looking for Unix tools on Windows. It has the wget.exe you're looking for; however, using PowerShell to download things is not so bad either:
$wc = new-object system.net.webclient
$wc.DownloadFile("http://stackoverflow.com","test.html")
If you can write a batch file to do your setup, then you can run it at startup of the VM by doing this:
1. Run REGEDT32.EXE.
2. Modify the following value within HKEY_CURRENT_USER:
Software\Microsoft\Windows NT\CurrentVersion\Winlogon\ParseAutoexec
1 = autoexec.bat is parsed
0 = autoexec.bat is not parsed
As an answer to #3, I would say that you can do just about anything you need in a batch file, including downloading from an FTP server (but not from an HTTP server). I am really interested in this stuff, so if you have questions, try asking me.
If you use Elastic Beanstalk you can use this:
Customizing the Software on EC2 Instances Running Windows
It uses YAML formatting standards, e.g.
packages:
  msi:
    mysql: http://dev.mysql.com/get/Downloads/Connector-Net/mysql-connector-net-6.6.5.msi/from/http://cdn.mysql.com/
or
sources:
  "c:/myproject/myapp": http://s3.amazonaws.com/mybucket/myobject.zip
I know this is a little late to help with the original post, but for anyone still reading, one solution is to use the http://cloudinitnet.codeplex.com/ project. The service is easily installed using a PowerShell script and will create a local administrator account to use while running.
The goal of the project was to replace the Cloud-Init project used in Amazon Linux and Ubuntu.