I am used to creating virtual machines in the OVF format for the VMware hypervisor. In the OVF file, I can add custom XML tags which are made available as parameters to the guest when it boots. They are exposed as a device that guest software can mount in order to read the parameters.
Is it possible to do the same using KVM and QCOW2 format?
OpenStack provides a metadata service. Instances can retrieve user data (passed as the user_data parameter in the API call or via the --user_data flag in the nova boot command) by making a GET request to the metadata service.
OpenStack can also be configured to write metadata to a special configuration drive that is attached to the instance when it boots. The instance can retrieve any information that would normally be available through the metadata service by mounting this disk and reading files from it. See the config drive documentation: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/End_User_Guide/config-drive.html
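For illustration, here is a minimal sketch (assuming the instance can reach the standard metadata address 169.254.169.254) of how a guest could fetch its user data from the metadata service:

```python
import urllib.request

# OpenStack exposes user data via the EC2-compatible metadata endpoint.
# This only works from inside the instance, which can reach 169.254.169.254.
METADATA_URL = "http://169.254.169.254/latest/user-data"

with urllib.request.urlopen(METADATA_URL, timeout=5) as response:
    user_data = response.read().decode("utf-8", errors="replace")

print(user_data)
```

When the config drive is used instead, the same data appears as files on a small disk labelled config-2 that the guest can mount and read.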
Without OpenStack:
I found a blog post about creating a cloud-init config disk for non-cloud boots, but I haven't tested it.
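For reference, a rough sketch of that approach (the file contents below are illustrative; it assumes genisoimage is installed): cloud-init's NoCloud data source reads a small ISO labelled cidata that contains user-data and meta-data files, and that ISO can be attached to the KVM guest alongside the QCOW2 disk.

```python
# Sketch: build a cloud-init NoCloud seed ISO that can be attached to a KVM
# guest (e.g. as a CD-ROM next to the QCOW2 disk). Assumes genisoimage is
# installed; the instance id, hostname, and runcmd are placeholder examples.
import subprocess
from pathlib import Path

Path("meta-data").write_text(
    "instance-id: iid-local01\nlocal-hostname: myguest\n"
)
Path("user-data").write_text(
    "#cloud-config\nruncmd:\n  - echo 'hello from user-data' > /root/hello.txt\n"
)

# cloud-init's NoCloud data source looks for a volume labelled "cidata".
subprocess.run(
    ["genisoimage", "-output", "seed.iso", "-volid", "cidata",
     "-joliet", "-rock", "user-data", "meta-data"],
    check=True,
)
```

The resulting seed.iso can then be attached to the guest as a second drive (for example via virt-install or libvirt), and cloud-init inside the guest will pick up the parameters at boot.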
I have set up Azure Monitor custom log collection on my Linux VM by following the tutorial and all works fine, except that the Computer Name column in my custom table does not get populated. This means I have no easy way to distinguish between similar logs sourced from multiple VMs.
I could probably hack the hostname into the log file itself and get Azure to parse it into a field, but I'd rather not customize the log file if possible; I believe the agent should be capable of propagating this information somehow.
Is there anything that needs to be configured outside of the tutorial, or is it a current limitation of the Azure Monitor Agent?
Fixed by Microsoft in February 2023: https://learn.microsoft.com/en-us/answers/questions/951629/custom-logs-hostname-field-azure-monitoring-agent
I want to store data with GeoMesa in a data store (e.g. Redis) and visualize/publish this data with GeoServer.
I developed an interface (and the classes that implement it) in Java to store data in a Redis server. Then I installed the "GeoServer with Redis" plugin.
When I add a new vector data source, GeoServer offers me the option "Redis (GeoMesa)". I get an error when I submit the parameters of this new data source in GeoServer. I tried it both before and after storing data in Redis, with the same result.
Redis was installed from the official Docker image.
Parameters used to create the data store:
redis.url='localhost:6379'
redis.catalog='geomesa'
redis.connection.pool.size='16'
geomesa.query.threads='8'
geomesa.query.timeout=''
redis.pipeline.enabled=FALSE
redis.connection.pool.validate=TRUE
geomesa.stats.enable=TRUE
geomesa.query.audit=TRUE
geomesa.query.loose-bounding-box=FALSE
geomesa.query.caching=FALSE
geomesa.security.auths=''
geomesa.security.auths.force-empty=TRUE
GeoServer prints this output:
Error creating data store, check the parameters. Error message: Could not get a resource from the pool
Unfortunately, I don't have access to the stack trace.
Are you sure that your Redis instance is accessible on localhost:6379? Are you running Redis 5+ (GeoMesa was developed against Redis 5)?
You could try running through the Redis GeoMesa quickstart, which would eliminate any potential issues with GeoServer and should also show you a stack trace.
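As a quick sanity check (assuming the redis-py package is installed and the connection details from the parameters above), you can verify from the machine running GeoServer that Redis is actually reachable:

```python
# Quick connectivity check from the host running GeoServer.
# Assumes the redis-py package (pip install redis) and the parameters above.
import redis

client = redis.Redis(host="localhost", port=6379, socket_timeout=5)
print(client.ping())  # True if the server answers
print(client.info("server").get("redis_version"))  # GeoMesa targets Redis 5+
```

If this fails or hangs, the "Could not get a resource from the pool" message is coming from the connection itself (for example Redis running in Docker without the port published to the host) rather than from GeoServer.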
I had a similar issue. However, both my GeoServer and Redis environments are dockerized and running in a VirtualBox Ubuntu virtual machine.
First, I forwarded my Redis port to my host IP. Then I pointed the GeoServer Redis URL at my host IP address instead of 127.0.0.1 or localhost.
This solved my problem.
If using your host IP address also works for anyone, kindly let us know.
The Bluemix documentation leads a reader to believe that the only persistent storage for a virtual server is Bluemix Block Storage, and that the virtual server's own storage will not persist across restarts or failures. In practice, this doesn't seem to be the case, at least as far as restarts are concerned; we haven't suffered any virtual server outages yet.
So we want a clearer understanding of the rationale for separating the virtual server's own storage from its attached Block Storage.
Use case: I am moving our Git server and a couple of small LAMP-based assets to a Bluemix Virtual Server as we simultaneously develop new mobile apps using Cloud Foundry. In our case, we don't anticipate scaling up the work that the virtual server does any time soon. We just want a reliable new home for an existing website.
Even if you separate application files and databases out into block storage, re-provisioning the virtual server in the event of its loss is not trivial, even when the provisioning is automated with Ansible or the like. So we are not expecting to have to re-provision the non-persistent storage of a Bluemix Virtual Server regularly.
The Bluemix doc you reference is a bit misleading and is being corrected. The virtual server's storage on local disk does persist across restart, reboot, suspend/resume, and VM failure. If that were not the case, the OS image would be lost during any such event.
One of the key advantages of storing application data in a block storage volume is that the data will persist beyond the VM's lifecycle. That is, even if the VM is deleted, the block storage volume can be left intact to preserve the data. As you mentioned, block storage volumes are often used to back DB servers so that the user data is isolated, which lends itself well to providing a higher class of storage specifically for application data, backup, recovery, etc.
In use cases where VM migration is desired, the VMs can be set up to boot from a block storage volume, which makes it easier to move the VM to a different hypervisor by simply pointing it at the same block storage boot volume.
Based on your use case description you should be fine using VM local storage.
I want to make sure I'm not storing sensitive keys and credentials in source control or in Docker images. Specifically, I'd like to store my MySQL RDS application credentials and copy them in when the container/task starts. The documentation provides an example of retrieving the ecs.config file from S3, and I'd like to do something similar.
I'm using the Amazon ECS-optimized AMI with an auto scaling group that registers with my ECS cluster. I'm using the Ghost Docker image without any customization. Is there a way to configure what I'm trying to do?
You can define a volume on the host and map it into the container with read-only privileges.
Please refer to the following documentation for configuring ECS volume for an ECS task.
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
Even though the container does not have the config at build time, it will read the config as if it were available in its own file system.
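As a sketch of what that could look like (the volume name, host path, and container path below are placeholders, not anything prescribed by ECS), a task definition can declare a host volume and mount it read-only into the Ghost container, here registered via boto3:

```python
# Sketch: register a task definition that mounts a host directory containing
# the credentials read-only into the container. Names and paths are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="ghost-with-config",
    volumes=[
        {"name": "app-config", "host": {"sourcePath": "/etc/app-secrets"}},
    ],
    containerDefinitions=[
        {
            "name": "ghost",
            "image": "ghost:latest",
            "memory": 512,
            "mountPoints": [
                {
                    "sourceVolume": "app-config",
                    "containerPath": "/var/lib/ghost/config",  # adjust to where Ghost expects its config
                    "readOnly": True,
                }
            ],
        }
    ],
)
```

The host directory itself would be populated out-of-band, for example by a boot script that pulls the credentials from S3, similar to the ecs.config example you mention.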
There are many ways to secure the config on the host OS.
In past projects, I have achieved the same by disabling SSH access to the host and injecting the config at boot using cloud-init.
I'm trying to architect a system on GCP for scalable web/app servers. My initial intention was to have one disk per web server group hosting the OS, and another hosting the source code, imagery, etc. My idea was to mount the OS disk on multiple VM instances so as to have exact clones of the servers, with one place to store PHP session files (so moving between different servers would be transparent and not cause problems).
The second idea was to mount a second disk, containing the source code and media files, shared between two web servers: one configured as a CDN server and one running the main website and backend. The backend would modify/add/delete media files, and the CDN server would serve them to the browser when requested.
My problem arises from reading that a Persistent Disk can only be mounted with read/write access on a single VM instance; if it's needed on multiple instances, it can only be mounted read-only. I need one instance to have read/write access while the others (possibly many) have read-only access.
Would you be able to suggest ways or methods on how to implement such a system on the GCP, or if it's not possible at all?
Unfortunately, it's not possible.
However, you can create a Single-Node File Server and mount it as a read/write disk on the other VMs.
GCP has documentation on how to create a Single-Node File Server.
An alternative to using persistent disks (which, as you said, only allow a single read/write mount or many read-only mounts) is to use Cloud Storage, which can be mounted through FUSE (gcsfuse).
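As a rough sketch of the Cloud Storage approach (the bucket name and paths are placeholders; it assumes the google-cloud-storage client library and suitable service-account permissions on the VMs), the backend instance would write media objects and the CDN/web instances would read them:

```python
# Sketch: one writer instance uploads media, any number of reader instances
# download it. Bucket name and object paths are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-shared-media-bucket")

# Backend (read/write) instance: upload a new media file.
bucket.blob("media/banner.jpg").upload_from_filename("/tmp/banner.jpg")

# CDN / read-only instances: fetch the same object.
bucket.blob("media/banner.jpg").download_to_filename("/var/www/media/banner.jpg")
```

With gcsfuse, the same bucket can instead be mounted as a directory on each web server, so existing code can serve the files without changes.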