Changing an existing Region's type in GemFire/Geode

Once a Region is created in Geode, say as a PARTITION type, is it possible to change the type to something else, such as PARTITION_PERSISTENT?

@juanramos is correct. You really only have the ability to alter (modify) a Region based on the configuration exposed in the AttributesMutator interface (see the Javadoc).
Programmatically, this would be acquired with:
AttributesMutator<String, Object> attributesMutator = region.getAttributesMutator();
// alter the Region using the AttributesMutator, such as by adding a new CacheListener
// (the key/value types and MyCacheListener are illustrative)
attributesMutator.addCacheListener(new MyCacheListener());
In fact, this is exactly how the alter region command in Gfsh was implemented (i.e. by using a Function to distribute the alter operation, performed via the Region's AttributesMutator, across the cluster of nodes hosting the Region).
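For example, a gfsh command to register an additional listener on an existing Region might look like this (the Region and listener class names are hypothetical):
gfsh> alter region --name=/Example --cache-listener=com.example.MyCacheListener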
So, as Juan described, you can:
Export Data from the Region
Then, destroy the Region
Next, re-create the Region with the desired DataPolicy (e.g. PARTITION_PERSISTENT)
Finally, Import Data back into the re-created Region.
Of course, if you do not have any data, then you can simply destroy and recreate the Region as well.
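In gfsh, that workflow might look like the following sketch (the Region, member, and file names are illustrative):
gfsh> export data --region=/Example --file=example.gfd --member=server1
gfsh> destroy region --name=/Example
gfsh> create region --name=/Example --type=PARTITION_PERSISTENT
gfsh> import data --region=/Example --file=example.gfd --member=server1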
Spring Boot for Apache Geode makes this kind of data handling (e.g. export and import) very simple, and it can even be performed on application restart (particularly during development).

I don't think this is possible out of the box... You can, however, use the gfsh export data command to export the current Region data, destroy the Region, create a new one with the correct type, and then use the gfsh import data command to repopulate the Region from the backup.

Related

How to migrate Apache Druid data between 2 instances?

We have 2 Druid instances: one for stage and data validation, and another for production. Once data is loaded and validated on the stage instance, we need to migrate it to production. Is there a way we can migrate the data directly to the other instance instead of reloading?
Well, in theory the only thing you need is the segment metadata records and the raw segment files. If you store your metadata in (for example) MySQL, you can export the records from the druid_segments table.
The druid_segments record will also show you where the segment file is stored (see the payload column).
You should now copy the data files to the location used in production. Make sure that the payload column "points" to this correct location.
Now import the records into your production environment and you should be all set.
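For illustration, a minimal sketch of the metadata export/import, assuming a MySQL metadata store with the default database name druid (credentials omitted):
# on the stage cluster: dump the segment metadata records
mysqldump druid druid_segments > druid_segments.sql
# copy the segment files referenced in the payload column to the production deep storage, then:
# on the production cluster: load the metadata records
mysql druid < druid_segments.sql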
Before applying this in production please test this in a test environment.
Maybe this page will help you along. It contains useful information for your situation: https://support.imply.io/hc/en-us/articles/115004960053-Migrate-existing-Druid-Cluster-to-a-new-Imply-cluster

Adding auxiliary DB data during deployment

My app consists of two containers: the app itself and a database. I'm planning to wrap the app into a chart, thus paving the way for easy, reproducible deployment.
Apart from setting/reading environment variables (which Helm + Kubernetes seem to handle really well), part of the app's configuration is:
making sure the database is pre-filled with special auxiliary data (e.g. the admin user exists, the user role names required to create new users are there, etc.).
I like the idea of having readable YAML files hold the entire configuration in a human-readable format. However, at a glance it doesn't seem that Helm would help in any way with this (DB records) kind of configuration.
That being said, what is the best place to put code/configuration ensuring that the DB contains certain auxiliary records? A config YAML file? A container init script, written in Bash?
You are right: Kubernetes and Helm cannot help with preparing your pre-filled database records/schema.
You should probably have your application initialize those pre-filled data. If you don't want to put this logic into your application, you can ship an initialization script and configure an init container with Kubernetes.
Kubernetes makes sure the init container runs to completion before your application container starts, every time the Pod is (re)started. In the init container, you can execute a Bash/Python/... script that makes sure the records you want are there.
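As a minimal sketch, such an init script could look like this, assuming a PostgreSQL database; DATABASE_URL and the table/column names are hypothetical:
#!/bin/bash
# idempotently ensure the auxiliary records exist before the app starts
set -euo pipefail
psql "$DATABASE_URL" <<'SQL'
INSERT INTO roles (name) VALUES ('admin'), ('editor') ON CONFLICT (name) DO NOTHING;
INSERT INTO users (name, role) VALUES ('admin', 'admin') ON CONFLICT (name) DO NOTHING;
SQL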

Is it possible to change the region of a Google Cloud Platform project?

If I go to the Google Developer Console then I can see all my Cloud Platform projects, but not their regions.
How do I see the region of each project? And is it possible to change the region once it has been set?
Thanks for any help.
There is no such thing as a region of a GCP project.
In other words, region/location is specific to resources, and a GCP project is not permanently tied to a single region/location.
For example, you can have a project with multiple BigQuery datasets in different regions.
That same project can have many Compute Engine instances running, each one in a different location/region.
There is a default region that can be set per GCP project, but it can always be overridden when creating resources, and it is mainly used to guess the default location when none is specified in API calls.
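For example, the project-wide Compute Engine defaults can be inspected and changed with gcloud (the region name is illustrative):
# show the project's metadata, including any default region/zone keys
gcloud compute project-info describe --format="value(commonInstanceMetadata.items)"
# set a default region for subsequently created Compute Engine resources
gcloud compute project-info add-metadata --metadata=google-compute-default-region=us-central1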
Regarding the BigQuery aspect of this question:
Data locations are immutable once set (in BigQuery they are defined at the dataset level).
In order to change the location, the easiest solution is to export the data to Google Cloud Storage, delete the table, re-create it in a dataset in the correct region, and then import the data.
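A sketch of that move with the bq CLI (the bucket, dataset, and table names are illustrative):
# export the table to Cloud Storage
bq extract old_dataset.my_table gs://my-bucket/my_table-*.csv
# create a dataset in the target location and reload the data there
bq mk --dataset --location=EU new_dataset
bq load --autodetect new_dataset.my_table gs://my-bucket/my_table-*.csv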
https://cloud.google.com/appengine/docs/python/console/#server-location
Setting the server location
When you create your project, you can specify the location from which it will be served. In the new project dialog, click on the link to Show Advanced Options, and select a location from the pulldown menu:
us-central
us-east1
europe-west
If you select us-east1 your project will be served from a single region in South Carolina. The us-central and europe-west locations contain multiple regions in the United States and western Europe, respectively. Projects deployed to either us-central or europe-west may be served from any one of the regions they contain. If you want to colocate your App Engine instances with other single-region services, such as Google Compute Engine, you should select us-east1.
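To check which location an existing project's App Engine app is served from, one option is gcloud:
# prints the App Engine location for the current project, e.g. us-central
gcloud app describe --format="value(locationId)"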

Rename Google Compute Engine VM Instance

How do I rename a Google Compute Engine VM instance?
I created a new LAMP server and I'd like to rename it in the "VM Instances" dashboard.
I've tried renaming the Custom metadata, but that didn't seem to replicate to the dashboard.
I tried the solution provided by @Marius I. It works, but I lost my description, my metadata, the tags, and the permissions I had set on the old instance. I had to copy my metadata, make sure the zone for the new instance was the same as the original, and check that the pricing was the same.
I think it's best to just create a clone of your original instance; this way you don't have to manually copy/set them on the new instance.
As @Marius said, create a snapshot of your disk ( DO NOT skip this part: you may lose all your files/configuration )
Make sure you completed step 1.
Clone your instance (“Create similar” button)
Name your cloned instance the way you want.
Make sure to select the snapshot of your disk created in step 1 ( make sure you select the same type of disk as well: if your original disk was SSD, for example, you have to select SSD for the new disk too )
Make sure your IPs are set correctly
You're done :)
Another way to do this is:
snapshot the disk of the existing instance
create a new disk from that snapshot
create a new instance with that disk and give it the name you would like
It sounds time-consuming, but in reality should take 5 minutes.
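A sketch of those three steps with gcloud (the disk, snapshot, and instance names and the zone are illustrative):
gcloud compute disks snapshot old-disk --snapshot-names=snapshot-1 --zone=us-central1-a
gcloud compute disks create new-disk --source-snapshot=snapshot-1 --zone=us-central1-a
gcloud compute instances create new-name --disk=name=new-disk,boot=yes --zone=us-central1-a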
You can't! Once a VM is created, you can't change the instance name.
There's now a "native" way to do this. The feature is currently in Beta and only available with gcloud and via the API. With gcloud you can run:
$ gcloud beta compute instances set-name CURRENT_NAME --zone=ZONE --new-name=NEW_NAME
Some caveats:
You'll need to shut down the VM first
The Developer Console UI won't be aware of the rename until you do a browser refresh
See the official documentation for more details.
Apart from the hacks above, it's not possible.
Yet, it has been requested on UserVoice and has received 593 votes (as of 2018). Currently, it's the topmost "planned" item.
I got lost in the instructions, so I thought I'd include screenshots because the navigation is confusing. I hope this helps you.
Stop your instance
Click on the stopped instance's name
In VM instance details, scroll down and click on the disk
Click on Create snapshot
Give it a name like snapshot-1 (or your new instance name)
Click on the Create button
Click on the newly created snapshot
Click on Create instance
Give your instance the new name and configure the rest of the VM.
When dealing with a robust system, it's necessary to have a way to bring the system back up quickly when it goes down. This could be via custom scripts, Salt, Ansible, etc.
So, if you want to change your instance name, delete the instance, create a new one with the correct name, and run your script again :)
To answer your question directly: you cannot edit the VM instance name.
However, you can create a new VM instance from your old disk to get the instance name that you want.
Kindly see the procedure below:
Go to the Compute Engine page
Go to the Disks page
Select the disk of the VM instance that you want to snapshot
Click the three-dot menu on the same line as your disk
Select +Create snapshot (you will be taken to the Create snapshot page); kindly name your snapshot (e.g. backup)
Just click Create.
Then, once you have created a snapshot of your VM instance's disk, you may proceed to create your new instance from the snapshot, pointing to another region you may consider, such as us-central1, us-west1, or us-west2. Please see the procedure below:
Go to the Snapshots page
Select the snapshot "backup" (you should be on the Snapshot details page)
Click Create instance (choose the best name for your new VM instance)
Please select the region that best fits you (us-central1, us-west1, or us-west2), other than us-east1.
Lastly, click Create
Machine images are now in pre-GA!
This is currently the easiest way to clone an instance without losing your instance configuration; check this comparison table.
Detailed steps:
Go to Compute Engine > Virtual machines > Machine images
Click on Create machine image
Select your current instance under Source VM instance and click Create
Once the image becomes ready, go to Machine image details and click on Create instance
The form will be populated with your existing instance's configuration, and you'll be able to change it before creating the instance!
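For reference, a gcloud sketch of the same flow (the image and instance names and the zone are illustrative; machine images were on the beta track at the time):
gcloud beta compute machine-images create my-image --source-instance=old-instance --source-instance-zone=us-central1-a
gcloud beta compute instances create new-name --source-machine-image=my-image --zone=us-central1-a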
Sorry to resurrect this thread after so long, but when I searched for an answer I kept ending up in this article... :-)
The Cloud SDK now allows renaming an instance directly, provided it's stopped:
The command looks like this:
gcloud beta compute instances set-name INSTANCE_NAME --new-name=NEW_NAME [--zone=ZONE] [GCLOUD_WIDE_FLAG …]
This is not available yet in the UI.
The following worked for me:
gcloud beta compute instances set-name currentname --new-name=newname
Googler checking in. We're rolling out this feature (renaming a VM) to all users in the cloud console. Public documentation.
In the Google Cloud console:
Go to the VM instances page.
Go to VM instances
In the Name column, click the name of the VM.
Click Stop.
Click Edit.
In Basic information > Rename > VM instance name, enter a new name for the VM.
Click Save.
Click Start / Resume.
Using the gcloud Command Line Interface:
gcloud compute instances stop INSTANCE_NAME
gcloud beta compute instances set-name INSTANCE_NAME --new-name=NEW_INSTANCE_NAME
I also wanted to let you all know that we monitor these forums and use your feedback to influence our roadmap. Thank you for your engagement!
I was trying to do this in 03/2019 and saw a new option in the panel:
Click the instance link
In the top menu you will see "Create Similar"
This could work if you need the same machine without its data (it solved my case).
If you need a full copy, then you should create a snapshot and clone it.
This is now possible via the web console.

Merge two Endeca Servers (Endeca 3.1) into one. Including their current data

Let me explain in more detail:
1st: I'm running Endeca 3.1, so Endeca Server here refers to 3.0's Data Domain.
I'm required to use an Endeca Server currently present on Endeca (downloaded as a demo VM). All the info on it, including groups, attributes, and data, must be merged into our Endeca Server. (It can also be the other way around; I could merge my Endeca Server into this one.)
So far, I've tried to do the following:
1) Clone the Endeca Server
2) Use the putCollection sconfig operation to create a collection on it with the same name I have on mine.
3) Load configurations using the LoadCollection & LoadAttributes graphs from the OEID POC Template 3.1. I point to the new collection in the Configuration.xls file.
This is where I encounter an issue: the LoadAttributes graph gets a T/O (timeout) message from the server's web service. Then the config WSDL becomes inaccessible for a while. I can't get beyond this point.
I've been able to load data into the collection, but I need to load the attributes first.
Thanks in advance for your replies.
Regards
There are a few techniques.
Have you tried exporting the data domain and then importing it?
You can use the endeca-cmd tools to export to a file, and then import from that file. This would enable you to add 2 datastores into one server.
If you want to combine 2 datastores then that is a different question.
The simplest approach in 3.1, if the data collections are small: extract them as CSV (via a data table), convert to XLS, and add them via self-provisioning into separate collections within a single data store. If you are running in the VM, this is potentially the easiest approach.
This can also be done using Integrator.
You don't need to load the attributes unless you are using multi-value types. You can call the conversation web service to extract data and then load it using 'bulk-load'. I would not worry too much about creating the attributes unless this becomes essential due to their type or complexity. If you cannot call the conversation web service, then again, extract as CSV and load using Integrator.