SolrCloud update node by node after schema update - indexing

Is it possible to update nodes one by one in a SolrCloud cluster, so that after a Solr schema update you don't have to reindex all nodes at once and take search downtime?
Current SolrCloud cluster configuration:
So basically, can I:
Put one node into recovery mode
Update this node
Update schema.xml
Reindex the node
Bring this node back up as leader
Put another node into recovery mode
Update this node too and launch it
Or did I miss something?

Your question is how to update schema.xml in SolrCloud.
This answer is about changes to schema.xml that do not require reindexing.
I also assume that you created your collection with a named configuration in ZooKeeper (collection.configName).
In this case:
Upload your changed configuration folder to zookeeper.
Reload your collection
Be aware that step 2 needs the name of the collection (it does not support an alias).
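For example, using the ZooKeeper CLI script bundled with Solr and the Collections API (a sketch only: it assumes ZooKeeper on localhost:2181, a config set named myconfig, and a collection named mycollection; adjust paths and names to your setup):

server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/conf -confname myconfig
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"

The RELOAD makes every replica of the collection pick up the updated configuration without restarting the nodes.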

Related

How to load updated java class on a existing Ignite cluster?

I have an Ignite cluster of 2 or more nodes (max of 4) running in server mode.
Let's say I have an Ignite cache defined by a Java class called Employee (let's say this is version 1) loaded and used. If I update this Employee class with a new member field (version 2), how would I go about updating the loaded class with the new version (i.e., updating the cache definition)? How does Ignite handle objects (cache records) created previously based on Employee version 1 versus new cache records created with Employee version 2? If I have SQL queries using new fields as defined in version 2, are they going to fail because the Employee version 1 based objects/cache records are not compatible with SQL using the newly defined field(s) in Employee version 2?
I can delete the db folder from the working directory and reload the new class as part of restarting the Ignite service. But then I lose all previous data.
A cluster member with the updated Employee class definition will not join other nodes in the cluster that still have the original Employee version 1 class loaded. Again, I need to shut down all members in the cluster, reload the new Employee version, and restart all members.
Ignite doesn't store code versions. The latest deployed class is in use.
In order to preserve the fields, Ignite builds binary metadata for a custom type and stores it for validation. If you are going to add new fields and leave the old ones untouched, Ignite will update the metadata automatically; there is nothing to configure or change. An old record will be deserialized with the new fields set to null.
For SQL it's recommended to go with DDL to adjust the schema accordingly:
ALTER TABLE "schema".MyTable DROP COLUMN oldColumn
ALTER TABLE "schema".MyTable ADD COLUMN newColumn VARCHAR;
You can check the available metadata using the control script's --meta command (not sure if it's available in your Ignite edition, though):
control.sh --meta list
Ignite won't propagate POJO changes automatically using peerClassLoading. You should either update the JARs manually or rely on some deployment SPI, like URL deployment.
Overall, you should not remove your db folder each time you make changes to your POJOs/SQL tables. Adding new fields should be totally OK. Do not remove the old fields; it's better to mark them as deprecated.
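To make the point about old records concrete, here is a sketch (the Employee table and the department column are hypothetical names, and the cache is assumed to be SQL-enabled):

ALTER TABLE PUBLIC.Employee ADD COLUMN department VARCHAR;
-- rows written before the ALTER simply return NULL for the new column
SELECT name, department FROM PUBLIC.Employee;

Records written with the version 2 class carry the value, so queries filtering on the new field match only the new rows rather than failing on the old ones.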

Best way to duplicate a ClickHouse replica?

I want to create another replica from an existing one by copying it.
I made a snapshot in AWS, created a new server, and all my data now has a copy on the new server.
I fixed the replica macro in the config.
When I start the server, it throws "No node" in the error log for the first table that it finds, and gets stalled, repeating the same error once in a while.
<Error>: Application: Coordination::Exception: No node, path: /clickhouse/tables/0/test_pageviews/replicas/replica3/metadata: Cannot attach table `default`.`pageviews2` from metadata file . . .
I suspect this is because the node for this replica does not exist in ZooKeeper (obviously it was not created, because I did not run CREATE TABLE for this replica, as it is just a duplicate of another replica).
What is the correct way to duplicate a replica?
I mean, I would like to avoid copying the data, and make the replica pull only the data that was added from the moment in time when the snapshot was created.
Regarding "I would like to avoid copying the data": it's not possible.

Update field in redis

I'm developing an application that makes heavy requests to the database,
so my solution is to keep a cached version in Redis.
My question is how to update a specific field in a stored document,
in my case to increment NBRView by 1.
Thanks.
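One common approach (a sketch, with assumptions not from the question: the cached document is stored as a Redis hash and doc:123 is a hypothetical key) is to keep the counter as a hash field and use HINCRBY, which is atomic:

redis-cli HSET doc:123 title "some title" NBRView 0
redis-cli HINCRBY doc:123 NBRView 1

If the document is instead cached as a single serialized string (e.g. a JSON blob), there is no per-field update for plain strings; you would either rewrite the whole value or use the RedisJSON module's JSON.NUMINCRBY.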

Solr & Hbase integration using NGDATA Hbase-indexer

Data is not reflected in the Solr UI after indexing the HBase table data using the HBase indexer. I followed the steps provided in hbase-indexer.
1. Created the HBase table
2. Copied the hbase-sep jar files to the lib directory of HBase.
3. Created an indexer xml file with the index information
4. Created an indexer using the indexer xml file.
After all the above steps, I tried to search using the Solr UI but I don't see the data reflected there. Has anyone worked on this?
Steps to verify:
1. Does the HBase table have REPLICATION_SCOPE = 1 on its column family? (See the shell sketch after this list.)
2. Are you using put to load the data? The indexer picks up WALs (write-ahead logs); put writes to the WAL, whereas bulk load does not.
3. Verify the indexer mapping of HBase column qualifiers to the Solr fields.
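For step 1, replication can be checked and enabled from the HBase shell (a sketch; 'mytable' and 'cf' are hypothetical table and column family names):

hbase shell
describe 'mytable'
disable 'mytable'
alter 'mytable', {NAME => 'cf', REPLICATION_SCOPE => 1}
enable 'mytable'

Without REPLICATION_SCOPE = 1 on the column family, the replication-based (SEP) indexer never sees the edits.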

How can I add column to an existing custom table in MODX database?

I have a custom table in MODX database set up and working, thanks to this article:
http://bobsguides.com/custom-db-tables.html
and now I need to add a new column to this existing table. How can I do this the "MODX way"? Or do I have to create the component from scratch again?
You can manually add the new column to the database, then update your XML schema and map files to include the new column metadata (see the sketch below). If you have a build script, you could simply run it again after amending the schema to regenerate the map files.
I could be more specific if you paste in your existing schema and a description of the column you want to add.
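As a sketch (the table name modx_my_custom_items and the column new_field are hypothetical), the manual route is a plain ALTER on the table:

ALTER TABLE `modx_my_custom_items` ADD COLUMN `new_field` VARCHAR(100) NULL DEFAULT NULL;

plus a matching entry in the object's xPDO XML schema:

<field key="new_field" dbtype="varchar" precision="100" phptype="string" null="true" />

Both need to stay in sync: the database table is what MySQL queries, while the schema and the regenerated map files are what xPDO uses to build the objects.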
I believe the MigxDB plugin (part of the MIGX extra) sets up a utility under the manager page to do just that.
Install MIGX as instructed (you need to do an extra step to set it up, so read the instructions).
Load your modified schema in the MIGX package manager and do 'parse schema' and then 'add field'.
Make sure you have the package name and prefix specified when loading your schema. The MODX forum has a dedicated section for MIGX if you need further clarification.