How to get a MongoDB collection's shard key with PyMongo - pymongo

I need to copy a database's metadata to another database. I have already copied the indexes, but I can't get the shard key of a collection. What should I do?
Can you give me an example?
import pymongo

cnn = pymongo.MongoClient(mongo_uri)
dbadmin = cnn.admin
# enable sharding on the target database, then shard the collection
dbadmin.command('enableSharding', new_db_nm)
dbadmin.command('shardCollection', coll1, key={'key1': 'hashed'})
I want to get the key1 (the shard key) from old_db.
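Not an authoritative recipe, but a minimal sketch of one way to read the existing shard key: on a sharded cluster, the shard key of every sharded collection is recorded in the config database's collections collection, so it can be looked up there and reused when sharding the new collection. mongo_uri, old_db and new_db_nm come from the snippet above; coll_nm is a placeholder for the collection name.
import pymongo

cnn = pymongo.MongoClient(mongo_uri)  # assumes you connect through a mongos

# Each document in config.collections has the namespace as _id and the
# shard key under 'key', e.g. {'key1': 'hashed'}.
meta = cnn.config.collections.find_one({'_id': f'{old_db}.{coll_nm}'})
shard_key = meta['key']

# Reuse the same shard key on the new database.
dbadmin = cnn.admin
dbadmin.command('enableSharding', new_db_nm)
dbadmin.command('shardCollection', f'{new_db_nm}.{coll_nm}', key=shard_key)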

Related

IndexedDB composite index partial match

I can't find an answer to this anywhere.
I have an IndexedDB composite index of a group id and a time, which I use to sort.
let tmp_CREATEDTIMEindex = texts.index('GROUP_ID, CREATEDTIME');
This works great, except I need the result to reflect only the group id, not the time. How do I get a result from a match on just the group id?
To clarify, this returns one record:
let request = tmp_CREATEDTIMEindex.getAll(['someid', 'August, 25 2022 06:52:02']);
I need it to return all records for the group, something like:
let request = tmp_CREATEDTIMEindex.getAll(['someid', '*']);
You can use a key range:
let range = IDBKeyRange.bound(['someid'], ['someid\x00'], true, true);
let request = tmp_CREATEDTIMEindex.getAll(range);
['someid'] sorts before any composite key starting with 'someid'
['someid\x00'] sorts after any composite key starting with 'someid'
The true, true arguments exclude those two bound keys themselves from the results

Getting null Records while reading multiple records in Aerospike

I have a namespace test and a set users in an Aerospike database. I add four records to users through the following command on the console:
ascli put test users barberakey '{"username":"Barbera","password":"barbera","gender":"f","region":"west","company":"Audi"}'
Through the aql command, I can view these four records:
aql> select * from test.users
I know how to get the records one by one and it works fine on my side, but it is a very expensive operation for my task. I want to read multiple records (batch read) and run multiple algorithms on them. I took guidance from https://www.aerospike.com/docs/client/java/usage/kvs/batch.html and wrote the following code:
// Build four keys and batch-read them
Key[] keys = new Key[4];
for (int i = 0; i < 4; i++) {
    keys[i] = new Key("test", "users", (i + 1));
    System.out.println("Keys: " + keys[i]);
}

Record[] records = client.get(batchPolicy, keys);
System.out.println("Length of Records : " + records.length);
for (int a = 0; a < records.length; a++) {
    System.out.println("RECORDS IN ARRAY: " + records[a]);
}
But the problem is that the keys are built fine, yet the records array contains only nulls.
Output:
Reading records from Aerospike DB
Keys: test:users:1:3921e84015258aed3b93d7ef5770cd27b9bb4167
Keys: test:users:2:1effb3ce25b23f92c5371dee0ac8e6b34f5703c6
Keys: test:users:3:d17519d72e22beab2c3fa1552910ea3380c262bd
Keys: test:users:4:3f09a505c913db8ad1118e20b78c5bb8495fb0f9
Length of Records : 4
RECORDS IN ARRAY: null
RECORDS IN ARRAY: null
RECORDS IN ARRAY: null
RECORDS IN ARRAY: null
Please guide me on this.
It appears you wrote the records using string user keys such as "barberakey", but then read them back with the integer keys i + 1. Since those integer keys were never written, the records are not found.
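For illustration, a minimal sketch of the fix using the Aerospike Python client (the same idea applies to the Java client in the question): build the keys from the string user keys that were actually written, not from integers. Only 'barberakey' appears in the question; the other user keys are placeholders, and the host is a hypothetical local node.
import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

# Batch-read with the same string user keys the records were written with.
keys = [('test', 'users', k) for k in ('barberakey', 'userkey2', 'userkey3', 'userkey4')]

for key, meta, bins in client.get_many(keys):
    print(key, bins)  # bins is None when a key does not exist

client.close()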

LinkedHashMap behaviour with Redis Hashes?

I want to use the Hash data structure in Redis (Jedis client) but also want to maintain insertion order, something like LinkedHashMap in Java. I am totally new to Redis and have gone through all the data structures and commands, but somehow I am not able to think of any straightforward solution. Any help or suggestions will be appreciated.
Hashes in Redis do not maintain insertion order. You can achieve the same effect by using a Sorted Set and a counter to keep track of order. Here is a simple example (in Ruby, sorry):
items = {foo: "bar", yin: "yang", some_key: "some_value"}
items.each do |key, value|
  count = redis.incr :my_hash_counter
  redis.hset :my_hash, key, value
  redis.zadd :my_hash_order, count, key
end
Retrieving the values in order would look something like this:
ordered_keys = redis.zrange :my_hash_order, 0, -1
ordered_hash = Hash[
  ordered_keys.map { |key| [key, redis.hget(:my_hash, key)] }
]
# => {"foo"=>"bar", "yin"=>"yang", "some_key"=>"some_value"}
There is no need to use a Sorted Set or a counter. Just use a List (https://redis.io/commands#list), because it keeps the insertion order.
HSET my_hash foo bar
RPUSH my_ordered_keys foo
HSET my_hash yin yang
RPUSH my_ordered_keys yin
HSET my_hash some_key some_value
RPUSH my_ordered_keys some_key
LRANGE my_ordered_keys 0 10
1) "foo"
2) "yin"
3) "some_key"

How to filter Avro records using a list from another file in Pig?

I have a file "fileA" that is an Avro file with the following records:
{itemid:"Carrot"}
{itemid:"Lettuce"}
...
I have another file "fileB" that is an Avro file whose records all follow the same schema:
{item: "Carrot", cost: $2, ...other fields..}
{item: "Lettuce", cost: $2, ...other fields..}
{item: "Rice", cost: $2, ...other fields..}
...
How can I use Pig to filter the data so that all the relevant records from file "B" are stored in a new output file?
I tried performing the following:
A = LOAD 'fileA' USING AvroStorage();
B = LOAD 'fileB' USING AvroStorage();
C = JOIN A BY itemid, B BY item;
STORE C INTO 'outputpath' USING AvroStorage();
I am getting the error "Pig Schema contains a name that is not allowed in Avro".
I want to avoid having to specify the complete schema of "B" inside AvroStorage(), or any fields of A, since I only want to use A to filter down the records of B for storage without adding to or changing B's output schema. Is there a way to do this?

Ways to select values by a list of keys?

For example, I have the key structure entity:product:[id], where id is an integer [0-n].
So I can fetch these keys with the pattern entity:product:*, but I don't know how much load this query puts on the Redis server.
Another solution is to create one list key that stores the ids of the entity:product keys.
RPUSH entity:products:ids 1
RPUSH entity:products:ids 2
RPUSH entity:products:ids 3
RPUSH entity:products:ids 4
And then (pseudo-code):
entityProducts = redis.LRANGE('entity:products:ids', 0, -1)
foreach (entityProducts as id)
{
    redis.GET('entity:product:' + id)
}
Which is the better way? Which will be faster and put less load on the Redis server?
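For concreteness, a runnable sketch of the second approach with redis-py (key names follow the question; the connection is a hypothetical local instance). Note that a single MGET can replace the per-id GET loop, so the read costs two round trips regardless of the number of ids.
import redis

r = redis.Redis(decode_responses=True)  # hypothetical local instance

# Keep the ids of the entity:product:[id] keys in a list.
for pid in (1, 2, 3, 4):
    r.rpush("entity:products:ids", pid)

# Fetch the ids, then fetch the products, as in the pseudo-code above.
ids = r.lrange("entity:products:ids", 0, -1)
products = [r.get("entity:product:" + pid) for pid in ids]

# Equivalent, but in a single round trip:
products = r.mget(["entity:product:" + pid for pid in ids])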