I need to get a list of all streams (keys) in a database but I can't find a command for it.
I've already tried going over all keys and checking their type, but it is too slow/expensive.
I'd like to do something like XSCAN and get a list of keys like: ["stream1", "stream2"]
As of Redis 6.0, you can use the TYPE option to ask SCAN to return only keys of a given type.
SCAN 0 TYPE stream
https://redis.io/commands/scan
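For example, with redis-py (recent versions expose the TYPE option through the _type argument of scan_iter; the connection settings below are just placeholders):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# scan_iter drives the SCAN cursor for you; _type maps to the TYPE option.
stream_keys = list(r.scan_iter(_type="stream"))
print(stream_keys)  # e.g. ['stream1', 'stream2']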
There's no such command, just as there's no way to list the keys of other data structures, e.g. LIST or SET.
Instead, you can maintain an extra SET that records the keys of the streams you create; then you can read that SET to get the list of streams.
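A minimal sketch of that bookkeeping with redis-py (the set name 'streams:index' and the helper below are just examples, not part of any API):

import redis

r = redis.Redis(decode_responses=True)

def xadd_tracked(stream, fields):
    # Record the stream's key in a bookkeeping SET alongside the XADD.
    pipe = r.pipeline()
    pipe.xadd(stream, fields)
    pipe.sadd("streams:index", stream)
    pipe.execute()

xadd_tracked("stream1", {"event": "created"})
xadd_tracked("stream2", {"event": "created"})
print(r.smembers("streams:index"))  # {'stream1', 'stream2'}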
If you can use a prefix in the stream names, e.g. 'MyStream:1', 'MyStream:2',
then you can use the regular SCAN command with a MATCH pattern such as MyStream:*
EDIT:
To address the OP's concern about not having to use a prefix and using the SCAN command as-is, adding this from the comments:
You can avoid using a prefix by using the namespacing capability provided by Redis: assign one of its 'databases' (0-15 by default) to your streams. Say you use database 5 for streams; then the SCAN command in database 5 should return only the keys in it. redis.io/commands/select
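For example, assuming you reserve database 5 for streams only, a redis-py connection opened with db=5 will SCAN just that keyspace:

import redis

# Connection bound to database 5, used exclusively for streams (by convention).
streams_db = redis.Redis(db=5, decode_responses=True)

streams_db.xadd("stream1", {"event": "created"})
streams_db.xadd("stream2", {"event": "created"})

# SCAN in database 5 only sees the keys stored there, i.e. the streams.
print(list(streams_db.scan_iter()))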
I have a few documents stored in Redis using ReJSON:
data1:{"a":"abs","b":{"c":123}}
data2:{"a":"sss","b":{"c":633}}
data3:{"a":"abs","b":{"c":633}}
I would like to extract all the documents that have "a" == "abs" using json.get in Python, but everywhere I read that element search is not possible. Is there any alternative way?
For RedisJSON >= 2.0 you can use RediSearch >= 2.2 to create secondary indices on specific fields in RedisJSON documents, which can then be queried.
https://oss.redis.com/redisjson/indexing_JSON/
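A minimal sketch with redis-py against the documents from the question (the index name idx:data and the TAG mapping for $.a are assumptions; it requires the RediSearch and RedisJSON modules to be loaded):

import redis
from redis.commands.search.field import TagField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(decode_responses=True)

# Index JSON documents whose keys start with "data", mapping $.a to a TAG field.
r.ft("idx:data").create_index(
    (TagField("$.a", as_name="a"),),
    definition=IndexDefinition(prefix=["data"], index_type=IndexType.JSON),
)

r.json().set("data1", "$", {"a": "abs", "b": {"c": 123}})
r.json().set("data2", "$", {"a": "sss", "b": {"c": 633}})
r.json().set("data3", "$", {"a": "abs", "b": {"c": 633}})

# Return every document where a == "abs".
result = r.ft("idx:data").search(Query("@a:{abs}"))
print([doc.id for doc in result.docs])  # ['data1', 'data3']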
I've got a hash map stored in Redis; the keys have the form id(Seperator)name(Seperator)othername.
How can I get values by pattern, using two potential patterns?
For example I have in the DB the keys:
1(Seperator)name1(Seperator)othername1
2(Seperator)name2(Seperator)othername2
The pattern to get by a single name would be: (Seperator)name1(Seperator)
I want to get by name using multiple names at once, like: (Seperator)name1|name2(Seperator)
but that does not work with Redis glob patterns.
I checked it, and it is not possible to do so.
Glob patterns only allow something like [ab][cde]*, which matches 'a' or 'b' followed by 'c', 'd', or 'e'; alternation cannot be applied to a whole substring such as name1|name2.
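Since a glob pattern can't express name1-or-name2, one client-side workaround is to scan the hash once per name and merge the results. A sketch with redis-py, keeping the (Seperator) placeholder from the question (the hash key 'myhash' is hypothetical):

import redis

r = redis.Redis(decode_responses=True)

def get_by_names(hash_key, names, sep="(Seperator)"):
    # One HSCAN per name; the union is built client-side because Redis glob
    # patterns cannot alternate over whole substrings.
    matches = {}
    for name in names:
        pattern = f"*{sep}{name}{sep}*"
        for field, value in r.hscan_iter(hash_key, match=pattern):
            matches[field] = value
    return matches

print(get_by_names("myhash", ["name1", "name2"]))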
I am using the excellent RediSQL, a module for Redis, to get a powerful caching solution.
When I send a command to Redis that interacts with the SQLite db in the background, like this:
REDISQL.EXEC db "SELECT * FROM jobcache"
I get back a result where there is a type for the integer column, but not for the string column, and no column names are provided.
Is there a way to always get the column names and declared data types? I need this because I have to convert the results back to a more standard SQL result format.
Unfortunately, at the moment this is not possible with the EXEC command.
You can use the QUERY.INTO command instead (see the command reference).
QUERY.INTO adds the result of your query to a stream, including the column names and the values for each row. You can then consume the stream in whichever way you prefer.
When doing queries (reads) against RediSQL, it is good practice to use the .QUERY family of commands; this avoids useless replication of data if you are in a cluster setup.
Moreover, the .QUERY commands can also be run against replicas of the main Redis instance, while the .EXEC commands can only be used against the primary instance.
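Roughly, with redis-py's generic execute_command (the stream name 'jobcache:results' is a placeholder, and the exact argument order should be double-checked against the QUERY.INTO reference above):

import redis

r = redis.Redis(decode_responses=True)

# Ask RediSQL to write the query result into a stream instead of returning it.
r.execute_command("REDISQL.QUERY.INTO", "jobcache:results", "db",
                  "SELECT * FROM jobcache")

# Each stream entry carries the column names and typed values for one row.
for entry_id, fields in r.xrange("jobcache:results"):
    print(entry_id, fields)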
I'd like to set the prefix based on some of the data coming from event hub.
My data is something like:
{"id":"1234",...}
I'd like to write a blob prefix that is something like:
foo/{id}/guid....
Ultimately I'd like to have one blob for each id. This will help how it gets consumed downstream by a couple of things.
What I don't see is a way to create prefixes that aren't related to date and time. In theory I can write another job to pull from blobs and break it up after the stream analytics step. However, it feels like SA should allow me to break it up immediately.
Any ideas?
{date}, {time} and {partition} are the only tokens supported in the blob output prefix; {partition} is a number.
Using a column value in blob prefix is currently not supported.
If you have a limited number of such {id}s, you could work around this by writing multiple "select --" statements with different filters, each writing to a different output, and hardcoding the prefix in that output. Otherwise it is not possible with ASA alone.
It should be noted that you can now actually do this. I'm not sure when it was implemented, but you can now use a single property from your message as a custom partition key, and the syntax is exactly what the OP asked for: foo/{id}/something/else
More details are documented here: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-custom-path-patterns-blob-storage-output
Key points:
Only one custom property allowed
Must be a direct reference to an existing message property (i.e. no concatenations like {prop1+prop2})
If the custom property results in too many partitions (more than 8,000), then an arbitrary number of blobs may be created for the same partition
I'm completely new to HBase, and I'm used to RDBMS databases, where I can use the WHERE clause to filter records.
So, is there something similar in the Java API or REST API exposed by HBase to filter records by a column qualifier?
Yes, it's possible.
If you want to get only certain column qualifiers, you should use the addColumn(byte[] family, byte[] qualifier) method of your Get or Scan instances. This is the most efficient way to query the qualifiers you need, because only the HBase Stores representing the requested columns have to be accessed. Example of usage:
Get get = new Get(Bytes.toBytes("rowKey"));
get.addColumn(Bytes.toBytes("columnFamily"), Bytes.toBytes("Qual"));
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("columnFamily"), Bytes.toBytes("Qual1"));
scan.addColumn(Bytes.toBytes("columnFamily"), Bytes.toBytes("Qual2"));
If you need a more sophisticated tool to filter your qualifiers, you can use the QualifierFilter class from the Java API. Here is an example of how to query all columns with a certain qualifier:
Filter filter = new QualifierFilter(CompareFilter.CompareOp.EQUAL,
new BinaryComparator(Bytes.toBytes("columnQual")));
Get get = new Get(Bytes.toBytes("rowKey"));
get.setFilter(filter);
Scan scan = new Scan();
scan.setFilter(filter);
You can also read about other HBase filters and how to combine them in the official HBase documentation.