Evaluate column value into rows - SQL

I have a column whose value is a JSON array. For example:
[{"att1": "1", "att2": "2"}, {"att1": "3", "att2": "4"}, {"att1": "5", "att2": "6"}]
What I would like is to provide a view where each element of the JSON array is transformed into a row and the attributes of each JSON object become columns. Keep in mind that the JSON array doesn't have a fixed size.
Any ideas on how I can achieve this?

A stored procedure lexer to run against the string? Anything else, like trying a variable in the SQL or using regexp, will be tricky, I imagine.
If you need it for client-side viewing only, can you use a JSON decode library (json_decode() if you are on PHP) and then build markup from that?
But if you're going to use it for any DB work at all, I reckon it shouldn't be stored as JSON.
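For anyone on a modern engine: MySQL 8.0+, to pick one example (the question doesn't name the RDBMS), can do this natively with JSON_TABLE. A minimal sketch, assuming a hypothetical table t holding the array in a column named attrs:
SELECT jt.att1, jt.att2
FROM t,
     JSON_TABLE(
         t.attrs,              -- the column holding the JSON array
         '$[*]'                -- one row per array element
         COLUMNS (
             att1 VARCHAR(16) PATH '$.att1',
             att2 VARCHAR(16) PATH '$.att2'
         )
     ) AS jt;
Because the path '$[*]' walks the whole array, this handles arrays of any length, which covers the no-fixed-size requirement.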

Related

How to do case insensitive search on dictionary key in CosmosDB?

I can't figure out a way to do a case-insensitive search on dictionary keys in CosmosDB. My objects look like this:
...
"Codes": {
"CodeSystem1": [
"A1", "A2"
],
"CodeSystem2": [
"x1","x2"
]
},
...
Codes is a Dictionary<string, List<string>>
My query looks like this:
SELECT * FROM c WHERE ARRAY_CONTAINS(c.Codes["CodeSystem2"], 'x1')
However, I'd like to do a LOWER() on both the dictionary key and value, but it doesn't work like that.
SELECT * FROM c WHERE ARRAY_CONTAINS(c.Codes[LOWER("CodeSystem2")], LOWER('x1'))
Any ideas? I can't change the structure of the objects, and would rather not do the filtering in my .NET code.
LOWER/UPPER will not work with array elements the way you would want. If you have something like this:
"CodeSystem4": [
"Z1"
],
"CodeSystem5": "Z3"
We can use LOWER() with the element CodeSystem5 as below:
SELECT * FROM c WHERE LOWER(c.Codes["CodeSystem5"]) = LOWER('Z3')
But we cannot do the same with 'CodeSystem4' and ARRAY_CONTAINS; it will not return any results.
Also as per the below article, "The LOWER system function does not utilize the index. If you plan to do frequent case insensitive comparisons, the LOWER system function may consume a significant amount of RU's. If this is the case, instead of using the LOWER system function to normalize data each time for comparisons, you can normalize the casing upon insertion."
https://learn.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-lower
One way is to add another searchable array in lower case so the query can match against it. Otherwise we can filter on the client side through the SDK.
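A sketch of that first approach, assuming a hypothetical CodesSearch property whose keys and values are lower-cased copies written at insertion time:
"CodesSearch": {
  "codesystem2": [ "x1", "x2" ]
}
SELECT * FROM c WHERE ARRAY_CONTAINS(c.CodesSearch["codesystem2"], "x1")
Since everything is normalized once at write time, the comparison is exact and avoids the per-query LOWER() RU cost quoted above.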

Extract specific key from array of jsons in Amazon Redshift

Background
I am working in an Amazon Redshift database using SQL. I have a table, and one of the columns, called attributes, contains data that looks like this:
[{"name": "Color", "value": "Beige"},{"name":"Size", "value":Small"}]
or
[{"name": "Size", "value": "Small"},{"name": "Color", "value": "Blue"},{"name": "Material", "value": "Cotton"}]
From what I understand, the above is a series of path elements in a JSON string.
Issue
I am looking to extract the color value from each JSON string. I am unsure how to proceed. I know that if the color were always in the same location I could use the index to indicate where to extract from, but that is not the case here.
What I tried
select json_extract_array_element_text(attributes, 1) as color_value, json_extract_path_text(color_value, 'value') as color from my_table
This query works for some rows but not all, as the location of the color value differs.
I would appreciate any help here as I am very new to SQL and have only done basic querying. I have been using the following page as a reference.
First off, your data is in array format (between [ ]), not object format (between { }). The page you mention describes a function for extracting data from JSON objects, not arrays. Array format also presents challenges, as you need to know the numeric position of the element you wish to extract.
Based on your example data it seems like objects are the way to go. If so, you will want to reformat your data to be more like:
{"Color": "Beige", "Size": "Small"}
and
{"Size": "Small", "Color": "Blue", "Material": "Cotton"}
This conversion only works if the "name" values are unique in your data.
With this, the function you selected - JSON_EXTRACT_PATH_TEXT() - will pull the values you want from the data.
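For example, against the reformatted object data above (table and column names assumed):
SELECT json_extract_path_text(attributes, 'Color') AS color FROM my_table;
This would return 'Beige' for the first sample and 'Blue' for the second.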
Now, changing the data may not be an option, and dealing with these arrays will make things harder and less performant. To handle them you will need to expand the arrays by cross joining with a set of numbers that contains all positions up to the maximum length of your arrays. For example, for the samples you gave, you would cross join with the values 0, 1, 2 so that your 3-element array can be fully extracted. You can then filter to only those rows that have a "name" of "Color", as sketched below.
The function you will need for extracting elements from an array is JSON_EXTRACT_ARRAY_ELEMENT_TEXT() and since you have objects stored in the array you will need to run JSON_EXTRACT_PATH_TEXT() on the results.
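A sketch of the whole approach, assuming a hypothetical table my_table with the array in attributes, and arrays no longer than 10 elements:
-- seq supplies candidate array positions 0..9; extend it for longer arrays
WITH seq AS (
    SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL
    SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL
    SELECT 8 UNION ALL SELECT 9
)
SELECT json_extract_path_text(
           json_extract_array_element_text(t.attributes, seq.n, true),
           'value', true) AS color
FROM my_table t
CROSS JOIN seq
WHERE json_extract_path_text(
          json_extract_array_element_text(t.attributes, seq.n, true),
          'name', true) = 'Color';
The null_if_invalid flag (the trailing true) keeps positions past the end of shorter arrays from raising errors; those rows simply fail the WHERE filter.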

How to get the equivalent of combining [contains] and [in] operators in the same query?

So I have a field that's a multi-choice on the Directus back end; when the JSON comes out of the API it's a one-dimensional array, like so:
"field_name": [
"",
"option 6",
"option 11",
""
]
(btw I have no idea why all these fields produce those blank values, but that's a matter for another day)
I am trying to make an interface on the front end where you can select one or more of these values, and the result will come back if ANY of them are found for that record. Think of it like a tag list: if the item has just one of the values, it should be returned.
I can use the [contains] operator to find out if the field has one of the values I'm looking for, but I can only pass a single value, whereas I need all records that have either optionX OR optionY OR optionZ. I would basically need a combination of [contains] and [in] to achieve what I'm trying to do. Is there a way to do this?
I've also tried setting the [logical] operator to OR, but then that screws up the other filters that need to be included as AND (or I'm doing something wrong). Not to mention the query gets completely unruly.
Help?
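For what it's worth, if this is Directus v9 or later, the JSON filter syntax supports nested logical groups, which keeps the OR scoped to this one field while the other filters stay ANDed. A hedged sketch (the status condition is a hypothetical stand-in for your other AND filters, and whether _contains matches elements of a multi-choice list may depend on how the field is stored):
GET /items/my_collection?filter={"_and": [{"status": {"_eq": "published"}}, {"_or": [{"field_name": {"_contains": "option 6"}}, {"field_name": {"_contains": "option 11"}}]}]}
Each additional selected value becomes one more clause inside the _or array, so the other filters are untouched.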

Cloudant - Lucene range search using numbers stored as text

I have a number of documents in Cloudant that have an ID field of type string. The ID can be a simple string, like "aaa", "bbb", or a number stored as text, e.g. "111", "222", etc. I need to be able to full-text search on the above field, but I encountered some problems.
Assuming that I have two documents, having ID="aaa" and ID="111", then searching with any of these queries:
ID:aaa
ID:"aaa"
ID:[aaa TO zzz]
ID:["aaa" TO "zzz"]
all return the first document, as expected.
ID:111
returns nothing, but
ID:"111"
returns the second document, so at least there is a way to retrieve it.
Unfortunately, when searching for a range:
ID:[111 TO 999]
ID:["111" TO "999"]
I get no results, and I have no idea what to do to get around this problem. Is there any special syntax for such a case?
UPDATE:
Index function:
function (doc) {
  if (!doc.ID) return;
  // index the raw ID string (no analysis) and store it in the search result
  index("ID", doc.ID, { index: 'not_analyzed_no_norms', store: true });
}
Changing the index to analyzed doesn't help. The analyzer itself is keyword, but changing it to standard doesn't help either.
UPDATE 2
Just to add some more context, because I think I missed one key point. The field I'm indexing will be searched using ranges, and both the min and max values can be provided by the user. So it is possible that one of them will be a number stored as a string, while the other will be standard non-numeric text. For example: search for all documents where ID >= "11" and ID <= "foo".
Assuming that the database contains documents with IDs "1", "5", "alpha", "beta", "gamma", this query should return "5", "alpha", "beta". Please note that "5" should actually be returned, because the string "5" is greater than the string "11".
Our team just came up with a workaround. We managed to get proper results by appending an arbitrary character, e.g. 'a', to the upper range value, and by introducing an additional search term to exclude documents having an ID between the upper range value and the upper range value + 'a'.
When searching for a range
ID:[X TO Y]
actual query would be
(ID:[X TO Ya] AND -ID:{Y TO Ya])
For example, to find documents having an ID between 23 and 758, we execute
(ID:[23 TO 758a] AND -ID:{758 TO 758a]).
First of all, I would suggest using the keyword analyzer, so you can control tokenization during both indexing and search.
"analyzer": "keyword",
"index": "function(doc){\n if(!doc.ID) return;\n index(\"ID\", doc.ID, {store:true });\n}
To retrieve your document with _id "111", use the following range query:
curl -X GET "http://.../facetrangetest/_design/ddoc/_search/f?q=ID:\[111%20TO%A\]"
If you use the query q=ID:\[111%20TO%20999\], Cloudant search, seeing numbers on both sides of the range, will interpret it as a NumericRangeQuery; and since your ID "111" is a string, it will not be part of the results returned. Including a string in the query, [111%20TO%20A], makes Cloudant interpret it as a range query on strings.
You can get both docs returned like this:
q=ID:["111" TO "CCC"]
Here's a working live example:
https://rajsingh.cloudant.com/facetrangetest/_design/ddoc/_search/f?q=ID:[%22111%22%20TO%20%22CCC%22]
I found something quirky. It seems that range queries on strings only work if at least one of the range values is non-numeric. Querying on ID:["111" TO "555"] doesn't return anything either, so maybe this is resolving to a numeric query somehow? Could be a bug.
This could also be achieved using regular expressions in queries. Something like this:
curl -X POST "https://.../facetrangetest/_design/ddoc/_search/f" -d '{"q":"ID:/<23-758>/"}' | jq .
This regular expression retrieves all documents with an ID field from 23 to 758. Slashes (/ /) are used to enclose the regular expression; the interval is enclosed inside <>.

MongoDB or CouchDB or something else?

I know this is another question on this topic, but I am a complete beginner in the NoSQL world, so I would love some advice. People at SO told me MySQL might be a bad idea for this dataset, so I'm asking here. I have lots of data in the following format:
TYPE 1
ID1: String String String ...
ID2: String String String ...
ID3: String String String ...
ID4: String String String ...
which I am hoping to convert into something like this:
TYPE 2
ID1: String
ID1: String
ID1: String
ID1: String
ID2: String
ID2: String
This is the most inefficient way, but I need to be able to search by both the key and the value. For instance, my queries would look like this:
I might need to know what all strings a given ID contains and then intersect the list with another list obtained for a different ID.
I might need to know what all IDs contain a given string
I would love to achieve this without transforming Type 1 into Type 2 because of the sheer space requirements, but would like to know if either MongoDB or CouchDB or something else (someone suggested NoSQL, so I started Googling and found these two are very popular) would help me out in this situation. I have a 14-node cluster I can leverage, but would love some advice on which one is the right database for this use case. Any suggestions?
A few extra things:
The input will mostly be static. I will create new data but will not modify any of the existing data.
The ID is 40 bytes in length, whereas the strings are about 20 bytes each.
MongoDB will let you store this data efficiently in Type 1. Depending on your use, it will look like one of these (data is in JSON):
Array of Strings
{ "_id" : 1, "strings" : ["a", "b", "c", "d", "e"] }
Set of KV Strings
{ "_id" : 1, "s1" : "a", "s2" : "b", "s3" : "c", "s4" : "d", "s5" : "e" }
Based on your queries, I would probably use the Array of Strings method. Here's why:
I might need to know what all strings a given ID contains and then intersect the list with another list obtained for a different ID.
This is easy: you get one key-value look-up for the ID. In code, it would look something like this:
db.my_collection.find({ "_id" : 1});
I might need to know what all IDs contain a given string
Similarly easy:
db.my_collection.find({ "strings" : "my_string" })
Yeah, it's that easy. I know that "strings" is technically an array, but MongoDB will recognize the item as an array and loop through it to find the value. Docs for this are here.
As a bonus, you can index the "strings" field and you will get an index on the array. So the find above will actually perform relatively fast (with the obvious trade-off that the index will be very large).
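For instance, in the mongo shell of that era (newer versions spell it createIndex):
db.my_collection.ensureIndex({ "strings" : 1 })  // multikey index: every array element gets its own entry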
In terms of scaling, a 14-node cluster may almost be overkill. However, Mongo does support auto-sharding and replica sets, and they even work together; here's a blog post from a 10gen member to get you started (10gen makes Mongo).