I'm using Redis to store tags for certain entities. Examples of the data:
+-------------+--------------------------------+
| key | value |
+-------------+--------------------------------+
| book:1:tags | [python, ruby, rails] |
+-------------+--------------------------------+
| book:2:tags | [fiction, fantasy] |
+-------------+--------------------------------+
| book:3:tags | [fiction, adventure] |
+-------------+--------------------------------+
How do I find all books with a particular tag, i.e. all books tagged with fiction?
You have to maintain the reverse indexes yourself. Along with the keys you posted, you should create reverse references:
tag:python => [1]
tag:ruby => [1]
tag:rails => [1]
tag:fiction => [2, 3]
tag:fantasy => [2]
tag:adventure => [3]
Then it's trivial to do what you want. But maybe you should consider using another tool for the job. For example, MongoDB can efficiently index and query arrays.
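As a minimal sketch of the pattern in plain Python (using sets to stand in for Redis sets; with real Redis you'd run SADD when tagging, and SMEMBERS or SINTER when querying, against the same key names):

```python
# Simulate the forward keys from the question with plain Python sets.
book_tags = {
    "book:1:tags": {"python", "ruby", "rails"},
    "book:2:tags": {"fiction", "fantasy"},
    "book:3:tags": {"fiction", "adventure"},
}

# Build the reverse index: tag -> set of book ids.
# In Redis this would be one SADD per (tag, book) pair, e.g. SADD tag:fiction 2
tag_index = {}
for key, tags in book_tags.items():
    book_id = int(key.split(":")[1])
    for tag in tags:
        tag_index.setdefault("tag:" + tag, set()).add(book_id)

# "All books tagged fiction" is now a single lookup (SMEMBERS tag:fiction).
fiction_books = tag_index["tag:fiction"]

# Books carrying *all* of several tags map to a set intersection (SINTER).
fiction_and_fantasy = tag_index["tag:fiction"] & tag_index["tag:fantasy"]
```

The important point is that the reverse index is written at tag time, not computed at query time, so lookups stay O(size of the result set).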
According to mv-expand documentation:
Expands multi-value array or property bag.
mv-expand is applied on a dynamic-typed column so that each value in the collection gets a separate row. All the other columns in an expanded row are duplicated.
Just as the mv-expand operator creates a row for each element in a list, is there an equivalent operator/way to make each element of a list an additional column?
I checked the documentation and found bag_unpack:
The bag_unpack plugin unpacks a single column of type dynamic by treating each property bag top-level slot as a column.
However, it doesn't seem to work on lists; it only operates on the top-level properties of a JSON object. Using bag_unpack (as in the query below):
datatable(d:dynamic)
[
dynamic({"Name": "John", "Age":20}),
dynamic({"Name": "Dave", "Age":40}),
dynamic({"Name": "Smitha", "Age":30}),
]
| evaluate bag_unpack(d)
It will do the following:
Name Age
John 20
Dave 40
Smitha 30
Is there a command/way (see some_command_which_helps) I can achieve the following (convert a list to columns):
datatable(d:dynamic)
[
dynamic(["John", "Dave"])
]
| evaluate some_command_which_helps(d)
That translates to something like:
Col1 Col2
John Dave
Is there an equivalent where I can convert a list/array to multiple columns?
For reference: you can run the above queries online in the Log Analytics demo environment if needed (though it may require a login).
You could try something along the following lines.
(That said, from an efficiency standpoint, you may want to look at restructuring the data set to begin with, using a schema that matches how you actually plan to consume/query it.)
datatable(d:dynamic)
[
dynamic(["John", "Dave"]),
dynamic(["Janice", "Helen", "Amber"]),
dynamic(["Jane"]),
dynamic(["Jake", "Abraham", "Gunther", "Gabriel"]),
]
| extend r = rand()               // temporary (random) key to group each original row back together
| mv-expand with_itemindex = i d  // one output row per array element, with its index in i
| summarize b = make_bag(pack(strcat("Col", i + 1), d)) by r  // rebuild each row as a bag: {"Col1": ..., "Col2": ...}
| project-away r
| evaluate bag_unpack(b)          // expand the bag's keys into columns
which will output:
|Col1 |Col2 |Col3 |Col4 |
|------|-------|-------|-------|
|John |Dave | | |
|Janice|Helen |Amber | |
|Jane | | | |
|Jake |Abraham|Gunther|Gabriel|
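Outside Kusto, the reshaping that query performs (pad each list out to Col1..ColN, leaving missing slots blank) can be sketched in plain Python, purely to illustrate what is being computed:

```python
# The same four input lists as the datatable above.
rows = [
    ["John", "Dave"],
    ["Janice", "Helen", "Amber"],
    ["Jane"],
    ["Jake", "Abraham", "Gunther", "Gabriel"],
]

# Column count is the length of the longest list.
width = max(len(r) for r in rows)

# One dict per row, keyed Col1..ColN; short rows get empty strings,
# matching bag_unpack's blank cells in the output table.
table = [
    {"Col%d" % (i + 1): (r[i] if i < len(r) else "") for i in range(width)}
    for r in rows
]
```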
To extract key-value pairs from text and convert them to columns without hardcoding the key names in the query:
print message="2020-10-15T15:47:09 Metrics: duration=2280, function=WorkerFunction, count=0, operation=copy_into, invocationId=e562f012-a994-4fc9-b585-436f5b2489de, tid=lct_b62e6k59_prd_02, table=SALES_ORDER_SCHEDULE, status=success"
| extend Properties = extract_all(#"(?P<key>\w+)=(?P<value>[^, ]*),?", dynamic(["key","value"]), message)
| mv-apply Properties on (summarize make_bag(pack(tostring(Properties[0]), Properties[1])))
| evaluate bag_unpack(bag_)
| project-away message
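The extraction step can be checked outside Kusto as well; in Python, the equivalent of that extract_all pattern is a re.findall with the same regex (this only mirrors the key/value capture, not the column expansion):

```python
import re

# Same sample message as the KQL print statement above.
message = (
    "2020-10-15T15:47:09 Metrics: duration=2280, function=WorkerFunction, "
    "count=0, operation=copy_into, invocationId=e562f012-a994-4fc9-b585-436f5b2489de, "
    "tid=lct_b62e6k59_prd_02, table=SALES_ORDER_SCHEDULE, status=success"
)

# Same idea as the KQL pattern: word characters, '=', then anything up to a comma or space.
pairs = dict(re.findall(r"(\w+)=([^, ]*)", message))
```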
id | name     | parent_id
ab | file     | de
ad | song     | de
bc | Bob      | ad
mn | open.txt | bc
Assuming ab is the ID of the file and bc is its parent's ID, then to store the data you can either use the bulk-insert utility, or run the following Cypher query:
CREATE (A {id:'ab', name: 'file'}), (B {id:'bc', name: 'folder'}), (A)-[:child]->(B)
To query, depending on the data you would like to extract use a Cypher query similar to:
MATCH (c)-[:child]->(p) RETURN c,p
For the type of query you're running, I believe it would be better if you maintained a reverse [:parent] edge and modified your query as follows:
GRAPH.QUERY Makinga "MATCH (r:Resource{Id:'6e3f67da-43ed-11e9-b149-d3f886f8337c'})-[:parent*1..]->(b:Resource) RETURN count(b) as count"
This is related to the way RedisGraph describes connections and applies filters.
I would like to create a highly scalable system for storing "candidates". The problem is that each candidate has different "features", and sometimes these have different data types. One idea I'd like to try looks like this:
candidates:
| id | cType    |
| 1  | 'fabric' |
| 2  | 'belt'   |

candidateFeatures:
| candidateId | featureTable | featureId |
| 1           | 'city'       | 1         |
| 1           | 'colour'     | 1         |
| 1           | 'colour'     | 2         |
| 2           | 'city'       | 2         |
| 2           | 'size'       | 1         |

city:
| id | lat | lng | name     |
| 1  | x   | x   | 'London' |
| 2  | x   | x   | 'Paris'  |

colour:
| id | name    |
| 1  | 'Red'   |
| 2  | 'Green' |

size:
| id | value |
| 1  | 10    |
| 2  | 12    |
Here you can see that there is one fabric candidate in London with Red and Green features and a belt candidate in Paris with size 10.
We do this because we get feedback in a universal way, and I'm trying to write a scalable machine-learning solution that allows new types of candidates, as well as new candidate feature types, to be added seamlessly as they are discovered and added to the DB. A candidate is assumed to be able to have more than one of each feature type.
Ultimately I need to be able to extract the data (probably through a materialised view) so that if I want all 'fabric' candidates I would end up with something like:
| id | colourIds | cityIds |
| 1  | [1, 2]    | [1]     |
| 4  | [3]       | [4, 5]  |
but then if one day I find a fabric that doesn't have a colour but instead has a pattern I can easily get a new table for patterns and just add the features to my "candidateFeatures" table:
| id | colourIds | cityIds | patternIds |
| 1  | [1, 2]    | [1]     | null       |
| 4  | [3]       | [4, 5]  | null       |
| 14 | null      | [6]     | [1]        |
This format is suitable for the front end, and the format of "candidateFeatures" is very useful for the backend: we can use it to scale easily without modifying existing tables, and for scalable data analysis, specifically when looking for correlations between user responses to candidates and the presence of categorical features, or the values of continuous features.
To me this seems like a really clever idea that doesn't have proper support in SQL… which makes me think it's probably a really dumb idea in disguise. I think it's possible to do this using EXEC, but that carries some risks. Does anyone know a smarter way to achieve the same result, or how to actually achieve this?
Since execution time isn't such a big concern, I can always run it through a third-party program (e.g. in Python) and put the results into new tables. But ideally I'd use a bunch of materialized views and have them refresh periodically, because that feels like it would scale better with more data.
This is too long for a comment.
It is neither a good idea nor an awful idea. It is simply not how SQL works. The problem is that queries have a well-defined set of tables and column references. This is quite important for optimizing the query -- a step that generally happens before the query is run.
Queries are not just strings into which table and column names can be substituted dynamically while the data is being processed.
There are ways to address the data modeling:
Have separate tables for the features and association tables to match them back to the original data.
Use an entity-attribute-value model, which basically stores key-value pairs.
Use a flexible storage mechanism, such as JSON or arrays.
In addition, Postgres supports table inheritance, which might be useful for representing this type of data.
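As a sketch of the JSON-storage option, here is roughly what the candidate/feature shape could look like using SQLite's JSON functions from Python (Postgres jsonb with -> and @> is analogous; the table name, column names, and feature keys are made up for illustration, and this assumes a sqlite3 build with the JSON1 functions, which recent Python versions bundle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each candidate carries its own feature bag as JSON text;
# adding a brand-new feature type requires no schema change.
conn.execute("CREATE TABLE candidates (id INTEGER PRIMARY KEY, cType TEXT, features TEXT)")
conn.executemany(
    "INSERT INTO candidates (id, cType, features) VALUES (?, ?, ?)",
    [
        (1, "fabric", '{"colourIds": [1, 2], "cityIds": [1]}'),
        (2, "belt", '{"sizeIds": [1], "cityIds": [2]}'),
        (14, "fabric", '{"cityIds": [6], "patternIds": [1]}'),
    ],
)

# Pull a feature list back out with json_extract;
# rows lacking that key simply return NULL, like the nulls in the desired view.
rows = conn.execute(
    "SELECT id, json_extract(features, '$.colourIds') "
    "FROM candidates WHERE cType = 'fabric' ORDER BY id"
).fetchall()
```

The trade-off versus the candidateFeatures association table is that the feature values live inside the document, so cross-candidate analysis needs JSON operators (or an expression index) rather than plain joins.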
The category page's meta_title, meta_keywords and meta_description tags come from the table ps_category_lang.
mysql> select * from ps_category_lang limit 1;
+-------------+---------+---------+-------+-------------+--------------+------------+---------------+------------------+
| id_category | id_shop | id_lang | name | description | link_rewrite | meta_title | meta_keywords | meta_description |
+-------------+---------+---------+-------+-------------+--------------+------------+---------------+------------------+
| 1 | 1 | 1 | Raíz | | raiz | | | |
+-------------+---------+---------+-------+-------------+--------------+------------+---------------+------------------+
1 row in set (0.00 sec)
Is it possible to add a prefix (or suffix) to those three values, so that the page still uses the information from the database but appends or prepends a certain value?
If so, what needs to be done? I already have a custom module overriding the category page with an extended template and controller.
Prestashop 1.7.1
The best way is to override /classes/controller/FrontController.php, specifically the getTemplateVarPage() method, which contains this code:
$page = array(
'title' => '',
'canonical' => $this->getCanonicalURL(),
'meta' => array(
'title' => $meta_tags['meta_title'],
'description' => $meta_tags['meta_description'],
'keywords' => $meta_tags['meta_keywords'],
'robots' => 'index',
),
'page_name' => $page_name,
'body_classes' => $body_classes,
'admin_notifications' => array(),
);
Here you can check which page you are on and alter the values as needed.
For each standard controller in PrestaShop there is a dedicated function in the Meta class; in your case it's getCategoryMetas(), which you can override and adapt to fit your needs.
You can also use the previous answer to rewrite the metas first computed by Meta::getCategoryMetas() inside CategoryController::getTemplateVarPage().
Good luck
I'm wondering if I'm doing something wrong, or if this is just a quirk in how MySQL results are processed. Here's the setup (I can't seem to find this exact topic anywhere else):
I have two tables order and menu.
menu has an 'id' (for the item), an 'item' name, and three prices ('prc1', 'prc2', 'prc3') in each row.
menu
+----+----------+------+------+------+
|'id'| 'item'   |'prc1'|'prc2'|'prc3'|
+----+----------+------+------+------+
| 1  | 'tshirt' | 3.00 | 4.50 | 4.00 |
| 2  | 'socks'  | 1.00 | 2.50 | 2.00 |
+----+----------+------+------+------+
order also has an item id ('i_id') to match against menu, and an integer ('prc_id') that I use later in PHP to pick which price applies.
order
+------+--------+
|'i_id'|'prc_id'|
+------+--------+
|  1   |   1    |  # i_id matches id - tshirt, and says to use prc1
|  2   |   3    |  # i_id matches id - socks, and uses prc3
+------+--------+
I use a JOIN to match the orders up to the items:
"SELECT `order`.prc_id, menu.item, menu.prc1, menu.prc2, menu.prc3
FROM `order`
LEFT JOIN menu
ON `order`.i_id = menu.id"
I then get the result back, and initially to make sure everything panned out, I printed the array:
$result = mysql_query($query);
while ($row = mysql_fetch_array($result)) {
    print_r($row);
}
This is the array I get back (obviously dummy info for the initial tests):
Array
(
[0] => 1 [prc_id] => 1 #the value (1) for 'prc_id' is given twice
[1] => tshirt [item] => tshirt #the value (tshirt) for 'item' is given twice
[2] => 3.00 [prc1] => 3.00 #the value (3.00) for 'prc1' is given twice
[3] => 4.50 [prc2] => 4.50 #etc
[4] => 4.00 [prc3] => 4.00
[0] => 3 [prc_id] => 3
[1] => socks [item] => socks
[2] => 1.00 [prc1] => 1.00
[3] => 2.50 [prc2] => 2.50
[4] => 2.00 [prc3] => 2.00
)
So my question(finally, right? xD)... Why is duplicate data sent back in the array response?
Have I done something wrong? Am I overlooking something?
It's not a huge issue, it doesn't affect my end result, I'd just like to be as precise as possible.
Thank you for your time. :)
This comes from mysql_fetch_array(), not from print_r(): by default it returns each row indexed both numerically and by column name (the MYSQL_BOTH mode), so every value appears twice. Use mysql_fetch_assoc(), or pass MYSQL_ASSOC (or MYSQL_NUM) as the second argument to mysql_fetch_array(), to get each value only once.
EDIT:
Try your query in a database query tool and you will see that the result set itself contains no duplicate columns.
SELECT `order`.prc_id, menu.item, menu.prc1, menu.prc2, menu.prc3
FROM `order`
LEFT JOIN menu
ON `order`.i_id = menu.id
It looks like you are joining on the i_id of order and the id of menu, while (if I understand correctly) you should be joining on the item_id of menu.
Something like:
SELECT `order`.prc_id, menu.item, menu.prc1, menu.prc2, menu.prc3
FROM `order`
LEFT JOIN menu
ON `order`.i_id = menu.item_id