I used to use node_auto_index( condition ) to search for nodes via indexes, but now I have used batch-import ( https://github.com/jexp/batch-import/ ) and it created indexes with specific names ( type, code, etc. ).
So, how do I write a Cypher query that uses indexes on multiple properties?
Old query example:
START n = node : node_auto_index( 'type: NODE_TYPE AND code: NODE_CODE' ) RETURN n;
How do I do the 'same' query, but with the specific index names instead of node_auto_index?
START n = node : type( "type = NODE_TYPE" ) RETURN n;
Also, the following query does not work (no errors, but the result is empty and it shouldn't be):
START n = node : type( 'type: NODE_TYPE AND code: NODE_CODE' ) RETURN n;
So, type is an index and code is an index; how do I combine the two in the same query for a single node?
Another question: what's the difference between node_auto_index and these indexes with specific names?
Thank you.
You almost had it:
START n = node:type("type:NODE_TYPE") RETURN n;
or
START n = node:type(type="NODE_TYPE") RETURN n;
My ELT tool imports my data into BigQuery and automatically generates/extends the schema for dynamic nested keys (in the schema below, under properties).
It looks like this:
How can I get the list of nested keys of a repeated record? For example, so that I can group by a property when the items have said property non-null?
I have tried
select column_name
from my_schema.INFORMATION_SCHEMA.COLUMNS
where
table_name = 'my_table'
But it only lists the first-level keys.
From the picture above, I want, as a first step, a SQL query that returns
message
user_id
seeker
liker_id
rateable_id
rateable_type
from_organization
likeable_type
company
existing_attempt
...
My real goal, though, is to group/count my data based on a non-null value of a 2nd-level nested property properties.filters.[filter_type].
The schema may evolve when our application adds more filters, so this needs to be generated dynamically; I can't just hard-code the list of nested keys.
Note: this is very similar to the question How to extract all the keys in a JSON object with BigQuery, but in my case the data is already in a schema and is not a JSON object.
EDIT:
Suppose I have a list of such records with nested properties. How do I write a SQL query that adds a field "enabled_filters" which aggregates, for each item, the list of properties for which said property is not null?
Example input (properties.x are dynamic and not known by the programmer)
search_id | properties.filters.school | properties.filters.type
1         | MIT                       | master
2         | Princetown                | null
3         | null                      | master
Example output
search_id | enabled_filters
1         | ["school", "type"]
2         | ["school"]
3         | ["type"]
Have you looked at COLUMN_FIELD_PATHS? It should give you the paths for all columns.
select field_path from my_schema.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS where table_name = '<table>'
https://cloud.google.com/bigquery/docs/information-schema-column-field-paths
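Building on that, one untested sketch of the dynamic "enabled_filters" query from the EDIT: restrict the paths to the keys under properties.filters, then build and run the query with EXECUTE IMMEDIATE. Only my_schema, my_table and the column names come from the question; the rest is illustrative.
-- Untested sketch: collect the filter paths, then build and run the query dynamically.
DECLARE filter_paths ARRAY<STRING>;
DECLARE checks STRING;

SET filter_paths = (
  SELECT ARRAY_AGG(field_path)
  FROM my_schema.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
  WHERE table_name = 'my_table'
    AND STARTS_WITH(field_path, 'properties.filters.')
);

-- One "IF(<path> IS NOT NULL, '<key>', NULL)" expression per filter key.
SET checks = (
  SELECT STRING_AGG(FORMAT("IF(%s IS NOT NULL, '%s', NULL)",
                           path, SPLIT(path, '.')[OFFSET(2)]), ', ')
  FROM UNNEST(filter_paths) AS path
);

EXECUTE IMMEDIATE FORMAT("""
  SELECT search_id,
         ARRAY(SELECT x FROM UNNEST([%s]) AS x WHERE x IS NOT NULL) AS enabled_filters
  FROM my_schema.my_table
""", checks);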
The field properties is not nested in arrays, only in structs, so a JavaScript UDF that parses this field should be fast enough.
CREATE TEMP FUNCTION jsonObjectKeys(input STRING, shownull BOOL, fullname BOOL)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
// Recursively collect the keys of the parsed JSON object.
// shownull: also emit keys whose value is null (as 'key==null').
// fullname: emit the full dotted path instead of just the leaf key.
function test(input, old) {
  var out = [];
  for (let x in input) {
    let te = input[x];
    out = out.concat(
      te == null ? (shownull ? [x + '==null'] : [])
      : typeof te == 'object' ? test(te, old + x + '.')
      : [fullname ? old + x : x]);
  }
  return out;
}
return test(JSON.parse(input), "");
""";
with tbl as (
  select struct(1 as alpha, struct(2 as x, 3 as y, [1,2,3] as z) as B) A
  from unnest(generate_array(1, 10*1))
  union all
  select struct(null, struct(null, 1, [999]))
)
select *,
  TO_JSON_STRING(A) as string_output,
  jsonObjectKeys(TO_JSON_STRING(A), true, false) as output1,
  jsonObjectKeys(TO_JSON_STRING(A), false, true) as output2,
  concat('["', array_to_string(jsonObjectKeys(TO_JSON_STRING(A), false, true), '","'), '"]') as output_string,
  jsonObjectKeys(TO_JSON_STRING(A.B), false, true) as output3
from tbl
If an alias has a column consisting of a map,
A = LOAD .. AS ( id : int, data : [ DOUBLE ] );
and I want to keep the map if it has at least one element, and otherwise replace it with the map [ 'dummy' : 1.0d ].
B = FOREACH A GENERATE id,
( SIZE( data ) == 0 ? TOMAP('dummy', (DOUBLE)1.0) : data ) AS data;
This results in the following error,
Two inputs of BinCond must have compatible schemas. left hand side: #133:map right hand side: udf_results#144:map(#145:double)
The issue seems to be that Pig does not implicitly cast MAP:[ ] to MAP:[ DOUBLE ]. Explicitly casting solved the problem,
B = FOREACH A GENERATE id,
( SIZE( data ) == 0 ? ([DOUBLE]) TOMAP('dummy', (DOUBLE)1.0) : data ) AS data;
I'm looking for the Berkeley DB equivalent of
SELECT COUNT(*) and SELECT COUNT(*) ... WHERE ... LIKE "%...%"
I have got 100 records with keys: 1, 2, 3, ... 100.
I have got the following code:
//Key = 1
i=1;
strcpy_s(buf, to_string(i).size()+1, to_string(i).c_str());
key.data = buf;
key.size = to_string(i).size()+1;
key.flags = 0;
data.data = rbuf;
data.size = sizeof(rbuf)+1;
data.flags = 0;
//Cursor
if ((ret = dbp->cursor(dbp, NULL, &dbcp, 0)) != 0) {
dbp->err(dbp, ret, "DB->cursor");
goto err1;
}
//Get
dbcp->get(dbcp, &key, &data_read, DB_SET_RANGE);
db_recno_t cnt;
dbcp->count(dbcp, &cnt, 0);
cout <<"count: "<<cnt<<endl;
The count cnt is always 1, but I expected it to count all of the partial key matches for Key=1: 1, 10, 11, 21, ..., 91.
What is wrong in my code/understanding of DB_SET_RANGE?
Is it possible to get SELECT COUNT WHERE LIKE "%...%" in BDB ?
Also is it possible to get SELECT COUNT All records from the file ?
Thanks
You're expecting Berkeley DB to be way more high-level than it actually is. It doesn't contain anything like what you're asking for. If you want the equivalent of WHERE field LIKE '%1%' you have to make a cursor, read through all the values in the DB, and do the string comparison yourself to pick out the ones that match. That's what an SQL engine actually does to implement your query, and if you're using libdb instead of an SQL engine, it's up to you. If you want it done faster, you can use a secondary index (much like you can create additional indexes for a table in SQL), but you have to provide some code that links the secondary index to the main DB.
DB_SET_RANGE is useful to optimize a very specific case: you're looking for items whose key starts with a specific substring. You can DB_SET_RANGE to find the first matching key, then DB_NEXT your way through the matches, and stop when you get a key that doesn't match. This works only on DB_BTREE databases because it depends on the keys being returned in lexical order.
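A rough sketch of that loop (a hypothetical helper, not from the question; it assumes keys are stored as strings as in your snippet, and exact cursor method names may vary by BDB version):
#include <string.h>
#include <db.h>

/* Count keys that start with the given prefix (DB_BTREE only). */
int count_prefix(DB *dbp, const char *prefix, size_t *count)
{
    DBC *dbcp;
    DBT key, data;
    size_t plen = strlen(prefix);
    int ret;

    *count = 0;
    if ((ret = dbp->cursor(dbp, NULL, &dbcp, 0)) != 0)
        return ret;

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = (void *)prefix;
    key.size = (u_int32_t)plen;              /* position using the prefix bytes only */

    /* DB_SET_RANGE lands on the first key >= prefix; walk forward while it still matches. */
    ret = dbcp->get(dbcp, &key, &data, DB_SET_RANGE);
    while (ret == 0 && key.size >= plen && memcmp(key.data, prefix, plen) == 0) {
        (*count)++;
        ret = dbcp->get(dbcp, &key, &data, DB_NEXT);
    }

    dbcp->close(dbcp);
    return (ret == 0 || ret == DB_NOTFOUND) ? 0 : ret;
}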
The count method tells you how many exact duplicate keys there are for the item at the current cursor position.
You can use the DB->stat() method.
For example, to get the number of unique keys in a B-tree database:
bool row_amount(DB *db, size_t &amount) {
    amount = 0;
    if (db == NULL) return false;
    DB_BTREE_STAT *sp;
    int ret = db->stat(db, NULL, &sp, 0);
    if (ret != 0) return false;
    amount = (size_t)sp->bt_nkeys;   // number of unique keys in the B-tree
    free(sp);                        // DB->stat() allocates the stat struct; the caller frees it
    return true;
}
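A minimal (hypothetical) call site, assuming db is an already-opened DB* handle:
size_t n = 0;
if (row_amount(db, n))
    cout << "number of records: " << n << endl;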
Is there a way to create a small constant relation (table) in Pig?
I need to create a relation with only one tuple that contains constant values.
Something along the lines of:
A = LOAD using ConstantLoader('{(1,2,3)}');
thanks, Ido
I'm not sure why you would need that, but here's an ugly solution:
A = LOAD 'some/sample/file' ;
B = FOREACH A GENERATE '' ;
C = LIMIT B 1 ;
Now you can use C as the 'empty relation' that has one empty tuple.
DEFINE GenerateRelationFromString(string) RETURNS relation {
temp = LOAD 'somefile';
tempLimit1 = LIMIT temp 1;
$relation = FOREACH tempLimit1 GENERATE FLATTEN(TOKENIZE('$string', ','));
};
usage:
fourRows = GenerateRelationFromString('1,2,3,4');
myConstantRelation = FOREACH fourRows GENERATE (
CASE $0
WHEN '1' THEN (1, 'Ivan')
WHEN '2' THEN (2, 'Boris')
WHEN '3' THEN (3, 'Vladimir')
WHEN '4' THEN (4, 'Olga')
END
) as myTuple;
This for sure is hacky, and the right way, in my mind, would be to implement a StringLoader() that would work like this:
fourRows = LOAD '1,2,3,4' USING StringLoader(',');
The argument typically used for the file location would just be used as literal string input.
Fast answer: no.
I asked about it on the pig-dev mailing list.
Say I have the fields stud_roll_number and date_leave.
select stud_roll_number, count(*) from some_table where date_leave > some_date group by stud_roll_number;
How do I write the same query using Lucene? I tried the following after querying for date_leave > some_date:
for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
    Document doc = search.doc(scoreDoc.doc);
    String value = doc.get(fieldName);
    Integer key = mapGrouper.get(value);
    if (key == null) {
        key = 1;
    } else {
        key = key + 1;
    }
    mapGrouper.put(value, key);
}
But I have a huge data set, and it takes a long time to compute this. Is there any other way to do it? Thanks in advance.
Your performance bottleneck is almost certainly the I/O it takes to perform the document and field value lookups. What you want to do in this situation is use a FieldCache for the field you want to group by. Once you have a field cache, you can look up the values by Lucene doc ID, which will be fast because all the values are in memory.
Also remember to give your HashMap an initial capacity to avoid array resizing.
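A rough sketch of that approach (assuming a Lucene 3.x-era FieldCache API; search, topDocs, fieldName and mapGrouper are the names from your snippet):
// Load all values of the group-by field into memory once (cached per reader).
String[] values = FieldCache.DEFAULT.getStrings(search.getIndexReader(), fieldName);

Map<String, Integer> mapGrouper = new HashMap<String, Integer>(1024); // initial capacity to avoid resizing
for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
    String value = values[scoreDoc.doc];       // array lookup instead of a stored-field read
    Integer count = mapGrouper.get(value);
    mapGrouper.put(value, count == null ? 1 : count + 1);
}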
There is a very new grouping module, on https://issues.apache.org/jira/browse/LUCENE-1421 as a patch, that will do this.