I have a JSON file/stream, and I'd like to be able to run SQL-style SELECT queries against it.
So here is the file.
The file contains all the data I have. I'd like to be able to show, let's say:
all the odeu_nom and odeu_desc where categorie=Feuilles.
If you can do that with PHP and JSON (eval), fine... tell me how.
On the SQL side, I would do: SELECT * FROM $json WHERE categorie=Feuilles
P.S. I have found jsonpath, which is an XPath for JSON... maybe that's another option?
P.S. #2: With some research I found another option: the JSON is the same as an array, so maybe I can filter the array and just return the ones I need? How do I do that?
It makes more sense to stick with XPath-style selectors (like jsonpath) rather than SQL, even if you are more familiar with SQL.
The advantage of the "path" approach is that it is more readily expressive of the hierarchical structure implicit in XML/JSON, as opposed to SQL, which requires various joins to "get out of its rectangular/tabular prison".
Although I have never used jsonpath, from reading its summary page I believe the following should produce all the odeu_nom values for objects whose categorie is 'Feuilles' (given the JSON input referred to in the question):
$.Liste_des_odeurs[?(@.categorie == 'Feuilles')].odeu_nom
which corresponds to the following XPath:
/Liste_des_odeurs[categorie='Feuilles']/odeu_nom
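As for the array-filtering idea in P.S. #2: once the JSON is decoded into native structures, a plain filter does the job. A minimal sketch in Python with made-up sample data (the PHP equivalent would be json_decode plus a loop or array_filter):

import json

# hypothetical sample mirroring the structure implied by the question
doc = json.loads('''{"Liste_des_odeurs": [
    {"odeu_nom": "Menthe", "odeu_desc": "feuille fraiche", "categorie": "Feuilles"},
    {"odeu_nom": "Rose", "odeu_desc": "petale", "categorie": "Fleurs"}
]}''')

# keep only the entries whose categorie is 'Feuilles'
feuilles = [(o["odeu_nom"], o["odeu_desc"])
            for o in doc["Liste_des_odeurs"]
            if o["categorie"] == "Feuilles"]
print(feuilles)  # [('Menthe', 'feuille fraiche')]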
Et voila...
BTW, 'Jazz is not dead, it just smells funny' (F Zappa)
When trying to filter by tag, a small popup says the values should be in logfmt.
I have been looking around for logfmt, but all I can find is the key=value format.
My questions are:
Is there a way to do something more sophisticated (starts_with, not equal, contains, etc.)?
I am trying to filter by URL using http.url="http://example.com?bla=bla&foo=bar". I am pretty sure the value exists, because I copy/pasted it from my trace, yet I get no results. Do I need to escape characters or do something else for this to work?
I did some research on logfmt as well. Based on the documentation of the original implementation and on the Python implementation of the parser (and its tests), I would say that it doesn't support anything more sophisticated (like starts_with, not equal, contains). That is because the output of the parser is a simple dictionary, with no regex involved in the values.
As for the second question, using the same Python parser mentioned above, I was able to double-check that your filter looks fine:
from logfmt import parse_line
parse_line('http.url="http://example.com?bla=bla&foo=bar"')
Output:
{'http.url': 'http://example.com?bla=bla&foo=bar'}
This makes me suspect an issue on the Jaeger side, but this is as far as I could get.
I am struggling a bit as I am new to programming. I am currently writing a Python script and I am a bit stuck. The goal is to parse some spatial information that gets pulled from SQL into a format that is usable for my Python script down the line.
I was able to CAST through a SQL query and fetch the rows using the odbc module. However, once I have fetched the data, things get tricky for me. Here is an example of a print of the fetchall:
[(u'POLYGON ((7014.186279296875 6602.99658203125 1612.5, 7015.984375 6600.416015625 1612.5))',), (u'POLYGON ((6730.962646484375 6715.2490234375 1522.5, 6730.0869140625 6714.13916015625 1522.5))',)]
I am not exactly sure what I am getting here; it looks like a list of tuples. I have tried converting it to a list of lists, but there must be something I am missing.
Here is the usable format I am looking for:
[[7014.186279296875, 6602.99658203125, 1612.5], [7015.984375, 6600.416015625, 1612.5]]
[[6730.962646484375, 6715.2490234375, 1522.5], [6730.0869140625, 6714.13916015625, 1522.5]]
Any ideas how I can accomplish this? Maybe there is a better way to CAST in SQL, or a module in Python that would be easier to use instead of just doing a cursor.fetchall() and parsing? Any parsing help would be useful. Thanks.
If you want to do the parsing yourself, that is straightforward. For the data you've provided, the following code does the trick:
result = []
for element in data:
    # element is a 1-tuple like (u'POLYGON ((x y z, x y z))',);
    # strip the leading 'POLYGON ((' (10 characters) and the trailing '))'
    single_elements = element[0][10:-2].split(', ')
    polygon = []
    for se in single_elements:
        # 'x y z' -> [x, y, z] as floats
        polygon.append([float(a) for a in se.split(' ')])
    result.append(polygon)
result will then hold one list of [x, y, z] points per polygon, which matches the format you asked for. If parsing is not an option, paste some of your code so I can see how you're fetching the data.
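If you'd rather not count characters (the [10:-2] slice assumes the exact 'POLYGON ((...))' shape), here is a slightly more defensive sketch using a regular expression; data stands in for your cursor.fetchall() result:

import re

def parse_wkt_points(wkt):
    # grab everything between the double parentheses, then split on commas;
    # each chunk is a space-separated 'x y z' triple
    inner = re.search(r'\(\((.*?)\)\)', wkt).group(1)
    return [[float(a) for a in point.split()] for point in inner.split(',')]

data = [(u'POLYGON ((7014.186279296875 6602.99658203125 1612.5, 7015.984375 6600.416015625 1612.5))',)]
for (row,) in data:
    print(parse_wkt_points(row))
# [[7014.186279296875, 6602.99658203125, 1612.5], [7015.984375, 6600.416015625, 1612.5]]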
I have got the following issue.
I have implemented WDR_SELECT_OPTIONS and it works fine, but I need the CP (*) operator for searching data.
Does anyone know why it is not there?
CP isn't showing up because it is only available for character-like data elements (types like C, N, or STRING).
My guess is that your field is an integer.
Manual page for these relational operators:
https://help.sap.com/abapdocu_740/en/abenlogexp_op.htm
It is not there because it does not need to be there. If you type * or + in this field, the system automatically knows that it is a pattern.
Here is a screenshot of selection options from a traditional dynpro.
The following conversion
SELECT to_tsvector('english', 'Google.com');
returns this:
'google.com':1
Why doesn't the TSearch2 engine return something like this?
'google':2, 'com':1
Or how can I make the engine return the exploded string as I wrote above?
I just need "Google.com" to be findable by "google".
Unfortunately, there is no quick and easy solution.
Denis is correct in that the parser is recognizing it as a hostname, which is why it doesn't break it up.
There are three other things you can do, off the top of my head:

1. Disable the host parsing in the database. See the PostgreSQL documentation for details; e.g. something like:

   ALTER TEXT SEARCH CONFIGURATION your_parser_config
       DROP MAPPING FOR host, url, url_path;

2. Write your own custom dictionary.

3. Pre-parse your data before it is inserted into the database in some manner, for example splitting all domains before they go in; a sketch follows this list.
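A minimal sketch of option (3) in Python, assuming you control the code that inserts the text (the regex and the explode_hosts name are illustrative, not from any library):

import re

# matches dotted hostnames such as 'google.com' or 'www.facebook.com'
HOST = re.compile(r'\b(?:[\w-]+\.)+[\w-]+\b')

def explode_hosts(text):
    # after each hostname, append its dot-separated parts, so that both
    # 'google.com' and 'google' end up as lexemes in the tsvector
    return HOST.sub(lambda m: m.group(0) + ' ' + m.group(0).replace('.', ' '), text)

print(explode_hosts('Google.com'))  # Google.com Google com

Feeding the exploded text to to_tsvector should then yield 'google.com', 'googl' (stemmed) and 'com' as separate lexemes, so a search for "google" matches.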
I had a similar issue to yours last year and opted for solution (2) above.
My solution was to write a custom dictionary that splits words up on non-word characters. A custom dictionary is a lot easier and quicker to write than a new parser. You still have to write C, though. :)
The dictionary I wrote returns something like 'www.facebook.com':4, 'com':3, 'facebook':2, 'www':1 for the www.facebook.com domain (we had a unique-ish scenario, hence the four results instead of three).
The trouble with a custom dictionary is that you no longer get stemming (i.e. www.books.com will come out as www, books, and com). I believe there is some work (which may have been completed) to allow chaining of dictionaries, which would solve this problem.
First off, in case you're not aware: tsearch2 is deprecated in favor of the built-in functionality:
http://www.postgresql.org/docs/9/static/textsearch.html
As for your actual question: google.com gets recognized as a host by the parser:
http://www.postgresql.org/docs/9.0/static/textsearch-parsers.html
If you don't want this to occur, you'll need to pre-process your text accordingly (or use a custom parser).
I need to search a CLOB column and am looking for the best way to do this. I've seen variants online using the DBMS_LOB package, as well as something called Oracle Text. Can someone provide a quick example of how to do this?
Oracle Text indexing is the way to go. You can use either a CONTEXT or a CTXRULE index. CONTEXT can be used on unstructured documents, whereas CTXRULE is more helpful on structured documents.
This link provides more info on the index types & syntax.
The most important factors you need to consider are the LEXER and the STOPLIST.
You can also read the posts on asktom.oracle.com:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5533095920114
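A minimal sketch of the Oracle Text route in Python, assuming a CONTEXT index already exists and using the cx_Oracle driver; the clobtest/myclob names follow the snippet in the answer below, and the index name and connection string are placeholders:

import cx_Oracle  # assumes the cx_Oracle driver is installed

# assumes an Oracle Text index was created beforehand, e.g.:
#   CREATE INDEX clobtest_idx ON clobtest (myclob) INDEXTYPE IS CTXSYS.CONTEXT;
conn = cx_Oracle.connect("scott/tiger@localhost/orcl")  # placeholder credentials
cur = conn.cursor()

# CONTAINS scores each document; > 0 means the term was found
cur.execute(
    "SELECT myclob FROM clobtest WHERE CONTAINS(myclob, :word) > 0",
    word="searchterm",
)
for (doc,) in cur:
    print(doc.read())  # doc is a cx_Oracle LOB; read() returns its text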
What is in your CLOB, and what are you searching for?
Oracle Text is good if you are searching for words or phrases (which is probably what you have in a CLOB). Sometimes you'll store something 'strange' in a CLOB, like XML or the return value of a web-service call, and that might be a different kettle of fish.
I needed to do this just recently and came up with the following solution (it uses Spring JDBC):
String sql = "select * from clobtest where dbms_lob.instr(myclob, ?, 1, 1) > 0";
return (String) getSimpleJdbcTemplate().getJdbcOperations().queryForObject(sql, new RowMapper<Object>() {
    public String mapRow(ResultSet rs, int rowNum) throws SQLException {
        // lobHandler is e.g. an org.springframework.jdbc.support.lob.DefaultLobHandler
        String clobText = lobHandler.getClobAsString(rs, "myclob");
        return clobText;
    }
}, searchText);
Seems to work pretty well, but I'm going to do some performance testing to see how well it works under load.