AllegroGraph Query Plan - sparql

Is it possible to view the query execution plan for evaluating a SPARQL query in AllegroGraph (something like EXPLAIN)?

AllegroGraph provides a query analyzer, available through a Lisp API and a REST API, that tells you which indices are being utilized. For more information, see the following link (this is for AllegroGraph v4.14.1):
http://franz.com/agraph/support/documentation/v4/query-analysis.html

There is no explain query command in SPARQL itself, but here are two ways to execute SPARQL queries with verbose output that includes the query plan:
click "Show Plan" in AllegroGraph WebView;
or add the AllegroGraph query option PREFIX franzOption_logQuery: <franz:yes> to the query, which writes an execution log to agraph.log.
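For example, with the option prefix prepended to an ordinary query (a minimal sketch; the triple pattern and LIMIT are illustrative, only the option prefix comes from the documentation):

PREFIX franzOption_logQuery: <franz:yes>
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10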

Related

Laravel 5.4: logging SQL queries along with results

Logging SQL queries is widely described, for instance here:
How to get the query executed in Laravel 5?
but I found no information about how to log the queries along with the query results or errors, respectively.
Can anyone fill the gap?
Thanks,
Armin.
If you want to debug a query or queries (based on your comment), there is this option.
Before the query add
\DB::enableQueryLog();
and after the query you can do a dd() or whatever with:
\DB::getQueryLog();
Note: This will debug all of the queries in between the two commands
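To capture the queries together with their results or errors, which is what the original question asks for, here is a minimal sketch (the users table and the where clause are illustrative, not from the original post):

\DB::enableQueryLog();
try {
    $result = \DB::table('users')->where('active', 1)->get();
    \Log::info('queries', \DB::getQueryLog());
    \Log::info('results', $result->toArray());
} catch (\Exception $e) {
    \Log::error('queries', \DB::getQueryLog());
    \Log::error('error: ' . $e->getMessage());
}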

EXASOL Explain Analyse Query

I want to get the query plan in the Exasol database to check the total execution time, memory, and CPU usage. Profiling in Exasol is complex and difficult to understand.
Is there any way to get the query plan, like EXPLAIN ANALYZE in PostgreSQL, or any other simple way?
Also, is there a way to read the query plan in Exasol without executing the query?
You can check the EXASOL User Manual on profiling a query. I agree it's a bit cumbersome :)
Or you can use the scripts I wrote to have an EXPLAIN-like command: exasol-explain
Maybe it will be useful for someone who tries EXASOL Explain: there is a missing field in the select statement in exasol-explain/scripts/sqlprofile.lua; after the temp_db_ram_peak field should follow:
max(PERSISTENT_DB_RAM_PEAK) as PERSISTENT_DB_RAM_PEAK
Otherwise "explain" and "explain_this" return an error "incorrect numbers of result column"

OrientDB: text searching using gremlin

I am using OrientDB and the Gremlin console that comes with it.
I am trying to search for a pattern in a text property. I have Email vertices with an eBodyText property. The problem is that the results of querying with an SQL-like command and with the Gremlin language are quite different.
If I use an SQL-like query such as:
select count(*) from Email where eBodyText like '%Syria%'
it returns 24.
But if I query in the Gremlin console such as:
g.V.has('eBodyText').filter{it.eBodyText.matches('.*Syria.*')}.count()
it returns nothing.
The same queries with a different keyword, 'memo', return 161 via SQL but only 20 via Gremlin.
Why do the results differ like this? Is there a problem with the syntax of the Gremlin command? Is there a better way to search text in Gremlin?
I guess there might be a problem with how properties are set in the upload script, which uses the Python driver pyorient.
Python script used to upload the dataset
Thanks for your help.
I tried with 2.1.15 and I had no problem.
These are the records.
EDIT:
I added some vertices to my DB and now the count() is 11.
QUERY:
g.V.has('eBodyText').filter{it.eBodyText.contains('Syria')}.count()
OUTPUT:
==>11
Hope it helps.
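A plausible explanation for the discrepancy, for what it's worth: Java's matches() must match the entire string, and '.' does not match newlines by default, so any multi-line email body fails '.*Syria.*'. Enabling DOTALL with (?s), or using a plain substring test as above, avoids that:

g.V.has('eBodyText').filter{it.eBodyText.matches('(?s).*Syria.*')}.count()
g.V.has('eBodyText').filter{it.eBodyText.contains('Syria')}.count()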

Jena-Fuseki requires dataset specified

I have a Jena-Fuseki server accessed via browser at http://localhost:3030/sparql.html. The query
select * where { }
results in an error:
Error 400: No dataset description in protocol request or in the query string
The query
select * from <http://xmlns.com/foaf/0.1/> where {}
results in an empty table.
The example queries in section 2.1, "Writing a Simple Query", of the SPARQL specification do not require a FROM clause. How can I configure Jena so that the examples execute without errors?
Also, how can I query which datasets are present in a database?
The endpoint "/sparql.html" is a general SPARQL query engine and needs to be told where to get the data from. That can be done in the protocol request or with FROM.
Fuseki can also be configured to have SPARQL services acting on a specific database. The URL will look like
http://localhost:3030/DATASET/sparql
where DATASET is your choice of name. See the documentation on configuration: http://jena.apache.org/documentation/serving_data/
[Jan 2015] Fuseki1 requires datasets to be given on the command line or in configuration. Fuseki2, soon to be released, has a UI for creating new datasets in a running server, as well as the Fuseki1-style configuration.
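For example, a quick way to try this from the command line (a sketch assuming the default port; the dataset name /ds is arbitrary):

fuseki-server --mem /ds
curl http://localhost:3030/ds/sparql --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'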
It's easy to miss the first time you use Fuseki, but you've got to navigate to your dataset, and from there, there's a special query box for that dataset:
start at http://localhost:3030/
click on Control Panel
Select your dataset from the dropdown menu, click "select"
run a query

SQL Performance: LIKE vs IN

In my logging database I have several qualifiers specifying the logged action. A few of them are built like 'Item.Action', e.g. 'Customer.Add'. I wonder which approach will be faster if I want to get all logging items that start with 'Customer.':
SELECT * FROM log WHERE action LIKE 'Customer.%'
or
SELECT * FROM log WHERE action IN ('Customer.Add', 'Customer.Delete', 'Customer.Update', 'Customer.Export', 'Customer.Import')
I use PostgreSQL.
It depends on the indexes on the log table. Most likely the queries will have the same performance. To check, use EXPLAIN or EXPLAIN ANALYZE. Queries with the same execution plan (the output of EXPLAIN) will have the same performance.
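For example, in PostgreSQL (the index name is illustrative; note that a plain btree index can serve the prefix LIKE only with the C locale or the text_pattern_ops operator class):

-- optional: an index usable by both the prefix LIKE and the IN list
CREATE INDEX log_action_pattern_idx ON log (action text_pattern_ops);

EXPLAIN ANALYZE SELECT * FROM log WHERE action LIKE 'Customer.%';
EXPLAIN ANALYZE SELECT * FROM log WHERE action IN ('Customer.Add', 'Customer.Delete', 'Customer.Update', 'Customer.Export', 'Customer.Import');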