Jena-Fuseki requires dataset specified - sparql

I have a Jena Fuseki server, accessed via the browser at http://localhost:3030/sparql.html. The query
select * where { }
results in an error:
Error 400: No dataset description in protocol request or in the query string
The query
select * from <http://xmlns.com/foaf/0.1/> where {}
results in an empty table.
The example queries in section 2.1 "Writing a Simple Query" of the SPARQL specification do not require a FROM clause. How do I configure Jena so that these examples execute without errors?
How can I query which datasets are present in the database?

The endpoint "/sparql.html" is a general SPARQL query engine and needs to be told where to get the data from. That can be done in the protocol request or with "FROM".
Fuseki can also be configured to have SPARQL services acting on a specific database. The URL will look like
http://localhost:3030/DATASET/sparql
where DATASET is your choice of name. See the documentation on configuration. http://jena.apache.org/documentation/serving_data/
[Jan 2015] Fuseki1 requires datasets to be given on the command line or in configuration. Fuseki2, soon to be released, has a UI for creating new datasets in a running server, as well as the Fuseki1-style configuration.
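For example, once a dataset service exists, it can be queried programmatically with Jena ARQ. This is only a minimal sketch: the dataset name "ds" and the default port are assumptions, and it simply prints whatever triples are in the default graph.
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSetFormatter;

public class FusekiQueryExample {
    public static void main(String[] args) {
        // Dataset-specific query service; "ds" is a placeholder dataset name
        String service = "http://localhost:3030/ds/sparql";
        String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(service, query)) {
            // Print the result table to the console
            ResultSetFormatter.out(System.out, qe.execSelect());
        }
    }
}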

It's easy to miss the first time you use Fuseki, but you've got to navigate to your dataset, and from there, there's a special query box for that dataset:
Start at http://localhost:3030/
Click on Control Panel
Select your dataset from the dropdown menu, click "select"
Run a query

Related

Mulesoft not able to pass dynamic SQL queries based on environments

Hello, for demonstration purposes I have trimmed down my actual SQL query.
I have a SQL query
SELECT *
FROM dbdev.training.courses
where dbdev is my DEV database name. When I migrate to the TEST env, I want my query to dynamically change to
SELECT *
FROM dbtest.training.courses
I tried using input parameters like {env: p('db_name')} and using it in the query as
SELECT * FROM :env.training.courses
or
SELECT * FROM (:env).training.courses
but none of them worked. I don't want my SQL query in a properties file.
Can you please suggest a way to write my SQL query dynamically based on environment?
The only alternative way I can think of is to deploy separate JARs with different code for each environment.
You can set the value of the property to a variable and then use the variable with string interpolation.
Warning: creating dynamic SQL queries using any kind of string manipulation may expose your application to SQL injection security vulnerabilities.
Example:
#['SELECT * FROM $(vars.database default "dbtest").training.courses']
Actually, you can do a completely dynamic or partially dynamic query using the MuleSoft DB connector.
Please see this repo:
https://github.com/TheComputerClassroom/dynamicSQLGETandPATCH
Also, I'm about to post an update that allows joins.
At a high level, this is a "Query Builder" where the code that builds the query is written in DataWeave 2. I'm working on another version that allows joins between entities, too.
If you have questions, feel free to reply.
One way to do it is:
Create a variable before the DB Connector:
getTableName = ${env}.training.courses
Write the SQL query with the variable interpolated:
#['SELECT * FROM $(vars.getTableName)']
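Putting the two answers together, a minimal sketch of the flow configuration could look like this; the connector config name Database_Config and the property name env are assumptions, not taken from your project. As the warning above says, treat any interpolated value as a potential SQL injection vector and make sure it cannot be influenced by user input.
<!-- Resolve the environment-specific schema once, before the DB connector -->
<set-variable variableName="getTableName" value="${env}.training.courses" />

<!-- Interpolate the variable into the query text with DataWeave -->
<db:select config-ref="Database_Config">
    <db:sql>#["SELECT * FROM $(vars.getTableName)"]</db:sql>
</db:select>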

How do I edit the BigQuery Connector for Excel .iqy file to have the SQL statement already in it instead of relying on input from an Excel cell?

I'm having an issue similar to bigquery excel connector - query larger than 256 char
However, I AM referencing a cell range and get the result:
"WARNING Request failed: Error. Unable to execute query. 400 { code : 400, errors : [ { domain : global, location : query, locationType : other, message : 1.593 - 1.593: No query found., reason : invalidQuery } ], message : 1.593 - 1.593: No query found. }"
Perhaps I'm "splitting" the query incorrectly? I assumed each cell only needed to be less than 256 characters, and it would just concatenate subsequent cells in the range specified to the end of the string in the preceding cells.
Every help document I've found shows simple SQL statements, and I can run simple ones, but the query I really need to work has a SELECT statement in the WHERE clause for a field. I've tried joining the table referenced in the WHERE clause to see if that makes the statement simpler and more easily recognizable as a query, but no luck.
I've tried opening the .iqy file (that BigQuery originally had me download) in Notepad to see if I could just put the query there, but I cannot find any documentation on the syntax of these files, so when I load it into Excel it still prompts for the query to be entered.
The final result doesn't need to have the query read from a cell reference; in fact, if it could all just be in the .iqy file, that would be preferable: less chance of users mucking up the data.
Make sure to URL-encode the parameters (query, project and key) in the .iqy file. Use an online tool like https://meyerweb.com/eric/tools/dencoder
If you had the .iqy already loaded in Excel before making the above changes, you must delete the query definition: go into Properties, uncheck "Save query definition", then connect to the .iqy again.
Not sure what the max size of q (the query) is for https://bigquery-connector.appspot.com, but I recommend using a BigQuery VIEW instead:
it hides the SQL plumbing and hence reduces the size of the SQL passed to the API. URL encoding the query can then be as easy as replacing spaces with +
you can tune/change the view definition in BigQuery without having to rollout a new .iqy to your users
implement some sort of row-level security using CURRENT_USER()...
But that's another topic !
Finally, coming back to the .iqy, you can combine and embed parameters in the query like so:
q=select+*+from+mydataset.myview+where+FiscalYear=["Year", "Enter a year:"]&p=myproject&k=myURLencodedKey
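For reference, an Excel web query (.iqy) file is just a short text file, so the end result can look roughly like the sketch below. The header lines follow the generic .iqy layout and the URL and parameter values are placeholders; the file the connector generated may instead append the q/p/k string to the URL itself, so mirror whatever layout the downloaded file uses.
WEB
1
https://bigquery-connector.appspot.com/
q=select+*+from+mydataset.myview+where+FiscalYear=["Year", "Enter a year:"]&p=myproject&k=myURLencodedKey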

Parameterize a JDBC SQL Query in SOAP UI from a Custom Property

To proceed with a database validation, I need to compare a record in the DB with data that is dynamically generated in the previous REST response, using SoapUI.
I have already captured the property value using a Property Transfer step and successfully stored the required value in a custom property on the test case, which I reference using property expansion, say ${TestCase#customerId}.
My intention is to use that particular value stored in the custom properties to query the result I am expecting, in the JDBC Request test step.
The query which I have drafted with the parameter is as below :
Select *
From ABC.SEC_CUST
Where ABC.SEC_CUST.CUSTOMER_ID = ${TestCase#customerId}
The response I receive after executing is as below.
Error getting response; java.sql.SQLSyntaxErrorException : ORA-00911: Invalid character.
But when I run the query without the parameterized value, it executes perfectly. So I tend towards the conclusion that there is a syntax issue in how I have written the parameter in the query.
However, I am unable to find the correct way to reference the parameter in the query in SoapUI.
Can anyone with experience in SoapUI, please assist me on this?
That is not working because property expansion is something only SoapUI knows about, not the SQL engine.
To get it to work, you need to define variables at the top of the JDBC Request test step for all the parameters that are going to be used in the SQL query.
You forgot a '#'
Select *
From ABC.SEC_CUST
Where ABC.SEC_CUST.CUSTOMER_ID = ${#TestCase#customerId}
Try this.
Select * From ABC.SEC_CUST
Where ABC.SEC_CUST.CUSTOMER_ID = :customerId
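For the :customerId form to work, the JDBC Request test step itself needs a property holding the expanded value, so that SoapUI can bind it as a query parameter. A rough sketch of the setup (the property name is just an example):
In the JDBC Request test step's properties, add: customerId = ${#TestCase#customerId}
Then use it in the query:
Select * From ABC.SEC_CUST
Where ABC.SEC_CUST.CUSTOMER_ID = :customerId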

OrientDB: text searching using gremlin

I am using OrientDB and the Gremlin console that comes with it.
I am trying to search for a pattern in a text property. I have Email vertices with an eBodyText property. The problem is that the results of querying with an SQL-like command and with Gremlin are quite different.
If I use an SQL-like query such as:
select count(*) from Email where eBodyText like '%Syria%'
it returns 24.
But if I query in gremlin console such as:
g.V.has('eBodyText').filter{it.eBodyText.matches('.*Syria.*')}.count()
it returns nothing.
The same queries with a different keyword, 'memo', return 161 via SQL but 20 via Gremlin.
Why does it behave like this? Is there a problem with the syntax of the Gremlin command? Is there a better way to search text in Gremlin?
I guess there might be a problem with how properties are set in the upload script, which uses the Python driver 'pyorient'.
Python script used to upload the dataset
Thanks for your help.
I tried with 2.1.15 and I had no problem.
These are the records.
EDITED
I added some vertices to my DB and now the count() is 11.
QUERY:
g.V.has('eBodyText').filter{it.eBodyText.contains('Syria')}.count()
OUTPUT:
==>11
Hope it helps.
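A likely reason the original matches() version returned nothing: in Java (and therefore in the Gremlin/Groovy closure), matches() must match the entire string, and '.' does not match line terminators by default, so any multi-line email body fails against '.*Syria.*'. If you prefer to keep a regex instead of contains(), enabling DOTALL mode with the inline (?s) flag should give the same counts; this is just a sketch against the same eBodyText property:
g.V.has('eBodyText').filter{it.eBodyText.matches('(?s).*Syria.*')}.count()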

Prepared Statement support in Apache Ignite Cache API

Is any facility like prepared statements supported in the IgniteCache API, to avoid parsing the query each time? I saw that a Jira issue was raised for this, and it says it is resolved in version 1.5.0.final:
https://issues.apache.org/jira/browse/IGNITE-1856, but I could not find any documentation for this on the Apache Ignite site. I know that we can use prepared statements by connecting via a JDBC connection, but that does not suit my use case.
My code looks like the below; this query will be called again and again with different parameters:
IgniteCache<Integer, Subscriber> subscriberCache = rocCachemanager.getCache("subscriberCache");
SqlQuery<Integer, Subscriber> sql = new SqlQuery<>(Subscriber.class,
        "from Subscriber where Subscriber.MSISDNNo = ? and Subscriber.status = 'Active'");
sql.setArgs("SomeNumber");
QueryCursor<Entry<Integer, Subscriber>> cursor = subscriberCache.query(sql);
Statements are cached automatically; no action is required. If your query text does not change and only the parameters do, Ignite will not parse the query again.
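So it is enough to build the SqlQuery once and only change the arguments between executions. A minimal sketch along the lines of the code in the question (it reuses the subscriberCache and Subscriber class from above; the MSISDN values are made up):
SqlQuery<Integer, Subscriber> sql = new SqlQuery<>(Subscriber.class,
        "from Subscriber where Subscriber.MSISDNNo = ? and Subscriber.status = 'Active'");

for (String msisdn : java.util.Arrays.asList("111111", "222222", "333333")) {
    sql.setArgs(msisdn);
    // Same query text each time, so Ignite reuses the already-parsed statement
    try (QueryCursor<Entry<Integer, Subscriber>> cursor = subscriberCache.query(sql)) {
        cursor.forEach(e -> System.out.println(e.getValue()));
    }
}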