SPARQL query runs on Turtle files downloaded from another system, but not on the same files downloaded on my system

In GraphDB, I'm trying to run a CONSTRUCT query on a set of RDF data. Once it has run, I download the query result in Turtle format, import that Turtle file back into GraphDB, and execute a SELECT query to get the result in CSV format. But the SELECT query returns nothing. However, if another person runs the same CONSTRUCT query on his system, downloads the result in Turtle format, and sends that Turtle file to me, I can easily run the SELECT query on it.
Even though we are doing the same thing, the files downloaded on my system give no results for the SELECT query.
What might be the possible reason?
I tried running the SELECT query on the Turtle file I downloaded and expected it to run and give me the result.
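No answer was posted for this, but a first diagnostic step (my suggestion, not from the thread) is to check whether the import actually loaded any triples; if the count below is zero, the problem lies in the downloaded file or the import itself (encoding, wrong repository, wrong named graph) rather than in the SELECT query:

# Count all triples in the repository after importing the Turtle file
SELECT (COUNT(*) AS ?triples) WHERE { ?s ?p ?o }

# Inspect a few triples to verify the IRIs match what the SELECT query expects
SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10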

Related

Need to simulate resourceName with full table path in Log Explorer

I need to understand under what circumstances the protoPayload.resourceName with the full table path, i.e. projects/<project_id>/datasets/<dataset_id>/tables/<table_id>, appears in the Log Explorer, as shown in the example below.
The entries below were generated by a Composer DAG running a KubernetesPodOperator that executed some dbt commands on some models. On the basis of this, I have a sink linked to Pub/Sub for further processing.
As seen in the image, the resourceName value appears as:
projects/gcp-project-name/datasets/dataset-name/tables/table-name
I have shaded the actual values of the project ID, dataset ID, and table name.
I can't run a similar DAG job with the KubernetesPodOperator on test tables owing to environment restrictions, so I tried running some UPDATE and INSERT queries using the BigQuery editor. In that case, the value of protoPayload.resourceName comes out as:
projects/gcp-project-name/jobs/bxuxjob_
I tried the same queries from a Composer DAG using BigQueryInsertJobOperator. In that case, the value of protoPayload.resourceName comes out as:
projects/gcp-project-name/jobs/airflow_<>_
Here is my question: what operation or operations in BigQuery will give me protoPayload.resourceName in the form I am expecting, i.e.:
projects/<project_id>/datasets/<dataset_id>/tables/<table_id>
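No answer was posted in this thread. My own understanding (an assumption on my part, not something confirmed here) is that the table-level path comes from the BigQueryAuditMetadata data-access events (metadata.tableDataRead / metadata.tableDataChange), which are written alongside the job-level entries. If that is right, even plain DML such as the sketch below should yield one log entry with the job path and a separate one with the full table path:

-- Hypothetical project, dataset, table, and column names, for illustration only.
-- The job lifecycle entry should carry resourceName = projects/.../jobs/...,
-- while the tableDataChange data-access entry should carry
-- resourceName = projects/my-project/datasets/my_dataset/tables/my_table.
UPDATE `my-project.my_dataset.my_table`
SET processed = TRUE
WHERE run_date = CURRENT_DATE();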

Presto Trino - Execute from SQL file

Beginner here, using the CLI for Presto/Trino (not sure of the right term; it looks to be a command line), and we are using Hive.
I can run SELECTs and create tables. I'm trying to run multiple queries at once, so I created a SQL file and uploaded it to the Hive folder structure, thinking I could execute all of them at once instead of going one by one.
How do I initiate the process of running SQL queries from a file?
I tried --execute file user/hivefile.sql > result and am getting nowhere.
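No answer was recorded here, but for what it's worth: in the Presto/Trino CLI, --execute expects a literal SQL string, while statements stored in a file are run with --file (or -f). The file also has to live on the local filesystem of the machine running the CLI, not inside the Hive folder structure. A sketch, with a placeholder server URL and path:

trino --server http://coordinator:8080 --catalog hive --schema default --file /home/user/hivefile.sql > result.txt

Each statement in the file must be terminated with a semicolon; the CLI then runs them in order.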

How do you query data from only the last file uploaded in cloud storage with BigQuery

Every day I upload a new file to a Cloud Storage bucket. The file is stored in JSON-NL format. I have a BigQuery table (set up as an external table) connected to this bucket. Each file is named with the date of its upload. If I want to query only the most recent file, the best option I have found so far is to parse the _FILE_NAME pseudo-column in my SQL query and match it against the current date. However, the parsing is a bit messy, so I'm wondering whether there is a better solution.
What are other options to query only the most recent file? Should I set this up differently?
There isn't a better solution. Use a script to parse the pseudo-column with the file name, get the latest one, and then query it (with an EXECUTE IMMEDIATE). No other solution so far.
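A minimal sketch of the pseudo-column idea; this version uses a plain subquery rather than the scripted EXECUTE IMMEDIATE the answer describes, and the table name and the assumption that files are named with a sortable date are mine, not from the thread. Note the subquery still scans every file behind the external table:

-- Assumes files named like gs://bucket/2024-01-31.json, so the
-- lexicographically greatest _FILE_NAME is also the most recent upload.
SELECT *
FROM `my_project.my_dataset.ext_table`
WHERE _FILE_NAME = (
  SELECT MAX(_FILE_NAME)
  FROM `my_project.my_dataset.ext_table`
);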

SQL - insert image from relative path?

I'm trying to write a SQL script that can be transferred to another computer, so that someone else gets the whole database with no problem.
What I'm fighting with is how to make a relative path for an image.
Let's say that someone will have the script in c:\Documents\script.sql
But I don't know whether they will keep it there.
I have a folder with images that I want to load into my database.
So here's a fragment of my script; how do I make the path relative? Let's say that images/ and script.sql are in the same folder:
INSERT dbMagazynier.dbo.Produkty (Nazwa, Cena, Opis, Zdjecie, ID_Producenct)
SELECT 'jeansy', 199.00, 'jenasy jak na zdjeciu', Zdjecie.*, 5
FROM OPENROWSET (BULK '\images\jeansy.jpg', SINGLE_BLOB) Zdjecie
But SQL Server 2012 says that it can't find my jeans image.
I don't believe you can work with relative paths in SQL Server. You can check this question for more: Relative path in t sql?
The best you can do is declare a variable at the top of the script, specify the file location there, and use that variable later in the code of the script. You can also put some comments next to the variable with instructions for people using the script.
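A sketch of that variable approach, reusing the INSERT from the question (the directory value is a placeholder to edit per machine). One wrinkle worth flagging: OPENROWSET(BULK ...) only accepts a string literal for the path, so the variable has to be spliced in via dynamic SQL:

-- Edit this one line after moving the script to another computer.
DECLARE @ImageDir nvarchar(260) = N'C:\Documents\images\';

DECLARE @Sql nvarchar(max) = N'
INSERT dbMagazynier.dbo.Produkty (Nazwa, Cena, Opis, Zdjecie, ID_Producenct)
SELECT ''jeansy'', 199.00, ''jenasy jak na zdjeciu'', Zdjecie.*, 5
FROM OPENROWSET (BULK ''' + @ImageDir + N'jeansy.jpg'', SINGLE_BLOB) Zdjecie;';

EXEC sp_executesql @Sql;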

How can I store the result of a SQL query in a CSV file using Squirrel?

Version 3.0.3. It's a fairly large result-set, around 3 million rows.
Martin pretty much has this right.
The TL;DR version is that you need the "SQLScripts" plugin (which is one of the "standard" plugins), and then you can select these menu options: Session > Scripts > Store Result of SQL in File
I'm looking at version 3.4. I don't know when this feature was introduced, but you may need to upgrade if you don't have, and cannot install, the SQLScripts plugin.
Instructions for installing a new plugin can be found at: http://squirrel-sql.sourceforge.net/user-manual/quick_start.html#plugins
But if you're performing a fresh install of Squirrel you can simply select the "SQLScripts" plugin during the installation.
Here's the long version:
Run the query
Connect to the database. Click on the SQL tab. Enter your query. Hit the run button (or Ctrl-Enter).
You should see the first 100 rows or so in the results area in the bottom half of the pane (depending upon how you've configured the Limit Rows option).
Export the full results
Open the Session menu. Select the Scripts item (nearly at the bottom of this long menu). Select Store Result of SQL in File.
This opens a dialog box where you can configure your export. Make sure you check Export the complete result set to get everything.
I haven't tried this with a 3 million row result set, but I have noticed that Squirrel seems to stream the data to disk (rather than reading it all into memory before writing), so I don't see any reason why it wouldn't work with an arbitrarily large file.
Note that you can export directly to a file by using Ctrl-T to invoke the tools popup and selecting sql2file.
I have found a way to do this; there is nice support for it in Squirrel. Run the SQL SELECT (the 100-row limit will be ignored by the exporter, don't worry). Then, in the main menu, choose Session > Scripts > Store Result of SQL in File. This functionality may not be present by default; it may come from one of the standard plugins that is not installed by default, though I don't know which plugin.
I also wanted to export the results of a SQL query to a CSV file using SquirrelSQL. However, according to the changes file, it seems that this functionality is not supported even in SquirrelSQL 3.3.0.
So far I was able to export only the data shown in the 'result table' of the SQL query by right-clicking on the table > Export to CSV. The table size is 100 rows by default, and so is the CSV export. You may change the table size in Session Properties > SQL > Limit rows, e.g. change the size to 10000 and your export will also contain 10000 rows. The question is how SquirrelSql will deal with really big result sets (millions of rows)...
Run from your GUI (note that this is PostgreSQL's COPY syntax, and the output path is on the database server):
COPY (SELECT * FROM some_table) TO '/some/path/some_table.csv' WITH CSV HEADER
Using Squirrel 3.5.0.
The "Store Result of SQL in File" option is great if you only have a simple SELECT query; a more complex one with parameters won't work.
Even exporting a result of 600,000+ rows to a CSV file can fail.