Using Delete as a header - SQL

I am using the Text_JDBC40 jar and trying to fetch data from a CSV file using a SQL query. The CSV file has a header named Delete, so when I try to fetch data I get the error below.
Syntax error: Stopped parse at Delete
Renaming this column to something else fetches the data properly. Any idea why this is happening? Also, is there any alternative to the Text_JDBC40 jar?

DELETE is a reserved keyword in SQL, so the parser stops when it reaches a column with that name. If you are importing data into SQL from CSV, quote Delete with a grave accent (backtick, `) when creating and retrieving the data. Please try this; it may help.
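For example, assuming a table named employees (the table and other column names here are placeholders), the backtick-quoted form would look like this:

CREATE TABLE employees (`Delete` VARCHAR(10), other_column VARCHAR(50));
SELECT `Delete`, other_column FROM employees;

Support for quoted identifiers can vary between drivers, so check how Text_JDBC40 handles backticks.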

Related

SQL: I'm getting an error message when I try to upload data to create a query. How do I fix this?

I keep getting an error when I try to upload data from my own documents in CSV format under the table destination. How do I fix this so I can write queries for my data?
I tried changing the name of the file I was uploading and following the instructions from my course exactly, without any luck. I was expecting the data to be uploaded into my project file so I could write queries to analyze the data.

BigQuery - BQ extract - Multiple empty file generation

I'm trying to export data from a BigQuery table to a zip file on the command line using bq extract. It generated multiple empty files (containing only the header), along with one file with the correct data. Can someone please let me know why the empty files are generated?
Thanks
This is a known BigQuery issue that has already been reported. I suggest starring the issue and asking for an update on it.
I faced the same empty-files issue when using EXPORT DATA.
After a bit of research I found a workaround: put LIMIT xxx in your SELECT statement and it will do the trick.
You can look up the row count and use that as the LIMIT value.
SELECT ....
FROM ...
WHERE ...
LIMIT xxx
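For reference, a minimal sketch of the workaround applied to EXPORT DATA (the bucket path, table name, and the row count used for LIMIT are placeholders):

EXPORT DATA OPTIONS (
  uri = 'gs://your-bucket/export/file_*.csv',
  format = 'CSV',
  header = true,
  overwrite = true
) AS
SELECT *
FROM `your_project.your_dataset.your_table`
LIMIT 100000;  -- set this to the table's row count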

Trying to create a table and load data into the same table using Databricks and SQL

I Googled for a solution to create a table using Databricks and Azure SQL Server and load data into this same table. I found some sample code online, which seems pretty straightforward, but apparently there is an issue somewhere. Here is my code.
CREATE TABLE MyTable
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:sqlserver://server_name_here.database.windows.net:1433;database = db_name_here",
user "u_name",
password "p_wd",
dbtable "MyTable"
);
Now, here is my error.
Error in SQL statement: SQLServerException: Invalid object name 'MyTable'.
My password, unfortunately, has spaces in it. That could be the problem, perhaps, but I don't think so.
Basically, I would like this to loop recursively through the files in a folder and its sub-folders, and load every file whose name matches a pattern like 'ABC*' into a table. The blocker is that I also need the file name loaded into a field. So I want to load data from MANY files into 4 fields of actual data, plus 1 field that captures the file name. The only way I can distinguish the different data sets is by file name. Is this possible? Or is this an exercise in futility?
My suggestion is to use the Azure SQL Spark library, as also mentioned in the documentation:
https://docs.databricks.com/spark/latest/data-sources/sql-databases-azure.html#connect-to-spark-using-this-library
'Bulk Copy' is what you want to use to get good performance. Just load your file into a DataFrame and bulk copy it to Azure SQL:
https://docs.databricks.com/data/data-sources/sql-databases-azure.html#bulk-copy-to-azure-sql-database-or-sql-server
To read files from subfolders, the answer is here:
How to import multiple csv files in a single load?
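As a rough illustration of that flow, here is a sketch using Spark's plain JDBC writer rather than the dedicated bulk copy API (the path and connection details below are taken from the question and are placeholders):

// read the pipe-delimited files into a DataFrame
val df = spark.read
  .format("csv")
  .option("sep", "|")
  .option("header", "false")
  .load("/mnt/rawdata/2019/01/01/client/ABC*.gz")

// append the rows to the Azure SQL table over JDBC
df.write
  .format("jdbc")
  .option("url", "jdbc:sqlserver://server_name_here.database.windows.net:1433;database=db_name_here")
  .option("dbtable", "MyTable")
  .option("user", "u_name")
  .option("password", "p_wd")
  .mode("append")
  .save()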
I finally, finally, finally got this working.
val myDFCsv = spark.read.format("csv")
.option("sep","|")
.option("inferSchema","true")
.option("header","false")
.load("mnt/rawdata/2019/01/01/client/ABC*.gz")
myDFCsv.show()
myDFCsv.count()
Thanks for pointing me in the right direction, mauridb!!
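As a follow-up on the file-name requirement: Spark's built-in input_file_name() function can add the source path as an extra column. A minimal sketch building on the DataFrame above (the column name source_file is just an example):

import org.apache.spark.sql.functions.input_file_name

// add a column holding the full path of the file each row was read from
val myDFCsvWithFile = myDFCsv.withColumn("source_file", input_file_name())
myDFCsvWithFile.show()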

WAMP, phpMyAdmin, Import, CSV Error SQL query... #2006 - MySQL server has gone away

I'm getting this error:
Error SQL query… #2006 - MySQL server has gone away
I suspect the CSV file might be the source of the problem, as other CSV files work. I've tried changing the following values on the MySQL server:
key_buffer_size = 900M
max_allowed_packet = 900M
But that doesn't seem to fix the problem. I've also tried converting the file to SQL and XML, but it just doesn't want to import.
Any advice?
Here are the files:
CSV I'm trying to upload
CSV that works
With phpMyAdmin 4.4.13.1 and the CSV import, I was able to import your file, after I changed the first line to:
Username,Client Name,Date,Time,Published App,x
The only way I managed to import the file was to split it into smaller files and upload them one by one... which is strange, as the CSV wasn't that big: 4 MB, a thousand records...
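If phpMyAdmin keeps dropping the connection, another option is to skip it and load the CSV directly with LOAD DATA; a rough sketch, assuming a target table named sessions whose columns match the corrected header (the path and table name are placeholders):

LOAD DATA LOCAL INFILE '/path/to/your.csv'
INTO TABLE sessions
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(Username, `Client Name`, `Date`, `Time`, `Published App`, x);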

Writing a data flow to PostgreSQL

I know that by doing:
COPY test FROM '/path/to/csv/example.txt' DELIMITER ',' CSV;
I can import CSV data into PostgreSQL.
However, I do not have a static CSV file. My CSV file gets downloaded several times a day, and it includes data that has previously been imported into the database. So, to get a consistent database, I would have to leave out this old data.
My best-case idea would be something like the COPY command above. The worst case would be a Java program that manually checks each entry of the database against the CSV file. Any recommendations for the implementation?
I really appreciate your answer!
You can dump the latest data into a temp table using the COPY command and then merge the temp table into the live table, as sketched below.
If you are using a Java program to execute the COPY command, try the CopyManager API.
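A minimal SQL sketch of that staging approach, using INSERT ... ON CONFLICT as the merge step (PostgreSQL 15+ also offers a MERGE statement); it assumes the live table test has a unique key named id, so adjust the names and conflict key to your schema:

-- stage the freshly downloaded file
CREATE TEMP TABLE test_staging (LIKE test INCLUDING ALL);
COPY test_staging FROM '/path/to/csv/example.txt' DELIMITER ',' CSV;

-- copy over only the rows that are not already in the live table
INSERT INTO test
SELECT * FROM test_staging
ON CONFLICT (id) DO NOTHING;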