Getting the following error in SQL*Loader

SQL*Loader-500: Unable to open file (C:\sqlloaderpra\tmp.txt)
SQL*Loader-553: file not found
SQL*Loader-509: System error: The system cannot find the file specified.
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
My control file is:
load data
infile 'C:\sqlloaderpra\tmp.txt'
into table tmpload
append
fields terminated by ","
trailing nullcols
(
  a integer external,
  b char nullif b=blanks
)
The file is there, but I still get this error.
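For reference, a minimal way to reproduce the problem from the command line looks like this (a sketch: the scott/tiger credentials and the tmp.ctl file name are placeholders, not taken from the question):

sqlldr userid=scott/tiger control=C:\sqlloaderpra\tmp.ctl log=C:\sqlloaderpra\tmp.log

SQL*Loader-500/553 mean that the sqlldr process itself cannot open the path, so it is worth checking from the same prompt that dir C:\sqlloaderpra\tmp.txt succeeds and that the account running sqlldr can read that folder.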

Related

PostgreSQL data import using COPY

I have a CSV file with 13 columns: Id, Assigned_to, Title, Serial_number, Description, Price, Date_assigned, Purchase_date, is_assigned, is_cleared_off, is_damaged, created_at, updated_at.
I am trying to import the data to a PostgreSQL server using the terminal.
I am using the following command:
COPY assets(id,assigned_to,title,serial_number,description,price,date_assigned,purchase_date,is_assigned,is_damaged,created_at,updated_at) FROM '/home/intern2/Documents/assets_final.csv' DELIMITER ‘,’ CSV HEADER;
where /home/intern2/Documents is the file directory and the file name is assets_final.csv
I am, however, getting this error:
ERROR: syntax error at or near "‘"
LINE 1: ...ome/intern2/Documents/assets_final.csv' DELIMITER ‘,’ CSV HE...
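The character PostgreSQL is pointing at is the typographic quote ‘ around the comma: the command uses ‘,’ (curly quotes, typically introduced by pasting from a word processor) instead of plain ASCII single quotes. Rewritten with straight quotes, and keeping the question's column list exactly as given (note it names 12 of the 13 columns, omitting is_cleared_off), the command would be:

COPY assets(id,assigned_to,title,serial_number,description,price,date_assigned,purchase_date,is_assigned,is_damaged,created_at,updated_at) FROM '/home/intern2/Documents/assets_final.csv' DELIMITER ',' CSV HEADER;

Server-side COPY also requires the file to be readable by the PostgreSQL server process; if that becomes a problem, psql's client-side \copy takes the same options.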

SSIS Flat File Import errors

I have an SSIS job that imports flat file data into my database, followed by a data conversion step.
The issue is that I keep getting errors on the "Violations" field; see below:
[Flat File Source [37]] Error: Data conversion failed. The data conversion for column "Violations" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".

[Flat File Source [37]] Error: The "Flat File Source.Outputs[Flat File Source Output].Columns[Violations]" failed because truncation occurred, and the truncation row disposition on "Flat File Source.Outputs[Flat File Source Output].Columns[Violations]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.

[Flat File Source [37]] Error: An error occurred while processing file "C:\Users\XXXX\XXXX\XXXX\XXXX\XXXX\XXXX\XXXX\Food_Inspections.csv" on data row 25.
In line 25 of the CSV file, this field is over 4000 characters long.
In the data conversion, I currently have the data type set to string [DT_STR] with length 8000 and code page 65001.
Row delimiter: {LF}
Column delimiter: Semicolon {;}
I have already looked at other suggested solutions, e.g. increasing OutputColumnWidth to 5000, but it did not help. Please advise how to solve this.

Need to export CSV file from PSQL on Windows (psql 13) using COPY TO

I am using
\COPY
(
SELECT *
FROM person
LEFT JOIN car ON car.id=person.car_id
)
TO 'c:/Users/nick-/Downloads/results.csv'
DELIMITER ',' CSV HEADER;
and am getting the error
ERROR: syntax error at or near "TO"
LINE 6: TO 'c:/Users/nick-/Downloads/results.csv'
I've also tried COPY rather than \COPY but I get this error
ERROR: could not open file "c:/Users/nick-/Downloads/results.csv" for writing: Permission denied
HINT: COPY TO instructs the PostgreSQL server process to write a file. You may want a client-side facility such as psql's \copy.
I have tried using all of the below
c:/Users/nick-/Downloads/results.csv
c://Users//nick-//Downloads//results.csv
'c:\Users\nick-\Downloads\results.csv'
'c:\\Users\\nick-\\Downloads\\results.csv'
and none have worked.
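The first error is a psql quirk rather than a path problem: \copy is a psql meta-command, and unlike SQL statements it must be written entirely on one line; when it is split across lines, psql ends up sending the remainder to the server as plain SQL, hence the syntax error at TO. A single-line version of the same command (a sketch of the question's own query, not a tested answer) would be:

\copy (SELECT * FROM person LEFT JOIN car ON car.id = person.car_id) TO 'c:/Users/nick-/Downloads/results.csv' DELIMITER ',' CSV HEADER

The second error is a separate issue: plain COPY runs on the server, so it is the PostgreSQL service account, not the logged-in Windows user, that needs write access to the Downloads folder, exactly as the HINT says.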

Getting Internal Server Error on pgSQL

I'm trying to import data from a Windows CSV (comma-delimited) file into the pgSQL faxtest1 table, but I keep getting an error saying "The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application."
The following is my code:
COPY faxtest1
FROM 'C:‪\Users\David\Desktop\test3.csv'
WITH DELIMITER AS ',' CSV ;
The CSV file is like:
Status,Fax ID
Fax to Email,2104
Fax to Email,2108
This is a bug in pgAdmin 4; hopefully they will fix it in the future.
In version 14, the Import/Export Data dialog has two tabs, "Options" and "Columns". Try manually selecting the columns one at a time, separated by commas, and see if this bypasses the error.
It worked for me.
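If the dialog keeps failing, another workaround is to skip pgAdmin and run the copy from psql instead (a sketch assuming the same table and file; HEADER is included because the sample file starts with a Status,Fax ID header row):

\copy faxtest1 FROM 'C:\Users\David\Desktop\test3.csv' WITH (FORMAT csv, HEADER)

Because \copy reads the file with the client's own permissions, it avoids both pgAdmin's dialog and any server-side file-access problems.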

Error when importing gz files into BigQuery

I ran into an error when importing gzipped tab-delimited files into BigQuery.
The output I got was:
root@a20c6fbdf9b5:/opt/batch/jobs# bq show -j bqjob_r5720e2f2267a5a5b_0000014d09571f27_1
Job infra-bedrock-861:bqjob_r5720e2f2267a5a5b_0000014d09571f27_1

 Job Type   State     Start Time        Duration   Bytes Processed
---------- --------- ----------------- ---------- -----------------
 load       FAILURE   30 Apr 08:00:44   0:02:05

Errors encountered during job execution. Bad character (ASCII 0) encountered: field starts with: <H:|\ufc0f\ufffd(>
Failure details:
- File: 1 / Line:1 / Field:1: Bad character (ASCII 0) encountered: field starts with: <\ufff>
- File: 1 / Line:3 / Field:1: Bad character (ASCII 0) encountered: field starts with: <\u0475\ufffd=\ufffd\ufffd\u03d6>
- File: 1 / Line:4 / Field:1: Bad character (ASCII 0) encountered: field starts with: <-\ufffd\ufffdY\u049a\ufffd>
- File: 1 / Line:6 / Field:1: Bad character (ASCII 0) encountered: field starts with: <\u018e\ufffd\ufffd\ufffd\ufffd>
I tried manually downloading the files, unzipping them, and then uploading them again. The uncompressed files could be imported into BigQuery without any problems.
This looks like a bug in BigQuery's handling of gzipped files.
Inspecting the job configuration, you include a non-gzip file as the first URI, ending in .../20150426/_SUCCESS. BigQuery uses the first file to determine whether compression is enabled.
Assuming this file is empty, you can remove it from your load request to fix this. If there is data in the file, attach a ".gz" suffix or re-order it so it is not first in the URI list.
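For illustration, a corrected load along those lines might look like this (a sketch: the dataset, table, and bucket names are invented, the table is assumed to already exist with a matching schema, and the wildcard is assumed to match only the gzipped part files so the empty _SUCCESS marker is skipped):

bq load --source_format=CSV --field_delimiter='\t' mydataset.mytable 'gs://my-bucket/20150426/part-*.gz'

With every URI ending in .gz, BigQuery's first-file compression check sees a gzip file and decompresses the rest consistently.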