Hello guys, as the title says I am having trouble trying to import (create) a new database from a dump file. When I try to run the SQL as a query I get an error regarding COPY, and when I run it through the psql console I get "wrong command \n".
The SQL file looks like this (just a sample, of course, as the whole thing is quite big):
--
-- PostgreSQL database dump
--
-- Dumped from database version 9.1.12
-- Dumped by pg_dump version 9.3.3
-- Started on 2014-04-01 17:05:29
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- TOC entry 209 (class 1259 OID 32258844)
-- Name: stats_call; Type: TABLE; Schema: public; Owner: postgres; Tablespace:
--
CREATE TABLE bensonsorderlystats_call (
id integer,
callerid text,
entry timestamp with time zone,
oqid integer,
oqnumcalls integer,
oqannounced double precision,
oqentrypos integer,
oqexitpos integer,
oqholdtime double precision,
acdcallid text,
acdentry timestamp with time zone,
acdqueueid integer,
acdagents integer,
acdentrypos integer,
acdexitpos integer,
acdholdtime double precision,
holdtime double precision,
exit timestamp with time zone,
agentid integer,
talktime double precision,
calltime double precision,
callend timestamp with time zone,
reason integer,
wraptime double precision,
acdsubqueueid integer,
working integer,
calledback integer,
accountid integer,
needed integer,
ringingagentid integer,
ringtime double precision,
presented integer,
notecode integer,
note text,
recording text,
wrapcode integer
);
ALTER TABLE public.stats_call OWNER TO postgres;
--
-- TOC entry 2027 (class 0 OID 32258844)
-- Dependencies: 209
-- Data for Name: stats_call; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY stats (id, callerid, entry, oqid, oqnumcalls, oqannounced, oqentrypos, oqexitpos, oqholdtime, acdcallid, acdentry, acdqueueid, acdagents, acdentrypos, acdexitpos, acdholdtime, holdtime, exit, agentid, talktime, calltime, callend, reason, wraptime, acdsubqueueid, working, calledback, accountid, needed, ringingagentid, ringtime, presented, notecode, note, recording, wrapcode) FROM stdin;
1618693 unknown 2014-02-01 02:59:48.297+00 2512 \n \n \n \n 0 1391223590.58579 2014-02-01 02:59:48.297+00 1872 \n
When I run the import as shown above with
\i C:<path>/file.sql with delimiter \n
I get "wrong command \n". I have also tried:
`\i C:<path>/file.sql delimiter \n`
`\i C:<path>/file.sql`
Can anyone please tell me how to get this database into the server? Help appreciated. Thanks.
In general, you can issue \set ON_ERROR_STOP in psql before including a SQL file, to stop at the first error and not be flooded by subsequent errors.
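For example, a minimal psql session (the dump path here is just a placeholder, not your real one), assuming you are already connected to the target database:
-- stop at the first error instead of scrolling past hundreds of follow-up errors
\set ON_ERROR_STOP on
-- include the dump file (hypothetical path)
\i C:/dumps/file.sql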
When COPY tries to copy into a non-existent table, it fails and all the data after it is rejected with a lot of error messages.
Looking at the beginning of your dump, there seem to be a few problems indeed.
It creates a table named bensonsorderlystats_call but then assigns ownership to postgres on a different table, public.stats_call, which is not supposed to exist.
And then it tries to COPY data into a table named stats, which is also never created, assuming you're restoring into an empty database.
It looks as if someone manually edited the dump and messed up.
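For comparison, a consistent dump would use the same table name in all three places, along the lines of this minimal sketch (made up, with a heavily shortened column list, not your real dump):
-- sketch only: CREATE TABLE, ALTER TABLE and COPY must all refer to the same table
CREATE TABLE stats_call (
    id integer,
    callerid text
);
ALTER TABLE public.stats_call OWNER TO postgres;
COPY stats_call (id, callerid) FROM stdin;
1618693	unknown
\.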
Try psql -U username -f dump.sql database_name.
Related
I tried to create a table to import a CSV file, as you can see below, but this error came up. What do I do to fix it?
CREATE TABLE public.test_dataset (
id VARCHAR(255),
race_ethnicity VARCHAR(255),
sex CHAR(1),
date_of_svc DATE,
icd10_category VARCHAR(3),
billed_amt DOUBLE PRECISION,
allowed_amt DOUBLE PRECISION,
claim_status VARCHAR(255),
cpt INT
);
COPY public.test_dataset FROM 'C:\Users\marca\Downloads\Test dataset - SQL - Sep. 2021 - HCDA.csv' WITH CSV HEADER;
ERROR: extra data after last expected column
CONTEXT: COPY test_dataset, line 2: "A100B9111,african american,F,01/01/2020,F10, $350.00 , $250.00 ,Paid,99281,"
SQL state: 22P04
I have a PostgreSQL database dump made by pg_dump version 9.5.2, which contains the DDL and also INSERT INTO statements for each table in the database. The dump looks like this:
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
CREATE TABLE unimportant_table (
id integer NOT NULL,
col1 character varying
);
CREATE TABLE important_table (
id integer NOT NULL,
col2 character varying NOT NULL,
unimportant_col character varying NOT NULL
);
INSERT INTO unimportant_table VALUES (123456, 'some data split into
- multiple
- lines
just for fun');
INSERT INTO important_table VALUES (987654321, 'some important data', 'another crap split into
- lines');
...
-- thousands of inserts into both tables
The dump file is really large and it is produced by another company, so I am not able to influence the export process. I need to create 2 files from this dump:
All DDL statements (all statements that don't start with INSERT INTO)
All INSERT INTO important_table statements (I want to restore only some tables from the dump)
If every statement were on a single line, with no newline characters inside the data, it would be very easy to create the 2 SQL scripts with grep, for example:
grep -v '^INSERT INTO .*;$' my_dump.sql > ddl.sql
grep -o '^INSERT INTO important_table .*;$' my_dump.sql > important_table.sql
# Create empty structures
psql < ddl.sql
# Import only one table for now
psql < important_table.sql
At first I was thinking about using grep, but I could not find out how to process multiple lines at once. Then I tried sed, but it returns only single-line inserts. I also used https://regex101.com/ to work out the right regular expressions, but I don't know how to combine them with grep or sed:
^(?!(INSERT INTO)).*$ -- for ddl
^INSERT INTO important_table(\s|[[:alnum:]])*;$ -- for inserts
I found a similar question, pcregrep multiline SQL match, but there is no answer. I don't mind whether the solution uses grep, sed or whatever you suggest, but it should work on Ubuntu 18.04.4 LTS.
Here is a bash-based solution that uses Perl one-liners to prepare your SQL dump data for the subsequent grep statements.
In my approach, the goal is to get each SQL statement onto one line through a script that I called prepare.sh. It got a little more complicated because I wanted to accommodate semicolons and quotes within your insert data strings (these, along with the line breaks, are represented by their hex codes in the intermediate output):
EDIT: In response to @32cupo's comment, below is a modified set of scripts that avoids feeding large data sets to xargs all at once (although I don't have huge dump files to test it with):
#!/bin/bash
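# Read the dump from stdin and emit it with each SQL statement on a single line:
# protect statement-terminating semicolons with a marker, hex-encode backslashes,
# line breaks and quotes, then turn the marker back into ";" plus a newline.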
perl -pne 's/;(?=\s*$)/__ENDOFSTATEMENT__/g' \
| perl -pne 's/\\/\\\\x5c/g' \
| perl -pne 's/\n/\\\\x0a/g' \
| perl -pne 's/"/\\\\x22/g' \
| perl -pne 's/'\''/\\\\x27/g' \
| perl -pne 's/__ENDOFSTATEMENT__/;\n/g'
Then, a separate script (called ddl.sh) includes your grep statement for the DDL (and, with the help of the loop, only feeds smaller chunks (lines) into xargs):
#!/bin/bash
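# Keep only the non-INSERT statements (the DDL) and decode the hex escapes
# line by line via xargs/echo -e.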
while read -r line; do
<<<"$line" xargs -I{} echo -e "{}"
done < <(grep -viE '^(\\\\x0a)*insert into')
Another separate script (called important_table.sh) includes your grep statement for the inserts into important_table:
#!/bin/bash
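# Keep only the INSERTs into important_table and decode the hex escapes
# line by line via xargs/echo -e.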
while read -r line; do
<<<"$line" xargs -I{} echo -e "{}"
done < <(grep -iE '^(\\\\x0a)*insert into important_table')
Here is the set of scripts in action (please also note that I spiced up your insert data with some semicolons and quotes):
~/$ cat dump.sql
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
CREATE TABLE unimportant_table (
id integer NOT NULL,
col1 character varying
);
CREATE TABLE important_table (
id integer NOT NULL,
col2 character varying NOT NULL,
unimportant_col character varying NOT NULL
);
INSERT INTO unimportant_table VALUES (123456, 'some data split into
- multiple
- lines
;just for fun');
INSERT INTO important_table VALUES (987654321, 'some important ";data"', 'another crap split into
- lines;');
...
-- thousands of inserts into both tables
~/$ cat dump.sql | ./prepare.sh | ./ddl.sh >ddl.sql
~/$ cat ddl.sql
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
CREATE TABLE unimportant_table (
id integer NOT NULL,
col1 character varying
);
CREATE TABLE important_table (
id integer NOT NULL,
col2 character varying NOT NULL,
unimportant_col character varying NOT NULL
);
...
-- thousands of inserts into both tables
~/$ cat dump.sql | ./prepare.sh | ./important_table.sh > important_table.sql
~/$ cat important_table.sql
INSERT INTO important_table VALUES (987654321, 'some important ";data"', 'another crap split into
- lines;');
We are trying to insert a large string into a table column and are getting the error "length can't exceed maximum length(8388607 bytes)" (8388607 = 0x7FFFFF). The input data field length exceeds 10 MB.
HANA version SPS 9 (Rev 97)
Data type of variable and table column is CLOB
Using INSERT in a SQLSCRIPT Stored Procedure
The HANA data types documentation says that the maximum length of any LOB object is 2 GB (0x7FFFFFFF). Our string length is well within this limit, so this is very confounding. We would appreciate any hints to resolve this.
Thanks a lot.
---------- CODE
CREATE PROCEDURE XXX_SCHEMA.PROC_INSERT_INTO_CLOB
( IN DATA_CLOB CLOB )
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER DEFAULT SCHEMA XXX_SCHEMA AS
BEGIN
INSERT INTO "XXX_SCHEMA"."XXX::DB_YY_CLOB"
(
F1 ,
F2 ,
DATA_CLOB
)
SELECT 'ABC',
CURRENT_TIMESTAMP,
:DATA_CLOB
FROM DUMMY ;
END;
-- Table Defintion
table.schemaName = "XXX_SCHEMA";
table.tableType = ROWSTORE;
table.columns = [
{name = "F1";sqlType = NVARCHAR;nullable = false; length = 3;},
{name = "F2";sqlType = TIMESTAMP;nullable = true;},
{name = "DATA_CLOB";sqlType = CLOB;nullable = true;}];
The reason for the error is that you seem to use string methods to deal with the CLOB data.
When I tried simple things like inserting a really long value generated via
update rclob set data_clob = lpad ('X', 2000000000, 'Y');
I also received the error message
Could not execute 'update rclob set data_clob = lpad ('X', 2000000000, 'Y')'
SAP DBTech JDBC: [384]: string is too long: length can't exceed maximum length(8388607bytes) at function lpad() (at pos 29)
Since LPAD produces a string before it gets entered into the CLOB, the error message is thrown before the CLOB column is actually touched.
Generally LOB columns can only be inserted by binding the data to a parameter in the insert statement.
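For example, a minimal sketch (column names taken from the question; the ? is a client-side bind placeholder, and the exact binding call depends on your driver):
-- sketch: send the large value as a bound parameter instead of assembling it
-- with string functions such as LPAD inside the SQL text
INSERT INTO "XXX_SCHEMA"."XXX::DB_YY_CLOB" (F1, F2, DATA_CLOB)
VALUES ('ABC', CURRENT_TIMESTAMP, ?);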
I am getting an error in an Oracle 11g stored procedure. The error is...
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
It is happening at line 31, the line that contains out_cnt_tot := 0; I'm really not sure why there is anything wrong with that line. Another programmer created this procedure and I'm really not familiar with SQL procedures. Can anyone help me figure this out?
create or replace
PROCEDURE "FIP_BANKREC_PREP"
(
in_file_date in varchar2,
in_bank_code in varchar2,
out_cnt_apx_miss_no out integer,
out_cnt_prx_miss_no out integer,
out_cnt_apx_no_mtch out integer,
out_cnt_prx_no_mtch out integer,
out_cnt_ap_dup out integer,
out_cnt_pr_dup out integer,
out_cnt_bad out integer,
out_cnt_ap_load out integer,
out_cnt_pr_load out integer,
out_cnt_ap_not_load out integer,
out_cnt_pr_not_load out integer,
out_cnt_tot out integer,
out_message out varchar2
) as
file_date date;
ap_acct_no varchar2(16);
pr_acct_no varchar2(16);
-- ------------------------------------------------------
-- begin logic
-- ------------------------------------------------------
begin
file_date := to_date(in_file_date,'yyyymmdd');
out_cnt_tot := 0; --- THE ERROR IS ON THIS LINE ---
out_message := 'Test Message';
select brec_acct_code into ap_acct_no
from MSSU.zwkfi_bankrec_accts
where brec_acct_bank = in_bank_code
and brec_acct_type = 'AP';
select brec_acct_code into pr_acct_no
from MSSU.zwkfi_bankrec_accts
where brec_acct_bank = in_bank_code
and brec_acct_type = 'PR';
-- The rest of the procedure...
Simple demo of the scenario mentioned in comments:
create or replace procedure p42(out_message out varchar2) as
begin
out_message := 'Test message';
end p42;
/
If I call that with a variable that is declared big enough, it's fine. I have a 12-char variable, so assigning a 12-char value is not a problem:
declare
msg varchar2(12);
begin
p42(msg);
end;
/
anonymous block completed
But if I make a mistake and make the caller's variable too small I get the error you're seeing:
declare
msg varchar2(10);
begin
p42(msg);
end;
/
Error report:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "STACKOVERFLOW.P42", line 3
ORA-06512: at line 4
06502. 00000 - "PL/SQL: numeric or value error%s"
*Cause:
*Action:
The error stack shows both the line in the procedure that errored (line 3), and the line in the caller that triggered it (line 4). Depending on where you're calling it you might not have the whole stack, of course.
You mentioned that there will be various error messages in the future. You need to make sure that anything that ever calls this defines the variables to be big enough to cope with any of your messages. If they were stored in a table you could semi-automate that, otherwise it'll be a manual code review check.
OK, I saw your C# comment after posting this. It looks like you're calling this constructor; that doesn't say what default size it gets, but it's not unreasonable to think it might be 1. So you need to call this constructor instead to specify the size explicitly:
OracleParameter(String, OracleType, Int32)
Initializes a new instance of the OracleParameter class that uses the parameter name,
data type, and length.
... something like:
OracleParameter prm15 = new OracleParameter("out_str_message",
OracleDbType.Varchar2, 80);
Unless there's a way to reset the size after creation, which I can't see. (Not something I've ever used!).
The query is:
SET TERM ^ ;
ALTER PROCEDURE SALVARROTA (
datahora timestamp,
distancia double precision,
custo double precision,
capacidadelivre double precision,
capacidadetotal double precision,
nome varchar(50),
depositox double precision,
depositoy double precision,
chegadax double precision,
chegaday double precision,
arquivoshp blob sub_type 0 segment size 80,
arquivodbf blob sub_type 0 segment size 80,
arquivoshx blob sub_type 0 segment size 80,
veiculo varchar(50),
placa varchar(8),
valor double precision)
returns (
id integer)
as
BEGIN INSERT INTO ROTAS
(DATAHORA, DISTANCIA, CUSTO, CAPACIDADELIVRE, CAPACIDADETOTAL, NOME, DEPOSITOX, DEPOSITOY, CHEGADAX, CHEGADAY, ARQUIVOSHP, ARQUIVODBF, ARQUIVOSHX, VEICULO, PLACA, VALOR)
VALUES (:DATAHORA, :DISTANCIA, :CUSTO, :CAPACIDADELIVRE, :CAPACIDADETOTAL, :NOME, :DEPOSITOX, :DEPOSITOY, :CHEGADAX, :CHEGADAY, :ARQUIVOSHP, :ARQUIVODBF, :ARQUIVOSHX, :VEICULO, :PLACA, :VALOR);
SELECT GEN_ID (GEN_ROTAS_ID,0) FROM RDB$DATABASE INTO ID; SUSPEND; END
^
SET TERM ; ^
I get the error:
Invalid token.
Dynamic SQL Error.
SQL error code = -104.
Token unknown - line 1, column 5.
TERM.
I'm using IBExpert to execute it, and it's a 2.1 Firebird database.
Don't use the SET TERM directive in the SQL Editor window of IBExpert. It is allowed only in the Script Executive window.
IBExpert generates the script automatically (use the stored procedure editor and click on the flash button), and then press the "Copy script" button.
I think it always generates CREATE OR ALTER PROCEDURE...
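The generated script then belongs in the Script Executive; a minimal sketch of its shape (with a made-up procedure name and body, reusing the generator from your procedure) would be:
-- sketch only: not the script IBExpert will actually generate for SALVARROTA
SET TERM ^ ;

CREATE OR ALTER PROCEDURE DEMO_PROC
RETURNS (ID INTEGER)
AS
BEGIN
  ID = GEN_ID(GEN_ROTAS_ID, 0);
  SUSPEND;
END^

SET TERM ; ^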