Importing a pgsql file via pgAdmin - sql

I work with MySQL, but one of my clients gave me a project that uses PostgreSQL. He gave me the project files and a db file with the extension '.pgsql'.
I installed pgAdmin and created a test db, but I don't know how to import the pgsql file. I tried copy-pasting the queries into the script editor; everything up to the table definitions executes fine, but the data queries throw an error.
The data format also looks strange to me; I don't know whether it's the correct query format or not.
This is a glimpse of the pgsql file:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: kjh; Type: TABLE; Schema: public; Owner: design; Tablespace:
--
CREATE TABLE kjh (
mknh integer NOT NULL,
jkh integer NOT NULL
);
ALTER TABLE public.kjh OWNER TO design;
--
-- Name: TABLE kjh; Type: COMMENT; Schema: public; Owner: design
--
-- ...and so on
These lines in the pgsql file are throwing the error:
--
-- Data for Name: kjh; Type: TABLE DATA; Schema: public; Owner: design
--
COPY kjh (mknh, jkh) FROM stdin;
1 1
\.
--
-- Data for Name: w_ads; Type: TABLE DATA; Schema: public; Owner: design
--
COPY w_ads (id, link, city, type, name, enabled, "time", hits, isimg) FROM stdin;
44 # -1 0 1 t 1 20 1
\.
Any suggestions?

You are likely running into problems with delimiters; try something like this:
COPY kjh (mknh, jkh) FROM stdin WITH DELIMITER ' ';
1 1
\.
This should execute just fine.
Also note that your pgsql file has an inconsistent number of spaces between values, so you may have to prepare the file by replacing multiple spaces between values with a single delimiter (two spaces with a single-space delimiter will result in an error, since Postgres will assume you are providing an empty column value).
I would also suggest choosing a delimiter other than space, something like ; or ^.
Not sure whether COPY ... FROM stdin will work from pgAdmin (it should), but if it fails, try running the query from the CLI using the psql command.
For more info on delimiters and COPY syntax, refer to the docs.
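One more hedged possibility: pg_dump normally writes COPY data with a single tab between columns, so if copy-pasting the file turned those tabs into runs of spaces, you can squeeze them back into tabs instead of declaring a space delimiter. A minimal sketch (file names are made up, sample row taken from the question):

```shell
# Hypothetical cleanup: each run of spaces between column values becomes
# a single tab, which is the default delimiter COPY ... FROM stdin expects.
printf '44 #  -1 0 1 t 1 20 1\n' > w_ads_row.txt      # garbled sample row
tr -s ' ' '\t' < w_ads_row.txt > w_ads_row.fixed.txt  # runs of spaces -> one tab
cat w_ads_row.fixed.txt
```

This is only safe if none of your text columns legitimately contain spaces; otherwise the explicit-delimiter approach above is the better fix.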

Related

How to add error handling using SQL in snowflake

I've written the stored procedure below using SQL in Snowflake. It truncates a table and loads new data into it by first copying the data from a source, applying a bit of processing, and loading it into the target table that we truncated.
I've added nested BEGIN and END statements to try to add error handling along with IF/ELSE statements, but none of them worked. I want to first test whether the copy succeeded; if yes, the code should run the second INSERT statement, which brings the data to staging, where we refine it. There I want to add a second check that verifies the rows were added successfully. Lastly, we copy into the target table after all the checks have passed.
CREATE OR REPLACE PROCEDURE DEV_NMC_ITEM_AND_PSYCHOMETRIC_DM.STAGE2B."SP_N_1Test"("STAGE_S3" VARCHAR(16777216), "STAGE_OUTPUT" VARCHAR(16777216))
RETURNS VARCHAR(16777216)
LANGUAGE SQL
EXECUTE AS CALLER
AS '
DECLARE
Stage_nm_s3 STRING;
begin
truncate table "STAGE2A"."T001_IRF_STUDENT_FORM_S3";
execute immediate ''COPY INTO "STAGE2A"."T001_IRF_STUDENT_FORM_S3"
FROM ( select
a bunch of columns
from #stage2a.''||:STAGE_S3||'')
pattern= ''''.*_IRF_.*\\\\.csv''''
file_format = (type=csv, skip_header=1 )'';
begin
Insert into "STAGE2B"."T011_IRF_STUDENT_FORM_V001" (
a bunch of columns
SELECT
a bunch of columns
from "STAGE2A"."V001_IRF_STUDENT_FORM_T001";
begin
execute immediate ''copy into #stage2a.''||:STAGE_OUTPUT||''/T001_IRF_STUDENT_FORM_S3
from (SELECT
a bunch of columns
from "STAGE2B"."T011_IRF_STUDENT_FORM_V001")
file_format = ( format_name = F_CSV type=csv compression = none)
header = True
SINGLE = FALSE
OVERWRITE = TRUE
max_file_size=524288000 '';
return ''Load process completed for IRF_STUDENT_FORM_S3'';
end;
end;
end;
';
I'm afraid you will need to wrap your SQL statements in a JavaScript-syntax stored procedure to use a try/catch block.
Here's some more explanation on that topic: Error handling for stored procedures
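A rough, hypothetical sketch of what that looks like; the COPY and INSERT statements from the original procedure are elided with ..., and the object names come from the question:

```sql
CREATE OR REPLACE PROCEDURE STAGE2B."SP_N_1Test"(STAGE_S3 VARCHAR, STAGE_OUTPUT VARCHAR)
RETURNS VARCHAR
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
try {
    // Each statement that fails throws, so every step is effectively checked.
    snowflake.execute({sqlText: 'TRUNCATE TABLE "STAGE2A"."T001_IRF_STUDENT_FORM_S3"'});
    snowflake.execute({sqlText: 'COPY INTO "STAGE2A"."T001_IRF_STUDENT_FORM_S3" FROM ...'});
    snowflake.execute({sqlText: 'INSERT INTO "STAGE2B"."T011_IRF_STUDENT_FORM_V001" ...'});
    snowflake.execute({sqlText: 'COPY INTO ... FROM (SELECT ...)'});
    return 'Load process completed for IRF_STUDENT_FORM_S3';
} catch (err) {
    return 'Load failed: ' + err.message;  // reached if any statement above throws
}
$$;
```

With this shape you get a single failure path instead of juggling nested BEGIN/END blocks.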

ORA-01031: insufficient privileges when bulk loading data from one schema to another via SQL*Loader

I am trying to INSERT data into another schema's table via SQL*Loader. Here is the control file:
LOAD DATA
INFILE *
INTO TABLE globalref01.dw_stg_holiday_extract
FIELDS TERMINATED BY "|"
TRAILING NULLCOLS
( ric_trd_exch,
cntry_cde,
holiday_date "TO_DATE (:holiday_date, 'YYYYMMDD')",
holiday_desc,
trd,
stl
)
Notice I'm inserting into table SCHEMA.TABLE_NAME. Since I'll be running sqlldr from the schema depotapp01, I run the following command as globalref01:
GRANT INSERT ON dw_stg_holiday_extract TO depotapp01;
Check and see if it works:
SELECT * FROM user_tab_privs_recd WHERE table_name = 'DW_STG_HOLIDAY_EXTRACT' AND owner = 'GLOBALREF01';
That confirms I have INSERT privs from depotapp01:
GLOBALREF01 DW_STG_HOLIDAY_EXTRACT GLOBALREF01 INSERT NO NO
Now, when I try to execute the sqlldr command, I get ORA-01031: insufficient privileges:
SQL*Loader: Release 10.2.0.4.0 - Production on Mon Feb 3 09:41:43 2014
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Control File: /db/platform/eq/sparc_SunOS_5.6/depot/2.0/DWRef/cfg/uat/2.1/base_config/holiday_calendar.ctl
Data File: /export/data/depotdw/DWRef/data/oats/holiday_calendar.dat
Bad File: /export/data/depotdw/DWRef/data/oats/holiday_calendar.err
Discard File: /export/data/depotdw/DWRef/data/oats/holiday_calendar.dsc
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 200000
Bind array: 64 rows, maximum of 1000000 bytes
Continuation: none specified
Path used: Conventional
Table GLOBALREF01.DW_STG_HOLIDAY_EXTRACT, loaded from every logical record.
Insert option in effect for this table: INSERT
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
RIC_TRD_EXCH FIRST * | CHARACTER
CNTRY_CDE NEXT * | CHARACTER
HOLIDAY_DATE NEXT * | CHARACTER
SQL string for column : "TO_DATE (:holiday_date, 'YYYYMMDD')"
HOLIDAY_DESC NEXT * | CHARACTER
TRD NEXT * | CHARACTER
STL NEXT * | CHARACTER
SQL*Loader-929: Error parsing insert statement for table GLOBALREF01.DW_STG_HOLIDAY_EXTRACT.
ORA-01031: insufficient privileges
So my question is, now that I know I have INSERT privileges, what other privileges do I need to grant in order for this to work?
I think you also need the SELECT privilege in most cases because SQL*Loader performs checks before inserting.
In particular, the default loading option is INSERT:
INSERT
This is SQL*Loader's default method. It requires the table to
be empty before loading. SQL*Loader terminates with an error if the
table contains rows.
To check that the table is empty, the account used by SQL*Loader needs the SELECT privilege. You might not need it when loading with APPEND instead. Conversely, if you use the REPLACE or TRUNCATE option, you'll need additional privileges.
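As a concrete, hedged sketch of the grants involved (object and user names are the ones from the question; run this as GLOBALREF01 or a DBA):

```sql
-- With SQL*Loader's default INSERT load method, the loader verifies that
-- the target table is empty, which requires SELECT on top of INSERT.
GRANT SELECT, INSERT ON globalref01.dw_stg_holiday_extract TO depotapp01;
```

If SELECT on the staging table is acceptable in your environment, this is usually the least invasive fix.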

How to dump a whole database to a format readable by Excel (csv is OK)

This may be a futile exercise, but one of my clients absolutely insisted that he needs a whole-database dump to perform analytics in Excel.
There are many answers on how to dump a single table to csv (like this: Export to CSV and Compress with GZIP in postgres, Save PL/pgSQL output from PostgreSQL to a CSV file, Export Postgres table to CSV file with headings). There is even a closed question on this subject: https://stackoverflow.com/questions/9226229/how-to-take-whole-database-dump-in-csv-format-for-postgres. But there is no answer on how to dump the whole database with a single command.
Anyway, here is my script:
DO $DO$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT tablename FROM pg_tables
             WHERE NOT (tablename LIKE 'pg%' OR tablename LIKE 'sql%') LOOP
        EXECUTE 'copy (select * from "' || r.tablename || '") to ''/tmp/dump/'
                || r.tablename || '.csv'' with csv header';
    END LOOP;
END;
$DO$;
Some fine points:
It can be pasted into psql and will dump all tables in the current schema to the /tmp/dump directory (please create this directory first).
The query in the FOR loop (that is: select tablename from pg_tables where NOT (tablename LIKE 'pg%' OR tablename LIKE 'sql%')) selects all table names in the current schema except those starting with pg or sql, which will most likely be reserved names for Postgres and SQL internals. There most probably is a better way, but hell, who cares?
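One caveat: server-side COPY ... TO writes files as the server process and normally requires superuser rights. If that's an obstacle, the same loop can be driven from the shell with psql's client-side \copy; a hedged sketch, with the database name as a placeholder:

```shell
# Hypothetical client-side variant: \copy runs inside psql on the client,
# so it needs no superuser rights and writes to the client's filesystem.
mkdir -p /tmp/dump
for t in $(psql -d mydb -At -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"); do
    psql -d mydb -c "\\copy \"$t\" TO '/tmp/dump/$t.csv' WITH CSV HEADER"
done
```

The -At flags make psql emit bare, unaligned table names suitable for the loop.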

How to restore table from dump to database?

I created a table dump using pg_dump:
pg_dump -h server1 -U postgres -t np_points gisurfo > D:\np_point.sql
Then I go into psql and type:
-f D:\np_point.sql
but I get a list of standard PostgreSQL tables.
Next I tried to execute np_point.sql in pgAdmin and got an error:
ERROR: Syntax error (near: "1")
LINE 78: 1 Сухово 75244822005 75644000 Челябинская обл. Нязепетровски...
Here is a snippet of the sql where I get the error:
COPY np_point (gid, full_name, okato, oktmo, obl_name, region_nam, the_geom) FROM stdin;
1 Сухово 75244822005 75644000 Челябинская обл. Нязепетровский район 0101000020E6100000312A7936BD9F4D402A3C580DE9FF4B40
How can I restore table from sql file?
UPDATE
PostgreSQL 8.4
And here is the first part of the sql file:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: np_point; Type: TABLE; Schema: public; Owner: postgres; Tablespace:
--
CREATE TABLE np_point (
gid integer NOT NULL,
full_name character varying(254),
okato character varying(254),
oktmo character varying(254),
obl_name character varying(254),
region_nam character varying(254),
the_geom geometry,
CONSTRAINT enforce_dims_the_geom CHECK ((st_ndims(the_geom) = 2)),
CONSTRAINT enforce_geotype_the_geom CHECK (((geometrytype(the_geom) = 'POINT'::text) OR (the_geom IS NULL))),
CONSTRAINT enforce_srid_the_geom CHECK ((st_srid(the_geom) = 4326))
);
Did you install PostGIS in the destination db? If not, install PostGIS first.
If you have installed PostGIS and still have the problem, try to dump a table without the geometry field
and restore it in another db, and see whether the problem still appears.
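It's also worth checking how the file is being replayed, since the question shows -f typed at the psql prompt: -f is a command-line option of the psql executable, not a command inside psql. A hedged sketch (host, user, and path from the question; the target database name is a placeholder):

```shell
# Run from the Windows command prompt, not from inside psql:
psql -h server1 -U postgres -d target_db -f D:\np_point.sql

# Or, from inside psql, use the \i meta-command instead:
#   \i D:/np_point.sql
```

Typing -f at the psql prompt just gets parsed as (broken) SQL, which would explain the odd behavior described.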

How to import a mysql dump while renaming some tables/columns and not importing others at all?

I'm importing a legacy db into a new version of our program, and I'm wondering if there's a way to skip some columns/tables from the dump, and rename other tables/columns as I import them? I'm aware I could edit the dump file in theory, but that seems like a hack, and so far none of my editors can handle opening the 1.3 GB file (yes, I've read the question about that on here; no, none of the answers worked for me so far).
Suggestions?
It's possible to skip some tables by denying the permissions needed to create them and using --force as a command-line option.
Not importing some columns, or renaming them, is not possible (at least without editing the dump file, or making modifications once imported).
My recommendation would be:
Import the tables into another database (1.3G should still be very quick).
Do your dropping/renaming.
Export the data to create yourself a new dump file.
If you're worried the dump contains multiple databases, the mysql command line tool has a -o flag to only import the one.
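A hedged sketch of those steps from the shell (database and file names are placeholders):

```shell
# Hypothetical: load the legacy dump into a scratch database, apply a
# pre-built fix-up script (DROP/ALTER/RENAME), then re-export the result.
mysql -u root -p -e "CREATE DATABASE scratch"
mysql -u root -p scratch < legacy_dump.sql    # 1. import the 1.3 GB dump
mysql -u root -p scratch < fixups.sql         # 2. drop/rename tables and columns
mysqldump -u root -p scratch > new_dump.sql   # 3. dump the cleaned result
```

Keeping the fix-ups in a script makes the whole migration repeatable if you need to re-run it against a fresh dump.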
I'd say import it into a temporary database and do the changes live - possibly applying a pre-built script that does the necessary operations:
DROP TABLE ....
DROP TABLE ....
DROP TABLE ....
ALTER TABLE ..... DROP column ....
Then copy the finished result into the production database.
This can be very nicely automated as well.
It's likely to work out faster and with fewer problems than finding a tool that edits dumps (or, as so often with these things, trying out five different tools and finding that none works well).
Assuming you have both databases, you could rename all the tables in OldDB (just make sure the prefix isn't already used in any table name, because renaming back does a string replace) …
USE olddb;
DROP PROCEDURE IF EXISTS rename_tables;
DELIMITER ||
CREATE PROCEDURE rename_tables(
IN plz_remove BOOLEAN
)
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE tab VARCHAR(64);
DECLARE mycursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_schema = (SELECT DATABASE() FROM DUAL)
;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN mycursor;
myloop: LOOP
FETCH mycursor INTO tab;
IF done THEN
LEAVE myloop;
END IF;
IF plz_remove THEN
SET @sql = CONCAT(
'RENAME TABLE ', tab, ' TO ', REPLACE(tab, 'olddb_', '')
);
ELSE
SET @sql = CONCAT('RENAME TABLE ', tab, ' TO olddb_', tab);
END IF;
-- PREPARE/EXECUTE is needed because RENAME TABLE cannot take its names from variables directly.
PREPARE s FROM @sql;
EXECUTE s;
DEALLOCATE PREPARE s;
END LOOP;
CLOSE mycursor;
END ||
DELIMITER ;
-- append 'olddb_'.
CALL rename_tables(false);
-- […]
-- rename back after dump.
CALL rename_tables(true);
… then dump and import into NewDB.
$ mysqldump -hlocalhost -uroot -p --complete-insert --routines --default-character-set=utf8 olddb > olddb.sql
$ mysql -hlocalhost -uroot -p --default-character-set=utf8 newdb < olddb.sql
This would give you (for example):
USE newdb;
SHOW TABLES;
+------------------+
| Tables_in_newdb  |
+------------------+
| bar              |
| foo              |
| olddb_company    |
| olddb_department |
| olddb_user       |
| user             |
+------------------+
Further reading / based on:
MySQL Manual: Cursors
MySQL Manual: Loop
MySQL Manual: Prepare
SO answer for "MySQL foreach alternative for procedure"
SO answer for "MySQL foreach loop"
SO answer for "How do you mysqldump specific table(s)?"