PostgreSQL - Automate schema and table creation - powershell - sql

I am trying to automate the creation of a schema and of some tables in that newly created schema, using a PowerShell script. I have been able to create the schema; however, I cannot create the tables in that schema.
I am passing the name of the new schema to be created as a variable to PowerShell.
My script so far (based on the solution from this Stack Overflow answer):
$MySchema=$args[0]
$CreateSchema = 'CREATE SCHEMA \"'+$MySchema+'\"; set schema '''+$MySchema+''';'
write-host $CreateSchema
C:\PostgreSQL\9.3\bin\psql.exe -h $DBSERVER -U $DBUSER -d $DBName -w -c $CreateSchema
# To create tables
C:\PostgreSQL\9.3\bin\psql.exe -h $DBSERVER -U $DBUSER -d $DBName -w -f 'E:\automation\scripts\create-tables.sql' -v schema=$MySchema
On execution, I see the following error:
psql:E:/automation/scripts/create-tables.sql:11: ERROR: no schema has been selected to create in
The content of create-tables.sql is:
SET search_path TO :schema;
CREATE TABLE testing (
id SERIAL,
QueryDate varchar(255) NULL
);

You've got this in your first step:
$CreateSchema = 'CREATE SCHEMA \"'+$MySchema+'\"; set schema '''+$MySchema+''';'
Take out that set schema; it's erroneous and it's what is causing the schema not to be created. Then, in the next step, you wind up with an empty search path (because the schema never got created), which is why you get that error.
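For example, a minimal sketch of the corrected script (reusing your existing $DBSERVER, $DBUSER and $DBName variables and your own quoting style) might look like this:
$MySchema=$args[0]
# Create the schema only; create-tables.sql sets the search_path itself via the :schema variable
$CreateSchema = 'CREATE SCHEMA \"'+$MySchema+'\";'
write-host $CreateSchema
C:\PostgreSQL\9.3\bin\psql.exe -h $DBSERVER -U $DBUSER -d $DBName -w -c $CreateSchema
# Then create the tables in the new schema
C:\PostgreSQL\9.3\bin\psql.exe -h $DBSERVER -U $DBUSER -d $DBName -w -f 'E:\automation\scripts\create-tables.sql' -v schema=$MySchema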

Related

Set search_path with file include in postgres psql command

How can I include multiple search paths in a psql command, so that multiple files can be run with different search_paths but all be run in one transaction?
psql \
  --single-transaction \
  --command="set search_path = 'a'; \i /sqlfile/a.sql; set search_path = 'b'; \i /sqlfile/b.sql;"
When I run this I get a syntax error at \i. The files are generated dynamically and need to be included separately, so I'd rather run this with --command than generate a wrapper file and use --file, if possible.
The manual about the --command option:
command must be either a command string that is completely parsable by
the server (i.e., it contains no psql-specific features), or a single
backslash command. Thus you cannot mix SQL and psql meta-commands
within a -c option. To achieve that, you could use repeated -c options
or pipe the string into psql [...]
Try:
psql --single-transaction -c 'set search_path = a' -c '\i /sqlfile/a.sql' -c 'set search_path = b' -c '\i /sqlfile/b.sql'
Or use a here-document:
psql --single-transaction <<EOF
set search_path = a;
\i /sqlfile/a.sql
set search_path = b;
\i /sqlfile/b.sql
EOF
The search_path needs no quotes, btw.

How to pass a variable from shell script to sql file

I have a shell script named test.sh which invokes another SQL file, 'table.sql'. 'table.sql' will create some tables, but I want the tables to be created in a particular schema, 'bird'.
Content of the SQL file:
create schema bird; -- bird should not be hard-coded, it should come from a variable
set search_path to 'bird';
create table bird.sparrow(id int, name varchar2(20));
Content of the shell script:
dbname=$1
cnport=$2
schemaname=$3
filename=$4
gsql -d ${dbname} -p ${cnport} -f ${filename} # [how do I give the schema name here so that it can be used in table.sql without hardcoding?]
I will execute my shell script like this:
sh test.sh db1 9999 bird table.sql
It is easier to do it in the shell, e.g.:
dbname=$1
cnport=$2
schemaname=$3
filename=$4
gsql -d ${dbname} -p ${cnport} <<EOF
create schema ${schemaname};          -- schema name now comes from the shell variable
set search_path to '${schemaname}';
create table ${schemaname}.sparrow(id int, name varchar2(20));
EOF
Otherwise, use psql variables.
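For instance, with psql-style variables the SQL file would reference :"schemaname" instead of a hard-coded name (a minimal sketch; gsql is derived from psql, so the same -v / :"schemaname" substitution may work there too, but verify with your version):
create schema :"schemaname";
set search_path to :"schemaname";
create table :"schemaname".sparrow(id int, name varchar2(20));
and the invocation would pass the schema name in from the shell:
psql -d ${dbname} -p ${cnport} -v schemaname=${schemaname} -f ${filename}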

Using documentParser function in Teradata Aster

I'm working with Teradata's Aster and am trying to parse a PDF (or HTML) file so that it is inserted into a table in the Beehive database in Aster. The entire PDF should correspond to a single row of data in the table.
This is to be done using one of Aster's SQL-MR functions, called documentParser. It will produce a text file (.rtf) containing a single row produced by parsing all the chapters of the PDF file, which would then be loaded into the table in Beehive.
I have been given this script, which shows the use of documentParser and the other steps involved in the parsing process:
# SHELL INSTRUCTIONS
# transform the file to base64 (change file names to your relevant file)
base64 pp.pdf > pp.b64
# prepare a load file
rm my_load_file.txt
# get the content of the file
var=$(cat pp.b64)
# put it in the load file
echo \""pp.b64"\"","\""$var"\" >> "my_load_file.txt"
# create the staging table
act -U db_superuser -w db_superuser -d beehive -c "drop table if exists public.cf_load_file;"
act -U db_superuser -w db_superuser -d beehive -c "create dimension table public.cf_load_file(file_name varchar, content varchar);"
# load into the staging table
ncluster_loader -U db_superuser -w db_superuser -d beehive --csv --verbose public.cf_load_file my_load_file.txt
# use documentParser to load the clean text (you will need to create the target table beforehand)
act -U db_superuser -w db_superuser -d beehive -c "INSERT INTO got_data.cf_got_text_data (file_name, content) SELECT * FROM documentParser (ON public.cf_load_file documentCol ('content') mode ('text'));"
# done
However, I am stuck on the last step of the script because it looks like there is no function called documentParser in the list of functions available in Aster. This is the error I get:
ERROR: function "documentparser" does not exist
I tried to search for this function several times with the command \dF, but did not get any match.
I've attached a picture (SQL-MR Document Parser) which presents the gist of what I'm trying to do.
I would appreciate any help if any one has any experience with this.
What happened is that someone told you about this function documentParser but never gave you the function archive file (documentParser.zip) to install in Aster. This function does exist, but it's not part of the official Aster Analytics Foundation (AAF). Please contact the person who gave you this info for help.
documentParser belongs to the so-called field functions that are developed and used by the Aster field team only. Not that you can't use it, but don't expect support to help you with it - only whoever gave you access to it.
If you don't have any contacts, then the next course of action I'd suggest is to go to the Aster Community Network and ask about it there.

Import postgres database without roles

I have a database that was exported with pg_dump, but now when I'm trying to import it again with:
psql -d databasename < mydump.sql
It fails trying to grant roles to users that don't exist (the error says 'Role "xxx" does not exist').
Is there a way to import and set all the roles automatically to my user?
The default behavior of the import is that it replaces all roles it does not know with the role you are doing the import with. So, depending on what you need the database for, you might be fine just importing it and ignoring the error messages.
Quoting from http://www.postgresql.org/docs/9.2/static/backup-dump.html#BACKUP-DUMP-RESTORE
Before restoring an SQL dump, all the users who own objects or were granted permissions on objects in the dumped database must already exist. If they do not, the restore will fail to recreate the objects with the original ownership and/or permissions. (Sometimes this is what you want, but usually it is not.)
The answer that you might be looking for is adding the --no-owner option to the pg_restore command. Unlike the currently accepted answer, this makes the command create every object as the current user, even if the roles in the dump don't exist in the database.
So no element will get skipped by pg_restore, but if some of the imported elements were owned by different users, they will all now be owned by a single user, as far as I can tell.
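A minimal sketch of that route, assuming the dump was made in pg_dump's custom format (the file name here is illustrative):
pg_restore --no-owner -d databasename mydump.backup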
With pg_restore you can use the --role=rolename option to force a role name to be used to perform the restore, but the dump must be in a non-plain-text format. For example, you can dump with:
pg_dump -F c -Z 9 -f my_file.backup my_database_name
and then you can restore it with:
pg_restore -d my_database_name --role=my_role_name my_file.backup
For more info:
http://www.postgresql.org/docs/9.2/static/app-pgrestore.html
Yes, you can dump all the "Global" objects from your source DB with pg_dumpall's -g option:
pg_dumpall -g > globals.sql
Then run globals.sql against your target DB before importing.
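A minimal sketch of the whole sequence, reusing the database and dump file names from the question:
pg_dumpall -g > globals.sql
psql -d databasename -f globals.sql
psql -d databasename < mydump.sql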
I used the following:
pg_dump --no-privileges --no-owner $OLD_DB_URL | psql $NEW_DB_URL
From
https://www.postgresql.org/docs/12/app-pgdump.html
-O
--no-owner
Do not output commands to set ownership of objects to match the
original database. By default, pg_dump issues ALTER OWNER or SET
SESSION AUTHORIZATION statements to set ownership of created database
objects. These statements will fail when the script is run unless it
is started by a superuser (or the same user that owns all of the
objects in the script). To make a script that can be restored by any
user, but will give that user ownership of all the objects, specify
-O.
This option is only meaningful for the plain-text format. For the
archive formats, you can specify the option when you call pg_restore.
-x
--no-privileges
--no-acl
Prevent dumping of access privileges (grant/revoke commands).
In newer versions, pg_restore will complain about plain-text file imports. However, you can just remove the offending lines with awk. You can pipe the output to a new file to make sure it didn't break anything, or pipe it directly into psql like this:
cat my-import-file.sql | awk '!/old-role/' | psql -U target_owner -d target_db_name -1 -v ON_ERROR_STOP=1
replace my-import-file.sql with your import file
replace target_owner with your username
replace target_db_name with the name of the database to create
replace old-role with the role it is complaining about
If you have multiple roles:
cat my-import-file.sql | awk '!/role1|role2|role3/' | psql -U target_owner -d target_db_name -1 -v ON_ERROR_STOP=1
This will remove all lines that mention that role. Everything will be created owned by the user logged in to psql.
If you are trying to import a backup using pgAdmin, enable these flags when you are restoring your db:
I hope it helps you.
cheers!
Well, you can just create a new role with the same name as the one you are missing, and then import the dump with no errors.
The error says 'Role "xxx" does not exist' - so create it :)
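For example, a one-liner to create the missing role before re-running the import (the role name is taken from the error message; adjust it to whatever your error reports):
psql -d databasename -c 'CREATE ROLE "xxx";'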

How to get an SQL dump from H2DB, like in MySQL?

I have an H2DB database which stores data in files. I have 3 files: test.18.log.db, test.data.db, and test.index.db.
I want to get an SQL dump file like the one I get when I use mysqldump. Is that possible?
Yes, there are multiple solutions. One is to run the SCRIPT SQL statement:
SCRIPT TO 'fileName'
Another is to use the Script tool:
java org.h2.tools.Script -url <url> -user <user> -password <password>
Then, there are also the RUNSCRIPT statement and RunScript tool.
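For completeness, restoring such a dump mirrors the examples above - either with the RUNSCRIPT statement:
RUNSCRIPT FROM 'fileName'
or with the RunScript tool (same connection options as the Script tool; a sketch, check your H2 version's docs for the exact flags):
java org.h2.tools.RunScript -url <url> -user <user> -password <password> -script <fileName>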
By the way, you should consider upgrading to a more recent version of H2. With newer versions, the two files .data.db and .index.db are combined into a single .h2.db file.
If you want to get the schema and the data, you can use
SCRIPT TO 'dump.sql';
If you want to get only the schema, you can use
SCRIPT NODATA TO 'dump.txt';
Your shortcut:
$ ls
foo.mv.db
$ wget -O h2.jar https://search.maven.org/remotecontent?filepath=com/h2database/h2/1.4.200/h2-1.4.200.jar
$ ls
foo.mv.db
h2.jar
$ java -cp h2.jar org.h2.tools.Script -url "jdbc:h2:file:./foo" -user sa -password ""
$ ls
backup.sql
foo.mv.db
h2.jar
$ cat backup.sql | head -n 20
;
CREATE USER IF NOT EXISTS "SA" SALT 'bbe17...redacted...' HASH 'a24b84f1fe898...redacted...' ADMIN;
CREATE SEQUENCE "PUBLIC"."HIBERNATE_SEQUENCE" START WITH 145;
CREATE CACHED TABLE "PUBLIC"."...redacted..."(
"ID" INTEGER NOT NULL SELECTIVITY 100,
[...redacted...]
"...redacted..." VARCHAR(255) SELECTIVITY 100
);
ALTER TABLE "PUBLIC"."...redacted..." ADD CONSTRAINT "PUBLIC"."CONSTRAINT_8" PRIMARY KEY("ID");
-- 102 +/- SELECT COUNT(*) FROM PUBLIC.[...redacted...];
INSERT INTO "PUBLIC"."...redacted..." VALUES
([...redacted...]),