How to execute multiple commands with one Execute in the HSQLDB GUI? - sql

I have a number of commands that I want to run from the GUI. I want to run many groups like this, but I can't even get a single group to work. I presume I need to somehow force commits between the statements, but I can't figure out how to do that. If I execute each of these commands by itself, in order, everything works as expected.
I'm using the EPSG.dat from GeoTools' EPSG.zip.
unzip EPSG.zip
perl -pi -e 's/readonly=true/readonly=false/' EPSG.properties
java -jar hsqldb-2.4.1.jar
jdbc:hsqldb:file:./EPSG
SET AUTOCOMMIT true; -- Press Execute SQL, but this doesn't seem to help.
CREATE TEXT TABLE EPSG_UNITOFMEASURE_COPY (LIKE EPSG_UNITOFMEASURE);
GRANT all ON EPSG_UNITOFMEASURE_COPY TO public;
SET TABLE EPSG_UNITOFMEASURE_COPY SOURCE 'EPSG_UNITOFMEASURE_COPY.csv;encoding=UTF-8';
INSERT INTO EPSG_UNITOFMEASURE_COPY SELECT * FROM EPSG_UNITOFMEASURE;
SET TABLE EPSG_UNITOFMEASURE_COPY SOURCE OFF;
I then get an error of:
user lacks privilege or object not found: EPSG_UNITOFMEASURE_COPY / Error Code: -5501 / State: 42501
I am pretty sure this is an object not found case.

You cannot execute these commands as one block. When a schema definition statement refers to a schema object, that object must already exist.
Execute the CREATE TEXT TABLE statement on its own first; after that you can execute the rest as a single block.
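For example (a sketch reusing the statements from the question), the first Execute would contain only:
CREATE TEXT TABLE EPSG_UNITOFMEASURE_COPY (LIKE EPSG_UNITOFMEASURE);
and a second Execute the remaining statements as one block:
GRANT ALL ON EPSG_UNITOFMEASURE_COPY TO PUBLIC;
SET TABLE EPSG_UNITOFMEASURE_COPY SOURCE 'EPSG_UNITOFMEASURE_COPY.csv;encoding=UTF-8';
INSERT INTO EPSG_UNITOFMEASURE_COPY SELECT * FROM EPSG_UNITOFMEASURE;
SET TABLE EPSG_UNITOFMEASURE_COPY SOURCE OFF;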

Related

Change the database connection programmatically

In Oracle SQL Developer, I need to switch the active database connection manually. Is there a command that will connect to a different database programmatically, assuming that the login credentials are already saved? I'm trying to avoid clicking on the drop-down menu at the top right of the window which selects the active connection.
Perhaps I should rather have a single SQL file per database? I could understand that argument. But this is to prepare for migrating some tables from one database to another, so it's nice to have all of the context in one file.
On database1, run a query on table1 which is located in schema1.
-- manually switch to database1 (looking for a command to replace this step)
ALTER SESSION SET CURRENT_SCHEMA = schema1;
SELECT * FROM table1;
On database2, run a query on table2 which is located in schema2.
-- manually switch to database2
ALTER SESSION SET CURRENT_SCHEMA = schema2;
SELECT * FROM table2;
This looks to be well documented in the SQL*Plus documentation. Use this command:
CONN[ECT] [{<logon>| / |proxy} [AS {SYSOPER | SYSDBA | SYSASM}] [edition=value]]
You need a DDL trigger to perform an event after your preceding SQL, roughly along these lines (pseudocode):
CREATE TRIGGER sample
ON TABLE
AFTER event
........
THEN
ALTER SESSION SET CURRENT_SCHEMA = schema2;
SELECT * FROM table2;
I don't know of a way to change your selected connection in SQL Developer, but there is a programmatic method for temporarily changing the connection under which the script commands are run, as @T.S. pointed out. I want to give a few examples, which might be helpful to people (as they would have been for me).
So let's say your script has part A and part B and you want to execute them one after the other but from different connections. Then you can use this:
CONNECT username1/password1@connect_identifier1;
-- Put commands A here to be executed under this connection.
DISCONNECT; -- username1
CONNECT username2/password2@connect_identifier2;
-- Put commands B here to be executed under this connection.
DISCONNECT; -- username2
The connect_identifier part identifies the database where you want to connect. For instance, if you want to connect to a pluggable database on the local machine, you may use something like this:
CONNECT username/password@localhost/pluggable_database_name;
or if you want to connect to a remote database:
CONNECT username/password@IP:port/database_name;
You can omit the password, but then you will have to enter it at a prompt each time you run that section. If you want to look at the CONNECT command in more detail, the SQL*Plus reference documentation covers it.
To execute the commands, select the code that you are interested in (including the relevant CONNECT commands) and use Run Script (F5), or use Run Script (F5) with nothing selected to execute the entire script file. SQL Developer will execute your commands, put the output into the Script Output tab and then close the connection. Note that the output of SELECT commands can be unpleasant to read in Script Output. This can be mitigated by running the following command first (just once):
SET sqlformat ansiconsole;
There is also Run Statement (Ctrl+Enter), but it does not seem to work well with this workflow. It executes and displays each SELECT statement in a separate Query Result tab, which is easier to read, BUT the SELECT query is always executed in the context of the active connection in SQL Developer (the one in the top right), not the connection established by the CONNECT statement in the code. On the other hand, INSERT commands, for instance, DO seem to be executed in the context of the CONNECT statement's connection. This (rather inconsistent) behaviour is probably not what you want, so I recommend using Run Script (F5) as described above.
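Applied to the original question, the whole script might look something like the following sketch (the usernames, passwords and connect identifiers are placeholders to replace with your own):
CONNECT username1/password1@database1_identifier;
ALTER SESSION SET CURRENT_SCHEMA = schema1;
SELECT * FROM table1;
DISCONNECT; -- database1
CONNECT username2/password2@database2_identifier;
ALTER SESSION SET CURRENT_SCHEMA = schema2;
SELECT * FROM table2;
DISCONNECT; -- database2
Selecting all of this and pressing Run Script (F5) should run each query against its own database.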

What is the difference between "psql -c" and "psql -f" when executing multiple queries?

I'm trying to execute two sql commands (create a new schema and table), in a way that would enable a rollback of both commands if the execution fails. The database I'm connecting to is AWS Redshift.
create schema if not exists test_schema;
create table test_schema.test_table as select 1;
Initially I tried to execute these commands programmatically with Python, using both psycopg2 and pyodbc, and got the following error:
ERROR: schema "test_schema" does not exist
I realised that it fails because the first command isn't being committed, so to fix that I tried turning autocommit on and wrapping the statements in a "begin/end" block, but neither helped.
When I used psql CLI and ran the following, everything worked as intended (there was no "schema does not exist" error, and after the rollback, both schema and table were gone):
dev=# begin;
BEGIN
dev=# create schema test_schema;
CREATE SCHEMA
dev=# create table test_schema.test_table as select 1;
SELECT
dev=# rollback;
ROLLBACK
I tried to get the same results by running the following in the command line:
psql -c "begin; create schema test_schema; create table test_schema.test_table as select 1;"
This results in the same error:
ERROR: schema "test_schema" does not exist
However, when I put the above code in a file and ran the same command, this time using -f, it worked:
psql -f create_schema_and_table.sql
My questions are:
What is the difference between executing queries with "psql -c" and "psql -f"?
How can the same result be achieved programmatically, with Python?
Thanks a lot!
I don't know what you are doing wrong; your "psql -c" command works perfectly fine:
ads@diamond:~$ psql -c "begin; create schema test_schema; create table test_schema.test_table as select 1;" postgres
SELECT 1
psql will send the entire string to the server and execute it in a single transaction. Your problem is that you start a transaction using "begin", but never commit it. Therefore, at the end of the psql run, all your changes are rolled back. The next psql command will find neither the schema nor the table. But as long as everything stays in a single psql call, subsequent queries in the same command can see newly created objects.
Your query string should instead look like:
begin; create schema test_schema; create table test_schema.test_table as select 1; commit;
Or, more simply:
create schema test_schema; create table test_schema.test_table as select 1;
Both will work.
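For the command-line example in the question, that would look something like this (a sketch; not verified against Redshift here):
psql -c "begin; create schema test_schema; create table test_schema.test_table as select 1; commit;"
From Python the idea is the same: send both statements over one connection within one transaction, and only commit (or roll back on error) at the end.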

How to pass a bash variable to sqlplus and then pass that output to another variable

So what I'm trying to do is clear the audit logs of the PDB in an Oracle database. The name of the PDB can be different each time, so I cannot use tnsnames to sqlplus directly into the PDB to do this. I'm putting commands into bash variables and then passing those into a sqlplus command. Each of these works except for one, and I can't seem to figure out how to get it to work.
My code is
AUDIT="DELETE FROM SYS.AUD$ WHERE NTIMESTAMP# < sysdate-30;"
FINDPDB="select pdb_name from dba_pdbs where pdb_name != 'PDB\$SEED';"
ALTER="alter session set container=$FINDPDB;"
sqlplus -S /nolog <<EOF1
connect / as sysdba
set echo off feedback off head off pages 0
set serveroutput on
$FINDPDB
$ALTER
$AUDIT
exit;
EOF1
The error I keep getting is
alter session set container=select pdb_name from dba_pdbs where pdb_name != 'PDB$SEED';
*
ERROR at line 1:
ORA-65015: missing or invalid container name
This tells me that it's not passing the output of the select statement to $FINDPDB, but rather the actual select statement itself.
Is there a way I can pass this value to the ALTER variable and have it alter the session and clear the sys.aud$ table?
The error I keep getting is
alter session set container=select pdb_name from dba_pdbs where pdb_name != 'PDB$SEED';
*
ERROR at line 1:
ORA-65015: missing or invalid container name
This tells me that it's not passing the output of the select statement to $FINDPDB, but rather the actual select statement itself.
I don't see why you would expect this to pass the output of the SELECT query into $FINDPDB. You're putting together one long string, which bash passes to the standard input of sqlplus, and the output from sqlplus is then written to stdout. At no point is bash picking out certain lines of the sqlplus output and putting them into shell variables.
In fact, try adding echo $ALTER to your bash script before you call sqlplus. You will quite probably find that the output is
alter session set container=select pdb_name from dba_pdbs where pdb_name != 'PDB$SEED';
If so, then bash has already done the substitution you didn't want before you've even started sqlplus.
You seem to want bash and sqlplus to have some kind of back-and-forth dialog. I would give up on this approach. Instead of trying to put the PDB name into a shell variable, I would put it into a sqlplus substitution variable. I would try something like the following (not tested):
sqlplus -S /nolog <<"EOF1"
connect / as sysdba
set echo off feedback off head off pages 0
set serveroutput on
column pdb_name new_value pdb
select pdb_name from dba_pdbs where pdb_name != 'PDB$SEED';
alter session set container = &pdb.;
delete from sys.aud$ where ntimestamp# < sysdate - 30;
exit;
EOF1
We use column pdb_name new_value pdb to set the substitution variable pdb to the next value to be selected from a column named pdb_name. We then run a select query to fetch the PDB name and hence store it in pdb. Once we've got this value in a substitution variable, we can then issue the alter session statement to change the PDB and finally the delete statement to delete data from the PDB.
I'm tempted to avoid the use of a PL/SQL block for this, as has been suggested in another answer. I would prefer the delete statement to be parsed after the PDB has been changed, as I want to be sure that the data from the 'correct' PDB is being deleted. My concern with using PL/SQL for this is that the PL/SQL compiler would determine which table to delete from when the block is parsed, which would be before it runs the block, and hence before it executes the alter session statement to change the PDB. However, I don't know PDBs and CDBs in Oracle 12c well enough to say whether this is a genuine problem or unfounded nonsense.
I don't have access to a pluggable Oracle 12c database to run something like this against, so I can't tell you whether this script works. If not, hopefully it should give you an idea of where to go.
I have no Oracle instance at hand, but I see two ways to do this:
1. Make two connections through SQL*Plus: the first to retrieve pdb_name, the second to set the container and delete the audits.
2. Use a single SQL*Plus session with two named pipes: one to send the generated SQL commands, the other to read the SQL*Plus output.
As an alternative, I would use a "real" programming language (Ruby, Python, JavaScript), which is better suited to dealing with data read from a database.
EDIT: After some searching, it may also be done in PL/SQL:
DECLARE
v_pdb_name VARCHAR2(255);
BEGIN
SELECT pdb_name INTO v_pdb_name FROM dba_pdbs WHERE pdb_name != 'PDB$SEED';
EXECUTE IMMEDIATE 'ALTER SESSION SET container='||v_pdb_name;
DELETE FROM sys.aud$ WHERE ntimestamp# < sysdate-30;
END;
/

Creating table in Firebird script causes "unsuccessful metadata update" with deadlock

I have the following script that I run using "isql -i scriptfile.sql":
CONNECT C:\Databasefile.fdb USER user PASSWORD password;
SET TERM !! ;
EXECUTE BLOCK AS BEGIN
IF (EXISTS(SELECT 1 FROM rdb$relations WHERE rdb$relation_name = 'MYTABLE')) THEN
EXECUTE STATEMENT 'DROP TABLE MYTABLE;';
END!!
SET TERM ; !!
CREATE TABLE MYTABLE
(
MYCOLUMN VARCHAR(14) NOT NULL
);
The very first time I run this (when the table does not already exist) the table is created as expected.
If I run the script again I get the following error:
Statement failed, SQLCODE = -607
unsuccessful metadata update
-STORE RDB$RELATIONS failed
-deadlock
After line 8 in file d:\myscript.sql
When the script exits, MYTABLE has been deleted and can no longer be found in the database.
If I run the script a third time the table is once again created and no errors are thrown.
Why can't the script both delete and then recreate a table?
DDL from PSQL is not allowed. Using EXECUTE STATEMENT it is not directly forbidden, and it usually works, but it is still not wise, exactly because of these kinds of problems. I am not exactly sure about the reasons, but part of it has to do with how DDL changes are applied in Firebird; the use of EXECUTE STATEMENT adds additional locks, iirc, which conflict with a subsequent DDL statement for the same table name.
Rather than dropping and creating the table this way, you should use the DDL statement RECREATE TABLE, as sketched below.
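A minimal version of the script using RECREATE TABLE might look like this (an untested sketch, reusing the connection line from the question):
CONNECT C:\Databasefile.fdb USER user PASSWORD password;
RECREATE TABLE MYTABLE
(
MYCOLUMN VARCHAR(14) NOT NULL
);
RECREATE TABLE drops the table if it already exists and then creates it, so the EXECUTE BLOCK and the SET TERM juggling are no longer needed.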
Note that the word deadlock in this error is actually a bit of a misnomer (there is no real deadlock).

How to count how many SQL statements there are in an SQL file?

So, given an SQL file to be executed in Oracle, we are asked to determine how many blocks are to be executed within the SQL file. For example, there is one block in an SQL file containing the following command,
CREATE TABLE customer (id varchar2(42));
two blocks in the following SQL file,
ALTER TABLE customer ADD name varchar2(42);
ALTER TABLE customer DROP COLUMN id;
and three blocks in the following SQL file
CREATE OR REPLACE PROCEDURE printHelloWorld IS
BEGIN
dbms_output.put_line('Hello World!');
END;
/
INSERT INTO customer VALUES ('ivan');
DROP TABLE customer;
We can't assume anything else about the input, other than the fact that it will execute without any error in Oracle SQL Developer.
UPDATE
The purpose of asking the question is to ensure that there is only one statement to be executed in the SQL file. I am also open to answers to that question instead. It would be even better to be able to create a script that splits a multi-statement SQL file into multiple files.
Not a perfect solution, but I think it will work in most cases.
First, create a duff user with no rights except to create a session.
CREATE USER duff IDENTIFIED BY "password";
GRANT CREATE SESSION TO duff;
Then use sqlplus and grep to count the ORA- errors; there should be one per statement.
sqlplus duff/password@db < script.sql | grep -c ORA-
If you have ALTER SESSION statements, then you need a bit more:
sqlplus duff/password@db < script.sql | grep -Ec 'ORA-|Session altered.'
There may be other exceptions, but I think it gives you a workable solution with little overhead. Be careful that scripts don't switch user, but if you have hard-coded usernames and passwords in your scripts, you have other issues.