pglogical replication slot creation fails - sql

I am using Flyway to execute SQL commands for database migrations. For our Postgres cluster, we want to create a replication slot with pglogical. The command used is:
SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical');
The migration SQL file is as follows:
CREATE EXTENSION pglogical;
SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical');
These commands fail with the error:
ERROR: cannot create logical replication slot in transaction that has performed writes
Location : /opt/amazon/migration/V0003__migration.sql (/opt/amazon/migration/V0003__migration.sql)
Line : 5
Statement : SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical')
at org.flywaydb.core.internal.command.DbMigrate.doMigrateGroup(DbMigrate.java:345)
at org.flywaydb.core.internal.command.DbMigrate.access$900(DbMigrate.java:53)
But when I log into the cluster with pgAdmin/psql and execute these statements, I am able to create the replication slot. How do I fix this? Any help would be appreciated.
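Flyway runs each migration inside a single transaction by default, and the preceding CREATE EXTENSION counts as a write, which is why the slot creation fails there but works from psql/pgAdmin outside an explicit transaction. One possible workaround, sketched below under the assumption that your Flyway version supports per-script config files with the executeInTransaction option (the file names are illustrative):
-- V0003__install_pglogical.sql (illustrative file name)
CREATE EXTENSION IF NOT EXISTS pglogical;

-- V0004__create_replication_slot.sql (illustrative file name)
-- with a V0004__create_replication_slot.sql.conf next to it containing:
--   executeInTransaction=false
-- so that Flyway runs this statement outside of a transaction
SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical');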

Related

Insert works in SQL Client but not in my code (SQL7008)

I am trying to perform insert/update statements in a DB2-AS400 database.
I use the jt400 driver, version 9.5 for Java 8, in order to connect to and interact with my DB.
In my app, I can perform selects just fine but when I try to insert or update I get the following SQL Error:
[SQL7008] Table not valid for operation.
I have done some research and it seems that it would be a journaling problem on the DB side and not in my code.
What I would like to understand is: why am I able to perform insert/update using my SQL client (DBeaver) on the same table with the exact same user?
You might try disabling transaction isolation by adding transaction isolation=none to your connection string:
jdbc:as400://systemname;naming=sql;errors=full;transaction isolation=none;date format=iso
Ref: SQL7008 Error - Workaround?

Change the database connection programmatically

In Oracle SQL Developer, I need to switch the active database connection manually. Is there a command that will connect to a different database programmatically, assuming that the login credentials are already saved? I'm trying to avoid clicking on the drop-down menu at the top right of the window which selects the active connection.
Perhaps I should rather have a single SQL file per database? I could understand that argument. But this is to prepare to migrate some tables from one database to another, so it's nice to have all of the context in one file.
On database1, run a query on table1 which is located in schema1.
-- manually switch to database1 (looking for a command to replace this step)
ALTER SESSION SET CURRENT_SCHEMA = schema1;
SELECT * FROM table1;
On database2, run a query on table2 which is located in schema2.
-- manually switch to database2
ALTER SESSION SET CURRENT_SCHEMA = schema2;
SELECT * FROM table2;
Looks like this is well documented here.
Use this command:
CONN[ECT] [{<logon>| / |proxy} [AS {SYSOPER | SYSDBA | SYSASM}] [edition=value]]
You need a DDL TRIGGER to perform an action after your preceding SQL, along these lines:
CREATE OR REPLACE TRIGGER sample
AFTER DDL ON SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = schema2';
END;
/
SELECT * FROM table2;
I don't know of a way to change your selected connection in SQL Developer, but there is a programmatic method for temporarily changing the connection under which the script commands are run, as @T.S. pointed out. I want to give a few examples, which might be helpful to people (as they would have been for me).
So let's say your script has part A and part B and you want to execute them one after the other but from different connections. Then you can use this:
CONNECT username1/password1#connect_identifier1;
-- Put commands A here to be executed under this connection.
DISCONNECT; -- username1
CONNECT username2/password2#connect_identifier2;
-- Put commands B here to be executed under this connection.
DISCONNECT; -- username2
The connect_identifier part identifies the database where you want to connect. For instance, if you want to connect to a pluggable database on the local machine, you may use something like this:
CONNECT username/password#localhost/pluggable_database_name;
or if you want to connect to a remote database:
CONNECT username/password#IP:port/database_name;
You can omit the password, but then you will have to input it in a prompt each time you run that section. If you want to consult the CONNECT command in more detail, this reference document may be useful.
In order to execute the commands, you would then select the code that you are interested in (including the relevant CONNECT commands) and use Run Script (F5), or just use Run Script (F5) without selecting anything, which will execute the entire script file. SQL Developer will execute your commands, put the output into the Script Output tab and then close the connection. Note that the output of SELECT commands might be unpleasant to read inside Script Output. This can be mitigated by running the following command first (just once):
SET sqlformat ansiconsole;
There is also Run Statement (Ctrl+Enter), but note that it does not seem to work well with this workflow. It will execute and display each SELECT statement in a separate Query Result tab, which is easier to read, BUT the SELECT query will always be executed in the context of the active connection in SQL Developer (the one in the top right), not the connection of the preceding CONNECT statement. On the other hand, INSERT commands, for instance, DO seem to be executed in the context of the preceding CONNECT statement. This (rather inconsistent) behaviour is probably not what you want, so I recommend using Run Script (F5) as described above.

What is the difference between "psql -c" and "psql -f" when executing multiple queries?

I'm trying to execute two sql commands (create a new schema and table), in a way that would enable a rollback of both commands if the execution fails. The database I'm connecting to is AWS Redshift.
create schema if not exists test_schema;
create table test_schema.test_table as select 1;
Initially I tried to execute these commands programmatically with Python, using both psycopg2 and pyodbc, and got the following error:
ERROR: schema "test_schema" does not exist
I realised that it fails because the first command isn't being committed, so to fix that I tried setting autocommit mode on and wrapping the statements in a "begin/end" block, which didn't help.
When I used psql CLI and ran the following, everything worked as intended (there was no "schema does not exist" error, and after the rollback, both schema and table were gone):
dev=# begin;
BEGIN
dev=# create schema test_schema;
CREATE SCHEMA
dev=# create table test_schema.test_table as select 1;
SELECT
dev=# rollback;
ROLLBACK
I tried to get the same results by running the following in the command line:
psql -c "begin; create schema test_schema; create table test_schema.test_table as select 1;"
This results in the same error:
ERROR: schema "test_schema" does not exist
However, when I put the above code in a file and ran the same command, this time using -f, it worked:
psql -f create_schema_and_table.sql
My questions are:
What is the difference between executing queries with "psql -c" and "psql -f"?
How can the same result be achieved programmatically, with Python?
Thanks a lot!
I don't know what you are doing wrong; your "psql -c" command works perfectly fine:
ads#diamond:~$ psql -c "begin; create schema test_schema; create table test_schema.test_table as select 1;" postgres
SELECT 1
psql will send the entire string to the server and execute it in one single transaction. Your problem is that you start a transaction with "begin" but never commit it, so at the end of the psql run all your changes are rolled back. The next psql command will not find the schema or the table. But as long as everything stays in a single psql call, subsequent queries in the same command can see newly created objects.
Your query string should instead look like:
begin; create schema test_schema; create table test_schema.test_table as select 1; commit;
Or, more simply:
create schema test_schema; create table test_schema.test_table as select 1;
Both will work.
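For completeness, a sketch of both command-line variants (--single-transaction and ON_ERROR_STOP are standard psql options; the file name is taken from the question). On the Python side the same idea applies: run both statements on a single psycopg2 connection with autocommit left off, then call commit() on success or rollback() on failure.
# -c with an explicit commit at the end
psql -c "begin; create schema test_schema; create table test_schema.test_table as select 1; commit;"
# -f, letting psql wrap the whole file in one transaction and stop at the first error
psql --single-transaction -v ON_ERROR_STOP=1 -f create_schema_and_table.sql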

SQL Server Add Column then Update Column Error

I'm writing a SQL script that modifies multiple tables after importing them. One table doesn't have a 'RCVDDATE' column, but a related table does. I'm adding the new column with this command:
ALTER TABLE TEST.CASES.ADDRESS
ADD RCVDDATE DATE;
And then I'm running this command to bring in the correct values:
UPDATE TEST.CASES.ADDRESS
SET RCVDDATE = c.RCVDDATE
FROM TEST.CASES.CALLS c
Where TEST.CASES.ADDRESS.CALL_NUMBER = c.CALL_NUMBER;
Individually they work fine. But when I execute them in a script it throws an error:
Msg 207, Level 16, State 1, Line 5
Invalid column name 'RCVDDATE'.
Am I having a problem with IntelliSense, or is it something else? If you have any suggestions on how I can get the script to run in SQL Server, please advise.
You need a GO statement to separate your batches. From https://learn.microsoft.com/en-us/sql/t-sql/language-elements/sql-server-utilities-statements-go?view=sql-server-2017
SQL Server applications can send multiple Transact-SQL statements to an instance of SQL Server for execution as a batch. The statements in the batch are then compiled into a single execution plan. Programmers executing ad hoc statements in the SQL Server utilities, or building scripts of Transact-SQL statements to run through the SQL Server utilities, use GO to signal the end of a batch.
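Applied to the script above, that means putting GO between the ALTER TABLE and the UPDATE, so the UPDATE batch is compiled only after the column exists (GO is recognised by SSMS and sqlcmd, not by the database engine itself):
ALTER TABLE TEST.CASES.ADDRESS
ADD RCVDDATE DATE;
GO

UPDATE TEST.CASES.ADDRESS
SET RCVDDATE = c.RCVDDATE
FROM TEST.CASES.CALLS c
WHERE TEST.CASES.ADDRESS.CALL_NUMBER = c.CALL_NUMBER;
GO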

Copy Table from a Server and Insert into another Server: What is wrong with this T-SQL query?

I am using SQL Server 2014. I have created the following T-SQL query, which I uploaded to my local SQL Server to run as a job process on a daily basis at a specific time. However, I noticed that it failed to run. If I run it manually in SSMS, it runs correctly.
What is preventing the query from running as an automated process? Is it a syntax issue?
USE MyDatabase
GO
DELETE FROM ExchangeRate -- STEP 1
;WITH MAINQUERY_CTE AS ( --STEP 2
SELECT *
FROM (
SELECT *
FROM [178.25.0.20].HMS_ARL.dbo.ExchangeRate
) q
)
INSERT INTO ExchangeRate --STEP 3
SELECT *
FROM MAINQUERY_CTE
Basically, the function of the query is to copy a table named ExchangeRate from the live server and paste its contents into a table of the same name (which already exists on my local server).
Error Log shows the following message:
Description: Executing the query "USE MyDatabase DELETE FROM ExchangeRate..." failed with the following error: "Access to the remote server is denied because no login-mapping exists.".
Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
End Error
DTExec: The package execution returned DTSER_FAILURE (1).
Started: 10:59:30 AM  Finished: 10:59:30 AM  Elapsed: 0.422 seconds.
The package execution failed.
NOTE: The step was retried the requested number of times (3) without succeeding. The step failed.
Maybe you have to create a Linked Server on your local server pointing to the remote server?
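If that is the case, a rough sketch of the setup is below; the provider, remote user name, and password are placeholders, and the login mapping in the second step is what the "no login-mapping exists" error points at, since the scheduled job likely runs under the SQL Server Agent service account rather than your own login:
-- Register the remote server as a linked server (skip if [178.25.0.20] already exists)
EXEC sp_addlinkedserver
    @server = N'178.25.0.20',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'178.25.0.20';

-- Map local logins to a remote login (placeholder user name and password)
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'178.25.0.20',
    @useself = N'FALSE',
    @locallogin = NULL,   -- NULL = mapping applies to all local logins
    @rmtuser = N'remote_user',
    @rmtpassword = N'remote_password';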