Wait for CockroachDB Command to Finish - slick-2.0

I currently have a .sql file that looks like this:
DROP VIEW IF EXISTS vw_example;
CREATE VIEW vw_example as
SELECT a FROM b;
When running this script as part of a Flyway migration, it fails if the view already exists, as if the CREATE statement is not waiting for the DROP VIEW IF EXISTS to finish.
I know SQL Server has a GO-style batch separator. Is there a way to tell CockroachDB to wait for the first command to complete?

As per the issue mentioned in the link, it is better to put the DROP and CREATE statements in separate migration files, since Flyway runs each migration in a single transaction.
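A minimal sketch of that split, assuming Flyway's default versioned-migration naming (the V1__/V2__ file names are illustrative):

-- V1__drop_vw_example.sql
DROP VIEW IF EXISTS vw_example;

-- V2__create_vw_example.sql
CREATE VIEW vw_example AS
SELECT a FROM b;

Each versioned migration then runs in its own transaction, so the DROP is committed before the CREATE begins.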

Related

Liquibase Update Command doesn't drop elements (tables, functions, procedures...) from my database even though the SQL script is absent from my solution

I use the Liquibase tool to manage a Postgres database. I work as follows:
I have a solution composed of different folders containing SQL scripts responsible for schema creation, table creation, type creation, procedure creation, etc. Then I have a changelog file in XML format, containing the following information:
<includeAll path="#{Server.WorkingDirectory}#/02 - Schema" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/03 - Types" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/04 - Tables" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/05 - Fonctions" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/06 - Stored Procedures" relativeToChangelogFile="false"/>
I run liquibase via command line :
liquibase --changeLogFile=$(Changelog.File.Name) --driver=$(Driver.Name) --classpath=$(Driver.Classpath) --url=$(BDD.URL) --username=$(BDD.Login) --password=$(BDD.Password) update
This enables Liquibase to take all the SQL scripts in the folders listed in the changelog file, compare them with the current database at $(BDD.URL), and generate a delta script containing all the SQL queries to be executed so that the database matches my solution.
This works well when I add new scripts (new tables or procedures) or modify existing ones: my database is correctly updated by the command line, as expected. BUT it does nothing when I delete a script from my solution.
To be more factual, here is what I want :
I have a SQL file containing the query "CREATE TABLE my_table" located in the folder "04 - Tables".
I execute the update command above, and it creates the table "my_table" in my database.
I finally decide I no longer want this table in my database. I would like to simply remove the corresponding SQL script from my solution and run the update command again, so that it automatically generates a "DROP TABLE my_table" and removes the table from my database. But this does not work: Liquibase records no change when I remove a SQL file (whereas it does when I add or modify one).
Does anyone know a solution to this? Is there a specific command to drop an element when there is no longer a "CREATE" query for it in a SQL solution?
Many thanks in advance for your help :)
You will need to explicitly write a script to drop the table.
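A hedged sketch of such a script, using Liquibase's formatted-SQL syntax (the changeset author and id below are illustrative placeholders):

--liquibase formatted sql
--changeset author:drop-my_table
DROP TABLE my_table;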
The other option is to roll back the change, IF you have specified the rollback SQL as part of your original SQL script.
There is a Pro version option to roll back a single update; with the free/community version, you can roll back the last few changes in sequence.
For example, "liquibase rollbackCount 5" will roll back the last 5 changes that were applied, but ONLY IF I had coded the needed rollback SQL as part of my script.
My sample SQL script that includes the rollback code is:
--rollback DROP TABLE test.user1; DROP TABLE test.cd_activity;
CREATE TABLE test.user1 (
user_type_id int NOT NULL
);
CREATE TABLE test.cd_activity (
activity_id integer NOT NULL,
userid int);

Mitigate Redshift Locks?

Hi, I am running ETL via Python.
I have a simple SQL file that I run from Python, like:
truncate table foo_stg;
insert into foo_stg
(
select blah,blah .... from tables
);
truncate table foo;
insert into foo
(
select * from foo_stg
);
This query sometimes takes a lock on a table that it does not release.
Because of this, other processes get queued up.
Currently I check which table holds the lock and kill the process that caused it.
What changes can I make in my code to mitigate such issues?
Thanks in advance!
The TRUNCATE is probably breaking your transaction logic. Recommend doing all truncates upfront. I'd also recommend adding some processing logic to ensure that each instance of the ETL process either: A) has exclusive access to the staging tables or B) uses a separate set of staging tables.
TRUNCATE in Redshift (and many other DBs) does an implicit COMMIT.
…be aware that TRUNCATE commits the transaction in which it is run.
Redshift tries to make this clear by returning the following INFO message to confirm success: TRUNCATE TABLE and COMMIT TRANSACTION. However, this INFO message may not be displayed by the SQL client tool. Run the SQL in psql to see it.
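One way to apply the "truncates upfront" suggestion, reusing the tables from the question (the explicit transaction around the loads is an assumption about the atomicity you want):

-- Run both TRUNCATEs first; each implicitly COMMITs,
-- so they no longer cut a later transaction in half.
truncate table foo_stg;
truncate table foo;

-- The two loads can then run inside one explicit transaction.
begin;
insert into foo_stg (select blah, blah from tables);
insert into foo (select * from foo_stg);
commit;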
In my case, I created a table for the first time and tried to load it from the stage table using an INSERT INTO ... SELECT c1,c2,c3 FROM stage; statement. I am running this from a Python script.
The table locks and the data does not load. Another interesting scenario: when I run the same INSERT SQL from the editor, it loads, and after that my Python script loads the same table without any locks. The lock happens only the first time. Not sure what the issue is.

SymmetricDS replication DDL statements

I am trying to replicate DDL statements (CREATE, ALTER, DROP and the like) between different databases with the help of SymmetricDS.
I've found this page, https://www.symmetricds.org/docs/how-to/sync-schema-ddl-changes, which says it can be done this way:
bin/symadmin -e root-000 --node=001 sync-triggers
bin/symadmin -e root-000 --node=001 send-schema
But I cannot figure out whether it is possible to replicate DDL statements automatically, i.e. I create a table on one database and it is created on the other automatically (without running sync-triggers and send-schema)?
Thanks.

MyBatis Migrations: does "migrate new" automatically generate the changes?

The migrate new command generates a SQL file:
-- // create blog table
-- Migration SQL that makes the change goes here.
-- //#UNDO
-- SQL to undo the change goes here.
Does the command detect my table and data changes and fill in the script automatically, for example with an ALTER TABLE query? Or do I have to fill in the scripts manually?
It was answered here:
Migrations does not do any introspection or reverse engineering of your database. You have to enter the DDL changes yourself.
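A hedged sketch of what a manually filled-in migration could look like, using the template above (the blog table and its columns are illustrative):

-- // create blog table
-- Migration SQL that makes the change goes here.
CREATE TABLE blog (
    id INT NOT NULL,
    title VARCHAR(255),
    PRIMARY KEY (id)
);

-- //#UNDO
-- SQL to undo the change goes here.
DROP TABLE blog;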

Can't recreate deleted SSAS database

I didn't like the DatabaseID of my SSAS database, so I decided to delete and recreate it.
I generated a create script, and then a delete script. I changed the ID in the create script to the one I want.
I ran the delete script. It ran successfully. Refreshed and verified the database has been deleted.
Now when I run the create script, I get:
"Either the user [MyUserName] does not have access to the [MyDatabaseName] database, or the database does not exist."
Well, no &*^% it doesn't exist, I'm trying to create it.
Googling hasn't yielded any results so far. Any ideas? Do I need to do some additional clean-up somewhere before I can recreate the database?
The create script is still "pointing at" the old cube. Close that query window and create a new one.