How do I get SQL out of knex scripts?

I have a set of scripts that build a database using knex. What is the minimum effort possible to build this database or get the SQL for building it? I do not have Node.js set up for myself and I am new to both Node.js and Knex, but I do know SQL.

What is the minimum effort possible to build this database or get the SQL for building it?
Your 2 options are either:
Install Node on your local machine (very simple; see: link) and run the scripts against a local database (this may or may not be simple, depending on the amount of environment setup required). Then you can use your DB to dump the table creation SQL if you want to build the tables on a different database.
Review the Knex scripts and reverse engineer them. The Knex table creation functions are not complicated to understand, and you can reference the Knex API here.
I would recommend giving #1 a try, and if it takes you more than an hour or two, just do #2, unless there are 50-ish or more tables, in which case work your way through #1 anyway.
PS: If you run the Knex scripts, you can also turn on debugging in the Knex connection configuration, and the SQL being executed will be output to the console.

Related

Use Liquibase autogenerated xml for Corda Enterprise DB migration

I switched to Corda Enterprise mainly to try how it handles automated database migration.
In the documentation here it says that tools-database-manager generates only the SQL version of the Liquibase script for the initial DB, and that the SQL version is database specific, so it should not be used for production.
But it is possible to generate the XML as well with the liquibase cmd using this command:
/snap/bin/liquibase --url="jdbc:h2:tcp://localhost:10039/node" --driver=org.h2.Driver --classpath=/home/corda/Downloads/h2.jar generateChangeLog
which I did, and then I had to remove all the changelogs which are related to Corda internal tables, leaving only the ones that are my own, and it seems everything works.
So the question is - might this approach have some hidden dangers that I don't know about? Otherwise, why did the Corda team develop tools-database-manager, and why don't they yet support XML generation with tools-database-manager?
And this leads to another question - what if I, for example, forget to include one of my tables in the initial script? It seems Corda does not complain about it. Won't my table be created? Will I ever be able to migrate that table if it is missing in the initial script?
Firstly, tools-database-manager is a helper tool available to make it easy for developers to perform database migration.
Let's say you have 2 nodes in your network, each using a different database. PartyA uses PostgreSQL and PartyB uses Oracle. If PartyA uses this tool to create the migration script by connecting to PostgreSQL, it will output SQL statements specific to PostgreSQL.
Hence it is not portable, and that is why the generated script is said to be database specific.
Also, you do not want to blindly trust a script and fire it directly at your production database: it contains DDL statements. So it is strongly recommended that every time a script is generated, you make sure you know what the script is doing by manually looking into it.
There are a lot of enhancements going on in this space, supporting XML for migration script being one of them.
As mentioned earlier, you should manually look at the migration script. If you forget to add one of your tables, Corda will not complain. It will fail sometime later, when your code tries to access this table.
Yes, you can stop the node and create the table again by adding a create table script.

How to manage/track changes to SQL Server database without compare tool

I'm working on a project as an outsourced developer, where I don't have access to the testing and production servers, only the development environment.
To deploy changes I have to create SQL scripts containing the changes to make on each server for the feature I wish to deploy.
Examples:
When I make each change on the database, I save the script to a folder, but sometimes this is not enough, because I sent a script to alter a view but forgot to include new tables that I created in another feature.
Another situation would be changing a table via the SSMS GUI, forgetting to create a script with the changed or new columns, and later having to send a script to update the table in testing.
Since some features can be sent for testing and others straight to production (example: queries to feed Excel files), it's hard to keep track of what I have to send to each environment.
Since the deployment team just executes the scripts I send them to update the database, how can I manage/keep track of changes to a SQL Server database without a compare tool?
[Edit]
The current tools that I use are SSMS, VS 2008 Professional and TFS 2008.
I can tell you how we at xSQL Software do this using our tools:
the deployment team has an automated process that takes a schema snapshot of the staging and production databases and dumps the snapshots nightly onto a share that the development team has access to.
every morning the developers have up-to-date schema snapshots of the production and staging databases available. They use our Schema Compare tool to compare the dev database with the staging/production snapshot and generate the change scripts.
Note: to take the schema snapshot you can either use the Schema Compare tool or our Schema Compare SDK.
I'd say you can keep structural copies of the test and production servers as additional development databases, and keep in mind to always apply changes to them whenever you send something.
On these databases you can establish triggers that will capture all DDL events and put them into a table with getdate() attached. With that you should be able to handle changes pretty easily, and some simple compare will also be easier to apply.
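A minimal sketch of such a database-level DDL trigger, assuming SQL Server (the audit table and trigger names here are made up for illustration):

-- Hypothetical audit table; adjust names and columns to your conventions.
CREATE TABLE dbo.DDLChangeLog (
    EventDate   datetime      NOT NULL DEFAULT (getdate()),
    LoginName   nvarchar(128) NOT NULL,
    EventType   nvarchar(100) NOT NULL,
    ObjectName  nvarchar(256) NULL,
    TsqlCommand nvarchar(max) NULL
);
GO

-- Database-level trigger that logs every DDL event with a timestamp.
CREATE TRIGGER trg_CaptureDDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e xml = EVENTDATA();
    INSERT INTO dbo.DDLChangeLog (EventDate, LoginName, EventType, ObjectName, TsqlCommand)
    VALUES (
        getdate(),
        @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
GO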
Look into Liquibase, especially the SQL format, and see if that gives you what you want. I use it for our database and it's great.
You can store all your objects in separate scripts, but when you do a Liquibase "build" it will generate one SQL script with all your changes in it. The really important part is getting your Liquibase configuration to put the objects in the correct dependency order - that is, tables get created before foreign key constraints, for example.
http://www.liquibase.org/
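For reference, a changelog in Liquibase's SQL format is just a plain SQL file with special comments. A minimal sketch (the author name, changeset ids and table names are invented), showing tables created before the foreign key constraint that depends on them:

--liquibase formatted sql

--changeset jdoe:1
CREATE TABLE customer (
    id   int PRIMARY KEY,
    name varchar(100) NOT NULL
);

--changeset jdoe:2
CREATE TABLE customer_order (
    id          int PRIMARY KEY,
    customer_id int NOT NULL
);

--changeset jdoe:3
ALTER TABLE customer_order
    ADD CONSTRAINT fk_order_customer FOREIGN KEY (customer_id) REFERENCES customer (id);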

Schema Change/Update script for Database deploy

I have a need to change the database schema. I'm planning to write schema change and update scripts for tracking database changes and applying them. I followed
Versioning Databases – Change Scripts
for a start, and I got the gist of what he is getting at; however, since I haven't worked much on SQL scripts before, a tutorial or something to start with would be good. I did some research on the web and found that most people use automatic comparison tools to generate the script, which I don't want to do, for the obvious reason that I won't learn anything in the process.
I'm looking for some tutorials/links on how to write change scripts and update scripts - especially update scripts, as I couldn't find even a single script or pseudo-code example on how to update a schema by comparing against a SchemaChangeLog table, connecting to the table using scripts...
Thanks in advance!
I would recommend using a database migration tool like Liquibase.
Each change to the database is captured as a changeset, and Liquibase will automatically keep track of which changesets have been applied to the database, enabling updates and rollbacks.
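Liquibase records each applied changeset in its DATABASECHANGELOG tracking table, so you can always query which changesets have already run against a given database, for example:

-- Standard columns of Liquibase's tracking table (exact casing may vary by database).
SELECT id, author, filename, dateexecuted, md5sum
FROM DATABASECHANGELOG
ORDER BY dateexecuted;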

SQL Server database change workflow best practices

The Background
My group has 4 SQL Server Databases:
Production
UAT
Test
Dev
I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production.
The Problem
The entire process is awkward for a few reasons.
Each person must manually track their changes. If I update, add, or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the testers' time anyway.
Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know).
We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.
The Question
People have been doing this kind of work for decades, so I imagine there has to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process?
For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.
I should clarify that I'm not necessarily looking for diff tools. If that's the best way to sync our environments then it's fine, but if there's a better way, that's what I'm looking for.
An example of something that does what I want really well is migrations in Ruby on Rails. Dead simple syntax, all changes are well documented automatically and by default, and determining which migrations need to run is almost trivially easy. I'd love it if there were something similar to this for SQL Server.
My ideal solution is 1) easy and 2) hard to mess up. Rails Migrations are both; everything I've done so far on SQL Server is neither.
Within our team, we handle database changes like this:
We (re-)generate a script which creates the complete database and check it into version control together with the other changes. We have 4 files: tables, user defined functions and views, stored procedures, and permissions. This is completely automated - only a double-click is needed to generate the script.
If a developer has to make changes to the database, she does so on her local db.
For every change, we create update scripts. Those are easy to create: the developer regenerates the db script of their local db. All the changes are now easy to identify thanks to version control. Most changes (new tables, new views, etc.) can simply be copied to the update script, while other changes (adding columns, for example) need to be written manually (see the sketch after this list).
The update script is tested either on our common dev database, or by rolling back the local db to the last backup - which was created before starting to change the database. If it passes, it's time to commit the changes.
The update scripts follow a naming convention so everybody knows in which order to execute them.
This works fairly well for us, but it still needs some coordination if several developers heavily modify the same tables and views. This doesn't happen often, though.
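As an illustration of such an update script (the file name, tables and columns below are invented; the point is that new objects can be copied verbatim from the regenerated creation script, while changes to existing tables are written by hand):

-- Hypothetical update script: 2013-05-07_invoice_changes.sql

-- New table: copied straight from the regenerated database creation script.
CREATE TABLE dbo.InvoiceNote (
    InvoiceNoteId int IDENTITY(1, 1) PRIMARY KEY,
    InvoiceId     int           NOT NULL,
    NoteText      nvarchar(500) NOT NULL
);

-- Changed table: written manually, since the full script only shows the end state.
ALTER TABLE dbo.Invoice ADD DiscountPercent decimal(5, 2) NOT NULL DEFAULT (0);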
The important points are:
database structure is only modified by scripts, except for the local developer's db. This is important.
SQL scripts are versioned by source control - the db can be created as it was at any point in the past
database backups are created regularly - at least before making changes to the db
changes to the db can be done quickly - because the scripts for those changes are created relatively easily.
However, if you have a lot of long-lasting development branches for your projects, this may not work well.
It is by no means a perfect solution, and some special precautions need to be taken. For example, if there are updates which may fail depending on the data present in a database, the update should be tested on a copy of the production database.
In contrast to Rails migrations, we do not create scripts to reverse the changes of an update. But this isn't always possible anyway, at least with respect to the data (the content of a dropped column is lost even if you recreate the column).
Version Control and your Database
The root of all evil is making changes in the UI. SSMS is a DBA tool, not a developer one. Developers must use scripts to make any sort of change to the database model/schema. Versioning your metadata and having an upgrade script from every version N to version N+1 is the only approach that is proven to work reliably. It is the solution SQL Server itself deploys to keep track of metadata changes (resource db changes).
Comparison tools like SQL Compare or vsdbcmd and .dbschema files from VS Database projects are just last resorts for shops that fail to follow a proper versioned approach. They work in simple scenarios, but I've seen them all fail spectacularly in serious deployments. One simply does not trust a tool to make a change to a 5+ TB table if the tool tries to copy the data...
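A hedged sketch of what one such N to N+1 upgrade script might look like, assuming a single-row version table (the table name, version numbers and file naming are invented):

-- Hypothetical guard at the top of an upgrade script, e.g. upgrade_v41_to_v42.sql.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE version = 41)
    RAISERROR ('Expected schema version 41; aborting upgrade.', 16, 1)
ELSE
BEGIN
    -- ... DDL changes that take the schema to version 42 go here ...
    UPDATE dbo.SchemaVersion SET version = 42
END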
RedGate sells SQL Compare, an excellent tool to generate change scripts.
Visual Studio also has editions which support database compares. This was formerly called Database Edition.
Where I work, we abolished the Dev/Test/UAT/Prod separation long ago in favor of a very quick release cycle. If we put something broken into production, we will fix it quickly. Our customers are certainly happier, but in the risk-averse corporate enterprise it can be a hard sell.
There are several tools available to you. One is from Red Gate, called SQL Compare. Awesome and highly recommended. SQL Compare will let you diff the schemas of two databases and even build the SQL change scripts for you.
Note they have been working on a SQL Server source control product for a while now as well.
Another (if you're a Visual Studio shop) is the schema and data compare features that are part of Visual Studio (not sure which versions).
Agree that SQL Compare is an amazing tool.
However, we do not make any changes to the database structure or objects that are not scripted and saved in source control just like all other code. Then you know exactly what belongs in the version you are promoting because you have the scripts for that particular version.
It is a bad idea anyway to make structural changes through the GUI. If you have a lot of data, it is far slower than using ALTER TABLE, at least in SQL Server. You only want to use tested scripts to make changes to prod as well.
I agree with the comments made by marapet, where each change must be scripted.
The problem that you may be experiencing, however, is creating, testing and tracking these scripts.
Have a look at the patching engine used in DBSourceTools.
http://dbsourcetools.codeplex.com
It's been specifically designed to help developers get SQL Server databases under source-code control.
This tool will allow you to baseline your database at a specific point, and create a named version (v1).
Then, create a deployment target - and increment the named version to v2.
Add patch scripts to the Patches directory for any changes to schema or data.
Finally, check the database and all patches into source-code control, to distribute to other devs.
What this gives you is a repeatable process to test all patches to be applied from v1 to v2.
DBSourceTools also has functionality to help you create these scripts, i.e. schema compare or script data tools.
Once you are done, simply send all of the files in the patches directory to your DBA to upgrade from v1 to v2.
Have fun.
Another "Diff" tool for databases:
http://www.xsqlsoftware.com/Product/Sql_Data_Compare.aspx
Keep the database version in a versioning table (see the sketch after this list).
Keep the file name of each script that was successfully applied.
Keep an md5 sum of each SQL script that has been applied. It should ignore whitespace when calculating the md5 sum. It must be effective.
Keep info about who applied a script.
Keep info about when a script was applied.
The database should be verified on application start-up.
New SQL scripts should be applied automatically.
If the md5 sum of a script that was already applied has changed, an error should be thrown (in production mode).
Once a script has been released it must not be changed. It must be immutable in a production environment.
Scripts should be written in a way that allows them to be applied to different types of database (see Liquibase).
Since most DDL statements auto-commit on most databases, it is best to have a single DDL statement per SQL script.
Each DDL statement should be written so it can be executed several times without errors. This really helps in dev mode, when you may edit a script several times. For instance, create a new table only if it does not exist, or even drop the table before creating a new one. In dev mode, with a script that has not been released, you can change it, clear the md5 sum for that script, and rerun it.
Each SQL script should be run in its own transaction.
Triggers/procedures should be dropped and recreated after each db update.
SQL scripts are kept in a versioning system like SVN.
The name of a script contains the date when it was committed, the existing (Jira) issue id, and a small description.
Avoid adding rollback functionality in scripts (Liquibase allows you to do that). It makes them more complicated to write and support. If you use exactly one DDL statement per script, and DML statements are run within a transaction, even a failing script will not be a big trouble to resolve.
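A minimal sketch of the versioning table and the kind of re-runnable script described above, assuming SQL Server syntax (table and column names are invented):

-- Hypothetical tracking table: one row per script that has been applied.
CREATE TABLE schema_change_log (
    script_name varchar(255) NOT NULL PRIMARY KEY,  -- e.g. 2013-05-07_JIRA-123_add_customer.sql
    md5_sum     char(32)     NOT NULL,              -- checksum of the script, whitespace ignored
    applied_by  varchar(128) NOT NULL,
    applied_at  datetime     NOT NULL
);

-- Example of a re-runnable script body: exactly one DDL statement, guarded so it
-- can be executed several times without errors.
IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = 'customer')
    CREATE TABLE customer (
        id    int PRIMARY KEY,
        email varchar(255) NULL
    );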
This is the workflow we have been using successfully:
Development instance: SQL objects are created/updated/deleted in the DB using MSSQL Studio, and all operations are saved to scripts we include in each version of our code.
Moving to production: we compare the schema between the dev and prod DBs using SQL Schema Compare in Microsoft Visual Studio, and we update prod using the same tool.

Clone entire database with a SP

I'm trying to find out if this is possible, but so far I haven't found any good solutions. What I would like to achieve is to write a stored procedure that can clone a database, but without the stored data. That means all tables, views, constraints, keys and indexes should be included, but without any data. Can it be done?
Sure - your stored proc would have to read the system catalog views to find out what objects are in the database, determine their dependencies, and then create a single script or a collection of SQL scripts which re-create the database, and execute those.
It's possible - but not very nice or easy to do. Especially the dependencies between objects might cause more headaches than first meets the eye...
You could also:
use something like SQL Server Management Studio (if you're on SQL Server - you didn't specify) and create the scripts manually, and just re-execute them on a separate server
use a "diff" tool like Redgate SQL Compare to compare two servers and have the second one brought up to date
I've successfully used the Microsoft SQL Server Database Publishing Wizard for this purpose. It's pretty straightforward, no coding needed. Here's a sample call:
sqlpubwiz script -d DatabaseName -S ServerName -schemaonly C:\Projects2\Junk\DatabaseName.sql
I believe the default is to create both data and schema, but you can use the schemaonly parameter.
Download it here
In SQL Server you can roll through the system tables (sys.tables, sys.columns, etc.) and construct things one at a time. It's going to be very manual and error prone at the beginning, but it should become systematic pretty quickly.
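A rough starting point for that catalog-driven approach (SQL Server; this only handles plain columns and a handful of length-typed types - constraints, keys, defaults and indexes would each need similar treatment):

-- Build a bare CREATE TABLE statement per table from the catalog views.
SELECT
    'CREATE TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' (' +
    STUFF((
        SELECT ', ' + QUOTENAME(c.name) + ' ' + ty.name +
               CASE WHEN ty.name IN ('varchar', 'nvarchar', 'char', 'nchar')
                    THEN '(' + CASE WHEN c.max_length = -1 THEN 'max'
                                    WHEN ty.name IN ('nvarchar', 'nchar') THEN CAST(c.max_length / 2 AS varchar(10))
                                    ELSE CAST(c.max_length AS varchar(10)) END + ')'
                    ELSE '' END +
               CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END
        FROM sys.columns c
        JOIN sys.types ty ON ty.user_type_id = c.user_type_id
        WHERE c.object_id = t.object_id
        ORDER BY c.column_id
        FOR XML PATH('')
    ), 1, 2, '') + ');'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;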
Another way to do it is to write something in .Net using SMO. Check out this link:
http://www.sqlteam.com/article/scripting-database-objects-using-smo-updated