Compare ms-sql database schemas on Ubuntu/Linux

Is there any command in ms-sql (on linux) to compare schemas between two databases?

I have very similar needs (I currently use PostgreSQL on Linux), and since it doesn't necessarily have to be an ms-sql command, I have two possible solutions:
Solution 1:
Use mssql-scripter from Microsoft (https://github.com/Microsoft/mssql-scripter)
You can install mssql-scripter with, for example:
pip install mssql-scripter
and then execute the following commands:
$ mssql-scripter -S serverName -d databaseSource -U user > ./source.sql
$ mssql-scripter -S serverName -d databaseTarget -U user > ./target.sql
$ diff source.sql target.sql
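If the comparison has to be repeated regularly, the three commands above can be wrapped in a small shell script. This is only a sketch; the server, user and database names are placeholders, and you will be prompted for the password twice unless you also supply it on the command line:
#!/bin/sh
# Sketch: dump both schemas with mssql-scripter and diff them.
# serverName, user, databaseSource and databaseTarget are placeholders.
server='serverName'
user='user'
mssql-scripter -S "$server" -d databaseSource -U "$user" > source.sql
mssql-scripter -S "$server" -d databaseTarget -U "$user" > target.sql
diff -u source.sql target.sql > schema_diff.txt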
Solution 2:
If you have the possibility to use a desktop environment (as I do), I would use a comparison tool, which is much more user friendly in my opinion.
TiCodeX SQL Schema Compare (https://www.ticodex.com) is a nice tool that runs on Linux, Windows and Mac and can compare the schemas of MS-SQL, MySQL and PostgreSQL databases. It's easy to use and effective, and it may help you.
In order to use it:
Configure the source db (specifying servername, username, password, etc...)
Configure the target db
There are options in case you want to exclude database objects, or change the output
Press the comparison button
You will get the differences between the two databases, and you can also generate the migration scripts needed to make the target db identical to the source.

It can perhaps be done indirectly via
sqlpackage for Linux.
First, create a dacpac of each database:
sqlpackage.exe /Action:Extract /SourceServerName:XLW-CNU415CD8B /SourceDatabaseName:AdventureWorks2012 /TargetFile:AdventureWorks2012_v1.dacpac /p:IgnoreExtendedProperties=True /p:IgnorePermissions=False /p:ExtractApplicationScopedObjectsOnly=True
Then compare the dacpacs:
sqlpackage /a:DeployReport /sf:AdventureWorks2012_v1.dacpac /tf:AdventureWorks2012_v2.dacpac /tdn:AdventureWorks2012.db /op:AdventureWorks2012_v1.xml
Please note that this example is based on the Windows version of the tool; I assume the Linux port has the same list of arguments.
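On Linux the binary is invoked as sqlpackage (without the .exe). Assuming the Linux port does accept the same arguments, the two steps above might look roughly like the sketch below, with SQL authentication supplied via /SourceUser and /SourcePassword; the server, database, user and file names are placeholders:
$ sqlpackage /Action:Extract /SourceServerName:myserver /SourceDatabaseName:MyDb \
    /SourceUser:myuser /SourcePassword:mypassword \
    /TargetFile:MyDb_v1.dacpac /p:ExtractApplicationScopedObjectsOnly=True
$ sqlpackage /a:DeployReport /sf:MyDb_v1.dacpac /tf:MyDb_v2.dacpac \
    /tdn:MyDb /op:MyDb_diff_report.xml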

Related

PostgreSQL Query To Create A Directory

Files are being written to a directory using the COPY query:
Copy (SELECT * FROM animals) To '/var/lib/postgresql/data/backups/2020-01-01/animals.sql' With CSV DELIMITER ',';
However if the directory 2020-01-01 does not exist, we get the error
could not open file "/var/lib/postgresql/data/backups/2020-01-01/animals.sql" for writing: No such file or directory
The PostgreSQL server is running inside a Docker container with the volume mapping /mnt/backups:/var/lib/postgresql/data/backups
The Copy query is being sent from a Node.js app outside of the Docker container.
The mapped host directory /mnt/backups was created by Docker Compose and is owned by root, so the Node.js app sending the COPY query is unable to create the missing directories due to insufficient permissions.
The backup file is meant to be transferred out of the Docker container to the Docker host.
Question: Is it possible to use an SQL query to ask PostgreSQL 11.2 to create a directory if it does not exist? If not, how would you recommend the directory creation be done?
Using Node.js 12.14.1 on an Ubuntu 18.04 host, and PostgreSQL 11.2 inside the container with Docker 19.03.5.
An easy way to solve it is to create the file directly on the client machine. Using STDOUT with COPY, you can redirect the query output to the client's standard output, catch it, and save it in a file. For instance, using psql on the client machine:
$ psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > file.csv
Creating the output directory in case it does not exist:
$ mkdir -p /mnt/backups/2020-01/ && psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv
On a side note: try to avoid exporting files onto the database server. Although it is possible, I consider it a bad practice. Doing so, you will either write a file into the postgres system directories or give the postgres user permission to write somewhere else, and that is something you shouldn't be comfortable with. Export data directly to the client, either using COPY as I mentioned or following the advice from @Schwern. Good luck!
Postgres has its own backup and restore utilities which are likely to be a better choice than rolling your own.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
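For instance, once a custom-format archive exists, you can inspect its contents and restore just a single table from it; a quick sketch, where the archive, database and table names are placeholders:
$ pg_restore --list /path/to/backup.dump
$ pg_restore -d target_database --table=animals /path/to/backup.dump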
A simple backup rotation script might look like this:
#!/bin/sh
table='animals'
url='postgres://username@host:port/database_name'
date=`date -Idate`
file="/path/to/your/backups/$date/$table.sql"
mkdir -p `dirname $file`
pg_dump $url -w -Fc --table=$table -f $file
To avoid hard-coding the database password, -w means pg_dump will not prompt for a password and will instead look for a password file. Or you can use any of the many Postgres authentication options.
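The password file is ~/.pgpass, with one hostname:port:database:username:password entry per line, and it must not be group- or world-readable; a minimal sketch with placeholder values:
$ cat >> ~/.pgpass <<'EOF'
host:5432:database_name:username:secret_password
EOF
$ chmod 600 ~/.pgpass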

Unable to run .sql file in SQL Server

I have a 20 GB .sql dump file and I am trying to run it in MySQL Workbench using Run Script; after successful execution, I'll use SSMA to migrate the data from MySQL Workbench to SQL Server. I have migrated data this way many times successfully, however for a 20 GB file it seems very time-consuming. Please let me know if there is any alternate way to achieve this more quickly. I have followed the following link:
Steps to migrate mysql tables to sql server using SSMA!
From your title, "unable to run .sql file in SSMS", and "I have a .sql dump file 20 gb": are you trying to open a 20 GB .sql file in SSMS? That's never going to work. SSMS is a 32-bit application, so the maximum addressable memory is 2 GB. If you want to run your .sql file, I suggest using sqlcmd.
Open up PowerShell, and then run the command below, replacing the appropriate parts:
sqlcmd -S {Server Name/ServerIP} -U {Your Login} -i {Your full path to your script}
You'll be prompted for your password and then the file will be run. So, as an example, you might run:
sqlcmd -S svSQL2017 -U Larnu -i \\svFileServer\SQLShare\Scripts\BigBatchFile.sql
If you are using integrated security, then don't pass the -U parameter for the command.
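For example, with integrated security the same call might look like this; -E explicitly requests a trusted connection (it is also the default when -U is omitted):
sqlcmd -S svSQL2017 -E -i \\svFileServer\SQLShare\Scripts\BigBatchFile.sql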
Edit: This answer is not relevant to the OP's question, as they were using "SSMS" as a synonym for SQL Server, which it is not. I have left this here for the moment so the OP can review my comments, and I will likely remove this answer at a later point.

Teradata client on Unix Solaris

I deploy some .bteq and .sql scripts on a Teradata database. To do this, I use a client on my desktop called BTEQWin, version 13.10.0.03.
I get the .bteq/.sql files from a version control tool like PVCS/SVN, and once the files are in my workspace folder, all I do is drag and drop them from the Windows file browser into the BTEQWin client (which I connect to a database before the drag and drop, in order to run those scripts).
Now, I have to automate this whole process in UNIX.
I have written a SHELL KSH/BASH script which is getting all the .bteq/.sql from a TAG/LABEL in the version control tool to a given UNIX folder. Now, all I need to do is the pass these files one by one (i'll take care of the order) to Teradata client.
My question: what client do I need to ask the Unix admin team to install on the Unix server, so that I can run something like the following:
someTeraDataCommand -u username -p password -h hostname -d database -f filenametoexectue | tee output_filename.log
Here, someTeraDataCommand is the client/executable that will let me run Teradata scripts (like I was doing with BTEQWin on my desktop in a GUI session). The other parameters would be the username, the password, which database to connect to on which server, and which file to run; alternatively, the file could be passed to the command using the "<" operator on the command line.
Any idea which client to use?
Assuming the complete Teradata Tools and Utilities package is installed on your UNIX server (which will have the connectivity tools to talk to Teradata), you should have access to bteq from the command line. Something like this:
bteq < script_file > output_file
Your script file should contain a .LOGON statement to establish the connection:
.LOGON yourTDPID/your_account,your_pw
You might also need to use other commands to set your default database or non-default session values.
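A minimal script_file along those lines might look like this; the TDPID, account, database and table names are placeholders:
.LOGON yourTDPID/your_account,your_pw;
DATABASE your_default_db;
SELECT COUNT(*) FROM some_table;
.QUIT 0;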
Another option would be to combine the SQL and call to BTEQ in a Korn shell script:
#!/usr/bin/ksh
##############
SHELL_NAME=`basename $0`
PRG_NAME=`basename ${SHELL_NAME} .ksh`
LOG_FILE=${PRG_NAME}.log
OUT_FILE=${PRG_NAME}.out
#
bteq <<EOBTQ > ${LOG_FILE} 2>&1
.LOGON {TDPID}/{USERID},{PWD};
--.RUN file=${LOGON}
/* Add your SQL/BTEQ commands here */
.QUIT 0;
EOBTQ
Edit
The double hyphen indicates a single-line comment. Typically in a UNIX environment you do not leave your password in plain text inside a KSH script. Instead, the .RUN command would reference a text file, kept in a suitably secured location, containing the .LOGON {TDPID}/{USERID},{PWD}; command.
The .RUN command in BTEQ allows you to reference another text file containing a series of valid BTEQ commands that you want to run in the current BTEQ script.
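For example (the path and credentials are placeholders), the logon file could be created once and locked down, and then referenced from any BTEQ script in place of the plain-text .LOGON line:
$ cat > /secure/location/logon.btq <<'EOF'
.LOGON yourTDPID/your_account,your_pw;
EOF
$ chmod 600 /secure/location/logon.btq
Inside the BTEQ script you would then use:
.RUN file=/secure/location/logon.btq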
The easiest way to set up the Solaris TTU is to request root sudo and run an interactive installation with the defaults as root. That would cure all client issues.

Creating a new Trac project via trac-admin initenv

I'm somewhat new to Trac.
I'm running trac version 0.11.7 on an ubuntu system.
I'm trying to create another project via the following command:
"trac-admin /var/lib/trac/shipping_tracker initenv".
After answering the various questions, the program fails and returns an error
( see: http://pastebin.com/yijzpB3i ) "Table 'system' already exists"
Does this mean that every time I need to create a new project, I'll have to go into
the MySQL database and create a new database, like trac1, trac2, etc.?
I did notice this particular ticket (http://trac.edgewall.org/ticket/5138) where
someone states you have to create a new database for each project. Is this correct?
Thank you.
--Mike
Every Trac environment, being a completely self-contained space, uses a separate database. So yes, you need to create a new database for each environment (although it might be a bad idea to name them trac1, trac2 etc.).
If you want to create new environments often, what you really need is probably multi-project support, which allows you to have different projects within one environment. However, it is still not done as of Trac 0.13, and is planned for 0.14.
You might also want to read about various ideas on having multiple projects with Trac. One of them deals with making Trac store multiple environments in a single database, though it might be outdated and probably breaks automatic updates.
I am using Trac 1.0, running as a stand-alone server, and in order to run multiple projects on one Trac installation you still need to set up a new environment using
trac-admin /path/to/trac/yournewproject initenv
... then create a .htpasswd file in the /path/to/trac/yournewproject dir and add users using
htpasswd /path/to/trac/yournewproject/.htpasswd newuser
(or copy an existing .htpasswd file there) ... and then restart trac with a command similar to the following:
python /path/to/tracd --user=yourlinuxuser --group=yourlinuxgroup -d \
-b hostname -p 8000 \
--basic-auth=oldproject,/path/to/trac/oldproject/.htpasswd,realmname \
--basic-auth=yournewproject,/path/to/trac/yournewproject/.htpasswd,realmname \
/path/to/trac/oldproject \
/path/to/trac/yournewproject
This is valid in case you are using the same type of basic authentication as I do.
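If all the environments live under one parent directory, tracd can also be pointed at that directory instead of listing every project explicitly; a sketch assuming the same per-project .htpasswd setup as above (paths are placeholders):
python /path/to/tracd --user=yourlinuxuser --group=yourlinuxgroup -d \
    -b hostname -p 8000 \
    --basic-auth=oldproject,/path/to/trac/oldproject/.htpasswd,realmname \
    --basic-auth=yournewproject,/path/to/trac/yournewproject/.htpasswd,realmname \
    --env-parent-dir /path/to/trac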

Is there a way to directly compress/zip the result from a SQL query?

Of late I have been dumping relatively large tables using SSMS. The usual way is to set Query -> Results To -> File, hit Execute, choose a file and let the SQL query run. After it finishes, I usually zip the file and then transfer it to my local machine. This has the obvious problem of the host machine running out of space during overnight SQL queries.
I was wondering if there is a way to compress the output from SSMS directly, without having to wait until it dumps the results from the entire query. Any suggestions? The host machine is pretty restricted in what it allows me to run on it, so a suggestion that requires minimal third-party software would be great.
Run the queries from sqlcmd instead and pipe the output into a command-line zip tool (you'll need to install one, see What's a good tar utility for Windows?). Or you can use PowerShell, which can zip out of the box, including piped input, see Compress Files with Windows PowerShell then package a Windows Vista Sidebar Gadget; this requires no additional tools as PowerShell is already on your host server (although on second read I think the PS solutions, as in the link, still require a deflated file first and cannot compress on the fly).
Sample query using sqlcmd and 7zip:
sqlcmd -S <SERVER> -s <COLUMNSEP> -Q "SELECT ..." | .\7za.exe a -si <FILENAME>
Remember to use the -Q (run query and exit) and not the -q (run query) or else this won't work.
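For instance, a filled-in version might look like the following; the server, database, table and archive names are all placeholders:
sqlcmd -S MYSERVER -d MyDatabase -s "," -Q "SELECT * FROM dbo.BigTable" | .\7za.exe a -si BigTable.csv.7z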