SQLite diff-script generator - SQL

Given two SQLite databases A and B, is there a tool that can generate the SQL commands that will convert A to B (or vice versa)? This must include insertions, deletions, and updates - and maybe also table alterations (though that's not important to me).
Possibly this tool is not even sqlite-specific.

Could this be what you are looking for?
SQLite Compare
Not sure what you mean by "Possibly this tool is not even sqlite-specific." But a SQL Server-specific one is available too.
SQL Data Compare

CompareData will let you visually compare two SQLite databases and/or sync them or generate a SQL sync script. It is free for comparing data; once the 30-day evaluation period expires, a license is required for syncing or generating the sync script.

Try this SQLite-Compare-Utility.
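For the record, the SQLite project itself ships a `sqldiff` command-line utility that generates exactly this kind of script. If you'd rather roll your own, here is a minimal sketch in Python using the stdlib sqlite3 module. It assumes both databases contain the table with identical columns and a single-column primary key, and it quotes values naively with repr() - fine for an illustration, not for production:

```python
import sqlite3

def diff_table(db_a, db_b, table, pk):
    """Emit SQL statements that transform `table` in database A into its
    state in database B. Assumes identical columns and a single-column
    primary key named `pk` in both databases."""
    con_a = sqlite3.connect(db_a)
    con_b = sqlite3.connect(db_b)
    cols = [r[1] for r in con_a.execute(f"PRAGMA table_info({table})")]
    key_idx = cols.index(pk)
    rows_a = {r[key_idx]: r for r in con_a.execute(f"SELECT * FROM {table}")}
    rows_b = {r[key_idx]: r for r in con_b.execute(f"SELECT * FROM {table}")}
    stmts = []
    for key in rows_a.keys() - rows_b.keys():      # rows only in A: delete
        stmts.append(f"DELETE FROM {table} WHERE {pk} = {key!r};")
    for key in rows_b.keys() - rows_a.keys():      # rows only in B: insert
        vals = ", ".join(repr(v) for v in rows_b[key])
        stmts.append(f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({vals});")
    for key in rows_a.keys() & rows_b.keys():      # rows in both: update if changed
        if rows_a[key] != rows_b[key]:
            sets = ", ".join(f"{c} = {v!r}"
                             for i, (c, v) in enumerate(zip(cols, rows_b[key]))
                             if v != rows_a[key][i])
            stmts.append(f"UPDATE {table} SET {sets} WHERE {pk} = {key!r};")
    return stmts
```

Running the emitted statements against A brings the table in line with B; table alterations would need a separate comparison of each database's sqlite_master.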

Related

Where can I download the Northwind sample database for SQLite?

After looking at:
Where can I download Northwind database for Postgresql?
it looks like the best place for Northwind data, outside of Microsoft itself, is:
https://code.google.com/archive/p/northwindextended/downloads
To what extent does this raw data match the Microsoft data? Is there any advantage in downloading directly from Microsoft?
I'll be using either MySQL or SQLite, mostly on Linux. The Microsoft site at least emphasized their SQL Server, of course.
See https://github.com/jpwhite3/northwind-SQLite3
"All the TABLES and VIEWS from the MSSQL-2000 version have been converted to Sqlite3 and included here. Also included are two versions prepopulated with data - a small version and a large version"
You can download an SQLite3 version of the database from
https://code.google.com/archive/p/northwindextended/downloads
Advantage: It is available immediately. Click and use.
If it's simply to get a sample database for general testing, then just use what's immediately available. If you require an exact copy to match testing results (for instance), then download and convert the MS data yourself to ensure exactness, since the file's header indicates modifications and added foreign keys, and contains no other certification of content. You really need to answer this for yourself based on your own requirements.
Without knowing anything about the conversion process, I would at least guess that the date values are not precisely the same, since SQLite has no native DateTime type; such values would have needed to be converted to string values and/or Julian numeric values. Of course the values themselves may represent the same dates and times, but processing of query data would certainly require special handling. BLOB values for images should also produce the same images, but retrieving and using those values will likely be different than getting them from other databases. I suppose there could be other data values/types to which different handling would apply, since SQLite really has no distinct, strict numeric types, rather just type affinities.
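The point about dates and type affinity is easy to demonstrate with the stdlib sqlite3 module (the table and column names here are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# SQLite has no native DATETIME type: a column declared DATETIME merely gets
# NUMERIC affinity, and a date string that isn't a valid number is stored as TEXT.
con.execute("CREATE TABLE orders (OrderDate DATETIME)")
con.execute("INSERT INTO orders VALUES ('1997-07-04 00:00:00')")
stored_type, = con.execute("SELECT typeof(OrderDate) FROM orders").fetchone()
print(stored_type)  # 'text'

# The built-in date functions interpret such strings; julianday() shows the
# alternative numeric representation mentioned above.
jd, = con.execute("SELECT julianday(OrderDate) FROM orders").fetchone()
```

So the same calendar date survives conversion, but queries that filtered or sorted on a native datetime column in SQL Server will need SQLite's date functions instead.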

SQL compare entire rows

In SQL Server 2008 I have some huge tables (200-300+ cols). Every day we run a batch job generating a new table with a timestamp appended to its name.
The tables have no PK.
I would like a generic way to compare 2 rows from two tables. Showing which cols having different values is sufficient, but showing the values would be perfect.
Thanks a lot
Thanks for the answers. I ended up writing my own C# tool to do the job - as I'm not allowed to install 3rd party software in my company.
There are several tools that do this for you.
My favorite is Red Gate SQL Compare.
If you don't have the money, use the open-source solutions. There are several:
DB Compare
OpenDBDiff
You could create RPT files from SELECT queries then use Beyond Compare to see the differences.
Also, red-gate has some tools to compare database tables, but I think they're expensive.
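The column-by-column comparison the asker wants is also trivial to express in any language, however wide the table. A minimal sketch in Python (the column names and sample rows are made up; with pyodbc the rows would come straight from two cursors):

```python
def diff_rows(row_a, row_b, columns):
    """Return {column: (value_a, value_b)} for every column whose values
    differ between two same-shaped rows. No PK needed - just two rows."""
    return {col: (a, b) for col, a, b in zip(columns, row_a, row_b) if a != b}

# Example: the same record from yesterday's and today's timestamped table.
cols = ["customer", "status", "balance"]
yesterday = ("acme", "open", 120.0)
today = ("acme", "closed", 120.0)
print(diff_rows(yesterday, today, cols))  # {'status': ('open', 'closed')}
```

The hard part on tables without a PK is deciding which row in one table corresponds to which row in the other; this sketch assumes you have already paired them up.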

Tool Compare the tables in two different databases

I am using Toad. Frequently I need to compare tables in two different test environments.
The tables present in them are the same but the data differs.
I just need to know what the differences are in the same tables in two different databases. Are there any tools which can be installed on Windows to compare them?
Take a look at SQL Compare and SQL Data Compare
There's a compare tool built into TOAD. Tools | Compare Data.
Open Source DiffKit will do this:
www.diffkit.org
I would definitely go with AdeptSQL if you're using MSSQL. It's the least good-looking but the most talented DB compare tool amongst the ones I've tried. It can compare both the structure and the data. It tells you which tables exist on one DB but not on the other, compares the structure and data of the common ones, and it can produce the script to synchronize the two. It's not free but has a 30-day trial (as far as I can remember).
Try dbForge Data Compare for SQL Server. It can compare and sync any databases, even very large ones. Quick, easy, always delivers a correct result.
Try it on your database and comment upon the product.
If you want to do it online, there is a free tool, DB Matcher, where you upload 2 .SQL files and it gives you the differences.
https://dbmatcher.com

Why use "Y"/"N" instead of a bit field in Microsoft SQL Server?

I'm working on an application developed by another mob and am confounded by the use of a char field instead of bit for all the boolean columns in the database. It uses "Y" for true and "N" for false (these have to be uppercase). The type name itself is then aliased with some obscure name like ybln.
This is very annoying to work with for a lot of reasons, not the least of which is that it just looks downright aesthetically unpleasing.
But maybe it's me that's stupid - why would anyone do this? Is it a database compatibility issue or some design pattern that I am not aware of?
Can anyone enlighten me?
I've seen this practice in older database schemas quite often. One advantage I've seen is that a CHAR(1) field can support more than just yes/no - e.g. 'Y', 'N', and 'M' for "maybe".
Other posters have mentioned that Oracle might have been used. The schema I referred to was in fact deployed on Oracle and SQL Server. It limited the usage of data types to a common subset available on both platforms.
They did diverge in a few places between Oracle and SQL Server but for the most part they used a common schema between the databases to minimize the development work needed to support both DBs.
Welcome to brownfield. You've inherited an app designed by old-schoolers. It's not a design pattern (at least not a design pattern with something good going for it), it's a vestige of coders who cut their teeth on databases with limited data types. Short of refactoring the DB and lots of code, grit your teeth and gut your way through it (and watch your case)!
Other platforms (e.g. Oracle) do not have a bit SQL type. In which case, it's a choice between NUMBER(1) and a single character field. Maybe they started on a different platform or wanted cross platform compatibility.
I don't like the Y/N char(1) field as a replacement to a bit column too, but there is one major down-side to a bit field in a table: You can't create an index for a bit column or include it in a compound index (at least not in SQL Server 2000).
Sure, you could discuss if you'll ever need such an index. See this request on a SQL Server forum.
They may have started development back with Microsoft SQL Server 6.5.
Back then, adding a bit field to an existing table with data in place was a royal pain in the rear. Bit fields couldn't be null, so the only way to add one to an existing table was to create a temp table with all the existing fields of the target table plus the bit field, then copy the data over, populating the bit field with a default value. Then you had to delete the original table and rename the temp table to the original name. Throw in some foreign key relationships and you've got a long script to write.
Having said that, there were always 3rd party tools to help with the process. If the previous developer chose to use char fields in lieu of bit fields, the reason, in a nutshell, was probably laziness.
The reasons are as follows (btw, they are not good reasons):
1) Y/N can quickly become "X" (for unknown), "L" (for likely), etc. What I mean is that I have personally worked with programmers who were so used to not collecting requirements correctly that they just started with Y/N as a sort of 'flag', with the superstition that it might need to expand (in which case they should use an int as a status ID).
2) "Performance" - but as was mentioned above, SQL indexes are ruled out if they are not 'selective' enough... a field that has only 2 possible values will never use that index.
3) Laziness. Sometimes developers want to output the letter "Y" or "N" directly to some visual display for human readability, and they don't want to convert it themselves :)
Those are all 3 bad reasons that I've heard/seen before.
I can't imagine any disadvantage in not being able to index a BIT column, as it would be unlikely to have enough distinct values to help the execution of a query at all.
I also imagine that in most cases the storage difference between BIT and CHAR(1) is negligible (is that CHAR an NCHAR? Does it store a 16-bit, 24-bit or 32-bit Unicode char? Do we really care?).
This is terribly common in mainframe files, COBOL, etc.
If you only have one such column in a table, it's not that terrible in practice (no real bit-wasting); after all, SQL Server will not let you say the natural WHERE BooleanColumn - you have to say WHERE BitColumn = 1, and IF @BitFlag = 1 instead of the far more natural IF @BooleanFlag. When you have multiple bit columns, SQL Server will pack them. The case of the Y/N should only be an issue if a case-sensitive collation is used, and to stop invalid data there is always the option of a constraint.
Having said all that, my personal preference is for bits and only allowing NULLs after careful consideration.
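On the constraint point: a CHECK constraint is enough to enforce the uppercase-Y/N rule regardless of collation. A runnable sketch using the stdlib sqlite3 module (the table and column names are invented; the CHECK clause itself is spelled the same way in T-SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Only uppercase 'Y' or 'N' are accepted; anything else fails the CHECK.
con.execute("""
    CREATE TABLE accounts (
        name   TEXT NOT NULL,
        active CHAR(1) NOT NULL DEFAULT 'N'
               CHECK (active IN ('Y', 'N'))
    )
""")
con.execute("INSERT INTO accounts VALUES ('alice', 'Y')")    # accepted
try:
    con.execute("INSERT INTO accounts VALUES ('bob', 'y')")  # lowercase: rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

With the constraint in place, the "watch your case" worry moves from every consumer of the column to a single line of DDL.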
Apparently, bit columns aren't a good idea in MySQL.
They probably were used to using Oracle and didn't properly read up on the available datatypes for SQL Server. I'm in exactly that situation myself (and the Y/N field is driving me nuts).
I've seen worse ...
One O/R mapper I had occasion to work with used the strings 'true' and 'false', as they could be cleanly cast into Java booleans.
Also, On a reporting database such as a data warehouse, the DB is the user interface (metadata based reporting tools notwithstanding). You might want to do this sort of thing as an aid to people developing reports. Also, an index with two values will still get used by index intersection operations on a star schema.
Sometimes such quirks are more associated with the application than the database. For example, handling booleans between PHP and MySQL is a bit hit-and-miss and makes for non-intuitive code. Using CHAR(1) fields and 'Y' and 'N' makes for much more maintainable code.
I don't have any strong feelings either way. I can't see any great benefit to doing it one way over another. I know philosophically the bit fields are better for storage. My reality is that I have very few databases that contain a lot of logical fields in a single record. If I had a lot then I would definitely want bit fields. If you only have a few I don't think it matters. I currently work with Oracle and SQL server DB's and I started with Cullinet's IDMS database (1980) where we packed all kinds of data into records and worried about bits and bytes. While I do still worry about the size of data, I long ago stopped worrying about a few bits.

Strategy for identifying unused tables in SQL Server 2000?

I'm working with a SQL Server 2000 database that likely has a few dozen tables that are no longer accessed. I'd like to clear out the data that we no longer need to be maintaining, but I'm not sure how to identify which tables to remove.
The database is shared by several different applications, so I can't be 100% confident that reviewing these will give me a complete list of the objects that are used.
What I'd like to do, if it's possible, is to get a list of tables that haven't been accessed at all for some period of time. No reads, no writes. How should I approach this?
MSSQL 2000 won't give you that kind of information. But one way you can identify which tables ARE used (and then deduce which ones are not) is to use the SQL Profiler to save all the queries that go to a certain database. Configure the profiler to record the results to a new table, and then check the queries saved there to find all the tables (and views, stored procedures, etc.) that are used by your applications.
Another way to check for "writes" is to add a new timestamp column to every table, plus a trigger that updates that column every time there's an update or an insert. But keep in mind that if your apps do queries of the type
select * from ...
then they will receive a new column, and that might cause you some problems.
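The timestamp-column idea can be sketched with a pair of triggers. This illustration uses the stdlib sqlite3 module so it runs anywhere (a T-SQL version would use GETDATE() in an AFTER INSERT, UPDATE trigger); the table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL, last_write TEXT)"
)
# One trigger per write path: stamp the row whenever it is inserted or updated.
# The update trigger fires only on changes to `total`, so stamping last_write
# itself does not re-fire it.
con.executescript("""
    CREATE TRIGGER invoices_ins AFTER INSERT ON invoices
    BEGIN
        UPDATE invoices SET last_write = datetime('now') WHERE id = NEW.id;
    END;
    CREATE TRIGGER invoices_upd AFTER UPDATE OF total ON invoices
    BEGIN
        UPDATE invoices SET last_write = datetime('now') WHERE id = NEW.id;
    END;
""")
con.execute("INSERT INTO invoices (total) VALUES (9.99)")
stamp, = con.execute("SELECT last_write FROM invoices").fetchone()
```

After the instrumentation has been in place for a while, any table whose maximum last_write is still NULL (or ancient) has seen no writes - though, as noted, this tells you nothing about reads.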
Another suggestion for tracking tables that have been written to is to use Red Gate SQL Log Rescue (free). This tool dives into the log of the database and will show you all inserts, updates and deletes. The list is fully searchable, too.
It doesn't meet your criteria for researching reads into the database, but I think the SQL Profiler technique will get you a fair idea as far as that goes.
If you have lastupdate columns you can check for the writes, there is really no easy way to check for reads. You could run profiler, save the trace to a table and check in there
What I usually do is rename the table by prefixing it with an underscore; when people start to scream, I just rename it back.
If by "not used" you mean your application has no more references to the tables in question and you are using dynamic SQL, you could do a search for the table names in your app; if they aren't referenced, blow the tables away.
I've also outputted all stored procedures, functions, etc. to a text file and done a search for the table names there. If a table is not found, or is found only in procedures that will need to be deleted too, blow it away.
It looks like using the Profiler is going to work. Once I've let it run for a while, I should have a good list of used tables. Anyone who doesn't use their tables every day can probably wait for them to be restored from backup. Thanks, folks.
Probably too late to help mogrify, but for anybody doing a search: I would search for all objects using the table in my code, then in SQL Server by running this:
-- list every stored procedure, view, trigger, etc. whose source mentions the table
select distinct '[' + object_name(id) + ']'
from syscomments
where text like '%MY_TABLE_NAME%'