Can we export the schema of a table and its data from Qt code?
Or can we do it with a SQL script, i.e. a query that returns a table's schema?
You can probably manage to export simple tables in a generic manner using only the functions provided by QSqlDatabase (the tables() and record() functions are a starting point), but as far as I know, you'll need to use database-specific queries to get complete schema information.
This is best done, in my opinion, with your specific database implementation's tools. For instance, SQLite has a .dump command that does just that. MySQL has a dedicated mysqldump utility. PostgreSQL has pg_dump, etc.
It's safer to use pre-built tools for your specific engine. Getting all the DDL statements correct, plugging in the keys and triggers at the right time, worrying about encoding, ... is quite a task.
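If you do want the pure-SQL route, most engines expose their schema through metadata you can query directly. A rough sketch (the table name orders is just an illustration): in SQLite the original CREATE statements are stored in sqlite_master, and servers that implement INFORMATION_SCHEMA (MySQL, SQL Server, PostgreSQL) expose column definitions there.
-- SQLite: the CREATE TABLE statement is stored verbatim in sqlite_master
SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'orders';
-- INFORMATION_SCHEMA (MySQL, SQL Server, PostgreSQL): column-level schema details
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'orders'
ORDER BY ORDINAL_POSITION;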
I need to create a database in SQLite, but I do not want to create the tables manually.
I already have the model of the data I need in the database, and I know what kind of relationship each one has (many-to-many, one-to-many, ...).
I'm wondering if there is a tool that allows me to do that?
I just need the tool to generate the SQL code. Then I will take care of the queries manually using SQL.
I was thinking about placing the model in Django and seeing what it generates, but there should be a tool not tied to a particular language that can do that. Am I wrong?
Hibernate has the ability to create a schema from mapped classes. There is support for SQLite.
You can go for Dia (see "Tools that generate something from Dia diagrams" at http://projects.gnome.org/dia/links.html).
There are also SQL::Translator and the DBIx::* modules, which allow reading a schema from YAML, Excel, and other sources, but these are Perl-specific.
Good luck
You can use the Symfony + Doctrine framework. It can generate SQL queries.
Try this module on CPAN: Parse::Dia::SQL
I have a database in a format which can be accessed via ODBC. I'm looking for a command-line tool to generate a SQL file with DROP/CREATE statements from it, preferably with all the information, including table/field comments and table relations. (Possibly also a tool to parse the file and import the schema, but I guess that would be relatively easy to find.) I need this to automate my workflow, to be able to design the database visually but store it in SVN in code form.
Which tool should I use?
If this helps, the database in question is MS Access, but I guess there's a higher chance of finding a generic ODBC tool...
Okay, I wrote a tool to export the Access schema and parse SQL files myself; it's available here:
https://bitbucket.org/himselfv/jet-tool
Feel free to use it if anyone needs it.
Adding this because I wanted to search an ODBC schema and came across this post. This tool lets you dump the schema itself in CSV format:
http://sagedataobjects.blogspot.co.uk/2008/05/exploring-sage-data-schema.html
And then you can grep away.
This script may work for you with some modifications. Access (the application) is required though.
I know that a SQL dump is a series of SQL INSERT statements that reflect all the records inside the database. But what is it used for? Why should we dump the database records? Does every database support a dump function?
Somewhat strangely, this is actually the usual way to back up a database. Copying the files themselves that actually hold the data is not the usual backup method, for various complicated reasons.
All relational databases work this way, or at least I've never heard of one that doesn't: they all have a facility to export a bunch of SQL code that, when executed, will recreate the database in the same state it was in when the dump was started.
However these various formats are generally incompatible, due to subtle differences between the various dialects of SQL used by the different database systems. There are utilities that can convert between some of them, but I'm not aware of any 'Rosetta Stone' that handles every possible case.
As well as being the primary method of backing up a database, this technique is also useful for staging the data of DB apps between different servers, i.e. from development to testing to production.
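To make the idea concrete, a dump file is essentially plain SQL along these lines (table and rows invented for illustration; real dumps also contain engine-specific settings and options):
CREATE TABLE customers (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
INSERT INTO customers (id, name) VALUES (1, 'Alice');
INSERT INTO customers (id, name) VALUES (2, 'Bob');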
mysqldump produces an SQL representation of the data for one or more tables or databases. As the format is SQL, it will run on any other MySQL server, regardless of architecture or major/minor version (obviously, views won't work on 4.x etc. but it is mostly forwards compatible).
There is another tool, mysqlhotcopy, but as this tool produces binary files, they are tied to the machine they have been generated on, and cannot be used elsewhere. SQL has the advantage of running on any MySQL server, and being independent of the underlying file storage mechanism of the database(s).
The two main use cases for dumping SQL are:
Backing up the database data. The SQL can be read in ("played back") to an empty database server and it will re-create the tables and populate them with rows.
Migrating the data to another server. Say you are upgrading from MySQL 5.0 to 5.1. You have two machines. You use mysqldump to produce an SQL dump on the 5.0 machine, and feed it into the 5.1.
There are some less common uses. For example, an SQL snapshot of your application's database could be taken for unit testing against a known state. It is also possible to transform the SQL into another dialect, e.g. PostgreSQL or SQLite, to port your data to another database.
You asked if other databases provide SQL dump functionality. The answer is yes in almost all cases. PostgreSQL provides pg_dump, SQLite has a .dump command, etc.
I have a database "Product" in SQL Server 2008. I have another database "ORDER" in SQL Server 2008.
Both exist in different servers.
Now the requirement is to merge both databases, and test pointing the applications to this new DB.
Can anyone suggest the best way to accomplish this without losing the information?
I have 2 options.
1) Script the DB objects (script both DBs and run the scripts in the new DB).
2) Export the DB.
Which of these is best, or should I use any other method to avoid errors?
I am new to SQL, so please guide me toward the correct option.
Thanks
SNA
In my opinion the best way to achieve what you want is by just exporting the database.
I think this is the best option because it's a lot safer than scripting the DBs into a new one (a good way to get a lot of frustration and errors).
Just try exporting your database first before trying to do anything with scripting (which obviously also takes a lot more time). So try the fast solution first, and see if it works.
(I see you are using SQL Server 2008.) Are you also using Management Studio? If so, you can open the tables in edit mode and try to copy/paste rows into the new tables. I don't know how big your tables/DBs are, but this could also be an option.
Greetings,
Younes
As you say, two options are scripting or using the SQL Server export/import wizard.
I've used both (for the same database, as it happens).
A third option is to use Visual Studio Team System 2008 Database Edition GDR.
For a one-time export and import, I'd recommend going with the wizard. This is very safe and also very straightforward. Particularly as you are new to SQL Server, you want to take the approach that minimizes the risk.
The only downside to doing it this way is that it is perhaps a little less transparent than the other methods.
On the project where I merged databases I ended up using the scripting method, but that was mainly because I had a project that was already using GDR to merge incremental database updates, so adding a data merge script to that was a simple task. All changes needed to go through DBAs who unfortunately weren't very SQL-literate (I know!), so keeping all the processes similar was a must.
I also took some of my learnings from scripting the data and applied them to setting up my reference data scripts, so the effort of scripting was not a one time cost.
Either way, the most important tip I can give is to back up the databases before doing any work on them.
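A full backup of each database before you touch anything is cheap insurance; on SQL Server 2008 it can be as simple as this (the paths are placeholders):
BACKUP DATABASE [Product] TO DISK = N'C:\Backups\Product.bak'
BACKUP DATABASE [ORDER] TO DISK = N'C:\Backups\ORDER.bak'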
I am currently creating a master DDL script for our database. Historically we have used backup/restore to version our database, and have not maintained any DDL scripts. The schema is quite large.
My current thinking:
Break script into parts (possibly in separate scripts):
table creation
add indexes
add triggers
add constraints
Each script would get called by the master script.
I might need a script to drop constraints temporarily for testing
There may be orphaned tables in the schema; I plan to identify suspect tables.
Any other advice?
Edit: Also if anyone knows good tools to automate part of the process, we're using MS SQL 2000 (old, I know).
I think the basic idea is good.
The nice thing about building all the tables first and then building all the constraints is that the tables can be created in any order. When I've done this I had one file per table, which I put in a directory called "Tables", and then a script which executed all the files in that directory. Likewise I had a folder for constraint scripts (which did foreign keys and indexes too), which were executed after the tables were built.
I would separate out the build of the triggers and stored procedures, and run these last. The point about these is that they can be run and re-run on the database without affecting the data. This means you can treat them just like ordinary code. You should include "if exists...drop" statements at the beginning of each trigger and procedure script, to make them re-runnable.
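For example, a re-runnable procedure script on SQL Server 2000 might follow this pattern (the procedure name is just a placeholder):
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'usp_GetOrders' AND type = 'P')
    DROP PROCEDURE usp_GetOrders
GO
CREATE PROCEDURE usp_GetOrders AS
    SELECT * FROM Orders
GO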
So the order would be
table creation
add indexes
add constraints
Then
add triggers
add stored procedures
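A minimal sketch of that ordering, with two invented tables:
-- 1. tables (no dependencies yet, so any order works)
CREATE TABLE Customers (CustomerID INT NOT NULL PRIMARY KEY, Name VARCHAR(100))
CREATE TABLE Orders (OrderID INT NOT NULL PRIMARY KEY, CustomerID INT NOT NULL)
GO
-- 2. indexes
CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)
GO
-- 3. constraints
ALTER TABLE Orders ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID)
GO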
On my current project we are using MSBuild to run the scripts. There are some extension targets that you can get for it which allow you to call SQL scripts. In the past I have used Perl, which was fine too (and batch files... which I would not recommend - they're too limited).
@Adam
Or how about just by domain -- a useful grouping of related tables in the same file, but separate from the rest?
Only problem is if some domains (in this somewhat legacy system) are tightly coupled. Plus you have to maintain the dependencies between your different sub-scripts.
If you are looking for an automation tool, I have often worked with EMS SQLManager, which allows you to automatically generate a DDL script from a database.
Data inserts into reference tables might be mandatory before putting your database online. This can even be considered part of the DDL script. EMS can also generate scripts for data inserts from existing databases.
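For instance (table and values invented), such reference-data inserts can simply sit at the end of the DDL script:
INSERT INTO OrderStatus (StatusID, Description) VALUES (1, 'Open')
INSERT INTO OrderStatus (StatusID, Description) VALUES (2, 'Shipped')
INSERT INTO OrderStatus (StatusID, Description) VALUES (3, 'Closed')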
The need for indexes might not be properly estimated at the DDL stage. You will just need to declare them for primary/foreign keys. Other indexes should be created later, once views and queries have been defined.
What you have there seems to be pretty good. My company has on occasion, for large enough databases, broken it down even further, perhaps to the individual object level. In this way each table/index/... has its own file. Can be useful, can be overkill. Really depends on how you are using it.
@Justin
By domain is almost always sufficient. I agree that there are some complexities to deal with when doing it this way, but that should be easy enough to handle.
I think this method provides a little more separation (which in a large database you will come to appreciate) while still being pretty manageable. We also write Perl scripts that do a lot of the processing of these DDL files, so that might be a good way to handle that.
There is a neat tool that will iterate through the entire SQL Server instance and extract all the table, view, stored procedure and UDF definitions to the local file system as SQL scripts (text files). I have used this with 2005 and 2008; not sure how it will work with 2000 though. Check out http://www.antipodeansoftware.com/Home/Products
Invest the time to write a generic "drop all constraints" script, so you don't have to maintain it.
A cursor over the following statements does the trick.
Select * From Information_Schema.Table_Constraints
Select * From Information_Schema.Referential_Constraints
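A rough sketch of such a cursor in T-SQL, here only generating DROP statements for foreign keys so the referenced primary keys can be dropped in a later pass; nothing is hard-coded to a particular table:
DECLARE @sql NVARCHAR(4000)
DECLARE constraint_cursor CURSOR FOR
    SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] DROP CONSTRAINT [' + CONSTRAINT_NAME + ']'
    FROM Information_Schema.Table_Constraints
    WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
OPEN constraint_cursor
FETCH NEXT FROM constraint_cursor INTO @sql
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql   -- run the generated DROP CONSTRAINT statement
    FETCH NEXT FROM constraint_cursor INTO @sql
END
CLOSE constraint_cursor
DEALLOCATE constraint_cursor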
I previously organised my DDL code by one file per entity and made a tool that combined the files into a single DDL script.
My former employer used a scheme where all table DDL was in one file (stored in Oracle syntax), indices in another, constraints in a third and static data in a fourth. A change script was kept in parallel with this (again in Oracle). The conversion to SQL Server syntax was manual. It was a mess. I actually wrote a handy tool that will convert Oracle DDL to SQL Server (it worked 99.9% of the time).
I have recently switched to using Visual Studio Team System for Database Professionals. So far it works fine, but there are some glitches if you use CLR functions within the database.