Best way to create a DB table structure [duplicate] - sql

Possible Duplicates:
Generating SQL Server DB from XSD
Generating SQL Schema from XML
I have loads and loads of XML files with data, and a schema file (.xsd) which describes the structure of the XML.
I want to store the data in an MS SQL database so that I can query it later and display it on a website.
I must now create the database structure, and have so far thought of three ways of creating the tables:
Using XMLSpy I could load the XSD and use the "create DB from xsd" feature there. The "trouble" is that I have to manually add the relations between the tables, and also add the columns that are used for these relations.
Using Microsoft SQL Server Management Studio I could graphically create the tables and relations. The "trouble" here is that the XSD describes about 100 tables, and the thought of manually doing this in a GUI is scary. I would lose track of where I was somewhere in there.
Hand-writing the SQL in Notepad or something (sketched at the end of this question). Boring, but then I could do it in small steps, something I could not do with the two other options.
Is there any other way I haven't thought of?
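For illustration, option (3) amounts to typing out each table and its relations by hand; a minimal sketch with hypothetical table and column names (not taken from the actual XSD):

    CREATE TABLE Customer (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL
    );

    CREATE TABLE CustomerOrder (
        OrderId    INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId INT NOT NULL
                   REFERENCES Customer (CustomerId), -- the hand-added relation
        OrderDate  DATETIME NOT NULL
    );

The upside is exactly the one mentioned above: each table is a small, reviewable step.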

You could do something similar to option (1): import the XSD into a database design tool (e.g. ERwin or PowerDesigner), do the editing steps in a graphical environment, and then have the tool generate the database.
I'm not sure how well these tools work with XML and XSD, so you may have to generate the database using XMLSpy first and then reverse-engineer it into the tool. But a good tool will make the editing easier than working directly against the database.
Hope that this is not too similar to the option (2) you mentioned...


How to migrate data from MongoDB to SQL Server? [closed]

I searched around and found that there are ways to transfer/sync data from SQL Server to MongoDB.
I also know that MongoDB contains collections instead of tables, and that the data is stored differently.
I want to know whether it is possible to move data from MongoDB to SQL Server. If yes, then how, and what tools/topics should I use?
Of course it's possible, but you will need to find a way to force the flexibility of a document DB like MongoDB into an RDBMS like SQL Server.
It means that you need to define how you want to handle missing fields (will it be a NULL in the DB column? or a default value?) and other things that usually don't fit well in a relational database.
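As a minimal illustration of that decision (the table and column names here are hypothetical), the target table can either allow NULLs or supply a default for fields that not every document carries:

    -- one possible landing table for a MongoDB "customers" collection
    CREATE TABLE Customer (
        CustomerId INT PRIMARY KEY,
        Email      NVARCHAR(200) NULL,   -- a missing field becomes NULL
        Country    NVARCHAR(50)  NOT NULL
            CONSTRAINT DF_Customer_Country DEFAULT ('unknown')  -- or a default value
    );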
That said, you can use an ETL tool able to connect to both databases. SSIS is one example if you want to stay in the Microsoft world (you can check Importing MongoDB Data Using SSIS 2012 to get an idea), or you can go for an open-source tool like Talend Big Data Integration, which has a connector to MongoDB (and of course to SQL Server).
There is no way to directly move data from MongoDB to SQL Server. Because MongoDB data is non-relational, any such movement must involve defining a target relational data model in SQL Server, and then developing a transformation that can take the data in MongoDB and transform it into the target data model.
Most ETL tools such as Kettle or Talend can help you with this process, or if you're a glutton for punishment, you can just write gobs of code.
Keep in mind that if you need this transformation process to be online, or applied more than once, you may need to tweak it for any small changes in the structure or types of the data stored in MongoDB. For example, if a developer adds a new field to a document inside a collection, your ETL process will need rethinking (possibly a new data model, a new transformation process, etc.).
If you are not sold on SQL Server, I'd suggest you consider Postgres, because there is a widely-used open source tool called MoSQL that has been developed expressly for the purpose of syncing a Postgres database with a MongoDB database. It's primarily used for reporting purposes (getting data out of MongoDB and into an RDBMS so one can layer analytical or reporting tools on top).
MoSQL enjoys wide adoption and is well supported, and for badly tortured data, you always have the option of using the Postgres JSON data type, which is not supported by any analytics or reporting tools, but at least allows you to directly query the data in Postgres. Also, and now my own personal bias is showing through, Postgres is 100% open source, while SQL Server is 100% closed source. :-)
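For example, querying such a JSON column directly in Postgres looks like this (the table and field names are hypothetical):

    -- assumes a table loaded from MongoDB with one JSON column named doc
    SELECT doc->>'name'       AS customer_name,
           (doc->>'age')::int AS age
    FROM   mongo_customers
    WHERE  doc->>'country' = 'NO';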
Finally, if you are only extracting the data from MongoDB to make analytics or reporting easier, you should consider SlamData, an open source project I started last year that makes it possible to execute ANSI SQL on MongoDB, using 100% in-database execution (it's basically a SQL-to-MongoDB API compiler). Most people using the project seem to be using it for analytics or reporting use cases. The advantage is that it works with the data as it is, so you don't have to perform ETL, and of course it's always up to date because it runs directly on MongoDB. A disadvantage is that no one has yet built an ODBC / JDBC driver for it, so you can't directly connect BI tools to SlamData.
Good luck!
There is a tool provided by MongoDB called mongoexport, and it's capable of exporting CSV files. These CSV files can be easily imported into SQL Server. Good luck!
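A minimal sketch of that route, assuming the collection has already been exported with mongoexport to a file like C:\data\users.csv with a header row, and that a matching dbo.Users table exists:

    -- load the mongoexport CSV into the pre-created table
    BULK INSERT dbo.Users
    FROM 'C:\data\users.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 2    -- skip the CSV header line
    );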

Which tool to use to export SQL schema from ODBC database?

I have a database in a format which can be accessed via ODBC. I'm looking for a command-line tool to generate an SQL file with DROP/CREATE statements from it, preferably with all the information, including table/field comments and table relations. (Possibly also a tool to parse the file and import the schema, but I guess that would be relatively easier to find.) I need this to automate my workflow: to be able to design the database visually but store it in SVN in code form.
Which tool should I use?
If this helps, the database in question is MS Access, but I guess there's a higher chance of finding a generic ODBC tool...
Okay, I wrote the tool to export the Access schema / parse SQL files myself; it's available here:
https://bitbucket.org/himselfv/jet-tool
Feel free to use it if you need it.
Adding this because I wanted to search an ODBC schema and came across this post. This tool lets you dump the schema itself in CSV format:
http://sagedataobjects.blogspot.co.uk/2008/05/exploring-sage-data-schema.html
And then you can grep away.
This script may work for you with some modifications. Access (the application) is required, though.

Normalization of an existing SQL database

I have a single-table database I inherited and migrated to SQL Server, and then I normalized it by creating, linking, and filling a whole bunch of lookup-type tables that represent items in the main table. I now want to replace those items in the original table with their foreign keys. Am I stuck writing a bunch of queries or UDFs and then a giant INSERT statement to accomplish this, or is there a tool I can use to point at the various fields and have it handle the grunt work for me?
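For reference, the hand-written version of that grunt work is typically one UPDATE with a join per lookup table (all table and column names below are hypothetical):

    -- replace the text value with the lookup table's key
    UPDATE m
    SET    m.ColorId = c.ColorId
    FROM   dbo.MainTable m
    JOIN   dbo.Color     c ON c.ColorName = m.Color;

    -- once every row is mapped, the old text column can be dropped
    ALTER TABLE dbo.MainTable DROP COLUMN Color;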
Redgate SQL Refactor comes with a 14-day evaluation period and has a "Split Table" refactoring which sounds like it might do what you need.
The feature is described thus:
Split Table splits a table into two tables, and automatically rewrites the referencing stored procedures, views, and so on. You can also use this refactoring to introduce referential integrity tables. You can select this feature from the context menu in Management Studio's Object Explorer.
I have had similar experiences. I once inherited a fairly large database that required serious overhaul to the schema before I would look at it without scorn.
Because the upgrade was fairly significant, I designed an SSIS package to migrate data from the old schema to the new one. Lookup activities were helpful to map old text values to the new keys. I kept a script of my old schema and data handy and would repeatedly restore the database in a sandbox and re-migrate until I could satisfy the powers that be that the migration was reliable.
I found there was only a moderate learning curve to getting started with SSIS. If the tool is available to you, I recommend giving it a try.

Defining table structure for a database?

Up until now, my experience with databases has always been through an intermediate definition layer that we have where I work. That is, the SQL wasn't written directly for the table definitions; it was generated from an intermediate file which wrote out SQL scripts for creating the appropriate tables, upgrade scripts between schema changes, and helper functions for doing simple queries/updates/inserts/deletes against the database.
Now I'm in a situation where I don't have access to that, for reasons I won't get into, and I find myself somewhat lost at sea regarding what to do. I need to have a small number of tables in a database, and I'm unsure what's usually done to manage the table definitions.
Do people normally just use the SQL script that does the table creation as their definition, or does everyone just use an IDE that manages the definition in a separate file and regenerates the SQL script to create the tables?
I'd really prefer not to have to introduce a dependence on a specific IDE, because as we all know, developers are whiners that are prone to religious debates over small things.
Open your favorite text editor -> Start writing CREATE scripts -> Save -> Put in Source Control
That script now becomes the basis for your database. Any time there are schema changes, they get put back into the scripts so that they don't get lost.
These become your definition.
I find it more reliable than depending on any specific IDE/Platform generating those scripts for you.
We write the scripts ourselves and store them in source control like any other code. Then the scripts that are appropriate for a particular version are all grouped together and promoted to prod together. Make sure to use ALTER TABLE when changing existing tables, because you don't want to drop and recreate them if they have data! I use a drop-and-recreate for all other objects, though. If you need to add records to a particular table (usually a lookup of some type), we do that in scripts as well. Then that too gets promoted with the rest of the version code.
For me, putting the scripts in source control, however they are generated, is the key step. This is how you know what you have changed for the next release. This is how you can see earlier versions and revert back easily if there is a problem. Treat database code the same way you treat all other code.
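A minimal sketch of what one versioned script might look like under this convention (all object names are hypothetical):

    -- v2.3 upgrade script: ALTER existing tables rather than dropping them
    ALTER TABLE dbo.Customer ADD MiddleName NVARCHAR(50) NULL;

    -- non-table objects can be dropped and recreated safely
    IF OBJECT_ID('dbo.vw_ActiveCustomers', 'V') IS NOT NULL
        DROP VIEW dbo.vw_ActiveCustomers;
    GO
    CREATE VIEW dbo.vw_ActiveCustomers AS
        SELECT CustomerId, Name FROM dbo.Customer WHERE IsActive = 1;
    GO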
You could use one of the data modelling tools that create the scripts for you if you are starting out on a database design and eventually want it created for you. Some tools for that are ERwin, Fabforce, etc. (though not free).
If you have access to an IDE like SQL Server Management Studio, you can create them using a GUI that's pretty simple.
If you are writing your own code, it's always better to write your own scripts based on a good template, so that you cover all the properties of the table definition, like the filegroup, collation and such (see the template sketched below). Hope this helps.
Once you do create a base copy, generate scripts and keep a base reference copy, so that you can make incremental changes to it and manage it in source control.
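A minimal template of that kind might look like this (the filegroup and collation names are placeholders to adjust):

    CREATE TABLE dbo.Customer
    (
        CustomerId INT           IDENTITY(1,1) NOT NULL,
        Name       NVARCHAR(100) COLLATE Latin1_General_CI_AS NOT NULL,
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)
    )
    ON [PRIMARY];  -- target filegroup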
Though I use TOAD for Oracle, I always write the scripts to create my database objects by hand. It gives you (and your DBAs) more control over, and knowledge of, what's being created and how.
If your schema is too difficult to describe in SQL, you probably have other issues more pressing than which IDE. Use modelling documentation if you need a graphical representation, but yeah, you don't need an IDE.
There are multiple ways out there for what you are asking.
The old traditional way is to have a script file ready with your application that contains the CREATE TABLE statements.
If you are a developer, and especially a Java enterprise developer, you could generate the complete schema using a persistence library called Hibernate. Here is a how-to.
If you are a DBA-level user, you could take a schema export from one environment and import it into your current environment. This is a standard practice among DBAs, but as you can see it requires admin access. Also, the methods depend on the database you are using (Oracle, DB2, etc.).

Database schemas WAY out of sync - need to get up to date without losing data

The problem: we have one application where a portion is used by a very small subset of the total users, and that part of the application runs off a separate database as well. In a perfect world, the schemas of the two databases would be synced up, but such is not the case. Some migrations have been run on the smaller database, most haven't; furthermore, there is nothing such as a revision number to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we've come up with the following possible plan of action, and I am wondering if anyone knows of a project which has already solved this problem:
What we would like to do is create an empty database from the schema of the large fully-migrated database, and then move all of the data from the smaller non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem specifically that no migrations have ever removed anything, only added.
Else, if there are other known solutions, I'd like to hear them as well.
You could use a schema comparison tool like Red-Gate's SQL Compare. You can synchronize the changes and not lose any data. I wrote about this and many alternative tools ranging widely in price here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
The nice thing is that most tools have trial versions, so you can try them out for 14 days (fully functional) and only buy one if it meets your expectations. I can't speak for the other tools, but I've been using RG for years and it is a very capable and reliable tool.
(Updated 2012-06-23 to help prevent link-rot.)
Red-Gate's SQL Compare, as Aaron Bertrand mentions in his answer, is a very good option. However, if you are not permitted to purchase something, an option is to try the following:
1) For each database, script out all the tables, constraints, indexes, views, procedures, etc.
2) Run a DIFF, go through all the differences, and make sure that the small DB can accept them. If not, implement any changes (including data) necessary on the small DB so it can accept them.
3) Create a new empty database from the schema of the large DB.
4) Import the data from the small DB into the new DB (a sketch of this step follows).
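Step 4 can be as simple as a cross-database INSERT ... SELECT per table, if both databases sit on the same server (the database and table names here are hypothetical):

    -- allow explicit ids to be copied across
    SET IDENTITY_INSERT NewDB.dbo.Customer ON;

    INSERT INTO NewDB.dbo.Customer (CustomerId, Name)
    SELECT CustomerId, Name
    FROM   SmallDB.dbo.Customer;

    SET IDENTITY_INSERT NewDB.dbo.Customer OFF;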
You could also reverse-engineer your database into Visual Studio as a database project. Visual Studio Team Suite Database Edition GDR R2 (I know, long name) has the capability to do a schema comparison and a data comparison, but the beauty of this approach is that you get your whole database into a nice database project where you can manage change and integrate with source control. This would allow you to build from a common source and deploy consistent changes.