I have about 10 tables that I use to create a SQL view. Some of them are pure lookup tables, and some of the SELECT columns use concatenation and other basic functions.
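For concreteness, here is a minimal sketch of the kind of view I mean (the table and column names are made up, not my real schema):

    CREATE VIEW dbo.vw_CustomerOrders
    AS
    SELECT
        o.OrderID,
        -- concatenation / simple functions on SELECTed columns
        CONCAT(c.FirstName, ' ', c.LastName) AS CustomerName,
        UPPER(ct.CountryCode)                AS CountryCode,
        st.StatusName                        -- resolved from a lookup table
    FROM dbo.Orders o
    JOIN dbo.Customers     c  ON c.CustomerID = o.CustomerID
    JOIN dbo.Countries     ct ON ct.CountryID = c.CountryID   -- pure lookup table
    JOIN dbo.OrderStatuses st ON st.StatusID  = o.StatusID;   -- pure lookup table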
The data volume is expected to be about 2 million rows.
I need to use the data from the SQL view to load into final tables using Azure Data Factory data flow.
Now I am wondering whether I should remove the lookup part from the view and do it in the ADF data flow instead, and whether the concatenation and simple functions on the SELECTed columns should likewise be handled with a Derived Column transformation in the data flow.
I am just trying to understand what the best practice would be, and also which option would perform better.
Is it better to perform the required transformations and lookups in the view? That way the source data would already be in the required format, and the flow would be faster because little transformation work would be left to process.
But if I have to do all the lookups and transformations in the SQL view itself, then what is the benefit of using an ETL tool like ADF?
I suggest you keep it all in the SQL and use a standard ADF copy activity. It's easier to troubleshoot if you can log on to the SQL Server and run a view, versus digging through transformations in an ETL tool.
The Copy activity doesn't support transformations, but ADF data flows do.
However, ADF data flows have to spin up a Spark (Databricks) cluster behind the scenes before they can do anything, and that can take up to five minutes.
As for "what is the benefit of using an ETL tool like ADF?":
When you write your transformations in SQL, you end up with a mass of code that can sometimes be difficult to maintain and audit.
When you write your transformations in an ETL tool, you often get useful plumbing like logging, lineage, and debugging. An ETL tool can also more easily span multiple data sources, e.g. it can combine a text file with a database table.
Some people also prefer the visual aspect of ETL tools.
Many ETL tools also come with out of the box templates/wizards that assist in mundane tasks.
I'm fairly new to SQL Server management and am currently looking into building a solution with SSIS.
My question will be mostly about "is my logic correct" and some smaller things about what the best practice would be.
But let me paint you a picture to start!
I have an application which provides me with DB views. To offload the stress on that database, I would like to transfer the data from these views periodically to a secondary database/different instance, on which I can then also set more specific permissions, transformations, and other views built on that data. The views as initially provided are pretty much fixed in how I can get at them.
After some reading it looked to me that the way to go was to use SSIS. I started building my package and used the "SSIS Import and Export wizard" to do an initial transfer.
Now for my first question, would this be the proper way to transfer the data and is SSIS the right tool for the job?
Secondly, I noticed that the wizard created multiple SQL Preparation tasks and Data Flow tasks.
To me it would seem logical to split each view that becomes a table into a separate SQL Preparation task and a separate Data Flow task, just to keep a clear picture and as much control as possible. While that would take some time to set up (>100 views/tables), it seems cleaner than how the wizard did it by grouping some of them together.
Also, since the preparation tasks create tables, they fail when executed a second time because the tables already exist. Is there a quick workaround for this besides adding an IF EXISTS check to each query?
Any thoughts on this would be appreciated, or hints towards a better solution if I'm approaching this completely from the wrong direction.
The idea is to later add some SSAS to the system and provide some data analytics on these tables/data as well.
Thanks!
If you are transferring the data to a different instance then SSIS is probably your best bet. Your next question is to work out whether you want to import all the data each time or just the new/updated items.
If you are exporting all the data every time, this is much simpler. Assuming you have a suitable maintenance window (such as overnight) in which you can complete the process without affecting your end users, you can get away with simply truncating the data and re-loading it. This obviously has consequences in terms of increased data transfer volumes.
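A minimal sketch of that pattern, with made-up source and target names (in SSIS the load itself would normally be a Data Flow task rather than a plain INSERT...SELECT):

    -- Execute SQL task: empty the target on the secondary instance
    TRUNCATE TABLE dbo.MyTable;

    -- Load step: Data Flow task, or an INSERT...SELECT if the instances are linked
    INSERT INTO dbo.MyTable (Col1, Col2)
    SELECT Col1, Col2
    FROM SourceServer.SourceDb.dbo.MyView;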
If you want to export only the new/updated data, you will need to work out whether you can actually identify which rows are new or changed without simply comparing them all to what you already have in your secondary database. Ideally your source tables will have a reliable LastUpdateDate column, or better yet a rowversion column, which lets you export all rows with a more recent value than the highest one already present in the corresponding table in your secondary instance.
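As a rough sketch of the LastUpdateDate variant (table and column names are hypothetical; the rowversion version is the same idea with a binary watermark):

    -- On the secondary instance: the most recent change already loaded
    DECLARE @LastLoaded datetime2 =
        (SELECT MAX(LastUpdateDate) FROM dbo.MyTable);

    -- Against the source view: pull only rows changed since then
    SELECT v.KeyCol, v.Col1, v.Col2, v.LastUpdateDate
    FROM dbo.MyView AS v
    WHERE v.LastUpdateDate > ISNULL(@LastLoaded, '19000101');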
There is a lot of reading to be done regarding the updates only route, for which I would strongly suggest you avoid the Slowly Changing Dimension transformation like the plague.
You are also right in thinking that there are a lot of repetitive tasks when you want to do simple operations across a large number of similar objects, such as adding that IF EXISTS check to the table creation in your post. The best way I have found to tackle this is to learn how to use Biml to automate the repetitive tasks based on metadata.
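For the IF EXISTS part itself, a minimal sketch of the guard you would put in each preparation task (or generate via Biml); the table definition is just an example:

    IF OBJECT_ID(N'dbo.MyTable', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.MyTable
        (
            Id   int          NOT NULL PRIMARY KEY,
            Name nvarchar(50) NOT NULL
        );
    END;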
Good Luck!
I would like to request some help on test automation data storage and retrieval. We are writing test automation scripts with Selenium WebDriver. We started off using MS Excel sheets to store our test data and Apache POI to read the data. What we have observed recently is that sometimes, when multiple people modify the same sheet and check it in to Git, the changes are not reflected.
One automation engineer proposed using .csv files to avoid this problem, and I proposed using an Oracle database to store the test data.
Is it a good idea to store test data in different Oracle DB tables? My idea is to create Oracle tables with two columns that store name/value pairs (see the sketch below). My application is large and might require anywhere between 5 and 10 tables.
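To illustrate the proposal, a minimal sketch of one such name/value table (the names and values are made up):

    CREATE TABLE login_test_data (
        param_name  VARCHAR2(100)  NOT NULL PRIMARY KEY,
        param_value VARCHAR2(4000)
    );

    INSERT INTO login_test_data (param_name, param_value) VALUES ('username', 'qa_user_01');
    INSERT INTO login_test_data (param_name, param_value) VALUES ('base_url', 'https://test.example.com');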
Kindly let me know.
Regards
Srinivas
I believe that for most cases CSV would be good enough.
Creating a special database (especially one storing only key/value pairs) for test data seems like overkill.
CSV pros
Works well with Git (easy diffs, history of changes)
Performance: it's a plain text file, easy to read
Small disk footprint
Support of parameterized tests (Junit, TestNG)
CSV cons
Hard to maintain relations between different pieces of data, if such relations exist
Database pros
Flexibility
Easier mapping of complicated objects with relations
Database cons
Time needed to build the initial setup and maintain it
Performance: starting, cleaning up, and initializing the DB (to keep tests repeatable)
Preferably, you should also be testing the database that you're using
The main problem with using a database is maintaining the test code, which should be simple, fast, and repeatable.
I am in the middle of a small project aimed at eventually creating a data warehouse. I am currently moving data from a flat-file system and two SQL Server databases. The project started in C# to automate the processing of data from the flat-file system. Along with this, the project executes stored procedures to bring in data from the other databases; those stored procedures access the other databases using linked servers.
I am wondering if this is the wrong approach; even though it does get the job done, there may be a better one. The other way I have thought about is to use the app to pull data from each DB and then push it to the data warehouse, but I am not sure about performance. Is there another way? Any path that I can look into is appreciated.
'Proper' is a pretty relative term. I have seen a series of stored procedures, SSIS (Microsoft), and third-party tools. They each have some advantages.
Stored procedures
Using a job to schedule a series of stored procedures that insert rows from one server to the next works. I find SQL developers more likely to take this path... it's flexible in design, and good SQL programmers can accomplish nearly anything in here. That said, it is exceedingly difficult to support / troubleshoot / maintain / alter (especially if the initial developer(s) are no longer with the company), and there is usually very poor error handling.
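A minimal sketch of what one of those procedures typically looks like, scheduled by a SQL Server Agent job (the server, database, and table names are made up):

    CREATE PROCEDURE dbo.LoadCustomers
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Pull new rows across a linked server into the warehouse staging table
        INSERT INTO dbo.stg_Customers (CustomerID, CustomerName)
        SELECT c.CustomerID, c.CustomerName
        FROM SRC01.SalesDb.dbo.Customers AS c          -- SRC01 = linked server
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.stg_Customers AS s
                          WHERE s.CustomerID = c.CustomerID);
    END;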
SSIS and other tools such as Pentaho or DataStage or... Google it, there are a few.
These give a more graphical design interface, although I've seen SSIS packages that simply called stored procedures in order and might as well have just been a job. These tools are really what you make of them. They give very easy-to-see workflows and are substantially more robust when it comes to error handling and troubleshooting ability (trust me, every ETL process is going to have a few bad days, and you'll be very happy for any logging you have that helps you identify what went wrong). I find configuring a server's resources (multiple processors, for example) is significantly easier with these tools. They all come with quite a learning curve, though.
I find SQL developers are very much inclined to use the stored procedure route, while people from a DBA background are generally more inclined to use the tools. If you're investing the time into it, SSIS or an equivalent tool is the better way to go from the standpoint of your company's future, though it takes a bit more to implement.
In choosing what to use you need to consider the following factors:
How much data are we talking about moving, and how quickly does it need to be moved? There is a huge difference between using a linked server to move 45,000 records and using it to move 100,000,000 records. Consider also the expected growth of the data set to be moved over time. A process that is fine in the early stages may choke and die once you get more records. Tools like SSIS are much faster once you know how to use them, which brings us to point 2.
How much development time do you have, and what tools do the developer and the person who will maintain the import over time know? SSIS, for instance, is a complex tool, and it can take a long time to feel comfortable with it.
How much data cleaning and transforming do you need to do? What kind of error trapping and exception processing do you need, what kind of logging will you need? The more complex the process, the more likely you will need to bite the bullet and learn an ETL specific tool.
Even though there are a few answers already, and I agree with two of them, I have to give my subjective opinion about the wider picture.
I am in the middle of a small project aimed to eventually create a data warehouse.
The question title perfectly suits your question description; it will be very helpful to future readers. So, your project should create a data warehouse. Even though it's small, learn to develop projects with scalability in mind. Always!
From that point of view, research and study what a data warehouse project should look like, and develop each step accordingly.
Custom software vs Stored Procedures (Linked DBs) vs ETL
Custom software (in this case your C# project) should be used in two cases:
Medium-scale projects where budget ETL tools cannot do everything
You're working for an enterprise-level IT company, so developing your own solution is cheaper and more manageable
Perhaps you also think of it for tiny, straightforward projects. But no, because those projects can grow and very quickly outgrow your solution (new tables, new sources, a change of ERP or CRM, etc.).
If you're using just SQL Server, and you have no need for data cleansing, data profiling, or external data, stored procedures are OK. But that's a lot of 'ifs'. And again, you're losing scalability (say your management wants to add some data from a Google Spreadsheet they use internally, KPI targets for example).
ETL tools are a natural step in data warehouse development. In the beginning, there may be just a few table copy operations or some SQL, one source, one target. As your project grows, you can add new transformations.
SSIS is perhaps the best choice since you're using SQL Server, but there are some good free tools as well.
For most database-backed projects I've worked on, there is a need to get "startup" or test data into the database before deploying the project. Examples of startup data: a table that lists all the countries in the world or a table that lists a bunch of colors that will be used to populate a color palette.
I've been using a system where I store all my startup data in an Excel spreadsheet (with one table per worksheet), then I have a utility script in SQL that (1) creates the database, (2) creates the schemas, (3) creates the tables (including primary and foreign keys), (4) connects to the spreadsheet as a linked server, and (5) inserts all the data into the tables.
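For illustration, the insert step looks roughly like this (a sketch with made-up file, sheet, and table names; my real script uses a linked server, but OPENROWSET shows the idea and needs the ACE OLE DB provider plus 'Ad Hoc Distributed Queries' enabled):

    INSERT INTO dbo.Country (CountryCode, CountryName)
    SELECT CountryCode, CountryName
    FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                    'Excel 12.0;Database=C:\Seed\StartupData.xlsx;HDR=YES',
                    'SELECT * FROM [Countries$]');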
I mostly like this system. I find it very easy to lay out columns in Excel, verify foreign key relationships using simple lookup functions, perform concatenation operations, copy in data from web tables or other spreadsheets, etc. One major disadvantage of this system is the need to sync up the columns in my worksheets any time I change a table definition.
I've been going through some tutorials to learn new .NET technologies or design patterns, and I've noticed that these typically involve using Visual Studio to create the database and add tables (rather than scripts), and the data is typically entered using the built-in designer. This has me wondering if maybe the way I'm doing it is not the most efficient or maintainable.
Questions
In general, do you find it preferable to build your whole database via scripts or a GUI designer, such as SSMSE or Visual Studio?
What method do you recommend for populating your database with startup or test data and why?
Clarification
Judging by the answers so far, I think I should clarify something. Assume that I have a significant amount of data (hundreds or thousands of rows) that needs to find its way into the database. This data could be sourced from various places, such as text files, spreadsheets, web tables, etc. I've received several suggestions to script this process using INSERT statements, but is this really viable when you're talking about a lot of data?
Which leads me to...
New questions
How would you write a SQL script to take the country data on this page and insert it into the database?
With Excel, I could just copy/paste the table into a worksheet and run my utility script, and I'd basically be done.
What if you later realized you needed a new column, CapitalCity?
With Excel, I could take that information from this page, paste it into Excel, and with a quick text-to-column manipulation, I'd have the data in the format I need.
I honestly didn't write this question to defend Excel as the best way or even a good way to get data into a database, but the answers so far don't seem to be addressing my main concern--how to get all this data into your database. Writing a script with hundreds of INSERT statements by hand would be extremely time consuming and error prone. Somehow, this script needs to be machine generated, but how?
I think your current process is fine for seeding the database with initial data. It's simple, easy to maintain, and works for you. If you've got a good database design with adequate constraints then it doesn't really matter how you seed the initial data. You could use an intermediate tool to generate scripts but why bother?
SSIS has a steep learning curve, doesn't work well with source control (impossible to tell what changed between versions), and is very finicky about type conversions from Excel. There's also an issue with how many rows it reads ahead to determine the data type -- you're in deep trouble if your first x rows contain numbers stored as text.
1) I prefer to use scripts for several reasons.
• Scripts are easy to modify, and when I get ready to deploy my application to a production environment, I already have the scripts written, so I'm all set.
• If I need to deploy my database to a different platform (like Oracle or MySQL) then it's easy to make minor modifications to the scripts to work on the target database.
• With scripts, I'm not dependent on a tool like Visual Studio to build and maintain the database.
2) I like good old-fashioned INSERT statements in a script. Again, at deployment time scripts are your best friend. At our shop, when we deploy our applications, we have to have scripts ready for the DBAs to run, as that's what they expect.
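On the "machine generated" concern from the question: one common approach (just a sketch; the table and column names are made up) is to load the raw data into a staging table once, then let SQL write the INSERT statements for you:

    -- Emits one INSERT statement per row of the staging table;
    -- copy the output into your deployment script.
    SELECT 'INSERT INTO dbo.Country (CountryCode, CountryName) VALUES ('''
           + REPLACE(CountryCode, '''', '''''') + ''', '''
           + REPLACE(CountryName, '''', '''''') + ''');'
    FROM staging.Country
    ORDER BY CountryCode;

SSMS's Generate Scripts task (Tasks > Generate Scripts, with "Script Data" enabled) will produce the same kind of output with a few clicks.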
I just find that scripts are simple, easy to maintain, and the "least common denominator" when it comes to creating a database and loading data into it. By least common denominator, I mean that the majority of people (i.e. DBAs and other people in your shop who might not have Visual Studio) will be able to use them without any trouble.
The other thing that's important about scripts is that they force you to learn SQL and, more specifically, DDL (data definition language). While the hand-holding GUI tools are nice, there's no substitute for taking the time to learn SQL and DDL inside out. I've found those skills to be invaluable in almost any shop.
Frankly, I find the concept of using Excel here a bit scary. It obviously works, but it's creating a dependency on an ad-hoc data source that won't be resolved until much later. Last thing you want is to be in a mad rush to deploy a database and find out that the Excel file is mangled, or worse, missing entirely. I suppose the severity of this would vary from company to company as a function of risk tolerance, but I would be actively seeking to remove Excel from the equation, or at least remove it as a permanent fixture.
I always use scripts to create databases, because scripts are portable and repeatable - you can use (almost) the same script to create a development database, a QA database, a UAT database, and a production database. For this reason it's equally important to use scripts to modify existing databases.
I also always use a script to create bootstrap data (AKA startup data), and there's a very important reason for this: there's usually more scripting to be done afterward. Or at least there should be. Bootstrap data is almost invariably read-only, and as such, you should be placing it on a read-only filegroup to improve performance and prevent accidental changes. So you'll generally need to script the data first, then make the filegroup read-only.
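As a rough sketch of that last step (database, filegroup, and file names here are placeholders):

    -- One-time setup: a dedicated filegroup for the bootstrap data
    ALTER DATABASE MyAppDb ADD FILEGROUP BootstrapData;
    ALTER DATABASE MyAppDb ADD FILE
        (NAME = 'MyAppDb_Bootstrap', FILENAME = 'D:\Data\MyAppDb_Bootstrap.ndf')
        TO FILEGROUP BootstrapData;

    -- Create the bootstrap tables ON BootstrapData, run the data scripts, then:
    ALTER DATABASE MyAppDb MODIFY FILEGROUP BootstrapData READ_ONLY;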
On a more philosophical level, though, if this startup data is required for the database to work properly - and most of the time, it is - then you really ought to consider it part of the data definition itself, the metadata. For that reason, I don't think it's appropriate to have the data defined anywhere but in the same script or set of scripts that you use to create the database itself.
Test data is a little different, but in my experience you're usually trying to auto-generate that data in some fashion, which makes it even more important to use a script. You don't want to have to manually maintain an ad-hoc database of millions of rows for testing purposes.
If your problem is that the test or startup data comes from an external source - a web page, a CSV file, etc. - then I would handle this with an actual "configuration database." This way you don't have to validate references with VLOOKUPS as in Excel, you can actually enforce them.
Use SQL Server Integration Services (formerly DTS) to pull your external data from CSV, Excel, or wherever, into your configuration database - if you need to periodically refresh the data, you can save the SSIS package so it ends up being just a couple of clicks.
If you need to use Excel as an intermediary, i.e. to format or restructure some data from a web page, that's fine, but the important thing IMO is to get it out of Excel as soon as possible, and SSIS with a config database is an excellent repeatable method of doing that.
When you are ready to migrate the data from your configuration database into your application database, you can use SQL Server Management Studio to generate a script for the data (in case you don't already know - when you right click on the database, go to Tasks, Generate Scripts, and turn on "Script Data" in the Script Options). If you're really hardcore, you can actually script the scripting process, but I find that this usually takes less than a minute anyway.
It may sound like a lot of overhead, but in practice the effort is minimal. You set up your configuration database once, create an SSIS package once, and refresh the config data maybe once every few months or maybe never (this is the part you're already doing, and this part will become less work). Once that "setup" is out of the way, it's really just a few minutes to generate the script, which you can then use on all copies of the main database.
Since I use an object-relational mapper (Hibernate, there is also a .NET version), I prefer to generate such data in my programming language. The ORM then takes care of writing things into the database. I don't have to worry about changing column names in the data because I need to fix the mapping anyway. If refactoring is involved, it usually takes care of the startup/test data also.
Excel is an unnecessary component of this process.
Script the current version the database components that you want to reuse, and add the script to your source control system. When you need to make changes in the future, either modify the entities in the database and regenerate the script, or modify the script and regenerate the database.
Avoid mixing Visual Studio's db designer and Excel as they only add complexity. Scripts and SQL Management Studio are your friends.