I am facing a new task in my job and I need to find out how to generate and administer test data. Googling led to a lot of information about specific test data generation like filling a database with random data or camouflaged production data, generating files, generating test data with multi-objective genetic algorithms to minimize test data and optimize coverage, etc.
But my task is harder, because the environment is not just one database; it's a heterogeneous environment that evolved over time, consisting of databases, files, different servers, programs, etc. The passage of time also has to be simulated, for example by files aging.
I am somewhat lost here and need some starting points from which I can dig further into the material.
Do you know of any tools, knowledge sources, websites, books, experience reports or anything else covering the topic "Evolving testing environments"?
Sounds like a daunting environment; I'd suggest using a "divide and conquer" approach to identify all the test data variables. Make a list of each element of the environment that needs to be varied under test, e.g.
Database type
File age
File size
Server operating system
Programs running on the server
(I'm just guessing at the different elements here based on your question). Then, for each element, make a list of values for it, e.g.
Database type: Oracle, MySQL, PostgreSQL
Server operating system: Windows Server 2003, Windows Server 2008, Fedora 12 Linux
When you're done with that, figure out which values are most important to test; for example, you might want to prioritize Oracle if 80% of your customers use Oracle.
Finally, you should have a set of values for the different environment elements that you can use to create test environments by combining the element values, using the most important ones first.
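If it helps, here is a minimal sketch of that last step in T-SQL, assuming you keep the element values in small tables with a priority column (all names here are made up): a cross join enumerates every combination, ordered so the most important combinations come first.

    -- Hypothetical element tables; names and priorities are illustrative only.
    CREATE TABLE #DatabaseType (Name varchar(50), Priority int);
    CREATE TABLE #ServerOS     (Name varchar(50), Priority int);

    INSERT INTO #DatabaseType VALUES ('Oracle', 1), ('MySQL', 2), ('PostgreSQL', 3);
    INSERT INTO #ServerOS     VALUES ('Windows Server 2008', 1), ('Fedora 12 Linux', 2);

    -- Every combination of element values, most important combinations first.
    SELECT d.Name AS DatabaseType, o.Name AS ServerOS
    FROM #DatabaseType AS d
    CROSS JOIN #ServerOS AS o
    ORDER BY d.Priority + o.Priority;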
After searching and not finding similar cases, I want to open a new question.
So here is the case:
We are working with a large database with a very complicated data structure. We also work on multiple systems to ensure stability (development, testing, quality and production), and it's always a struggle to move data between those systems. As I said, the data structure is very large, and there is also a lot of logic inside the database. Customers are able to add new data parts as configuration, and there is also a steady inflow of data that is used for statistics and monitoring. So let me explain the problem with a small example:
Let's take this database as an example. We have some families holding contests with each other, and they will create some statistics about the points they score.
The Purple Tables are fixed configurations. They are created once and can only be changed by an Operator. Those changes are made and tested in the development system first.
The Yellow Tables are changing configurations. Each Family is able to create or delete multiple Contests and assign its kids.
The Red Table is just plain data. Each time a kid scores points, a new row is added with the amount, the current time, and the relation to the kid and contest.
This table will be the base for the later statistics.
This database runs on two systems: a production system used by the families and a development system used by the programmers/operators.
While developing, the programmers add test data such as kids, families, contests and points. In production, the families create new contests, assign new kids and fill up the points table.
It's necessary to copy new/tested/fixed families from the development system to the production system.
It's also necessary to copy contests, contest-kid assignments and points from the production system to the development system to find new errors.
It must also be possible to change the table structure on the development system and transmit that change to the production system. (This shouldn't be the main topic here; sometimes the changes are so large that there just is no easy way, so let's keep this point simple but keep it in mind.)
I want to copy parts of the tables to another system but be able to ignore some tables (for example: Points), and I want to make sure not to copy kids without their parent family, so there is no "parentless" object in the database.
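Just to illustrate the dependency problem (this is not meant as a solution; all names are placeholders, and SRCSRV is a hypothetical linked server to the source system): a hand-written copy would have to insert parents before children and only take children whose parent exists in the target, roughly like this:

    -- Illustration only: copy families first...
    INSERT INTO dbo.Family (FamilyId, Name)
    SELECT f.FamilyId, f.Name
    FROM SRCSRV.ContestDb.dbo.Family AS f
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Family AS t WHERE t.FamilyId = f.FamilyId);

    -- ...then only kids whose parent family is present, so nothing ends up "parentless".
    -- The Points table is deliberately left out.
    INSERT INTO dbo.Kid (KidId, FamilyId, Name)
    SELECT k.KidId, k.FamilyId, k.Name
    FROM SRCSRV.ContestDb.dbo.Kid AS k
    JOIN dbo.Family AS p ON p.FamilyId = k.FamilyId
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Kid AS t WHERE t.KidId = k.KidId);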
Question: What would be a good and safe way to do this?
I don't need a solution for a specific database type or specific scripts; I'm looking for tools, libraries or good practices. (But just as a note, we're using MSSQL.)
We are currently building a tool for this problem ourselves (it's not going well: unstable, overly complicated, slow, and possibly reinventing the wheel).
Also, a lot of devs I know just copy the whole database (taking a backup and restoring it on another server), but this causes problems too: users are copied along and their GUIDs change, so they lose permissions, etc. I don't think this is a good solution. The database is also down for quite a long time, and it's never a smooth process.
Doing it manually is sometimes the easiest way, but considering the size of our data structure it's not just a huge amount of work; there is also a large potential for mistakes.
So I'm hoping someone knows a tool or something similar to help me out.
Welcome to the pains of development for a stateful entity like a database. :) RedGate makes a tool called SQL Source Control that is good for moving changed data and schema into production, and it can interface with source control solutions such as Git. It's a bit pricey, but it's the best I've found. One option for keeping dev up to date with prod data and dev changes is one I concocted at my last place of employment, which was... not 100% perfect, but better than nothing, and free. It was developed in PowerShell, and it went something like this:
1. Create Pre-restore, Pre-dacpac and Post-dacpac SQL scripts to store data and permission diffs between dev and prod
2. Use SQLPackage.EXE to make a DacPac of Dev (a DacPac is basically an XML schema of the DB, no data)
3. Execute the Pre-restore script (often copying out test data that needs to be persisted)
4. Restore Prod over Dev
5. Execute the Pre-dacpac script (any DDL that could cause data loss may need to go here)
6. Use SQLPackage.EXE to apply the DacPac made in step 2 to the newly restored database
7. Execute the Post-dacpac script (permissions, restoration of data copied in step 3)
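To give a rough idea of what the pre-restore and post-dacpac scripts did, here is a heavily simplified sketch; ScratchDb, the Kid table and the IsTestRow flag are made-up placeholders:

    -- Pre-restore: park dev-only test data in a database that survives the restore.
    SELECT *
    INTO ScratchDb.dbo.KidTestRows
    FROM DevDb.dbo.Kid
    WHERE IsTestRow = 1;

    -- ... restore prod over dev, apply the DacPac ...

    -- Post-dacpac: put the parked test data back and re-grant dev permissions.
    USE DevDb;
    INSERT INTO dbo.Kid SELECT * FROM ScratchDb.dbo.KidTestRows;
    GRANT SELECT, INSERT, UPDATE ON dbo.Kid TO DevTeamRole;
    DROP TABLE ScratchDb.dbo.KidTestRows;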
Again, like I said, it worked and automated the restoration of prod data into our dev environment while keeping our dev changes intact, but it required a good bit of upkeep and maintenance. Also, keep in mind, once your DB reaches a certain size, doing a nightly restore is no longer a viable option due to the time it takes to restore.
Just a bit of background on where my question is coming from: my company has multiple databases across the globe that use the same schema, and one of my department's responsibilities is to monitor and make sure all these DBs are in sync from a schema/SQL change perspective.
Now, my question is whether anyone knows of any software/tool with a frontend UI which is able to do the following (the lower the number, the more important it is to have):
Able to track which SQL code change was applied to which database and when. Basically, if we write a SQL query that changes the structure of a table and we need it applied to 80% or 100% of the DBs, then either via manual input or some automatic check the tool will tell me that yes, this was indeed applied.
Code distribution tool: we give it the query or a file that contains the code, and it's able to push it to the databases it needs to go to (and create the audit log for that)
Code/object repository: keeps track of what was custom developed and pushed to the databases
I know SSIS might be able to do some of these things, but we need a tool that also has a simple frontend interface that can be accessed by non-IT personnel. (Clarification: we are not planning on giving non-DBA people access to change things, just to the audit aspect of said tool.)
I've tried searching the internet, but I have a feeling I'm not using the right vocabulary to get the results I'm looking for.
Hence I wanted to see if the community is aware of any such tool or something similar.
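To make requirement 1 a bit more concrete, here is a sketch of the kind of audit record I have in mind (all names are placeholders):

    -- Sketch only: a central record of which change script was applied where and when.
    CREATE TABLE dbo.SchemaChangeLog (
        ChangeId     int IDENTITY(1,1) PRIMARY KEY,
        ScriptName   varchar(200) NOT NULL,              -- e.g. 'add-index-orders.sql'
        TargetServer sysname      NOT NULL,
        TargetDb     sysname      NOT NULL,
        AppliedAt    datetime2    NOT NULL DEFAULT SYSUTCDATETIME(),
        AppliedBy    sysname      NOT NULL DEFAULT SUSER_SNAME()
    );

    -- "Has change X been applied everywhere?" then becomes a simple query.
    SELECT TargetServer, TargetDb, AppliedAt
    FROM dbo.SchemaChangeLog
    WHERE ScriptName = 'add-index-orders.sql';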
Try searching for one of these two types of systems:
Release/Build/Deployment Automation Complex programs like Serena that have modules for pushing, tracking, and auditing any kind of software, anywhere. These will include all the GUI bells and whistles. But you'll have to deal with extra databases, configuration, agents, workflows, consultants(?), etc. These programs are geared more towards developers.
Remote Execution/Configuration Management Simpler programs like Salt, Fabric, and Ansible that let you run operating system commands anywhere. They don't offer as many features, and you have to do more of the work yourself, but in some ways that's liberating. If you know exactly what commands you want to run you don't need some other program holding your hand. These programs are geared more towards administrators.
From a database administrator's point of view, the main problem with those types of programs is that none of them are relational. Yes they can connect to a database and run a script, but none of them really speak SQL. Their native languages are Java, XML, SSH, etc. There's nothing wrong with those technologies, but if you only care about databases you don't want to deal with all that complexity.
If you're not happy with either of those types of programs I recommend you look at my open source program Method5. It is a remote execution program built as an extension to Oracle SQL. It works entirely inside an Oracle database, so you can install it yourself and won't need any additional websites, agents, configuration files, GUIs, etc.
Based on your comment about getting bogged down by links, and my answer to your question about half a year ago, I think this is the kind of program you were gradually heading towards creating. It took my team a couple thousand hours of developing and testing to get it right so you were probably wise to give up on making your own.
To specifically answer your requirements:
Tracking: Changes are stored in an audit trail. But more importantly, it has the ability and a pre-built script to compare an unlimited number of schemas, all in one view. At the end of the day what you really want to know is "are my schemas the same?", not necessarily "did the same thing get run everywhere?".
Code Distribution: If you just have SQL or PL/SQL, deploying it through Method5 is as easy as it can possibly get. Just specify what you want to run, and where you want to run it, like this: select * from table(m5('create index ...', 'dev, qa, prodDB1, prodDB2')); The program does not (yet) run SQL*Plus scripts. But when you have the ability to run SQL and PL/SQL so easily there's little need for SQL*Plus.
Code Repository: All executions are stored in a simple table, M5_AUDIT. It contains the code, who ran it, where they ran it, and how they ran it. It wasn't designed to be a repository like SVN, but it's good enough for simple auditing and tracking code.
Method5 does not contain a GUI but in some ways I consider that to be a feature. Since everything is done relationally, everything is in a simple table. You can use any of your existing GUIs - Toad, PL/SQL Developer, Excel, Apex, etc. It's a robust back-end solution that will hopefully make a good foundation for easily building a simple front end.
For most database-backed projects I've worked on, there is a need to get "startup" or test data into the database before deploying the project. Examples of startup data: a table that lists all the countries in the world or a table that lists a bunch of colors that will be used to populate a color palette.
I've been using a system where I store all my startup data in an Excel spreadsheet (with one table per worksheet), then I have a utility script in SQL that (1) creates the database, (2) creates the schemas, (3) creates the tables (including primary and foreign keys), (4) connects to the spreadsheet as a linked server, and (5) inserts all the data into the tables.
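For context, the linked-server step boils down to something like this (simplified; here OPENROWSET stands in for the actual linked server, the file path and sheet name are placeholders, and the provider string depends on which Excel driver is installed):

    -- Simplified version of steps (4)-(5); requires the ACE OLE DB provider and
    -- ad hoc distributed queries to be enabled. 'Countries$' is the worksheet name.
    INSERT INTO dbo.Country (CountryCode, CountryName)
    SELECT CountryCode, CountryName
    FROM OPENROWSET(
        'Microsoft.ACE.OLEDB.12.0',
        'Excel 12.0;Database=C:\SeedData\StartupData.xlsx;HDR=YES',
        'SELECT CountryCode, CountryName FROM [Countries$]');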
I mostly like this system. I find it very easy to lay out columns in Excel, verify foreign key relationships using simple lookup functions, perform concatenation operations, copy in data from web tables or other spreadsheets, etc. One major disadvantage of this system is the need to sync up the columns in my worksheets any time I change a table definition.
I've been going through some tutorials to learn new .NET technologies or design patterns, and I've noticed that these typically involve using Visual Studio to create the database and add tables (rather than scripts), and the data is typically entered using the built-in designer. This has me wondering if maybe the way I'm doing it is not the most efficient or maintainable.
Questions
In general, do you find it preferable to build your whole database via scripts or a GUI designer, such as SSMSE or Visual Studio?
What method do you recommend for populating your database with startup or test data and why?
Clarification
Judging by the answers so far, I think I should clarify something. Assume that I have a significant amount of data (hundreds or thousands of rows) that needs to find its way into the database. This data could be sourced from various places, such as text files, spreadsheets, web tables, etc. I've received several suggestions to script this process using INSERT statements, but is this really viable when you're talking about a lot of data?
Which leads me to...
New questions
How would you write a SQL script to take the country data on this page and insert it into the database?
With Excel, I could just copy/paste the table into a worksheet and run my utility script, and I'd basically be done.
What if you later realized you needed a new column, CapitalCity?
With Excel, I could take that information from this page, paste it into Excel, and with a quick text-to-column manipulation, I'd have the data in the format I need.
I honestly didn't write this question to defend Excel as the best way or even a good way to get data into a database, but the answers so far don't seem to be addressing my main concern--how to get all this data into your database. Writing a script with hundreds of INSERT statements by hand would be extremely time consuming and error prone. Somehow, this script needs to be machine generated, but how?
I think your current process is fine for seeding the database with initial data. It's simple, easy to maintain, and works for you. If you've got a good database design with adequate constraints then it doesn't really matter how you seed the initial data. You could use an intermediate tool to generate scripts but why bother?
SSIS has a steep learning curve, doesn't work well with source control (impossible to tell what changed between versions), and is very finicky about type conversions from Excel. There's also an issue with how many rows it reads ahead to determine the data type -- you're in deep trouble if your first x rows contain numbers stored as text.
1) I prefer to use scripts for several reasons.
• Scripts are easy to modify, and when I get ready to deploy my application to a production environment, I already have the scripts written, so I'm all set.
• If I need to deploy my database to a different platform (like Oracle or MySQL) then it's easy to make minor modifications to the scripts to work on the target database.
• With scripts, I'm not dependent on a tool like Visual Studio to build and maintain the database.
2) I like good old-fashioned INSERT statements in a script. Again, at deployment time scripts are your best friend. At our shop, when we deploy our applications we have to have scripts ready for the DBAs to run, as that's what they expect.
I just find that scripts are simple, easy to maintain, and the "least common denominator" when it comes to creating a database and loading data into it. By least common denominator, I mean that the majority of people (i.e. DBAs, other people in your shop who might not have Visual Studio) will be able to use them without any trouble.
The other thing that's important with scripts is that they force you to learn SQL and, more specifically, DDL (data definition language). While the hand-holding GUI tools are nice, there's no substitute for taking the time to learn SQL and DDL inside out. I've found those skills to be invaluable in almost any shop.
Frankly, I find the concept of using Excel here a bit scary. It obviously works, but it's creating a dependency on an ad-hoc data source that won't be resolved until much later. Last thing you want is to be in a mad rush to deploy a database and find out that the Excel file is mangled, or worse, missing entirely. I suppose the severity of this would vary from company to company as a function of risk tolerance, but I would be actively seeking to remove Excel from the equation, or at least remove it as a permanent fixture.
I always use scripts to create databases, because scripts are portable and repeatable - you can use (almost) the same script to create a development database, a QA database, a UAT database, and a production database. For this reason it's equally important to use scripts to modify existing databases.
I also always use a script to create bootstrap data (AKA startup data), and there's a very important reason for this: there's usually more scripting to be done afterward. Or at least there should be. Bootstrap data is almost invariably read-only, and as such, you should be placing it on a read-only filegroup to improve performance and prevent accidental changes. So you'll generally need to script the data first, then make the filegroup read-only.
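A minimal sketch of that load-then-lock-down sequence, with made-up database, filegroup and file names:

    -- Create a filegroup for bootstrap data and put the bootstrap tables on it.
    ALTER DATABASE AppDb ADD FILEGROUP BootstrapFG;
    ALTER DATABASE AppDb ADD FILE
        (NAME = AppDb_Bootstrap, FILENAME = 'C:\Data\AppDb_Bootstrap.ndf')
        TO FILEGROUP BootstrapFG;

    CREATE TABLE dbo.Country
        (CountryCode char(2) PRIMARY KEY, CountryName varchar(100) NOT NULL)
        ON BootstrapFG;

    -- ... run the scripted bootstrap INSERTs here ...

    -- Then lock it down.
    ALTER DATABASE AppDb MODIFY FILEGROUP BootstrapFG READ_ONLY;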
On a more philosophical level, though, if this startup data is required for the database to work properly - and most of the time, it is - then you really ought to consider it part of the data definition itself, the metadata. For that reason, I don't think it's appropriate to have the data defined anywhere but in the same script or set of scripts that you use to create the database itself.
Test data is a little different, but in my experience you're usually trying to auto-generate that data in some fashion, which makes it even more important to use a script. You don't want to have to manually maintain an ad-hoc database of millions of rows for testing purposes.
If your problem is that the test or startup data comes from an external source - a web page, a CSV file, etc. - then I would handle this with an actual "configuration database." This way you don't have to validate references with VLOOKUPS as in Excel, you can actually enforce them.
Use SQL Server Integration Services (formerly DTS) to pull your external data from CSV, Excel, or wherever, into your configuration database - if you need to periodically refresh the data, you can save the SSIS package so it ends up being just a couple of clicks.
If you need to use Excel as an intermediary, i.e. to format or restructure some data from a web page, that's fine, but the important thing IMO is to get it out of Excel as soon as possible, and SSIS with a config database is an excellent repeatable method of doing that.
When you are ready to migrate the data from your configuration database into your application database, you can use SQL Server Management Studio to generate a script for the data (in case you don't already know - when you right click on the database, go to Tasks, Generate Scripts, and turn on "Script Data" in the Script Options). If you're really hardcore, you can actually script the scripting process, but I find that this usually takes less than a minute anyway.
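And if you do want to script the scripting process, a query along these lines (table and column names are just for illustration) will generate the INSERT statements for you:

    -- Generates one INSERT statement per row of the config table.
    SELECT 'INSERT INTO dbo.Country (CountryCode, CountryName) VALUES ('''
           + CountryCode + ''', '''
           + REPLACE(CountryName, '''', '''''') + ''');'
    FROM ConfigDb.dbo.Country
    ORDER BY CountryCode;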
It may sound like a lot of overhead, but in practice the effort is minimal. You set up your configuration database once, create an SSIS package once, and refresh the config data maybe once every few months or maybe never (this is the part you're already doing, and this part will become less work). Once that "setup" is out of the way, it's really just a few minutes to generate the script, which you can then use on all copies of the main database.
Since I use an object-relational mapper (Hibernate, there is also a .NET version), I prefer to generate such data in my programming language. The ORM then takes care of writing things into the database. I don't have to worry about changing column names in the data because I need to fix the mapping anyway. If refactoring is involved, it usually takes care of the startup/test data also.
Excel is an unnecessary component of this process.
Script the current version of the database components that you want to reuse, and add the script to your source control system. When you need to make changes in the future, either modify the entities in the database and regenerate the script, or modify the script and regenerate the database.
Avoid mixing Visual Studio's db designer and Excel as they only add complexity. Scripts and SQL Management Studio are your friends.
I'm about to release a FOSS data generator that can generate random yet meaningful data in CSV format. Rather belatedly, I guess, I need to poll the state of the art for such products - because if there is a well-known and useful existing tool, I can write my work off to experience. I am aware of a couple of SQL Server specific tools, but mine is not database specific.
So, links? And if you have used such a product,
what features did you find it was missing?
Edit: To add a bit more info on my tool (Ooh, Matron!): it is intended to allow generation of any kind of random data from existing data files, and supports weighting. It is XML-based (sorry, folks) and lets you say things like:
<pick distribute="20,80">
    <datafile file="femalenames.dat"/>
    <datafile file="malenames.dat"/>
</pick>
to select female names about 20% of the time and male names 80% of the time.
But the purpose of this question is not to describe my product but to get info on other tools.
Latest: If anyone is interested, they can get the alpha of my data generator at http://code.google.com/p/csvtest
That can be a one-liner in R where I use the littler scripting front-end:
# generate the data as a one-liner from the command-line
# we set the RNG seed, and draw from a bunch of distributions
# indented just to fit the box here
edd@ron:~$ r -e'set.seed(42); write.csv(data.frame(y=runif(10), x1=rnorm(10),
x2=rt(10,4), x3=rpois(10, 0.4)), file="/tmp/neil.csv",
quote=FALSE, row.names=FALSE)'
edd@ron:~$ cat /tmp/neil.csv
y,x1,x2,x3
0.914806043496355,-0.106124516091484,0.830735621223563,0
0.937075413297862,1.51152199743894,1.6707628713402,0
0.286139534786344,-0.0946590384130976,-0.282485683052060,0
0.830447626067325,2.01842371387704,0.714442314565005,0
0.641745518893003,-0.062714099052421,-1.08008578470128,0
0.519095949130133,1.30486965422349,2.28674786332467,0
0.736588314641267,2.28664539270111,-0.73270267483628,1
0.134666597237810,-1.38886070111234,-1.45317770550920,1
0.656992290401831,-0.278788766817371,-1.01676025893376,1
0.70506478403695,-0.133321336393658,0.404860813371462,0
edd@ron:~$
You have not said anything about your data-generating process, but rest assured that R can probably cope with just about any requirement, including multivariate normal, t, skew-t, and more. The (six different) random-number generators in R are also of very high quality.
R can also write to DBs, or read parameters from it, and if it needs to be on Windoze then the Rscript front-end could be used instead of littler.
I asked a similar question some months ago:
Tools for Generating Mock Data?
I got some sincere suggestions, but most were not suitable for my needs. Either expensive (non-free) software, or else not flexible enough w.r.t. data types and database structure, or range of mock data, or way too slow (e.g. the Rails ActiveRecord solution).
Features I was looking for were:
Generate mock data to fill existing database tables
Quick to generate > 1 million rows
Produce either SQL script format or flat file suitable for importing
Scriptable command-line interface, not a GUI
Not dependent on Microsoft Windows environment
Nice-to-have features:
Extensible/configurable
Open-source, free license
Written in a dynamic language like Perl/PHP/Python
Point it at a database and let it "discover" the metadata
Integrated with testing tools (e.g. DbUnit)
Option to fill directly into the database as it generates data
The answer I accepted was Databene Benerator, though since asking the question, I admit I haven't used it very much.
I was surprised that even when asking the community, the range of tools for generating mock data was so thin. This seems like a niche waiting to be filled! I'll be interested to see what you release.
I'd like to start a discussion about the implementation of a database system.
I'm working for a company with a database system that has grown over roughly the last 10 years.
Let me try to describe what it's doing and how it's implemented:
The system is divided into 3 main parts handled by 3 different teams.
Entry:
The Entry Team is responsible for creating GUIs for the system. In the background is a huge MS SQL database (ca. 100 tables) and the GUI is created using .NET. There are different GUI applications and each application has lots of different tabs to fill in the corresponding tables. If e.g. a new column is added to the database, this column is added manually to the GUI application.
Dataflow:
The purpose of the Dataflow Team is to do the data calculations and prepare the data for the Reporting Team. This is done in multiple stages. Let me try to explain the process in a little more detail: the Dataflow Team uses the data from the Entry database, copied to another server and another database via transactional replication (this data contains information from all clients). Then, once per hour, a self-written application checks for changed rows in the input tables (using a ChangedDate column) and calls a stored procedure for each output table, calculating new data from 1-N of the input tables. After that, the data is copied to another database on another server, again using transactional replication. There another stored procedure is called to calculate additional output tables; this stored procedure is started by a SQL job. From there the data is split into different databases, each database being client-specific. This copying is done by another self-written application using the .NET bulk copy command (filtering on the client). These client-specific databases are copied to different client-specific reporting databases on other servers by yet another self-written application, which compares the reporting database with the client-specific database to calculate the data difference. Only the differences are copied (because the reporting databases used to run on the client servers).
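To make the hourly step a bit more concrete, it boils down to something like this (heavily simplified; the real names differ):

    -- Simplified: pick up rows changed since the last run and recalculate the
    -- corresponding output table; one such call exists per output table.
    DECLARE @LastRun datetime;
    SELECT @LastRun = LastRunTime FROM dbo.JobControl WHERE JobName = 'HourlyCalc';

    SELECT *
    INTO #ChangedInputA
    FROM dbo.InputTableA
    WHERE ChangedDate > @LastRun;

    EXEC dbo.CalcOutputTableA;

    UPDATE dbo.JobControl SET LastRunTime = GETDATE() WHERE JobName = 'HourlyCalc';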
This whole process is orchestrated by another self-written application to control, for example, whether the transactional replications are finished before starting the job that calls the stored procedure, etc. Furthermore, the synchronisation between the different clients is also orchestrated here. The process can be graphically displayed by a self-written monitoring tool, which looks pretty complex as you can imagine...
The status of all these components is logged and can be viewed in another self-written application.
If new columns or tables are added, all of these components have to be changed manually.
For deployment, installation instructions are written using MS Word. (Ca. 10 people work in this team.)
Reporting:
The Reporting Team created its own platform written in .NET to allow the client to create custom reports via a GUI. The reports are accessible via the web.
The biggest tables have around 1 million rows. So, I hope I didn't forget anything important.
Well, what I want to discuss is how other people handle this scenario; I can't imagine that every company writes its own custom applications.
What are the options for doing fast calculations on databases (besides using T-SQL)? I'm somehow missing the link here to the object-oriented programming I'm used to from my old company, but we never dealt with this much data, and maybe for fast calculations this is the way to do it... Or is it possible to create the algorithms and calculations using, e.g., LINQ or BizTalk Server, maybe even in a graphical way? The question is just how to convert the existing meter-long stored procedures into the new format...
In future we want to use data warehousing, but that will take a while, so maybe it's possible to have a separate step to streamline the process.
Any comments are appreciated.
Thanks
Daniel
Why on earth would you want to convert existing, working, complex stored procs (which can be performance-tuned) to LINQ (or am I misunderstanding you)? Because you personally don't like T-SQL? Not a good enough reason. Are they too slow? Then they can be tuned (which is something you really don't want to try to do in LINQ). It is possible the process could be made better using SSIS, but as complex as SSIS is, and given the amount of time a rewrite of the process would take, I'm not sure you would really gain anything by doing so.
"I'm somehow missing the link here to the object oriented programming..." Relational databases are NOT object-oriented and cannot perform well if you try to treat them as if they were. Learn to think in terms of sets, not objects, when accessing databases. You are coming from the mindset of one user inserting one record at a time, but this is not the mindset needed to deal with the transfer of large amounts of data. For these types of things, using the database to handle the problem is better than doing it in an object-oriented manner. Once you have a large amount of data and lots of reporting, people are far more interested in performance than you may have been used to in the past, when you used tools that might not be so good for performance. Whether you like T-SQL or not, it is SQL Server's native language and the database is optimized for its use.
The best advice, having been here before, is to start by learning first how SQL works, and doing it in the context of the existing architecture sounds like a good way to start (since nothing you've described sounds irrational on the face of it.)
Whatever abstractions you try to lay on top (LINQ, Biztalk, whatever) all eventually resolve to pure SQL. And almost always they add overhead and complexity.
Your OO paradigms aren't transferable. Any suggestions about abstractions will need to be firmly defensible based on your firm grasp of the SQL consequences.
It will take a while, but it's all worth knowing, both professionally and personally.
I'm currently re-engineering a complex system which is moving from Focus (a database and language) to a data warehouse (separate team) and processing (my team) and reporting (separate team).
The current process is combined - data is loaded and managed in the Focus language and Focus database(s) and then reported (and historical data is retained)
In the new process, the DW is loaded and then our process begins. Our processes are completely coded in SQL, and a million-row fact table (for one month) would be relatively small. We have some feeds where the monthly data is 25 million rows. There are some statistics tables produced which are over 200 million rows (a month). The processing can take several hours a month, end to end. We use tables to store intermediate results, and we ensure indexing strategies are suitable for the processing. Except for one piece implemented as an SSIS flow from the database back to itself because of extremely poor scalar UDF performance, the entire system is implemented as a series of T-SQL SPs.
We also have a process monitoring system similar to what you are discussing as well as having the dependencies in a table which ensures that each process runs only if all its prerequisites are satisfied. I've recently grafted on the MSAGL to graphically display and interact with the process (previously I was using graphviz to generate static images) from a .NET Windows application. The new system thus has much clearer dependency information as well as good information about process performance so effort can be concentrated on the slowest performing bottlenecks.
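Stripped down to the core idea (our real tables carry more columns), the dependency check looks something like this:

    -- A step is runnable only when all of its prerequisites have finished for the period.
    CREATE TABLE dbo.ProcessDependency (StepName sysname NOT NULL, DependsOn sysname NOT NULL);
    CREATE TABLE dbo.ProcessRun (StepName sysname NOT NULL, PeriodId int NOT NULL, FinishedAt datetime NULL);

    SELECT r.StepName
    FROM dbo.ProcessRun AS r
    WHERE r.PeriodId = 201001
      AND r.FinishedAt IS NULL
      AND NOT EXISTS (SELECT 1
                      FROM dbo.ProcessDependency AS d
                      JOIN dbo.ProcessRun AS p
                        ON p.StepName = d.DependsOn AND p.PeriodId = r.PeriodId
                      WHERE d.StepName = r.StepName
                        AND p.FinishedAt IS NULL);   -- an unfinished prerequisite blocks the step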
I would not plan on doing any re-engineering of any complex system without a clear strategy, a good inventory of the existing system and a large budget for time and money.
From the sounds of what you are saying, you have a three step process.
Input data
Analyze data
Report data
Steps one and three need to be completed by "users". Therefore, a GUI is needed for each respective team to do the task at hand; otherwise, they would be working directly in SQL Server and would require extensive SQL knowledge. For these items, I do not see any issue with the approach your organization is taking: you are building a customized system to report on the data at hand. The only thing that might be worth considering on this side is standardization between the teams on common libraries and the technologies used.
Your middle step does seem to be a bit lengthy, with many moving parts. However, I've worked on a number of large reporting systems where that is truly the only way to get around it, and without knowing more about your organization and the exact nature of its operations it's hard to say more.
By "fast calculations" you must mean "fast retrieval". Data warehouses (both relational and otherwise) are fast with math because the answers are pre-calculated in advance. SQL, unless you are using CLR stored procedures, is usually rather slow when it comes to math.
You'd be hard pressed to beat the performance of BCP and SQL with anything else. If the update routines are long and bloated because they loop through the tables, then sure, I can see why you'd want to go to .NET. But you'd probably gain more performance by figuring out how to rewrite them to be nice and set-based. BCP is not going to be beaten. When I used SQL Server 2000, BCP was often faster than DTS. And SSIS in general (due to all the data type checking) seems to be way slower than DTS. If you kill performance, no doubt people are going to be coming to you. Still, if you are doing a ton of row-by-row complex calculations, optimizing that into a CLR stored procedure or even a .NET application called from SQL Server to do the processing will probably result in a speed-up. Of course, if you were doing row processing and you manage to rewrite the queries to do set processing, you'd probably get a bigger speed-up. But depending upon how complex the calculations are, .NET may help.
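As a made-up illustration of the set-based idea (Orders and OrderLines are invented names): a loop that fetches one order at a time and updates its total can usually be collapsed into a single statement:

    -- Set-based replacement for a row-by-row total calculation (names are illustrative).
    UPDATE o
    SET    o.OrderTotal = t.LineTotal
    FROM   dbo.Orders AS o
    JOIN  (SELECT OrderId, SUM(Quantity * UnitPrice) AS LineTotal
           FROM dbo.OrderLines
           GROUP BY OrderId) AS t
      ON   t.OrderId = o.OrderId;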
Now, if a front-end change could immediately update and propagate the data, then you might want to change things to .NET so that as soon as a row is changed it can be recalculated and all the clients updated. However, if a lot of rows are changed or the database is just ginormous, then you will kill performance. If the operation needs to be done in bulk, then the way it is currently being done is probably best.
The only thing I might add is that maybe there is a lot of duplicate SQL that looks exactly the same except for a table name and/or the column names. If so, you can probably use .NET combined with SQL-SMO (or DMO if you're using SQL Server 2000) to code-generate it.
Here's an example that I often see for loading a data warehouse:
Assuming some raw tables are loaded with the data from the source:
select changed rows from source into temporary tables
see if any columns that matter were changed
if so terminate existing row (or clone it into some history table)
insert/update new row
I often see one of those queries per table, and the only variations are the table/column names and maybe references to the key column. You can easily get the column definitions and key definitions out of SQL Server and then write a .NET program to create the INSERT/SELECT/etc. In the worst case you may just have to store some kind of table with TABLE_NAME, COLUMN_NAME for the columns that matter. Then, instead of having to wrap your head around a complex ETL process and 20 or 200 update queries, you just need to wrap your head around UPDATE and one query. Any changes to the way things are done can be made once and applied to all the queries.
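As a rough sketch of that kind of metadata-driven generation (the staging schema and the Id key column here are assumptions for the example, not anything from your system):

    -- Emits one UPDATE statement per table, joining each dbo table to a
    -- same-named table in a hypothetical 'staging' schema on an assumed Id key.
    SELECT 'UPDATE t SET '
           + STUFF((SELECT ', t.' + c.COLUMN_NAME + ' = s.' + c.COLUMN_NAME
                    FROM INFORMATION_SCHEMA.COLUMNS AS c
                    WHERE c.TABLE_SCHEMA = 'dbo'
                      AND c.TABLE_NAME = tb.TABLE_NAME
                      AND c.COLUMN_NAME <> 'Id'
                    FOR XML PATH('')), 1, 2, '')
           + ' FROM dbo.' + tb.TABLE_NAME + ' AS t'
           + ' JOIN staging.' + tb.TABLE_NAME + ' AS s ON s.Id = t.Id;'
    FROM INFORMATION_SCHEMA.TABLES AS tb
    WHERE tb.TABLE_SCHEMA = 'dbo' AND tb.TABLE_TYPE = 'BASE TABLE';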
In particular, my guess is that you can apply this technique to the individual client databases if you haven't already. Probably all the queries/bulk copy scripts are the same or almost the same with the exception of the database/server name, so you can just autogenerate them based on a CLIENTS table or something.