I would like to request some help on test automation data storage and retrieval. We are writing test automation scripts with Selenium WebDriver. We started off using MS Excel sheets to store our test data and Apache POI to read the data. What we observed recently is that when multiple people modify the same sheet and check it into Git, the changes are sometimes not reflected.
One automation engineer proposed using a .csv file to avoid this problem, and I proposed using an Oracle database to store the test data.
Is it a good idea to store test data in separate Oracle DB tables? My idea is to create Oracle tables with two columns that store name/value pairs. My application is large and might require anywhere between 5 and 10 tables.
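To make the idea concrete, here is a minimal sketch of how a script might read one of those name/value tables, assuming a hypothetical TEST_DATA(NAME, VALUE) table and the Oracle JDBC thin driver on the classpath (the connection details and table are placeholders, not an existing schema):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class OracleTestDataReader {

        // Placeholder connection details; adjust for your own environment.
        private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
        private static final String USER = "test_automation";
        private static final String PASSWORD = "secret";

        // Looks up a single value by name from the hypothetical TEST_DATA(NAME, VALUE) table.
        public static String getValue(String name) throws Exception {
            String sql = "SELECT value FROM test_data WHERE name = ?";
            try (Connection con = DriverManager.getConnection(URL, USER, PASSWORD);
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, name);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println("login.username = " + getValue("login.username"));
        }
    }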
Kindly let me know.
Regards
Srinivas
I believe that for most cases CSV would be good enough.
Creating a special database (especially one that stores only key/value pairs) just for test data seems like overkill.
CSV pros
Advantages of using Git (easy diffs, history of changes)
Performance: it's a plain text file, easy to read
Disk space
Support for parameterized tests (JUnit, TestNG); see the sketch at the end of this answer
CSV cons
Hard to maintain relations between different pieces of data, if such relations exist
Database pros
Flexibility
Easier mapping of complicated objects with relations
Database cons
Time needed to build the initial setup and maintain it
Performance: starting, cleaning up and initializing the DB (to keep tests repeatable)
Preferably you should also test the database that you're using
The main problem with using a database is maintaining the test code, which should stay simple, fast and repeatable.
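To illustrate the parameterized-test point from the CSV pros above, here is a minimal sketch of a TestNG data provider backed by a CSV file. The file name login-data.csv, its two-column layout, and the test body are assumptions for the example, not part of any existing framework:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class CsvDrivenLoginTest {

        // Reads rows from a hypothetical login-data.csv with lines like "user1,password1".
        @DataProvider(name = "loginData")
        public Object[][] loginData() throws Exception {
            List<String> lines = Files.readAllLines(Paths.get("src/test/resources/login-data.csv"));
            return lines.stream()
                    .map(line -> line.split(","))
                    .toArray(Object[][]::new);
        }

        @Test(dataProvider = "loginData")
        public void loginWorks(String username, String password) {
            // The actual Selenium WebDriver steps would go here; kept as a placeholder.
            System.out.printf("Logging in as %s%n", username);
        }
    }

Each CSV row becomes one test invocation, and the file stays easy to diff in Git.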
Related
We're approaching the migration of legacy OpenVMS RMS files into a relational database (both MS SQL 2012 and Oracle 10g are available).
I wonder if there are:
Tools to retrieve schema of indexed files
Tools to parse indexed files
Tools to deal with custom RMS data formats (zoned decimals etc)
as a bundle/API/Library
Perhaps I should change the approach?
There are several tools available, notably through ODBC vendors (I work for one: Attunity).
1 >> Tools to retrieve schema of indexed files
Please clarify: are you looking for just the record/column layout and indexes within the files, or also for relationships between files?
1a) How are the files currently being used? Cobol, Basic, Fortran programs? Datatrieve?
They will be using some data definition method, so you want a tool which can exploit that.
Connx and Attunity Connect can 'import' CDD definitions, BASIC MAP files, and Cobol copybooks. Variants are typically covered as well. I have written many a (Perl/awk) script to convert a special definition to XML.
1b) Analyze/RMS, or a program calling RMS XABs, can get the available index information. Attunity Connect will know how to map those onto the fields from 1a).
1c) There is no formal, stored relationship between (indexed) files on OpenVMS. That's all in the program logic. However, a modestly smart Perl/Awk/DCL script can often generate a table of likely foreign/primary keys by looking for field name and datatype matches.
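As a rough illustration of that kind of matching, here is a hedged sketch in Java rather than Perl/Awk/DCL; the layouts are made up for the example, not taken from real CDD or copybook output:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class KeyCandidateGuesser {

        // Prints field-name/datatype pairs that appear in both layouts:
        // candidate primary/foreign key relationships to be reviewed by hand.
        static void suggestKeys(String fileA, Map<String, String> layoutA,
                                String fileB, Map<String, String> layoutB) {
            layoutA.forEach((field, type) -> {
                if (type.equals(layoutB.get(field))) {
                    System.out.printf("%s.%s <-> %s.%s (%s)%n", fileA, field, fileB, field, type);
                }
            });
        }

        public static void main(String[] args) {
            // Made-up layouts; in practice these would come from parsed CDD/copybook definitions.
            Map<String, String> orders = new LinkedHashMap<>();
            orders.put("CUST_ID", "DECIMAL(8)");
            orders.put("ORDER_DATE", "DATE");

            Map<String, String> customers = new LinkedHashMap<>();
            customers.put("CUST_ID", "DECIMAL(8)");
            customers.put("CUST_NAME", "CHAR(40)");

            suggestKeys("ORDERS", orders, "CUSTOMERS", customers);
        }
    }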
How many files / layouts / gigabytes are we talking about?
2 >> Tools to parse indexed files
Please clarify. Once the structure is known (question 1), the parsing is done by reading with that structure, right? You never ever want to understand the indexed file internals; just tell RMS to fetch records.
3 >> Tools to deal with custom RMS data formats (zoned decimals etc) as a bundle/API/Library
Again, please clarify. Once the structure is known, just use the 'right' tool to read with that structure, and it will surely honor the detailed data definitions.
(I know it is quite simple to write one yourself, just thought there would be something in the industry)
Famous last words... 'quite simple'. Entire companies have been built, and thrive, doing just that for the general case. I admit that for specific cases it can be relatively straightforward, but 'the devil is in the details'.
In the Attunity Connect case we have a UDT (User Defined data Type) to handle the 'odd' cases, often involving DATES. Dates as integers, as strings, or as units since xxx are all available out of the box, but, for example, some have -1 meaning 'some high date', which needs some help to be stored in a DB.
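A hedged sketch of the sort of conversion involved; the -1 sentinel, the yyyymmdd integer layout, and the chosen 'high date' are illustrative assumptions, not Attunity's actual UDT:

    import java.time.LocalDate;

    public class LegacyDateConverter {

        // Illustrative rule: dates stored as yyyymmdd integers, with -1 meaning "some high date".
        static LocalDate fromLegacy(int raw) {
            if (raw == -1) {
                return LocalDate.of(9999, 12, 31); // the "high date" chosen for the target DB
            }
            return LocalDate.of(raw / 10000, (raw / 100) % 100, raw % 100);
        }

        public static void main(String[] args) {
            System.out.println(fromLegacy(20120623)); // 2012-06-23
            System.out.println(fromLegacy(-1));       // 9999-12-31
        }
    }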
All the databases have some bulk load tool (BCP, SQL$LOADER).
As long as you can deliver data conforming to what those expect (tabular, comma-separated, quoted or not, escaped or not), you should be in good shape.
The EGH tool Vselect may be a handy, and high-performance, way to bulk read indexed files, filter and format some of the data, and spit out sequential files for the DB loaders. It can read RMS indexed files faster than RMS can! (It has its own metadata language, though.)
Attunity offers full access and replication services.
They include CDC (change data capture) to not only load the data, but also to keep it up to date in near real time. That's useful for 'evolution' versus 'revolution'.
Check out Attunity 'Replicate'. Once you have a data dictionary, just point to the tables desired (include/exclude filters), point to a target DB, and click to replicate. Of course there are options for (global or per-table) transformations (like combining AREA-CODE+EXCHANGE+NUMBER into a single phone number, or adding a modified-date column).
Will this be a single big switch conversion, or is there desire to migrate the data and keep the old systems alive for days, months, years perhaps, all along keeping the data in close sync?
Hope this helps some,
Hein van den Heuvel.
OP: Perhaps I should change the approach? Probably.
You might consider finding data migration vendors, some of which likely have off-the-shelf solutions, if not as a COTS tool then more likely packaged as a service (I don't think this is a big market).
What this won't help you with is what I think of as the much bigger problem with the application code: who is going to change all the code that makes RMS calls into the corresponding code that makes relational DB calls? How will that entity ("Joe Programmer", or some tool) know where the data migrated to, so that the correct call can be written? What are you going to do about the fact that the data representation is likely to change?
Ideally you'd like an automated migration tool that will move the data itself (and therefore knows the data layouts and representation changes) and will make the corresponding code changes. You can look for these kinds of vendors, too.
I am working on Selenium automation using WebDriver. It is a keyword- and data-driven approach. I am handling all the object, data, and test configuration inputs from Microsoft Excel.
Now the client wants to use a database. He is asking me which one is better to use in the framework: a database or the Microsoft Excel utility? I have to reply to him with valid points.
Which one is better to use in a framework, and why is the other one not a good choice?
This question really calls for an opinionated answer, but because there are some important trade-offs between the two, I will still answer it!
With Excel, data entry is easy and accessible to non-technical testers. It has good sorting ability and the data can be organised in a pretty intuitive fashion. However, you cannot easily add or alter data during a test (I understand you could do this, but I would think it is overboard). This means data must be organised in advance and be specific to the test requirements.
In a database, data can come from queries and be altered on the fly. You could write helper functions that populate required data if it can't be found in the DB (see the sketch below). This means tests can run without worrying about what data is currently in the DB (to an extent), which can alleviate some project issues. However, with a database there can be issues importing/exporting, and there can be a lot of coding overhead. Also, non-technical testers will obviously find it harder to change the data.
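A minimal sketch of such a helper, assuming plain JDBC and a hypothetical USERS(USERNAME, PASSWORD) table; both the table and the method name are illustrative, not an existing framework API:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class TestDataHelper {

        // Ensures a user row exists before the test runs; inserts it if it is missing.
        // USERS(USERNAME, PASSWORD) is a hypothetical table used for illustration.
        public static void ensureUserExists(Connection con, String username, String password)
                throws SQLException {
            try (PreparedStatement check = con.prepareStatement(
                    "SELECT 1 FROM users WHERE username = ?")) {
                check.setString(1, username);
                try (ResultSet rs = check.executeQuery()) {
                    if (rs.next()) {
                        return; // the data is already there, nothing to do
                    }
                }
            }
            try (PreparedStatement insert = con.prepareStatement(
                    "INSERT INTO users (username, password) VALUES (?, ?)")) {
                insert.setString(1, username);
                insert.setString(2, password);
                insert.executeUpdate();
            }
        }
    }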
As I said, this is really an opinion in the end, and it is very much a project-by-project decision.
In my experience, efficiency plays an integral role when comparing these two approaches. Using a database tends to slow down performance, unlike Excel sheets; using a file (Excel) increases efficiency. In our company's automation suites we rarely use databases; instead we push data into files and use them as the repository.
For most database-backed projects I've worked on, there is a need to get "startup" or test data into the database before deploying the project. Examples of startup data: a table that lists all the countries in the world or a table that lists a bunch of colors that will be used to populate a color palette.
I've been using a system where I store all my startup data in an Excel spreadsheet (with one table per worksheet), then I have a utility script in SQL that (1) creates the database, (2) creates the schemas, (3) creates the tables (including primary and foreign keys), (4) connects to the spreadsheet as a linked server, and (5) inserts all the data into the tables.
I mostly like this system. I find it very easy to lay out columns in Excel, verify foreign key relationships using simple lookup functions, perform concatenation operations, copy in data from web tables or other spreadsheets, etc. One major disadvantage of this system is the need to sync up the columns in my worksheets any time I change a table definition.
I've been going through some tutorials to learn new .NET technologies or design patterns, and I've noticed that these typically involve using Visual Studio to create the database and add tables (rather than scripts), and the data is typically entered using the built-in designer. This has me wondering if maybe the way I'm doing it is not the most efficient or maintainable.
Questions
In general, do you find it preferable to build your whole database via scripts or a GUI designer, such as SSMSE or Visual Studio?
What method do you recommend for populating your database with startup or test data and why?
Clarification
Judging by the answers so far, I think I should clarify something. Assume that I have a significant amount of data (hundreds or thousands of rows) that needs to find its way into the database. This data could be sourced from various places, such as text files, spreadsheets, web tables, etc. I've received several suggestions to script this process using INSERT statements, but is this really viable when you're talking about a lot of data?
Which leads me to...
New questions
How would you write a SQL script to take the country data on this page and insert it into the database?
With Excel, I could just copy/paste the table into a worksheet and run my utility script, and I'd basically be done.
What if you later realized you needed a new column, CapitalCity?
With Excel, I could take that information from this page, paste it into Excel, and with a quick text-to-column manipulation, I'd have the data in the format I need.
I honestly didn't write this question to defend Excel as the best way, or even a good way, to get data into a database, but the answers so far don't seem to be addressing my main concern: how to get all this data into your database. Writing a script with hundreds of INSERT statements by hand would be extremely time-consuming and error-prone. Somehow, this script needs to be machine generated, but how?
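One way to machine-generate it: export the worksheet (or paste the web table) as a CSV and let a small program emit the INSERT statements. A minimal sketch, assuming a countries.csv with name and ISO-code columns and a target table Country(Name, IsoCode), both of which are placeholders:

    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class InsertScriptGenerator {

        public static void main(String[] args) throws Exception {
            // Assumed input: countries.csv with lines like "Afghanistan,AF".
            List<String> lines = Files.readAllLines(Paths.get("countries.csv"));
            try (PrintWriter out = new PrintWriter("insert_countries.sql")) {
                for (String line : lines) {
                    String[] cols = line.split(",");
                    // Double up single quotes so the generated SQL stays valid.
                    String name = cols[0].trim().replace("'", "''");
                    String iso = cols[1].trim().replace("'", "''");
                    out.printf("INSERT INTO Country (Name, IsoCode) VALUES ('%s', '%s');%n", name, iso);
                }
            }
        }
    }

Adding a column later (CapitalCity, say) would then be a matter of one more column in the CSV and one more placeholder in the printf line.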
I think your current process is fine for seeding the database with initial data. It's simple, easy to maintain, and works for you. If you've got a good database design with adequate constraints then it doesn't really matter how you seed the initial data. You could use an intermediate tool to generate scripts but why bother?
SSIS has a steep learning curve, doesn't work well with source control (impossible to tell what changed between versions), and is very finicky about type conversions from Excel. There's also an issue with how many rows it reads ahead to determine the data type; you're in deep trouble if your first x rows contain numbers stored as text.
1) I prefer to use scripts for several reasons.
• Scripts are easy to modify, and when I get ready to deploy my application to a production environment, I already have the scripts written, so I'm all set.
• If I need to deploy my database to a different platform (like Oracle or MySQL) then it's easy to make minor modifications to the scripts to work on the target database.
• With scripts, I'm not dependent on a tool like Visual Studio to build and maintain the database.
2) I like good old-fashioned INSERT statements in a script. Again, at deployment time scripts are your best friend. At our shop, when we deploy our applications we have to have scripts ready for the DBAs to run, as that's what they expect.
I just find that scripts are simple, easy to maintain, and the "least common denominator" when it comes to creating a database and loading data into it. By least common denominator, I mean that the majority of people (i.e. DBAs, or other people in your shop who might not have Visual Studio) will be able to use them without any trouble.
The other thing that's important with scripts is that they force you to learn SQL and, more specifically, DDL (data definition language). While the hand-holding GUI tools are nice, there's no substitute for taking the time to learn SQL and DDL inside out. I've found those skills to be invaluable in almost any shop.
Frankly, I find the concept of using Excel here a bit scary. It obviously works, but it's creating a dependency on an ad-hoc data source that won't be resolved until much later. Last thing you want is to be in a mad rush to deploy a database and find out that the Excel file is mangled, or worse, missing entirely. I suppose the severity of this would vary from company to company as a function of risk tolerance, but I would be actively seeking to remove Excel from the equation, or at least remove it as a permanent fixture.
I always use scripts to create databases, because scripts are portable and repeatable - you can use (almost) the same script to create a development database, a QA database, a UAT database, and a production database. For this reason it's equally important to use scripts to modify existing databases.
I also always use a script to create bootstrap data (AKA startup data), and there's a very important reason for this: there's usually more scripting to be done afterward. Or at least there should be. Bootstrap data is almost invariably read-only, and as such, you should be placing it on a read-only filegroup to improve performance and prevent accidental changes. So you'll generally need to script the data first, then make the filegroup read-only.
On a more philosophical level, though, if this startup data is required for the database to work properly - and most of the time, it is - then you really ought to consider it part of the data definition itself, the metadata. For that reason, I don't think it's appropriate to have the data defined anywhere but in the same script or set of scripts that you use to create the database itself.
Test data is a little different, but in my experience you're usually trying to auto-generate that data in some fashion, which makes it even more important to use a script. You don't want to have to manually maintain an ad-hoc database of millions of rows for testing purposes.
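For instance, a hedged sketch of generating test rows straight from code with JDBC batch inserts rather than maintaining them by hand; the connection string, the Customer table, and the row count are all illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TestDataGenerator {

        public static void main(String[] args) throws Exception {
            // Placeholder connection string; point it at a disposable test database.
            String url = "jdbc:sqlserver://localhost;databaseName=TestDb;user=sa;password=secret";
            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO Customer (Name, Email) VALUES (?, ?)")) {
                con.setAutoCommit(false);
                for (int i = 1; i <= 100_000; i++) {
                    ps.setString(1, "Customer " + i);
                    ps.setString(2, "customer" + i + "@example.com");
                    ps.addBatch();
                    if (i % 1_000 == 0) {
                        ps.executeBatch(); // flush in chunks to keep memory use bounded
                    }
                }
                ps.executeBatch();
                con.commit();
            }
        }
    }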
If your problem is that the test or startup data comes from an external source - a web page, a CSV file, etc. - then I would handle this with an actual "configuration database." This way you don't have to validate references with VLOOKUPs as in Excel; you can actually enforce them.
Use SQL Server Integration Services (formerly DTS) to pull your external data from CSV, Excel, or wherever, into your configuration database - if you need to periodically refresh the data, you can save the SSIS package so it ends up being just a couple of clicks.
If you need to use Excel as an intermediary, i.e. to format or restructure some data from a web page, that's fine, but the important thing IMO is to get it out of Excel as soon as possible, and SSIS with a config database is an excellent repeatable method of doing that.
When you are ready to migrate the data from your configuration database into your application database, you can use SQL Server Management Studio to generate a script for the data (in case you don't already know - when you right click on the database, go to Tasks, Generate Scripts, and turn on "Script Data" in the Script Options). If you're really hardcore, you can actually script the scripting process, but I find that this usually takes less than a minute anyway.
It may sound like a lot of overhead, but in practice the effort is minimal. You set up your configuration database once, create an SSIS package once, and refresh the config data maybe once every few months or maybe never (this is the part you're already doing, and this part will become less work). Once that "setup" is out of the way, it's really just a few minutes to generate the script, which you can then use on all copies of the main database.
Since I use an object-relational mapper (Hibernate, there is also a .NET version), I prefer to generate such data in my programming language. The ORM then takes care of writing things into the database. I don't have to worry about changing column names in the data because I need to fix the mapping anyway. If refactoring is involved, it usually takes care of the startup/test data also.
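As a minimal sketch of that approach with Hibernate (the Country entity and the hibernate.cfg.xml settings are illustrative; any ORM would do):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class SeedData {

        @Entity
        public static class Country {
            @Id @GeneratedValue Long id;
            String name;

            Country() { }
            Country(String name) { this.name = name; }
        }

        public static void main(String[] args) {
            // hibernate.cfg.xml is assumed to hold the connection and dialect settings.
            SessionFactory factory = new Configuration()
                    .configure()
                    .addAnnotatedClass(Country.class)
                    .buildSessionFactory();
            Session session = factory.openSession();
            session.beginTransaction();
            for (String name : new String[] { "Belgium", "Canada", "Denmark" }) {
                session.save(new Country(name)); // the ORM writes the rows; no Excel involved
            }
            session.getTransaction().commit();
            session.close();
            factory.close();
        }
    }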
Excel is an unnecessary component of this process.
Script the current version of the database components that you want to reuse, and add the script to your source control system. When you need to make changes in the future, either modify the entities in the database and regenerate the script, or modify the script and regenerate the database.
Avoid mixing Visual Studio's db designer and Excel as they only add complexity. Scripts and SQL Management Studio are your friends.
The problem: we have one application that has a portion which is used by a very small subset of the total users, and that part of the application runs off a separate database as well. In a perfect world, the schemas of the two databases would be in sync, but such is not the case. Some migrations have been run on the smaller database, most haven't, and furthermore there is no revision number or similar to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we came up with the following possible plan of action, and I am wondering if anyone knows of any project which has already solved this problem:
What we would like to do is create an empty database from the schema of the large fully-migrated database, and then move all of the data from the smaller non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem specifically that no migrations have ever removed anything, only added.
Otherwise, if there are other known solutions, I'd like to hear them as well.
You could use a schema comparison tool like Red-Gate's SQL Compare. You can synchronize the changes and not lose any data. I wrote about this and many alternative tools ranging widely in price here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
The nice thing is that most tools have trial versions, so you can try them out for 14 days (fully functional) and only buy one if it meets your expectations. I can't speak for the other tools, but I've been using RG for years and it is a very capable and reliable tool.
(Updated 2012-06-23 to help prevent link-rot.)
Red-Gate's SQL Compare as Aaron Bertrand mentions in his answer is a very good option. However, if you are not permitted to purchase something, an option is to try something like:
1) For each database, script out all the tables, constraints, indexes, views, procedures, etc.
2) run a diff, go through all the differences, and make sure that the small DB can accept them. If not, implement any changes (including data) necessary on the small DB so it can accept them.
3) create a new empty database from the schema of the large DB
4) import the data from the small DB into the new DB.
You could also reverse engineer your database into Visual Studio as a database project. Visual Studio Team Suite Database Edition GDR R2 (I know, long name) has the capability to do a schema comparison and data comparison, but the beauty of this approach is that you get your whole database into a nice database project where you can manage change and integrate with source control. This would allow you to build from a common source and deploy consistent changes.
I remember that in my previous job I needed to do data migration. In that case, I needed to migrate to a new system which I was to develop, so it had a different table schema. I think, first, I should know:
In general, how is data migrated (with the same schema) to a different DB engine, e.g. MySQL -> MSSQL? In my case, my destination DB was MySQL and I used the MySQL Migration Toolkit.
I am thinking that, in an enterprise app, there may be stored procedures and triggers that also need to be imported.
If the table schema is different, how will I then go about doing this? In my previous job, what I did was import the data (in my case, from Access) into my destination (MySQL), leaving the table structures as they were, then use SQL to select the data and manipulate it as required into the final destination tables.
In my case, where I didn't have documentation for the old DB and the columns were not named meaningfully (e.g. 'field1', 'field2', etc.), I needed to trace from the application code what the columns mean. Is there a better way? Also, sometimes columns contain multiple values as delimited data; is reading the code the only way?
It really depends, but from your question I assume you want to hear what other people do.
So here is what I do in my current project.
I have to migrate from Oracle to Oracle but to a completely different schema.
The old system was 2-tier (old client, old database) the new system is 3-tier (new client, business logic, new database). We have more than 600 tables in the new schema.
After much pondering we scrapped the idea of doing a migration from the old database to the new database in SQL. We decided that in our case it would be much easier to go:
old database -> old client -> business logic -> new database
In the old database much of the data is stored in strange ways and the old client mangles it in complex ways. We have access to the source code of the old client but it is a very large system.
We wrote a migration tool that sits above the old client and the business logic.
We have some SQL before and some SQL after that, but the bulk of the data is migrated via the old client and business logic.
The downside is that it is slow, a complete migration taking more than 190 hours in our case, but otherwise it works well.
UPDATE
As far as stored procedures and triggers are concerned:
Even though we use the same DBMS in the old and new systems (both Oracle), the procedures and triggers are written from scratch for the new system.
When I've performed database migrations, I've used the application itself instead of a general tool to migrate the database. The application connects to both databases and copies objects from one to the other. You don't have to worry about schema or permissions or whatnot, since all of that is handled in the application, just like what happens when you set up the application in the first place.
Of course, this may not help you if your application doesn't support this. But if you're writing an application, I strongly recommend doing it this way.
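A bare-bones sketch of that application-mediated copy using plain JDBC; the connection URLs and table names are placeholders, both schemas are assumed to already exist and match, and a real application would use its own data-access layer instead of SELECT *:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;
    import java.util.Collections;

    public class TableCopier {

        // Copies every row of one table from the source database to the target database.
        static void copyTable(Connection source, Connection target, String table) throws Exception {
            try (Statement st = source.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                ResultSetMetaData meta = rs.getMetaData();
                int cols = meta.getColumnCount();
                String placeholders = String.join(", ", Collections.nCopies(cols, "?"));
                try (PreparedStatement insert = target.prepareStatement(
                        "INSERT INTO " + table + " VALUES (" + placeholders + ")")) {
                    while (rs.next()) {
                        for (int i = 1; i <= cols; i++) {
                            insert.setObject(i, rs.getObject(i));
                        }
                        insert.addBatch();
                    }
                    insert.executeBatch();
                }
            }
        }

        public static void main(String[] args) throws Exception {
            // Placeholder URLs for the old and new databases.
            try (Connection src = DriverManager.getConnection("jdbc:mysql://oldhost/appdb", "user", "pw");
                 Connection dst = DriverManager.getConnection("jdbc:sqlserver://newhost;databaseName=appdb;user=user;password=pw")) {
                for (String table : new String[] { "customers", "orders" }) {
                    copyTable(src, dst, table);
                }
            }
        }
    }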
I recommend the Wikipedia article for a good overview and links to the main commercial tools (and some non-commercial ones). Stored procedures (and kin, e.g. user-defined functions), if abundant, are going to be the "hot spots" in the migration, requiring rare and costly human skills: as soon as you get away from the "declarative" mood of mainstream SQL and into procedural code, you cannot expect automated tools to do a decent job (Turing's Theorem says that they actually can't, in a sufficiently general case;-). So you need engineers with a good understanding of the procedural trappings of BOTH engines, the one you're migrating from and the one you're migrating to. You can buy that -- it's one of the niches where consultants make REALLY good money!-)
If you are using MS SQL Server, you can use SSMS to script out the schema and all data in one go: SQL Server 2008: Script Data as Inserts.
If you are not using any/many non-standard SQL constructs, then you might be able to manually edit this script without too much effort.