For a testing project I need to verify whether databases are working as required, and also check whether, given two databases, the data in them is identical.
So, is there any pre-defined algorithm for performing this task?
Define what you mean by identical. Is it schema, data, or both? If schema, is it only tables, or all elements including functions/procedures/views, etc.?
If you use JDBC, start with its schema-revealing metadata functions and compare the objects one by one. Then repeat the same for the data inside the tables.
You can take advantage of open-source projects to read the schema details.
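If it helps, here is a rough Python sketch of that idea, using plain DB-API connections and information_schema queries instead of JDBC metadata calls; the %s placeholder style (psycopg2/MySQL), the schema name, and the per-table key columns are assumptions you would adapt to your environment:

    # Rough sketch: compare schema, then data, of two databases reached through
    # DB-API connections (e.g. psycopg2). Assumes both expose information_schema.
    def schema_snapshot(conn, schema="public"):
        cur = conn.cursor()
        cur.execute(
            "SELECT table_name, column_name, data_type "
            "FROM information_schema.columns "
            "WHERE table_schema = %s "
            "ORDER BY table_name, ordinal_position",
            (schema,),
        )
        return cur.fetchall()

    def table_rows(conn, table, key_columns):
        # Order by a stable key so row order cannot cause false mismatches.
        # Table/column names come from your own schema snapshot, not user input.
        cur = conn.cursor()
        cur.execute(f"SELECT * FROM {table} ORDER BY {', '.join(key_columns)}")
        return cur.fetchall()

    def databases_match(conn_a, conn_b, tables_with_keys):
        if schema_snapshot(conn_a) != schema_snapshot(conn_b):
            return False  # schemas differ
        for table, keys in tables_with_keys.items():
            if table_rows(conn_a, table, keys) != table_rows(conn_b, table, keys):
                return False  # data differs in this table
        return True

For very large tables you would compare per-table checksums or chunked row counts instead of pulling every row, but the overall shape is the same.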
Context:
Let's suppose we have multiple data marts (e.g. HR, Accounting, Marketing) and all of them use a star schema for dimensional modeling (the Kimball approach).
Question:
Snowflake's cloud data warehouse architecture eliminates the need to spin off separate physical data marts/databases in order to maintain performance. So, what's the best approach to building the multiple data marts on Snowflake?
Create a database for each data mart? Or create one database (EDW) with multiple schemas, where each schema represents a data mart?
Thank you!
Ron is correct - the answer depends on a few things:
If there are conformed dimensions, then one database and schema might be the way to go
If they are completely non-integrated data marts I would go with separate schemas or even separate databases. They are all logical containers in Snowflake (rather than physical) with full role based access control available to segregate users.
So really - how do you do it today? Does that work for you, or are there things you need or want to do that you cannot do today with your current physical setup? How is security set up with your BI tools? Do they reference a database name or just a schema name? If you can, minimize changes to your data pipeline and reporting so you have fewer things that might need refactoring (at least for your first POC or migration).
One thing to note is that with Snowflake you have the ability to easily do cross-database joins (i.e., database.schema.table) - all you need is SELECT access, so even if you separate the marts by database you can still do cross-mart reporting if needed.
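For illustration, a cross-database join might look like the sketch below, run through the snowflake-connector-python package; the account details, mart databases, and table/column names are all invented:

    import snowflake.connector

    conn = snowflake.connector.connect(account="...", user="...", password="...")
    cur = conn.cursor()
    # Fully qualified names (database.schema.table) let one query span two marts.
    cur.execute("""
        SELECT f.order_id, d.employee_name
        FROM SALES_DM.PUBLIC.FCT_ORDERS f
        JOIN HR_DM.PUBLIC.DIM_EMPLOYEE d
          ON f.employee_key = d.employee_key
    """)
    rows = cur.fetchall()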
Hope that helps.
There is no specific need to separate star schemas at all.
If you're using shared / conformed dimensions across your marts, separation would actually be an anti-pattern.
If your issue is simplifying the segregation of users, schema per mart works well.
All of the approaches you've suggested (DB/mart, DW/schema, ...) will work; I'm just not clear on the need.
The goal of having separate data marts is more related to governance, to keep data organized and where it is expected to be found (e.g. sales transactions in the "sales data mart"), and less related to performance issues.
The advantage of having a single database acting as a data warehouse is that all your data for analytics will be stored in one place, making it more accessible and easier to find. In this case, you can use schemas to implement (logically) separate data marts. You can also use schemas within a database to keep development data separate from production data, for each data mart.
Snowflake is different from traditional relational databases; given its technical architecture, it has no issues with joining large tables between different databases/schemas so you can certainly build different data marts in separate databases and join their facts or dimensions with some other Snowflake database/data mart.
In your specific case, if you have a large number of data marts (e.g. 10 or more) and you're not using Snowflake for much more than data warehousing, I think the best path would be to implement each data mart in its own database and use schemas to manage prod/dev data within each database. This will help keep data organized, as opposed to quickly reaching a point where you'll have hundreds of tables (every data mart, and its dev/prod versions) in one database, which won't be a great development or maintenance experience.
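As a rough sketch of that layout (mart names are invented; the statements are ordinary Snowflake DDL, shown here driven through snowflake-connector-python, but you could just as well run them in the UI):

    import snowflake.connector

    conn = snowflake.connector.connect(account="...", user="...", password="...")
    cur = conn.cursor()
    # One database per mart, with PROD and DEV schemas inside each.
    for mart in ("HR_DM", "ACCOUNTING_DM", "MARKETING_DM"):
        cur.execute(f"CREATE DATABASE IF NOT EXISTS {mart}")
        cur.execute(f"CREATE SCHEMA IF NOT EXISTS {mart}.PROD")
        cur.execute(f"CREATE SCHEMA IF NOT EXISTS {mart}.DEV")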
But, from a performance perspective, there's really no noticeable difference.
For a suite of automated tests we use SQL fixtures to prepare the database in a desired state. The problem, however, is that the database is frequently updated: columns can be added to or removed from different tables. This is done with other SQL files. It is possible that the changed tables are referenced in the SQL fixtures.
Is there some way to update the existing SQL fixtures along with the database updates? Multiple fixtures may overlap in the tables they touch, so each fixture should be updated independently.
First and foremost I'd try to get a handle on the migration process if things are truly going too fast. Schemas do change but if you're spending a significant amount of time just keeping up with that, it's worth taking a step back to try to stabilize things.
The next thing to do is to minimize the impact of schema changes: if you have a series of monolithic fixtures which set up entire data-states in one go, break them apart and use a compositional approach. For example, if you have a users table, there should be only one fixture that inserts new users. Other fixtures which depend on user data should rely on the user-fixture to take care of that before inserting what they need. Keep the single responsibility principle in mind and compose the data-state for a given test by invoking a chain of discrete, well-defined fixtures. This way, if a column is added to or removed from users, you only have to change a single fixture.
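To make the compositional idea concrete, a minimal Python sketch (the sqlite3 backend and the table/column names are just for illustration, and it assumes the schema itself is created by your migration scripts):

    import sqlite3

    def fixture_users(conn):
        # The only place in the whole suite that inserts users.
        conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")

    def fixture_orders(conn):
        # Compose: orders depend on users, so delegate instead of duplicating.
        fixture_users(conn)
        conn.execute("INSERT INTO orders (id, user_id, total) VALUES (10, 1, 9.99)")

If a column is added to or removed from users, only fixture_users has to change, no matter how many tests build on it.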
Finally, if you really want to automate things, you're getting into generating code. There's one major caveat here which is that it's very difficult to generate test data humans can easily work with while debugging. If you're fine with meaningless gibberish in all your text fields, then it's not a dealbreaker, but it is something to be aware of.
Relational databases expose the structure and metadata in the information schema so you can query it exactly like you do your own tables. You'll need to consult the documentation for your specific database vendor since there are differences among them, but you can for example list the tables in your database with the query SELECT * FROM information_schema.tables. Likewise for information_schema.columns; other tables describe primary and foreign keys and constraints; and so forth. Use that information to build queries which insert well-formed data, and you're off to the races. This is almost certainly more trouble than it's worth.
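If you do go down that road anyway, the skeleton is roughly the sketch below: a Python function that reads column metadata over a DB-API connection and emits one parameterized INSERT template per table. The %s placeholder style and schema name are assumptions, and inventing plausible values to bind is exactly the hard part left out here.

    def insert_templates(conn, schema="public"):
        cur = conn.cursor()
        cur.execute(
            "SELECT table_name, column_name FROM information_schema.columns "
            "WHERE table_schema = %s ORDER BY table_name, ordinal_position",
            (schema,),
        )
        columns_by_table = {}
        for table, column in cur.fetchall():
            columns_by_table.setdefault(table, []).append(column)
        # One "INSERT INTO t (c1, c2, ...) VALUES (%s, %s, ...)" per table.
        return {
            table: "INSERT INTO %s (%s) VALUES (%s)"
                   % (table, ", ".join(cols), ", ".join(["%s"] * len(cols)))
            for table, cols in columns_by_table.items()
        }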
Background
I have a software component that writes data to a postgres database (into several tables) and I want to write an automatic functional test for this component. I already have a host of unit tests in place that check the subcomponents, but I'd like a test that checks the whole system end-to-end.
For each test run, I use a clean database (actually a completely new, this-test-run-only database). The software component is stable in the sense that given the same input, it will always write the same user data to the database.
The database design is relational, such that most tables contain foreign keys. Obviously, I don't want to check the value of these keys, because I don't want to rely on these keys being generated in a predictable manner by postgres.
Assume that there are no issues regarding user rights on the database, connection issues etc. Also disregard development/production disparities.
I currently use a number of select statements to produce a textual "dump" of the database and compare it to a reference dump (ignoring whitespace and so on), but this seems rather clumsy. Also, this doesn't take into account the relationships between the tables. Extending the current approach to deal with this doesn't strike me as maintainable at all, should the database layout ever change.
My software as well as the testing framework is written in C++; the testing scripts are simple bash scripts. I'm open to using any language to achieve this.
Question
How can I automatically verify the database contents in "the database way"?
Even better would be an approach that doesn't rely on postgres as the backend.
pgTAP is a testing framework for PostgreSQL. You can use it to test both the structure and the content of a PostgreSQL database. I've used it on projects that had to meet certain contractual standards for seeded data (data for "lookup" tables like state codes and abbreviations, delivery carriers, user roles, etc.). It has worked well for that purpose.
But I don't yet see a compelling reason to abandon your current method, which is already written and working. Text dumps of single tables are supported by all current SQL dbms, as far as I know. If you move to a different dbms, you'll have to change the name of the dump program and the arguments to it. I can't imagine why you'd need to change the reference file, but I suppose that could happen.
The "database way" is really just to select the data you expect to be in the database, and see if it's really there. That's pretty much what you're doing now, and what pgTAP does with perhaps greater flexibility.
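In Python that select-and-compare can be as plain as the sketch below (psycopg2, with an invented table and expected values; pgTAP expresses the same idea with SQL assertion functions such as results_eq):

    import psycopg2

    def test_users_table(dsn):
        conn = psycopg2.connect(dsn)  # the DSN comes from your test harness
        cur = conn.cursor()
        # Select only the user data, ordered by a stable business key, so the
        # generated surrogate keys never enter the comparison.
        cur.execute("SELECT name, email FROM users ORDER BY email")
        expected = [
            ("Alice", "alice@example.com"),
            ("Bob", "bob@example.com"),
        ]
        assert cur.fetchall() == expected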
To increase maintainability (to reduce duplication), you could generate the INSERT statements from the reference data, or you could generate the reference data from the INSERT statements. I can imagine development environments where that would be a wise thing to do, but I don't know whether yours is one of them.
I have a few challenges I need help on. I need to pull data into my SQL database from arbitrary sources.
The details are: I know the exact structure of my database and the structure will not change. When I do take in new data, it will occur only one time, at the time I set up an instance of my database. I will make many instances of my database and each time it will have to pull data from a different source, and those sources will be structured in different ways.
The data will most likely contain thousands of rows of records. The data source will most likely be held in Excel or Access, more rarely in Word, and even more rarely in a SQL database.
I can assume that most of the core data will be the same, just put in different locations. It will follow a general grouping regardless of how it's held.
Essentially, I'm transferring data from legacy systems to a SQL system, and this must be done for many groups, each of which needs its own private instance of the database.
Any thoughts on how I would do this? How hard would it be to write a program that would do most of this for me?
This is definitely a real-world question. Is it possible to write a program that will do most of this? Not most of this, I think, but perhaps some of it.
For each table in your target system, create a view that displays the source data you expect to be able to insert. Choose column names that make it easy to tell what has to be done; most likely you'll choose column names that match the target columns in your INSERT statement. Save your INSERT statements as stored procedures.
Now, when you are given a new source of data in a new format, you will still have to recreate your views, but once the views are displaying the right data under your chosen column names, you can run your stored procedures without change.
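A rough sketch of that shape, with invented table and column names, driven here through pyodbc (in practice the fixed INSERT would live in a stored procedure as described above):

    import pyodbc

    conn = pyodbc.connect("DSN=target_db")  # placeholder connection string
    cur = conn.cursor()

    # Per-source adapter: recreate this view for each new legacy source so that
    # its output columns line up with the target table's column names.
    cur.execute("""
        CREATE VIEW v_customers_source AS
        SELECT legacy_cust_name  AS customer_name,
               legacy_cust_email AS email
        FROM   staging_legacy_customers
    """)

    # Fixed load step: never changes, because it only reads the view.
    cur.execute("""
        INSERT INTO customers (customer_name, email)
        SELECT customer_name, email FROM v_customers_source
    """)
    conn.commit()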
I have a similar type of project where data is retrieved from Access, .ini files, file modification dates, and MySQL. I scrape this data every morning and basically append it to a fixed SQL Server schema.
I create a DataTable and, as I iterate over a set of directories, insert the data into new rows. Once the DataTable is complete, I perform a bulk copy to append it to the database.
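For what it's worth, a rough Python analogue of that accumulate-then-append pattern (the original uses a .NET DataTable plus SqlBulkCopy; here it's sketched with executemany over the standard sqlite3 module, and the per-directory parsing is a stub):

    import os
    import sqlite3  # stand-in; swap in pyodbc/psycopg2 for a real server

    def parse_directory(path):
        # Placeholder for whatever scraping you do per directory.
        return (os.path.basename(path), os.path.getmtime(path))

    def append_scraped_rows(db_path, directories):
        rows = [parse_directory(d) for d in directories]  # accumulate first
        conn = sqlite3.connect(db_path)
        conn.executemany("INSERT INTO scraped (name, modified) VALUES (?, ?)", rows)
        conn.commit()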
I hope that helps you out a bit. I know my project doesn't cover all the aspects of your question, but I also don't have a DBA to provide views, stored procedures, etc., nor the additional time to devote to such things. Not the most favorable of conditions, but that's the way it is.
HTH...
The best way of solving this problem is with an ETL (Extract-Transform-Load) solution. A good choice is SSIS, which is part of Microsoft's BI suite.
These are the building blocks for consciousness, or the base:
1. A database that organizes thousands of files, similar to DNA
2. A user interface
3. Parts are hidden, preventing a system breach/crash
So I have an interesting problem that's been the fruit of lots of good discussion in my group at work.
We have some scientific software producing SQLite files, and this software is basically a black box. We don't control its table designs, formats, etc. It's entirely conceivable that this black box's output could change, and our design needs to be able to handle that.
The SQLite files are entire databases which our users would like to query across. We see two ways of implementing this: one, create a single master database and a backend in Python that appends tables from each database to it; or two, query across the separate databases' tables and unify the results in Python.
Both methods run into trouble when the black box alters its table structures, say for example renaming a column, splitting up a table, etc. We have to take this into account, and we've discussed translation tables that translate queries of columns from one table format to another.
We're interested in ease of implementation, how well the design handles a change in database/table layout, and speed. Also, a last dimension is how well it would work with existing Python web frameworks (Django doesn't support cross-database queries, and neither does SQLAlchemy, so we know we are in for a lot of programming.)
If you find yourself querying across databases, you should look into consolidating. Cross-database queries are evil.
If your queries are essentially relegated to individual databases, then you may want to stick with multiple databases, as clearly their separation is necessary.
You cannot accommodate arbitrary changes in a database's schema without categorizing and anticipating that change in some way. In the very best case with nontrivial changes, you can sometimes simply ignore new data or tables; in the worst case, your interpretation of the data will entirely break down.
I've encountered similar issues where users need data pivoted out of a normalized schema. The schema does NOT change; however, their required output format demands a fixed number of hierarchical levels. So although the database design accommodates all the changes they want to make, their chosen view of that data cannot be maintained in the face of those changes: it is impossible to maintain the output schema in the face of mere data change (not even schema change). This is not to say that it's not a valid output or input schema, but that there are limits beyond which their chosen schema cannot be used. At that point they have to revise the output contract, and the pivoting program (which CAN anticipate this and generate new columns) then has a place to put the data in the output schema.
My point being: the semantics and interpretation of new columns and new tables (or removal of columns and tables which existing logic may depend on) is nontrivial unless new columns or tables can be anticipated in some way. However, in these cases, there are usually good database designs which eliminate those strategies in the first place:
For instance, a particular database schema can contain any number of tables, all with the same structure (although there is no theoretical reason they could not be consolidated into a single table). A particular kind of table could have a set of columns all similarly named (although this "array" violates normalization principles and could be normalized into a commonkey/code/value schema).
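To make the second example concrete (names invented, sketched with sqlite3): the "array" of similarly named columns versus the normalized key/code/value form.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    # Denormalized "array" of similarly named columns: adding measurement_4
    # later means a schema change.
    conn.execute("CREATE TABLE sample_wide (sample_id INTEGER, "
                 "measurement_1 REAL, measurement_2 REAL, measurement_3 REAL)")
    # Normalized key/code/value form: a new kind of measurement is just a row.
    conn.execute("CREATE TABLE sample_value (sample_id INTEGER, code TEXT, value REAL)")
    conn.execute("INSERT INTO sample_value VALUES (1, 'measurement_4', 0.42)")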
Even in a data warehouse ETL situation, a new column has to be classified as either a fact or a dimensional attribute, and then, if it is a dimensional attribute, it has to be decided which dimension table it is best assigned to. This could somewhat be automated for facts (obvious candidates would be scalars like decimal/numeric) by inspecting the metadata for unmapped columns, altering the DW table (yikes) and then loading appropriately. But for dimensions, I would be very leery of automating something like this.
So, in summary, I would say that schema changes in a good normalized database design are the least likely to be able to be accommodated because: 1) the database design already anticipates and accommodates a good deal of change and flexibility and 2) schema changes to such a database design are unlikely to be able to be anticipated very easily. Conversely, schema changes in a poorly normalized database design are actually more easy to anticipate as shortcomings in the database design are more visible.
So, my question to you is: How well-designed is the database you are working from?
You say that you know that you are in for a lot of programming...
I'm not sure about that. I would go for a quick-and-dirty solution rather than a 'generic' solution, because generic solutions like the entity-attribute-value model often have poor performance. Don't do client-side joining (unifying the results) inside your Python code, because that is very slow; use SQL for joining, it is designed for that purpose. Users can also make their own reports with all kinds of reporting tools that generate SQL statements. You don't have to do everything in your app; just start with solving 80% of the problems, not 100%.
If something breaks because something inside the black box changes, you can define views for backward compatibility that keep your app functioning.
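Both points can be sketched with nothing but Python's built-in sqlite3; the file names, table names, and the renamed column below are invented:

    import sqlite3

    conn = sqlite3.connect("run_001.sqlite")
    conn.execute("ATTACH DATABASE 'run_002.sqlite' AS run2")

    # Join across the two files in SQL instead of merging result sets in Python.
    rows = conn.execute("""
        SELECT a.sample_id, a.value, b.value
        FROM   measurements      AS a
        JOIN   run2.measurements AS b ON a.sample_id = b.sample_id
    """).fetchall()

    # If the black box renames a column (say value -> reading), a view keeps
    # the rest of the application working against the old name.
    conn.execute("""
        CREATE VIEW measurements_compat AS
        SELECT sample_id, reading AS value FROM measurements_v2
    """)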
Maybe the scientific software will add a lot of new features, and maybe it will change its data model because of those new features? That is possible, but then you will have to change your application anyway to benefit from those new features.
It sounds to me as if your problem isn't really about MySQL or SQLite. It's about the sharing of data, and the contract that needs to exist between the supplier of data and the user of the same data.
To the extent that databases exist so that data can be shared, that contract is fundamental to everything about databases. When databases were first being built, and database theory was first being solidified, in the 1960s and 1970s, the sharing of data was the central purpose in building databases. Today, databases are frequently used where files would have served equally well. Your situation may be a case in point.
In your situation, you have a beggar's contract with your data suppliers. They can change the format of the data, and maybe even the semantics, and all you can do is suck it up and deal with it. This situation is by no means uncommon.
I don't know the specifics of your situation, so what follows could be way off target.
If it were up to me, I would want to build a database that was as generic, as flexible, and as stable as possible, without losing the essential features of structured and managed data. Maybe some design like a star schema would make sense, but I might adopt a very different design if I were actually in your shoes.
This leaves the problem of extracting the data from the databases you are given, transforming the data into the stable format the central database supports, and loading it into the central database. You are right in guessing that this involves a lot of programming. This process, known as "ETL" in data warehousing texts, is not the simplest of programming challenges.
At least ETL collects all the hard problems in one place. Once you have the data loaded into a database that's built for your needs, and not for the needs of your suppliers, turning the data into valuable information should be relatively easy, at least at the programming or SQL level. There are even OLAP tools that make using the data as simple as a video game. There are challenges at that level, but they aren't the same kind of challenges I'm talking about here.
Read up on data warehousing, and especially data marts. The description may seem daunting to you at first, but it can be scaled down to meet your needs.