How to convert a list of attribute-value pairs into a flat table whose columns are attributes - sql

I'm trying to convert a CSV file containing 3 columns (ATTRIBUTE_NAME,ATTRIBUTE_VALUE,ID) into a flat table in which each row is (ID,Attribute1,Attribute2,Attribute3,...). Samples of both tables are provided at the end.
Either Python, Perl or SQL is fine. Thank you very much and I really appreciate your time and efforts!
In fact, my question is very similar to this post, except that in my case the number of attributes is pretty big (~300) and not consistent across IDs, so hard-coding each attribute is not a practical solution.
For me, the challenging/difficult parts are:
There are approximately 270 million lines of input; the total size of the input table is about 60 GB.
Some string values contain a comma (,) inside them, and the whole string is enclosed in double quotes (") to signal that to the reader, for example "JPMORGAN CHASE BANK, NA, TX" in ID=53. (A short parsing check for this case is shown after the sample tables below.)
The set of attributes is not the same across IDs. For example, there are 8 attributes overall, but ID=53, 17 and 23 have only 7, 6 and 5 respectively. ID=17 does not have the attributes string_country and string_address, so the output should have blank/nothing after the comma.
The input attribute-value table looks like this. In this sample input and output we have 3 IDs, whose numbers of attributes differ depending on whether we could obtain those attributes from the server.
ATTRIBUTE_NAME,ATTRIBUTE_VALUE,ID
num_integer,100,53
string_country,US (United States),53
string_address,FORT WORTH,53
num_double2,546.0,53
string_acc,My BankAcc,53
string_award,SILVER,53
string_bankname,"JPMORGAN CHASE BANK, NA, TX",53
num_integer,61,17
num_double,34.32,17
num_double2,200.541,17
string_acc,Your BankAcc,17
string_award,GOLD,17
string_bankname,CHASE BANK,17
num_integer,36,23
num_double,78.0,23
string_country,CA (Canada),23
string_address,VAN COUVER,23
string_acc,Her BankAcc,23
The output table should look like this. (The order of attributes in the columns is not fixed. It can be sorted alphabetically or by order-of-appearance.)
ID,num_integer,num_double,string_country,string_address,num_double2,string_acc,string_award,string_bankname
53,100,,US (United States),FORT WORTH,546.0,My BankAcc,SILVER,"JPMORGAN CHASE BANK, NA, TX"
17,61,34.32,,,200.541,Your BankAcc,GOLD,CHASE BANK
23,36,78.0,CA (Canada),VAN COUVER,,Her BankAcc,,
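As a quick check of the quoted-comma case mentioned above: the Text::CSV module used in the answers below handles such fields transparently. This is just a minimal illustrative sketch, not part of any answer:
use strict;
use warnings;
use Text::CSV;
# Parse one line whose middle field contains embedded commas.
my $csv  = Text::CSV->new({ binary => 1 });
my $line = 'string_bankname,"JPMORGAN CHASE BANK, NA, TX",53';
$csv->parse($line) or die "Parse failed: " . $csv->error_diag;
my @fields = $csv->fields;   # ('string_bankname', 'JPMORGAN CHASE BANK, NA, TX', '53')
print scalar(@fields), " fields; value: $fields[1]\n";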

This program will do as you ask. It expects the name of the input file as a parameter on the command line.
Update: Looking more carefully at the data, I see that not all of the data fields are available for every ID. That makes things more complex if the fields are to be kept in the same order as they appear in the file.
This program works by scanning the file and accumulating all the data for output into the hash %data. At the same time it builds a hash %headers that records the position at which each header appears in the data for each ID value.
Once the file has been scanned, the collected headers are sorted: for each pair of headers, the first ID that has information on both of them is found. The order of that pair within the complete set must be the same as the order in which they appeared in the data for that ID, so it's just a matter of comparing the two position values using <=>.
Once a sorted set of headers has been created, the %data hash is dumped, accessing the complete list of values for each ID using a hash slice.
Update 2: Now that I realise the sheer size of your data I can see that my second attempt was also flawed, as it tried to read all of the information into memory before outputting it. That isn't going to work unless you have a monster machine with about 1TB of memory!
You may get some mileage from this version. It scans through the file twice: the first time to read the data so that the full set of header names can be created and ordered, then again to read the data for each ID and output it.
Let me know if it's not working for you, as there are still things I can do to make it more memory-efficient.
use strict;
use warnings;
use 5.010;
use Text::CSV;
use Fcntl 'SEEK_SET';
my $csv = Text::CSV->new;
open my $fh, '<', $ARGV[0] or die qq{Unable to open "$ARGV[0]" for input: $!};
my %headers = ();
my $last_id;
my $header_num;
my $num_ids;
while (my $row = $csv->getline($fh)) {
next if $. == 1;
my ($key, $val, $id) = @$row;
unless (defined $last_id and $id eq $last_id) {
++$num_ids;
$header_num = 0;
$last_id = $id;
print STDERR "Processing ID $id\n";
}
$headers{$key}[$num_ids-1] = ++$header_num;
}
sub by_position {
for my $id (0 .. $num_ids-1) {
my ($posa, $posb) = map $headers{$_}[$id], our $a, our $b;
return $posa <=> $posb if $posa and $posb;
}
0;
}
my @headers = sort by_position keys %headers;
%headers = ();
print STDERR "List of headers complete\n";
seek $fh, 0, SEEK_SET;
$. = 0;
$csv->combine('ID', @headers);
print $csv->string, "\n";
my %data = ();
$last_id = undef;
while (1) {
my $row = $csv->getline($fh);
next if $. == 1;
if (not defined $row or defined $last_id and $last_id ne $row->[2]) {
$csv->combine($last_id, @data{@headers});
print $csv->string, "\n";
%data = ();
}
last unless defined $row;
my ($key, $val, $id) = @$row;
$data{$key} = $val;
$last_id = $id;
}
Output:
ID,num_integer,num_double,string_country,string_address,num_double2,string_acc,string_award,string_bankname
53,100,,"US (United States)","FORT WORTH",546.0,"My BankAcc",SILVER,"JPMORGAN CHASE BANK, NA, TX"
17,61,34.32,,,200.541,"Your BankAcc",GOLD,"CHASE BANK"
23,36,78.0,"CA (Canada)","VAN COUVER",,"Her BankAcc",,

Use Text::CSV from CPAN:
#!/usr/bin/env perl
use strict;
use warnings;
# --------------------------------------
use charnames qw( :full :short );
use English qw( -no_match_vars ); # Avoids regex performance penalty
use Text::CSV;
my $col_csv = Text::CSV->new();
my $id_attr_csv = Text::CSV->new({ eol=>"\n", });
$col_csv->column_names( $col_csv->getline( *DATA ));
while( my $row = $col_csv->getline_hr( *DATA )){
# do all the keys but skip if ID
for my $attribute ( keys %$row ){
next if $attribute eq 'ID';
$id_attr_csv->print( *STDOUT, [ $attribute, $row->{$attribute}, $row->{ID}, ]);
}
}
__DATA__
ID,num_integer,num_double,string_country,string_address,num_double2,string_acc,string_award,string_bankname
53,100,,US (United States),FORT WORTH,546.0,My BankAcc,SILVER,"JPMORGAN CHASE BANK, NA, TX"
17,61,34.32,,,200.541,Your BankAcc,GOLD,CHASE BANK
23,36,78.0,CA (Canada),VAN COUVER,,Her BankAcc,,

Related

How do I loop through each DB field to see if the range is correct

I have this response in soapUI:
<pointsCriteria>
<calculatorLabel>Have you registered for inContact, signed up for marketing news from FNB/RMB Private Bank, updated your contact details and chosen to receive your statements</calculatorLabel>
<description>Be registered for inContact, allow us to communicate with you (i.e. update your marketing consent to 'Yes'), receive your statements via email and keep your contact information up to date</description>
<grades>
<points>0</points>
<value>No</value>
</grades>
<grades>
<points>1000</points>
<value>Yes</value>
</grades>
<label>Marketing consent given and Online Contact details updated in last 12 months</label>
<name>c21_mrktng_cnsnt_cntct_cmb_point</name>
</pointsCriteria>
There are many, many pointsCriteria elements, and I use the XQuery below to give me the DB value and the range of what that field is meant to be:
<return>
{
for $x in //pointsCriteria
return <DBRange>
<db>{data($x/name/text())}</db>
<points>{data($x//points/text())}</points>
</DBRange>
}
</return>
And I get the response below:
<return><DBRange><db>c21_mrktng_cnsnt_cntct_cmb_point</db><points>0 1000</points></DBRange>
That last bit sits in a property transfer. I need SQL to bring back all rows where that DB field is not in that points range (the field can only be 0 or 1000 in this case). My problem is I don't know how to loop through each DBRange element in this manner. Please help.
I'm not sure that I really understand your question; however, I think that you want to query your DB against a specific table, using the column name defined in the <db> field of your XML and the values defined in the <points> field of the same XML.
So you can try using a Groovy TestStep: first parse your XML to get back your column name and your points. Since the point values are separated by a blank space, you can use split(" ") to get a list and then use each() to iterate over the points in that list. Then, using groovy.sql.Sql, you can perform the queries against your DB.
One more thing: you need to put the JDBC drivers for your DB vendor in $SOAPUI_HOME/bin/ext and then restart SoapUI so that it can load the necessary driver classes.
So the following code approach can achieve your goal:
import groovy.sql.Sql
import groovy.util.XmlSlurper
// soapui groovy testStep requires that first register your
// db vendor drivers, as example I use oracle drivers...
com.eviware.soapui.support.GroovyUtils.registerJdbcDriver( "oracle.jdbc.driver.OracleDriver")
// connection properties db (example for oracle data base)
def db = [
url : 'jdbc:oracle:thin:@db_host:db_port/db_name',
username : 'yourUser',
password : '********',
driver : 'oracle.jdbc.driver.OracleDriver'
]
// create the db instance
def sql = Sql.newInstance("${db.url}", "${db.username}", "${db.password}","${db.driver}")
def result = '''<return>
<DBRange>
<db>c21_mrktng_cnsnt_cntct_cmb_point</db>
<points>0 1000</points>
</DBRange>
</return>'''
def resXml = new XmlSlurper().parseText(result)
// get the field
def field = resXml.DBRange.db.text()
// get the points
def points = resXml.DBRange.points.text()
// points are separated by blank space,
// so split to get an array with the points
def pointList = points.split(" ")
// for each point make your query
pointList.each {
def sqlResult = sql.rows "select * from your_table where ${field} = ?",[it]
log.info sqlResult
}
sql.close();
Hope this helps,
Thanks again for your help @albciff. I had to adapt this into a multidimensional array (I renamed field to column, and result is a large return from the XQuery above):
def resXml = new XmlSlurper().parseText(result)
//get the columns and points ranges
def Column = resXml.DBRange.db*.text()
def Points = resXml.DBRange.points*.text()
//sorting it all out into a multidimensional array (index per index)
count = 0
bigList = Column.collect
{
[it, Points[count++]]
}
//iterating through the array
bigList.each
{//creating two smaller lists and making it readable for sql part later
def column = it[0]
def points = it[1]
//further splitting the points to test each
pointList = points.split(" ")
pointList.each
{//test each points range per column
def sqlResult = sql.rows "select * from my_table where ${column} <> ?",[it]
log.info sqlResult
}
}
sql.close();
return;

Efficient semantic triples with Perl, without external db server

I have several semantic triples. Some examples:
Porky,species,pig // Porky's species is "pig"
Bob,sister,May // Bob's sister is May
May,brother,Sam // May's brother is Sam
Sam,wife,Jane // Sam's wife is Jane
... and so on ...
I store each triple in 6 different hashes. Example:
$ijk{Porky}{species}{pig} = 1;
$ikj{Porky}{pig}{species} = 1;
$jik{species}{Porky}{pig} = 1;
$jki{species}{pig}{Porky} = 1;
$kij{pig}{Porky}{species} = 1;
$kji{pig}{species}{Porky} = 1;
This lets me efficiently ask questions like:
What species is Porky (keys %{$ijk{Porky}{species}})
List all pigs (keys %{$jki{species}{pig}})
What information do I have on Porky? (keys %{$ijk{Porky}})
List all species (keys %{$jik{species}})
and so on. Note that none of the examples above go through a list one element at a time. They all take me "instantly" to my answer. In other words, each answer is a hash value. Of course, the answer itself may be a list, but I don't traverse any lists to get to that answer.
However, defining 6 separate hashes seems really inefficient. Is there an easier way to do this without using an external database engine (for this question, SQLite3 counts as an external database engine)? Or have I just replicated a small subset of SQL into Perl?
EDIT: I guess what I'm trying to say: I love associative arrays, but they seem to be the wrong data structure for this job. What's the right data structure here, and what Perl module implements it?
Have you looked at using RDF::Trine? It has DBI-backed stores, but it also has in-memory stores, and can parse/serialize in RDF/XML, Turtle, N-Triples, etc if you need persistence.
Example:
use strict;
use warnings;
use RDF::Trine qw(statement literal);
my $ns = RDF::Trine::Namespace->new("http://example.com/");
my $data = RDF::Trine::Model->new;
$data->add_statement(statement $ns->Peppa, $ns->species, $ns->Pig);
$data->add_statement(statement $ns->Peppa, $ns->name, literal 'Peppa');
$data->add_statement(statement $ns->George, $ns->species, $ns->Pig);
$data->add_statement(statement $ns->George, $ns->name, literal 'George');
$data->add_statement(statement $ns->Suzy, $ns->species, $ns->Sheep);
$data->add_statement(statement $ns->Suzy, $ns->name, literal 'Suzy');
print "Here are the pigs...\n";
for my $pig ($data->subjects($ns->species, $ns->Pig)) {
my ($name) = $data->objects($pig, $ns->name);
print $name->literal_value, "\n";
}
print "Let's dump all the data...\n";
my $ser = RDF::Trine::Serializer::Turtle->new;
print $ser->serialize_model_to_string($data), "\n";
RDF::Trine is quite a big framework, so it has a bit of a compile-time penalty. At run-time it's relatively fast though.
RDF::Trine can be combined with RDF::Query if you wish to query your data using SPARQL.
use RDF::Query;
my $q = RDF::Query->new('
PREFIX : <http://example.com/>
SELECT ?name
WHERE {
?thing :species :Pig ;
:name ?name .
}
');
my $r = $q->execute($data);
print "Here are the pigs...\n";
while (my $row = $r->next) {
print $row->{name}->literal_value, "\n";
}
RDF::Query supports both SPARQL 1.0 and SPARQL 1.1. RDF::Trine and RDF::Query are both written by Gregory Williams who was a member of the SPARQL 1.1 Working Group. RDF::Query was one of the first implementations to achieve 100% on the SPARQL 1.1 Query test suite. (It may have even been the first?)
"Efficient" is not really the right word here since you're worried about improving speed in exchange for memory, which is generally how it works.
The only real alternative is to store the triples as distinct values, and then just have three "indexes" into them:
$row = [ "Porky", "species", "pig" ];
push @{$subject_index{Porky}}, $row;
push @{$relation_index{species}}, $row;
push @{$target_index{pig}}, $row;
To do something like "list all pigs", you'd have to find the intersection of $relation_index{species} and $target_index{pig}. Which you can do manually, or with your favorite set implementation.
Then wrap it all up in a nice object interface, and you've basically implemented INNER JOIN. :)
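For instance, here is a minimal sketch of that intersection for "list all pigs", assuming the three index hashes above are already populated (intersect_rows is a hypothetical helper, not from the original answer):
# Hypothetical helper: keep only the triple refs present in both lists.
sub intersect_rows {
    my ($a_rows, $b_rows) = @_;
    my %seen = map { $_ => 1 } @$a_rows;    # keyed by reference address
    return grep { $seen{$_} } @$b_rows;
}
# Rows that are species-related AND point at pig, then take each subject.
my @pig_rows = intersect_rows($relation_index{species}, $target_index{pig});
my @pigs     = map { $_->[0] } @pig_rows;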
A single hash of hashes should be sufficient:
use strict;
use warnings;
use List::MoreUtils qw(uniq);
use Data::Dump qw(dump);
my %data;
while (<DATA>) {
chomp;
my ($name, $type, $value) = split ',';
$data{$name}{$type} = $value;
}
# What species is Porky?
print "Porky's species is: $data{Porky}{species}\n";
# List all pigs
print "All pigs: " . join(',', grep {defined $data{$_}{species} && $data{$_}{species} eq 'pig'} keys %data) . "\n";
# What information do I have on Porky?
print "Info on Porky: " . dump($data{Porky}) . "\n";
# List all species
print "All species: " . join(',', uniq grep defined, map $_->{species}, values %data) . "\n";
__DATA__
Porky,species,pig
Bob,sister,May
May,brother,Sam
Sam,wife,Jane
Outputs:
Porky's species is: pig
All pigs: Porky
Info on Porky: { species => "pig" }
All species: pig
I think you are mixing categories and values, such as name=Porky, and species=pig.
Given your example, I'd go with something like this:
my %hash;
$hash{name}{Porky}{species}{pig} = 1;
$hash{species}{pig}{name}{Porky} = 1;
$hash{name}{Bob}{sister}{May} = 1;
$hash{sister}{May}{name}{Bob} = 1;
$hash{name}{May}{brother}{Sam} = 1;
$hash{brother}{Sam}{name}{May} = 1;
$hash{name}{Sam}{wife}{Jane} = 1;
$hash{wife}{Jane}{name}{Sam} = 1;
Yes, this has some apparent redundancy, since we can easily distinguish most names from other values. But the 3rd-level hash key is also a top level hash key, which can be used to get more information on some element.
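As a small sketch of what that buys you, using the %hash populated above, each answer can be fed back in as a top-level key to pull further facts:
# Who are the pigs? (third-level keys under species => pig => name)
my @pigs = keys %{ $hash{species}{pig}{name} };   # ('Porky')
# Each answer is itself a top-level key, so we can keep digging.
for my $pig (@pigs) {
    my %facts = %{ $hash{name}{$pig} };           # e.g. { species => { pig => 1 } }
    print "$pig: ", join(', ', keys %facts), "\n";
}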
Or have I just replicated a small subset of SQL into Perl?
It's pretty easy to start using actual SQL, using an SQLite in memory database.
#!/usr/bin/perl
use warnings; use strict;
use DBI;
my $dbh = DBI->connect("dbi:SQLite::memory:", "", "", {
sqlite_use_immediate_transaction => 0,
RaiseError => 1,
});
$dbh->do("CREATE TABLE triple(subject,predicate,object)");
$dbh->do("CREATE INDEX 'triple(subject)' ON triple(subject)");
$dbh->do("CREATE INDEX 'triple(predicate)' ON triple(predicate)");
$dbh->do("CREATE INDEX 'triple(object)' ON triple(object)");
for ([qw<Porky species pig>],
[qw<Porky color pink>],
[qw<Sylvester species cat>]) {
$dbh->do("INSERT INTO triple(subject,predicate,object) VALUES (?, ?, ?)", {}, #$_);
}
use JSON;
print to_json( $dbh->selectall_arrayref('SELECT * from triple WHERE predicate="species"', {Slice => {}}) );
Gives:
[{"object":"pig","predicate":"species","subject":"Porky"},
{"object":"cat","predicate":"species","subject":"Sylvester"}]
You can then query and index the data in a familiar manner. Very scalable as well.
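A minimal sketch of a point query against that table, reusing the $dbh from above; with the indexes in place this is a direct lookup rather than a full scan:
# What species is Porky?
my ($species) = $dbh->selectrow_array(
    'SELECT object FROM triple WHERE subject = ? AND predicate = ?',
    undef, 'Porky', 'species',
);
print "Porky is a $species\n";   # Porky is a pig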

Storing a hash in batches

I'm trying to retrieve a bunch of rows using SQL (as a test, let's say 1000 rows in each iteration, up to a million rows) and store them in a file (a .store file in my case, but it could be a text file - doesn't matter) in batches, to avoid an out-of-memory issue. I am using SQL within a Perl script.
I will appreciate it if anyone can share an example.
An example would be like this:
sub query{
$test = "select * from employees";
return $test;
}
# later in the code -
my $temp;
my $dataset=DBUtils::make_database_iterator({query=> test($temp)});
}
store $dataset, $result_file;
return;
The best I can offer you with the limited amount of information you have given is this, which uses the SELECT statement's LIMIT clause to retrieve a limited number of rows from the table.
Obviously you will have to provide actual values for the DSN, the name of the table, and the store_block subroutine yourself.
use strict;
use warnings;
use autodie;
use DBI;
my $blocksize = 1000;
my ($dsn, $user, $pass) = (...);
my $dbh = DBI->connect($dsn, $user, $pass);
my $sth = $dbh->prepare('SELECT * FROM table LIMIT ? OFFSET ?') or die $DBI::errstr;
open my $fh, '>', 'test.store';
for (my $n = 0; $sth->execute($blocksize, $n * $blocksize); ++$n) {
my $block = $sth->fetchall_arrayref;
last unless @$block;
store_block($block, $fh);
}
close $fh;
sub store_block {
my ($block, $fh) = @_;
...
}
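Since the question mentions a .store file, one possible (purely illustrative) store_block body uses the Storable module, appending each batch to the already-open handle as its own frozen image; read the batches back later with repeated fd_retrieve calls:
use Storable qw(nstore_fd);
# Hypothetical store_block: one Storable image per batch on the same handle.
sub store_block {
    my ($block, $fh) = @_;
    binmode $fh;            # Storable data is binary
    nstore_fd($block, $fh);
}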
You say you want to work in batches to avoid an out of memory error. This suggests you're doing something like this...
my @all_the_rows = query_the_database($sql);
store_stuff(@all_the_rows);
You want to avoid doing that as much as possible, for exactly the reason you gave: if the dataset grows large, you might run out of memory.
Instead, you can read one row at a time and write one row at a time using DBI.
use strict;
use warnings;
use DBI;
# The file you're writing results to
my $file = '...';
# Connect to the database using DBI
my $dbh = DBI->connect(
...however you do that...,
{RaiseError => 1} # Turn on exceptions
);
# Prepare and execute the statement
my $sth = $dbh->prepare("SELECT * FROM employees");
$sth->execute;
# Get a row, write a row.
while( my $row = $sth->fetchrow_arrayref ) {
append_row_to_storage($row, $file);
}
I leave writing append_row_to_storage up to you.
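For completeness, here is a hypothetical append_row_to_storage that writes each row as one tab-separated line; it opens the file in append mode on every call to keep the sketch simple, at some cost in speed:
# Hypothetical example only: one tab-separated line per row.
sub append_row_to_storage {
    my ($row, $file) = @_;
    open my $out, '>>', $file or die "Can't append to $file: $!";
    print $out join("\t", map { defined $_ ? $_ : '' } @$row), "\n";
    close $out;
}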

How can I generate schema from text file? (Hadoop-Pig)

Somehow I got filename.log, which looks, for example, like this (tab separated):
Name:Peter Age:18
Name:Tom Age:25
Name:Jason Age:35
Because the value of the key column may differ, I cannot define a schema when I load the text like this:
a = load 'filename.log' as (Name:chararray,Age:int);
Nor do I want to call columns by position like this:
b = foreach a generate $0,$1;
What I want to do, using only that filename.log, is to make it possible to call each value by key, for example:
a = load 'filename.log' using PigStorage('\t');
b = group a by Name;
c = foreach b generate group, COUNT(b);
dump c;
For that purpose, I wrote a Java UDF which separates each key:value and gets the value for every field in the tuple, as below:
public class SPLITALLGETCOL2 extends EvalFunc<Tuple>{
@Override
public Tuple exec(Tuple input){
TupleFactory mTupleFactory = TupleFactory.getInstance();
ArrayList<String> mProtoTuple = new ArrayList<String>();
Tuple output;
String target=input.toString().substring(1, input.toString().length()-1);
String[] tokenized=target.split(",");
try{
for(int i=0;i<tokenized.length;i++){
mProtoTuple.add(tokenized[i].split(":")[1]);
}
output = mTupleFactory.newTupleNoCopy(mProtoTuple);
return output;
}catch(Exception e){
output = mTupleFactory.newTupleNoCopy(mProtoTuple);
return output;
}
}
}
How should I alter this method to get what I want? or How should I write other UDF to get there?
Whatever you do, don't use a tuple to store the output. Tuples are intended to store a fixed number of fields, where you know what every field contains. Since you don't know that the keys will be in Name,Age form (or even exist, or that there won't be more) you should use a bag. Bags are unordered sets of tuples. They can have any number of tuples in them as long as the tuples have the same schema. These are all valid bags for the schema B: {T:(key:chararray, value:chararray)}:
{(Name,Foo),(Age,Bar)}
{(Age,25),(Name,Jim)}
{(Name,Bob)}
{(Age,30),(Name,Roger),(Hair Color,Brown)}
{(Hair Color,),(Name,Victor)} -- Note the Null value for Hair Color
However, it sounds like you really want a map:
myudf.py
@outputSchema('M:map[]')
def mapize(the_input):
out = {}
for kv in the_input.split(' '):
k, v = kv.split(':')
out[k] = v
return out
myscript.pig
register '../myudf.py' using jython as myudf ;
A = LOAD 'filename.log' AS (total:chararray) ;
B = FOREACH A GENERATE myudf.mapize(total) ;
-- Sample usage, grouping by the name key.
C = GROUP B BY M#'Name' ;
Using the # operator you can pull out all values from the map using the key you give. You can read more about maps here.

Programmatically extracting relationships between tables in an RDBMS w/out foreign keys?

I'm reverse engineering the relationships between a medium-sized number of tables (50+) in an Oracle database where there are no foreign keys defined between the tables. I can count (somewhat) on being able to match column names across tables. For example, column name "SomeDescriptiveName" is probably the same across the set of tables.
What I would like to be able to do is to find a better way of extracting some set of relationships based on those matching column names than manually going through the tables one by one. I could do something with Java DatabaseMetaData methods, but it seems like this is one of those tasks that someone has probably had to script before. Maybe extract the column names with Perl or some other scripting language, use each column name as a hash key, and add tables to an array pointed to by the hash key?
Anyone have any tips or suggestions that might make this simpler or provide a good starting point? It's an ugly need; if foreign keys had already been defined, understanding the relationships would have been much easier.
Thanks.
You pretty much wrote the answer in your question.
my %column_tables;
foreach my $table (@tables) {
foreach my $column ($table->columns) {
push @{$column_tables{$column}}, $table;
}
}
print "Likely foreign key relationships:\n";
foreach my $column (keys %column_tables) {
my @tables = @{$column_tables{$column}};
next
if @tables < 2;
print $column, ': ';
foreach my $table (@tables) {
print $table->name, ' ';
}
print "\n";
}
My strategy would be to use the Oracle system catalog to find columns that are the same in column name and data type but different in table name, and to check which of the columns is part of a table's primary or unique key.
Here's a query that may be close to doing this, but I don't have an Oracle instance handy to test it:
SELECT col1.table_name || '.' || col1.column_name || ' -> '
|| col2.table_name || '.' || col2.column_name
FROM all_tab_columns col1
JOIN all_tab_columns col2
ON (col1.column_name = col2.column_name
AND col1.data_type = col2.data_type)
JOIN all_cons_columns cc
ON (col2.table_name = cc.table_name
AND col2.column_name = cc.column_name)
JOIN all_constraints con
ON (cc.constraint_name = con.constraint_name
AND cc.table_name = con.table_name
AND con.constraint_type IN ('P', 'U'))
WHERE col1.table_name != col2.table_name;
Of course this won't catch columns that are related but have different names.
You can use a combination of three (or four) approaches, depending on how obfuscated the schema is:
dynamic methods
observation:
enable tracing in the RDBMS (or ODBC layer), then
perform various activities in the application (ideally record creation), then
identify which tables were altered in tight sequence, and with what column-value pairs
values occurring in more than one column during the sequence interval may indicate a foreign key relationship
static methods (just analyzing existing data, no need to have a running application)
nomenclature: try to infer relationships from column names
statistical: look at minimum/maximum (and possibly the average) of unique values in all numerical columns, and attempt to perform a match (a rough sketch of this follows after this list)
code reverse engineering: your last resort (unless dealing with scripts) - not for the faint of heart :)
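As a rough sketch of the statistical idea above, assuming a connected DBI handle and a hypothetical map of table name => numeric column names built from the catalog (none of this is from the original answer):
use strict;
use warnings;
# Flag column pairs whose MIN/MAX ranges coincide as candidate relationships.
# $columns is a hash ref: table name => array ref of numeric column names.
sub candidate_links_by_range {
    my ($dbh, $columns) = @_;
    my %stats;
    for my $table (keys %$columns) {
        for my $col (@{ $columns->{$table} }) {
            my ($min, $max) = $dbh->selectrow_array(
                "SELECT MIN($col), MAX($col) FROM $table");
            $stats{"$table.$col"} = [ $min, $max ];
        }
    }
    my @names = sort keys %stats;
    my @pairs;
    for my $i (0 .. $#names) {
        for my $j ($i + 1 .. $#names) {
            my ($ra, $rb) = @stats{ $names[$i], $names[$j] };
            next unless defined $ra->[0] && defined $rb->[0];
            push @pairs, "$names[$i] ~ $names[$j]"
                if $ra->[0] == $rb->[0] && $ra->[1] == $rb->[1];
        }
    }
    return @pairs;
}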
This is an interesting question. The approach I took was a brute force search for columns that matched types and values for a small sample set. You'll probably have to tweak the heuristics to provide good results for your schema. I ran this on a schema that didn't use auto-incremented keys and it worked well. The code is written for MySQL, but it's very easy to adapt to Oracle.
use strict;
use warnings;
use DBI;
my $dbh = DBI->connect("dbi:mysql:host=localhost;database=SCHEMA", "USER", "PASS");
my @list;
foreach my $table (show_tables()) {
foreach my $column (show_columns($table)) {
push @list, { table => $table, column => $column };
}
}
foreach my $m (@list) {
my @match;
foreach my $f (@list) {
if (($m->{table} ne $f->{table}) &&
($m->{column}{type} eq $f->{column}{type}) &&
(samples_found($m->{table}, $m->{column}{name}, $f->{column}{samples})))
{
# For better confidence, add other heuristics such as
# joining the tables and verifying that every value
# appears in the master. Also it may be useful to exclude
# columns in large tables without an index although that
# heuristic may fail for composite keys.
#
# Heuristics such as columns having the same name are too
# brittle for many of the schemas I've worked with. It may
# be too much to even require identical types.
push @match, "$f->{table}.$f->{column}{name}";
}
}
if (@match) {
print "$m->{table}.$m->{column}{name} $m->{column}{type} <-- @match\n";
}
}
$dbh->disconnect();
exit;
sub show_tables {
my $result = query("show tables");
return ($result) ? @$result : ();
}
sub show_columns {
my ($table) = @_;
my $result = query("desc $table");
my @columns;
if ($result) {
@columns = map {
{ name => $_->[0],
type => $_->[1],
samples => query("select distinct $_->[0] from $table limit 10") }
} @$result;
}
return @columns;
}
sub samples_found {
my ($table, $column, $samples) = @_;
foreach my $v (@$samples) {
my $result = query("select count(1) from $table where $column=?", $v);
if (!$result || $result->[0] == 0) {
return 0;
}
}
return 1;
}
sub query {
my ($sql, @binding) = @_;
my $result = $dbh->selectall_arrayref($sql, undef, @binding);
if ($result && $result->[0] && @{$result->[0]} == 1) {
foreach my $row (@$result) {
$row = $row->[0];
}
}
return $result;
}