store Perl hash data in a database - sql
I have written Perl code that parses a text file and uses a hash to tally up the number of times a US State abbreviation appears in each file/record. I end up with something like this.
File: 521
OH => 4
PA => 1
IN => 2
TX => 3
IL => 7
I am struggling to find a way to store such hash results in an SQL database (I am using MariaDB). The structure of the data itself varies: one file may contain only a few states, while the next may contain a completely different set. I am even having trouble conceptualizing the table structure. What would be the best way to store data like this in a database?
There are many possible ways to store the data.
For the sake of simplicity, see if the following approach is an acceptable solution for your case. It is based on a single table with two indexes, one on the id column and one on the state column.
CREATE TABLE IF NOT EXISTS `state_count` (
`id` INT NOT NULL,
`state` VARCHAR(2) NOT NULL,
`count` INT NOT NULL,
INDEX `id` (`id`),
INDEX `state` (`state`)
);
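One possible refinement, offered as an editorial suggestion rather than part of the original answer: since each (id, state) pair should appear at most once per file, a composite primary key enforces that and covers lookups by id at the same time:

CREATE TABLE IF NOT EXISTS `state_count` (
`id` INT NOT NULL,
`state` VARCHAR(2) NOT NULL,
`count` INT NOT NULL,
PRIMARY KEY (`id`, `state`),
INDEX `state` (`state`)
);

The separate index on `state` keeps lookups by state alone fast. The statements below stick with the original two-index definition.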
INSERT INTO `state_count`
(`id`,`state`,`count`)
VALUES
(521,'OH',4),
(521,'PA',1),
(521,'IN',2),
(521,'TX',3),
(521,'IL',7);
Sample SQL SELECT output
MySQL [dbs0897329]> SELECT * FROM state_count;
+-----+-------+-------+
| id  | state | count |
+-----+-------+-------+
| 521 | OH    |     4 |
| 521 | PA    |     1 |
| 521 | IN    |     2 |
| 521 | TX    |     3 |
| 521 | IL    |     7 |
+-----+-------+-------+
5 rows in set (0.000 sec)

MySQL [dbs0897329]> SELECT * FROM state_count WHERE state='OH';
+-----+-------+-------+
| id  | state | count |
+-----+-------+-------+
| 521 | OH    |     4 |
+-----+-------+-------+
1 row in set (0.000 sec)

MySQL [dbs0897329]> SELECT * FROM state_count WHERE state IN ('OH','TX');
+-----+-------+-------+
| id  | state | count |
+-----+-------+-------+
| 521 | OH    |     4 |
| 521 | TX    |     3 |
+-----+-------+-------+
2 rows in set (0.001 sec)
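One convenience of this single-table layout: once several files have been loaded, cross-file questions become simple aggregates. A minimal sketch (assuming more file ids than just 521 have been inserted):

SELECT `state`, SUM(`count`) AS total
FROM `state_count`
GROUP BY `state`
ORDER BY total DESC;

This returns the total number of appearances of each state across all files.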
It's a little unclear which direction your question is going in. But if you want a good relational model to store the data in, that would be three tables: one for the files, one for the states, and one for the count of each state in a file. For example:
The tables:
CREATE TABLE file
       (id integer AUTO_INCREMENT,
        path varchar(256) NOT NULL,
        PRIMARY KEY (id),
        UNIQUE (path));

CREATE TABLE state
       (id integer AUTO_INCREMENT,
        abbreviation varchar(2) NOT NULL,
        PRIMARY KEY (id),
        UNIQUE (abbreviation));

CREATE TABLE occurrences
       (file integer,
        state integer,
        count integer NOT NULL,
        PRIMARY KEY (file, state),
        FOREIGN KEY (file) REFERENCES file (id),
        FOREIGN KEY (state) REFERENCES state (id),
        CHECK (count >= 0));
The data:
INSERT INTO file
            (path)
     VALUES ('521');

INSERT INTO state
            (abbreviation)
     VALUES ('OH'),
            ('PA'),
            ('IN'),
            ('TX'),
            ('IL');

INSERT INTO occurrences
            (file, state, count)
     VALUES (1, 1, 4),
            (1, 2, 1),
            (1, 3, 2),
            (1, 4, 3),
            (1, 5, 7);
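To read the tallies back with human-readable state abbreviations, join the three tables on their surrogate ids. A minimal sketch (assuming the ids produced by the inserts above):

SELECT f.path, s.abbreviation, o.count
FROM occurrences o
JOIN file f ON f.id = o.file
JOIN state s ON s.id = o.state
WHERE f.path = '521';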
The states, of course, would be reused: fill the table with all 50 once and reference them. They should not be inserted again for every file.
You can fill occurrences explicitly with a count of 0 for files where the respective state didn't appear, if you want to distinguish between "I know it's 0." and "I don't know the count.", the latter then being encoded by the absence of a corresponding row. If you don't want to make that distinction and no row simply means a count of 0, you can handle that in queries by using outer joins and coalesce() to "translate" missing rows to 0.
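As a sketch of that second approach (assuming the schema above): a cross join of files and states produces every combination, and coalesce() fills in 0 for the missing rows:

SELECT f.path, s.abbreviation, COALESCE(o.count, 0) AS count
FROM file f
CROSS JOIN state s
LEFT JOIN occurrences o
       ON o.file = f.id AND o.state = s.id;

Every file then reports a count for every state in the state table, whether or not a row was ever inserted for it.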
Related
How to add non-duplicate items inside an array in PostgreSQL?
 question_id | person_id | question_title           | question_source          | question_likes | question_dislikes | created_at
-------------+-----------+--------------------------+--------------------------+----------------+-------------------+---------------------------------
           2 |         2 | This is question Title 1 | This is question Video 1 |                |                   | 2021-11-11 10:32:53.93505+05:30
           3 |         3 | This is question Title 1 | This is question Video 1 |                |                   | 2021-11-11 10:32:58.69947+05:30
           1 |         1 | This is question Title 1 | This is question Video 1 | {2}            | {1}               | 2021-11-11 10:32:45.81733+05:30

I'm trying to add values inside question_likes and question_dislikes, which are arrays. I'm not able to figure out the query to add only non-duplicate items inside the array using SQL; I want all items in the array to be unique. Below is the query I used to create the questions table:

CREATE TABLE questions(
    question_id BIGSERIAL PRIMARY KEY,
    person_id VARCHAR(50) REFERENCES person(person_id) NOT NULL,
    question_title VARCHAR(100) NOT NULL,
    question_source VARCHAR(100),
    question_likes TEXT[] UNIQUE,
    question_dislikes TEXT[] UNIQUE,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

UPDATE questions
SET question_likes = array_append(question_likes, $1)
WHERE question_id = $2
RETURNING *

What changes can I make to the above query to make it work?
This is a drawback of using a de-normalized model. Just because Postgres supports arrays doesn't mean that they are a good choice for problems that are better solved using the best practices of relational data modeling. However, in your case you can simply prevent this by adding an additional condition to your WHERE clause:

UPDATE questions
SET question_likes = array_append(question_likes, $1)
WHERE question_id = $2
  AND $1 <> ALL(question_likes)
RETURNING *

This prevents the update completely if the value of $1 is found anywhere in the array.
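One caveat worth adding (an editorial note, not part of the original answer): if question_likes is still NULL because nothing has been appended yet, the condition $1 <> ALL(question_likes) evaluates to NULL rather than true, so such rows would never be updated. Coalescing the column to an empty array avoids that:

-- assumption: question_likes may be NULL before the first like is recorded
UPDATE questions
SET question_likes = array_append(question_likes, $1)
WHERE question_id = $2
  AND $1 <> ALL(COALESCE(question_likes, '{}'::text[]))
RETURNING *

array_append() itself handles a NULL array fine (it returns a one-element array), so only the WHERE condition needs the coalesce.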
Primary key collision in scope of one transaction
I have a PostgreSQL database which heavily relies on events from the outside, e.g. an administrator changing or adding some fields or records might trigger a change in the overall field structure of other tables. There lies the problem, however, as sometimes the fields changed by the trigger function are primary key fields. There is a table which uses two foreign key ids as its primary key, as in the example below:

# | PK id1 | PK id2 | data |
0 | 1      | 1      | ab   |
1 | 1      | 2      | cd   |
2 | 1      | 3      | ef   |

However, within one transaction (if I may call it that, since in fact it is a plpgsql function), the structure might be changed to:

# | PK id1 | PK id2 | data |
0 | 1      | 3      | ab   |
1 | 1      | 2      | cd   |
2 | 1      | 1      | ef   |

which, as you might have noticed, changed the 0th record's second primary key to 3, and the 2nd's to 1, the opposite of what they were before. It is 100% certain that after the function has taken effect there will be no collisions whatsoever, but I'm wondering how this can be implemented. I could, in fact, use a synthetic BIGSERIAL primary key, yet there would still be a need for those two ids to be UNIQUE constrained, so it wouldn't do the trick, unfortunately.
You can declare a constraint as deferrable, for example a primary key:

CREATE TABLE elbat
       (id int,
        nmuloc int,
        PRIMARY KEY (id) DEFERRABLE);

You can then use SET CONSTRAINTS in a transaction to set deferrable constraints as deferred. That means that they can be violated temporarily during the transaction but must be fulfilled at the transaction's COMMIT.

Let's assume we have some data in our example table:

INSERT INTO elbat (id, nmuloc)
VALUES (1, 1), (2, 2);

We can now switch the IDs like this:

BEGIN TRANSACTION;
SET CONSTRAINTS ALL DEFERRED;
UPDATE elbat SET id = 2 WHERE nmuloc = 1;
SELECT * FROM elbat;
UPDATE elbat SET id = 1 WHERE nmuloc = 2;
COMMIT;

There's no error, even though the IDs are both 2 after the first UPDATE.

db<>fiddle

More on that can be found in the documentation, e.g. in CREATE TABLE (or ALTER TABLE) and SET CONSTRAINTS.
postgres: Check pair of columns when inserting new row
I have a table like this:

id | person | supporter | referredby|
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
0  | ABC    | DEF       |           |
1  | ABC    | GHI       | DEF       |
2  | CBA    | FED       |           |
3  | CBA    | IHG       | FED       |

What I'm trying to accomplish is that I'd like Postgres to reject an INSERT if the value in referredby isn't in the supporter column for that specific person (a null referredby is OK). For example, with the data above:

4, 'ABC', 'JKL', null: accepted (can be null)
4, 'ABC', 'JKL', 'IHG': rejected (IHG not listed as a supporter for ABC)
4, 'ABC', 'JKL', 'DEF': accepted (DEF is listed as a supporter for ABC)

Maybe a check constraint? I'm not sure how to piece it together.
Add a foreign key that references (person, supporter). (That needs to be a unique key.)

alter table t
  add constraint cname unique (person, supporter);

alter table t
  add constraint fk foreign key (person, referredby)
      references t (person, supporter);

(ANSI SQL syntax, but probably also supported by PostgreSQL.)
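To illustrate against the question's sample data (an editorial sketch; the table name t and the constraints are as in the statements above):

-- accepted: referredby is null, so the composite foreign key is not checked
insert into t (id, person, supporter, referredby)
values (4, 'ABC', 'JKL', null);

-- rejected: ('ABC', 'IHG') does not exist as a (person, supporter) pair
insert into t (id, person, supporter, referredby)
values (5, 'ABC', 'MNO', 'IHG');

-- accepted: ('ABC', 'DEF') exists as a (person, supporter) pair
insert into t (id, person, supporter, referredby)
values (6, 'ABC', 'PQR', 'DEF');

With the default MATCH SIMPLE behavior of composite foreign keys in PostgreSQL, a NULL in referredby makes the check pass automatically, which matches the "null referredby is ok" requirement.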
SQL Query 2 tables null results
I was asked this question in an interview: from the two tables below, write a query to pull customers with no sales orders. How many ways are there to write this query, and which would have the best performance?

Table 1: Customer - CustomerID
Table 2: SalesOrder - OrderID, CustomerID, OrderDate

Query:

SELECT *
FROM Customer C
RIGHT OUTER JOIN SalesOrder SO ON C.CustomerID = SO.CustomerID
WHERE SO.OrderID = NULL

Is my query correct, and are there other ways to write the query and get the same results?
Answering for MySQL instead of SQL Server, because you only tagged it with SQL Server later, so I figured that (since this was an interview question) it wouldn't matter to you which DBMS this is for. Note, though, that the queries I wrote are standard SQL; they should run in every RDBMS out there. How each RDBMS handles those queries is another issue.

I wrote this little procedure for you, to have a test case. It creates the tables customers and orders like you specified, and I added primary keys and foreign keys, as one usually would. No other indexes, as every column worth indexing here is already a primary key. 250 customers are created; 100 of them made an order (though, out of convenience, none of them twice or multiple times). A dump of the data follows; the script is posted just in case you want to play around a little by increasing the numbers.

delimiter $$

create procedure fill_table()
begin
    create table customers(customerId int primary key) engine=innodb;

    set @x = 1;
    while (@x <= 250) do
        insert into customers values(@x);
        set @x := @x + 1;
    end while;

    create table orders(orderId int auto_increment primary key,
                        customerId int,
                        orderDate timestamp,
                        foreign key fk_customer (customerId) references customers(customerId)
                       ) engine=innodb;

    insert into orders(customerId, orderDate)
    select customerId, now() - interval customerId day
    from customers
    order by rand()
    limit 100;
end $$

delimiter ;

call fill_table();

For me, this resulted in this:

CREATE TABLE `customers` (
  `customerId` int(11) NOT NULL,
  PRIMARY KEY (`customerId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `customers` VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15),(16),(17),(18),(19),(20),(21),(22),(23),(24),(25),(26),(27),(28),(29),(30),(31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41),(42),(43),(44),(45),(46),(47),(48),(49),(50),(51),(52),(53),(54),(55),(56),(57),(58),(59),(60),(61),(62),(63),(64),(65),(66),(67),(68),(69),(70),(71),(72),(73),(74),(75),(76),(77),(78),(79),(80),(81),(82),(83),(84),(85),(86),(87),(88),(89),(90),(91),(92),(93),(94),(95),(96),(97),(98),(99),(100),(101),(102),(103),(104),(105),(106),(107),(108),(109),(110),(111),(112),(113),(114),(115),(116),(117),(118),(119),(120),(121),(122),(123),(124),(125),(126),(127),(128),(129),(130),(131),(132),(133),(134),(135),(136),(137),(138),(139),(140),(141),(142),(143),(144),(145),(146),(147),(148),(149),(150),(151),(152),(153),(154),(155),(156),(157),(158),(159),(160),(161),(162),(163),(164),(165),(166),(167),(168),(169),(170),(171),(172),(173),(174),(175),(176),(177),(178),(179),(180),(181),(182),(183),(184),(185),(186),(187),(188),(189),(190),(191),(192),(193),(194),(195),(196),(197),(198),(199),(200),(201),(202),(203),(204),(205),(206),(207),(208),(209),(210),(211),(212),(213),(214),(215),(216),(217),(218),(219),(220),(221),(222),(223),(224),(225),(226),(227),(228),(229),(230),(231),(232),(233),(234),(235),(236),(237),(238),(239),(240),(241),(242),(243),(244),(245),(246),(247),(248),(249),(250);

CREATE TABLE `orders` (
  `orderId` int(11) NOT NULL AUTO_INCREMENT,
  `customerId` int(11) DEFAULT NULL,
  `orderDate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`orderId`),
  KEY `fk_customer` (`customerId`),
  CONSTRAINT `orders_ibfk_1` FOREIGN KEY (`customerId`) REFERENCES `customers` (`customerId`)
) ENGINE=InnoDB AUTO_INCREMENT=128 DEFAULT CHARSET=utf8;

INSERT INTO `orders` VALUES (1,247,'2013-06-24 19:50:07'),(2,217,'2013-07-24 19:50:07'),(3,8,'2014-02-18 20:50:07'),(4,40,'2014-01-17 20:50:07'),(5,52,'2014-01-05 20:50:07'),(6,80,'2013-12-08 20:50:07'),(7,169,'2013-09-10 19:50:07'),(8,135,'2013-10-14 19:50:07'),(9,115,'2013-11-03 20:50:07'),(10,225,'2013-07-16 19:50:07'),(11,112,'2013-11-06 20:50:07'),(12,243,'2013-06-28 19:50:07'),(13,158,'2013-09-21 19:50:07'),(14,24,'2014-02-02 20:50:07'),(15,214,'2013-07-27 19:50:07'),(16,25,'2014-02-01 20:50:07'),(17,245,'2013-06-26 19:50:07'),(18,182,'2013-08-28 19:50:07'),(19,166,'2013-09-13 19:50:07'),(20,69,'2013-12-19 20:50:07'),(21,85,'2013-12-03 20:50:07'),(22,44,'2014-01-13 20:50:07'),(23,103,'2013-11-15 20:50:07'),(24,19,'2014-02-07 20:50:07'),(25,33,'2014-01-24 20:50:07'),(26,102,'2013-11-16 20:50:07'),(27,41,'2014-01-16 20:50:07'),(28,94,'2013-11-24 20:50:07'),(29,43,'2014-01-14 20:50:07'),(30,150,'2013-09-29 19:50:07'),(31,218,'2013-07-23 19:50:07'),(32,131,'2013-10-18 19:50:07'),(33,77,'2013-12-11 20:50:07'),(34,2,'2014-02-24 20:50:07'),(35,45,'2014-01-12 20:50:07'),(36,230,'2013-07-11 19:50:07'),(37,101,'2013-11-17 20:50:07'),(38,31,'2014-01-26 20:50:07'),(39,56,'2014-01-01 20:50:07'),(40,176,'2013-09-03 19:50:07'),(41,223,'2013-07-18 19:50:07'),(42,145,'2013-10-04 19:50:07'),(43,26,'2014-01-31 20:50:07'),(44,62,'2013-12-26 20:50:07'),(45,195,'2013-08-15 19:50:07'),(46,153,'2013-09-26 19:50:07'),(47,179,'2013-08-31 19:50:07'),(48,104,'2013-11-14 20:50:07'),(49,7,'2014-02-19 20:50:07'),(50,209,'2013-08-01 19:50:07'),(51,86,'2013-12-02 20:50:07'),(52,110,'2013-11-08 20:50:07'),(53,204,'2013-08-06 19:50:07'),(54,187,'2013-08-23 19:50:07'),(55,114,'2013-11-04 20:50:07'),(56,38,'2014-01-19 20:50:07'),(57,236,'2013-07-05 19:50:07'),(58,79,'2013-12-09 20:50:07'),(59,96,'2013-11-22 20:50:07'),(60,37,'2014-01-20 20:50:07'),(61,207,'2013-08-03 19:50:07'),(62,22,'2014-02-04 20:50:07'),(63,120,'2013-10-29 20:50:07'),(64,200,'2013-08-10 19:50:07'),(65,51,'2014-01-06 20:50:07'),(66,181,'2013-08-29 19:50:07'),(67,4,'2014-02-22 20:50:07'),(68,123,'2013-10-26 19:50:07'),(69,108,'2013-11-10 20:50:07'),(70,55,'2014-01-02 20:50:07'),(71,76,'2013-12-12 20:50:07'),(72,6,'2014-02-20 20:50:07'),(73,18,'2014-02-08 20:50:07'),(74,211,'2013-07-30 19:50:07'),(75,53,'2014-01-04 20:50:07'),(76,216,'2013-07-25 19:50:07'),(77,32,'2014-01-25 20:50:07'),(78,74,'2013-12-14 20:50:07'),(79,138,'2013-10-11 19:50:07'),(80,197,'2013-08-13 19:50:07'),(81,221,'2013-07-20 19:50:07'),(82,118,'2013-10-31 20:50:07'),(83,61,'2013-12-27 20:50:07'),(84,28,'2014-01-29 20:50:07'),(85,16,'2014-02-10 20:50:07'),(86,39,'2014-01-18 20:50:07'),(87,3,'2014-02-23 20:50:07'),(88,46,'2014-01-11 20:50:07'),(89,189,'2013-08-21 19:50:07'),(90,59,'2013-12-29 20:50:07'),(91,249,'2013-06-22 19:50:07'),(92,127,'2013-10-22 19:50:07'),(93,47,'2014-01-10 20:50:07'),(94,178,'2013-09-01 19:50:07'),(95,141,'2013-10-08 19:50:07'),(96,188,'2013-08-22 19:50:07'),(97,220,'2013-07-21 19:50:07'),(98,15,'2014-02-11 20:50:07'),(99,175,'2013-09-04 19:50:07'),(100,206,'2013-08-04 19:50:07');

Okay, now to the queries. Three ways came to my mind. I omitted the right join that MDiesel did, because it's actually just another way of writing a left join; it was invented for lazy SQL developers who don't want to switch table names but would rather just rewrite one word.
Anyway, first query:

select c.*
from customers c
left join orders o on c.customerId = o.customerId
where o.customerId is null;

Results in an execution plan like this:

+----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key         | key_len | ref              | rows | Extra                    |
+----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
|  1 | SIMPLE      | c     | index | NULL          | PRIMARY     | 4       | NULL             |  250 | Using index              |
|  1 | SIMPLE      | o     | ref   | fk_customer   | fk_customer | 5       | wtf.c.customerId |    1 | Using where; Using index |
+----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+

Second query:

select c.*
from customers c
where c.customerId not in (select distinct customerId from orders);

Results in an execution plan like this:

+----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+
| id | select_type        | table  | type           | possible_keys | key         | key_len | ref  | rows | Extra                    |
+----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+
|  1 | PRIMARY            | c      | index          | NULL          | PRIMARY     | 4       | NULL |  250 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | orders | index_subquery | fk_customer   | fk_customer | 5       | func |    2 | Using index              |
+----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+

Third query:

select c.*
from customers c
where not exists (select 1 from orders o where o.customerId = c.customerId);

Results in an execution plan like this:

+----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
| id | select_type        | table | type  | possible_keys | key         | key_len | ref              | rows | Extra                    |
+----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
|  1 | PRIMARY            | c     | index | NULL          | PRIMARY     | 4       | NULL             |  250 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | o     | ref   | fk_customer   | fk_customer | 5       | wtf.c.customerId |    1 | Using where; Using index |
+----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+

We can see in all execution plans that the customers table is read as a whole, but from the index (the implicit one, as the only column is the primary key). This may change when you select other columns from the table that are not in an index.

The first one seems to be the best. For each row in customers, only one row in orders is read. The id column suggests that MySQL can do this in one step, as only indexes are involved.

The second query seems to be the worst (though all three queries shouldn't perform too badly). For each row in customers, the subquery is executed (the select_type column tells this).

The third query is not much different in that it uses a dependent subquery, but it should perform better than the second query. Explaining the small differences would lead too far now.

If you're interested, here's the manual page that explains what each column and their values mean: EXPLAIN output.

Finally: I'd say the first query will perform best, but as always, in the end one has to measure, measure, and measure again.
I can think of two other ways to write this query:

SELECT C.*
FROM Customer C
LEFT OUTER JOIN SalesOrder SO ON C.CustomerID = SO.CustomerID
WHERE SO.CustomerID IS NULL

SELECT C.*
FROM Customer C
WHERE NOT C.CustomerID IN (SELECT CustomerID FROM SalesOrder)
The solutions involving outer joins will perform better than a solution using NOT IN.
Write SQL script to insert data
In a database that contains many tables, I need to write a SQL script to insert data if it does not exist.

Table: currency

| id     | Code    | lastupdate | rate      |
+--------+---------+------------+-----------+
| 1      | USD     | 05-11-2012 | 2         |
| 2      | EUR     | 05-11-2012 | 3         |

Table: client

| id     | name    | createdate | currencyId|
+--------+---------+------------+-----------+
| 4      | tony    | 11-24-2010 | 1         |
| 5      | john    | 09-14-2010 | 2         |

Table: account

| id     | number  | createdate | clientId  |
+--------+---------+------------+-----------+
| 7      | 1234    | 12-24-2010 | 4         |
| 8      | 5648    | 12-14-2010 | 5         |

I need to insert into:

currency (id=3, Code=JPY, lastupdate=today, rate=4)
client (id=6, name=Joe, createdate=today, currencyId=Currency with Code 'USD')
account (id=9, number=0910, createdate=today, clientId=Client with name 'Joe')

Problem:

- the script must check if the row exists before inserting the new data
- the script must allow adding a foreign key to the new row where this key refers to a row already in the database (such as currencyId in the client table)
- the script must allow adding the current datetime to a column in the insert statement (such as createdate in the client table)
- the script must allow adding a foreign key to the new row where this key refers to a row inserted by the same script (such as clientId in the account table)

Note: I tried the following SQL statement, but it solved only the first problem:

INSERT INTO Client (id, name, createdate, currencyId)
SELECT 6, 'Joe', '05-11-2012', 1
WHERE not exists (SELECT * FROM Client where id=6);

This query runs without any error, but as you can see, I wrote createdate and currencyId manually. I need to take the currency id from a select statement with a where clause (I tried to substitute the 1 with a select statement, but the query failed). This is an example of what I need; in my database, I need this script to insert more than 30 rows into more than 10 tables. Any help?
You wrote:

    I tried to substitute 1 by select statement but query failed

But I wonder why did it fail? What did you try? This should work:

INSERT INTO Client (id, name, createdate, currencyId)
SELECT 6,
       'Joe',
       current_date,
       (select c.id from currency as c where c.code = 'USD') as currencyId
WHERE not exists (SELECT * FROM Client where id=6);
It looks like you can work out whether the data exists. Here is a quick bit of code written in SQL Server / Sybase that I think answers your basic questions:

create table currency(
    id numeric(16,0) identity primary key,
    code varchar(3) not null,
    lastupdated datetime not null,
    rate smallint
);

create table client(
    id numeric(16,0) identity primary key,
    createddate datetime not null,
    currencyid numeric(16,0) foreign key references currency(id)
);

insert into currency (code, lastupdated, rate)
values('EUR', GETDATE(), 3)

--inserts the date and last allocated identity into client
insert into client(createddate, currencyid)
values(GETDATE(), @@IDENTITY)
go
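A hedged side note (an editorial addition, not part of the original answer): on SQL Server specifically, SCOPE_IDENTITY() is usually preferred over @@IDENTITY, because @@IDENTITY also picks up identity values generated by triggers firing on other tables; Sybase would still use @@IDENTITY. A minimal sketch of the same pair of inserts:

-- SQL Server only: SCOPE_IDENTITY() returns the identity generated in the
-- current scope, unaffected by triggers on other tables
insert into currency (code, lastupdated, rate)
values('JPY', GETDATE(), 4)

insert into client(createddate, currencyid)
values(GETDATE(), SCOPE_IDENTITY())
go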