I have a file, "sharp.csv", in which the data look like this:
<name>;<src_ip_address>;<mac_address>;<ip_address>;Vlan1;NOW()
Edit: these data are produced by a Perl script.
I want to insert them into an SQL database with the help of a Bash script. The target table looks like this:
+---------------+-------------+------+-----+---------+-------+
| Field         | Type        | Null | Key | Default | Extra |
+---------------+-------------+------+-----+---------+-------+
| code_site     | varchar(64) | NO   | PRI | NULL    |       |
| ip_source     | varchar(64) | NO   | PRI | NULL    |       |
| mac_relevee   | varchar(64) | NO   | PRI | NULL    |       |
| ip_relevee    | varchar(64) | YES  |     | NULL    |       |
| vlan_concerne | varchar(64) | YES  |     | NULL    |       |
| date_polling  | datetime    | YES  |     | NULL    |       |
+---------------+-------------+------+-----+---------+-------+
So my commands are just:
arp_router_file="$DIR/working-dir/sharp.csv"
db_arp_router_table="routeur_arp"
$mysql -h $db_address -u $db_user -p$db_passwd $db_name -e "load data local infile '$arp_router_file' REPLACE INTO TABLE $db_arp_router_table fields terminated by ';';"
But my NOW() call doesn't work.
The data are being inserted, but the date_polling column is filled with "0000-00-00 00:00:00".
NOW() is a function, but LOAD DATA treats every field in the input file as literal data, so it tries to insert the string "NOW()" into a datetime column. That string can't be converted to a valid datetime value, so you end up with the default "0000-00-00 00:00:00".
You will have to build actual SQL INSERT statements from your input file. You can do this with awk:
awk -F';' '{print "INSERT INTO routeur_arp (code_site, ip_source, mac_relevee, ip_relevee, vlan_concerne, date_polling) VALUES (\"" $1 "\", \"" $2 "\", \"" $3 "\", \"" $4 "\", \"" $5 "\", " $6 ");"}' input_file.csv > sql_statements.sql
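The generated file can then be fed back through the same client call; this is only a sketch, reusing the connection variables from the question's script:
# Sketch only: run the generated INSERT statements with the connection
# variables ($mysql, $db_address, $db_user, $db_passwd, $db_name) defined above.
$mysql -h "$db_address" -u "$db_user" -p"$db_passwd" "$db_name" < sql_statements.sql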
To avoid sending the NOW() instruction to MySQL, I'll send it a datetime string built with Perl:
use Time::Piece;
my $t    = localtime;
my $date = $t->ymd;   # YYYY-MM-DD
my $time = $t->hms;   # HH:MM:SS
And instead of
<name>;<src_ip_address>;<mac_address>;<ip_address>;Vlan1;NOW()
It'll be
<name>;<src_ip_address>;<mac_address>;<ip_address>;Vlan1;$date $time\n
Perl sends it as a string, and MySQL understands it as a valid DATETIME.
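As an aside, not part of the original answer: the same kind of literal timestamp can also be produced directly in the Bash loader if you prefer not to touch the Perl generator; a minimal sketch:
# Sketch: build a "YYYY-MM-DD HH:MM:SS" stamp in the shell instead of in Perl.
stamp=$(date '+%F %T')
echo "<name>;<src_ip_address>;<mac_address>;<ip_address>;Vlan1;$stamp" >> sharp.csv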
Say I have some output from the command openstack security group list:
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 1dda8a57-fff4-4832-9bac-4e806992f19a | default | Default security group | 0ce266c801ae4611bb5744a642a01eda | [] |
| 2379d595-0fdc-479f-a211-68c83caa9d42 | default | Default security group | 602ad29db6304ec39dc253bcbba408a7 | [] |
| 431df666-a9ba-4643-a3a0-9a70c89e1c05 | tempest | tempest test | b320a32508a74829a0563078da3cba2e | [] |
| 5b54e63c-f2e5-4eda-b2b9-a7061d19695f | default | Default security group | 57e745b9612941709f664c58d93e4188 | [] |
| 6381ebaf-79fb-4a31-bc32-49e2fecb7651 | default | Default security group | f5c30c42f3d74b8989c0c806603611da | [] |
| 6cce5c94-c607-4224-9401-c2f920c986ef | default | Default security group | e3190b309f314ebb84dffe249009d9e9 | [] |
| 7402fdd3-0f1e-4eb1-a9cd-6896f1457567 | default | Default security group | d390b68f95c34cefb0fc942d4e0742f9 | [] |
| 76978603-545b-401d-9959-9574e907ec57 | default | Default security group | 3a7b5361e79f4914b09b022bcae7b44a | [] |
| 7705da1e-d01e-483d-ab82-c99fdb9eba9c | default | Default security group | 1da03b5e7ce24be38102bd9c8f99e914 | [] |
| 7fd52305-850c-4d9a-a5e9-0abfb267f773 | default | Default security group | 5b20d6b7dfab4bfbac0a1dd3eb6bf460 | [] |
| 82a38caa-8e7f-468f-a4bc-e60a8d4589a6 | default | Default security group | d544d2243caa4e1fa027cfdc38a4f43e | [] |
| a4a5eaba-5fc9-463a-8e09-6e28e5b42f80 | default | Default security group | 08efe6ec9b404119a76996907abc606b | [] |
| e7c531e3-cdc3-4b7c-bf32-934a2f2de3f1 | default | Default security group | 539c238bf0e84463b8639d0cb0278699 | [] |
| f96bf2e8-35fe-4612-8988-f489fd4c04e3 | default | Default security group | 2de96a1342ee42a7bcece37163b8dfa0 | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
And I have a list of Project IDs:
0ce266c801ae4611bb5744a642a01eda
b320a32508a74829a0563078da3cba2e
57e745b9612941709f664c58d93e4188
f5c30c42f3d74b8989c0c806603611da
e3190b309f314ebb84dffe249009d9e9
d390b68f95c34cefb0fc942d4e0742f9
3a7b5361e79f4914b09b022bcae7b44a
5b20d6b7dfab4bfbac0a1dd3eb6bf460
d544d2243caa4e1fa027cfdc38a4f43e
08efe6ec9b404119a76996907abc606b
539c238bf0e84463b8639d0cb0278699
2de96a1342ee42a7bcece37163b8dfa0
which is the intersection of two files I get from running fgrep -x -f projects secgrup.
How can I extract the rows from the ID column for which the Project column matches this list that I have?
It would be something like:
openstack security group list | awk '$2 && $2!="ID" && $10 in $(fgrep -x -f projects secgrup) {print $2}'
which should yield:
1dda8a57-fff4-4832-9bac-4e806992f19a
431df666-a9ba-4643-a3a0-9a70c89e1c05
5b54e63c-f2e5-4eda-b2b9-a7061d19695f
6381ebaf-79fb-4a31-bc32-49e2fecb7651
6cce5c94-c607-4224-9401-c2f920c986ef
7402fdd3-0f1e-4eb1-a9cd-6896f1457567
76978603-545b-401d-9959-9574e907ec57
7fd52305-850c-4d9a-a5e9-0abfb267f773
82a38caa-8e7f-468f-a4bc-e60a8d4589a6
a4a5eaba-5fc9-463a-8e09-6e28e5b42f80
e7c531e3-cdc3-4b7c-bf32-934a2f2de3f1
f96bf2e8-35fe-4612-8988-f489fd4c04e3
but obviously this doesn't work.
You can use this awk command for it:
awk -F ' *\\| *' 'FNR == NR {arr[$1]; next}
$5 in arr {print $2}' projects secgrup
1dda8a57-fff4-4832-9bac-4e806992f19a
431df666-a9ba-4643-a3a0-9a70c89e1c05
5b54e63c-f2e5-4eda-b2b9-a7061d19695f
6381ebaf-79fb-4a31-bc32-49e2fecb7651
6cce5c94-c607-4224-9401-c2f920c986ef
7402fdd3-0f1e-4eb1-a9cd-6896f1457567
76978603-545b-401d-9959-9574e907ec57
7fd52305-850c-4d9a-a5e9-0abfb267f773
82a38caa-8e7f-468f-a4bc-e60a8d4589a6
a4a5eaba-5fc9-463a-8e09-6e28e5b42f80
e7c531e3-cdc3-4b7c-bf32-934a2f2de3f1
f96bf2e8-35fe-4612-8988-f489fd4c04e3
Here:
-F ' *\\| *' sets the input field separator to | surrounded by zero or more spaces on either side, so for the table $2 is the ID column and $5 is the Project column.
FNR == NR is true only while the first file (projects) is being read; each of its IDs is stored as a key of arr, and next then skips to the next line.
For the second file (secgrup), $5 in arr checks whether the Project value is one of those IDs, and print $2 prints the matching ID.
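The same idea can be applied to the live command output instead of a saved secgrup file; a sketch, assuming the ID list is in the projects file and that your awk accepts - for standard input:
# Sketch: read the ID list from "projects" and the table itself from stdin ("-").
openstack security group list |
  awk -F ' *\\| *' 'FNR == NR {arr[$1]; next} $5 in arr {print $2}' projects -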
Based on your shown samples, please try the following awk code, written for GNU awk (it relies on gawk's three-argument match()).
awk '
FNR==NR{
  arr1[$0]
  next
}
match($0, /^\| +(\S+) +\| +[^|]+\| +[^|]+\| +(\S+) +\|/, arr2) && (arr2[2] in arr1){
  print arr2[1]
}
' ids Input_file
Explanation:
Checking the FNR==NR condition, which is true only while the first input file, named ids (where your IDs are stored), is being read.
Then an array named arr1 is created, indexed by the current line.
The next keyword skips all further statements for that line.
Then using the match function with the regex ^\| +(\S+) +\| +[^|]+\| +[^|]+\| +(\S+) +\|, which creates two capturing groups and stores them in the array arr2: the ID column in arr2[1] and the Project column in arr2[2].
Then checking whether the captured Project value is present in arr1; if it is, the corresponding ID is printed, otherwise nothing is done.
Since this morning, psql fails on PGP_SYM_DECRYPT:
LINE 1: select login,PGP_SYM_DECRYPT(password,'*******') from pa...
                     ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
The extension pgcrypto is present, so I can no longer select any of the data that was previously encrypted with pgp_sym_encrypt...
What is going wrong? How can I solve it?
If the extension was not updated and it was working yesterday, the issue is likely with the search_path. Make sure the schema the extension is installed in is included in the search_path.
To check where the extension is installed, type \dx and note the schema. Then type show search_path; and make sure the extension's schema is listed there (it may be public). If it is not, add the extension's schema to the search path.
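A quick way to run those checks non-interactively, as a sketch only; the database name perso (taken from the prompts below) and the public schema are assumptions:
# Sketch: see which schema pgcrypto is installed in, and what the session searches.
psql -d perso -c '\dx pgcrypto'
psql -d perso -c 'SHOW search_path;'
# If the extension's schema (e.g. public) is missing, add it for new sessions:
psql -d perso -c 'ALTER DATABASE perso SET search_path = "$user", public;'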
To investigate further, I built a copy of the table with fewer columns and tried the same visual password check:
perso=# select quoi,login,pgp_sym_decrypt(password::bytea,'someKEY') from tempo where quoi ilike '%somesite%' ;
quoi | login | pgp_sym_decrypt
--------------+-----------------------+-----------------
somesite.com | somename#somewhere.fr | foobar
(1 row)
perso=# select quoi,login,pgp_sym_decrypt(password,'someKEY') from tempo where quoi ilike '%somesite%' ;
quoi | login | pgp_sym_decrypt
--------------+-----------------------+-----------------
somesite.com | somename#somewhere.fr | foobar
(1 row)
perso=# \d+ tempo
Table "public.tempo"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------+---------+-----------+----------+---------+----------+--------------+-------------
ref | integer | | | | plain | |
quoi | text | | | | extended | |
login | text | | | | extended | |
password | bytea | | | | extended | |
There are no more problems here, so the issue is with the original table or with how its data are stored.
perso=# \d+ passwd
Table "public.passwd"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------+---------+-----------+----------+-------------------------------------+----------+--------------+-------------
ref | integer | | not null | nextval('passwd_ref_seq'::regclass) | plain | |
quoi | text | | not null | | extended | |
login | text | | not null | | extended | |
password | text | | not null | | extended | |
Indexes:
"passwd_pkey" PRIMARY KEY, btree (ref)
"passwd_password_key" UNIQUE CONSTRAINT, btree (password)
perso=#
perso=# select quoi,login,pgp_sym_decrypt(password,'someKEY') from passwd where quoi ilike '%somesite%' ;
ERROR: function pgp_sym_decrypt(text, unknown) does not exist
LINE 1: select quoi,login,pgp_sym_decrypt(password,'someKEY') fr...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
perso=#
Here the error comes back, and we can see that the password column of this table is of type text.
So, a first test: a plain ALTER TABLE was a big failure, as it re-encrypted all the data a second time. The procedure for my test therefore became:
drop the test table passwd
restore the test table passwd
copy the data into the tempo table
delete the data from the passwd table
alter the passwd table
copy the data back into the passwd table
So I did:
perso=#
perso=# delete from passwd ;
DELETE 106
perso=# alter table passwd alter column password type bytea using PGP_SYM_ENCRYPT(password::text,'someKEY');
ALTER TABLE
perso=# \d+ passwd
Table "public.passwd"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------+---------+-----------+----------+-------------------------------------+----------+--------------+-------------
ref | integer | | not null | nextval('passwd_ref_seq'::regclass) | plain | |
quoi | text | | not null | | extended | |
login | text | | not null | | extended | |
password | bytea | | not null | | extended | |
Indexes:
"passwd_pkey" PRIMARY KEY, btree (ref)
"passwd_password_key" UNIQUE CONSTRAINT, btree (password)
perso=# insert into passwd (ref,quoi,login,password) select ref,quoi,login,password::bytea from tempo ;
INSERT 0 106
perso=#
perso=#
perso=# select quoi,login,pgp_sym_decrypt(password,'someKEY') from passwd where quoi ilike '%somesite%' ;
quoi | login | pgp_sym_decrypt
--------------+-----------------------+-----------------
somesite.com | somename#somewhere.fr | foobar
(1 row)
perso=#
Then I backed up the database and applied a similar procedure to fix the real table, with success. There might be a better way to do it, but this way I understand the process.
Both solutions are shown there:
using queries with the password::bytea cast syntax
fixing the column type to bytea
I have a FASTA file, temp_mart.txt, like this:
ENSG00000100219|ENST00000005082
MTLLTFRDVAIEFSLEEWKCLDLAQQNLYRDVMLENYRNLFSVGLTVCKPGL
And I tried to load it into an SQL table using:
load data local infile '~/Desktop/temp_mart.txt' into table mart
But instead of getting an output like this:
+-----------------+-----------------+-----------------+
| ENSG            | ENST            | 3UTR            |
+-----------------+-----------------+-----------------+
| >ENSG0000010021 | ENST00000005082 | MTLLTFRDVAIEFSL |
I get this:
+-----------------+------+------+
| ENSG            | ENST | 3UTR |
+-----------------+------+------+
| >ENSG0000010021 | NULL | NULL |
| MTLLTFRDVAIEFSL | NULL | NULL |
| EPWNVKRQEAADGHP | NULL | NULL |
| DKFTAMSSHFTQDLL | NULL | NULL |
Everything seems to go into the first column. What is the best way to load it into the table so it presents as intended? Do I need to convert it into a csv file first?
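One possible preprocessing step is sketched below, under the assumption that every record is a header line containing | followed by one or more sequence lines; the temp_mart.tsv name is just an example:
# Sketch: turn each "ENSG...|ENST..." header plus its sequence lines into one
# tab-separated row (ENSG, ENST, sequence).
awk -F'|' '
  NF == 2 { if (ensg != "") print ensg "\t" enst "\t" seq    # flush previous record
            ensg = $1; enst = $2; seq = ""; next }
  { seq = seq $0 }                                           # collect sequence lines
  END { if (ensg != "") print ensg "\t" enst "\t" seq }
' ~/Desktop/temp_mart.txt > ~/Desktop/temp_mart.tsv
Since LOAD DATA's default field terminator is a tab, the statement above can then point at temp_mart.tsv instead of temp_mart.txt, assuming the mart table has the three columns shown in the desired output.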
I am trying to restore an SQL dump that looks like this:
COPY table_name (id, oauth_id, foo, bar) FROM stdin;
1 142 \N xxxxxxx
2 142 \N yyyyyyy
<dozen similar lines>
The last line in this dump is: \.
Command to restore:
psql < table.sql
or
psql --file=dump.sql
\d+ table_name:
Table "public.table_name"
Column | Type | Modifiers | Storage | Stats target | Description
---------------------+-----------------------+-------------------------------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('connected_table_name_id_seq'::regclass) | plain | |
oauth_id | integer | not null | plain | |
foo | character varying | | extended | |
bar | character varying | | extended | |
Sadly, it looks like the standard method for backup and restore does not work :(
Version of the psql: 9.5.4, version of the server: 9.5.2
I have a table with the following format.
mysql> describe unit_characteristics;
+----------------------+------------------+------+-----+---------+----------------+
| Field                | Type             | Null | Key | Default | Extra          |
+----------------------+------------------+------+-----+---------+----------------+
| id                   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| uut_id               | int(10) unsigned | NO   | PRI | NULL    |                |
| uut_sn               | varchar(45)      | NO   |     | NULL    |                |
| characteristic_name  | varchar(80)      | NO   | PRI | NULL    |                |
| characteristic_value | text             | NO   |     | NULL    |                |
| creation_time        | datetime         | NO   |     | NULL    |                |
| last_modified_time   | datetime         | NO   |     | NULL    |                |
+----------------------+------------------+------+-----+---------+----------------+
Each uut_sn has multiple characteristic_name/value pairs. I want to use MySQL to generate a table like this:
+--------+-------------+-------------+-------------+-------------+-----+
| uut_sn | char_name_1 | char_name_2 | char_name_3 | char_name_4 | ... |
+--------+-------------+-------------+-------------+-------------+-----+
| 00000  | char_val_1  | char_val_2  | char_val_3  | char_val_4  | ... |
| 00001  | char_val_1  | char_val_2  | char_val_3  | char_val_4  | ... |
| 00002  | char_val_1  | char_val_2  | char_val_3  | char_val_4  | ... |
| .....  | char_val_1  | char_val_2  | char_val_3  | char_val_4  | ... |
+--------+-------------+-------------+-------------+-------------+-----+
Is this possible with just one query?
Thanks,
-peter
This is a standard pivot query:
SELECT uc.uut_sn,
MAX(CASE
WHEN uc.characteristic_name = 'char_name_1' THEN uc.characteristic_value
ELSE NULL
END) AS char_name_1,
MAX(CASE
WHEN uc.characteristic_name = 'char_name_2' THEN uc.characteristic_value
ELSE NULL
END) AS char_name_2,
MAX(CASE
WHEN uc.characteristic_name = 'char_name_3' THEN uc.characteristic_value
ELSE NULL
END) AS char_name_3
FROM unit_characteristics uc
GROUP BY uc.uut_sn
To make it dynamic, you need to use MySQL's dynamic SQL feature, prepared statements. It requires two queries: the first gets the list of characteristic_name values, so that you can concatenate the appropriate CASE expressions, like the ones in my example, into the final query.
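A hedged sketch of that two-step approach follows; the table and column names come from the question, the connection variables are placeholders, and the GROUP_CONCAT/PREPARE pattern is one common way to do it (group_concat_max_len may need raising if there are many characteristic names):
# Sketch: build the CASE list from the distinct characteristic names, then run
# the generated pivot as a prepared statement. Connection details are placeholders.
mysql -h "$db_host" -u "$db_user" -p"$db_passwd" "$db_name" <<'SQL'
SELECT GROUP_CONCAT(DISTINCT
         CONCAT('MAX(CASE WHEN uc.characteristic_name = ''', characteristic_name,
                ''' THEN uc.characteristic_value END) AS `', characteristic_name, '`')
       SEPARATOR ', ')
  INTO @cols
  FROM unit_characteristics;
SET @sql = CONCAT('SELECT uc.uut_sn, ', @cols,
                  ' FROM unit_characteristics uc GROUP BY uc.uut_sn');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SQL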
You're using the EAV antipattern. There's no way to automatically generate the pivot table you describe without hardcoding the characteristics you want to include. As @OMG Ponies mentions, you need to use dynamic SQL to generate the query in a custom fashion for the set of characteristics you want in the result.
Instead, I recommend you fetch the characteristics one per row, as they are stored in the database, and if you want an application object to represent a single UUT with all its characteristics, you write code to loop over the rows as you fetch them in your application, collecting them into objects.
For example in PHP:
$sql = "SELECT uut_sn, characteristic_name, characteristic_value
        FROM unit_characteristics";
$stmt = $pdo->query($sql);

$objects = array();
while ($row = $stmt->fetch()) {
    // Create the object the first time we see this serial number
    if (!isset($objects[ $row["uut_sn"] ])) {
        $objects[ $row["uut_sn"] ] = new Uut();
    }
    // One dynamic property per characteristic row
    $objects[ $row["uut_sn"] ]->{$row["characteristic_name"]}
        = $row["characteristic_value"];
}
This has a few advantages over the solution of hardcoding characteristic names in your query:
This solution takes only one SQL query instead of two.
No complex code is needed to build your dynamic SQL query.
If you forget one of the characteristics, this solution automatically finds it anyway.
GROUP BY in MySQL is often slow, and this avoids the GROUP BY.