To migrate my database faster, I tried copying the raw files (.frm, .MYD, and .MYI) of a database to another machine. All the tables work fine except two tables that were partitioned. My directory structure looks like this:
table1.frm
table1.MYI
table1.MYD
table2.frm
table2.par
table2#P#p0.MYD
table2#P#p0.MYI
table2#P#p1.MYD
table2#P#p1.MYI
table3.frm
table3.par
table3#P#p0.MYD
table3#P#p0.MYI
table3#P#p1.MYD
table3#P#p1.MYI
The following is producing an error:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| test |
+--------------------+
3 rows in set (0.06 sec)
mysql> use test;
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_test |
+---------------------------+
| table1 |
| table2 |
| table3 |
+---------------------------+
3 rows in set (0.00 sec)
mysql> explain table1;
+-------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| a | int(11) | YES | | NULL | |
+-------+---------+------+-----+---------+----------------+
2 rows in set (0.01 sec)
mysql> explain table2;
ERROR 1017 (HY000): Can't find file: 'table2' (errno: 2)
mysql> explain table3;
ERROR 1017 (HY000): Can't find file: 'table3' (errno: 2)
mysql> check TABLE table2;
+--------------------------+-------+----------+--------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+--------------------------+-------+----------+--------------------------------------------------+
| test.table2 | check | Error | Can't find file: 'table2' (errno: 2) |
| test.table2 | check | error | Corrupt |
+--------------------------+-------+----------+--------------------------------------------------+
2 rows in set (0.00 sec)
I checked the permissions and everything looks fine. I tried repair but that didn't seem to work either. Is there anything that can be done?
The server you are porting them to might not have partitioning enabled.
Try SHOW VARIABLES LIKE '%partition%'; and check the value of the variable have_partitioning or have_partition_engine (depending on your version of MySQL).
Further information can be found in the documentation.
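For example, the check on the target server might look like this (a minimal sketch; the exact variable name depends on the MySQL version, and on newer servers the partition engine shows up under SHOW PLUGINS instead):

SHOW VARIABLES LIKE '%partition%';
-- on a server with partitioning support you would expect something like:
-- | have_partitioning | YES |

SHOW PLUGINS;
-- on newer versions, look for a "partition" storage engine with Status = ACTIVE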
I have SQL, for example:
show tables from mydb;
It shows the list of tables:
|table1|
|table2|
|table3|
Then, I run a SQL statement for each table, such as "show full columns from table1;":
+----------+--------+-----------+------+-----+---------+----------------+---------------------------------+---------+
| Field | Type | Collation | Null | Key | Default | Extra | Privileges | Comment |
+----------+--------+-----------+------+-----+---------+----------------+---------------------------------+---------+
| id | bigint | NULL | NO | PRI | NULL | auto_increment | select,insert,update,references | |
| user_id | bigint | NULL | NO | MUL | NULL | | select,insert,update,references | |
| group_id | int | NULL | NO | MUL | NULL | | select,insert,update,references | |
+----------+--------+-----------+------+-----+---------+----------------+---------------------------------+---------+
So in this case I could use a programming language like this (this is not correct code, just showing the flow):
tables = cmd.execute("show tables from mydb;")
for t in tables:
    cmd.execute(f"show full columns from {t};")
However is it possible to do this in sql only?
If you are using MySQL, you can use the INFORMATION_SCHEMA system views.
They contain the table name and column name (and other details). No loop is required, and you can easily filter by other information, too.
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
If you are using Microsoft SQL Server, you can use the same query, since it also provides the INFORMATION_SCHEMA views.
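For example, to get the "show full columns" style information for every table in mydb as a single result set (column names as exposed by MySQL's INFORMATION_SCHEMA; SQL Server uses DATA_TYPE rather than COLUMN_TYPE):

SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_KEY, EXTRA
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
ORDER BY TABLE_NAME, ORDINAL_POSITION;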
I am learning about database indexing.
Here are the indexes on a table. The table has 330k records.
mysql> show index from employee;
+----------+------------+-------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
+----------+------------+-------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| employee | 0 | PRIMARY | 1 | id | A | 297383 | NULL | NULL | | BTREE | | | YES | NULL |
| employee | 0 | ak_employee | 1 | personal_code | A | 297383 | NULL | NULL | | BTREE | | | YES | NULL |
| employee | 1 | idx_email | 1 | email | A | 297383 | NULL | NULL | | BTREE | | | YES | NULL |
+----------+------------+-------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
As you can see, there are only three indexes on this table.
Now I want to query with a WHERE condition on the birth_date column. I expected it to be very slow because there is no index on birth_date, but when I tried the query, I found it was very fast.
mysql> select sql_no_cache *
-> from employee
-> where birth_date > '1955-11-11'
-> limit 100
-> ;
100 rows in set, 1 warning (0.04 sec)
So I am confused:
Why is it still so fast without an index?
Since it is already fast, why do we still need indexes?
This is your query:
select sql_no_cache *
from employee
where birth_date > '1955-11-11'
limit 100
There is no usable index, so the query starts reading rows from the data pages. For each row, it compares the birth_date and returns the row if it matches. When it has found 100 rows (because of the LIMIT), it stops.
Presumably, it finds 100 rows quite quickly. After all, the median age of the United States is about 38 -- which is (as I write this) a birth year of 1981. By far, most people were born after 1955.
The query would be much slower if you had an order by or group by. That would require reading all the data before returning anything.
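A rough sketch of where an index starts to matter (assuming the same employee table; idx_birth_date is a hypothetical index name):

ALTER TABLE employee ADD INDEX idx_birth_date (birth_date);

-- Without the index, the ORDER BY would typically force reading and sorting
-- all matching rows; with it, MySQL can usually read rows in birth_date order
-- and stop after the first 100.
EXPLAIN SELECT sql_no_cache *
FROM employee
WHERE birth_date > '1955-11-11'
ORDER BY birth_date
LIMIT 100;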
I have to update a value in one field of a table (t1).
Current records in table t1:
| POLNAME | VALUE  |
|---------|--------|
| TEST_01 | Normal |
| TEST_02 | High   |
| TEST_03 | Normal |
| TEST_04 | Low    |
| TEST_05 | Low**  |
Expected records in table t1 after the update:
| POLNAME | VALUE  |
|---------|--------|
| REST_01 | Normal |
| REST_02 | High   |
| REST_03 | Normal |
| REST_04 | Low    |
| REST_05 | Low**  |
I need to replace the prefix 'TEST_' with 'REST_' in the POLNAME field for all records of table t1.
I could do it one by one with separate UPDATE commands, but my goal is to update all records with a single SQL command.
You can use the REPLACE() SQL function:
UPDATE t1 SET POLNAME=REPLACE(POLNAME, 'TEST','REST');
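If you want to preview the change before running the UPDATE, a quick sanity check (using the same table and columns) could look like this:

SELECT POLNAME, REPLACE(POLNAME, 'TEST_', 'REST_') AS new_polname
FROM t1
WHERE POLNAME LIKE 'TEST%';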
I'm new to using stored procedures. What is the best way to update and insert using stored procedures? I have two tables that I can match by a distinct ID. I want to update if the ID exists in both my load table and my destination table, and insert if the row does not exist in my destination table. Just an example template would be very helpful, thanks!
If I understood correctly, you want to select values from one table and insert them into another table, and if the id already exists in the second table, update the row instead. If so, you need something like this:
mysql> select * from table_1;
+----+-----------+-----------+
| id | name | last_name |
+----+-----------+-----------+
| 1 | fagace | acero |
| 2 | ratangelo | saleh |
| 3 | hectorino | josefino |
+----+-----------+-----------+
3 rows in set (0.00 sec)
mysql> select * from table_2;
+----+-----------+-----------+
| id | name | last_name |
+----+-----------+-----------+
| 1 | fagace | acero |
| 2 | ratangelo | saleh |
+----+-----------+-----------+
2 rows in set (0.00 sec)
mysql> insert into table_2 select t1.id,t1.name,t1.last_name from table_1 t1 on duplicate key update name=t1.name, last_name=t1.last_name;
Query OK, 1 row affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> select * from table_2;
+----+-----------+-----------+
| id | name | last_name |
+----+-----------+-----------+
| 1 | fagace | acero |
| 2 | ratangelo | saleh |
| 3 | hectorino | josefino |
+----+-----------+-----------+
3 rows in set (0.00 sec)
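Since you asked about stored procedures: a minimal sketch of wrapping the same upsert in a procedure (table and column names taken from the example above; the procedure name is just illustrative):

DELIMITER //
CREATE PROCEDURE upsert_table_2()
BEGIN
    -- Insert every row from table_1; if the id already exists in table_2,
    -- update the existing row instead.
    INSERT INTO table_2 (id, name, last_name)
    SELECT t1.id, t1.name, t1.last_name
    FROM table_1 t1
    ON DUPLICATE KEY UPDATE
        name = t1.name,
        last_name = t1.last_name;
END //
DELIMITER ;

CALL upsert_table_2();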
You should look up the SQL MERGE statement.
It allows you to perform UPSERTs, i.e. INSERT if a key value does not already exist, UPDATE if the key does exist.
http://technet.microsoft.com/en-us/library/bb510625.aspx
However, your requirement to check for a key value in two places before you perform an update does make it more complex. I haven't tried this, but I would think a VIEW or a CTE could be used to establish whether the ID exists in both your tables, and then base the MERGE on the CTE/VIEW.
But definitely start by looking at MERGE!
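For illustration, a minimal MERGE sketch in SQL Server syntax (load_t and dest_t are hypothetical names for your load and destination tables, both keyed on id):

MERGE dest_t AS target
USING load_t AS source
    ON target.id = source.id
WHEN MATCHED THEN
    -- id exists in both tables: update the destination row
    UPDATE SET target.name = source.name,
               target.last_name = source.last_name
WHEN NOT MATCHED BY TARGET THEN
    -- id missing from the destination: insert it
    INSERT (id, name, last_name)
    VALUES (source.id, source.name, source.last_name);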
I was wondering if there's a drawback (other than bad practice) to using something like this
SELECT * FROM my_table WHERE id LIKE '1';
where id is an integer. I know you're supposed to use id = 1, but I am writing a Java program, and if everything can use LIKE it'll be a lot easier for me. Also, so far everything works fine; I get the correct query results, so if there is no drawback I will continue doing it like this.
edit: I am using MySQL.
MySQL will allow it, but will ignore the index:
mysql> describe METADATA_44;
+---------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| AtextId | int(11) | NO | PRI | NULL | |
| num | varchar(128) | YES | | NULL | |
| title | varchar(128) | YES | | NULL | |
| file | varchar(128) | YES | | NULL | |
| context | varchar(128) | YES | | NULL | |
| source | varchar(128) | YES | | NULL | |
+---------+--------------+------+-----+---------+-------+
6 rows in set (0.00 sec)
mysql> explain select * from METADATA_44 where Atextid like '7';
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
| 1 | SIMPLE | METADATA_44 | ALL | PRIMARY | NULL | NULL | NULL | 591 | Using where |
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
mysql> explain select * from METADATA_44 where Atextid=7;
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
| 1 | SIMPLE | METADATA_44 | const | PRIMARY | PRIMARY | 4 | const | 1 | |
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)
You'd need to look at the Query Execution Plan on your RDBMS to verify that LIKE with no wildcards is treated as efficiently as an = would be. A quick test in SQL Server shows that it would give you an index scan rather than a seek so I guess it doesn't look at that when generating the plan and for SQL Server using = would be much more efficient. I don't have a MySQL install to test against.
Edit: Just to update this, SQL Server seems to handle it fine and does a seek when the data type is varchar. When it is run against an int column, though, you get a scan. This is because it does an implicit conversion of the int column to varchar, so it can't use the index.
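In other words (hypothetical SQL Server example, assuming my_table has an int primary key on id):

SELECT * FROM my_table WHERE id LIKE '1';  -- implicit conversion to varchar, index scan
SELECT * FROM my_table WHERE id = 1;       -- index seek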
You are better off writing your query as
SELECT * FROM my_table WHERE id = 1;
otherwise MySQL will have to typecast '1' to int, which is the type of the column id, so there is a small performance penalty.
When you know the type of the column, supply the value according to that type.
Speed.
Without using any wildcards with LIKE, it should be fine for your needs if speed/efficiency is not something you are worried about.