Need help/tools to compare SQL query performance

I have this comments table with over 4 million rows:
CREATE TABLE `comments`
(
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`gid` int(11) unsigned NOT NULL DEFAULT '0',
`userid` int(6) unsigned DEFAULT NULL,
`date` int(11) unsigned DEFAULT NULL,
`comment` text NOT NULL,
`status` enum('on','alert') NOT NULL DEFAULT 'on',
PRIMARY KEY (`id`),
KEY `gid_2` (`gid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
Now I'm thinking about extracting the text column into its own table, to shrink the roughly 400 MB main table and improve performance. Like this:
CREATE TABLE commentstext
(
id int(11) unsigned NOT NULL AUTO_INCREMENT,
`comment` text NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
but I'm not sure this will actually perform better. I need to test these cases with different queries as well. My results so far vary a lot, between 0.001 and 3.321 sec, so just running queries in phpMyAdmin doesn't let me verify anything.
Is there a better or easier way, or a tool, to compare query performance?

That's what I was looking for:
SELECT BENCHMARK(1000000000, (
SELECT
comments.comment
FROM
comments
WHERE
`gid`=303410
LIMIT 1
));
(result 34.1612 sec.)
(result 32.2737 sec.)
SELECT BENCHMARK(1000000000, (
SELECT
commentstext.comment
FROM
commentsindex,
commentstext
WHERE
`gid`=303410
AND commentsindex.`id` = commentstext.`id`
LIMIT 1
));
(result 34.1237 sec.)
(result 34.2914 sec.)
SELECT BENCHMARK(1000000000, (
SELECT
commentstext.comment
FROM
commentsindex
INNER JOIN
commentstext
ON commentstext.`id` = commentsindex.`id`
WHERE
`gid`=303410
LIMIT 1
));
(result 32.8471 sec.)
(result 34.7079 sec.)
... but now I'm really wondering whether it matters at all which table design is in use. Confused.
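One caveat about the numbers above: BENCHMARK() evaluates a scalar expression repeatedly, and MySQL may optimize or cache the constant subquery, so near-identical timings don't necessarily prove the two designs are equivalent. Timing real queries from a client is often more telling. Here is a minimal sketch in Python, using sqlite3 as a stand-in (the same pattern works with a MySQL connector; the table and data are purely illustrative):

```python
import sqlite3
import time

# In-memory stand-in for the comments table (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, gid INTEGER, comment TEXT)")
conn.execute("CREATE INDEX gid_2 ON comments (gid)")
conn.executemany(
    "INSERT INTO comments (gid, comment) VALUES (?, ?)",
    [(i % 1000, "some comment text") for i in range(10000)],
)

def time_query(sql, params=(), runs=100):
    """Run a query repeatedly and return the average wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(runs):
        conn.execute(sql, params).fetchall()
    return (time.perf_counter() - start) / runs

avg = time_query("SELECT comment FROM comments WHERE gid = ? LIMIT 1", (303,))
print(f"avg: {avg:.6f} sec")
```

Averaging many runs of the real query against each candidate schema, with caches in a comparable state, gives a fairer comparison than a single BENCHMARK() of a cached subquery.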

Doing 4 way filter based on 3 tables using GORM

I've been trying to achieve a 4-way join/filter across the tables "Offers", "UserPaymentMethods", "PaymentMethods", and a junction table "OffersUserPaymentMethods", defined as below.
I want to filter offers based on payment_method_id, which is a bit tricky because offer_id lives in offers_user_payment_methods. The front end will send a payment_method_id, and I need to filter offers based on that payment_method_id; that's it.
CREATE TABLE `offers_user_payment_methods` (
`offer_id` bigint(20) unsigned NOT NULL,
`user_payment_method_id` bigint(20) unsigned NOT NULL
)
CREATE TABLE `offers` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_uid` longtext NOT NULL,
`base` varchar(20) NOT NULL,
`quote` varchar(20) NOT NULL,
`side` longtext NOT NULL,
`price` decimal(32,16) NOT NULL,
`origin_amount` decimal(32,16) NOT NULL,
`available_amount` decimal(32,16) NOT NULL,
`min_order_amount` decimal(32,16) NOT NULL,
`max_order_amount` decimal(32,16) NOT NULL,
`payment_time_limit` bigint(20) unsigned NOT NULL,
`state` longtext NOT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
CREATE TABLE `user_payment_methods` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_uid` longtext NOT NULL,
`payment_method_id` bigint(20) unsigned DEFAULT NULL,
`data` json DEFAULT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
CREATE TABLE `payment_methods` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`type` longtext NOT NULL,
`bank_name` longtext NOT NULL,
`logo` longtext NOT NULL,
`options` json DEFAULT NULL,
`enabled` tinyint(1) NOT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
You will struggle to do this efficiently and entirely with Gorm. Preloading/associations aren't done using joins in Gorm and there is no way to filter based on them. I see two potential options:
1. Write your own query using joins and scan in the results
You can use Gorm for the query and execution, but honestly I would avoid the need for reflection and so on, and just define a struct and scan straight into that.
The results will contain duplicated data, so you will have to manually transpose them and build up the objects.
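The transposition step is language-agnostic; here is a small illustration in Python (the row shape and field names are made up for the example), grouping the duplicated offer rows and collecting the payment-method IDs per offer:

```python
# Rows as the join would return them: (offer_id, base, payment_method_id),
# with offer fields duplicated once per payment method (names are made up).
rows = [
    (1, "BTC", 7),
    (1, "BTC", 8),
    (2, "ETH", 7),
]

# Build one object per offer, collecting its payment-method IDs.
offers_by_id = {}
for offer_id, base, pm_id in rows:
    offer = offers_by_id.setdefault(
        offer_id, {"id": offer_id, "base": base, "payment_method_ids": []}
    )
    offer["payment_method_ids"].append(pm_id)

print(list(offers_by_id.values()))
```

In Go you would do the same with a `map[int]*Offer`, appending to a slice field instead of a list.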
2. Execute two queries: one to find the IDs of the offers, and one to find the offers
The first query would be the equivalent of:
SELECT offers_user_payment_methods.offer_id FROM offers_user_payment_methods
INNER JOIN user_payment_methods ON offers_user_payment_methods.user_payment_method_id = user_payment_methods.id
WHERE user_payment_methods.payment_method_id = ?
If you scan these results into var offerIDs []int, you can use Gorm to find the offers by passing this slice as the param:
offers := make([]Offer, 0)
db.Find(&offers, offerIDs)
I think this solution has the benefit that you write the more complex query yourself and leave the easy part to Gorm (which is what it handles reasonably well).
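The two-query flow can be tried end to end outside of Go; here is a runnable sketch with Python's sqlite3 (table and column names taken from the schemas above, data invented), where the first query collects offer IDs and the second fetches the offers, which is the part `db.Find` does for you:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE offers (id INTEGER PRIMARY KEY, base TEXT, quote TEXT);
CREATE TABLE user_payment_methods (id INTEGER PRIMARY KEY, payment_method_id INTEGER);
CREATE TABLE offers_user_payment_methods (offer_id INTEGER, user_payment_method_id INTEGER);
INSERT INTO offers VALUES (1, 'BTC', 'USD'), (2, 'ETH', 'USD');
INSERT INTO user_payment_methods VALUES (10, 7), (11, 8);
INSERT INTO offers_user_payment_methods VALUES (1, 10), (2, 11);
""")

payment_method_id = 7

# Query 1: find the offer IDs for the given payment method.
offer_ids = [row[0] for row in conn.execute("""
    SELECT oupm.offer_id
    FROM offers_user_payment_methods oupm
    JOIN user_payment_methods upm ON oupm.user_payment_method_id = upm.id
    WHERE upm.payment_method_id = ?
""", (payment_method_id,))]

# Query 2: fetch the offers by ID (in Gorm this is db.Find(&offers, offerIDs)).
placeholders = ",".join("?" * len(offer_ids))
offers = conn.execute(
    f"SELECT * FROM offers WHERE id IN ({placeholders})", offer_ids
).fetchall()
print(offers)
```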

Is it possible to create a field with a math formula in SQL? If yes, how should I do it for this problem?

I'm working on a SQL table that stores money transactions for every day. This is my table design:
CREATE TABLE `transaction` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`date` datetime NOT NULL,
`member_id` int(11) NOT NULL,
`name` varchar(60) CHARACTER SET utf8 DEFAULT NULL,
`balance_lastMonth` int(11) NOT NULL,
`income` int(11) NOT NULL,
`outcome` int(11) NOT NULL,
`balance` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `member_id` (`member_id`),
CONSTRAINT `transaction_ibfk_1` FOREIGN KEY (`member_id`) REFERENCES `member` (`id`)
) ENGINE=InnoDB CHARSET=latin1
The formula for the balance field is: balance_lastMonth + income - outcome,
where balance_lastMonth is the balance from the previous month.
Is it possible to achieve this in one table? If yes, how? Or maybe there is a better way to do it. I'm using 10.4.6-MariaDB.
You can calculate the balance using a query:
select t.*,
sum(income - outcome) over (partition by member_id order by date) as balance,
sum(income - outcome) over (partition by member_id order by date) - (income - outcome) as balance_lastmonth
from transaction t;
The simplest thing to do is to encapsulate this in a view and just use that.
If you actually want to store the results in the table, you'll need to use a trigger. I don't recommend that approach unless you have some sort of requirement that the balances be stored.
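For illustration, the window-function query above can be tried on any engine that supports window functions (MariaDB has them since 10.2). Here is a runnable sketch using Python's sqlite3 (SQLite 3.25+), with the table renamed to txn and the columns trimmed for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txn (id INTEGER PRIMARY KEY, date TEXT, member_id INTEGER,
                  income INTEGER, outcome INTEGER);
INSERT INTO txn (date, member_id, income, outcome) VALUES
  ('2019-01-01', 1, 100, 30),
  ('2019-02-01', 1, 50, 20),
  ('2019-03-01', 1, 0, 10);
""")

# Running balance per member, plus the balance before the current row.
rows = conn.execute("""
    SELECT t.*,
           SUM(income - outcome) OVER (PARTITION BY member_id ORDER BY date) AS balance,
           SUM(income - outcome) OVER (PARTITION BY member_id ORDER BY date)
             - (income - outcome) AS balance_lastMonth
    FROM txn t
""").fetchall()
for r in rows:
    print(r)
```

The three rows come back with (balance, balance_lastMonth) of (70, 0), (100, 70) and (90, 100): each balance is the previous balance plus income minus outcome, exactly the formula from the question, without storing either derived column.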

How to delete a record from database where all fields are the same to another?

I have only two records in a database table and I want to delete just one of them.
The problem is that I don't have any primary key or unique identifier, so how can I delete one and only one record?
It seems like an easy question, but I couldn't find out how to do it.
CREATE TABLE `ToDo` (
`id` bigint(20) NOT NULL,
`caption` varchar(255) DEFAULT NULL,
`description` varchar(255) DEFAULT NULL,
`priority` int(11) DEFAULT NULL,
`done` tinyint(1) DEFAULT NULL,
`idUser_c` int(11) DEFAULT NULL,
`idUser_u` int(11) DEFAULT NULL,
`idUser_d` int(11) DEFAULT NULL,
`date_c` datetime DEFAULT NULL,
`date_u` datetime DEFAULT NULL,
`date_d` datetime DEFAULT NULL,
`version` bigint(20) DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `ToDo` (`id`,`caption`,`description`,`priority`,`done`,`idUser_c`,`idUser_u`,`idUser_d`,`date_c`,`date_u`,`date_d`,`version`) VALUES (3,'hello','how are you',2,0,1,1,1,'2018-03-03 13:35:54','2018-03-03 13:35:57','2018-03-03 13:36:00',0);
INSERT INTO `ToDo` (`id`,`caption`,`description`,`priority`,`done`,`idUser_c`,`idUser_u`,`idUser_d`,`date_c`,`date_u`,`date_d`,`version`) VALUES (3,'hello','how are you',2,0,1,1,1,'2018-03-03 13:35:54','2018-03-03 13:35:57','2018-03-03 13:36:00',0);
This addresses the title, which implies potentially more than 2 rows in the table:
CREATE TABLE new LIKE ToDo;
INSERT INTO new
SELECT DISTINCT id, caption, ...
FROM ToDo;
RENAME TABLE ToDo TO old,
new TO ToDo;
DROP TABLE old;
What a good reason for an auto-incremented column! You can add one:
alter table todo add ToDoId int auto_increment primary key;
This also populates the column for the existing rows.
Then you can do:
delete td
from todo td join
todo td1
on td.id = td1.id and td.caption = td1.caption and ... and
td1.ToDoId < td.ToDoId;
This assumes that the compared columns are not NULL.
Alternatively, fix the entire table:
create temporary table temp_todo as
select *
from todo;
truncate table todo;
insert into todo
select distinct *
from temp_todo;
This handles NULL values better than the first version.
Along the way, fix the table to have an auto-incremented primary key, so you can avoid this problem forevermore.
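The keep-one-copy delete can be demonstrated end to end. Here is a sketch with Python's sqlite3, where SQLite's implicit rowid plays the role of the auto-increment column added above (in MySQL you would add the column first); GROUP BY treats NULLs as equal, so NULL columns are handled too:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ToDo (id INTEGER, caption TEXT, description TEXT)")
row = (3, "hello", "how are you")
conn.execute("INSERT INTO ToDo VALUES (?, ?, ?)", row)
conn.execute("INSERT INTO ToDo VALUES (?, ?, ?)", row)  # exact duplicate

# Keep the copy with the smallest rowid; delete every other identical row.
conn.execute("""
    DELETE FROM ToDo
    WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM ToDo GROUP BY id, caption, description
    )
""")
print(conn.execute("SELECT COUNT(*) FROM ToDo").fetchone()[0])  # 1
```

One row survives, and the approach also works when a group has more than two duplicates.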
I think I found it myself, I just got stuck for a sec!
DELETE FROM ToDo WHERE ... LIMIT 1;

SQL performance ( MySQL )

I have this table:
CREATE TABLE `forum_rank` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL DEFAULT '0',
`rank` int(11) NOT NULL DEFAULT '0',
`forum_id` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
Now I'm asking which performs best: * or listing all the fields, like these two examples:
select * from forum_rank;
or
select id, user_id, rank, forum_id from forum_rank;
You should explicitly specify the columns. Otherwise the database engine first has to find out what the table's columns are (resolving the * operator) and only then perform the actual query.
I don't think performance will be a problem here. There's a better reason to prefer the second idiom: your code is less likely to break if you add additional columns.
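One concrete way the second point bites: code that unpacks a SELECT * row by position breaks as soon as a column is added, while an explicit column list keeps working. A small sketch with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE forum_rank (id INTEGER PRIMARY KEY, user_id INTEGER,"
    " rank INTEGER, forum_id INTEGER)"
)
conn.execute("INSERT INTO forum_rank VALUES (1, 42, 3, 7)")

# Positional unpacking of SELECT * works against today's schema...
_id, user_id, rank, forum_id = conn.execute("SELECT * FROM forum_rank").fetchone()

# ...but after adding a column, the same unpacking raises ValueError,
conn.execute("ALTER TABLE forum_rank ADD COLUMN score INTEGER DEFAULT 0")
try:
    _id, user_id, rank, forum_id = conn.execute("SELECT * FROM forum_rank").fetchone()
    star_broke = False
except ValueError:
    star_broke = True

# ...while the explicit column list is unaffected by the new column.
row = conn.execute("SELECT id, user_id, rank, forum_id FROM forum_rank").fetchone()
print(star_broke, row)
```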

Slow query, can I speed it up?

I'm retrieving images stored as blobs in the database using a python script running on the same server.
SELECT *
FROM imagedb_production.imagedb IMAGE
LEFT JOIN dccms_production.tblmedia MEDIA ON IMAGE.name = MEDIA.name
LEFT JOIN dccms_production.tblmultimedia CAP ON MEDIA.contentItemID = CAP.contentItemID
LIMIT 5000,100;
An EXPLAIN returns
id | select_type | table | type   | possible_keys             | key      | key_len | ref                                  | rows     | Extra
1  | SIMPLE      | IMAGE | index  | NULL                      | name_idx | 767     | NULL                                 | 10145962 | Using index
1  | SIMPLE      | MEDIA | ref    | name                      | name     | 63      | imagedb_production.IMAGE.name        | 1        |
1  | SIMPLE      | CAP   | eq_ref | PRIMARY,idx_contentItemID | PRIMARY  | 4       | dccms_production.MEDIA.contentItemID | 1        | Using index
This query takes close to 12 minutes. Is there any way I can speed it up before going through and tuning the MySQL DB instance?
Additional information
CREATE TABLE `imagedb` (
`multimediaID` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
`content` mediumblob,
`description` longtext,
`mime_type` varchar(255) default NULL,
PRIMARY KEY (`multimediaID`),
KEY `name_idx` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=2320759 DEFAULT CHARSET=utf8
CREATE TABLE `tblmedia` (
`mediaID` int(11) NOT NULL auto_increment,
`contentItemID` int(11) NOT NULL default '0',
`name` varchar(255) default NULL,
`width` int(11) default NULL,
`height` int(11) default NULL,
`file1Size` bigint(20) default NULL,
`file2Size` bigint(20) default NULL,
`mediaSlug` int(11) default NULL,
PRIMARY KEY (`mediaID`),
KEY `idx_contentItemID` (`contentItemID`),
KEY `name` (`name`(20))
) ENGINE=InnoDB AUTO_INCREMENT=899975 DEFAULT CHARSET=utf8
CREATE TABLE `tblmultimedia` (
`contentItemID` int(11) NOT NULL default '0',
`caption` text,
`mimeType` varchar(255) default NULL,
PRIMARY KEY (`contentItemID`),
KEY `idx_contentItemID` (`contentItemID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
You have 10,000,000 rows with no sorting; I would fix that by adding an ORDER BY clause, since LIMIT without one returns rows in an unpredictable order.
Older versions of MySQL did not take LIMIT clauses into account until late in query execution; I think newer versions do a better job of that. You might want to look into different ways of limiting the result set.
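One such way, when the result can be ordered by the primary key, is keyset (seek) pagination: instead of LIMIT 5000,100, which reads and throws away the first 5000 rows, remember the last key seen and seek past it. A sketch with Python's sqlite3 (column names borrowed from imagedb, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE imagedb (multimediaID INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO imagedb VALUES (?, ?)",
    [(i, f"img{i}") for i in range(1, 501)],
)

def page_after(last_id, page_size=100):
    """Keyset pagination: seek past the last seen key instead of using OFFSET."""
    return conn.execute(
        "SELECT multimediaID, name FROM imagedb "
        "WHERE multimediaID > ? ORDER BY multimediaID LIMIT ?",
        (last_id, page_size),
    ).fetchall()

first = page_after(0)              # rows with IDs 1..100
second = page_after(first[-1][0])  # rows with IDs 101..200
print(second[0])
```

Each page is an index range scan starting at the seek key, so page 50 costs the same as page 1, whereas LIMIT N,M gets slower as N grows.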