Optimize a mysql like query - sql

I added the jquery autocomplete plugin to my places textfield to help users better select a location. What I didn't realize before building is that the query would be very slow.
select * from `geoplanet_places` where name LIKE "%San Diego%" AND (place_type = "County" OR place_type = "Town")
The query above took 1.18 seconds. I then tried adding indexes on name and place_type, but that only slowed it down (1.93s).
Is there a way to optimize this query, or is there another technique to speed it up?
The geoplanet_places table has 437,715 rows (MySQL).
CREATE TABLE `geoplanet_places` (
`id` int(11) NOT NULL auto_increment,
`woeid` bigint(20) default NULL,
`parent_woeid` bigint(20) default NULL,
`country_code` varchar(255) collate utf8_unicode_ci default NULL,
`name` varchar(255) collate utf8_unicode_ci default NULL,
`language` varchar(255) collate utf8_unicode_ci default NULL,
`place_type` varchar(255) collate utf8_unicode_ci default NULL,
`ancestry` varchar(255) collate utf8_unicode_ci default NULL,
`activity_count` int(11) default '0',
`activity_count_updated_at` datetime default NULL,
`bounding_box` blob,
`slug` varchar(255) collate utf8_unicode_ci default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_geoplanet_places_on_woeid` (`woeid`),
KEY `index_geoplanet_places_on_ancestry` (`ancestry`),
KEY `index_geoplanet_places_on_parent_woeid` (`parent_woeid`),
KEY `index_geoplanet_places_on_slug` (`slug`),
KEY `index_geoplanet_places_on_name` (`name`),
KEY `index_geoplanet_places_on_place_type` (`place_type`)
) ENGINE=InnoDB AUTO_INCREMENT=5652569 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
EXPLAIN
id 1
select_type SIMPLE
table geoplanet_places
type ALL
possible_keys index_geoplanet_places_on_place_type
key NULL
key_len NULL
ref NULL
rows 441273
Extra Using where

You can add a full-text index and query it with MATCH ... AGAINST instead of LIKE. Historically that meant switching the table's storage engine to MyISAM; InnoDB supports FULLTEXT indexes as of MySQL 5.6.
The name index won't help you unless you change the pattern to LIKE 'San Diego%' (no leading wildcard), which can do a prefix search on the index.
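A sketch of the full-text approach (assuming MySQL 5.6+ with InnoDB, or MyISAM on older versions; the index name `ft_name` is just an illustration):

```sql
-- Add a full-text index on name (InnoDB on MySQL 5.6+, or MyISAM).
ALTER TABLE `geoplanet_places` ADD FULLTEXT INDEX `ft_name` (`name`);

-- Full-text search instead of a leading-wildcard LIKE.
SELECT *
FROM `geoplanet_places`
WHERE MATCH(`name`) AGAINST ('San Diego' IN BOOLEAN MODE)
  AND `place_type` IN ('County', 'Town');
```

Note that full-text matching works on whole words, so it behaves differently from `%San Diego%`: it will not match a substring in the middle of a word.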

Get rid of the leading '%' in your LIKE clause, so it becomes: WHERE name LIKE 'San Diego%'. For autocomplete this seems a reasonable limitation (it assumes the user starts typing the correct characters), and it should speed up the query significantly, as MySQL will be able to use the existing index (index_geoplanet_places_on_name).
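A sketch of that approach; the composite index at the end is an optional extra (its name `idx_type_name` is just an illustration) that lets MySQL satisfy both the place_type filter and the name prefix from a single index:

```sql
-- Prefix search can use the existing index on name.
SELECT *
FROM `geoplanet_places`
WHERE `name` LIKE 'San Diego%'
  AND `place_type` IN ('County', 'Town');

-- Optional: a composite index covering the place_type filter as well.
ALTER TABLE `geoplanet_places`
  ADD INDEX `idx_type_name` (`place_type`, `name`);
```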

Related

Doing 4 way filter based on 3 tables using GORM

I've been trying to achieve a join/filter across three tables: "Offers", "UserPaymentMethods", and a junction table "OffersUserPaymentMethods", defined as below.
I want to filter "offers" by payment_method_id, which is a bit tricky because offer_id lives in offers_user_payment_methods. The front end will send a payment_method_id, and I need to return the offers that match it, that's it.
CREATE TABLE `offers_user_payment_methods` (
`offer_id` bigint(20) unsigned NOT NULL,
`user_payment_method_id` bigint(20) unsigned NOT NULL
)
CREATE TABLE `offers` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_uid` longtext NOT NULL,
`base` varchar(20) NOT NULL,
`quote` varchar(20) NOT NULL,
`side` longtext NOT NULL,
`price` decimal(32,16) NOT NULL,
`origin_amount` decimal(32,16) NOT NULL,
`available_amount` decimal(32,16) NOT NULL,
`min_order_amount` decimal(32,16) NOT NULL,
`max_order_amount` decimal(32,16) NOT NULL,
`payment_time_limit` bigint(20) unsigned NOT NULL,
`state` longtext NOT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
CREATE TABLE `user_payment_methods` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_uid` longtext NOT NULL,
`payment_method_id` bigint(20) unsigned DEFAULT NULL,
`data` json DEFAULT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL,
)
CREATE TABLE `payment_methods` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`type` longtext NOT NULL,
`bank_name` longtext NOT NULL,
`logo` longtext NOT NULL,
`options` json DEFAULT NULL,
`enabled` tinyint(1) NOT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
You will struggle to do this efficiently and entirely with Gorm. Preloading/associations aren't done using joins in Gorm and there is no way to filter based on them. I see two potential options:
1. Write your own query using joins and scan in the results
You can use Gorm for the query and execution, but honestly, I would avoid the need for reflection and so on, and just define a result struct and scan straight into that.
The results will contain duplicated data, so you will have to manually transpose the results and build up the object.
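A sketch of what that hand-written join might look like against the schema above (if an offer can be linked to several matching user_payment_methods rows, add DISTINCT):

```sql
SELECT offers.*
FROM offers
INNER JOIN offers_user_payment_methods
        ON offers_user_payment_methods.offer_id = offers.id
INNER JOIN user_payment_methods
        ON user_payment_methods.id = offers_user_payment_methods.user_payment_method_id
WHERE user_payment_methods.payment_method_id = ?;
```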
2. Execute two queries: one to find the IDs of the matching offers, and one to fetch the offers
The first query would be the equivalent of:
SELECT offers_user_payment_methods.offer_id FROM offers_user_payment_methods
INNER JOIN user_payment_methods ON offers_user_payment_methods.user_payment_method_id = user_payment_methods.id
WHERE user_payment_methods.payment_method_id = ?
If you scan these results into var offerIDs []int, you can use Gorm to find the offers by passing this slice as the param:
offers := make([]Offer, 0)
db.Find(&offers, offerIDs)
I think this solution has the benefit that you write the more complex query yourself and leave the easy part to Gorm (which is what it handles reasonably well).
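For reference, the two steps are equivalent to a single statement with a subquery, which you could also run directly if you ever drop down to raw SQL (a sketch against the schema above):

```sql
SELECT *
FROM offers
WHERE id IN (
    SELECT offers_user_payment_methods.offer_id
    FROM offers_user_payment_methods
    INNER JOIN user_payment_methods
            ON offers_user_payment_methods.user_payment_method_id = user_payment_methods.id
    WHERE user_payment_methods.payment_method_id = ?
);
```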

Need a database efficiency suggestion

I have the following database table. I am trying to figure out how to structure it so that each player column can also have a position. Since each user will have multiple players and there will be multiple users, I can't work out the best way to model this table efficiently.
CREATE TABLE `user_players` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`firstname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`lastname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`username` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`email` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player1` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player2` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player3` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player4` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player5` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player6` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
The only thing that I can think of is adding a player_position for every player, so that it would look like this...
`player1` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player_position1` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player2` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player_position2` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
Is there a better, more efficient way to do this?
You need separate tables for users and players. The player table will have a foreign key for the user that owns it.
If you want to design efficient databases, then I'd suggest you first learn at least the basics of normalization.
To learn basics of Normalization, refer to:
What is Normalisation (or Normalization)?
http://www.studytonight.com/dbms/database-normalization.php
https://www.youtube.com/watch?v=tCabZRVXv2I
Clearly your database is not normalized.
Issue 1:
Achieve first normal form by assigning a primary key.
Issue 2:
Your table contains a transitive dependency: if you take id as the primary key, the player fields depend on a non-key attribute (user_id).
Fix it by creating separate tables for users and players.
Also take a look at the concept of a foreign key.
Once you fix these two issues you'll no longer need both id and user_id together; you can drop one of them.
Final Database Schema:
CREATE TABLE `user` (
`user_id` int(11) NOT NULL PRIMARY KEY, /*Make it AUTO_INCREMENT if you wish to*/
`firstname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`lastname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`username` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`email` varchar(100) COLLATE utf8_unicode_ci NOT NULL
)
CREATE TABLE `player` (
`player_id` int(11) NOT NULL PRIMARY KEY, /*Make it AUTO_INCREMENT if you wish to*/
`user_id` int(11) NOT NULL,
`name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`position` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
FOREIGN KEY (`user_id`) REFERENCES `user`(`user_id`)
)
P.S.: Syntax may vary depending upon the type of database that you're using.
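With a one-row-per-player design, each user's players and their positions come back with a simple join. A sketch, assuming the player table holds one row per player with `name`, `position`, and `user_id` columns (those column names are an assumption here):

```sql
-- All players (with positions) belonging to a given user.
SELECT user.username, player.name, player.position
FROM user
INNER JOIN player ON player.user_id = user.user_id
WHERE user.user_id = ?;
```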

Gii CRUD generator and related tables

I am using the Yii framework and I have a problem with the CRUD generator.
I have two tables, users and news, with the following structures:
CREATE TABLE IF NOT EXISTS `news` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`keyword` varchar(1000) COLLATE utf8_persian_ci DEFAULT NULL,
`user_id` tinyint(3) unsigned NOT NULL,
`title` varchar(100) COLLATE utf8_persian_ci DEFAULT NULL,
`body` varchar(1000) COLLATE utf8_persian_ci DEFAULT NULL,
`publishedat` date DEFAULT NULL,
`state` tinyint(1) unsigned DEFAULT NULL,
`archive` tinyint(1) unsigned DEFAULT NULL,
`last_modified` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `news_FKIndex1` (`keyword`(255)),
KEY `news_FKIndex2` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_persian_ci AUTO_INCREMENT=3 ;
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`username` varchar(20) NOT NULL,
`password` varchar(128) NOT NULL,
`create_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`lastvisit_at` timestamp NULL DEFAULT NULL,
`is_disabled` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`),
UNIQUE KEY `username` (`username`),
KEY `status` (`is_disabled`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=7 ;
When I generate CRUD for my news table using Gii, I cannot see the fields from the users table. Instead of user_id I want to see the username in the table created by the CRUD generator. How can I change the code to get that result?
First, user_id needs to be an actual foreign key, not just an indexed column.
Second, Gii will not generate the field as you require by default. For such functionality an extension such as Giix might help. However, once the relation exists you can always use relationName.username to display the username in a grid view or a list view.
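A sketch of that first step (the constraint name `fk_news_user` is just an illustration). Note that news.user_id is declared tinyint(3) unsigned while users.id is int(11), and InnoDB requires the two columns to have matching types before a foreign key can be added:

```sql
-- Align the column types, then add the constraint.
ALTER TABLE `news`
  MODIFY `user_id` int(11) NOT NULL;

ALTER TABLE `news`
  ADD CONSTRAINT `fk_news_user`
  FOREIGN KEY (`user_id`) REFERENCES `users` (`id`);
```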

creating friend graph

I want to create a friend list for my website which is supposed to be stored in a database table, following is the table structure I think should best serve the purpose.
CREATE TABLE `sdt_friend_graph` (
`user` INT(11) NOT NULL,
`friend` INT(11) NOT NULL,
`status` ENUM('requested','accepted') COLLATE utf8_unicode_ci DEFAULT NULL,
`requested_on` DATETIME DEFAULT NULL,
`accepted_on` DATETIME DEFAULT NULL,
PRIMARY KEY (`user`,`friend`)
)
I just want to find out if my approach is OK, or whether there is a better way to do this and make it more efficient. I'm open to suggestions.
Regards,
Your table structure looks fine. I would just make user an AUTO_INCREMENT field and rename it to friendid... just for semantics.
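One thing to keep in mind with this design: a friendship row can be stored in either direction (user, friend), so listing someone's accepted friends needs to check both columns. A sketch against the schema above:

```sql
-- All accepted friends of user 42, whichever column they appear in.
SELECT IF(`user` = 42, `friend`, `user`) AS friend_id
FROM `sdt_friend_graph`
WHERE 42 IN (`user`, `friend`)
  AND `status` = 'accepted';
```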

Slow query, can I speed it up?

I'm retrieving images stored as blobs in the database using a python script running on the same server.
SELECT *
FROM imagedb_production.imagedb IMAGE
LEFT JOIN dccms_production.tblmedia MEDIA ON IMAGE.name = MEDIA.name
LEFT JOIN dccms_production.tblmultimedia CAP ON MEDIA.contentItemID = CAP.contentItemID
LIMIT 5000,100;
An EXPLAIN returns
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE IMAGE index NULL name_idx 767 NULL 10145962 Using index
1 SIMPLE MEDIA ref name name 63 imagedb_production.IMAGE.name 1
1 SIMPLE CAP eq_ref PRIMARY,idx_contentItemID PRIMARY 4 dccms_production.MEDIA.contentItemID 1 Using index
(Sorry the output looks like crap)
This query takes close to 12 minutes is there any way I can speed this up before going through and tuning the mysql db instance?
Additional information
CREATE TABLE `imagedb` (
`multimediaID` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
`content` mediumblob,
`description` longtext,
`mime_type` varchar(255) default NULL,
PRIMARY KEY (`multimediaID`),
KEY `name_idx` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=2320759 DEFAULT CHARSET=utf8
CREATE TABLE `tblmedia` (
`mediaID` int(11) NOT NULL auto_increment,
`contentItemID` int(11) NOT NULL default '0',
`name` varchar(255) default NULL,
`width` int(11) default NULL,
`height` int(11) default NULL,
`file1Size` bigint(20) default NULL,
`file2Size` bigint(20) default NULL,
`mediaSlug` int(11) default NULL,
PRIMARY KEY (`mediaID`),
KEY `idx_contentItemID` (`contentItemID`),
KEY `name` (`name`(20))
) ENGINE=InnoDB AUTO_INCREMENT=899975 DEFAULT CHARSET=utf8
CREATE TABLE `tblmultimedia` (
`contentItemID` int(11) NOT NULL default '0',
`caption` text,
`mimeType` varchar(255) default NULL,
PRIMARY KEY (`contentItemID`),
KEY `idx_contentItemID` (`contentItemID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
You are paging through roughly 10,000,000 rows with no ordering; I would fix that first by adding an explicit ORDER BY clause. Without one, LIMIT 5000,100 is not even guaranteed to return a stable page of rows.
Older versions of MySQL did not apply the LIMIT until late in execution; newer versions do a better job of that. You might also want to look into different ways to limit the result set, since a large OFFSET still forces the server to read and discard all the skipped rows.
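One common alternative (a sketch, not from the original answer) is keyset pagination: order by the primary key and remember the last ID the previous page returned, so each page becomes an index range scan instead of an offset that reads and throws away rows. Selecting only the columns you need, rather than *, also avoids dragging every mediumblob through the join:

```sql
-- First page: ORDER BY the primary key, no OFFSET.
SELECT IMAGE.multimediaID, IMAGE.name, MEDIA.contentItemID, CAP.caption
FROM imagedb_production.imagedb IMAGE
LEFT JOIN dccms_production.tblmedia MEDIA ON IMAGE.name = MEDIA.name
LEFT JOIN dccms_production.tblmultimedia CAP ON MEDIA.contentItemID = CAP.contentItemID
ORDER BY IMAGE.multimediaID
LIMIT 100;

-- Next pages: resume after the last multimediaID seen, e.g.
-- ... WHERE IMAGE.multimediaID > ? ORDER BY IMAGE.multimediaID LIMIT 100;
```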