Creating a friend graph - SQL

I want to create a friend list for my website, stored in a database table. The following is the table structure that I think best serves the purpose.
CREATE TABLE `sdt_friend_graph` (
`user` INT(11) NOT NULL,
`friend` INT(11) NOT NULL,
`status` ENUM('requested','accepted') COLLATE utf8_unicode_ci DEFAULT NULL,
`requested_on` DATETIME DEFAULT NULL,
`accepted_on` DATETIME DEFAULT NULL,
PRIMARY KEY (`user`,`friend`)
)
I just want to find out if my approach is OK, or whether there is a better way to make it more efficient. I'm open to suggestions.
Regards,

Your table structure looks fine. I would just add user as an AUTO_INCREMENT field and change the name to friendid... just for semantics.
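For reference, here is a sketch of how a friends list might be pulled from the structure as posted (this assumes one row per friendship, stored under whichever user sent the request; 42 is just a placeholder user id):
-- List the accepted friends of user 42, whichever side initiated the request.
SELECT IF(user = 42, friend, user) AS friend_id,
       accepted_on
FROM sdt_friend_graph
WHERE 42 IN (user, friend)
  AND status = 'accepted';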


SQL messages that can be read by specified users

Task:
At present, the database knows two types of messages:
Messages that a user posts and that are public for anyone and everyone to read
Messages that a user posts and that are non-public.
These messages can only be read by users that the posting user has marked as friends.
In this step, you should add a third type of message. This third type of message should be readable by specified recipients only.
This means the database needs to provide the following:
A way of distinguishing between the three types of messages. This involves a change to the Message table.
A way of specifying who the recipients of a particular message are. This will probably require an additional table.
Your job is to implement the necessary changes and the additional table for this purpose, along with any keys and foreign key relationships required.
Here are the two existing tables which relate to the task (copied from my DB).
User table
CREATE TABLE IF NOT EXISTS `User` (
`user_id` int(10) unsigned NOT NULL auto_increment,
`given_name` varchar(60) default NULL,
`surname` varchar(60) default NULL,
`address` varchar(255) default NULL,
`city_id` int(10) unsigned NOT NULL,
`date_of_birth` datetime default NULL,
`email` varchar(80) default NULL,
PRIMARY KEY (`user_id`),
KEY `ix_user_surname` (`surname`),
KEY `ix_user_given_name` (`given_name`),
KEY `ix_user_name` (`given_name`,`surname`),
KEY `ix_user_date_of_birth` (`date_of_birth`),
KEY `ix_user_email` (`email`),
KEY `ix_user_city_id` (`city_id`)
) ENGINE=InnoDB
Message table
CREATE TABLE IF NOT EXISTS `Message` (
`message_id` int(10) unsigned NOT NULL auto_increment,
`owner_id` int(10) unsigned default NULL,
`subject` varchar(255) default NULL,
`body` text,
`posted` datetime default NULL,
`is_public` tinyint(4) default '0',
PRIMARY KEY (`message_id`),
KEY `ix_message_owner_id` (`owner_id`)
) ENGINE=InnoDB
OK, so is_public gives you the ability to distinguish between two types (e.g. is_public = '0' means private, and is_public = '1' means public). But now you have the new concept of specified recipients, so the yes/no model won't work anymore because you have three types. Usually in this situation you switch to a flag or type column.
So maybe make a message_type column that is one of 'PUBLIC', 'PRIVATE', 'SPECIFIED' or something like that.
After that, it sounds like you need at least two more tables: users must be able to specify friends, and users must be able to specify the recipients of particular messages.
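As a rough sketch of what the message side could look like (the message_type values and the MessageRecipient table and column names are suggestions, not part of the existing schema):
-- Replace the yes/no flag with a three-valued type column.
ALTER TABLE `Message`
  ADD COLUMN `message_type` ENUM('PUBLIC','PRIVATE','SPECIFIED') NOT NULL DEFAULT 'PUBLIC';
-- One row per (message, recipient) pair for messages of type 'SPECIFIED'.
CREATE TABLE IF NOT EXISTS `MessageRecipient` (
  `message_id` int(10) unsigned NOT NULL,
  `recipient_id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`message_id`, `recipient_id`),
  KEY `ix_messagerecipient_recipient_id` (`recipient_id`),
  CONSTRAINT `fk_messagerecipient_message` FOREIGN KEY (`message_id`) REFERENCES `Message` (`message_id`),
  CONSTRAINT `fk_messagerecipient_user` FOREIGN KEY (`recipient_id`) REFERENCES `User` (`user_id`)
) ENGINE=InnoDB;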

I want to add a column in my table for a user's profile pic

I want to add an extra field to my database table users.
The table is currently like this:
CREATE TABLE users (
user_id INT(8) NOT NULL AUTO_INCREMENT,
user_name VARCHAR(30) NOT NULL,
user_pass VARCHAR(255) NOT NULL,
user_email VARCHAR(255) NOT NULL,
user_date DATETIME NOT NULL,
user_level INT(8) NOT NULL,
UNIQUE INDEX user_name_unique (user_name),
PRIMARY KEY (user_id)
);
How will it look if I add a column for profile pic data of the user?
Thanks!
Basically, there are two options:
store the picture data in the database
store picture locations in the database and the pictures themselves on the filesystem (the picture location points to a location on the filesystem)
Option 2 is generally preferred.
In that case your table becomes:
CREATE TABLE users (
user_id INT(8) NOT NULL AUTO_INCREMENT,
user_name VARCHAR(30) NOT NULL,
user_pass VARCHAR(255) NOT NULL,
user_email VARCHAR(255) NOT NULL,
user_date DATETIME NOT NULL,
user_level INT(8) NOT NULL,
pic_location VARCHAR(255) NOT NULL,
UNIQUE INDEX user_name_unique (user_name),
PRIMARY KEY (user_id)
);
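Since the table already exists, you would more likely add the column in place rather than recreate the table; a sketch (the DEFAULT '' is an assumption so that existing rows get a value):
ALTER TABLE users ADD COLUMN pic_location VARCHAR(255) NOT NULL DEFAULT '';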
Some suggestions for the future:
Please search a bit first before asking this; see "Store pictures as files or in the database for a web app?"
Make your question header a question. This leads more people to actually want to answer your question, instead of having to click through before knowing what you want to accomplish.
Cheers,
Geert-Jan
If you need to store the photo data in the table itself, you need a BLOB column:
ALTER TABLE users ADD COLUMN photo MEDIUMBLOB;

SQL tables with similar structure - best practices

Imagine that we have a website where users can read articles, view photos, watch videos, and more. Every "item" may be commented on, so we need somewhere to store those comments. Let's discuss the storage possibilities for this case.
Distributed solution
We can obviously create separate tables for each "item", so that we have tables like:
CREATE TABLE IF NOT EXISTS `article_comments` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`createdBy` int(11) DEFAULT NULL,
`createdAt` int(11) DEFAULT NULL,
`article` int(11) DEFAULT NULL,
`content` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
and then obviously photo_comments, video_comments, and so on. The advantages of this approach are as follows:
we can specify a foreign key to every "item" table,
the database is divided into logical parts,
there is no problem exporting such data.
Disadvantages:
many tables
probably hard to maintain (adding fields, etc.)
Centralized solution
On the other hand, we can merge all those tables into two:
CREATE TABLE IF NOT EXISTS `comment_types` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
and
CREATE TABLE IF NOT EXISTS `comments` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`createdBy` int(11) DEFAULT NULL,
`createdAt` int(11) DEFAULT NULL,
`type` int(11) DEFAULT NULL,
`content` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
The comment_types table is a dictionary: it contains key-value pairs of the commented item "type" and its name, for example:
1:Articles
2:Photos
3:Videos
The comments table stores the usual data with an additional type field.
Advantages:
Easier maintenance (adding / removing fields),
Adding new comment types "on the fly".
Disadvantages:
Harder to migrate / export,
Possible performance drop when querying a large dataset.
Discussion:
Which storage option will be better in terms of query performance (assume that the dataset IS big enough for that to matter)?
Again on performance: will adding an INDEX on type remove or drastically reduce that performance drop?
Which storage option will be better in terms of management and possible future migration? (Distributed will be better, of course, but let's see whether the centralized one isn't far behind.)
I'm not sure either of the disadvantages you list for the centralized option is serious: data export is easily accomplished with a simple WHERE clause, and I wouldn't worry about performance. That design is properly normalised, and in a modern relational database performance should be excellent (and can be tweaked further with appropriate indexes etc. if necessary).
I would only consider the distributed option if I could prove that it was necessary for performance, scalability or other reasons - but, it must be said, that seems unlikely.
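If you do go with the centralized layout and are worried about the type filter, here is a sketch of the index and foreign key involved, plus a typical listing query (the index and constraint names are made up; the table and column names are from the question):
-- Index the type (and creation time) so filtered listings stay fast, and enforce the dictionary reference.
ALTER TABLE `comments`
  ADD INDEX `ix_comments_type_createdAt` (`type`, `createdAt`),
  ADD CONSTRAINT `fk_comments_type` FOREIGN KEY (`type`) REFERENCES `comment_types` (`id`);
-- Latest comments of one type, e.g. 1 = Articles in the example dictionary.
SELECT id, createdBy, createdAt, content
FROM comments
WHERE type = 1
ORDER BY createdAt DESC
LIMIT 20;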

Optimize a MySQL LIKE query

I added the jQuery autocomplete plugin to my places text field to help users better select a location. What I didn't realize before building it was that the query would be very slow.
select * from `geoplanet_places` where name LIKE "%San Diego%" AND (place_type = "County" OR place_type = "Town")
The query above took 1.18 seconds. Then I tried adding indexes for name and place_type but that only slowed it down (1.93s).
Is there a way to optimize this query, or another technique to speed it up?
The geoplanet_places table has 437,715 rows (MySQL):
CREATE TABLE `geoplanet_places` (
`id` int(11) NOT NULL auto_increment,
`woeid` bigint(20) default NULL,
`parent_woeid` bigint(20) default NULL,
`country_code` varchar(255) collate utf8_unicode_ci default NULL,
`name` varchar(255) collate utf8_unicode_ci default NULL,
`language` varchar(255) collate utf8_unicode_ci default NULL,
`place_type` varchar(255) collate utf8_unicode_ci default NULL,
`ancestry` varchar(255) collate utf8_unicode_ci default NULL,
`activity_count` int(11) default '0',
`activity_count_updated_at` datetime default NULL,
`bounding_box` blob,
`slug` varchar(255) collate utf8_unicode_ci default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_geoplanet_places_on_woeid` (`woeid`),
KEY `index_geoplanet_places_on_ancestry` (`ancestry`),
KEY `index_geoplanet_places_on_parent_woeid` (`parent_woeid`),
KEY `index_geoplanet_places_on_slug` (`slug`),
KEY `index_geoplanet_places_on_name` (`name`),
KEY `index_geoplanet_places_on_place_type` (`place_type`)
) ENGINE=InnoDB AUTO_INCREMENT=5652569 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
EXPLAIN
id 1
select_type SIMPLE
table geoplanet_places
type ALL
possible_keys index_geoplanet_places_on_place_type
key NULL
key_len NULL
ref NULL
rows 441273
Extra Using where
You can switch the storage engine of the table to MyISAM to take advantage of full text indexing.
The name index won't help you unless you change the LIKE to LIKE 'San Diego%', which can do a prefix search on the index.
Get rid of the leading '%' in your WHERE ... LIKE clause, so it becomes: WHERE name LIKE "San Diego%". For autocomplete this seems a reasonable limitation (it assumes that the user starts typing the correct characters), and it should speed up the query significantly, as MySQL will be able to use an existing index (index_geoplanet_places_on_name).
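For example, the prefix form of the query would look like this (a sketch; IN is just a compact way of writing the two OR'd place types):
SELECT *
FROM geoplanet_places
WHERE name LIKE 'San Diego%'
  AND place_type IN ('County', 'Town');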

How to fix the error that I get using MySQL LOAD DATA INFILE?

I have a products table with the following structure
CREATE TABLE IF NOT EXISTS `products` (
`id` int(50) NOT NULL AUTO_INCREMENT,
`productname` varchar(255) NOT NULL,
`description` text NOT NULL,
`merchanturl` text NOT NULL,
`imageurl` text NOT NULL,
`price` varchar(10) NOT NULL,
`original` varchar(10) NOT NULL,
`currency` varchar(12) NOT NULL,
`extrafields` text NOT NULL,
`feedid` varchar(25) NOT NULL,
`category` varchar(255) NOT NULL,
`merchant` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `productname` (`productname`),
FULLTEXT KEY `description` (`description`)
) ENGINE=MyISAM;
I use the MySQL LOAD DATA INFILE command to import delimited data files into this table. It has 4 million records now. When I import more data using LOAD DATA INFILE I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock' (2)
I am not able to access the products table after that.
How can I improve the performance of the table? Note that some data files are more than 100 MB in size. I have another 4 million entries which need to be imported into the table.
Please suggest methods to avoid these issues.
Thanks,
Sree
Try connecting to the MySQL server using TCP/IP instead of the socket. Sockets are only available on Unix-like operating systems.
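For example, connecting with mysql -h 127.0.0.1 -P 3306 -u youruser -p yourdb (user and database are placeholders) forces a TCP connection, because 127.0.0.1 rather than localhost makes the client skip the socket. The import itself then keeps its usual LOAD DATA INFILE form (the path and delimiters below are placeholders for your actual file format):
LOAD DATA INFILE '/path/to/products.txt'
INTO TABLE products
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';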