Need a database efficiency suggestion - sql

I have the following database table. I am trying to figure out how to structure it so that each player column can also have a position. Since each user will have multiple players and there will be multiple users, I can't figure out the best way to model the table efficiently.
CREATE TABLE `user_players` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`firstname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`lastname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`username` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`email` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player1` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player2` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player3` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player4` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player5` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player6` varchar(100) COLLATE utf8_unicode_ci NOT NULL
);
The only thing that I can think of is adding a player_position column for every player, so that it would look like this...
`player1` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player_position1` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player2` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`player_position2` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
Is there a better, more efficient way to do this?

You need separate tables for users and players. The player table will have a foreign key for the user that owns it.

If you want to design efficient databases, then I'd suggest first getting at least some basic knowledge of normalization.
To learn basics of Normalization, refer to:
What is Normalisation (or Normalization)?
http://www.studytonight.com/dbms/database-normalization.php
https://www.youtube.com/watch?v=tCabZRVXv2I
Clearly your database is not Normalized and needs Normalization.
Issue 1:
Achieve First Normal Form (1NF) by assigning a primary key and eliminating the repeating player1…player6 column group.
Issue 2:
Your table contains a transitive dependency: if you take id as the primary key, the player fields depend on the non-key attribute user_id rather than on the key itself.
Fix it by creating separate tables for users and players.
Also take a look at the concept of Foreign Key.
If you fix these two issues, you'll no longer need both id and user_id in the same table; you can drop one of them.
Final Database Schema:
CREATE TABLE `user` (
`user_id` int(11) NOT NULL PRIMARY KEY, /*Make it AUTO_INCREMENT if you wish to*/
`firstname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`lastname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`username` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`email` varchar(100) COLLATE utf8_unicode_ci NOT NULL
);
CREATE TABLE `player` (
`player_id` int(11) NOT NULL PRIMARY KEY, /*Make it AUTO_INCREMENT if you wish to*/
`user_id` int(11) NOT NULL,
`name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`position` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
FOREIGN KEY (`user_id`) REFERENCES `user`(`user_id`)
);
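For example, assuming the player table is reshaped to one row per player (with columns such as `name` and `position` plus the `user_id` foreign key), fetching a user's full roster with positions becomes a single join — a sketch:

```sql
-- One row per player; `name` and `position` are assumed column names.
SELECT u.username, p.name, p.position
FROM `user` u
JOIN `player` p ON p.user_id = u.user_id
WHERE u.user_id = 42;
```

This also means adding a seventh player is an INSERT, not a schema change.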
P.S.: Syntax may vary depending upon the type of database that you're using.

Related

Doing 4 way filter based on 3 tables using GORM

I've been trying to achieve a 4-way join/filter based on the tables "Offers" and "UserPaymentMethods" and a junction table "OffersUserPaymentMethods", defined as below.
So I want to filter "offers" based on payment_method_id, but offer_id lives in offers_user_payment_methods, which makes it a bit tricky. The front-end will send payment_method_id and I need to filter offers based on that payment_method_id; that's it.
CREATE TABLE `offers_user_payment_methods` (
`offer_id` bigint(20) unsigned NOT NULL,
`user_payment_method_id` bigint(20) unsigned NOT NULL
)
CREATE TABLE `offers` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_uid` longtext NOT NULL,
`base` varchar(20) NOT NULL,
`quote` varchar(20) NOT NULL,
`side` longtext NOT NULL,
`price` decimal(32,16) NOT NULL,
`origin_amount` decimal(32,16) NOT NULL,
`available_amount` decimal(32,16) NOT NULL,
`min_order_amount` decimal(32,16) NOT NULL,
`max_order_amount` decimal(32,16) NOT NULL,
`payment_time_limit` bigint(20) unsigned NOT NULL,
`state` longtext NOT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
CREATE TABLE `user_payment_methods` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_uid` longtext NOT NULL,
`payment_method_id` bigint(20) unsigned DEFAULT NULL,
`data` json DEFAULT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
CREATE TABLE `payment_methods` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`type` longtext NOT NULL,
`bank_name` longtext NOT NULL,
`logo` longtext NOT NULL,
`options` json DEFAULT NULL,
`enabled` tinyint(1) NOT NULL,
`created_at` datetime(3) DEFAULT NULL,
`updated_at` datetime(3) DEFAULT NULL
)
You will struggle to do this efficiently and entirely with Gorm. Preloading/associations aren't done using joins in Gorm and there is no way to filter based on them. I see two potential options:
1. Write your own query using joins and scan in the results
You can use Gorm for the query and execution, but honestly I would avoid the need for reflection and just define a struct and scan straight into that.
The results will contain duplicated data, so you will have to manually transpose the results and build up the object.
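The join itself would look something like this (a sketch; `?` is the payment_method_id parameter sent by the front-end):

```sql
SELECT offers.*
FROM offers
INNER JOIN offers_user_payment_methods
        ON offers_user_payment_methods.offer_id = offers.id
INNER JOIN user_payment_methods
        ON user_payment_methods.id = offers_user_payment_methods.user_payment_method_id
WHERE user_payment_methods.payment_method_id = ?
```

One offer can match several payment methods, hence the duplicated rows you'd need to transpose.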
2. Execute two queries, one to find the IDs of the offers, and one to find the offers
The first query would be the equivalent of:
SELECT offers_user_payment_methods.offer_id FROM offers_user_payment_methods
INNER JOIN user_payment_methods ON offers_user_payment_methods.user_payment_method_id = user_payment_methods.id
WHERE user_payment_methods.payment_method_id = ?
If you scan these results into var offerIDs []int, you can use Gorm to find the offers by passing this slice as the param:
offers := make([]Offer, 0)
db.Find(&offers, offerIDs)
I think this solution has the benefit that you do the more complex query by hand and leave the easy stuff to Gorm (which is what it does ~ok).

VB application does not return same list in two computer

I have a program that uses MSSQL 2005. The problem is that this app is written in VB6, and when I get the customer list on one computer it returns 6000 rows, which is correct. But when I get the customer list on another computer with the same MSSQL (2005) and the same OS (Windows XP), it does not return the same list. What can I do to solve this problem?
Thanks in advance.
EDIT
The query is simple and it is:
SELECT * FROM Buyer
I think the problem may be in indexing, clustering, the SATA3 HDD, or something else.
This is the design of the table I was speaking about:
CREATE TABLE [dbo].[Buyer](
[BuyerCode] [nvarchar](10) COLLATE Arabic_CI_AS NOT NULL,
[Atbar] [money] NULL,
[AddB] [nvarchar](100) COLLATE Arabic_CI_AS NULL,
[Tel] [nvarchar](200) COLLATE Arabic_CI_AS NULL,
[CityCode] [nvarchar](6) COLLATE Arabic_CI_AS NOT NULL,
[CityName] [nvarchar](35) COLLATE Arabic_CI_AS NULL,
[TBLO] [nvarchar](150) COLLATE Arabic_CI_AS NULL,
[SKH] [nvarchar](15) COLLATE Arabic_CI_AS NULL,
[NP] [nvarchar](50) COLLATE Arabic_CI_AS NULL,
[CodeAG] [nvarchar](20) COLLATE Arabic_CI_AS NULL,
[CodeSF] [nvarchar](2) COLLATE Arabic_CI_AS NOT NULL,
[NameSF] [nvarchar](70) COLLATE Arabic_CI_AS NULL,
[KindM] [nvarchar](15) COLLATE Arabic_CI_AS NULL,
[VAZ] [bit] NOT NULL,
[name] [nvarchar](250) COLLATE Arabic_CI_AS NULL,
[vazk] [bit] NULL,
[Tozeh] [nvarchar](350) COLLATE Arabic_CI_AS NOT NULL CONSTRAINT [DF_Buyer_Tozeh] DEFAULT (N''),
[Tozehp] [nvarchar](350) COLLATE Arabic_CI_AS NOT NULL CONSTRAINT [DF_Buyer_Tozehp] DEFAULT (N''),
[Onvan] [nvarchar](50) COLLATE Arabic_CI_AS NULL,
[GhK] [smallint] NULL,
[AutoFCode] [bit] NOT NULL CONSTRAINT [DF_Buyer_AutoFCode] DEFAULT ((1)),
[CodeF] [numeric](18, 0) NULL,
[NameF] [nvarchar](100) COLLATE Arabic_CI_AS NULL,
[DateF] [char](10) COLLATE Arabic_CI_AS NULL,
CONSTRAINT [PK_Buyer] PRIMARY KEY CLUSTERED
(
[BuyerCode] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY];
I just recently did a VB6 update where the control couldn't handle more than 6000 entries. Very likely the same reason here. It's probably a maximum for that control. Check to see if you can get an updated one if available (if it's third party), or maybe use a different control.
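To rule the control in or out, compare what the server actually holds with what the grid displays — a quick check to run on both machines:

```sql
-- If both machines report the same count here, the data is fine
-- and the difference lies in the VB6 control or connection settings.
SELECT COUNT(*) FROM [dbo].[Buyer];
```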
Ensure your connection string is the same on both installs of your app (recompile if necessary)
Ensure you're connecting to the same database on both machines (that is, not using localhost)
Ensure your VB code isn't modifying the Recordset

Gii CRUD generator and related tables

I am using the Yii framework and I have a problem with the CRUD generator.
I have two tables called users and news with the following structures:
CREATE TABLE IF NOT EXISTS `news` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`keyword` varchar(1000) COLLATE utf8_persian_ci DEFAULT NULL,
`user_id` tinyint(3) unsigned NOT NULL,
`title` varchar(100) COLLATE utf8_persian_ci DEFAULT NULL,
`body` varchar(1000) COLLATE utf8_persian_ci DEFAULT NULL,
`publishedat` date DEFAULT NULL,
`state` tinyint(1) unsigned DEFAULT NULL,
`archive` tinyint(1) unsigned DEFAULT NULL,
`last_modified` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `news_FKIndex1` (`keyword`(255)),
KEY `news_FKIndex2` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_persian_ci AUTO_INCREMENT=3 ;
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`username` varchar(20) NOT NULL,
`password` varchar(128) NOT NULL,
`create_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`lastvisit_at` timestamp NULL DEFAULT NULL,
`is_disabled` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`),
UNIQUE KEY `username` (`username`),
KEY `status` (`is_disabled`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=7 ;
When I generate CRUD using Gii for my news table, I cannot see the fields from the users table. Instead of user_id I want to see the username in the table created by the CRUD generator. How can I change the code to get this result?
First, user_id needs to be an actual foreign key, not just an indexed column.
Second, Gii will not generate the field as you require by default. For such functionality an extension such as Giix might help. However, since a relation exists you could always use relationName.username to display the username in a grid view or a list view.
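A sketch of adding that foreign key (the constraint name `fk_news_user` is arbitrary). Note that InnoDB requires the referencing and referenced columns to have matching integer types, and in your schema `news`.`user_id` is tinyint(3) unsigned while `users`.`id` is int(11), so the column must be changed first:

```sql
-- Align the column type with users.id, then add the constraint.
ALTER TABLE `news`
    MODIFY `user_id` int(11) NOT NULL;

ALTER TABLE `news`
    ADD CONSTRAINT `fk_news_user`
    FOREIGN KEY (`user_id`) REFERENCES `users` (`id`);
```

After this, Gii will pick up the relation automatically when you regenerate the model.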

Optimize a mysql like query

I added the jQuery autocomplete plugin to my places text field to help users better select a location. What I didn't realize before building it is that the query would be very slow.
select * from `geoplanet_places` where name LIKE "%San Diego%" AND (place_type = "County" OR place_type = "Town")
The query above took 1.18 seconds. Then I tried adding indexes for name and place_type but that only slowed it down (1.93s).
Is there a way to optimize this query or is there another technique to speed up the query.
This geoplanet_places table has 437,715 rows (mysql)
CREATE TABLE `geoplanet_places` (
`id` int(11) NOT NULL auto_increment,
`woeid` bigint(20) default NULL,
`parent_woeid` bigint(20) default NULL,
`country_code` varchar(255) collate utf8_unicode_ci default NULL,
`name` varchar(255) collate utf8_unicode_ci default NULL,
`language` varchar(255) collate utf8_unicode_ci default NULL,
`place_type` varchar(255) collate utf8_unicode_ci default NULL,
`ancestry` varchar(255) collate utf8_unicode_ci default NULL,
`activity_count` int(11) default '0',
`activity_count_updated_at` datetime default NULL,
`bounding_box` blob,
`slug` varchar(255) collate utf8_unicode_ci default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_geoplanet_places_on_woeid` (`woeid`),
KEY `index_geoplanet_places_on_ancestry` (`ancestry`),
KEY `index_geoplanet_places_on_parent_woeid` (`parent_woeid`),
KEY `index_geoplanet_places_on_slug` (`slug`),
KEY `index_geoplanet_places_on_name` (`name`),
KEY `index_geoplanet_places_on_place_type` (`place_type`)
) ENGINE=InnoDB AUTO_INCREMENT=5652569 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
EXPLAIN
id 1
select_type SIMPLE
table geoplanet_places
type ALL
possible_keys index_geoplanet_places_on_place_type
key NULL
key_len NULL
ref NULL
rows 441273
Extra Using where
You can switch the storage engine of the table to MyISAM to take advantage of full-text indexing (newer MySQL versions, 5.6 and up, also support FULLTEXT indexes on InnoDB).
The name index won't help you unless you change the pattern to LIKE 'San Diego%', which can do a prefix search on the index.
Get rid of the leading '%' in your where-like clause, so it becomes: where name like "San Diego%". For auto complete, this seems a reasonable limitation (assumes that the user starts typing correct characters) that should speed up the query significantly, as MySql will be able to use an existing index (index_geoplanet_places_on_name).
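With the leading wildcard dropped, the query can use index_geoplanet_places_on_name; an IN list is also a tidier way to write the OR chain. A sketch:

```sql
SELECT *
FROM geoplanet_places
WHERE name LIKE 'San Diego%'          -- prefix search, index-friendly
  AND place_type IN ('County', 'Town');
```

Verify with EXPLAIN that the key column now shows index_geoplanet_places_on_name instead of NULL.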

running sql code with microsoft sql 2008

I have about 8 MB of SQL code I need to run. It looks like this:
/*
MySQL Data Transfer
Source Host: 10.0.0.5
Source Database: jnetdata
Target Host: 10.0.0.5
Target Database: jnetdata
Date: 5/26/2009 12:27:33 PM
*/
SET FOREIGN_KEY_CHECKS=0;
-- ----------------------------
-- Table structure for chavrusas
-- ----------------------------
CREATE TABLE `chavrusas` (
`id` int(11) NOT NULL auto_increment,
`date_created` datetime default NULL,
`luser_id` int(11) default NULL,
`ruser_id` int(11) default NULL,
`luser_type` varchar(50) default NULL,
`ruser_type` varchar(50) default NULL,
`SessionDay` varchar(250) default NULL,
`SessionTime` datetime default NULL,
`WeeklyReminder` tinyint(1) NOT NULL default '0',
`reminder_phone` tinyint(1) NOT NULL default '0',
`calling_card` varchar(50) default NULL,
`active` tinyint(1) NOT NULL default '0',
`notes` mediumtext,
`ended` tinyint(1) NOT NULL default '0',
`end_date` datetime default NULL,
`initiated_by_student` tinyint(1) NOT NULL default '0',
`initiated_by_volunteer` tinyint(1) NOT NULL default '0',
`student_general_reason` varchar(50) default NULL,
`volunteer_general_reason` varchar(50) default NULL,
`student_reason` varchar(250) default NULL,
`volunteer_reason` varchar(250) default NULL,
`student_nli` tinyint(1) NOT NULL default '0',
`volunteer_nli` tinyint(1) NOT NULL default '0',
`jnet_initiated` tinyint(1) default '0',
`belongs_to` varchar(50) default NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=5913 DEFAULT CHARSET=latin1;
-- ----------------------------
-- Table structure for tbluseravailability
-- ----------------------------
CREATE TABLE `tbluseravailability` (
`availability_id` int(11) NOT NULL auto_increment,
`user_id` int(11) NOT NULL,
`weekday_id` int(11) NOT NULL,
`timeslot_id` int(11) NOT NULL,
PRIMARY KEY (`availability_id`)
) ENGINE=MyISAM AUTO_INCREMENT=10865 DEFAULT CHARSET=latin1;
-- ----------------------------
-- Table structure for tblusers
-- ----------------------------
CREATE TABLE `tblusers` (
`id` int(11) NOT NULL auto_increment,
`
etc
How do I run it on Microsoft SQL Server 2008?
You don't; you'd have to convert the code to SQL Server's T-SQL syntax. You could use a conversion tool, though, like this one.
Run a tool to automatically convert the MySQL statements to T-SQL, or do the work with intensive Find & Replace. As an example:
CREATE TABLE `chavrusas` (
`id` int(11) NOT NULL auto_increment,
`date_created` datetime default NULL,
`luser_id` int(11) default NULL,
`ruser_id` int(11) default NULL,
`luser_type` varchar(50) default NULL,
`ruser_type` varchar(50) default NULL,
`SessionDay` varchar(250) default NULL,
`SessionTime` datetime default NULL,
`WeeklyReminder` tinyint(1) NOT NULL default '0',
`reminder_phone` tinyint(1) NOT NULL default '0',
`calling_card` varchar(50) default NULL,
`active` tinyint(1) NOT NULL default '0',
`notes` mediumtext,
`ended` tinyint(1) NOT NULL default '0',
`end_date` datetime default NULL,
`initiated_by_student` tinyint(1) NOT NULL default '0',
`initiated_by_volunteer` tinyint(1) NOT NULL default '0',
`student_general_reason` varchar(50) default NULL,
`volunteer_general_reason` varchar(50) default NULL,
`student_reason` varchar(250) default NULL,
`volunteer_reason` varchar(250) default NULL,
`student_nli` tinyint(1) NOT NULL default '0',
`volunteer_nli` tinyint(1) NOT NULL default '0',
`jnet_initiated` tinyint(1) default '0',
`belongs_to` varchar(50) default NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=5913 DEFAULT CHARSET=latin1;
Find: \s`
Replace with: [
Find: `\s
Replace with: ]
Find: PRIMARY KEY \(`(.+)`\)
Replace with: CONSTRAINT PK_[SOME IDENTIFIER] PRIMARY KEY ([$1])
Find: ENGINE=MyISAM AUTO_INCREMENT=5913 DEFAULT CHARSET=latin1;
Replace with: ;
A few more Find & Replace passes and you'll get this script, T-SQL compliant:
CREATE TABLE [chavrusas] (
[id] INT IDENTITY(1,1) NOT NULL ,
[date_created] datetime NULL,
[luser_id] INT NULL,
[ruser_id] INT NULL,
[luser_type] varchar(50) NULL,
[ruser_type] varchar(50) NULL,
[SessionDay] varchar(250) NULL,
[SessionTime] datetime NULL,
[WeeklyReminder] INT NOT NULL,
[reminder_phone] INT NOT NULL,
[calling_card] varchar(50) NULL,
[active] INT NOT NULL,
[notes] TEXT,
[ended] INT NOT NULL,
[end_date] datetime NULL,
[initiated_by_student] INT NOT NULL,
[initiated_by_volunteer] INT NOT NULL,
[student_general_reason] varchar(50) NULL,
[volunteer_general_reason] varchar(50) NULL,
[student_reason] varchar(250) NULL,
[volunteer_reason] varchar(250) NULL,
[student_nli] INT NOT NULL,
[volunteer_nli] INT NOT NULL,
[jnet_initiated] INT,
[belongs_to] varchar(50) NULL,
CONSTRAINT PK_chavrusas PRIMARY KEY ([id])
)
At a glance, the code snippet you provided looks acceptable for SQL Server. I would suggest using one of the many tools available for executing SQL against an existing SQL Server instance (SQL Server Management Studio, Query Analyzer, etc.). Paste the code into a new query window (or open up the associated file with the query) and parse it to see if you uncover any errors. Once you tweak the code to work with SQL Server 2008 (assuming that's necessary), you should just be able to execute it to create the tables and such.
EDIT: I tested your code against SQL Server 2005 and there were some annoying problems, like replacing ` with ', fixing table/column references surrounded by apostrophes, etc. An automated tool would be your best approach given the prevalence of errors.