Yii: uploading files error

I'm a Portuguese student and I'm trying to implement a web application for game management.
Structure of the `game` and `screen` tables:
CREATE TABLE `game` (
  `idGame` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(75) NOT NULL,
  `primaryScreen` blob NOT NULL,
  `game_content` blob NOT NULL,
  `category` varchar(45) NOT NULL,
  `platform` varchar(45) NOT NULL,
  `device` varchar(45) NOT NULL,
  `description` varchar(250) NOT NULL,
  `funcionalities` varchar(150) NOT NULL,
  PRIMARY KEY (`idGame`)
);
CREATE TABLE `screen` (
  `idScreen` int(11) NOT NULL AUTO_INCREMENT,
  `id_Game` int(11) NOT NULL,
  `image` longblob NOT NULL,
  PRIMARY KEY (`idScreen`),
  KEY `id_Game` (`id_Game`)
);
I have read a Yii tutorial on how to save an uploaded file into a blob column in the database:
http://www.yiiframework.com/wiki/95/saving-files-to-a-blob-field-in-the-database
The only difference is that in beforeSave() I only want to keep the file's content (can I do this, or do I have to keep the file name, file extension, etc.?).
So I do this in the model:
public function beforeSave()
{
    if ($file = CUploadedFile::getInstance($this, 'game_uploaded'))
    {
        // $this->file_name = $file->name;
        // $this->file_type = $file->type;
        // $this->file_size = $file->size;
        $this->game_content = file_get_contents($file->tempName);
        // $file->saveAs('path/to/uploads');
    }
    if ($file = CUploadedFile::getInstance($this, 'primscreen'))
    {
        // $this->file_name = $file->name;
        // $this->file_type = $file->type;
        // $this->file_size = $file->size;
        $this->primaryScreen = file_get_contents($file->tempName);
        // $file->saveAs('path/to/uploads');
    }
    return parent::beforeSave();
}
But when I try it, I get this error:
PDOStatement::execute(): MySQL server has gone away
Could someone help me, please?
I appreciate any suggestion. Thanks :)

In your config file, try changing the host from "localhost" to "127.0.0.1".
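With "localhost", PHP's MySQL driver usually connects through the unix socket, while "127.0.0.1" forces TCP. In Yii 1.x the DSN lives in the `db` component of protected/config/main.php; a sketch of the change (database name and credentials are placeholders):

```php
// protected/config/main.php -- 'db' component of the standard Yii 1.x skeleton
'db' => array(
    // TCP address instead of "localhost" (which may resolve to a unix socket)
    'connectionString' => 'mysql:host=127.0.0.1;dbname=games', // dbname is a placeholder
    'username' => 'root',   // placeholder
    'password' => 'secret', // placeholder
    'charset'  => 'utf8',
),
```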

Try using base64_encode() before saving to the database.
Good luck with your work!

One common cause of "MySQL server has gone away" is a timeout. Try increasing default_socket_timeout like so:
ini_set('default_socket_timeout', 300);
Another possible cause is the size of the blob. Try increasing the upload limits in your php.ini:
post_max_size
upload_max_filesize
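The PHP limits alone may not be enough: when an INSERT carrying a blob exceeds MySQL's own max_allowed_packet, the server drops the connection and the client sees exactly "MySQL server has gone away". A sketch of raising it (the 64 MB value is just an example):

```sql
-- check the current limit
SHOW VARIABLES LIKE 'max_allowed_packet';

-- raise it for the running server (requires SUPER privilege)
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;

-- or persist it in my.cnf / my.ini under [mysqld]:
--   max_allowed_packet = 64M
```

Existing connections keep their old limit, so reconnect after changing the global value.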

Related

Annoying error when trying to run SQL exported from DB Browser for SQLite

I am simply trying to run a basic SQL script to recreate a database.
The database was initially created in SQLite, and I exported it using DB Browser for SQLite.
The start of the file looks like this:
BEGIN TRANSACTION;
CREATE TABLE "AspNetUsers"
(
`Id` varchar(128) NOT NULL,
`Email` varchar(256) DEFAULT NULL,
`EmailConfirmed` tinyint(1) NOT NULL,
`PasswordHash` longtext,
`SecurityStamp` longtext,
`PhoneNumber` longtext,
`PhoneNumberConfirmed` tinyint(1) NOT NULL,
`TwoFactorEnabled` tinyint(1) NOT NULL,
`LockoutEndDateUtc` datetime DEFAULT NULL,
`LockoutEnabled` tinyint(1) NOT NULL,
`AccessFailedCount` int(11) NOT NULL,
`UserName` varchar(256) NOT NULL,
`IsActivated` tinyint(1) NOT NULL DEFAULT (0),
`Organisation` TEXT NOT NULL,
PRIMARY KEY(`Id`)
);
I created a new database, and when running the query in SSMS I get this annoying error:
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near '`'.
I tried deleting all the whitespace between the first ( and `Id`, but then I just get:
Msg 102, Level 15, State 1, Line 2
Incorrect syntax near '`'.
I also tried replacing the backticks with single quotes, but with the same result...
I'm pretty sure the server I'm trying to execute this on is running SQL Server Express; I'm not sure if that makes a difference.
Why must life be so difficult?
The code is rather specific to SQLite in several respects:
The use of backticks is non-standard.
Having a length for integer columns is non-standard.
text and longtext are non-standard.
The equivalent create table statement in SQL Server would be:
CREATE TABLE AspNetUsers (
Id varchar(128) NOT NULL,
Email varchar(256) DEFAULT NULL,
EmailConfirmed tinyint NOT NULL,
PasswordHash varchar(max),
SecurityStamp varchar(max),
PhoneNumber varchar(max),
PhoneNumberConfirmed tinyint NOT NULL,
TwoFactorEnabled tinyint NOT NULL,
LockoutEndDateUtc datetime DEFAULT NULL,
LockoutEnabled tinyint NOT NULL,
AccessFailedCount int NOT NULL,
UserName varchar(256) NOT NULL,
IsActivated tinyint NOT NULL DEFAULT (0),
Organisation varchar(max) NOT NULL,
PRIMARY KEY (Id)
);
Except for the varchar(max), this would be pretty standard for any database.
Some notes:
You probably don't need varchar(max) for any of these fields. Although you can use it, it looks awkward to have a phone number that could occupy megabytes of data.
You could probably replace the tinyints with bits.
DEFAULT NULL is redundant.
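Applying those notes, a tighter version might look like this (the specific varchar lengths are guesses; size them to your actual data):

```sql
CREATE TABLE AspNetUsers (
    Id varchar(128) NOT NULL PRIMARY KEY,
    Email varchar(256),
    EmailConfirmed bit NOT NULL,
    PasswordHash varchar(256),        -- length is a guess
    SecurityStamp varchar(256),       -- length is a guess
    PhoneNumber varchar(32),          -- length is a guess
    PhoneNumberConfirmed bit NOT NULL,
    TwoFactorEnabled bit NOT NULL,
    LockoutEndDateUtc datetime NULL,
    LockoutEnabled bit NOT NULL,
    AccessFailedCount int NOT NULL,
    UserName varchar(256) NOT NULL,
    IsActivated bit NOT NULL DEFAULT (0),
    Organisation varchar(256) NOT NULL -- length is a guess
);
```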

Unable to insert different datatype values in Database

I am trying to migrate data from one database to another, but I am unable to handle the datatypes properly.
Schema of the target database:
CREATE TABLE `Report_aggregation` (
`Supplier` varchar(255) DEFAULT NULL,
`Product_code` int(11) DEFAULT NULL,
`Product_Name` varchar(255) DEFAULT NULL,
`Balance_on_Hand` int(11) DEFAULT NULL,
`Pending` int(11) DEFAULT NULL,
`Sale_Yesterday` int(11) DEFAULT NULL,
`Stock_day` decimal(10,0) DEFAULT NULL,
`Sale_avg` decimal(10,0) DEFAULT NULL,
`Stock_day_avg` varchar(255) DEFAULT NULL,
`Lead_time` int(11) DEFAULT NULL,
`Frequency_per_week` int(11) DEFAULT NULL,
`Saftey_stock` int(11) DEFAULT NULL,
`Forecast_order_qty` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
SET FOREIGN_KEY_CHECKS = 1;
Here is how I am inserting values.
setValues: { ps, i ->
ps.setString(1, contracts[i].Supplier.toString())
ps.setInt(2, contracts[i].Product_code)
ps.setString(3, contracts[i].Product_Name.toString())
ps.setInt(4, contracts[i].Balance_on_Hand)
ps.setInt(5, contracts[i].Pending)
ps.setInt(6, contracts[i].Sale_Yesterday)
ps.setDecimal(7, contracts[i].Stock_day)
ps.setDecimal(8, contracts[i].Sale_avg)
ps.setString(9, contracts[i].Stock_day_avg.toString())
ps.setInt(10, contracts[i].Lead_time)
ps.setInt(11, contracts[i].Frequency_per_week)
ps.setInt(12, contracts[i].Saftey_stock)
ps.setInt(13, contracts[i].Forecast_order_qty)
}
where contracts is my resultSet from some other database.
I get this exception upon execution.
groovy.lang.MissingMethodException: No signature of method: com.jolbox.bonecp.PreparedStatementHandle.setInt() is applicable for argument types: (java.lang.Integer, null) values: [2, null]
I am new to Groovy and unable to debug this properly; maybe I am missing something very basic.
Any help would be much appreciated.
Looks like there is a problem in this line:
ps.setInt(2, contracts[i].Product_code)
You are passing null into the prepared statement, and Groovy cannot resolve the type, so it throws a MissingMethodException.
Maybe try:
ps.setInt(2, contracts[i].Product_code as Integer)
This tells Groovy that the value you are passing is of type Integer, even if it is null.
You can also use the ResultSet methods such as getLong or getInt; that should solve your problem too.
More information about these methods:
http://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html#getInt%28java.lang.String%29
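For what it's worth, JDBC's setInt takes a primitive int, so a null can never go through it directly; a defensive alternative (my own suggestion, not part of the answer above) is to route nulls to setNull. A sketch in Java, using a recording proxy in place of a real PreparedStatement so it runs without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;

public class NullSafeSetter {
    // Null-safe wrapper: route null through setNull with the right SQL type,
    // otherwise let the non-null Integer auto-unbox into setInt.
    static void setNullableInt(PreparedStatement ps, int index, Integer value) throws Exception {
        if (value == null) {
            ps.setNull(index, Types.INTEGER);
        } else {
            ps.setInt(index, value);
        }
    }

    // Demonstrate the dispatch with a proxy that records each call's name.
    static List<String> demo() throws Exception {
        List<String> calls = new ArrayList<>();
        PreparedStatement ps = (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class<?>[]{PreparedStatement.class},
                (proxy, method, args) -> { calls.add(method.getName()); return null; });

        setNullableInt(ps, 2, null); // routed to setNull
        setNullableInt(ps, 2, 42);   // routed to setInt
        return calls;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints [setNull, setInt]
    }
}
```

The same if/else works verbatim in Groovy inside the setValues closure.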

Creating a friend graph

I want to create a friend list for my website, to be stored in a database table. The following is the table structure I think best serves the purpose:
CREATE TABLE `sdt_friend_graph` (
`user` INT(11) NOT NULL,
`friend` INT(11) NOT NULL,
`status` ENUM('requested','accepted') COLLATE utf8_unicode_ci DEFAULT NULL,
`requested_on` DATETIME DEFAULT NULL,
`accepted_on` DATETIME DEFAULT NULL,
PRIMARY KEY (`user`,`friend`)
)
I just want to find out if my approach is OK, or whether there is a better way to make it more efficient. I'm open to suggestions.
Regards,
Your table structure looks fine. I would just make user an AUTO_INCREMENT field and rename it to friendid... just for semantics.

Optimize a MySQL LIKE query

I added the jQuery autocomplete plugin to my places text field to help users select a location more easily. What I didn't realize before building it was that the query would be very slow.
select * from `geoplanet_places` where name LIKE "%San Diego%" AND (place_type = "County" OR place_type = "Town")
The query above took 1.18 seconds. I then tried adding indexes on name and place_type, but that only slowed it down (1.93 s).
Is there a way to optimize this query, or another technique to speed it up?
The geoplanet_places table has 437,715 rows (MySQL).
CREATE TABLE `geoplanet_places` (
`id` int(11) NOT NULL auto_increment,
`woeid` bigint(20) default NULL,
`parent_woeid` bigint(20) default NULL,
`country_code` varchar(255) collate utf8_unicode_ci default NULL,
`name` varchar(255) collate utf8_unicode_ci default NULL,
`language` varchar(255) collate utf8_unicode_ci default NULL,
`place_type` varchar(255) collate utf8_unicode_ci default NULL,
`ancestry` varchar(255) collate utf8_unicode_ci default NULL,
`activity_count` int(11) default '0',
`activity_count_updated_at` datetime default NULL,
`bounding_box` blob,
`slug` varchar(255) collate utf8_unicode_ci default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_geoplanet_places_on_woeid` (`woeid`),
KEY `index_geoplanet_places_on_ancestry` (`ancestry`),
KEY `index_geoplanet_places_on_parent_woeid` (`parent_woeid`),
KEY `index_geoplanet_places_on_slug` (`slug`),
KEY `index_geoplanet_places_on_name` (`name`),
KEY `index_geoplanet_places_on_place_type` (`place_type`)
) ENGINE=InnoDB AUTO_INCREMENT=5652569 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
EXPLAIN output:
id             1
select_type    SIMPLE
table          geoplanet_places
type           ALL
possible_keys  index_geoplanet_places_on_place_type
key            NULL
key_len        NULL
ref            NULL
rows           441273
Extra          Using where
You can switch the table's storage engine to MyISAM to take advantage of full-text indexing.
The name index won't help you unless you change the pattern to LIKE 'San Diego%', which can do a prefix search on the index.
Get rid of the leading '%' in your WHERE ... LIKE clause, so it becomes where name like "San Diego%". For autocomplete this is a reasonable limitation (it assumes the user starts typing the correct characters), and it should speed up the query significantly, since MySQL can then use the existing index (index_geoplanet_places_on_name).
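Putting those suggestions together, the query would become (same filters, just a prefix pattern and an IN list):

```sql
SELECT *
FROM geoplanet_places
WHERE name LIKE 'San Diego%'          -- prefix search can use index_geoplanet_places_on_name
  AND place_type IN ('County', 'Town');
```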

How do I fix the error I get using MySQL LOAD DATA INFILE?

I have a products table with the following structure:
CREATE TABLE IF NOT EXISTS `products` (
`id` int(50) NOT NULL AUTO_INCREMENT,
`productname` varchar(255) NOT NULL,
`description` text NOT NULL,
`merchanturl` text NOT NULL,
`imageurl` text NOT NULL,
`price` varchar(10) NOT NULL,
`original` varchar(10) NOT NULL,
`currency` varchar(12) NOT NULL,
`extrafields` text NOT NULL,
`feedid` varchar(25) NOT NULL,
`category` varchar(255) NOT NULL,
`merchant` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `productname` (`productname`),
FULLTEXT KEY `description` (`description`)
) ENGINE=MyISAM;
I use MySQL's LOAD DATA INFILE command to import delimited data files into this table. It now has 4 million records. When I import more data using LOAD DATA INFILE, I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock' (2)
I am not able to access the products table after that.
How can I improve the performance of the table? Note that some data files are more than 100 MB in size, and I have another 4 million entries that need to be imported.
Please suggest ways to avoid these issues.
Thanks,
Sree
Try connecting to the MySQL server using TCP/IP instead of the socket. Sockets are only available on Unix-like operating systems.
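For example, with the command-line client (hostname, credentials, and database name are placeholders):

```
# force a TCP connection instead of the unix socket
mysql --protocol=TCP -h 127.0.0.1 -P 3306 -u user -p dbname
```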