Joomla - Importing SQL database to new server results in error - sql

I want to move a website built with Joomla 3.5.1 to a new server. I bought the domain/space on the new server and backed up the database/files from the old one.
I transferred the files via FTP to the new server and opened phpMyAdmin to import the .sql file. The problem is that after it's uploaded, I get the following error:
SQL query:
CREATE TABLE `jos_assets` (
`id` int(10) UNSIGNED NOT NULL COMMENT 'Primary Key',
`parent_id` int(11) NOT NULL DEFAULT '0'COMMENT AS `Nested set parent.`,
`lft` int(11) NOT NULL DEFAULT '0'COMMENT AS `Nested set lft.`,
`rgt` int(11) NOT NULL DEFAULT '0'COMMENT AS `Nested set rgt.`,
`level` int(10) UNSIGNED NOT NULL COMMENT 'The cached level in the nested tree.',
`name` varchar(50) COLLATE utf8_unicode_ci NOT NULL COMMENT 'The unique name for the asset.\n',
`title` varchar(100) COLLATE utf8_unicode_ci NOT NULL COMMENT 'The descriptive title for the asset.',
`rules` varchar(5120) COLLATE utf8_unicode_ci NOT NULL COMMENT 'JSON encoded access control.'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
MySQL said:
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'AS `Nested set parent.`,
`lft` int(11) NOT NULL DEFAULT '0'COMMENT AS `Nested ' at line 3
I tried some edits on the SQL import file, but with no luck. Does anyone know how to fix it?

There are missing spaces before the keyword COMMENT:
'0'COMMENT
should be
'0' COMMENT
There are three occurrences here, and I bet you might find more, which you can fix with a simple find/replace (e.g. sed).
Best of all, try to get a fresh backup: possibly these are Linux line endings that got trimmed in a double conversion to Windows and back. You might zip/gzip the SQL dump on the source server and extract it on the destination server to guarantee line-ending integrity, or choose binary mode for the FTP transfer.
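For reference, here is a sketch of how those lines normally read in Joomla's own schema (this assumes the original dump used ordinary quoted COMMENT strings; the "COMMENT AS" fragments in your paste look mangled as well):
`parent_id` int(11) NOT NULL DEFAULT '0' COMMENT 'Nested set parent.',
`lft` int(11) NOT NULL DEFAULT '0' COMMENT 'Nested set lft.',
`rgt` int(11) NOT NULL DEFAULT '0' COMMENT 'Nested set rgt.',
If your dump differs from this shape in many places, that is another sign the file was corrupted in transfer rather than exported badly.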

Related

current_timestamp() vs CURRENT_TIMESTAMP for default value on TIMESTAMP column (mariadb:latest/mysql, laravel, sequel pro)

What is the difference between CURRENT_TIMESTAMP and current_timestamp()?
I'm using Laravel, and in the migration file for my Tasks table I have this:
$table->timestamp('created_at')->useCurrent();
$table->timestamp('updated_at')->useCurrent()->useCurrentOnUpdate();
For my database container, I am using a local Docker setup with:
mariadb:latest (which seems to pull version 10.8.3-MariaDB-1:10.8.3+maria~jammy, mariadb.org binary distribution)
Here's the weird thing:
I have Sequel Pro open, and when I try to manually insert a record (for testing purposes) through the Sequel Pro interface, it fails with the following error:
Incorrect datetime value: 'current_timestamp()' for column .. created_at..
Notice that when I click to add a new row, the defaults appear as 'current_timestamp()'.
If I manually change these defaults to 'CURRENT_TIMESTAMP' instead of 'current_timestamp()', it seems to work:
The function-call / lowercase version of CURRENT_TIMESTAMP does not work.
If I add a new row programmatically / with Laravel:
$newTask = new Task();
$newTask->title = 'testing';
$newTask->save();
the row is inserted properly (with the current timestamp values):
Where does this problem lie?
On the Laravel side / configuration?
Could the 'mariadb:latest' image be introducing a bug?
Could it be a Sequel Pro bug?
This is the CREATE TABLE definition, by the way:
CREATE TABLE `tasks` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`description` text COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`updated_at` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
While writing this question, I decided to download MySQL Workbench and try inserting the values there through the MySQL Workbench interface,
and it works, apparently because MySQL Workbench simply runs plain INSERT queries:
INSERT INTO `tasks_pabloserver_db`.`tasks` (`title`) VALUES ('teeest');
INSERT INTO `tasks_pabloserver_db`.`tasks` (`title`) VALUES ('test444');
which work and insert the proper default/timestamp values.
I looked at the table structure in MySQL Workbench and the default value is 'current_timestamp()' and not 'CURRENT_TIMESTAMP', yet it still works, so I guess it cannot be the database version.
I then tried running these same INSERT statements in Sequel Pro and they also worked properly, so my conclusion is that the Sequel Pro interface has a bug and is to blame.
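For what it's worth, the server itself treats the two spellings as the same default; the CREATE TABLE output above already shows MariaDB storing it as current_timestamp(). A quick way to confirm this, reusing the tasks table from the question, is to omit the timestamp columns entirely and let the default fire:
-- Sketch: the default fires regardless of how it is spelled in the metadata
INSERT INTO `tasks` (`title`) VALUES ('default test');
SELECT `id`, `title`, `created_at`, `updated_at` FROM `tasks` ORDER BY `id` DESC LIMIT 1;
Both columns come back populated, which is consistent with the conclusion that Sequel Pro's grid editor, not the database, is at fault.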

H2 org.h2.jdbc.JdbcSQLSyntaxErrorException occurs when executing a script file in an H2 database

I used the command java -cp h2-1.4.199.jar org.h2.tools.RunScript -url jdbc:h2:mem:db1 -script infra_params.sql to execute the SQL script below in an H2 database.
infra_params.sql file:
DROP TABLE IF EXISTS `infrastructure_parameter`;
CREATE TABLE `infrastructure_parameter` (
`id` varchar(36) NOT NULL,
`created_timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`modified_timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`NAME` varchar(255) DEFAULT NULL,
`PROPERTIES` varchar(255) DEFAULT NULL,
`ready` tinyint(1) DEFAULT '0',
`TYPE` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNQ_infrastructure_parameter_0` (`NAME`,`PROPERTIES`)
) ENGINE=InnoDB DEFAULT CHARSET=UTF8;
LOCK TABLES `infrastructure_parameter` WRITE;
But it gives the following exception:
Exception in thread "main" org.h2.jdbc.JdbcSQLSyntaxErrorException: Syntax error in SQL statement "
LOCK[*] TABLES `INFRASTRUCTURE_PARAMETER` WRITE "; SQL statement:
LOCK TABLES `infrastructure_parameter` WRITE [42000-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:451)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.message.DbException.getSyntaxError(DbException.java:229)
at org.h2.command.Parser.getSyntaxError(Parser.java:989)
at org.h2.command.Parser.parsePrepared(Parser.java:951)
at org.h2.command.Parser.parse(Parser.java:788)
at org.h2.command.Parser.parse(Parser.java:764)
at org.h2.command.Parser.prepareCommand(Parser.java:683)
at org.h2.engine.Session.prepareLocal(Session.java:627)
at org.h2.engine.Session.prepareCommand(Session.java:565)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1292)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:217)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:205)
at org.h2.tools.RunScript.process(RunScript.java:261)
at org.h2.tools.RunScript.process(RunScript.java:192)
at org.h2.tools.RunScript.process(RunScript.java:328)
at org.h2.tools.RunScript.runTool(RunScript.java:143)
at org.h2.tools.RunScript.main(RunScript.java:70)
Any help on how to fix this issue would be appreciated.
LOCK TABLES is a MySQL-specific command; it isn't supported by H2.
You need to remove it from your file.
If you really need to use the same script in both MySQL and H2 and need this command in MySQL, you can try wrapping it in an executable comment. MySQL, unlike other databases, executes the code inside /*! … */ comments.
/*! LOCK TABLES `infrastructure_parameter` WRITE */;
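A minimal sketch of how the portable script ends up looking, keeping the statement terminator outside the comment (the same convention mysqldump uses), so that MySQL takes the lock while H2 only sees comments and empty statements:
CREATE TABLE `infrastructure_parameter` (
`id` varchar(36) NOT NULL,
PRIMARY KEY (`id`)
);
/*! LOCK TABLES `infrastructure_parameter` WRITE */;
-- illustrative row; MySQL holds the lock above, H2 simply skips it
INSERT INTO `infrastructure_parameter` (`id`) VALUES ('example-id');
/*! UNLOCK TABLES */;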

Postgres copy error extra data after last expected column from SQL Server BCP file

I am migrating a database from SQL Server 2016 hosted on Windows to Postgres 11 hosted on Debian.
I am exporting data with the BCP utility from SQL Server 2016 and am importing it in Postgres 11 with the COPY command.
For a lot of tables it works, but for some I keep getting the "extra data after last expected column" error, even though my file contains the same number of columns. It seems that the COPY command has trouble with lines that contain empty strings, shown as "NUL" in Notepad++.
Here is the definition of my table in SQL Server (table and column names changed).
Create table test (
TypeId int not null,
Name nvarchar(50) not null,
License nvarchar(50) not null,
LastChanged timestamp not null,
Id1 uniqueidentifier not null,
Id2 uniqueidentifier not null,
DescriptionCol nvarchar(256) not null default '',
ConditionCol bit not null default 0,
ConditionCol2 bit not null default 0,
ConditionCol3 bit not null default 1,
DescriptionCol2 nvarchar (2) not null default ''
)
And here is the table definition in Postgres.
CREATE TABLE test (
typeid integer NOT NULL,
name citext COLLATE pg_catalog."default" NOT NULL,
license citext COLLATE pg_catalog."default" NOT NULL,
lastchanged bytea NOT NULL,
id1 uuid NOT NULL,
id2 uuid NOT NULL DEFAULT uuid_generate_v4(),
descriptioncol text COLLATE pg_catalog."default" NOT NULL DEFAULT ''::text,
conditioncol boolean NOT NULL DEFAULT false,
conditioncol2 boolean NOT NULL DEFAULT false,
conditioncol3 boolean NOT NULL DEFAULT true,
descriptioncol2 text COLLATE pg_catalog."default" NOT NULL
)
I extract the data this way:
bcp Database.Schema.test out E:\MyFile.dat -S ServerName -U User -P Password -a65535 -c -C 65001
Then I connect to the remote Postgres server and import the data this way:
\copy Schema.test FROM 'E:\MyFile.dat' (DELIMITER E'\t', FORMAT CSV, NULL '', ENCODING 'UTF8');
Now if I open the generated file in Notepad++, I can see "NUL" characters, and that seems to be what the COPY command cannot handle.
If I put some data in place of the "NUL" character on the first row, the COPY command then reports "extra data after last expected column" on the third row instead of the first. I cannot edit the files and replace the "NUL" characters by hand, as I have hundreds of tables to migrate, some of them very big.
I need to specify an option either to the SQL Server BCP utility or to the Postgres COPY command to make this work.
As stated by @Tometzky,
the bcp utility represents an empty string as a null and a null string as an empty string.
This explains the cause of the unwanted behavior.
As an alternative to bcp, you may consider using SSIS (Microsoft SQL Server Integration Services) for this task. It is easy to use and offers wide compatibility between DBMS systems.

SQL syntax error

I use phpMyAdmin and want to create a table. I use the visual interface for creating the table, but I'll post the code from the "Preview SQL" option:
CREATE TABLE `baza`.`koncert` (
`koncert_id` INT(10) NOT NULL AUTO_INCREMENT ,
`koncert_naziv` VARCHAR(50) NULL ,
`koncert_lokacija` VARCHAR(50) NOT NULL ,
`koncert_datum` DATE NULL DEFAULT NULL ,
`koncert_cijena` DOUBLE(10) NOT NULL ,
`koncert_slika` VARCHAR(500) NOT NULL )
ENGINE = InnoDB CHARSET=utf8 COLLATE utf8_croatian_ci;
And I get this error:
1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL version for the right syntax to use near ')' NOT NULL, 'koncert_slika' VARCHAR(500) NOT NULL ) ENGINE=InnoDB CHARSET=ut
I tried setting the 'koncert_datum' default value to CURRENT_TIMESTAMP, but then I get the error "Invalid default value for 'koncert_datum'". I just don't understand what could possibly be wrong (and I used the phpMyAdmin visual interface to try to create the table!).
According to the documentation (https://dev.mysql.com/doc/refman/5.7/en/floating-point-types.html), the DOUBLE type needs both total digits and decimal digits when a precision is given. Something like:
`koncert_cijena` DOUBLE(12,2) NOT NULL ,
Also check the schema to make sure the database baza exists, and try running the statement without the "ENGINE = InnoDB CHARSET=utf8 COLLATE utf8_croatian_ci" part.
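Putting the pieces together, a version of the statement that should pass the parser might look like this (sketch: the PRIMARY KEY line is an addition, since an AUTO_INCREMENT column must be part of a key, and you can keep or drop the ENGINE/COLLATE clause as you prefer):
CREATE TABLE `baza`.`koncert` (
`koncert_id` INT(10) NOT NULL AUTO_INCREMENT,
`koncert_naziv` VARCHAR(50) NULL,
`koncert_lokacija` VARCHAR(50) NOT NULL,
`koncert_datum` DATE NULL DEFAULT NULL,
`koncert_cijena` DOUBLE(12,2) NOT NULL,
`koncert_slika` VARCHAR(500) NOT NULL,
PRIMARY KEY (`koncert_id`)
) ENGINE = InnoDB CHARSET=utf8 COLLATE utf8_croatian_ci;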

MySQL "CREATE TABLE IF NOT EXISTS" -> Error 1050

Using the command:
CREATE TABLE IF NOT EXISTS `test`.`t1` (
`col` VARCHAR(16) NOT NULL
) ENGINE=MEMORY;
Running this twice in the MySQL Query Browser results in:
Table 't1' already exists Error 1050
I would have thought that creating the table "IF NOT EXISTS" would not throw errors. Am I missing something or is this a bug? I am running version 5.1. Thanks.
It works fine for me in 5.0.27;
I just get a warning (not an error) that the table exists.
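If you want to see that explicitly, run the CREATE TABLE a second time and then ask the server for its diagnostics (plain MySQL; table name taken from the question):
CREATE TABLE IF NOT EXISTS `test`.`t1` (
`col` VARCHAR(16) NOT NULL
) ENGINE=MEMORY;
-- on the second execution this returns something like:
-- Note | 1050 | Table 't1' already exists
SHOW WARNINGS;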
As already stated, it's a warning, not an error, but if (like me) you want things to run without warnings, you can disable that warning and then re-enable it when you're done.
SET sql_notes = 0; -- Temporarily disable the "Table already exists" warning
CREATE TABLE IF NOT EXISTS ...
SET sql_notes = 1; -- And then re-enable the warning again
You can use the following query to create a table in a particular database in MySQL.
create database if not exists `test`;
USE `test`;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
/*Table structure for table `test` */
CREATE TABLE IF NOT EXISTS `tblsample` (
`id` int(11) NOT NULL auto_increment,
`recid` int(11) NOT NULL default '0',
`cvfilename` varchar(250) NOT NULL default '',
`cvpagenumber` int(11) NULL,
`cilineno` int(11) NULL,
`batchname` varchar(100) NOT NULL default '',
`type` varchar(20) NOT NULL default '',
`data` varchar(100) NOT NULL default '',
PRIMARY KEY (`id`)
);
I have a solution to a problem that may also apply to you. My database was in a state where a DROP TABLE failed because it couldn't find the table... but a CREATE TABLE also failed because MySQL thought the table existed. (This state could easily mess with your IF NOT EXISTS clause).
I eventually found this solution:
sudo mysqladmin flush-tables
For me, without the sudo, I got the following error:
mysqladmin: refresh failed; error: 'Access denied; you need the RELOAD privilege for this operation'
(Running on OS X 10.6)
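If you would rather not shell out to mysqladmin, the same refresh can be issued from any SQL client session (it needs the same RELOAD privilege):
FLUSH TABLES;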
Create the MySQL connection with the parameter 'raise_on_warnings': False; it will ignore the warning. For example:
import mysql.connector
config = {'user': 'user', 'password': 'passwd', 'host': 'localhost', 'database': 'db', 'raise_on_warnings': False}
cnx = mysql.connector.connect(**config)
I had a similar problem to @CraigWalker's on Debian: my database was in a state where a DROP TABLE failed because it couldn't find the table, but a CREATE TABLE also failed because MySQL thought the table still existed. So the broken table still existed somewhere, although it wasn't there when I looked in phpMyAdmin.
I created this state by simply copying the whole folder that contained a database with some MyISAM and some InnoDB tables:
cp -a /var/lib/mysql/sometable /var/lib/mysql/test
(this is not recommended!)
None of the InnoDB tables were visible in the new test database in phpMyAdmin.
sudo mysqladmin flush-tables didn't help either.
My solution: I had to delete the new test database with drop database test and copy it with mysqldump instead:
mysqldump somedatabase -u username -p -r export.sql
mysql test -u username -p < export.sql
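In SQL terms, the cleanup step before re-importing is just the following (database name as used above; the target database has to exist again before the restore):
DROP DATABASE test;
CREATE DATABASE test;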
Well, a lot of answers have already been provided, and many of them make sense.
Some mention that it is just a warning, and some give a temporary way to disable warnings. All of that will work, but it adds risk when the number of transactions in your DB is high.
I came across a similar situation today, and here is the query I came up with:
declare
begin
execute immediate '
create table "TBL" ("ID" number not null)';
exception when others then
if SQLCODE = -955 then null; else raise; end if;
end;
/
This is simple: if an exception comes up while running the query, it is suppressed. Note that the block above is PL/SQL (error -955 is Oracle's "name is already used by an existing object"), so it applies to Oracle; other databases need their own equivalent.
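For MySQL specifically, a comparable sketch (hypothetical procedure name, shown only to illustrate the idea) swallows error 1050 with a condition handler inside a stored procedure:
DELIMITER $$
CREATE PROCEDURE create_tbl_if_missing()
BEGIN
-- 1050 = "Table already exists"; the empty handler suppresses it
DECLARE CONTINUE HANDLER FOR 1050 BEGIN END;
CREATE TABLE `TBL` (`ID` INT NOT NULL);
END$$
DELIMITER ;
CALL create_tbl_if_missing();
In practice, CREATE TABLE IF NOT EXISTS plus the sql_notes approach above is simpler; this is only the closest analogue to the exception-swallowing block.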
If anyone is getting this error after a phpMyAdmin export, using the custom options and adding the "DROP TABLE" statements cleared it right up for me.