When I place an order with an obligation to pay, I get the following exception:
[PrestaShopException]
Can't load Order status
at line 242 in file classes/PaymentModule.php
237. }
238.
239. $order_status = new OrderState((int) $id_order_state, (int) $this->context->language->id);
240. if (!Validate::isLoadedObject($order_status)) {
241. PrestaShopLogger::addLog('PaymentModule::validateOrder - Order Status cannot be loaded', 3, null, 'Cart', (int) $id_cart, true);
242. throw new PrestaShopException('Can\'t load Order status');
243. }
244.
245. if (!$this->active) {
246. PrestaShopLogger::addLog('PaymentModule::validateOrder - Module is not active', 3, null, 'Cart', (int) $id_cart, true);
247. die(Tools::displayError());
PaymentModuleCore->validateOrder - [line 58 - modules/ps_wirepayment/controllers/front/validation.php] - [9 Arguments]
Ps_WirepaymentValidationModuleFrontController->postProcess - [line 270 - classes/controller/Controller.php]
ControllerCore->run - [line 509 - classes/Dispatcher.php]
DispatcherCore->dispatch - [line 24 - override/classes/Dispatcher.php]
Dispatcher->dispatch - [line 28 - index.php]
Insert the following rows into the ps_order_state table with phpMyAdmin:
INSERT INTO `ps_order_state` (`id_order_state`, `invoice`, `send_email`, `module_name`, `color`, `unremovable`, `hidden`, `logable`, `delivery`, `shipped`, `paid`, `deleted`) VALUES
(1, 0, 1, 'cheque', 'RoyalBlue', 1, 0, 0, 0, 0, 0, 0),
(2, 1, 1, '', 'LimeGreen', 1, 0, 1, 0, 0, 1, 0),
(3, 1, 1, '', 'DarkOrange', 1, 0, 1, 0, 0, 1, 0),
(4, 1, 1, '', 'BlueViolet', 1, 0, 1, 1, 1, 1, 0),
(5, 1, 0, '', '#108510', 1, 0, 1, 1, 1, 1, 0),
(6, 0, 1, '', 'Crimson', 1, 0, 0, 0, 0, 0, 0),
(7, 1, 1, '', '#ec2e15', 1, 0, 0, 0, 0, 0, 0),
(8, 0, 1, '', '#8f0621', 1, 0, 0, 0, 0, 0, 0),
(9, 1, 1, '', 'HotPink', 1, 0, 0, 0, 0, 1, 0),
(10, 0, 1, 'bankwire', 'RoyalBlue', 1, 0, 0, 0, 0, 0, 0),
(11, 0, 0, '', 'RoyalBlue', 1, 0, 0, 0, 0, 0, 0),
(12, 1, 1, '', 'LimeGreen', 1, 0, 1, 0, 0, 1, 0),
(13, 1, 0, '', '#DDEEFF', 0, 0, 1, 0, 0, 0, 0);
Your ps_wirepayment module is trying to set the order status to an invalid or deleted id_order_state. If you haven't modified it, the bank wire module relies on the PS_OS_BANKWIRE value in the ps_configuration table, so make sure that ID is valid and points to an existing order state in your database.
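A quick way to check this (a sketch assuming the default ps_ table prefix) is to pair the configured status ID with its ps_order_state row:
-- Look up the status ID the bank wire module uses and see whether it
-- still points at an existing row in ps_order_state.
SELECT c.name, c.value AS configured_status_id, s.id_order_state, s.deleted
FROM ps_configuration c
LEFT JOIN ps_order_state s ON s.id_order_state = c.value
WHERE c.name = 'PS_OS_BANKWIRE';
If id_order_state comes back NULL, the configured status row no longer exists, which is exactly what makes Validate::isLoadedObject() fail in PaymentModule::validateOrder.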
I need to insert my values only if they are not already present in my table. I wrote this function:
do
$$
declare
    v_video_config_bundle_id bigint;
    v_are_records_exist boolean;
begin
    select id from config_bundle into v_video_config_bundle_id where code = 'video';
    select count(id) > 0 from config_bundle into v_are_records_exist
    where config_bundle_id = v_video_config_bundle_id
      and preference = 'true' and amount = 0 and repeatability in (1,7,14,21,30,45) and format='day';
    case
        when (v_are_records_exist = false) then
            insert into config_plan(config_bundle_id, amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default)
            values (v_video_config_bundle_id, 0, 7, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
                   (v_video_config_bundle_id, 0, 14, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
                   (v_video_config_bundle_id, 0, 21, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false);
    end;
end;
$$
But it fails with this error:
syntax error at or near ";"
Position: 1420
How to fix it?
Let SQL make all the decisions: put the determination logic into a single SQL statement by converting the filtering logic into a NOT EXISTS (SELECT ...) structure. So something like the following, with config_bundle_id picked up from config_bundle inside the select:
insert into config_plan(config_bundle_id, amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default)
with new_config (amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default) as
( values ( 0, 7, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
         ( 0, 14, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
         ( 0, 21, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false)
)
select (select id from config_bundle where code = 'video') as config_bundle_id,
       amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default
from new_config nc
where not exists ( select null
                   from config_plan cp
                   where (cp.preference, cp.amount, cp.repeatability, cp.format) =
                         (nc.preference, nc.amount, nc.repeatability, nc.format)
                 );
The above is not tested as you did not supply table description and sample data. However, see here for an example of the technique.
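For the record, the syntax error in the original DO block comes from the CASE statement: a PL/pgSQL CASE statement must be closed with END CASE (and it raises CASE_NOT_FOUND when no branch matches and there is no ELSE), so a plain IF is simpler here. A minimal, untested sketch of the block with those fixes, assuming the existence check was meant to scan config_plan rather than config_bundle:
do
$$
declare
    v_video_config_bundle_id bigint;
    v_are_records_exist boolean;
begin
    select id into v_video_config_bundle_id
    from config_bundle
    where code = 'video';

    -- assumption: the existence check belongs on config_plan, which owns these columns
    select count(*) > 0 into v_are_records_exist
    from config_plan
    where config_bundle_id = v_video_config_bundle_id
      and preference = 'true' and amount = 0
      and repeatability in (1,7,14,21,30,45) and format = 'day';

    if not v_are_records_exist then
        insert into config_plan(config_bundle_id, amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default)
        values (v_video_config_bundle_id, 0, 7,  'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
               (v_video_config_bundle_id, 0, 14, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
               (v_video_config_bundle_id, 0, 21, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false);
    end if;
end;
$$;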
I have a csv file with the following structure:
Tokens,Tags,Polarities
"['i', 'agree', 'about', 'arafat', '.', 'i', 'mean', ',', 'shit', ',', 'they', 'even', 'gave', 'one', 'to', 'jimmy', 'carter', 'ha', '.', 'it', 'should', 'be', 'called', ""''"", 'the', 'worst', 'president', ""''"", 'prize', '.']","[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]","[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]"
"['musicmonday', 'britney', 'spears', '-', 'lucky', 'do', 'you', 'remember', 'this', 'song', '?', 'it', '`', 's', 'awesome', '.', 'i', 'love', 'it', '.']","[0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]","[-1, 2, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]"
"['wtf', '?', 'hilary', 'swank', 'is', 'coming', 'to', 'my', 'school', 'today', ',', 'just', 'to', 'chill', '.', 'lol', 'wow']","[0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]","[-1, -1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]"
"['my', '3-year-old', 'was', 'amazed', 'yesterday', 'to', 'find', 'that', ""'"", 'real', ""'"", '10', 'pin', 'bowling', 'is', 'nothing', 'like', 'it', 'is', 'on', 'the', 'wii', '...']","[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]","[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, -1]"
"['God', 'damn', '.', 'That', 'Sony', 'remote', 'for', 'google', 'is', 'fucking', 'hideeeeeous', '!']","[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]","[-1, -1, -1, -1, -1, -1, -1, 0, -1, -1, -1, -1]"
I am trying to read the file as follows:
twitter_train = pd.read_csv('twitter_train.csv')
Then I can see that it has a correct structure:
twitter_train.head(3)
Tokens Tags Polarities
0 ['i', 'agree', 'about', 'arafat', '.', 'i', 'm... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -...
1 ['musicmonday', 'britney', 'spears', '-', 'luc... [0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... [-1, 2, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1,...
2 ['wtf', '?', 'hilary', 'swank', 'is', 'coming'... [0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... [-1, -1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1,...
I want to convert each column to a list of lists, for example:
twitter_train_lists = twitter_train['Tokens'].tolist()
But I get an incorrect structure, with an extra \ or " around each element and around the list itself:
['[\'i\', \'agree\', \'about\', \'arafat\', \'.\', \'i\', \'mean\', \',\', \'shit\', \',\', \'they\', \'even\', \'gave\', \'one\', \'to\', \'jimmy\', \'carter\', \'ha\', \'.\', \'it\', \'should\', \'be\', \'called\', "\'\'", \'the\', \'worst\', \'president\', "\'\'", \'prize\', \'.\']',
"['musicmonday', 'britney', 'spears', '-', 'lucky', 'do', 'you', 'remember', 'this', 'song', '?', 'it', '`', 's', 'awesome', '.', 'i', 'love', 'it', '.']",
"['wtf', '?', 'hilary', 'swank', 'is', 'coming', 'to', 'my', 'school', 'today', ',', 'just', 'to', 'chill', '.', 'lol', 'wow']",
'[\'my\', \'3-year-old\', \'was\', \'amazed\', \'yesterday\', \'to\', \'find\', \'that\', "\'", \'real\', "\'", \'10\', \'pin\', \'bowling\', \'is\', \'nothing\', \'like\', \'it\', \'is\', \'on\', \'the\', \'wii\', \'...\']',
"['God', 'damn', '.', 'That', 'Sony', 'remote', 'for', 'google', 'is', 'fucking', 'hideeeeeous', '!']"]
How can I extract the lists properly from this csv file to get the correct structure:
[['i', 'agree', 'about', 'arafat', '.', 'i', 'mean', ',', 'shit', ',', 'they', 'even', 'gave', 'one', 'to', 'jimmy', 'carter', 'ha', '.', 'it', 'should', 'be', 'called', "''", 'the', 'worst', 'president', "''", 'prize', '.'],
['musicmonday', 'britney', 'spears', '-', 'lucky', 'do', 'you', 'remember', 'this', 'song', '?', 'it', '`', 's', 'awesome', '.', 'i', 'love', 'it', '.'],
['wtf', '?', 'hilary', 'swank', 'is', 'coming', 'to', 'my', 'school', 'today', ',', 'just', 'to', 'chill', '.', 'lol', 'wow'],
['my', '3-year-old', 'was', 'amazed', 'yesterday', 'to', 'find', 'that', "'", 'real', "'", '10', 'pin', 'bowling', 'is', 'nothing', 'like', 'it', 'is', 'on', 'the', 'wii', '...'],
['God', 'damn', '.', 'That', 'Sony', 'remote', 'for', 'google', 'is', 'fucking', 'hideeeeeous', '!']]
You can find the original dataset file here: https://github.com/1tangerine1day/Aspect-Term-Extraction-and-Analysis/tree/master/data
Update:
I tried another way, but I have the same problem:
import csv
with open('twitter_train.csv', newline='') as f:
reader = csv.reader(f)
data = list(reader)
Another incorrect output:
print(data[3])
["['wtf', '?', 'hilary', 'swank', 'is', 'coming', 'to', 'my', 'school', 'today', ',', 'just', 'to', 'chill', '.', 'lol', 'wow']", '[0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]', '[-1, -1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]']
Thanks in advance!
The values in your csv are actually strings, not lists. You need to turn them into actual lists:
twitter_train = pd.read_csv('twitter_train.csv')
twitter_train['Tokens'] = list(twitter_train['Tokens'].str.strip("['").str.rstrip("']").str.split("', '"))
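Note that the string-stripping approach breaks on tokens that themselves contain quotes (for example the "''" entries in the first row), because each cell is simply the string representation of a Python list. A more robust sketch, assuming the same twitter_train.csv, parses each cell with ast.literal_eval:
import ast
import pandas as pd

twitter_train = pd.read_csv('twitter_train.csv')

# Each cell holds the string representation of a Python list,
# so parse it back into a real list safely.
tokens = twitter_train['Tokens'].apply(ast.literal_eval).tolist()
tags = twitter_train['Tags'].apply(ast.literal_eval).tolist()
polarities = twitter_train['Polarities'].apply(ast.literal_eval).tolist()

print(tokens[0][:5])  # ['i', 'agree', 'about', 'arafat', '.']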
I'm trying to add 9500 or so records to a table from a different table. The problem is that the table I'm adding them to has an egregious number of columns, while the table I'm pulling from is pretty normalized, so I'm not quite sure how to do the inner join on it. This is what I have so far:
INSERT INTO Products (Code, ManufacturerId, VendorId, IsActive, Name, NamePlural, ShortDescription, Description, UpSellMessage, Cost, Price, IsOnSale, SalePrice, IsShipable, ShipPrice, Weight, Length, Width, Height, HasCountryTax, HasStateTax, HasLocalTax, DateAdded, Keywords, Inventory_Tracked, DropShip, DownloadOneTime, DealTimeIsActive, MMIsActive, ProductType, RecurringSubscriptionPrice, PaymentPeriod, Term, BillingDelay, SaleType, BundleGroupID, ComputePrice, PriceUp, PriceChangedAmount, PriceChangedType, SwatchesPerRow, ChangeOnClick, ChangeOnMouseover, ShowCloseUpLink, LinkBigImage, SwatchAllignment, DescriptionAllignment, DetailLink)
FROM otherProducts table2
INNER JOIN table2
VALUES (table2.col1, 1, 1, 0, table2.col1, table2.col1, table2.col4, table2.col4, 'some message that does not matter', table2.col3, table2.col2, 0, '0', 1, '8', '3', '8', '8', '8', 1, 1, 1, '12/27/2013', ' ', 0, 0, 0, 0, 0, 0, 0, '0', 0, 0, 0, 0, 0, 1, '0', 0, 0, 0, 0, 0, 0, 0, 0, table2.col1+'.aspx');
I can see this is a giant mess, and it just gives me errors. Can anyone help me with this?
You want the INSERT ... SELECT syntax:
INSERT INTO Products (Code, ManufacturerId, VendorId, IsActive, Name, NamePlural, ShortDescription, Description, UpSellMessage, Cost, Price, IsOnSale, SalePrice, IsShipable, ShipPrice, Weight, Length, Width, Height, HasCountryTax, HasStateTax, HasLocalTax, DateAdded, Keywords, Inventory_Tracked, DropShip, DownloadOneTime, DealTimeIsActive, MMIsActive, ProductType, RecurringSubscriptionPrice, PaymentPeriod, Term, BillingDelay, SaleType, BundleGroupID, ComputePrice, PriceUp, PriceChangedAmount, PriceChangedType, SwatchesPerRow, ChangeOnClick, ChangeOnMouseover, ShowCloseUpLink, LinkBigImage, SwatchAllignment, DescriptionAllignment, DetailLink)
select table2.col1, 1, 1, 0, table2.col1, table2.col1, table2.col4, table2.col4, 'some message that does not matter', table2.col3, table2.col2, 0, '0', 1, '8', '3', '8', '8', '8', 1, 1, 1, '12/27/2013', ' ', 0, 0, 0, 0, 0, 0, 0, '0', 0, 0, 0, 0, 0, 1, '0', 0, 0, 0, 0, 0, 0, 0, 0, table2.col1+'.aspx'
FROM otherProducts table2;
I don't think you need to do a join at all.
I have a query like this
SET QUOTED_IDENTIFIER OFF
SET DATEFORMAT 'mdy'
INSERT INTO TABLE1
(AccountID, TimeStamp, UserID, NodeID, Deleted, UserPriority, ParentRecordID, NodeLevel, Name, NodeClass, DeviceID, DeviceType, SubTypeLevel)
VALUES
(0, "10/03/2002 02:33:39", 0, 0, 0, 0, 0, 0,"XXXXXX",7000, 0, 0, 0`)
When I replace XXXXXX with منطقة تحكم بالبداية السريعة, the rest of the query after the string is displayed right to left, like this:
SET QUOTED_IDENTIFIER OFF
SET DATEFORMAT 'mdy'
INSERT INTO TABLE1
(AccountID, TimeStamp, UserID, NodeID, Deleted, UserPriority, ParentRecordID, NodeLevel, Name, NodeClass, DeviceID, DeviceType, SubTypeLevel)
VALUES
(0, "10/03/2002 02:33:39", 0, 0, 0, 0, 0, 0, "منطقة تحكم بالبداية السريعة", 7000, 0, 0, 0)
Please tell me how to overcome this.
I am using SQL Server 2000 MSDE.
You can resolve this by adding the letter N before each value that needs conversion (i.e. each Unicode string literal).
For example:
INSERT INTO TABLE1(AccountID, TimeStamp, UserID, NodeID, Deleted, UserPriority,
ParentRecordID, NodeLevel, Name, NodeClass, DeviceID, DeviceType, SubTypeLevel)
VALUES
(0, "10/03/2002 02:33:39", 0, 0, 0, 0, 0, 0, "منطقة تحكم بالبداية السريعة"N, 7000, 0, 0, 0)
That is:
Insert ... Into ... Values (id1, id2, ..., N'Arabic word', N'Hebrew word', N'Chinese word');
The issue is solved by adding N before the nvarchar value:
SET QUOTED_IDENTIFIER OFF
SET DATEFORMAT 'mdy'
INSERT INTO ControlTreeEx
(AccountID, TimeStamp, UserID, NodeID, Deleted, UserPriority, ParentRecordID, NodeLevel, Name, NodeClass, DeviceID, DeviceType, SubTypeLevel)
VALUES
(0, "10/03/2002 02:33:39", 0, 0, 0, 0, 0, 0, N'منطقة تحكم بالبداية', 7000, 0, 0, 0)