WordPress error_log filling up with SQL errors

Every time I search a customer number, I get one of these errors in error_log within the WordPress document root.
Should I create this table, or can the error be fixed in some other way?
I need some help debugging this, please; I am in panic mode:
[16-Dec-2022 14:12:35 UTC] WordPress database error Table 'store_wp557.wp_wc_customer_lookup' doesn't exist for query SELECT SUM(wps.total_sales)
FROM `wp_wc_customer_lookup` AS wpl
JOIN `wp_wc_order_stats` as wps ON wpl.customer_id = wps. customer_id
WHERE wpl.user_id=122442
made by require('wp-blog-header.php'), require_once('wp-load.php'), require_once('wp-config.php'), require_once('wp-settings.php'), do_action('wp_loaded'), WP_Hook->do_action, WP_Hook->apply_filters, WC_Form_Handler::process_login, wp_signon, do_action('wp_login'), WP_Hook->do_action, WP_Hook->apply_filters, XLWCTY_Common::update_user_order_count, XLWCTY_Common::get_customer_total_spent

You can go to the WooCommerce -> Status -> Tools page in the backend; there you'll find the Verify base database tables tool.
Click the Verify database button and it will validate and recreate the missing WooCommerce base tables.
You'll also see the Update database tool; click the Update database button and the database tables will be updated to the latest WooCommerce version.
Note: make sure you have a full backup of the site and database before running those tools; otherwise you might lose data if something goes wrong.
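If you want to confirm which tables are actually missing before running those tools, a quick check from MySQL is enough. A minimal sketch, assuming your site uses the default wp_ table prefix (adjust if yours differs):

SHOW TABLES LIKE 'wp_wc_%';
SHOW TABLES LIKE 'wp_wc_customer_lookup';

If the second query returns no rows, the table from the error is indeed absent, and the Verify base database tables tool should recreate it.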

Related

How to copy a table design from one database to another (VB.NET and Access)

The aim is that, when updating the application, we also update the Access database without altering its data: update only the new tables and new columns. So I want to copy the exact table, with its structure, from the new database to the old one (VB.NET and an Access database).
What I've tried is detecting the differences between the old database and the new one: combobox1 lists the only missing table, and combobox2 lists the columns missing from a table that already exists in both databases, together with each column's data type.
So I want to copy the entire missing table and then create only the missing columns.
Thank you.
There is no built-in tool to do this.
Worse yet, there is no "generate change scripts" feature in Access
(like, say, in SQL Server).
So, how do you approach this issue? What do the accounting systems or commercial programs that use ms-access as their database do?
Well, you have to build a kind of "upgrade" system into your software.
This means two things:
To add a new column to a table (for example), you NEVER open up the database directly in Access; you "add" or "write" code that adds the field in question.
In fact, I had an application deployed out in the field on many desktops.
So, I had a code module called upgrade, and each time I needed a new field or whatever, I would write the code to add that new column.
AS LONG as I always added things through that code module, I was ok (never break the rule for adding new fields or tables, or even increasing the length of some field: use code).
And it became quite easy after I had some code written. I would in fact often cut + paste a previous bit of code to add a new column to a table.
However, after about 5 years, that messy code module had 800+ lines of code in it!
But I ALSO realized that MOST things, like adding a new column or whatever, were the same code over and over.
So, what I did next was build an "upgrade" table. It looked like this:

Version   Action     SQL                           RunCode
2.5       AddTable   tblCustomers
2.5       AddField   "sql here to add the field"
etc. etc.
So, I had a version number, and I compared it against the upgrade table. I had an "action" column, and the code would simply loop through this table and do whatever each row said.
So, for example, to add a field, you can use Access "DDL" commands (data definition language commands; most SQL systems support this, and so does Access).
So, say, like this:
' any new table code goes here:
If lngVer < 1148 Then
    ' add event invoice text option
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD InvoiceText ntext NULL"
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD HideEventDate bit NULL default 0"
End If

Or, say, to increase a column length from 50 to 255:

db.Execute "ALTER TABLE tblGroupRemind ALTER COLUMN Anotes text(255)", dbFailOnError
As noted, since oh so many of the commands were VERY similar, I started putting that information into a table, and then I would execute the required upgrades in a loop.
For a whole new table? Well, I thought that was too much code, so I always shipped a blank, empty database (upgrade.accDB); for new tables, I would place the table in that upgrade database and "transfer/copy" it from the upgrade database to the real one. That way I could, with great ease, design a whole new table in the Access designers and then add/copy it to the upgrade.accDB database.
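As an aside, if you'd rather do that copy with plain SQL than a transfer command, Access SQL can read from an external database via the IN clause. A minimal sketch with hypothetical names (tblNewFeature, and an upgrade file shipped next to the front end); note that SELECT ... INTO copies structure and data but not indexes or keys, so add those with DDL afterwards:

SELECT * INTO tblNewFeature
FROM tblNewFeature IN 'C:\MyApp\upgrade.accdb';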
As noted, the above ideas and approaches work quite well.
In fact, over time I found it LESS hassle, while coding away, to add the new column in code than to open up ms-access, then the table, then the designer, and make the changes by hand.
However, the BIG issue with the above?
Well, you have to get all users upgraded to your EXISTING schema first, and there are no automated tools for that.
In fact, before I had any automated tools, if I added some field to some table, I would simply type into Notepad that the new field in such and such table was required.
Then, when on a customer site, I would open up their database and work through the Notepad document's list of changes. (That is what I was doing before I started automating the process; of course, it is not always practical to be "on site" or to have the customer's database.)
But ONCE I had all of the above working?
Then during development, I would open up my "upgrade" database and add the new row and action (new table, new column, and more).
I even had a column that defined a function to run AFTER that one command, since quite often when you add a new column or change something in a table, you need to copy, or at least process, some data after you make that change.
Once you get the above going?
Then you simply NEVER make changes in the data tables directly, but use your "system" for this. And that works REALLY well.
For one, a customer could open up an older data file, say one from 4 or 5 years ago. The application version number would be detected, and the upgrade code would run through all the versions to bring that database up to date. (I did this automatically on startup, so they never even knew such an upgrade had occurred.)
So, you just have to make sure that for each change you make, you put that code in your upgrade system, and you are done.
But for existing systems? You have to look at what changes you made since the last deploy and write out the "DDL" commands (the ALTER TABLE SQL commands) by hand.
There is no automated way of doing this.
As an FYI?
One of the BEST and most valuable free tools in Visual Studio is the SQL Server schema compare utility. It will not only automatically detect and show you the changes between two SQL Server databases, but will also upgrade the target for you (very nice).
But no such system is available for Access. In fact, that utility is so valuable for SQL Server that you might consider upgrading this application from Access to SQL Server. With that utility, I can work locally, adding fields, columns, tables, and even stored procedures to the SQL database. When I am on site (or even over a VPN), I run that compare tool; it shows the changes and ALSO has a button to update the target schema.
I don't know of an automated "schema" checker and updater for Access.
So, what I suggest above ONLY works if you put such a system in place and THEN, as a developer, always make your schema changes through your upgrade system, never directly in the database with ms-access.

How to get the query displayed when a change is made to a table or a field in a table in Postgresql?

I have used MySQL for some projects and recently I moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. I haven't found such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for testing something in the local database (without explicitly typing the query): I could copy the printed query and run it on the server. Now it seems I have to do all of that manually. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ... there should not be any reason for your output not to include newly added columns, no matter how you get your results, whether that is psql on the command line, PgAdmin3, or any other IDE.
After you add new columns, it is possible that these changes are still part of an open transaction in another window or SQL session; be sure to COMMIT that transaction. Note that your changes to data or schema will not be visible to any other database client until the transaction commits.
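For example, a minimal sketch with a hypothetical table name; the new column only becomes visible to other sessions once the COMMIT runs:

BEGIN;
ALTER TABLE customers ADD COLUMN notes text;
COMMIT;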
If your IDE still does not show the changes, you may need to refresh the list of tables or, if that option is not available, restart the IDE. If that still does not work, maybe you should use a better IDE.
If you used SELECT field1, field2, ... FROM ... then you must add the new fields to your SELECT statement(s), but that would be true for any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client whenever the database schema is altered.
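For example, an event trigger can fire after every DDL command and push a notification to any listening clients. A minimal sketch, assuming PostgreSQL 9.5+ (for pg_event_trigger_ddl_commands()); the channel name ddl_events is arbitrary:

-- Fires at ddl_command_end and reports each DDL command on the 'ddl_events' channel.
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
DECLARE
    cmd record;
BEGIN
    FOR cmd IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        PERFORM pg_notify('ddl_events', cmd.command_tag || ' on ' || cmd.object_identity);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER ddl_notify ON ddl_command_end
    EXECUTE PROCEDURE notify_ddl();

A client then runs LISTEN ddl_events; and receives a notification naming each schema change. Note this tells you that a change happened and to what, not the full original query text.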

Sync Framework between SQL Server 2008 and SQL Server CE

I'm working with a moderately sized database of about 60,000 records. I am building a mobile application which will be able to check out a single table into a compact .sdf for viewing and altering on the device, then allow the user to sync their changes back up with the main server and receive any new information.
I have set it up with the Sync Framework using a WCF Service Library. When setting up the connection, for some reason the database won't let me check "Use SQL Server Change Tracking", and it throws up this error:
"Unable to initialize the client database, because the schema for table 'Inventory' could not be retrieved by the GetSchema() method of DbServerSyncProvider. Make sure that you can establish a connection to the client database and that either the SelectIncrementalInsertsCommand property or the SelectIncrementalUpdatesCommand property of the SyncAdapter is specified correctly."
So I leave it unchecked and set it to use some already-created columns, "AddDateTime" and "LastEditTime", and it seems to work okay; after a massive amount of tweaking I have it partially working. The changes on the device sync up perfectly with the database: updates, deletes, all get applied. However, changes on the server side never get updated. I've made sure everything is set up right with the bidirectional setup, so that shouldn't be the problem. And I let it sit overnight, so the database received ~500 new records; this morning it actually synced the latest 24 entries to the device... out of 500 new. So that should be further proof that it's able to receive information from the server, but for all useful purposes, it's not.
I've tried pretty much everything, and I'm honestly getting close to losing it. If anyone has any ideas they can throw out, I would be most grateful.
I'm not sure if I just need to go back and figure out why I can't use "SQL Server Change Tracking", or if there is a simple explanation for why it's not actually syncing 99% of the changes on the server back to the client.
Also, the server database table schema can't be altered, as a lot of other services use it. But the compact database can be whatever the heck it needs to be to just store the table and sync properly in both directions.
Thank you!!
Quick Overview:
Using WCF and syncing without SQL Server Change Tracking (Fully enabled on server and database)
Syncing changes from client to server works perfectly
Syncing from server back to the client not so much: out of 500 new entries overnight, on a sync it downloaded 24.
EDIT: JuneT got me thinking about the times and their anchors. When I synced this morning, it pulled 54 of about 300 newly added records. I went into the line (there are about 60 or so columns, so I removed them for readability; this is kind of a joke):
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > @sync_last_received_anchor AND [LastEditDate] <= @sync_new_received_anchor AND [AddDateTime] <= @sync_last_received_anchor)";
And replaced the anchors with two DIFFERENT literal times. Upon syncing, it now returns the rows trapped between those two; I also took out the middle condition, giving me:
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > '2012-06-13 01:03:07.470' AND [AddDateTime] <= '2012-06-14 08:54:27.727')"; (NOTE: the second date is just the current time now)
Though it returned a few hundred more rows than initially planned (I set the date gap expecting 600 and it returned just over 800), it does in fact sync the client up with the new server changes.
Can anyone explain why I can't use @sync_last_received_anchor, and what I should be looking for? I suppose I could always add a box that allows the user to select the date to begin updating from. Or maybe add some sort of XML file to store the sync date, updated any time a sync completes successfully?
Thanks!
EDIT:
Ran the SQL Profiler on it... the date (@sync_last_received_anchor) is getting set to 8 hours ahead of whatever time it really is. I have no idea how or why it's doing this, but that would definitely make sense.
Turns out the anchors are collected like this:
this.SelectNewAnchorCommand.CommandText = "Select @sync_new_received_anchor = GETUTCDATE()";
That UTC date is what was causing the 8-hour gap. To fix it, either change it to GETDATE(), or convert your columns to UTC time in the WHERE clause of the commands.
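Sketching both fixes in T-SQL; treat the column name and the 8-hour offset as assumptions from this particular setup. Either anchor in local server time:

Select @sync_new_received_anchor = GETDATE()

...or keep the UTC anchor and shift the local-time column up to UTC inside the incremental commands' WHERE clause:

WHERE DATEADD(hour, 8, [LastEditDate]) > @sync_last_received_anchor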
After spending many hours with many cups of coffee, I've figured out how to solve this error of mine. While I was running the code in the desktop testing area, everything seemed to work perfectly; however, the same code and web service on the target device gave this error repeatedly. Then, suddenly, the "dbo_" prefixes on the compact database table names started looking interesting, like they were trying to tell me something really important. So, I listened...
Configuration.SyncTables.Add("Products);
in ClientSyncAgent.cs should be changed to
Configuration.SyncTables.Add("dbo_Products");
[Exeunt]

Getting around cyclical foreign key errors when trying to generate insert data scripts in SQL 2008

I am trying to generate some insert scripts using the SQL Server 2008 Script Wizard. Upon generating the scripts, I get the following error:
"The selected database contains foreign keys that create a cycle. Publishing data only is not supported for databases with cyclical foreign key relationships."
I've attempted to disable and remove all constraints in the database, but the error still occurs. Is there any way to get around this? Possibly to make SQL ignore the constraints while generating the scripts?
On the Wizard page where you choose the radio button to select All Database Objects or Specific Objects, make sure to select All Database Objects. For some reason the tool needs something in there to generate against, even if you just want the table insert script.
Once I changed that radio button to All Database Objects and selected the Advanced option to generate Type of script = Data Only, it worked all the way through.
I had the same problem as the OP. Then I tried again, this time in the advanced options, for the "types of data to script" option, I selected "schema and data" rather than data only. Then it worked for me without complaining about cyclical keys.
I was having the same issue, and I discovered today that you can use SQL Server Management Studio 2012 against a 2008 R2 DB and you won't get the error:
Sql Server Scripting Data Only: Workaround for CyclicalForeignKeyException?
Saving to file vs. to a new Query editor window seems to make it work for me on Management Studio 2008 :\
First off, IMHO HLGEM's response is a bit cavalier; there are at times valid reasons to have cyclic references.
That said, I think the script generator is hypersensitive. It seems to think just about any PK/FK pair is "cyclic", and I ended up having to use a copy of my database from which I'd stripped all keys to get the export past the "cyclical" error. A script like the following can help you drop keys globally, but of course be careful!
SELECT 'ALTER TABLE ' + object_name(parent_obj) + ' DROP CONSTRAINT ' + [name] AS Script
FROM sysobjects
WHERE xtype IN ('F')
[I didn't write this. See http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=46682]
Further, the tool is pretty useless in terms of feedback, since its report doesn't provide enough detail to narrow down where the supposed cyclical references exist.
Finally, I found the tool to be pretty flaky, in that I get random timeouts. One other observation I haven't researched extensively: I think the tool may require you to start from scratch after the cyclical error to clear its cache, since I see different behavior when I use the Previous button versus starting afresh.
You can export the data by setting the script option - "Script Check Constraints" to False
Sorry this will not work :(
You will have to determine which table is causing the issue.
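To help determine that, it may be worth listing every foreign key together with the table it sits on and the table it references, then looking for pairs (or longer chains) of tables that point at each other. A hypothetical helper query against the SQL Server catalog views:

SELECT fk.name AS fk_name,
       OBJECT_NAME(fk.parent_object_id) AS from_table,
       OBJECT_NAME(fk.referenced_object_id) AS to_table
FROM sys.foreign_keys AS fk
ORDER BY from_table, to_table;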
I was getting the same error because I didn't have a table selected in the object list (one big table I wanted to create in another script). Selecting all of the tables solved the problem.
PS: Maybe a little bit late, but searching for CyclicalForeignKeyException turns this up first in Google.

Problem with a MS Access query after a "Compact and repair" operation

I have an Access application that uses the classical front-end/back-end approach. Yesterday, the back end got corrupted for a reason I don't know. So I opened the back end with Access 2003, and Access asked me if I wanted to repair the file; I said yes, and it seemed to work.
I can open the database, see the tables' contents, and run most of the queries.
However, there is one Access query that doesn't work with a specific WHERE clause.
Example :
// This works in the original DB, but not in the compacted one :
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3 AND tbl2.f = 1;
// This works in both the original and the compacted one :
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3;
When I try to run the queries, nothing happens. The Access process starts to use most of the CPU and the GUI stops responding. If I run the query from the query editor, I can use Ctrl+Break to stop the execution. I tried giving the query lots of time and it didn't help.
I've checked the execution plan in showplan.out and it seems correct (at least, it should not take forever to execute).
I tried to compact the DB again. I tried to import the tables into a new DB. I even tried to import the tables and their data into an mdb file that was in a known-good state (from a backup).
Anyone have an idea?
Sounds like an index got corrupted, and when that happens, it's dropped during the compact. Check for a system table called MSysCompactErrors; you'll have to show hidden objects and/or system objects in Tools | Options | View.
Never compact a Jet MDB without making a backup beforehand. Because of that rule, the COMPACT ON CLOSE function is completely useless, as it's not cancellable, so always make sure it's turned off in all MDBs.
I don't know what kind of metadata Access brings along when it imports a table from one database into another. If the metadata is corrupted, importing the table into another database wouldn't necessarily resolve the problem. If practical, you might try creating the tables from scratch in a brand-new database and then just exporting and importing (or copying and paste-appending) the data into the new database.
I've never seen a table get corrupted like this in such a small database, although with Access anything is possible. Could there be something wrong with the data?
I'd try recreating the query fresh (new name, etc.), and see what happens.
You could even try copying it (within the same DB or to a brand-new one). If that works, the worst-case scenario is that you have to copy all the objects across to a new DB.
Is there an index on the field tbl2.f?
Also, try going into that table in datasheet view, sort tbl2.f in ascending sequence, and see if there is anything really strange in the first or last records.
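If the repair did drop an index on tbl2.f, recreating it may bring the query back to life. A minimal sketch in Access DDL, reusing the table and field names from the example above:

CREATE INDEX idxTbl2F ON tbl2 (f);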
Do you have access to a SQL Server installation? You could use the Upsizing Wizard under the Tools -> Database Utilities menu to copy the data to SQL Server, and see if you get the same problem there.