Warning: Cannot modify header information - headers already sent by (output started at /home/sites/superallan.com/public_html/includes/common.inc:2561) in drupal_send_headers() (line 1040 of /home/sites/superallan.com/public_html/includes/bootstrap.inc).
PDOException: SQLSTATE[42S02]: Base table or view not found: 1146 Table 'web247-sa_admin.field_data_field_embedcode' doesn't exist: SELECT field_data_field_embedcode0.entity_type AS entity_type, field_data_field_embedcode0.entity_id AS entity_id, field_data_field_embedcode0.revision_id AS revision_id, field_data_field_embedcode0.bundle AS bundle FROM {field_data_field_embedcode} field_data_field_embedcode0 WHERE (field_data_field_embedcode0.deleted = :db_condition_placeholder_0) AND (field_data_field_embedcode0.bundle = :db_condition_placeholder_1) LIMIT 10 OFFSET 0; Array ( [:db_condition_placeholder_0] => 1 [:db_condition_placeholder_1] => blog ) in field_sql_storage_field_storage_query() (line 569 of /home/sites/superallan.com/public_html/modules/field/modules/field_sql_storage/field_sql_storage.module).
As I understand it, Drupal is looking for a data field that I deleted. I thought maybe it had become corrupted and Drupal couldn't find it to delete it properly. In phpMyAdmin the field doesn't exist, so how can I get Drupal to recognize it's no longer there and stop showing this error at the bottom of every page?
You can see it on this page: http://superallan.com/404
Have you tried clearing the site cache, or uninstalling the module that provides the field? It looks like a reference to the field's SQL data is lingering in your database, which is what causes the error you posted.
This worked for me:
DELETE FROM field_config WHERE deleted = 1;
DELETE FROM field_config_instance WHERE deleted = 1;
I saw no adverse effects from removing rows that were already flagged as deleted.
Source:
http://digcms.com/remove-field_deleted_data-drupal-database/
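If the error persists after those deletes, clearing Drupal's caches forces the field definitions to be rebuilt. A minimal sketch against the standard Drupal 7 cache tables (an assumption — your site may have other cache_* tables as well; back up the database first):
-- Hedged: assumes default Drupal 7 table names
TRUNCATE TABLE cache;
TRUNCATE TABLE cache_field;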
I have what people call an "it works on my computer" issue.
In my trait, I have the following code:
use App\Models\Lessons;
...
$lessons = Lessons::find($lesson);
But when I fire this up in production, it gives the error message Base table or view not found: 1146 Table 'db.Lessons' doesn't exist (SQL: select * from `Lessons` where `Lessons`.`id` = 5 limit 1) ....
I checked my model and I can see protected $table = 'Lessons';. (This was the top SO answer given on the related question.)
I also checked the DB and I can see the lessons table.
BTW, this works locally.
I have tried dropping the DB and rerunning artisan migrations/seeders/etc. (this is a new install, so dropping is no problem), but I'm still getting the same issue. Any opinions are highly appreciated.
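One hedged guess, given that the model says 'Lessons' while the database shows a lessons table: MySQL compares table names case-sensitively on Linux (the usual production OS) but not on macOS or Windows, which would explain "it works on my computer". A minimal SQL sketch to check, assuming MySQL (the table name comes from the question):
-- On Linux (lower_case_table_names = 0) these are two different tables
SHOW TABLES LIKE 'Lessons';
SHOW TABLES LIKE 'lessons';
-- If only the lowercase table exists, rename it to match the model...
RENAME TABLE lessons TO Lessons;
-- ...or instead point the model at it: protected $table = 'lessons';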
I own a Magento CE 1.7-based website where we keep getting variations of the following error, usually after the daily morning site reindex:
"SQL ERROR: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'main_table.data' in 'field list'
SQL QUERY: SELECT DISTINCT main_table.data, main_table.lifetime, main_table.expire, main_table.priority, additional_table.*, IFNULL(al.value, main_table.frontend_label) AS store_label FROM eav_attribute AS main_table
INNER JOIN catalog_eav_attribute AS additional_table ON additional_table.attribute_id = main_table.attribute_id
LEFT JOIN eav_attribute_label AS al ON al.attribute_id = main_table.attribute_id AND al.store_id = 1 WHERE (main_table.entity_type_id (...)"
I have a website down detector set up so that I know immediately when the site is unavailable. However, it obviously doesn't detect when the site is unusable due to errors such as "SQL ERROR: SQLSTATE[42S22]". That means my team must constantly monitor the site manually (by clicking on product and category links) to confirm it is actually working!
Usually we get past the error by just clearing the cache or performing a new reindex.
My questions are the following:
1 - Is there a way to automatically detect this type of error in Magento so that, if it happens, a cache clean (or a site reindex) is run immediately and an alert is sent to the webmaster?
2 - If such an error is detected, is there a way to keep it from being shown to the person accessing the site? That is, is it possible to immediately display a message ("We'll be back soon") while the cache is being cleared or the site reindexed?
I will be grateful for any help you can provide.
Thank you!
I am trying to load a JSON file from the staging area (S3) into a stage table using the COPY INTO command.
Table:
create or replace TABLE stage_tableA (
RAW_JSON VARIANT NOT NULL
);
Copy Command:
copy into stage_tableA from @stgS3/filename_45.gz file_format = (format_name = 'file_json')
I got the error below when executing the above (sample provided):
SQL Error [100069] [22P02]: Error parsing JSON: document is too large, max size 16777216 bytes.
If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option.
For more information on loading options, please run 'info loading_data' in a SQL client.
When I set "ON_ERROR=CONTINUE", records were partially loaded, i.e. only up until the record that exceeds the max size; no records after the offending record were loaded.
Isn't "ON_ERROR=CONTINUE" supposed to skip only the record that exceeds the max size and load the records before and after it?
Yes, ON_ERROR=CONTINUE skips the offending line and continues loading the rest of the file.
To help us provide more insight, can you answer the following:
How many records are in your file?
How many got loaded?
At what line was the error first encountered?
You can find this information using the COPY_HISTORY() table function.
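For example, a minimal sketch against the table from the question (the 24-hour lookback window is an assumption):
-- Shows rows parsed, rows loaded, and the first error for recent COPY runs
select *
from table(information_schema.copy_history(
    table_name => 'STAGE_TABLEA',
    start_time => dateadd(hours, -24, current_timestamp())));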
Try setting the option strip_outer_array = true on the file format and attempt the load again.
The considerations for loading large semi-structured data are documented in the article below:
https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
I partially agree with Chris. The ON_ERROR=CONTINUE option only helps if there is in fact more than one JSON object in the file. If it's one massive object, then with ON_ERROR=CONTINUE you would simply get neither an error nor a loaded record.
If you know each JSON record is smaller than 16 MB, then definitely try strip_outer_array = true. Also, if your JSON has a lot of nulls ("NULL") as values, use STRIP_NULL_VALUES = TRUE, as this will slim your payload as well. Hope that helps.
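As a sketch of how those two options might be combined on the file format from the question (assuming it is the one named 'file_json'):
-- Recreate the format so each top-level array element loads as its own row,
-- and null-valued keys are dropped to slim each record below the 16 MB cap
create or replace file format file_json
  type = 'JSON'
  strip_outer_array = true
  strip_null_values = true;

copy into stage_tableA from @stgS3/filename_45.gz file_format = (format_name = 'file_json');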
I am running a very simple query and trying to extract the results to a text file. The entire query is essentially what is below: I am selecting everything from one single table, with one piece of where criteria limiting the data to one month's worth. After it has extracted around 1.2 GB, this error shows up. Is there any way I can work around this other than extracting smaller date ranges? I am trying to pull a couple of years' worth of data, so if I can only get it a few days at a time it will take a lot of manual work.
I am currently using the free trial of a DB2 query tool (Razor SQL), if that makes a difference; I can probably purchase different software if it would help. I am trying to get IBM's tool, but for some reason it freezes during the download, so I am still working on that. I have searched for this error, but everything I see seems much more complex than what I am doing and I can't tell whether it applies. Thanks in advance.
select *
from MyTable
where date_col between date '2014-01-01' and date '2014-01-31'
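One possible workaround, if you can get command-line access to the database server: DB2's EXPORT utility writes a result set straight to a delimited file, sidestepping the query tool and its client-side limits entirely. A hedged sketch for the DB2 command line processor (the output path is an assumption):
-- Runs in the DB2 CLP, not in a JDBC client; writes one full year at once
EXPORT TO /tmp/mytable_2014.del OF DEL
  SELECT * FROM MyTable
  WHERE date_col BETWEEN DATE '2014-01-01' AND DATE '2014-12-31'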
I stumbled on this error too, and found out it is related to the db2jcc.jar (type 4) driver.
Excerpt: if there are no items left in the result set (or none to begin with), the result set is closed automatically, hence the exception. The suggestion is to handle it in the application; in my case, I started checking if(rs.next()). Otherwise, there is a workaround: check out the source link below for how you can set some properties on the data source and avoid the exception.
Source:
"Invalid operation: result set is closed" error with Data Server Driver for JDBC
In my case, I was missing some properties in WAS; after adding allowNextOnExhaustedResultSet the issue was fixed.
1. Log in to the WebSphere Application Server administration console.
2. Select Resources > JDBC > Data sources > Application Center DataSource name > Custom properties and click New.
3. In the Name field, enter allowNextOnExhaustedResultSet.
4. In the Value field, type 1.
5. Change the type to java.lang.Integer.
6. Click OK.
Sometimes you also need to check whether the resultSetHoldability property exists. For details, refer here.
I also encountered this failure when upgrading from the JDBC type 2 driver (db2java.zip) to the JDBC type 4 driver (db2jcc4.jar):
Statement statement = results.getStatement();
if (statement != null)
{
    connection = statement.getConnection(); // ** failed here
    statement.close();
}
The solution was to check whether the statement is closed before using it.
Changed to:
Statement statement = results.getStatement();
if (statement != null && !statement.isClosed())
{
    connection = statement.getConnection(); // safe: statement is known to be open
    statement.close();
}
Creating the property below with type java.lang.Integer worked for me:
allowNextOnExhaustedResultSet
I had the same issue on WAS 7, so I had to add and change a few things in the Admin Console.
This TeamWorksRuntimeException should be fixed by applying APAR JR50863, which is available on top of BPM V8.5.5 or included in BPM V8.5 refresh pack 6.
In case the APAR does not solve the problem, try the following workaround:
Log in to the WebSphere Application Server admin console
Select Resources > JDBC > Data sources > DataSource name (TeamWorksDB) > Custom properties and click New
In the Name field, enter downgradeHoldCursorsUnderXa
In the Value field, type true
Change the type to java.lang.Boolean
Click OK to save your changes
Select custom property resultSetHoldability
In the Value field, type 1
Click OK to save your changes
Source of the answer: https://developer.ibm.com/answers/questions/194821/invalid-operation-result-set-is-closed-errorcode-4/
Restarting the app may fix the problem if the connection pool has lost its session to Db2. If you're using Tomcat, the connection pool property 'testOnBorrow' may reestablish the connection to Db2.
I am getting this error and don't understand why:
Error Executing Database Query.
[Macromedia][SQLServer JDBC Driver][SQLServer]Invalid column name 'buildno'.
The error occurred in C:/data/wwwroot/webappsdev/cfeis/redbook/redbook_bio_load.cfm: line 10
8 : select *
9 : from redbook_bio
10 : where build_num = '#session.build_num#'
11 : </cfquery>
12 :
VENDORERRORCODE: 207
SQLSTATE: 42S22
SQL: select * from redbook_bio where buildno = '4700'
DATASOURCE: xxxx
It is saying buildno is an invalid column name, but I do not have that name in my query. I used to, but changed both the column in the database and the column name in the query to build_num. You can see my exact code with line numbers, and that there is no 'buildno' in there. But looking at the SQL statement below that, it is still trying to use 'buildno'.
I had my editor search the directory for any occurrence of buildno, and no results came back. I have restarted the CF service and cleared the cache. Why would it still be trying to run with buildno instead of build_num, as the code says?
There was a cfquery cache setting in the ColdFusion Administrator; we had it set to 100. Apparently, clearing the template and component caches doesn't clear the cfquery cache. I changed the query name, and that fixed the problem. It most likely could also have been fixed by setting the cfquery cache value to 0.