DB2 LOAD Modifier - GeneratedOverride or IdentityOverride

I am performing a DB2 load, and I am struggling to understand the impact of using GeneratedOverride over IdentityOverride. When I run the following command:
db2 load from tab123.ixf of ixf replace into application.table_abc
All rows are rejected, with the following error being the culprit:
SQL3550W The field value in row row-number and column column-number is not NULL, but the target column has been defined as GENERATED ALWAYS.
So to try and step around this, I executed:
db2 load from tab123.ixf of ixf modified by identityoverride replace into application.table_abc
But this immediately returned this error:
SQL3526N The modifier clause "IDENTITY OVERRIDE" is inconsistent with the current load command. Reason code: "3".
From checking the reason code I see that the issue is "Generated or identity related file type modifiers have been specified but the target table contains no such columns", yet the SQL3550W error seems to imply that the columns are GENERATED ALWAYS!
The only way I can get these rows to commit to the table is to run:
db2 load from tab123.ixf of ixf modified by generatedoverride replace into application.table_abc
Can anyone enlighten me as to why I am receiving the SQL3526N error, or what the implications of running generatedoverride are?
Thanks for sticking with me..

Generated columns are not necessarily identity columns; apparently that's the case in your situation. Check the CREATE TABLE syntax to see what other ways there are to generate column values.
By using the GENERATEDOVERRIDE option during the load you are overriding the generated values with those from the input file, instead of letting DB2 compute them.
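To make the distinction concrete, here is a hypothetical table definition (the column names are made up) containing both kinds of GENERATED ALWAYS columns:

CREATE TABLE application.table_abc (
  id      INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,  -- identity column: identityoverride applies
  col_a   INTEGER NOT NULL,
  col_b   INTEGER NOT NULL,
  col_sum INTEGER GENERATED ALWAYS AS (col_a + col_b)     -- expression-generated column: generatedoverride applies
)

If your table only has expression-generated columns like col_sum and no identity column, identityoverride has nothing to act on, which matches reason code 3, while generatedoverride lets the load take those columns' values from the input file.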

Related

Partial Loading Due to " in data - Snowflake Issue

I haven't been able to find anything that describes this issue I am having, although I am sure many have had this problem. It may be as simple as forcing pre-processing in Python before loading the data in.
I am trying to load data from S3 into Snowflake tables. I am seeing errors such as:
Numeric value '' is not recognized
Timestamp '' is not recognized
In the table definitions, these columns are set to DEFAULT NULL, so if there are NULL values here it should be able to handle them. I opened the files in Python to check on these columns, and sure enough some of the rows (exactly the number throwing an error in Snowflake) are NaNs.
Is there a way to correct for this in Snowflake?
Good chance you need to add something to your COPY INTO statement to get this to execute correctly. Try this parameter in your format options:
NULL_IF = ('NaN')
If you have more than just NaN values (like actual strings of 'NULL'), then you can add those to the list in the () above.
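For reference, a minimal sketch of what that could look like in the COPY INTO statement (the stage and table names are placeholders):

COPY INTO my_table
  FROM @my_s3_stage/path/
  FILE_FORMAT = (TYPE = CSV NULL_IF = ('NaN'));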
If you are having issues loading data into tables (from any source) and are experiencing a similar issue to the one described above, such that the error tells you *datatype* '' is not recognized, then you will need to follow these instructions:
Go into the FILE_FORMAT you are using through the DATABASES tab
Select the FILE_FORMAT and click EDIT in the tool bar
Click on Show SQL on the bottom left window that appears, copy the statement
Paste the statement into a worksheet and alter the NULL_IF statement as follows
NULL_IF = ('\\N','');
Snowflake doesn't seem to recognize a completely empty value by default, so you need to add it as an option!
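If you'd rather script the change than edit it through the UI, something along these lines should work (the file format name is a placeholder):

ALTER FILE FORMAT my_db.my_schema.my_csv_format SET NULL_IF = ('\\N', '');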

Google BigQuery: Error: Invalid schema update. Field has changed mode from REQUIRED to NULLABLE

I'm trying to append the results of a query to another table.
It doesn't work and sends out the following error:
Error: Invalid schema update. Field X has changed mode from REQUIRED to NULLABLE.
The field X is indeed REQUIRED, but I am not trying to insert any NULL values into that specific column (the whole table doesn't have a single NULL value).
This looks like a bug to me. Anyone knows a way to work around this issue?
The issue is fixed after switching from Legacy SQL to Standard SQL.
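For example, prefixing the query with the dialect marker forces standard SQL, and the destination table can then be appended to as before (the project, dataset and table names are placeholders):

#standardSQL
SELECT *
FROM `my_project.my_dataset.source_table`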

Deleting rows in BigQuery fails with "Invalid schema update"

I'm trying to delete some rows from a BigQuery table (using standard SQL dialect):
DELETE FROM ocds.releases
WHERE
ocid LIKE 'ocds-b5fd17-%'
However, I get the following error:
Query Failed
Error: Invalid schema update. Field packageInfo has changed mode from REQUIRED to NULLABLE
Job ID: ocds-172716:bquijob_2f60927_15d13c97149
It seems as though BigQuery doesn't like deleting rows with a REQUIRED column. Is there any way around this?
It has been a known limitation that BigQuery DML doesn't work with tables with required fields (see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language#known_issues).
We are in the process of removing this limitation. We whitelisted your project today. Please try running your query again in the same project. Let us know if the problem is still there, or if you want to have more projects whitelisted.

Pentaho Execute SQL Statements variable conversion to null

I am using PDI to delete and insert some data from a DB, and I have the following issue. I create two variables called START_DATE and END_DATE that are used to select the data that will be deleted from my DB. I am able to get them and run my transformation with no errors in the log file, but when I checked whether data was deleted, I found it wasn't. I then checked my "DeleteProcedure" step, and it says "Conversion error: null". I have tried different approaches to take the variables and pass them as Strings, but I haven't been able to solve this issue. It cannot be a SQL mistake, as I tested it with a constant and it works.
Any ideas? I attach some pics. Thanks!
As the documentation of the Execute SQL script step says:
Note: When you have an issue, that the SQL is started at the initialization phase of the transformation and not for each row, make sure to check the option "Execute for each row" (see description below).
In your case it executes during the initialization phase of the transformation, which is why it gets null values instead of the ones from the previous step.

VS 2005 SSIS Error value origin

I have an SSIS package created in VS 2005 that has started to give me the following error:
[Lawson Staging Table [4046]] Error: There was an error with input column "JOB_CODE" (4200) on input "OLE DB Destination Input" (4059). The column status returned was: "The value violated the integrity constraints for the column.".
My first question is: what are the 4046, 4200 & 4059 values following my table, column and destination?
My second question is about the integrity constraint message. The destination table is a heap (no keys or indexes) with no constraints. The destination column is defined as a varchar(10). The input column is from oracle, is defined as char(9) and is called job_code. So - where is there an integrity constraint defined?
The final question is about the select statement; it looks like the following:
Select ...
,lpad(trim(e.job_code),10,'0') as job_code ...
If I take the lpad and trim functions out, it works but I need these functions in place because my spec calls for a fixed length column padded with leading zeros. This column returns data as expected in TOAD but fails in the ssis package. Does anyone see an issue with how the functions are being used?
Since this package worked in the past but suddenly started to throw this error, I'm assuming that new invalid data has come into play. However, recently added rows don't seem to be any different than historical records.
Those numbers are most likely the IDs assigned to each task/table/column, etc.
You could probably go to the advanced editor of the data flow task and look at the input and output properties. You can see that for each input or for each column there is an ID assigned.
Next: the error that you are getting usually occurs when the "Allow Nulls" option is unchecked.
Try this:
Look at the name of the column for this error/warning.
Go to SSMS and find the table
Allow Nulls for that column (or script it with ALTER TABLE, as sketched after these steps)
Save the table
Rerun the SSIS package
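For step 3, if you prefer to script the change rather than use the SSMS table designer, an ALTER TABLE along these lines should work (the staging table name is a guess based on the error message):

ALTER TABLE dbo.LawsonStagingTable ALTER COLUMN JOB_CODE varchar(10) NULL;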