"Truncated incorrect DOUBLE value" error in DataHandler TYPO3 9 - typo3-9.x

I have an array with about 400 records to insert into a database table.
I have tried inserting them with the DataHandler and ran into the issues below:
After inserting 195 records, it gives the "Truncated incorrect DOUBLE value" error.
When I then try to create a new record in the List module, it still gives the error above.
If I limit the insert to a maximum of 194 records, they are inserted with no errors
and I can also create records in the List module, but the records are duplicated in the database.
As another approach, I used the QueryBuilder's insert() to write the data directly into the database table.
All the data was inserted as I wanted, but when I try to create a new record in the List module, I get the "Truncated incorrect DOUBLE value" error again.
If I limit the insert to 194 records, no error occurs in the List module when creating a new record.
I would be very glad of any help with this problem.

I found something regarding that error and MySQL, e.g. MySQL "Truncated incorrect DOUBLE value".
It seems to be triggered by obscure syntax issues and other seemingly unrelated MySQL parser errors.
I would try changing the values from int to string in your DataHandler array.
To help further, one would need the relevant DataHandler code with the failing array record and the column definitions (DESCRIBE tablename) for the table in question. The database version would also be interesting.
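For illustration, this is the kind of SQL that typically triggers that error in MySQL; the table and column names below are invented, not taken from your extension:

-- 1. Comparing a VARCHAR column with a numeric literal: MySQL casts every
--    value to DOUBLE, and any non-numeric string fails that cast.
UPDATE tx_myext_item SET hidden = 1 WHERE external_id = 42;

-- 2. Using + on string columns instead of CONCAT(): + also forces a DOUBLE cast.
UPDATE tx_myext_item SET title = title + ' (imported)';

-- Comparing the failing record against the real column types usually
-- reveals the mismatch:
DESCRIBE tx_myext_item;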

Related

While inserting values I am getting an "invalid number" error

In the image you can see that I am getting the error "invalid number".
How can I fix this error?
You should check the data types of every column you want to insert data into. For example, I see that you are trying to insert the value "$9,99", which looks a little strange. Have another look at that.
Maybe there is a column in orders that expects a number and you are trying to insert a non-number.
You could double-check the table definition in SQL Developer or by using DESC <table>.
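For example (the table and column names here are only guesses), a NUMBER column rejects the formatted string as it is, but accepts it once the formatting is stripped:

INSERT INTO orders (price) VALUES ('$9,99');   -- fails with ORA-01722: invalid number

-- Strip the currency sign and use a decimal point before inserting:
INSERT INTO orders (price)
VALUES (TO_NUMBER(REPLACE(REPLACE('$9,99', '$', ''), ',', '.')));

-- Check what the column actually expects:
DESC orders;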

Finding the column throwing exception during data migration with SSIS from Oracle to MS SQL

I am working on a data migration project. In the current task, I have to select data from a number of tables in Oracle, join them, and insert the data into a single SQL Server table. The number of rows is in the millions.
Issue: There is data in Oracle that throws an exception when we try to insert it into SQL Server. For example, the data type of the Oracle column is VARCHAR2 whereas in SQL Server it is int. The data is numeric, but a few columns contain special characters such as ','. This is one example that fails when we try to insert into the SQL table; it is failing for many such columns.
I am using SSIS for this task. I am moving the IDs of all rows that throw such errors, as in the example above, into an error table.
Question: For each row, I want the name of the column for which the insertion is failing. Is there an option for this in SSIS? On error, I want to store the ID and the column name in an error table.
I tried searching the internet but did not find anything. SSIS does have an option to redirect the rows that error, but I did not find an option there to get the column name so it can be written to an error table.
Edit: The data will arrive on a daily basis, i.e. the SSIS package will be executed daily.
The Error Output contains many columns providing information about the error.
The list of columns includes the columns in the component input, the ErrorCode and ErrorColumn columns added by previous error outputs, and the ErrorCode and ErrorColumn columns added by this component.
If you are using an OLE DB Destination, you cannot redirect the error rows while the Fast Load option is in use. And since you mentioned that
"The number of rows is in the millions."
row-by-row insertion is not recommended.
If there are only a few columns, I suggest adding a Data Conversion Transformation and using its error output to get the error information (see the sketch after the links below).
References and helpful links
Configuring Error Output Columns
SSIS how to redirect the rows in OLEDB Destination when the fast load option is turned on and maximum insert commit size set to zero
Error Handling in Data
Error Handling With OLE DB Destinations
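As a side note on why such rows fail (independent of the SSIS error output itself): if the raw Oracle values are staged as text in SQL Server first, a TRY_CAST query can list, per column, the values that will not convert; the table and column names below are placeholders.

-- Values such as '1,234' coming from a VARCHAR2 source cannot be cast to int.
-- TRY_CAST returns NULL instead of raising an error, which makes the bad
-- values (and therefore the offending column) easy to list per row.
SELECT id,
       amount_text,
       TRY_CAST(amount_text AS int) AS amount_as_int
FROM   staging_orders
WHERE  amount_text IS NOT NULL
  AND  TRY_CAST(amount_text AS int) IS NULL;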

Google BigQuery: Error: Invalid schema update. Field has changed mode from REQUIRED to NULLABLE

I'm trying to append the results of a query to another table.
It doesn't work and sends out the following error:
Error: Invalid schema update. Field X has changed mode from REQUIRED to NULLABLE.
The field X is indeed REQUIRED, but I am not trying to insert any NULL values into that specific column (the whole table does not contain a single NULL value).
This looks like a bug to me. Anyone knows a way to work around this issue?
The issue was fixed after switching from Legacy SQL to Standard SQL.
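For reference, a minimal sketch of doing the append with Standard SQL DML; the dataset and table names are placeholders, and the original job may instead have used a query job with a WRITE_APPEND destination table:

#standardSQL
-- Append the query result to the existing table with a DML statement
-- instead of a Legacy SQL destination-table append.
INSERT INTO mydataset.target_table (x, y)
SELECT x, y
FROM mydataset.source_table;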

Trying to create a new column, get #1064 error

I've been using phpMyAdmin to manage my databases without any problems, but today I ran into this error whenever I tried to add a column to any table of any database through the interface:
ALTER TABLE `testing` ADD `faaa` INT NOT NULL AFTER ;
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1
But if I add the column via an SQL command in phpMyAdmin, this time leaving out AFTER, the column is added without any problem.
I'm still inexperienced with phpMyAdmin, so I guess I must have missed a mandatory field when creating the new column in the interface. Can anyone shed some light on this for me?
AFTER column_name is used to designate which existing column the new column should be placed after. You are providing AFTER without telling it which column the new column should follow. If you don't care about the order of the columns in your table, omit AFTER and the new column will be appended at the end of the column list.
You have no column name after the AFTER keyword, so phpMyAdmin doesn't know where the new column should be put. Whether you forgot to select the column or this is a phpMyAdmin bug, I have no idea, because when adding a new column the only required fields are the name and the type, which you have.
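For reference, both of these forms are valid; some_existing_column is just a placeholder for a real column of the table:

-- Name the column the new one should follow...
ALTER TABLE `testing` ADD `faaa` INT NOT NULL AFTER `some_existing_column`;

-- ...or drop the AFTER clause and let MySQL append the new column at the end.
ALTER TABLE `testing` ADD `faaa` INT NOT NULL;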

Pentaho table output step not showing proper error in log

In Pentaho, I have a Table output step where I load a huge number of records into a Netezza target table.
One of the rows fails and the log shows me which values are causing the problem. But the log is probably not right, because when I create an insert statement with those values and run it separately on the database, it works fine.
My question is:
In Pentaho, is there a way to identify, when a DB insert fails, exactly which values caused the problem and why?
EDIT: The error is "Column width exceeded" and it shows me the values that are supposedly causing the problem. But I built an insert statement with those values and it works fine. So I think Pentaho is not showing me the correct error message; it is a different set of values that is causing the problem.
Another way I've used to deal with these kinds of problems is to create another table in the DB with widened column types. Then, in your transformation, add a Table output step pointed at the new table, and connect your original Table output step to it, choosing "Error handling" as the hop type when asked.
When you run your transformation, the offending rows will end up in the new table, and you can then investigate exactly what the problem is with each particular row.
For example you can do something like:
insert into [original table] select * from [error table];
You'll probably get a better error message from your native DB interface than from the JDBC driver.
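A minimal sketch of what that widened copy might look like; the table layout, names and sizes here are invented:

-- Same columns as the target, but deliberately wider / text-typed so that
-- no row is rejected when the error-handling hop writes into it.
CREATE TABLE sales_fact_errors (
    id        BIGINT,
    customer  VARCHAR(4000),
    amount    VARCHAR(100)
);

-- Replaying the captured rows against the real table then surfaces the
-- database's own error message for the offending rows:
INSERT INTO sales_fact SELECT * FROM sales_fact_errors;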
I don't know exactly what your problem is, but I think I have had the same problem before.
Everything seemed right, but the problem was that some transformations, when converting a numeric value to a string for example, added a whitespace at the end of the field, so the length of the field was n+1 instead of n, which is very hard to see.
A practical example: if you use a Calculator step with the YEAR() function to extract the year from a date field, a whitespace may be appended to the new field. So if the year had a length of 4, after that step it has a length of 5, and when you load a row (with that year field as a string(5)) into a data warehouse column that expects a string(4), you get the same error you are getting now.
What you think is happening --> year = "2013" --> length 4
What is really happening --> year = "2013 " --> length 5
I recommend paying close attention to the string fields and their lengths, because if some transformation adds a whitespace you don't expect, you can lose a lot of time finding the error (my own experience).
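A quick way to spot such hidden trailing spaces is to compare raw and trimmed lengths directly in the database; the table and column names here are placeholders:

SELECT year_field,
       LENGTH(year_field)       AS raw_length,
       LENGTH(TRIM(year_field)) AS trimmed_length
FROM   staging_table
WHERE  LENGTH(year_field) <> LENGTH(TRIM(year_field));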
I hope this is useful for you!
EDIT: I'm guessing you are working with PDI (Spoon, formerly Kettle) and that the error occurs when you are loading a data warehouse; correct me if I'm wrong.
Could you load the data from a file with the nzload command? With this command you can find the exact error, and the bad records end up in the bad-records file you provide (-bf) for detailed analysis.
e.g.:
nzload -u <username> -pw <password> -host <netezzahost> -db <database> -t <tablename> -df <datafile> -lf <logfile> -bf <badrecords file name> -delim <delimiter>