I am trying to write a simple query to get a sequence number.
EXEC SQL SELECT NEXT VALUE FOR #SOP_SEQ INTO :SEQ ;
EXEC SQL SELECT NEXT VALUE FOR #SOP_SEQ INTO :SEQ FROM #SOP_SEQ;
With the first line of code, I get an error message before I can even compile: SQL0104 Token was not valid. Valid tokens: , FROM
I tried the second line of code and I get this error when I compile:
SQL1103 Position 57 Column definitions for table #SOP_SEQ in *LIBL not found.
Can someone point out what I am doing wrong?
SELECT ... INTO needs a row to run against, and you are not providing one, so there is no result set to fetch from.
There are two ways to do what you want.
Using SELECT INTO with SYSDUMMY1
select next value for #sop_seq
into :seq
from sysibm/sysdummy1;
Or, better, using VALUES INTO which does not need the reference to SYSDUMMY1
values next value for #sop_seq
into :seq;
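In the embedded SQL form from your question, that would look something like this (untested sketch):

EXEC SQL VALUES NEXT VALUE FOR #SOP_SEQ INTO :SEQ;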
TL;DR
SYSIBM/SYSDUMMY1 is a catalog file with a single record. Before VALUES INTO became available, it was commonly used to retrieve calculated values into a result set when a single row is required and there is no real table reference that applies (as in your situation here). This technique is still used, but I would advise using VALUES INTO instead, since no artificial FROM clause is necessary.
I'm using Oracle SQL and want to insert my data into a remote DB via a database link.
insert into V_ADITO_ONLINE_BEITRITT@MDB.IGM (<82 different columnNames>)
values (<82 different values>);
As far as I can check, all datatypes, columns and values match up, yet I get this error:
Errorcode 1722, SQL-Status 42000: ORA-01722: Invalid Number
ORA-02063: previous line of MDB.IGM
I'd appreciate it a lot if someone could help me with this error.
V_ADITO_ONLINE_BEITRITT is a view if that helps in any way.
Source: orafaq.com
An ORA-01722 ("invalid number") error occurs when an attempt is made to convert a character string into a number and the string cannot be converted into a valid number.
When doing an INSERT INTO ... VALUES (...), one of the data items you are trying to insert is an invalid number. Locate and correct it.
If all of the numbers appear to be valid, then you probably have your columns out of order, and an item in the VALUES clause is being inserted into a NUMBER column instead of the expected VARCHAR2 column. This can happen when a table has columns added or removed.
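A tiny illustration of that column-order case (the table and columns here are made up for the example):

create table demo (amount number, label varchar2(10));
-- the values are swapped relative to the column list,
-- so 'ten' lands in the NUMBER column and raises ORA-01722
insert into demo (amount, label) values ('ten', '10');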
If you are doing an INSERT or UPDATE with a subquery supplying the values, the preceding considerations obviously apply here as well. What makes this more complicated is that the offending character string is hidden in a row of a table. The fix is to identify the row (or rows) holding the non-numeric string and either change the data (if it is in error) or add something to the subquery to avoid selecting it. The problem is identifying the exact row.
Assuming that the errant datum is an alphabetic character, one can use the following query:
SELECT ... WHERE UPPER(col) != LOWER(col)
where col is the column with the bad data.
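Spelled out, with a stricter variant using Oracle's REGEXP_LIKE (src_table and col are placeholders):

-- rows where col contains at least one alphabetic character
select * from src_table where upper(col) != lower(col);

-- stricter: rows where col is not a plain number at all (10g+)
select * from src_table
where not regexp_like(col, '^\s*-?\d+(\.\d+)?\s*$');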
Try commenting out half of the field names and values and check whether ORA-01722 still occurs. When you find the half containing the error, comment out half of that, and so on. Once you have located the exact field and value that produce the problem, check and repair them.
I have one job with two transformations in it.
The first transformation gets a list of data which is passed to the second transformation, which is executed once for each row passed from the first.
In the second transformation I have used
"Get rows from result" -> "Table input"
In "Get rows from result" there are five fields, but in "Table input" I need only the fields in the 2nd and 3rd positions.
Even if I try to give a single parameter "?", it gives this error:
"
2017/06/29 15:11:02 - Get Data from table.0 - Error setting value #3 [String] on prepared statement
2017/06/29 15:11:02 - Get Data from table.0 - Parameter index out of range (3 > number of parameters, which is 2).
"
My query is very simple
select * from table where col1= ? and col2 = ?
How can I achieve this? Am I doing anything wrong?
You can also give names to your parameters, so that your query becomes
select * from table where col1="${param2}" and col2="${param3}".
Don't forget to check the "Replace variable in script" checkbox, and to adapt the quotes to your sql dialect (ex: '${param1}' for SQL-Server).
Note that param2 and param3 must exist in the transformation's Settings/Parameters, without the ${...} decoration and with values that do not break the SQL.
The values of the parameters can be set or changed in a previous transformation with a Set variables step (variables and parameters are synonymous to a first approximation) and a scope of at least "Valid in the parent job".
Of course, if you insist on unnamed parameters for legacy purposes or any other reason, you are responsible for telling PDI that the first one is to be discarded (e.g. where (? is null or 0=0) and col1=? and col2=?).
If you have 5 fields arriving at the Table input, you need to pass 5 parameters to your query, and in the right order. Also, each parameter will be used only once.
So, if you have 5 fields and only use 2 of them, the best way is to put a Select values step between Get rows from result and Table input and let only the actual query parameters through.
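If you do keep all five fields, here is a hedged sketch combining both answers: a query that consumes the parameters in order while really using only the 2nd and 3rd (column names are placeholders):

select * from table
where (? is null or 0=0) -- field 1, discarded
and col1 = ?             -- field 2
and col2 = ?             -- field 3
and (? is null or 0=0)   -- field 4, discarded
and (? is null or 0=0)   -- field 5, discarded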
I'm using a Filter rows step to filter out rows whose fields are longer than a given length. Under the filter conditions there is no condition for checking field length.
So the workaround is to use:
Field1 REGEXP [^.{0,80}$]
OR
Field1 IS NULL
Field2 REGEXP [^.{0,120}$]
OR
Field2 IS NULL
Length check is a very common requirement. Is there a function/simpler way to do this that I'm missing?
Use Data Validator step:
Create a new validation for every column you want to check and set "Max string length" for every validation created.
You can redirect erroneous rows using the "Error handling of step" hop.
By default these rows have the same structure and values as the input rows, but you can also include additional information, such as the name of the erroneous column or an error description.
Alternatively, you can compute the string lengths before filtering using a Calculator step, but that may create a lot of additional columns if you have multiple columns to check.
And, of course, you can always perform such checks in User Defined Java Class or Modified Java Script Value.
Assuming you are talking about strings, you can use a Calculator step with the somewhat hard to find calculation "Return the length of a string A". That will give you the values for your Filter Rows step.
I have to create a second header line and am using the first record of the query to do this. I am using a UNION ALL to create this header record, and the second part of the UNION extracts the data required.
I have an issue with one column.
,'Active Energy kWh'
UNION ALL
,SUM(cast(invc.UNITS as Decimal (15,0)))
Each side has 11 lines before and after the UNION, and I have tried all sorts of combinations, but it always results in an error message.
The above gives me "Error converting data type varchar to numeric."
Any help would be much appreciated.
The error message indicates that one of your values in the INVC table UNITS column is non-numeric. I would hazard a guess that it's either a string (VARCHAR or similar) column or something else - and one of the values has ended up in a state where it cannot be parsed.
Unfortunately there is no way other than checking ranges of the table to gradually locate the 'bad' row (i.e. try running the query for a few million rows at a time, then reducing the range until you home in on the bad data). SQL Server 2014, if you can get a database restored to it, has the TRY_CONVERT function, which permits conversions to fail, enabling a more direct check - but you'll need to play with this on another system (I'm assuming that an upgrade to 2014 for this feature is out of the question; your best bet is likely just looking for the bad row).
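For reference, a minimal sketch of that TRY_CONVERT check; INVC and UNITS come from the question above, everything else is assumed:

-- rows returned here hold values that cannot be converted to Decimal(15,0)
select invc.UNITS
from INVC invc
where try_convert(decimal(15,0), invc.UNITS) is null
and invc.UNITS is not null;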
The problem is that you are trying to mix header information with data information in a single query.
Obviously, all your header columns will be strings. But not all your data columns will be strings, and SQL Server is unhappy when you mix data types this way.
What you are doing is equivalent to this:
select 'header1' as col1 -- string
union all
select 123.5 -- decimal
The above query produces the following error:
Error converting data type varchar to numeric.
...which makes sense, because you are trying to mix both a string (the header) with a decimal field.
So you have 2 options:
Remove the header columns from your query, and deal with header information outside your query.
Accept the fact that you'll need to convert the data type of every column to a string type. So when you have numeric data, you'll need to cast the column to varchar(n) explicitly.
In your case, it would mean adding the cast like this:
,'Active Energy kWh'
UNION ALL
,CAST(SUM(cast(invc.UNITS as Decimal (15,0))) AS VARCHAR(50)) -- Change 50 to appropriate value for your case
EDIT: Based on comment feedback, changed the cast to varchar to have an explicit length (varchar(n)) to avoid relying on the default length, which may or may not be long enough. OP knows the data, so OP needs to pick the right length.
I am not very familiar with iseries/DB2. However, I work on a website that uses it as its primary database.
A new column was recently added to an existing table. When I view it via AS400, I see the following data type:
Type: S
Length: 9
Dec: 2
This tells me it's a numeric field with 7 digits before the decimal point, and 2 digits after the decimal point.
When I query the data with a simple SELECT (SELECT MYCOL FROM MYTABLE), I get back all the records without a problem. However, when I try using a DISTINCT, GROUP BY, or ORDER BY on that same column I get the following exception:
[SQL0802] Data conversion or data mapping error
I've deduced that at least one record has invalid data - what my DBA calls "blanks" or "4 O". How is this possible though? Shouldn't the database throw an exception when invalid data is attempted to be added to that column?
Is there any way I can get around this, such as filtering out those bad records in my query?
"4 O" means 0x40 which is the EBCDIC code for a space or blank character and is the default value placed into any new space in a record.
Legacy programs and operations can introduce the decimal data error, for example if the new file was created and filled using the CPYF command with the FMTOPT(*NOCHK) option.
The easiest way to fix it is to write an HLL program (RPG) to read the file and correct the records.
The only solution I could find was to write a script that checks for blank values in the column and then updates them to zero when they are found.
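A hedged sketch of such an update, using MYTABLE and MYCOL from the question; HEX() exposes the raw bytes, so the blanks (0x40) can be matched without a numeric conversion. The nine 40 pairs assume the 9-byte zoned field described above:

update MYTABLE
set MYCOL = 0
where hex(MYCOL) = '404040404040404040';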
If the file has record format level checking turned off [i.e. LVLCHK(*NO)], or is overridden to that, then an HLL program (e.g. RPG, COBOL) that was not recompiled with the new record format might write out records with invalid data in this column, especially if the new column is not at the end of the record.
Make sure that all programs that use native I/O to write or update records on this file are recompiled.
I was able to solve this error by force-casting the key columns to integer. I changed the join from this...
FROM DAILYV INNER JOIN BXV ON DAILYV.DAITEM=BXV.BXPACK
...to this...
FROM DAILYV INNER JOIN BXV ON CAST(DAILYV.DAITEM AS INT)=CAST(BXV.BXPACK AS INT)
...and I didn't have to make any corrections to the tables. This is a very old, very messy database with lots of junk in it. I've made many corrections, but it's a work in progress.