Invalid predecessor when editing task in Snowflake - SQL

I keep getting an error when trying to edit a task in Snowflake. Whenever I edit the task, the following error message appears:
SQL Error [91085] [42601]: Invalid predecessor
TableNameA_001_update_newdata was specified.
The task itself looks like this:
CREATE OR REPLACE TASK "TableNameA_001_update_newdata"
WAREHOUSE = marketing_wh
AFTER "TableNameA_001_delete" AS
INSERT INTO tableA
...
So far I cannot work out what is triggering the error.
Thanks for your help!

You are not using qualified task names (with database and schema in the task name).
If you inadvertently change context (switch database or schema), the new context will not contain any task named "TableNameA_001_delete".
That will result in the error message "Invalid predecessor <TASK name> was specified."
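A minimal sketch of the fix, assuming the objects live in a database and schema called mydb.myschema (placeholder names): fully qualify both task names so the statement resolves identically regardless of the session context:
CREATE OR REPLACE TASK mydb.myschema."TableNameA_001_update_newdata"
WAREHOUSE = marketing_wh
AFTER mydb.myschema."TableNameA_001_delete" AS
INSERT INTO mydb.myschema.tableA
...
Alternatively, run USE SCHEMA mydb.myschema; first, so that the unqualified predecessor name resolves in the schema that actually contains it.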

Related

Create table name using username in Hive query running in Oozie workflow?

I've got a Hive SQL script/action as part of an Oozie workflow. I'm doing a CREATE TABLE AS SELECT to output the results. I want to name the table using the username plus an appended string (e.g. "User123456_output_table"), but can't seem to get the correct syntax.
set tablename=${hivevar:current_user()};
CREATE TABLE `${hiveconf:tablename}_output_table` AS SELECT ...
That doesn't work and gives:
Error while compiling statement: FAILED: IllegalArgumentException java.net.URISyntaxException: Relative path in absolute URI: ${hivevar:current_user()%7D_output_table
Or changing the first line to set tablename=${current_user()}; starts running the SELECT query but eventually stops with:
Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.ql.metadata.HiveException: [${current_user()}_output_table]: is not a valid table name
Or changing the first line to set tablename=current_user(); starts running the SELECT query but eventually stops with:
Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.ql.metadata.HiveException: [current_user()_output_table]: is not a valid table name
Alternatively, is there a way to pass the username from the Oozie workflow via a parameter?
I'm using Hue to do all this rather than the command line.
Thanks
This is wrong: set tablename=${hivevar:current_user()}; will not be resolved; the variable is substituted as-is.
Hive does not evaluate variables before substitution; it substitutes them as-is, and functions inside variables are NOT evaluated. Variables are just text replacement.
This:
set tablename=current_user();
CREATE TABLE `${hiveconf:tablename}_output_table` ...
gets resolved as
CREATE TABLE `current_user()_output_table` ...
Functions are not supported in table names, so it will not work this way.
The solution is to calculate functions outside the script and pass them as parameters.
See this blog: https://prodlife.wordpress.com/2013/12/06/parameterizing-hive-actions-in-oozie-workflows/
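A minimal sketch of that approach, with the value computed by the shell before Hive ever sees it (the variable name tablename and the file name create_table.hql are illustrative; in an Oozie workflow you would pass the value as a parameter to the Hive action instead, as the blog describes):
-- create_table.hql
-- tablename is supplied from outside the script, e.g.:
--   hive --hivevar tablename="$(whoami)" -f create_table.hql
CREATE TABLE `${hivevar:tablename}_output_table` AS
SELECT ...;
Because the substitution is now plain text containing an already-computed value, the resulting table name is legal.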

Text was truncated or one or more characters had no match in the target code page - OLE DB Source to Flat File Destination

I'm exporting a table output to a CSV file. I'm doing it using SSIS package which has OLE DB Source and Flat File Destination. I'm getting the following errors:
[Flat File Destination [2]] Error: Data conversion failed. The data conversion for column "Address" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
[Flat File Destination [2]] Error: Cannot copy or convert flat file data for column "Address".
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Flat File Destination" (2) failed with error code 0xC02020A0 while processing input "Flat File Destination Input" (6). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
[OLE DB Source [9]] Error: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on OLE DB Source returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Can anyone please advise?
The output column for Address is specified as smaller than your original table column.
See this SO: SSIS data conversion failed
Summary:
(1) Right-click the Flat File Source and choose "Show Advanced Editor". Go to the "Input and Output Properties" tab, expand "Flat File Source Output", and choose "External Columns".
(2) Select the "Address" column and, on the right-hand side, increase its length to match the size of the column in your original table.
Double check anywhere in your Export wizard that allows you to set column sizes. Make sure those of your output file match those of your original table columns.
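If you are unsure how wide the destination column needs to be, a quick check against the source table (table name is a placeholder) is:
SELECT MAX(LEN(Address)) AS MaxAddressLength
FROM dbo.SourceTable;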
@user7396598
Thank you for pointing me in the right direction. I ran a comparison: the records insert in the same order only up to a point, after which they no longer match. I was able to capture the bad data by running the following:
select * from table where address != cast(address as varchar(1000))
When I removed the bad data, my SSIS package worked.
Now I need to figure out how to convert the bad data into an acceptable format for the CSV.
Reference - https://stackoverflow.com/a/2683496/8452633
So I had a similar problem of bad data in one of my columns causing this error, even after increasing the size of the output column. In my case I solved it by replacing the bad data using the REPLACE function.
I exported the data by writing a query, and in that query, instead of SELECT *, I listed all the column names and applied REPLACE to the columns that were causing problems, replacing every character that could potentially cause truncation (e.g. commas, pipes, tabs) with a space.
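A hedged sketch of that kind of export query (table and column names are placeholders; CHAR(9) is the tab character in T-SQL):
SELECT
    Id,
    REPLACE(REPLACE(REPLACE(Address, ',', ' '), '|', ' '), CHAR(9), ' ') AS Address
FROM dbo.SourceTable;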

SSIS XML Source Error - Input string was not in a correct format

I have an attribute tlost with the definition below in the XSD file. I have tried both use="required" and use="optional".
<xs:attributeGroup name="defense">
<xs:attribute name="tlost" use="required" type="xs:decimal"/>
</xs:attributeGroup>
In the XML document I am trying to import I will get a value like the following:
<defense ast="0" category="special_team" tlost="0" int="0"/>
I am executing an SSIS package that takes the tlost value and inserts it into a sql database table. The column in the database table has a datatype of DECIMAL(28,10) and allows nulls.
When I execute the package, the previous values work perfectly and the data is inserted. However, when I get a value where tlost="" in the XML file, the package fails and the record is not inserted.
In the data flow path editor, the data type for tlost is DT_DECIMAL. When I check the Advanced Editor for the XML Source, the Input and Output properties have a data type for tlost as decimal [DT_DECIMAL].
I can't figure out why this is failing. I tried to create a derived column and cast it as a (DT_DECIMAL, 10) data type; that didn't work. I tried to check for a null value and replace it with 0 if null; that didn't work. So I ignored the column altogether and, in the Derived Column task, replaced the tlost column value with (DT_DECIMAL, 10) 0 to just insert a 0 value and ignore whatever is in the XML file, and the job still failed with the following error message:
Error: 0xC020F444 at Load Play Summary Tables, XML Source [1031]: The error "Input string was not in a correct format." occurred while processing "XML Source.Outputs[defense].Columns[tlost]".
Error: 0xC02090FB at Load Play Summary Tables, XML Source [1031]: The "XML Source" failed because error code 0x80131537 occurred, and the error row disposition on "XML Source.Outputs[defense].Columns[tlost]" at "XML Source.Outputs[defense]" specifies failure on error. An error occurred on the specified object of the specified component.
Error: 0xC02092AF at Load Play Summary Tables, XML Source [1031]: The XML Source was unable to process the XML data. Pipeline component has returned HRESULT error code 0xC02090FB from a method call.
Error: 0xC0047038 at Load Play Summary Tables, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on XML Source returned error code 0xC02092AF. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Please help. I have exhausted everything I can think of to fix this issue. I am processing hundreds of files, and I can't keep fixing bad data files every time this issue occurs.
Can you please try these:
1 - Change the data type to string in the XSD and take care of the data type conversion before loading into the tables.
2 - If possible, generate the XSD by passing in your XML, then verify the data type and use it accordingly; the rest of the XSD can be changed to match.
Below is a screen grab of what I tried; hope it helps.
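A hedged sketch of option 1, reusing the attribute group from the question: declare tlost as a string so that an empty attribute value still parses, then convert it downstream:
<xs:attributeGroup name="defense">
<xs:attribute name="tlost" use="optional" type="xs:string"/>
</xs:attributeGroup>
With tlost arriving as a string, a Derived Column expression along the lines of tlost == "" ? (DT_DECIMAL,10)0 : (DT_DECIMAL,10)tlost can handle the empty case before the database insert.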

SSIS CSV Import Error 0xC0202092 DTS_E_PRIMEOUTPUTFAILED

All of a sudden, a CSV file that is imported into a db/table every morning has been failing every time for the last few weeks. I do not support this process directly and don't know much about SSIS, but I would greatly appreciate some help, as I need this working and whoever supports this process has no idea what the issue is. I'm not sure whether the error regarding the row has anything to do with the data in that row, because it looks fine to me. The CSV includes Active Directory information for every computer in AD and is exported from PowerShell to a server, where the CSV is imported into a table via SSIS. The process is entirely automated and nothing has changed.
[Source - Clean_Gold CSV [1]] Error: The column delimiter for column "LastLogontimestamp" was not found.
[Source - Clean_Gold CSV [1]] Error: An error occurred while processing file "H:\Computers\clean_gold.csv" on data row 40377.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - Clean_Gold CSV" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
I looked at row 40,378 and saw that someone had put a ", in the description of the computer object in Active Directory. That caused an issue with the delimiting.

SSIS export to CSV file failing

I am trying to export the contents of a SQL Server 2005 table to a csv file using SSIS. In the Data Flow Task I have a OLE DB Source for the table and a Flat File Destination for the file.
When copying the data I started getting a failure on one of the columns on a certain row, and following some investigation I found the problem was with commas in the data below.
Data issue (nvarchar(255)):
errors code l075 showing,,,re test.
[Screenshot: OLE DB Source for the Comment column]
[Screenshot: Derived Column]
Given that this was the issue, I created a Derived Column object between the source and destination objects and tried filtering out the commas using REPLACE(Comment,","," "), but the same column is still failing with the errors below.
Destination component exception:
[Inspection Failures Destination [206]] Error: Data conversion failed. The data conversion for column "Comment" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
[Inspection Failures Destination [206]] Error: Cannot copy or convert flat file data for column "Comment".
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Inspection Failures Destination" (206) failed with error code 0xC02020A0 while processing input "Flat File Destination Input" (207). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
[Inspecton Failures Source [128]] Error: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Inspecton Failures Source" (128) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
OK, the problem actually appears to be a hidden illegal character in the text.
In the image below, the top line shows a square before the "re test" string. The Comment column in the database is an nvarchar, which apparently uses a different character set, so I cannot just use CHAR(13) + CHAR(10) to replace the carriage return.
The fix involved converting the field from an nvarchar to a varchar, then performing a replace on the converted '?' character, resulting in the corrected second line in the image:
SELECT ID,
REPLACE(REPLACE(CAST(Comment AS varchar(255)),'?',' '),',',' ') Comment
FROM tblInspectionFailures WHERE (ID = 216899)
The conversion requirement is detailed here
This does not sound like an ideal solution to me, but it does work. Does anyone have any other options?
Instead of replacing the Comment column, can you create another column, map the new derived column to the destination column, and see?