FlexibleSearch "type code invalid" and Method Invocation error in impex.exportItemsFlexibleSearch - sql

I'm asking for your help; I don't know why I'm still getting these errors :(
20.06.25 18:47:40:992 ERROR line 3 at main script: Flexiblesearch error: type code 'StockLevel ' invalid
20.06.25 18:47:40:992 ERROR line 3 at main script: query was 'SELECT DISTINCT {sl.pk} AS PK, {sl.productCode} AS SKU, {p.name} AS Brand_Name, {aas.code} AS Approval_Status, {p.onlineDate} AS Online_From_Date, {p.offlineDate} AS Online_To_Date, {cv.version} as Catalog_Version, {sl.available} AS Available_Stocks, {sl.reserved} AS Reserved_Stocks FROM {StockLevel AS sl}, {Product AS p}, {ArticleApprovalStatus AS aas}, {CatalogVersion AS cv} WHERE {p.code}={sl.productCode} AND {aas.pk}={p.approvalStatus} AND {cv.pk}={p.catalogVersion} AND {cv.version}='online' AND {aas.code}='approved' ORDER BY {sl.pk}'
20.06.25 18:47:41:173 ERROR line 3 at main script: error executing code line at 3 : Sourced file: inline evaluation of: ``impex.exportItemsFlexibleSearch("SELECT DISTINCT {sl.pk} AS PK, {sl.productCode} . . . '' : Method Invocation impex.exportItemsFlexibleSearch
Here is my ImpEx for exporting the data:
INSERT_UPDATE StockLevel;pk[unique=true];product(code);product(name[lang=en]);product(approvalStatus(code));product(onlineDate[dateformat=MM-dd-yyyy]);product(offlineDate[dateformat=MM-dd-yyyy]);product(catalogVersion(version));available[allownull=true];reserved[allownull=true]
"#% impex.exportItemsFlexibleSearch(""SELECT DISTINCT {sl.pk} AS PK, {sl.productCode} AS SKU, {p.name} AS Brand_Name, {aas.code} AS Approval_Status, {p.onlineDate} AS Online_From_Date, {p.offlineDate} AS Online_To_Date, {cv.version} as Catalog_Version, {sl.available} AS Available_Stocks, {sl.reserved} AS Reserved_Stocks FROM {StockLevel AS sl}, {Product AS p}, {ArticleApprovalStatus AS aas}, {CatalogVersion AS cv} WHERE {p.code}={sl.productCode} AND {aas.pk}={p.approvalStatus} AND {cv.pk}={p.catalogVersion} AND {cv.version}='online' AND {aas.code}='approved' ORDER BY {sl.pk}"");"
Thank you. :)

I tried running your script on my local machine and it exported the data. Did you check whether the StockLevel type exists in your system? It is part of the basecommerce extension.
Also, try this refactored ImpEx; it will also work when the query returns no results:
INSERT_UPDATE StockLevel;pk[unique=true];product(code);product(name[lang=en]);product(approvalStatus(code));product(onlineDate[dateformat=MM-dd-yyyy]);product(offlineDate[dateformat=MM-dd-yyyy]);product(catalogVersion(version));available[allownull=true];reserved[allownull=true]
"#% impex.exportItemsFlexibleSearch(""SELECT DISTINCT {sl.pk} FROM {StockLevel AS sl JOIN Product AS p ON {p.code}={sl.productCode} JOIN ArticleApprovalStatus AS aas ON {aas.pk}={p.approvalStatus} JOIN CatalogVersion AS cv ON {cv.pk}={p.catalogVersion}} WHERE {cv.version}='online' AND {aas.code}='approved'"");"

Related

Why does dbt run in the CLI but throw an error on the cloud UI for the exact same model?

I am executing dbt run -s model_name on the CLI and the task completes successfully. However, when I run the exact same command on dbt Cloud, I get this error:
Syntax or semantic analysis error thrown in server while executing query.
Error message from server: org.apache.hive.service.cli.HiveSQLException:
Error running query: org.apache.spark.sql.AnalysisException: cannot
resolve '`pv.meta.uuid`' given input columns: []; line 6 pos 4;
\n'Project ['pv.meta.uuid AS page_view_uuid#249595,
'pv.user.visitorCookieId AS (80) (SQLExecDirectW)")
It looks like it fails to recognize the 'pv.meta.uuid' syntax, which extracts data from a JSON format. It is not clear to me what is going on. Any thoughts? Thank you!

ABC synthesis - read_liberty

I'm using abc01008.exe to synthesize combinational functions.
I have been using mcnc.genlib and stdcell.lib with no problems.
I would like to use a different std_cell library that is in the Liberty format.
When I type 'rty' or 'read_liberty', I get the following error:
abc 01> rty
** cmd error: unknown command 'read_liberty'
(this is likely caused by using an alias defined in "abc.rc"
without having this file in the current or parent directory)
abc 01> read_liberty
** cmd error: unknown command 'read_liberty'
(this is likely caused by using an alias defined in "abc.rc"
without having this file in the current or parent directory)
Can someone point me in the right direction?
Thanks in advance!

R: Knitr gives error for SQL-chunk

I would like to knit the output of my R Markdown document, which includes a couple of SQL chunks. However, when I start knitting, I get the error:
Line 65 Error in eval(expr, envir, enclos) : object 'pp_dataset' not found Calls: <Anonymous> ... process_group.block -> call_block -> eval_lang -> eval Execution halted
I have no clue what is going on, because if I just run this chunk (which starts at line 64) then it works fine.
The chunk that starts at line 64 looks as follows:
```{sql, connection=con, output.var=pp_dataset, error=TRUE, echo=FALSE, include=TRUE}
SELECT
(...)
order by 1,2
```
I've tried several knit options like error=TRUE/FALSE, echo=TRUE/FALSE and include=TRUE/FALSE, but that doesn't work.
Anyone a clue what's wrong?
It looks like you need to quote the dataset name in the R chunk options:
```{sql, connection=con, output.var="pp_dataset", error=TRUE, echo=FALSE, include=TRUE}
SELECT
(...)
order by 1,2
```
Source: http://rmarkdown.rstudio.com/authoring_knitr_engines.html#sql
I answered the question in this post as well; I'm not sure about the protocol, as the answers are identical.
When rendering your document, R Markdown does not have access to your global environment, so you should make sure that all variables you want to use are defined within the R Markdown document, e.g. in an initial chunk:
```{r setup, include=FALSE, warning=FALSE, message=FALSE}
(...)
```
or you should run
render("yourfile.Rmd")
instead of pressing the Knit button. In that case, the document does have access to your global environment. Here I would guess the 'con' connection lives in your global environment and is not found while rendering. Hope this helps!
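Concretely, such a setup chunk could define the connection inside the document (a sketch only; RSQLite is just a stand-in for whatever DBI driver you actually use):
```{r setup, include=FALSE}
library(DBI)
# hypothetical connection; swap in your own driver and connection details
con <- dbConnect(RSQLite::SQLite(), ":memory:")
```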
EDIT: I was able to reproduce the error with your example code. I was not able to run your code without first initializing the output variable of the SQL statement. In your top chunk (so, for example, below the line setwd(mydirectory)), try:
pp_dataset <- NULL
Hope this also solves the issue for you.

Plink. Error: No people remaining after --keep

I'm trying to subset individuals from the ACB population from the file named allconcat39.vcf, using Plink 1.9. For that, I created a tab-delimited text file in R called indACB, which looks like this:
head indACB.txt
684_HG01879 684_HG01879
685_HG01880 685_HG01880
686_HG01882 686_HG01882
687_HG01883 687_HG01883
688_HG01885 688_HG01885
689_HG01886 689_HG01886
690_HG01889 690_HG01889
691_HG01890 691_HG01890
694_HG01894 694_HG01894
695_HG01896 695_HG01896
When I run the following command:
./plink --vcf allconcat39.vcf --keep indACB.txt --recode --out allconcat39ACB
the following error occurs:
Error: No people remaining after --keep.
I made sure that the vcf and the indACB.txt file had compatible individual IDs and sample IDs. I don't know where else the problem can be. Any thoughts? Thank you in advance!
It was solved in another forum by Christopher Chang: add --double-id to your command line; otherwise plink treats '_' as a delimiter between the FID and IID.
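With that flag added, the command from the question becomes:
./plink --vcf allconcat39.vcf --double-id --keep indACB.txt --recode --out allconcat39ACB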

Unable to extract data with double pipe delimiter in Pig Script

I am trying to extract data which is delimited by a double pipe in Pig. The following is my command:
L = LOAD 'entirepath_in_HDFS/b.txt/part-m*' USING PigStorage('||');
I am getting the following error:
2016-08-04 23:58:21,122 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: Pig script failed to parse:
<line 1, column 4> pig script failed to validate: java.lang.RuntimeException: could not instantiate 'PigStorage' with arguments '[||]'
My input sample file has exactly 5 lines, as follows:
POS_TIBCO||HDFS||POS_LOG||1||7806||2016-07-18||1||993||0
POS_TIBCO||HDFS||POS_LOG||2||7806||2016-07-18||1||0||0
POS_TIBCO||HDFS||POS_LOG||3||7806||2016-07-18||1||0||5
POS_TIBCO||HDFS||POS_LOG||4||7806||2016-07-18||1||0||0
POS_TIBCO||HDFS||POS_LOG||5||7806||2016-07-18||1||0||19.99
I tried several options, like using a backslash before the delimiter (\||, \|\|), but everything failed. I also tried with a schema but got the same error. I am using Hortonworks (HDP 2.2.4) and Pig (0.14.0).
Any help is appreciated. Please let me know if you need any further details.
I have faced this case, and from checking the PigStorage source code, I think the PigStorage argument is parsed as only a single character.
So we can use this code instead:
L0 = LOAD 'entirepath_in_HDFS/b.txt/part-m*' USING PigStorage('|');
L = FOREACH L0 GENERATE $0,$2,$4,$6,$8,$10,$12,$14,$16;
It's helpful if you know how many columns you have, and it will not affect performance because it's map-side. (See the worked example below for why the even-numbered positions hold the data.)
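Why the even-numbered positions: splitting on a single '|' turns every '||' into two delimiters with an empty field between them, so the real values land in $0, $2, $4, and so on. For the first sample line above, the projection should therefore yield:
(POS_TIBCO,HDFS,POS_LOG,1,7806,2016-07-18,1,993,0)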
When you load data using PigStorage, it only expects a single character as the delimiter.
However, if you still want to achieve this, you can use MyRegExLoader:
REGISTER '/path/to/piggybank.jar'
A = LOAD '/path/to/dataset' USING org.apache.pig.piggybank.storage.MyRegExLoader('||')
as (movieid:int, title:chararray, genre:chararray);
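Note that, as far as I can tell, MyRegExLoader matches each whole line against a regular expression and emits the capture groups as fields, rather than taking a delimiter, so a bare '||' argument may not behave as intended. A hedged sketch for the first two columns of the sample data (the pattern and field names are mine; the pipes must be escaped in the regex):
REGISTER '/path/to/piggybank.jar';
A = LOAD '/path/to/dataset' USING org.apache.pig.piggybank.storage.MyRegExLoader('([^|]*)\\|\\|([^|]*)\\|\\|(.*)') AS (source:chararray, system:chararray, rest:chararray);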