What is ColumnMapKeyPrune in pig? - apache-pig

I am getting the error ERROR 2000: Error processing rule ColumnMapKeyPrune. What could be the reason?

This is most likely a bug in Pig. Try running with '-t ColumnMapKeyPrune' to disable that optimizer rule and see if the script then produces the expected output. Sample code would help check whether the issue is already fixed on trunk.
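For reference, the rule can be switched off from the command line; a minimal invocation (the script name is just a placeholder) would look like:
pig -t ColumnMapKeyPrune yourscript.pig
If the script produces correct output with the rule disabled, that points to a bug in the ColumnMapKeyPrune optimization rather than in your script.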

Related

dbt Error : Encountered an error: 'utf-8' codec can't decode byte 0xa0 in position 441: invalid start byte

I upgraded my dbt version to 1.0.0 last night and ran a few connection tests, which went well. Now, when I run my first dbt example model, I get the error below, even though I have not changed any code in this default example model.
I get the same error when running the dbt seed command for a CSV dataset. The CSV is UTF-8 encoded and has no special characters in it.
I am using Python 3.9.
Could anyone suggest what the issue is?
Below is my first dbt model SQL.
After a lot of back and forth, I figured out the issue. It is really a fundamental concept issue.
Every time we execute dbt run, dbt scans the entire project directory, including the seeds directory, even though it is not materializing the seeds (screenshot attached below).
If it finds any CSV files, it parses them as well.
In the case of the error above, I had a CSV file that looked as follows:
The highlighted line contains a symbol character that dbt (i.e. Python) was not able to decode, which caused the error above.
This symbol was not visible in Excel or Notepad++.
It could be the issue with the Snowflake Python connector that PeterH has pointed out.
As a temporary solution, for now we are manually removing these characters from the data file.
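If it helps, here is a small Python sketch (not part of the original answer; the seed path is a placeholder) that reports the byte offsets in a file that fail UTF-8 decoding, which is how the offending 0xa0 byte can be located:
from pathlib import Path

def find_invalid_utf8(path):
    # Walk the raw bytes and record every offset that fails UTF-8 decoding.
    data = Path(path).read_bytes()
    bad_offsets = []
    pos = 0
    while pos < len(data):
        try:
            data[pos:].decode("utf-8")
            break
        except UnicodeDecodeError as err:
            bad_offsets.append(pos + err.start)
            pos += err.start + 1
    return bad_offsets

print(find_invalid_utf8("seeds/example_seed.csv"))  # e.g. [441]
Opening the file in a hex editor at the reported offsets shows exactly which characters need to be removed or re-encoded.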
I’d leave this as a comment but I don’t have the rep yet…
This appears to be related to a recently-opened issue.
https://github.com/dbt-labs/dbt-snowflake/issues/66
Apparently it’s something to do with the Snowflake Python adapter.
Since you’re seeing the error from a different context, it might be helpful for you to post in that issue that you’re seeing this outside of query preview.

SSIS Conditional Split Error Output not writing any data to table

I have created this simple SSIS package:
This is my conditional split:
This is the Configure Error Output:
I am getting this error when running:
[ErrorTable [52]] Warning: Rows sent to the error output(s) will be lost. Add new data flow transformations or destinations to receive error rows, or reconfigure the component to stop redirecting rows to the error output(s).
Before I got this error, it was creating the error table but did not write any rows, so I changed the setting to Redirect row, and now I have this error. I'd be grateful for any help.
I just ran into the same issue and think I found the answer.
The output from your conditional split should not be the red "Error" path, but another output path (blue) to the "Error" file.
I kept expecting the red path to output the data from the conditional split, but it didn't because there actually weren't any errors.
I'm learning here, but I'm guessing the red "error" path is for actual data issues and not the conditional split functions.
Hope that helps!

Unexpected error running Liquibase: Unknown parameter: '#Liquibase.properties

I am setting up a new user for liquibase (3.5.3). When we run the following command:
liquibase --defaultsFile=Config/Liquibase.properties --logLevel=Info --contexts=initial update
We get this error message:
Unexpected error running Liquibase: Unknown parameter: '#Liquibase.properties'
SEVERE 2/7/17 11:39 AM: liquibase: Unknown parameter: '#Liquibase.properties'
liquibase.exception.CommandLineParsingException: Unknown parameter: '#Liquibase.properties'
at liquibase.integration.commandline.Main.parsePropertiesFile(Main.java:476)
at liquibase.integration.commandline.Main.run(Main.java:164)
at liquibase.integration.commandline.Main.main(Main.java:103)
For more information, use the --logLevel flag
I thought there might have been a funny character in the file, so we recreated it, but still received the same error. We also took a working copy of a properties file from another project and modified it; this produced the same result.
Any ideas on what is going wrong or thoughts on how to fix it, would be greatly appreciated.
The stray character at the start of the parameter is a UTF-8 byte order mark (BOM for short). Some text editors write one by default when saving a file as UTF-8, even though most programs do not understand it.
In your case, liquibase seems to be one of the programs which do not understand the BOM and treat it as the beginning of a parameter. To fix this, make sure you save the file as UTF-8 without BOM if your editor supports this option, or alternatively, as ASCII or ISO 8859 (ANSI) if you only use characters defined in ASCII.
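As a quick check, a short Python sketch (assuming the properties file name used above; the in-place rewrite is just one way to do it) can detect and strip a UTF-8 BOM:
BOM = b"\xef\xbb\xbf"  # the three bytes of a UTF-8 byte order mark

with open("Liquibase.properties", "rb") as f:
    data = f.read()

if data.startswith(BOM):
    # Rewrite the file without the leading BOM bytes.
    with open("Liquibase.properties", "wb") as f:
        f.write(data[len(BOM):])
    print("BOM removed")
else:
    print("no BOM found")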

Nupic : RegionTest fails, pynode cannot be found

I built NuPIC as per the instructions in the wiki. However, when I run testeverything, the RegionTest fails with a message that pynode cannot be found because neither NTA_ROOTDIR nor PYTHONPATH is set.
echo $NTA_ROOTDIR and echo $PYTHONPATH give the correct results, though.
The exact message is
MSG: Unable to find the pynode dynamic library because neither NTA_ROOTDIR nor PYTHONPATH is set.
How do I fix this?
I'd like to track this issue on the NuPIC issue tracker.
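As a quick sanity check (just a sketch, not from the original thread), you can confirm the variables are actually exported to the test's environment; a shell variable that is set but not exported will not be visible to the test process:
import os

for name in ("NTA_ROOTDIR", "PYTHONPATH"):
    print(name, "=", os.environ.get(name, "<not set>"))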

Error loading data "operation: Unexpected"

This is a repost of a question asked on the (now defunct) BigQuery forum.
While uploading data from the bq tool I get the following error:
BigQuery error in load operation: Unexpected. Please try again.
I've tried running several files, but each gives the same exception.
The latest failed job is job_5251c0bf5eb24436a350bdfbdbdb3cd8
It looks like that job hit a SECURITY_VIOLATION error. This is likely due to a line that is longer than the maximum line length (64k).
In the next build of BigQuery (which will probably go live next week) it will give you a better error in this case -- it will tell you which lines are too long, and long lines won't cause the import to fail (subject to the maxBadRecords limit).
In the meantime, you can make sure that your input lines are shorter than 64k (note that newlines can be quoted, so stray quotes can cause lines to appear to be too long).
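A short Python sketch (the file name is a placeholder) can flag physical lines longer than the 64k limit before you load the file:
MAX_LINE_BYTES = 64 * 1024  # the 64k line-length limit mentioned above

with open("data.csv", "rb") as f:
    for lineno, line in enumerate(f, start=1):
        if len(line) > MAX_LINE_BYTES:
            print(f"line {lineno}: {len(line)} bytes")
Keep in mind this is only a rough check: because newlines can be quoted, a stray quote can make one logical record span many physical lines.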