Unexpected error running Liquibase: Unknown parameter: '#Liquibase.properties'

I am setting up a new user for liquibase (3.5.3). When we run the following command:
liquibase --defaultsFile=Config/Liquibase.properties --logLevel=Info --contexts=initial update
We get this error message:
Unexpected error running Liquibase: Unknown parameter: '#Liquibase.properties'
SEVERE 2/7/17 11:39 AM: liquibase: Unknown parameter: '#Liquibase.properties'
liquibase.exception.CommandLineParsingException: Unknown parameter: '#Liquibase.properties'
    at liquibase.integration.commandline.Main.parsePropertiesFile(Main.java:476)
    at liquibase.integration.commandline.Main.run(Main.java:164)
    at liquibase.integration.commandline.Main.main(Main.java:103)
For more information, use the --logLevel flag
I thought there may have been a funny character in the file, so we recreated it, but we still received the same error. We also took a working copy of a properties file from another project and modified it; this also produced the same result.
Any ideas on what is going wrong or thoughts on how to fix it, would be greatly appreciated.
The invisible character in front of '#Liquibase.properties' is a UTF-8 byte order mark (BOM for short). Some text editors write one by default when saving files as UTF-8, even though most programs do not understand it.
In your case, Liquibase seems to be one of the programs that does not understand the BOM and treats it as the beginning of a parameter. To fix this, make sure you save the file as UTF-8 without a BOM if your editor supports that option, or alternatively as ASCII or ISO 8859-1 (ANSI) if you only use characters defined in ASCII.
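In case it helps, here is a minimal Python sketch of that check (the path Config/Liquibase.properties is just the one from the question; adjust it to your setup). It looks for the 3-byte UTF-8 BOM at the start of the file and rewrites the file without it:

import codecs

path = "Config/Liquibase.properties"  # example path from the question

with open(path, "rb") as f:
    raw = f.read()

if raw.startswith(codecs.BOM_UTF8):
    # Strip the 3-byte BOM (EF BB BF) and write the file back without it.
    with open(path, "wb") as f:
        f.write(raw[len(codecs.BOM_UTF8):])
    print("BOM removed from", path)
else:
    print("No BOM found in", path)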

dbt Error : Encountered an error: 'utf-8' codec can't decode byte 0xa0 in position 441: invalid start byte

I upgraded my dbt version to 1.0.0 last night and ran a few connection tests, which went well. Now, when I run my first dbt example model, I get the error below, even though I have not changed any code in this default example model.
I get the same error when running the dbt seed command for a CSV dataset. The CSV is UTF-8 encoded and has no special characters in it.
I am using Python 3.9.
Could anyone suggest what the issue is?
Below is my first dbt model SQL:
After lots of back and forth, I figured out the issue. It is more of a fundamental-concept issue.
Every time we execute dbt run, dbt scans through the entire project directory (including the seeds directory, even though it is not materializing the seeds) [attached screenshot below].
If it finds any CSV, it also parses it.
In the case of the above error, I had a CSV file that looked as follows:
The highlighted line contains a symbol character which dbt (i.e. Python) was not able to parse, causing the above error.
This symbol was not visible earlier in Excel or Notepad++.
It could be the issue with the Snowflake Python connector that @PeterH has pointed out.
As a temporary solution, for now we are manually removing these characters from the data file.
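For reference, a rough sketch of that manual cleanup in Python (the seed path is just an example, not from the question). It reports where the undecodable byte sits, much like the dbt error message, and writes a cleaned copy:

path = "seeds/example_seed.csv"  # example path, adjust to your project

with open(path, "rb") as f:
    raw = f.read()

try:
    raw.decode("utf-8")
    print("File decodes cleanly as UTF-8.")
except UnicodeDecodeError as e:
    # Report the offending byte and its position, like the dbt error does.
    print("Bad byte 0x%02x at position %d" % (raw[e.start], e.start))
    # Write a cleaned copy with undecodable bytes dropped.
    cleaned = raw.decode("utf-8", errors="ignore").encode("utf-8")
    with open(path + ".cleaned", "wb") as out:
        out.write(cleaned)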
I’d leave this as a comment but I don’t have the rep yet…
This appears to be related to a recently-opened issue.
https://github.com/dbt-labs/dbt-snowflake/issues/66
Apparently it’s something to do with the snowflake python adapter.
Since you’re seeing the error from a different context, it might be helpful for you to post in that issue that you’re seeing this outside of query preview.

"Error loading data: 42000" in Pentaho PDI MonetDB bulk Loader step

I want to insert data from a large CSV file to MonetDB. I can't use MonetDB "mclient" because this procedure must run inside a Pentaho Server application within a Docker container. MonetDB is inside a Docker container too.
Here's my very simple transformation:
When I test the transformation, I always get the following error message:
2021/03/20 22:37:37 - MonetDB bulk loader.0 - Error loading data: 42000!COPY INTO: record separator contains '\r\n' but in the input stream, '\r\n' is being normalized into '\n'
Does anyone have any idea what is happening?
Thank you!
This is related to line endings. Pentaho issues a COPY INTO statement,
COPY INTO <table> FROM <file>
USING DELIMITERS ',', '\r\n'
Here, \r\n means DOS/Windows line endings. Since the Oct2020 release, MonetDB always normalizes the line endings from DOS/Windows to Unix style \n when loading data. Before, it used to sometimes normalize and sometimes not. However, normalizing to \n means that looking for \r\n would yield one giant line containing the whole file, hence the error message.
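A toy sketch (not MonetDB's actual code) of why that produces the error: once the server has normalized the line endings, splitting the input on '\r\n' finds nothing to split on, so the whole file looks like one record.

raw = "a,1\r\nb,2\r\nc,3\r\n"            # what the client sends (DOS line endings)
normalized = raw.replace("\r\n", "\n")    # what MonetDB (Oct2020+) sees after normalization

print(normalized.split("\r\n"))  # ['a,1\nb,2\nc,3\n']  -> one giant record
print(normalized.split("\n"))    # ['a,1', 'b,2', 'c,3', '']  -> the intended records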
I will submit a patch to MonetDB to automatically replace the USING '\r\n' with '\n'.
This will fix it in the longer term.
In the short term I have no good solution to offer. I have no experience with Pentaho, but looking at the source code, it seems Pentaho uses the system property line.separator, which is \r\n on Windows.
This means, if you have access to a Mac or Linux machine to run Pentaho on, that will work as line.separator is \n there. Otherwise, maybe you can ask the Pentaho people if the JVM can be started with something like java -Dline.separator="\n" as a workaround, see also this Stack Overflow question.
Otherwise, we'll have to use a patched version of either Pentaho, the MonetDB JDBC driver, or MonetDB. I could send you a patched version of the JDBC driver that automagically replaces the '\r\n' with '\n' before sending the query to the server, but you would have to figure out for yourself how to get Pentaho to use this JDBC driver instead of the default one.

pig error: Job in state DEFINE instead of RUNNING - Generic solution

A typical Pig error that occurs without much useful information is the following:
Job in state DEFINE instead of RUNNING
Often found in a line like this:
Caused by: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
I have seen some examples of this error, but would like to have the generic solution for this problem.
So far, on each occasion where I have encountered this error, it has been because Pig fails to load files. The error in the question is printed to the stderr log, and you will not find anything useful there.
However, if you were to look in the stdout log, you would expect to find the following:
Message: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input Pattern hdfs://x.x.x.x:x/locationOnHDFS/* matches 0 files
Typically followed by:
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input Pattern hdfs://x.x.x.x:x/locationOnHDFS/* matches 0 files
At this point the most likely suspects are:
There are no files in the specified folder (though the folder exists)
The user that is running the script does not have the rights to access the relevant files
All files are empty (not sure about this one)
Note that it is a commonly known difficulty that Pig will error out if you try to read an empty directory (rather than just processing the alias with 0 lines).
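One quick way to rule out the first and third suspects is to check the input pattern before running the script. Here is a small sketch that shells out to the standard hdfs dfs -ls command (the pattern below is just an example; adjust it to your cluster):

import subprocess

pattern = "hdfs://namenode:8020/locationOnHDFS/*"  # example pattern, not from the question

# List whatever the glob matches; empty output (or a non-zero exit code)
# means the pattern matches 0 files, which is exactly what triggers the error.
# The size column in the listing also shows whether the matched files are empty.
result = subprocess.run(["hdfs", "dfs", "-ls", pattern],
                        capture_output=True, text=True)
print(result.stdout or "Pattern matches 0 files")
if result.returncode != 0:
    print(result.stderr)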

doxygen latex make fails for input encoding error

I have a git repo project in eclipse which I have been documenting using doxygen (v1.8.4).
If I run the latex make on a fresh clone of the project, it runs fine and the PDF is made.
However, if I then run a doxy build, which completes OK, and then attempt to run the latex make, it fails with:
! Package inputenc Error: Keyboard character used is undefined
(inputenc) in inputencoding `utf8'.
See the inputenc package documentation for explanation.
Type H <return> for immediate help.
...
I have tried switching the encoding of the doxyfile by setting DOXYFILE_ENCODING to ISO-8859-1 with no change in the result... How can I fix this?? Thanks.
EDIT: I have used no non-UTF-8 chars, as far as I know, in my files; the file referenced before the error is very short and definitely doesn't have non-UTF-8 chars in it. I've even tried clearing my latex output dir and building from scratch, with no luck...
EDIT: I realised that the doxy build only appears to run correctly. It doesn't show any errors, but it should, for example, run DOT and build about 10 graphs. The console output says Running dot, but it doesn't say generating graph (n/x) like it should when it actually makes the graphs...
Short answer: by a slow process of elimination I found that this was caused by a single apostrophe in a file that had previously appeared to build and make without error!!
Long answer: Firstly, I used the project properties to flip the encoding from the default Cp1252 to UTF-8. Then I started removing files one by one, rebuilding and remaking after each removal, until the make ran successfully. I re-added all the files, but deleted the content of the most recently removed file and tested the make, to confirm it was this file and only this file that caused the issue. The make ran fine. So I pasted the content back into the empty file and started deleting smaller and smaller sections of it, again rebuilding and remaking each time, until I was left with a good make without the apostrophe and a bad one with it... I simply retyped the apostrophe (as this would then force it to be a UTF-8 char) and success!! Such an annoying bug!
Dude, you did it the hard way. Why not use Python to do the work for you:
import sys

fn = sys.argv[1]  # file to scan, passed on the command line
with open(fn, "rb") as f:
    data = f.read()

for i in range(len(data)):
    ch = data[i]
    if ch > 0x7F:  # non-ASCII byte
        print("char: %c, idx: %d, file: %s" % (ch, i, fn))
        str2 = data[max(i - 30, 0):i + 30].decode("utf-8", errors="replace")
        print("txt: %s" % str2)

Pig problem with load file with complicated name

I need to load a file in Pig which has a long and complicated name:
dealnews-2011-04-01T12:00:00:00.211-02:00.csv
Pig complained:
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2999: Unexpected internal error. java.net.URISyntaxException: Relative path in absolute URI:
Does anyone know what the problem is? Thanks.
If it's forming a URI from that, the : is a reserved character.
Think about it: file://a:b ... this would be taken as an FTP login.
Your error message seems to complain that what's left after the string is parsed is a relative path (I guess 00.csv after the last colon). Obviously no longer the whole filename.
You will need to escape any reserved characters in the filename before forming a URI.
You could do this on the command line, with for example:
ls | sed -e 's/:/%3A/g'
to transform the colons in the filename.
Or you could rename any files in the directory that use any of ";?:#&=+,$"
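If renaming by hand is tedious, here is a small Python sketch of the same escaping idea, using the filename from the question; urllib.parse.quote percent-encodes reserved characters such as ':'. It performs the same transformation as the sed one-liner above:

from urllib.parse import quote

name = "dealnews-2011-04-01T12:00:00:00.211-02:00.csv"
escaped = quote(name)  # reserved ':' becomes %3A; letters, digits, '.', '-', '_' are left alone
print(escaped)         # dealnews-2011-04-01T12%3A00%3A00%3A00.211-02%3A00.csv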
Not exactly the same case, but we got:
ERROR 2999: Unexpected internal error. java.net.URISyntaxException cannot be cast to java.lang.Error
java.lang.ClassCastException: java.net.URISyntaxException cannot be cast to java.lang.Error
for everything we tried to load, and the problem was that the PIG_CONF_DIR env variable was pointing to a folder that did not exist. We've reset it in the .bash_profile to a folder with valid core-site.xml and mapred-site.xml and everything's good now.
export PIG_CONF_DIR=/my_good_folder