I am unable to connect to an AWS RDS instance using Liquibase from the terminal on a Mac. I am able to connect to the same database and URL using MySQL Workbench and pymysql in Python. I have mysql-connector-java.jar in Liquibase's lib folder. I have also run brew install mysql and brew install mysql-client. I get the same error when using an invalid URL, but I have checked (many times) to make sure the URL string is correct. The exact error I get is:
Unexpected error running Liquibase: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Connection could not be created to jdbc:mysql://-URL IS HERE-?createDatabaseIfNotExists=true with driver com.mysql.cj.jdbc.Driver. Communications link failure
From the command:
liquibase --url=jdbc:mysql://-URL IS HERE-:3306/cdp_sms?createDatabaseIfNotExists=true --username="cdpadmin" --password=$CDP_DB_PW --changeLogFile=db.changelog-master.xml update
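For what it's worth, this is the kind of direct check I can run from the same terminal, bypassing Liquibase entirely (the endpoint below is a placeholder for the real RDS hostname):
# Confirm the RDS endpoint is reachable on 3306 and that the credentials work.
nc -vz my-rds-endpoint.rds.amazonaws.com 3306
mysql -h my-rds-endpoint.rds.amazonaws.com -P 3306 -u cdpadmin -p cdp_sms -e "SELECT 1;"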
Any help would be greatly appreciated.
I encountered a similar issue a while ago with Liquibase and AWS RDS. I would suggest checking your security groups (inbound/outbound rules) in the AWS settings.
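If you have the AWS CLI configured, a rough way to inspect those rules (the security group ID is a placeholder; make sure TCP 3306 is open to the IP you connect from):
# List the inbound rules of the security group attached to the RDS instance.
# sg-0123456789abcdef0 is a placeholder for your actual group ID.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions'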
I am trying to install Hive on Ubuntu, but I am getting an error when I try to create the Derby metastore with:
schematool -dbType derby -initSchema
I have configured HIVE_HOME in .bashrc, and I have also made the corresponding changes in bin/hive-config.sh.
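For reference, the entries look roughly like this (the paths are just where things are unpacked on my machine):
# In ~/.bashrc; /usr/local/apache-hive is a placeholder for my Hive install directory.
export HIVE_HOME=/usr/local/apache-hive
export PATH=$PATH:$HIVE_HOME/bin
# In bin/hive-config.sh; /usr/local/hadoop is a placeholder for my Hadoop install directory.
export HADOOP_HOME=/usr/local/hadoop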
What am I doing wrong here? Please help me with this.
Thanks in advance
I have tried different versions of Hive. I also tried placing the HIVE_HOME variables on different lines. Hadoop was running while I configured these things.
I have successfully installed Kudu on Ubuntu (Trusty) as per the official Kudu documentation (see http://kudu.apache.org/docs/installation.html ). The setup has one node running the master and a tablet server, and another node running a tablet server only. I am having issues installing impala-kudu without Cloudera Manager on the node running the Kudu master. I have followed the CDH installation instructions on this page (see http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh5_install.html ) up to Step 3. I have avoided installing CDH with YARN and MRv1, as I don't need to run any MapReduce jobs and will not be using Hadoop. impala-kudu and impala-kudu-shell installed without errors. When I launch the impala-shell it returns:
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, Could not connect to kudu_test:21000
***********************************************************************************
Welcome to the Impala shell. Copyright (c) 2015 Cloudera, Inc. All rights reserved.
(Impala Shell v2.7.0-cdh5-IMPALA_KUDU-cdh5 (48f1ad3) built on Thu Aug 18 12:15:44 PDT 2016)
Want to know what version of Impala you're connected to? Run the VERSION command to find out!
***********************************************************************************
[Not connected] >
I have tried to use the CONNECT option to connect to the kudu-master node without success. Both impala-kudu and kudu are running on the same machine. Are there additional configuration settings that need to be changed, or are Hadoop and YARN strict requirements to make impala-kudu work?
After running ps -ef | grep -i impalad I can confirm the Impala daemon is not running. After navigating to the Impala logs at ~/var/log/impala I find a few error and warning files. Here is the output of impalad.ERROR:
Log file created at: 2016/09/13 13:26:24
Running on machine: kudu_test
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0913 13:26:24.084389 3021 logging.cc:118] stderr will be logged to this file.
E0913 13:26:25.406966 3021 impala-server.cc:249] Currently configured default filesystem: LocalFileSystem. fs.defaultFS (file:///) is not supported.
ERROR: block location tracking is not properly enabled because
- dfs.datanode.hdfs-blocks-metadata.enabled is not enabled.
- dfs.client.file-block-storage-locations.timeout.millis is too low. It should be at least 10 seconds.
E0913 13:26:25.406990 3021 impala-server.cc:252] Aborting Impala Server startup due to improper configuration. Impalad exiting.
Maybe I need to revisit HDFS and the Hive Metastore to ensure I have these services configured properly?
According to the log, impalad quits because the default filesystem is configured to be LocalFileSystem, which is not supported. You have to set a distributed filesystem, such as HDFS, as the default.
Although Kudu is a separate storage system and does not rely on HDFS, Impala still seems to require a non-local default FS even when used with Kudu. The Impala_Kudu documentation explicitly lists the following requirement:
Before installing Impala_Kudu, you must have already installed and configured services for HDFS (though it is not used by Kudu), the Hive Metastore (where Impala stores its metadata), and Kudu.
I can even imagine that HDFS may not really be needed for any other reason than to make Impala happy, but this is just speculation on my side. Update: I found IMPALA-1850, which confirms my suspicion that HDFS should not be needed for Impala any more, but it's not just a single check that has to be removed.
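A rough way to confirm what the Hadoop configuration on the Impala host currently resolves, and what the fix looks like (this assumes the Hadoop client tools are on the PATH; the NameNode address is a placeholder):
# Print the default filesystem the Hadoop config resolves to;
# in the broken setup this returns file:///.
hdfs getconf -confKey fs.defaultFS
# The fix is to point fs.defaultFS at a running HDFS NameNode in core-site.xml, e.g.
#   <property>
#     <name>fs.defaultFS</name>
#     <value>hdfs://namenode-host:8020</value>
#   </property>
# and then restart impalad so it picks up the new value.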
I am trying to connect to a Vertica DB from R using the "RODBC" package. The machine I am using is a remote server which doesn't have direct internet access, so I basically "transfer" all source files from my local machine to the remote server to build the system. To give you clear context, I am listing all my steps in attempting to install the "RODBC" package below:
Step 1 - I downloaded the RODBC_1.3-13.tar.gz source file for RODBC and then tried to install it directly with "R CMD INSTALL". However, I encountered the error "ODBC headers sql.h and sqlext.h not found".
Step 2 - After some research, I found that installing "unixodbc-dev" would potentially solve this issue. Therefore, I downloaded all the needed dependencies for "unixodbc-dev" and transferred them to the server.
With those in place, I also successfully installed "unixodbc-dev".
However, another error appeared when I tried to re-install "RODBC" using "sudo R CMD INSTALL /home/mli/RODBC_1.3-13.tar.gz"; this time it returned the error "no ODBC driver manager found".
As the message indicates, the installation program can't locate my ODBC driver manager. So I downloaded "vertica-client-7.2.3-0.x86_64.tar.gz" and unpacked it on the server.
So now my question is: how can I customize the "R CMD INSTALL" command, say with some parameters, to point the installation at the driver manager location? Or am I even heading in the right direction? Please let me know. Any help would be really appreciated! :)
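I am guessing it is something along these lines, but I am not sure about the exact flags (the include/lib paths are just guesses at where unixodbc-dev put things on my box):
# Try to point RODBC's configure script at the unixODBC headers and libraries.
# The paths below are guesses; adjust to wherever unixodbc-dev installed them.
sudo R CMD INSTALL /home/mli/RODBC_1.3-13.tar.gz \
  --configure-args='--with-odbc-include=/usr/include --with-odbc-lib=/usr/lib/x86_64-linux-gnu'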
ADDITION:
I have also tried JDBC: I successfully loaded the "RJDBC" package in R and used the JDBC driver from vertica-client-7.2.3-0.x86_64.tar.gz. I also already have "rJava" installed. However, I still got an error when I tried to make the connection. I am listing my results below:
I successfully installed "RJDBC" with "R CMD INSTALL RJDBC_0.2-5.tar.gz --library=/usr/local/lib/R/site-library/" and then tried the following script in R. All the lines executed successfully except line 16:
Based on the error message, I assumed the version of the JDBC driver I was using is too new for the Vertica server. So I tried an older JDBC driver instead, "vertica-jdk5-6.1.0-0.jar", which I downloaded from this link: http://www.java2s.com/Code/Jar/v/Downloadverticajdk56100jar.htm
So, I moved the file "vertica-jdk5-6.1.0-0.jar" to my home directory on the server and then changed the JDBC driver path in the R script:
As you can see, it still returns the error "FATAL: Unsupported frontend protocol 3.6: server supports 3.0 to 3.5". Am I doing this right? Or is there an issue with the driver I downloaded? How can I make it work? Any help will be really appreciated! Thanks!
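In case it matters, this is how I checked what the server itself reports, using the vsql client from the same client package (the vsql path, host and user are placeholders for my actual setup):
# Query the Vertica server version directly, bypassing JDBC entirely.
/home/mli/opt/vertica/bin/vsql -h vertica-host.example.com -p 5433 -U dbadmin -c "SELECT version();"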
A few things:
First, just do sudo apt-get install r-cran-rodbc. The package was created (by yours truly) in no small part because dealing with unixODBC or iODBC is not fun. But even once you have that, you still need the ODBC driver for Linux from Vertica. And that part is fiddly.
Second, I just did something similar the other day but just used JDBC, which worked. You do of course need sudo apt-get install r-cran-rjava which has its own can of worms (but I already mentioned Java...) Still, maybe try that instead?
Third, you can cheat and just use psql pointed to the Vertica port (usually one above the PostgreSQL port).
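A minimal sketch of that third option (host, database and user are placeholders; 5433 is the port Vertica usually listens on):
# Use the stock PostgreSQL client against Vertica's port instead of 5432.
psql -h vertica-host.example.com -p 5433 -d mydb -U dbadmin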
I am currently configuring a Cloudera HDP dev image using this tutorial on CentOS 6.5, installing the base and then adding the different components as I need them. Currently, I am installing / testing HCatalog using this section of the tutorial linked above.
I have successfully installed the package and am now testing HCatalog integration with Pig with the following script:
A = LOAD 'groups' USING org.apache.hcatalog.pig.HCatLoader();
DESCRIBE A;
I have previously created and populated a 'groups' table in Hive before running the command. When I run the script with the command pig -useHCatalog test.pig I get an exception rather than the expected output. Below is the initial part of the stacktrace:
Pig Stack Trace
---------------
ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1608)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1547)
at org.apache.pig.PigServer.registerQuery(PigServer.java:518)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:991)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:412)
...
Has anyone encountered this error before? Any help would be much appreciated. I would be happy to provide more information if you need it.
The error was caused by HBase's Thrift server not being properly configured. I installed/configured Thrift and added the following to my hive-site.xml with the proper server information:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://<!--URL of Your Server-->:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
I thought the snippet above was not required since I am running Cloudera HDP in pseudo-distributed mode. It turns out that it, and HBase Thrift, are required to use HCatalog with Pig.
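For anyone hitting the same thing, a quick sanity check that the metastore Thrift service is actually up on the configured port (9083 matches the snippet above; adjust if yours differs):
# Start the Hive metastore Thrift service if it is not already running.
hive --service metastore &
# Then confirm something is listening on the port referenced in hive-site.xml.
netstat -tlnp | grep 9083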
I am trying to download the dump of a database on Heroku in the following way:
pg_dump dc6psqngs8h580 -h url_address -U user_name > db.sql
but I am getting this error all the time:
pg_dump: server version: 9.2.4; pg_dump version: 9.1.5
pg_dump: aborting because of server version mismatch
I found topics with this error here on SO, but none of them helped me out with this issue.
I added:
PATH="/Applications/Postgres.app/Contents/MacOS/bin:$PATH"
into ~/.profile, and I also added
export PATH=/Applications/Postgres.app/Contents/MacOS/bin:$PATH
into ~/.bash_profile, but none of these helped me to successfully download the dump.
Where else could the problem be, and how can I fix it?
Thank you very much
Install PostgreSQL 9.2.x, whether via Homebrew, from source, or by some other means.
Basically, you can dump an older server with a newer version of pg_dump, but not the other way around. This check exists to ensure that newer features of the database are not ignored in the dump, and there are no command-line switches to get around the error.
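A quick way to see which binary the shell is actually picking up, and to verify the fix afterwards (the Postgres.app path is the one from the question; adjust if your install differs):
# Check which pg_dump the shell resolves and the version it reports.
which pg_dump
pg_dump --version
# After installing a 9.2.x client, reload the profile and check again, or call
# the newer binary explicitly (assuming Postgres.app ships a matching pg_dump).
source ~/.bash_profile
/Applications/Postgres.app/Contents/MacOS/bin/pg_dump --version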