I have created 2 DBs:
DB1 'Airline',
DB2 'Students'.
I can see DB1 in the Hue browser, but I cannot see its tables inside /user/hive/warehouse/Airline.db/.
I can see the tables of Students.db in /user/hive/warehouse/Students.db, but I cannot see it in the Hue browser.
Is there anything I need to set?
Do you have access to the Hive CLI? If yes, try running this command:
describe database Airline;
You should see something like this:
Airline Airline Hive database hdfs://<host-fqdn>:8020/apps/hive/warehouse public
This is how you can find the location of a database. You can run the same command for the 'Students' database.
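If you need more detail, such as the owner and database parameters, Hive also supports an extended form of the same command:
describe database extended Airline;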
Is there any way to see which database we are working in at the Hive terminal? While working in Hive using the web GUI (Hue), there is a list of databases from which we can select the active database.
Yes, we can. For that we have to set the following property in the Hive terminal:
SET hive.cli.print.current.db = true;
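Once set, the Hive CLI prompt shows the current database in parentheses, e.g. (the database name here is just an example, and the exact prompt text may vary by version):
hive (default)> use mydb;
hive (mydb)>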
I have tables in different databases, and I want to create a data warehouse database that contains replicas of tables from those different databases. I want the data in the warehouse to be synced with the source tables every day. I am using PostgreSQL.
I tried to do this using psql:
pg_dump -t table_to_copy source_db | psql target_db
However, it didn't work, as it kept reporting errors like 'table does not exist'.
It all worked when I dumped the whole database rather than a single table. However, I want the data to be synced, and I want to copy tables from different databases, not a whole database.
How can I do this?
Thanks!
You probably need a Foreign Data Wrapper (FDW). You can create foreign tables for the different external databases in different schemas of the local database; all of those tables are then accessible through local queries. For storing snapshots you can use local tables with just INSERT INTO local_table_YYYY_MM SELECT * FROM remote_table;
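A minimal sketch with postgres_fdw (the server, host, user, schema, and table names below are all placeholders):
CREATE EXTENSION postgres_fdw;

-- Register the remote database (hypothetical host and database names).
CREATE SERVER source_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'source-host', dbname 'source_db');

CREATE USER MAPPING FOR CURRENT_USER SERVER source_srv
    OPTIONS (user 'replica_user', password 'secret');

-- Expose the remote tables in a dedicated local schema.
CREATE SCHEMA src;
IMPORT FOREIGN SCHEMA public FROM SERVER source_srv INTO src;

-- Daily snapshot into a local table (assumed to already exist with a matching structure).
INSERT INTO local_table_2024_01 SELECT * FROM src.remote_table;
The INSERT can then be scheduled to run daily, e.g. with cron.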
pg_dump -t <table name> <source DB> | psql -d <target DB>
(Check the table name carefully; the error says the table doesn't exist, which usually means the name is wrong.)
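If the table lives in a schema other than the default, qualify the name in the same command (the schema here is just an example):
pg_dump -t 'public.table_to_copy' source_db | psql target_db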
pg_dump allows dumping only selected tables:
pg_dump -Fc -f output.dump -t tablename databasename
(dump 'tablename' from database 'databasename' into file 'output.dump' in pg_dump's binary custom format)
You can restore that with pg_restore:
pg_restore -d databasename output.dump
If the table itself already exists in your target database, you can import only the rows by adding the --data-only flag.
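For example, reusing the file and database names from above:
pg_restore --data-only -d databasename output.dump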
Dblink
You cannot perform cross-database queries the way you can in SQL Server; PostgreSQL does not support this directly. The dblink extension of PostgreSQL is used to connect one database to another. You have to install and configure dblink to execute cross-database queries.
Here is a step-by-step script and example for executing a cross-database query in PostgreSQL. Please visit this post:
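A minimal sketch of a dblink query (the connection string, table, and column list are placeholders; note that dblink requires you to spell out the result's column types):
CREATE EXTENSION dblink;

-- Query a table in another database on the same server (hypothetical names).
SELECT *
FROM dblink('dbname=source_db', 'SELECT id, name FROM remote_table')
     AS t(id integer, name text);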
I am creating tables and inserting into them in Hive, and the files are created on HDFS, with some on external S3 storage.
Assuming I created 10 tables, is there any system table in Hive where I can find info on the tables created by a user? (For example, in Teradata we have DBC.tablesv, which holds information about all the user-defined tables.)
You can find where your metastore is configured in the hive-site.xml file.
Its usual location is under /etc/hive/{$hadoop_version}/ or /etc/hive/conf/.
Grep for "hive.metastore.uris" or "javax.jdo.option.ConnectionURL" to see which database you are using for the metastore. The credentials should also be there.
If, for example, your metastore is on a MySQL server, you can run queries like
SELECT * FROM TBLS;
SELECT * FROM PARTITIONS;
etc.
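For example, to list the tables in each database together with their owners (assuming the default metastore schema, in which TBLS references DBS via DB_ID):
SELECT d.NAME AS db_name, t.TBL_NAME, t.OWNER, t.TBL_TYPE
FROM TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID
ORDER BY d.NAME, t.TBL_NAME;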
You can't query (as in SELECT ... FROM...) the metadata from within Hive.
You do, however, have commands that display that information, e.g. show databases, show tables, desc MyTable, etc.
I'm not sure I understood your question 100%. If you mean the information about the creation of the table, such as the query itself, the location on HDFS, table properties, etc., you can try:
SHOW CREATE TABLE <table>;
If you need to retrieve a list of the column names and data types, try:
DESCRIBE <table>;
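There is also a formatted variant that additionally shows the HDFS (or S3) location and the table parameters:
DESCRIBE FORMATTED <table>;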
I have created a database using HQL, and it is created there, but I am not able to use that database from an Impala application. The table exists in Hive and we can query it there. This issue is seen only for some newly created tables. Can somebody please help?
Issue the following command in the Impala shell:
invalidate metadata;
This will load the metadata information into the Impala coordinator node you are connected to.
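If only one newly created table is affected, you can also scope the command to that table (the database and table names here are placeholders):
invalidate metadata mydb.new_table;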
http://www.cloudera.com/documentation/archive/impala/2-x/2-1-x/topics/impala_invalidate_metadata.html
Hi, I am a beginner with databases.
I have a .sql file which contains some tables of data. I want to know how to import them and how to view the list of tables.
Presently I'm using the following:
Software or editor: Navicat Lite
Server: localhost
Database file format: .sql
Maybe you can try to execute the script in SQL Server, then type
select * from [database_name].information_schema.tables
to view tables and relevant information.
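If your localhost server is actually MySQL (as the next answer assumes), a similar query works there ('databasename' being a placeholder):
SELECT table_name FROM information_schema.tables WHERE table_schema = 'databasename';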
Remember that a .sql file is not really a database; it is a script. You can run the script from any tool, but I'd use the command line. Is this Navicat connected to MySQL?
mysql -u username -p databasename < script.sql
password: **
The results can then be seen using Navicat or any other tool.
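Once the import finishes, you can also list the tables from the same client:
show tables;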
If the .sql file has statements such as "CREATE TABLE..." and later "INSERT INTO...", then the script is probably creating the tables and inserting the data.
To allow that to happen, the tables must not already exist in the database. You can then run the script and it will create the tables and fill in the data.
If the tables do exist, you can always either delete them or change the CREATE to an ALTER, and the script should then run.
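For illustration, a minimal script of that shape (the table and data are made up):
CREATE TABLE students (
    id INT PRIMARY KEY,
    name VARCHAR(100)
);

INSERT INTO students (id, name) VALUES (1, 'Alice');
INSERT INTO students (id, name) VALUES (2, 'Bob');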
Hope that helps.