Can we create a view name having multiple '-' or '/' in Hive?

I want to create a view whose name is actually the value of a stored variable and looks like school/123-324-235. Hive does not allow such view names, but being able to create one would be very helpful. Is there any trick that makes it possible to name a view like this?

As per the HIVE-12381 and HIVE-11699 JIRAs, starting from Hive 2.0
we can create a Hive table/view with / in its name, but we are still not able to use - in table names.
Ex:
Connected to: Apache Hive (version 1.2.1000.2.6.4.0-91)
hive> set hive.support.special.characters.tablename=true;
hive> create view `school123/245` as select * from <tb_name>;
hive> select * from `school123/245`;
hive> drop view `school123/245`;
Please refer to this link for a workaround for this case that changes the table name in the metastore.
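The linked workaround is not reproduced here; purely as an illustration of what a metastore-level rename can look like (assuming a MySQL-backed metastore with the standard schema, where table and view names live in TBLS.TBL_NAME; back up the metastore first and treat this as a sketch, not a supported procedure):
-- Run against the metastore database, not inside Hive.
-- 'old_view_name', 'school-123-324-235' and 'my_db' are placeholder values.
UPDATE TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID
SET t.TBL_NAME = 'school-123-324-235'
WHERE t.TBL_NAME = 'old_view_name'
  AND d.NAME = 'my_db';
Even after such an edit you would still have to reference the name with backticks in Hive, and names with characters Hive does not officially support may not behave correctly.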

Related

Is there a way to create contents of the schema into a Table in BigQuery?

Is there a way to create the contents of the BigQuery schema into a table?
I don't want to create a table from the schema; instead, I want to move the contents of the schema into a BigQuery table. I couldn't find any trivial method to do this.
Currently I download the table schema as JSON and then create a new table from it.
This worked for me!
SELECT *
FROM mydataset.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS;
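If the goal is to persist those schema rows as a regular table rather than just query them, a minimal sketch (assuming BigQuery standard SQL; mydataset and schema_contents are placeholder names):
-- Materialize the schema description into a normal BigQuery table.
CREATE TABLE mydataset.schema_contents AS
SELECT *
FROM mydataset.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS;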

Load data from Drill table into Hive Table

I have created a table using Drill and it is located at
/user/abc/drill/Drilltable.
Now I would like to load the data from DrillTable into HiveTable which is located at path
/user/hive/warehouse/userxyz.db
I am using the statement below to load the data:
INSERT INTO TABLE HiveTable select * from DrillTable;
I get the error
Table not found
and I am a bit confused about how to let Hive know the path of the Drill table.
What would be the right way to handle this?
Hive might be confused about the schema of the Drill data as well as its location. If you're willing to experiment, try something like this:
First, store the data in a Drill format you can model in Hive, CSV for example, as described in this post.
Then, in Hive, create an external table that defines the schema and location of the textual data. You can then convert the external table to a managed table (optional). For example, see the sketch below.
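A minimal sketch of that approach, assuming the Drill output was written as comma-separated text under /user/abc/drill/Drilltable and has two illustrative columns (drill_staging is a placeholder name; adjust column names and types to the real data):
-- External table pointing at the files Drill produced.
CREATE EXTERNAL TABLE drill_staging (
  id   INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/abc/drill/Drilltable';

-- Optional: convert the external table to a managed table.
ALTER TABLE drill_staging SET TBLPROPERTIES ('EXTERNAL' = 'FALSE');

-- The original INSERT then works against the staging table.
INSERT INTO TABLE HiveTable SELECT * FROM drill_staging;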

Hive - where is table information stored?

I am creating and inserting into tables in Hive, and the files are created on HDFS, with some on external storage (S3).
Assuming I have created 10 tables, is there any system table in Hive where I can find info about the tables created by a user? (For example, in Teradata we have DBC.TablesV, which holds information about all user-defined tables.)
You can find where your metastore is configured to be in the hive-site.xml file.
Its usual location is under /etc/hive/{$hadoop_version}/ or /etc/hive/conf/.
grep for "hive.metastore.uris" or "javax.jdo.option.ConnectionURL" to see which db you are using for the metastore. The credentials should also be there.
If, for example, your metastore is on a MySQL server, you can run queries like
SELECT * FROM TBLS;
SELECT * FROM PARTITIONS;
etc
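For instance, a sketch that lists every table with its database and owner (assuming the standard metastore schema, where tables live in TBLS and databases in DBS; column names can differ slightly between Hive versions):
SELECT d.NAME AS db_name,
       t.TBL_NAME,
       t.TBL_TYPE,
       t.OWNER
FROM TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID
ORDER BY d.NAME, t.TBL_NAME;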
You can't query (as in SELECT ... FROM ...) the metadata from within Hive.
You do, however, have commands that display that information, e.g. show databases, show tables, desc MyTable, etc.
I'm not sure I understood your question 100%. If you mean the information about the creation of the table, like the query itself, the location on HDFS, table properties, etc., you can try:
SHOW CREATE TABLE <table>;
If you need to retrieve a list of the column names and data types, try:
DESCRIBE <table>;
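If you also want the HDFS location, owner and table parameters in one command, DESCRIBE FORMATTED prints the detailed table information (mydb.mytable is a placeholder):
-- Columns first, then a "Detailed Table Information" section with
-- location, owner, table type, SerDe and table parameters.
DESCRIBE FORMATTED mydb.mytable;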

Just get column names from hive table

I know that you can get column names from a table via the following trick in hive:
hive> set hive.cli.print.header=true;
hive> select * from tablename;
Is it also possible to just get the column names from the table?
I dislike having to change a setting for something I only need once.
My current solution is the following:
hive> set hive.cli.print.header=true;
hive> select * from tablename;
hive> set hive.cli.print.header=false;
This seems too verbose and against the DRY principle.
If you simply want to see the column names, this one line should provide it without changing any settings:
describe database.tablename;
However, if that doesn't work for your version of Hive, this will provide it, but your default database will now be the database you are using:
use database;
describe tablename;
You could also do show columns in $table, or see "Hive, how do I retrieve all the database's tables columns" for access to Hive metadata.
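A quick sketch of the qualified form, which avoids switching your current database (table and database names are placeholders):
-- SHOW COLUMNS accepts an optional database qualifier.
SHOW COLUMNS IN tablename IN databasename;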
The solution is
show columns in table_name;
This is simpler than using
describe tablename;
Thanks a lot.
Use desc tablename from the Hive CLI or Beeline to get all the column names. If you want the column names in a file, then run the command below from the shell.
$ hive -e 'desc dbname.tablename;' > ~/columnnames.txt
where dbname is the name of the Hive database where your table resides.
You will find the file columnnames.txt in your home directory:
$cd ~
$ls
The best way to do this is to set the properties below:
set hive.cli.print.header=true;
set hive.resultset.use.unique.column.names=false;
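As a small illustration of what the second property changes (tablename is a placeholder): with both properties set, the printed header shows plain column names instead of names prefixed with the table name.
-- Header prints as e.g. "id  name" rather than "tablename.id  tablename.name".
select * from tablename limit 1;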

Hive - How to see the table created in metastore?

Here is our setup:
We have Hive using MySQL on another machine as its metastore.
I can start the Hive command-line shell, create a table and describe it.
But when I log on to the other machine where MySQL is used as the metastore, I cannot see the Hive table details in MySQL.
For example, here are the Hive commands:
hive> create table student(name STRING, id INT);
OK
Time taken: 7.464 seconds
hive> describe student;
OK
name string
id int
Time taken: 0.408 seconds
hive>
Next, I log on to the machine where MySQL is installed; this MySQL instance is used as the Hive metastore. I use the "metastore" database. But when I list the tables, I cannot see the table or the table info that I created in Hive.
How can I see the Hive table information in the metastore?
First, find which MySQL database the metastore is stored in. This is going to be in your hive-site.xml, in the connection URL. Then, once you connect to MySQL, you can:
use metastore;
show tables;
select * from TBLS; -- this will give you a list of your Hive tables
Another useful query, if you want to find which tables a particular column belongs to:
SELECT c.column_name, t.tbl_name, c.comment, c.type_name, c.integer_idx,
       t.tbl_id, t.create_time, t.owner, t.retention, t.sd_id, t.tbl_type,
       s.input_format, s.is_compressed, s.location, s.num_buckets,
       s.output_format, s.serde_id, s.cd_id
FROM TBLS t, SDS s, COLUMNS_V2 c
WHERE t.sd_id = s.sd_id
  AND s.cd_id = c.cd_id
  AND c.column_name = 'my_col'
  -- AND t.tbl_name = 'my_table'
ORDER BY t.create_time;
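A shorter sketch along the same lines, assuming the standard metastore schema, that looks up where a given Hive table (for example the student table from the question) is stored:
SELECT t.TBL_NAME, t.TBL_TYPE, s.LOCATION
FROM TBLS t
JOIN SDS s ON t.SD_ID = s.SD_ID
WHERE t.TBL_NAME = 'student';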
You can query the metastore schema in your MySQL database. Something like:
mysql> select * from TBLS;
More details on how to configure a MySQL metastore for Hive, and on verifying and viewing the stored metadata, are available here.
While setting up Hadoop services, or any other services (this is mandatory too), admins in most scenarios use a relational database to store the metadata of services like Hive and Oozie.
So, find which database (MySQL, PostgreSQL, SQL Server, etc.) your Hive is backed by, and you can see the metadata information in the TBLS table.
While upgrading Hive, you have to take a backup of these metastore tables.