SHOW CREATE TABLE not showing full output in Hive

In Hive, typing show create table table_name does not show the full output, especially when the table has many columns. What command should be added to show the whole output?

Are you using the hive or beeline shell? It might be related to the interface you are using; shell tools in particular may truncate long results.
Could you try it from Hue or a graphical desktop tool like DBeaver?
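If beeline is the one doing the truncating, a couple of client-side flags may help. A minimal sketch, assuming beeline; the JDBC URL and table name are placeholders:
# Widen the display buffer and disable column truncation (beeline flags).
beeline -u jdbc:hive2://localhost:10000 \
  --maxWidth=100000 --truncateTable=false \
  -e "SHOW CREATE TABLE db.table_name;"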

Related

Google BigQuery: Determine invalid views (e.g. dryRun & list)

We have several views across numerous projects and datasets in Google BigQuery. Is there a way to list all invalid views, e.g. to "re-validate" all views and then get a list?
While it might not cover all problems, I think I could execute a view with the dryRun parameter to determine its state (https://cloud.google.com/bigquery/docs/dry-run-queries). But first I would need to determine all existing views (across all projects, or, as that may be a bad idea, at least within one project), then trigger each view with the dryRun parameter and store the results somewhere.
Hints on how to do that are appreciated.
Regards,
HerrB92
I am not aware of any built-in tool for this, but it should be doable with some scripting.
The bq ls command returns the list of datasets; for each dataset you can then run bq ls <dataset> (or use SELECT * FROM dataset.INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'VIEW'), and finally run each view with the --dry_run flag.
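A rough bash sketch of that loop, assuming the bq CLI is configured, dataset and view names contain no spaces, and my-project is a placeholder project id:
PROJECT="my-project"  # hypothetical project id
# Iterate over datasets, list each dataset's views, dry-run each view.
for ds in $(bq ls --project_id="$PROJECT" --format=csv | tail -n +2); do
  for v in $(bq query --project_id="$PROJECT" --use_legacy_sql=false --format=csv \
      "SELECT table_name FROM \`$PROJECT.$ds\`.INFORMATION_SCHEMA.TABLES WHERE table_type = 'VIEW'" \
      | tail -n +2); do
    # A failing dry run flags the view as invalid.
    if ! bq query --project_id="$PROJECT" --use_legacy_sql=false --dry_run \
        "SELECT * FROM \`$PROJECT.$ds.$v\`" >/dev/null 2>&1; then
      echo "INVALID: $PROJECT.$ds.$v"
    fi
  done
done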

Google BigQuery list tables

I need to list all tables in my BigQuery project, but I don't know how to do it; I searched but didn't find anything about it.
I need to know if a table exists: if it exists I query it for the record, and if not I create the table and insert the record.
Depending on where/how you want to do this, you can use the CLI, API calls, or client libraries. Here you have all the info about listing tables.
As an example, if you want to list them using Command Line Interface, you can do it like:
bq ls <project>:<dataset>
If you want to use normal SQL queries, you can use the INFORMATION_SCHEMA beta feature:
SELECT table_name FROM `<project>.<dataset>.INFORMATION_SCHEMA.TABLES`
(project is optional)
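For the exists-or-create part, a small shell sketch, assuming the bq CLI; the dataset, table, and schema names are placeholders:
# bq show exits non-zero when the table does not exist.
if ! bq show mydataset.mytable >/dev/null 2>&1; then
  bq mk --table mydataset.mytable name:STRING,value:INTEGER
fi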

Command to get the sql query of a BigQuery view

We have created a large number of views in BigQuery using standard SQL. Now we need to check the correctness of these created views.
Is there a bq command to get the SQL query with which these views were created in BigQuery?
Such a command would avoid the manual effort of checking each view.
Use the show command with the view flag.
e.g. bq show --view <project>:<dataset>.<view>
You can also use the --format=prettyjson flag (instead of --view) so you can easily get the query content when running a script, for example:
bq show --format=prettyjson <project>:<dataset>.<view>
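To check many views at once, a small sketch that prints the SQL of every view in one dataset, assuming the bq CLI plus jq; myproject:mydataset is a placeholder:
# List the views in the dataset, then print each one's query text.
for v in $(bq ls --format=json myproject:mydataset \
    | jq -r '.[] | select(.type == "VIEW") | .tableReference.tableId'); do
  echo "-- view: $v"
  bq show --format=prettyjson myproject:mydataset."$v" | jq -r '.view.query'
done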

Hive: create table and write it locally at the same time

Is it possible in Hive to create a table and have it saved locally at the same time?
When I get data for my analyses, I usually create temporary tables to track any mistakes in the queries/scripts. Some of these are just temporary tables, while others contain the data that I actually need for my analyses.
What I usually do is run hive -e "select * from db.table" > filename.tsv to get the data locally; however, when the tables are big this can take quite some time.
I was wondering if there is some way in my script to create the table and save it locally at the same time. Probably this is not possible, but I thought it was worth asking.
Honestly, doing it the way you are is the better of the two options, but it is worth noting that you can perform a similar task in an .hql file for automation.
Using syntax like this:
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp' select * from table;
You can run a query and store the result somewhere on the local filesystem (as long as there is enough space and the correct privileges).
A disadvantage is that with the redirect you get the data nicely tab-delimited and newline-separated, while this method stores the values with Hive's default field delimiter, '\001' (Ctrl-A).
A work around is to do something like this:
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
select books from table;
But this is only available in Hive 0.11 or higher.
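Putting the two together, a minimal .hql sketch (run with hive -f script.hql) that materializes the table once and then exports the stored result locally, assuming Hive 0.11+; the table and path names are placeholders:
-- Materialize the result once.
CREATE TABLE db.tmp_result AS
SELECT * FROM db.source_table;

-- Export the already-computed table locally, tab-delimited.
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
SELECT * FROM db.tmp_result;
The export then reads the stored table rather than recomputing the query, so the extra cost over a single export stays small.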

Bteq Scripts to copy data between two Teradata servers

How do I copy data from multiple tables within one database to another database residing on a different server?
Is this possible through a BTEQ Script in Teradata?
If so, provide a sample.
If not, are there other options to do this besides using a flat file?
This is not possible using BTEQ since, as you mentioned, the two databases reside on different servers.
There are two solutions for this.
Arcmain - You need to run an ARCMAIN backup first, which creates files containing the data from your tables. Then you run an ARCMAIN restore, which restores the data from those files.
TPT - Teradata Parallel Transporter. This is a very advanced tool. It does not create any files like ARCMAIN; it moves the data directly between the two Teradata servers. (Wikipedia)
If I am understanding your question, you want to move a set of tables from one DB to another.
You can use the following syntax in a BTEQ Script to copy the tables and data:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH DATA AND STATS;
Or just the table structures:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH NO DATA AND NO STATS;
If you get really savvy, you can create a BTEQ script that dynamically builds the above statement in a SELECT, exports the results to a file, and then runs the newly exported file, all within a single BTEQ script; a sketch follows below.
There are a bunch of other options that you can use with CREATE TABLE <...> AS <...>;. You would be best served reviewing the Teradata manuals for more details.
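A rough BTEQ sketch of that dynamic approach, assuming read access to DBC.TablesV; OldDB, NewDB, and the file name are placeholders:
.SET WIDTH 500
.EXPORT REPORT FILE = copy_tables.btq
/* Generate one CREATE TABLE ... WITH DATA AND STATS per table in OldDB. */
SELECT 'CREATE TABLE NewDB.' || TRIM(TableName) ||
       ' AS OldDB.' || TRIM(TableName) ||
       ' WITH DATA AND STATS;' (TITLE '')
FROM DBC.TablesV
WHERE DatabaseName = 'OldDB' AND TableKind = 'T';
.EXPORT RESET
.RUN FILE = copy_tables.btq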
There are a few more options which will allow you to copy from one table to another.
Possibly the simplest way would be to write a smallish program that uses one of Teradata's communication layers (ODBC, .NET Data Provider, JDBC, CLI, etc.) to run a SELECT against one server and the corresponding INSERTs against the other. This would require some work, but it has less overhead than learning to write TPT scripts, and you would not need any 'DBA' permissions to write your own.
Teradata also sells applications that hide the complexity of some of these tools. Teradata Data Mover provides an abstraction layer over tools like ARCMAIN and TPT; access to it is most likely restricted to DBA types.
If you want to move data from one server to another, you can do it with a flat file.
First, fetch the data from the source table into a flat file using a utility such as BTEQ or FastExport.
Then load that data into the target table using MultiLoad, FastLoad, or BTEQ scripts.
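For completeness, a bare-bones BTEQ sketch of that flat-file route; hosts, credentials, table names, and the two-column schema are all placeholders, and FastExport/FastLoad would be the faster equivalents for large volumes.
On the source server:
.LOGON source_host/user,password
.EXPORT DATA FILE = table1.dat
SELECT * FROM SourceDB.Table1;
.EXPORT RESET
.LOGOFF
Then on the target server:
.LOGON target_host/user,password
.IMPORT DATA FILE = table1.dat
.REPEAT *
USING (c1 INTEGER, c2 VARCHAR(50))
INSERT INTO TargetDB.Table1 VALUES (:c1, :c2);
.LOGOFF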