Writing large scripts in Impala - impala

I have to translate long Teradata scripts (10,000 lines long) into Impala. I have never done this before with Impala.
The tools I've got to work with are impala-shell and Hue.
I've not seen an example of Impala code that's more than 50 lines long in either impala-shell or Hue. Can someone point me to an example of an Impala script, in either tool, that's at least 500 lines long?
I can handle the syntax changes, so I don't need advice on that. I'm looking for the gotchas or traps in writing long code in these tools.

You need to create an external table that points at your source data file (as shown in the Impala tutorial).
-- The EXTERNAL clause means the data is located outside the central location
-- for Impala data files and is preserved when the associated Impala table is dropped.
-- We expect the data to already exist in the directory specified by the LOCATION clause.
CREATE EXTERNAL TABLE tab1
(
id INT,
col_1 BOOLEAN,
col_2 DOUBLE,
col_3 TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/sample_data/tab1';
Then you can easily move your data whenever you want using an INSERT statement.
INSERT INTO table2
SELECT * FROM tab1;
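Since the original question is about very long scripts, it is worth noting that statements like the ones above can simply be collected into a single semicolon-separated .sql file and run non-interactively. A minimal sketch, assuming a hypothetical file name and copy table (neither is from the tutorial):
-- long_script.sql (hypothetical file): any number of statements, each ending with a semicolon
CREATE TABLE IF NOT EXISTS tab1_copy LIKE tab1;
INSERT INTO tab1_copy SELECT * FROM tab1;
The file can then be executed in one go with impala-shell -i <impalad-host> -f long_script.sql, where the host is a placeholder for your Impala daemon.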

Related

Migrating data from Hive PARQUET table to BigQuery, Hive String data type is getting converted in BQ - BYTES datatype

I am trying to migrate data from Hive to BigQuery. Data in the Hive table is stored in the PARQUET file format. The data type of one column is STRING. I am uploading the file behind the Hive table to Google Cloud Storage and creating a BigQuery internal table from it with the GUI. The data type of the column in the imported table is getting converted to BYTES.
But when I imported CHAR or VARCHAR data types, the resulting data type was STRING.
Could someone please help me understand why this is happening?
This does not fully answer the original question, as I do not know exactly what happened, but I have had experience with similar odd behavior.
I was facing a similar issue when trying to move a table between Cloudera and BigQuery.
First I created the table as external in Impala, like this:
CREATE EXTERNAL TABLE test1
STORED AS PARQUET
LOCATION 's3a://table_migration/test1'
AS select * from original_table
original_table has columns with the STRING data type.
Then I transferred that to GS and imported it into BigQuery from the console GUI; there are not many options, you just select the Parquet format and point to GS.
And to my surprise the columns were now of type BYTES; the column names were preserved fine, but the content was scrambled.
Trying different codecs, and pre-creating the table and inserting into it, still in Impala, led to no change.
Finally I tried to do the same in Hive, and that helped.
So I ended up creating an external table in Hive, like this:
CREATE EXTERNAL TABLE test2 (col1 STRING, col2 STRING)
STORED AS PARQUET
LOCATION 's3a://table_migration/test2';
insert into table test2 select * from original_table;
Then I repeated the same dance of copying from S3 to GS and importing into BQ, this time without any issue. The columns are now recognized in BQ as STRING and the data is as it should be.
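For reference, the copy-and-import step described above can also be done from the command line rather than the console GUI. A rough sketch, assuming the gsutil and bq tools are set up; the bucket, dataset, and credential details are placeholders:
# copy the Parquet files from the external table location to Google Cloud Storage
# (assumes S3 credentials are configured for gsutil)
gsutil cp -r s3://table_migration/test2 gs://my-bucket/test2
# load them into BigQuery, letting the schema be read from the Parquet files
bq load --source_format=PARQUET my_dataset.test2 'gs://my-bucket/test2/*'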

Trying to copy data from Impala Parquet table to a non-parquet table

I am moving data around within Impala (not my design) and I have lost some data. I need to copy the data from the Parquet tables back to their original non-Parquet tables. Originally, the developers had done this with a simple one-liner in a script. Since I don't know anything about databases, and especially about Impala, I was hoping you could help me out. This is the one-liner used to convert to a Parquet table, which I need to reverse.
impala-shell -i <ipaddr>
USE db;
INVALIDATE METADATA <text_table>;
CREATE TABLE <parquet_table> LIKE <text_table> STORED AS PARQUET;
INSERT OVERWRITE <parquet_table> SELECT * FROM <text_table>;
Thanks.
Have you tried simply doing
CREATE TABLE <text_table>
AS
SELECT *
FROM <parquet_table>
Per the Cloudera documentation, this should be possible.
NOTE: Ensure that your <text_table> does not exist, or use a table name that does not already exist, so that you do not accidentally overwrite other data.
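If the original table needs to come back as a plain delimited text table rather than whatever the CREATE TABLE ... AS SELECT default happens to be, the statement can spell that out explicitly. A sketch, where the delimiter is an assumption and should be matched to the original table's definition:
CREATE TABLE <text_table>
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
AS SELECT * FROM <parquet_table>;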

Hive: create table and write it locally at the same time

Is it possible in hive to create a table and have it saved locally at the same time?
When I get data for my analyses, I usually create temporary tables to track possible
mistakes in the queries/scripts. Some of these are just temporary tables, while others contain the data that I actually need for my analyses.
What I do usually is using hive -e "select * from db.table" > filename.tsv to get the data locally; however when the tables are big this can take quite some time.
I was wondering if there is some way in my script to create the table and save it locally at the same time. Probably this is not possible, but I thought it is worth asking.
Honestly, doing it the way you are is the better of the two options, but it is worth noting that you can perform a similar task in an .hql file for automation.
Using syntax like this:
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp' select * from table;
You can run a query and store the result somewhere on the local file system (as long as there is enough space and you have the correct privileges).
A disadvantage of this is that with the hive -e redirect you get the data stored nicely, tab-delimited and newline-separated, whereas this method will store the values with Hive's default field delimiter, Ctrl-A ('\001').
A work around is to do something like this:
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
select books from table;
But this is only available in Hive 0.11 or higher.
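Putting the two steps together, the table creation and the local export can live in the same .hql file and be run in one shot with hive -f. A sketch, with made-up table and path names:
-- analysis.hql (hypothetical file)
CREATE TABLE db.my_results AS
SELECT * FROM db.table;
-- export the same data locally in the same run (Hive 0.11+ for the ROW FORMAT clause)
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp/my_results'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
SELECT * FROM db.my_results;
Running hive -f analysis.hql then creates the table and writes the local copy in one go.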

hive RegexSerDe null

How should I work with NULL values in RegexSerDe?
I have file with data:
cat MOS/ex1.txt
123,dwdjwhdjwh,456
543,\N,956
I have the table:
CREATE TABLE mos.stations (usaf string, wban STRING, name string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "(.*),(.*),(.*)"
);
I successfully loaded the data from file to table:
LOAD DATA LOCAL INPATH '/home/hduser/MOS/ex1.txt' OVERWRITE INTO TABLE mos.stations;
Simple select works fine:
hive> select * from mos.stations;
123    dwdjwhdjwh    456
543    \N            956
And next ends with error:
select * from mos.stations where wban is null;
[Hive Error]: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
What is wrong?
I see a couple of possible issues:
1) It may not have anything to do with null handling at all. The first query doesn't actually spawn an M/R job while the second one does, so it might be a simple classpath issue where RegexSerde is not being seen by the M/R tasks because its jar is not in the classpath of the tasktracker. You'll need to find where the hive-contrib jar lives on your system and then make hive aware of it via something like:
add jar /usr/lib/hive/lib/hive-contrib-0.7.1-cdh3u2.jar
Note, your path and jar name may be different. You can run the above through hive right before your query.
2) Another issue might be that the RegexSerde doesn't really deal with "\N" the same way as the default LazySimpleSerde. Judging by the output you are getting from the first query (where it returns a literal "\N"), that could be the case. What happens if you query where wban='\\N', or where wban='\N'? (I forget if you need to double escape.)
Finally, one word of caution about RegexSerde. While it's really handy, it's slow as molasses going uphill in January compared to the default serde. If the dataset is large and you plan to run a lot of queries against it, it's best to pre-process so that you don't need the RegexSerde. Otherwise, you're going to pay a penalty for every query. The same dataset above looks like it would be fine with the default serde.
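For this particular dataset, the pre-processing could be as simple as a plain comma-delimited table using the default serde, which also interprets \N as NULL out of the box. A sketch, with a made-up table name:
CREATE TABLE mos.stations_csv (usaf STRING, wban STRING, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA LOCAL INPATH '/home/hduser/MOS/ex1.txt' OVERWRITE INTO TABLE mos.stations_csv;
-- \N is now read as a real NULL by the default serde
SELECT * FROM mos.stations_csv WHERE wban IS NULL;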

Insert large amount of data efficiently with SQL

Hi, I often have to insert a lot of data into a table. For example, I would have data from Excel or a text file in the form of
1,a
3,bsdf
4,sdkfj
5,something
129,else
Then I often construct 6 insert statements in this example and run the SQL script. I found this to be slow when I have to send thousands of small packets to the server, and it also causes extra overhead on the network.
What's your best way of doing this?
Update: I'm using ORACLE 10g.
Use Oracle external tables.
See also e.g.
OraFaq about external tables
What Tom thinks about external tables
René Nyffenegger's notes about external tables
A simple example that should get you started:
You need a file located in a server directory (get familiar with directory objects):
SQL> select directory_path from all_directories where directory_name = 'JTEST';
DIRECTORY_PATH
--------------------------------------------------------------------------------
c:\data\jtest
SQL> !cat ~/.gvfs/jtest\ on\ 192.168.xxx.xxx/exttable-1.csv
1,a
3,bsdf
4,sdkfj
5,something
129,else
Create an external table:
create table so13t (
id number(4),
data varchar2(20)
)
organization external (
type oracle_loader
default directory jtest /* jtest is an existing directory object */
access parameters (
records delimited by newline
fields terminated by ','
missing field values are null
)
location ('exttable-1.csv') /* the file located in jtest directory */
)
reject limit unlimited;
Now you can use all the powers of SQL to access the data:
SQL> select * from so13t order by data;
ID DATA
---------- ------------------------------------------------------------
1 a
3 bsdf
129 else
4 sdkfj
5 something
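From here, loading the rows into a regular table is just an INSERT ... SELECT. A sketch, where the target table name is made up and the APPEND hint is optional (it requests a direct-path load):
INSERT /*+ APPEND */ INTO my_target_table (id, data)
SELECT id, data FROM so13t;
COMMIT;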
I'm not sure if this works in Oracle, but in SQL Server you can use the BULK INSERT statement to upload data from a txt or csv file.
BULK
INSERT [TableName]
FROM 'c:\FileName.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Just make sure that the table columns correctly match what's in the txt file. For a more complicated solution you may want to use a format file; see the following:
http://msdn.microsoft.com/en-us/library/ms178129.aspx
There are a lot of ways to speed this up.
1) Do it in a single transaction. This will speed things up by avoiding connection opening / closing.
2) Load directly from a CSV file. If you load the data as a CSV file, the individual "SQL" statements aren't required at all. In MySQL, the "LOAD DATA INFILE" operation accomplishes this very intuitively and simply (see the sketch after this list).
3) You can also simply dump the whole file as text into a table called "raw" and then let the database parse the data on its own using triggers. This is a hack, but it will simplify your application code and reduce network usage.
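For the MySQL route mentioned in point 2, the statement looks roughly like this. A sketch, with a placeholder file path and table name (the server must allow LOCAL infile, or the file must be readable by the server for the non-LOCAL form):
LOAD DATA LOCAL INFILE '/tmp/data.csv'
INTO TABLE raw_import
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';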