Hive: the unit of the input size

When I executed a Hive query, I got
Estimated from input data size: 1000
But I do not know what the unit is.
Is it B, KB or GB?

You have to look at the property
hive.exec.reducers.bytes.per.reducer in your Hive configuration. As the name suggests, the unit is bytes.
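For context, here is a minimal sketch (with assumed numbers and an assumed 1 GB default; newer Hive releases lower the default to 256 MB) of how this byte threshold feeds into Hive's reducer estimate:
import math

# Hive estimates the reducer count roughly as:
#   ceil(total input size in bytes / hive.exec.reducers.bytes.per.reducer)
bytes_per_reducer = 1_000_000_000      # assumed default (1 GB); newer releases use 256 MB
estimated_input_bytes = 10 * 1024**3   # placeholder: ~10 GB of input data

reducers = math.ceil(estimated_input_bytes / bytes_per_reducer)
print(reducers)  # -> 11 reducers for this input size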

Related

Why is Spark reading more data than I expect it to read when using a read schema?

In my Spark job, I'm reading a huge Parquet table with more than 30 columns. To limit the amount of data read, I specify a schema with only one column (I need only that one). Unfortunately, the Spark UI reports that the size of files read equals 1123.8 GiB, while the filesystem read data size total equals 417.0 GiB. I was expecting that if I take one column out of 30, the filesystem read data size total would be around 1/30 of the initial size, not almost half.
Could you explain why that is happening?
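For reference, this is roughly how such a single-column read would look in PySpark (a sketch only; the table path, column name, and type below are placeholders, not taken from the question):
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.getOrCreate()

# Read only one column of a wide Parquet table by passing an explicit one-column schema.
one_column_schema = StructType([StructField("my_column", StringType(), True)])
df = spark.read.schema(one_column_schema).parquet("s3://bucket/path/to/table")  # placeholder path
df.count()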

Google BigQuery fails with "Resources exceeded during query execution: UDF out of memory" when loading Parquet file

We use the BigQuery Java API to upload data from a local data source, as described here. When uploading a Parquet file with 18 columns (16 string, 1 float64, 1 timestamp) and 13 million rows (about 17 GB of data), the upload fails with the following exception:
Resources exceeded during query execution: UDF out of memory.; Failed
to read Parquet file . This might happen if the file contains a row
that is too large, or if the total size of the pages loaded for the
queried columns is too large.
However, when uploading the same data as CSV (17.5 GB of data), the upload succeeds. My questions are:
What is the difference when uploading Parquet or CSV?
What query is executed during upload?
Is it possible to increase the memory for this query?
Thanks
Tobias
Parquet is a columnar data format, which means that loading the data requires reading all columns. In Parquet, columns are divided into pages. BigQuery keeps entire uncompressed pages for each column in memory while reading data from them. If the input file contains too many columns, BigQuery workers can hit out-of-memory errors.
Even though a precise limit is not enforced as it is with other formats, it is recommended that records stay in the range of 50 MB; loading larger records may lead to resourcesExceeded errors.
Taking into account the above considerations, it would be great to clarify the following points:
What is the maximum size of rows in your Parquet file?
What is the maximum page size per column?
This info can be retrieved with a publicly available tool.
If you are thinking about increasing the allocated memory for queries, you need to read about BigQuery slots.
In my case, I ran bq load --autodetect --source_format=PARQUET ... which failed with the same error (resources exceeded during query execution). Finally, I had to split the data into multiple Parquet files so that they would be loaded in batches.
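One possible way to do that split (a sketch using pyarrow, with placeholder file names and batch size; this was not part of the original workaround):
import pyarrow as pa
import pyarrow.parquet as pq

# Split one large Parquet file into smaller chunks so each batch stays within worker memory.
reader = pq.ParquetFile("big_input.parquet")           # placeholder input file
for i, batch in enumerate(reader.iter_batches(batch_size=1_000_000)):
    pq.write_table(pa.Table.from_batches([batch]), f"chunk_{i:04d}.parquet")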

How to set the number of partitions/nodes when importing data into Spark

Problem: I want to import data into Spark EMR from S3 using:
data = sqlContext.read.json("s3n://.....")
Is there a way I can set the number of nodes that Spark uses to load and process the data? This is an example of how I process the data:
data.registerTempTable("table")
SqlData = sqlContext.sql("SELECT * FROM table")
Context: The data is not too big, but it takes a long time to load into Spark and also to query. I think Spark partitions the data across too many nodes. I want to be able to set that manually. I know that when dealing with RDDs and sc.parallelize I can pass the number of partitions as an input. Also, I have seen repartition(), but I am not sure whether it can solve my problem. The variable data is a DataFrame in my example.
Let me define "partition" more precisely. Definition one: commonly referred to as a "partition key", where a column is selected and indexed to speed up queries (that is not what I want). Definition two (this is where my concern is): suppose you have a data set and Spark decides to distribute it across many nodes so it can run operations on the data in parallel. If the data size is too small, this may actually slow down the process. How can I set that value?
By default it shuffles into 200 partitions. You can change this with a SET command on the SQL context: sqlContext.sql("set spark.sql.shuffle.partitions=10"). However, you need to set it with caution based on your data characteristics.
You can call repartition() on the DataFrame to set the number of partitions. You can also set the spark.sql.shuffle.partitions property after creating the hive context, or by passing it to the spark-submit jar:
spark-submit .... --conf spark.sql.shuffle.partitions=100
or
dataframe.repartition(100)
Number of "input" partitions are fixed by the File System configuration.
1 file of 1Go, with a block size of 128M will give you 10 tasks. I am not sure you can change it.
repartition() can be very costly: if you have a lot of input partitions, it will cause a lot of shuffle (data traffic) between partitions.
There is no magic method; you have to experiment and use the web UI to see how many tasks are generated.
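A short sketch of how you might inspect and adjust the partition count from the question's code (the S3 path is a placeholder; coalesce() is an extra option not mentioned in the answers above, useful for reducing partitions without a full shuffle):
data = sqlContext.read.json("s3n://bucket/path/")   # placeholder path

print(data.rdd.getNumPartitions())   # how many "input" partitions Spark actually created

fewer = data.repartition(10)         # full shuffle into exactly 10 partitions
fewest = fewer.coalesce(4)           # merges partitions without a full shuffle
print(fewest.rdd.getNumPartitions())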

What table size is small enough for MAPJOIN?

How do I decide whether a table is small enough for the MAPJOIN optimization?
My guess is that I should look at
du /misc/hdfs/user/hive/warehouse/my_table
and use MAPJOIN if that is below 50% (? 5%?) of RAM.
I am using Hive 0.10.
From hive-site.xml:
hive.mapjoin.smalltable.filesize
Default Value: 25000000
The threshold for the input file size of the small tables; if the file size is smaller than this threshold, it will try to convert the common join into a map join.
This is from the current release wiki, but I think this setting goes back to 0.10.
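To make the unit concrete (a sketch with an assumed table size; the threshold is in bytes, so roughly 25 MB rather than a percentage of RAM):
# Hive will try to convert a common join to a map join when the small table's
# file size is below hive.mapjoin.smalltable.filesize (in bytes).
smalltable_filesize = 25_000_000      # default threshold, i.e. ~25 MB
my_table_bytes = 18_400_000           # placeholder: e.g. what du reports for the table

print(my_table_bytes < smalltable_filesize)   # True -> eligible for the map-join conversion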

google-bigquery: Size of the data a query is going to process

When you enter a query in the BigQuery text box, it immediately provides the size of the data the query is going to process, e.g. "This query will process 839 GB when run."
Question 1: How does BigQuery know so quickly how much data it is going to process?
Question 2: How accurate is this figure?
Question 3: I want to get this figure through a BigQuery tool and use it in my project. Is there a way to get this figure through the API?
BigQuery looks at all the columns mentioned in your query and adds up their sizes. That's the total data to be processed: it only counts the columns mentioned, and their full size.
100% accurate, as long as the column size doesn't change in the meantime.
The API parameter to access this figure is dryRun. It doesn't use quota, so feel free to query. https://developers.google.com/bigquery/docs/reference/v2/jobs/query
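As an illustration, the same estimate can be obtained with the google-cloud-bigquery Python client instead of the raw v2 REST API above (a sketch; the project, dataset, table, and column names are placeholders):
from google.cloud import bigquery

client = bigquery.Client()
config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

# A dry-run job never executes and consumes no quota; it only returns the estimate.
job = client.query(
    "SELECT field_a, field_b FROM `my_project.my_dataset.my_table`",
    job_config=config,
)
print(f"This query will process {job.total_bytes_processed} bytes.")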