Spark select query fails on a large dataset in a Hive table

The code below reads data from a Hive table using Spark.
The table has 100 million records. When I select that many records into my DataFrame and call result.show(), it throws a "serious problem" exception.
What I ultimately want is to insert records into another table by selecting just a few columns from this table, for the full 100-million-record set.
Here is my code:
import org.apache.spark.sql.functions._
import org.apache.spark.sql._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val result=sqlContext.sql("Select * from ******reception.recp_customer")
result: org.apache.spark.sql.DataFrame = [data_source_id: smallint, customer_bkey: string ... 129 more fields]
result.show()
java.lang.RuntimeException: serious problem
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1064)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1091)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:311)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2773)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2803)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
at org.apache.spark.sql.Dataset.show(Dataset.scala:636)
at org.apache.spark.sql.Dataset.show(Dataset.scala:595)
at org.apache.spark.sql.Dataset.show(Dataset.scala:604)
... 52 elided
Caused by: java.util.concurrent.ExecutionException: java.lang.NumberFormatException: For input string: "0000312_0000"
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1041)
... 94 more
Caused by: java.lang.NumberFormatException: For input string: "0000312_0000"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.hive.ql.io.AcidUtils.parseDelta(AcidUtils.java:323)
at org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:394)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.callInternal(OrcInputFormat.java:658)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:648)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:645)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:421)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:645)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:626)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:748)
I am clueless about what is causing this. I understand the data set is huge; how should I process it?

It looks like your Hive table is an ACID table. ACID tables can currently only be queried with Hive; you cannot query them with Spark, as this feature is not yet supported in Spark.
You can follow the JIRA ticket below for reference:
https://issues.apache.org/jira/browse/SPARK-15348
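A quick way to confirm this (a minimal sketch; the database prefix is masked in the question, so adjust the table name) is to inspect the table properties and look for transactional=true:
// Sketch: check whether the table is transactional (ACID).
// SHOW TBLPROPERTIES works from the Hive CLI, and from Spark SQL on recent versions;
// if your Spark build rejects the statement, run it in Hive instead.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.sql("SHOW TBLPROPERTIES reception.recp_customer").show(100, false)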

Run a major compaction on your Hive table and then try to read the table from the spark-shell or from your code; it should work.
Run the following after connecting to Hive:
ALTER TABLE .recp_customer COMPACT 'major';
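Once the compaction finishes, doing the column-subset insert from the spark-shell might look roughly like this (a sketch only: the target table recp_customer_subset is hypothetical, and the two selected columns come from the schema printed in the question):
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
// Select just the needed columns and write them into another table.
hiveContext.sql(
  """INSERT INTO TABLE reception.recp_customer_subset
    |SELECT data_source_id, customer_bkey
    |FROM reception.recp_customer""".stripMargin)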

The line:
Caused by: java.lang.NumberFormatException: For input string: "0000312_0000"
shows that a String value is being parsed as if it were numeric. Check it out.

Related

How do I filter with a WHERE clause on JSON stored in a Presto SQL column?

In Presto, I have a table where one column contains data like the following:
header                                                                         | header 2
{Data: [{'item1': 'stuff1', 'item2': 'stuff2', 'item3': 'stuff3'}, {...}]}     | cell 2
{Data: [{'item1': 'stuff11', 'item2': 'stuff21', 'item3': 'stuff31'}, {...}]}  | cell 4
I was able to SELECT using JSON syntax:
SELECT header.Data[1].item1 FROM table
and returns:
header
stuff1
stuff11
However, if I want to filter the table using a WHERE clause:
SELECT * FROM table WHERE header.Data[1].item1 = 'stuff1'
The above statement threw an error and didn't work.
I would like to return something like:
header                                                                         | header 2
{Data: [{'item1': 'stuff1', 'item2': 'stuff2', 'item3': 'stuff3'}, {...}]}     | cell 2
Any input would be helpful. Thanks
I've tried several other queries as well, such as the one below, but all returned a similar error:
WHERE header.Data[1].item1 = 'stuff1'
An example of the error:
Query:
SELECT header.Data[1].item1 AS f FROM table WHERE f LIKE '%stuff%'
An error occurred while calling o12.execute. : java.sql.SQLException: Query failed (#20220330_200148_01673_9bq5k): line 2:7: Column 'f' cannot be resolved at io.prestosql.jdbc.AbstractPrestoResultSet.resultsException(AbstractPrestoResultSet.java:1761) at io.prestosql.jdbc.PrestoResultSet.getColumns(PrestoResultSet.java:252) at io.prestosql.jdbc.PrestoResultSet.create(PrestoResultSet.java:54) at io.prestosql.jdbc.PrestoStatement.internalExecute(PrestoStatement.java:249) at io.prestosql.jdbc.PrestoStatement.execute(PrestoStatement.java:227) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) at py4j.Gateway.invoke(Gateway.java:259) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:209) at java.lang.Thread.run(Thread.java:750) Caused by: io.prestosql.spi.PrestoException: line 2:7: Column 'f' cannot be resolved at io.prestosql.sql.analyzer.SemanticExceptions.semanticException(SemanticExceptions.java:48) at io.prestosql.sql.analyzer.SemanticExceptions.semanticException(SemanticExceptions.java:43) at io.prestosql.sql.analyzer.SemanticExceptions.missingAttributeException(SemanticExceptions.java:33) at io.prestosql.sql.analyzer.Scope.lambda$resolveField$7(Scope.java:228) at java.base/java.util.Optional.orElseThrow(Optional.java:408) at io.prestosql.sql.analyzer.Scope.resolveField(Scope.java:228) at io.prestosql.sql.analyzer.ExpressionAnalyzer$Visitor.visitIdentifier(ExpressionAnalyzer.java:438) at io.prestosql.sql.analyzer.ExpressionAnalyzer$Visitor.visitIdentifier(ExpressionAnalyzer.java:342) at io.prestosql.sql.tree.Identifier.accept(Identifier.java:72) at io.prestosql.sql.tree.StackableAstVisitor.process(StackableAstVisitor.java:27) at io.prestosql.sql.analyzer.ExpressionAnalyzer$Visitor.process(ExpressionAnalyzer.java:365) at io.prestosql.sql.analyzer.ExpressionAnalyzer$Visitor.visitLikePredicate(ExpressionAnalyzer.java:702) at io.prestosql.sql.analyzer.ExpressionAnalyzer$Visitor.visitLikePredicate(ExpressionAnalyzer.java:342) at io.prestosql.sql.tree.LikePredicate.accept(LikePredicate.java:76) at io.prestosql.sql.tree.StackableAstVisitor.process(StackableAstVisitor.java:27) at io.prestosql.sql.analyzer.ExpressionAnalyzer$Visitor.process(ExpressionAnalyzer.java:365) at io.prestosql.sql.analyzer.ExpressionAnalyzer.analyze(ExpressionAnalyzer.java:303) at io.prestosql.sql.analyzer.ExpressionAnalyzer.analyzeExpression(ExpressionAnalyzer.java:1691) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.analyzeExpression(StatementAnalyzer.java:2606) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.analyzeWhere(StatementAnalyzer.java:2465) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.lambda$visitQuerySpecification$23(StatementAnalyzer.java:1528) at java.base/java.util.Optional.ifPresent(Optional.java:183) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:1528) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:322) at io.prestosql.sql.tree.QuerySpecification.accept(QuerySpecification.java:144) 
at io.prestosql.sql.tree.AstVisitor.process(AstVisitor.java:27) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:339) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:349) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:1039) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:322) at io.prestosql.sql.tree.Query.accept(Query.java:107) at io.prestosql.sql.tree.AstVisitor.process(AstVisitor.java:27) at io.prestosql.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:339) at io.prestosql.sql.analyzer.StatementAnalyzer.analyze(StatementAnalyzer.java:308) at io.prestosql.sql.analyzer.Analyzer.analyze(Analyzer.java:83) at io.prestosql.sql.analyzer.Analyzer.analyze(Analyzer.java:75) at io.prestosql.execution.SqlQueryExecution.analyze(SqlQueryExecution.java:256) at io.prestosql.execution.SqlQueryExecution.(SqlQueryExecution.java:182) at io.prestosql.execution.SqlQueryExecution$SqlQueryExecutionFactory.createQueryExecution(SqlQueryExecution.java:757) at io.prestosql.dispatcher.LocalDispatchQueryFactory.lambda$createDispatchQuery$0(LocalDispatchQueryFactory.java:123) at io.prestosql.$gen.Presto_343____20220330_135137_2.call(Unknown Source) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829)
The alias f introduced by SELECT header.Data[1].item1 AS f is not available in the WHERE clause, so you need to use the whole expression:
where header.Data[1].item1 LIKE '%stuff%'

apache-spark-sql: Error does not report which column caused the error

When I use Spark SQL to query the data in the DataFrame, my query returns an error, and from the error I cannot figure out which column is the problem.
My table is gigantic, with 120 columns and 176M rows.
Here is my query:
%sql
select order_entry_date, count(1) cnt, sum(paid_units) paid_unit, sum(total_revenue) rev
from mart_bc_order_item
group by 1
order by 1
The error is below:
java.lang.NumberFormatException: For input string: "�"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:272)
at scala.collection.immutable.StringOps.toInt(StringOps.scala:29)
at org.apache.spark.sql.execution.datasources.csv.CSVTypeCast$.castTo(CSVInferSchema.scala:252)
at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:125)
at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:94)
at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$2.apply(CSVFileFormat.scala:167)
at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$2.apply(CSVFileFormat.scala:166)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Can someone help here?
Thanks,
Vivek
The NumberFormatException you are getting means a String value could not be parsed into the expected numeric type (the stack trace shows it happening while CSV values are cast to the inferred schema); check your code and data again.
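One way to track down the offending column (a sketch, assuming the table is backed by CSV files as the CSVTypeCast frames in the stack trace suggest; the file path is hypothetical, and the spark SparkSession of a Spark 2.x shell or notebook is assumed) is to re-read the data with every column left as a string and look for values that do not parse as numbers:
import org.apache.spark.sql.functions._

// Read the raw files without schema inference so nothing is cast yet.
val raw = spark.read
  .option("header", "true")
  .csv("/path/to/mart_bc_order_item")   // hypothetical location of the table's files

// Flag rows where a supposedly numeric column (e.g. paid_units) does not parse as an integer.
raw.filter(not(col("paid_units").rlike("^-?[0-9]+$")))
   .select("order_entry_date", "paid_units")
   .show(20, false)
Repeating this for each numeric column should reveal which one contains the bad value.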

SparkSQL fails to read a specific column from an ORC table in Hive

I am using SparkSQL 2.1.1 to read from an ORC table in Hive 1.2.1, stored in Google Cloud Storage. I can successfully select most of the columns, except for one (called col1 here) of type smallint. If I try to select that specific column with this code:
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
val result = hc.sql("SELECT col1 FROM table")
result.collect().foreach(println)
It fails with this exception:
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 24.0 failed 4 times, most recent failure: Lost task
0.3 in stage 24.0 (TID 378, , executor 42): java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot
be cast to org.apache.hadoop.hive.serde2.io.ShortWritable
at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableShortObjectInspector.get(WritableShortObjectInspector.java:36)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$4.apply(TableReader.scala:390)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$4.apply(TableReader.scala:390)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:435)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:426)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:232)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
I have already tried casting that column to type short, without success:
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
val result = hc.sql("SELECT cast(col1 as short) FROM table")
result.collect().foreach(println)

Unable to increase hive dynamic partitions in spark using spark-sql

I am running a Hive query that selects data from a table and inserts the result into another partitioned Hive table using spark-sql. The insert requires 1536 dynamic partitions, but Spark is not able to insert the data even though I increased the maximum partitions to 2000.
Below is the command:
spark-sql --master yarn --num-executors 14 --executor-memory 45G
--executor-cores 30 --driver-memory 10G --conf spark.dynamicAllocation.enabled=false -e "SET
hive.exec.dynamic.partition = true;SET
hive.exec.dynamic.partition.mode = nonstrict;SET
hive.exec.max.dynamic.partitions = 2000; insert into table
weatherdata_part_rv.weather_data_daily_model_location_mapping_rv
partition (model_id,record_date) select
y.rec_id,x.municipal_id,x.model_id,y.record_date from (select * from
weatherdata_part_rv.model_location_xref) x left outer join
weatherdata_part_rv.weather_data_daily y on
x.municipal_id=y.weather_station_id;"
Error stack:
spark-sql --master yarn --num-executors 14 --executor-memory 45G --executor-cores 30 --driver-memory 10G --conf spark.dynamicAllocation.enabled=false -e "SET hive.exec.dynamic.partition = true;SET hive.exec.dynamic.partition.mode = nonstrict;SET hive.exec.max.dynamic.partitions = 2000;
> insert into table weatherdata_part_rv.weather_data_daily_model_location_mapping_rv partition (model_id,record_date) select y.rec_id,x.municipal_id,y.temprature_min_in_celcius,y.temprature_max_in_celcius,y.rainfall_in_mm,y.relative_humidity_min,y.relative_humidity_max,y.radiation_max,y.wind_intensity,y.wind_direction,y.cloud_coverage,y.soil_temprature_in_celcius,y.water_quantity_in_soil,y.lmdt,y.icon,y.probablity_of_rainfall,y.rain_acc_20feb_onwards,x.model_id,y.record_date from (select * from weatherdata_part_rv.model_location_xref) x left outer join weatherdata_part_rv.weather_data_daily y on x.municipal_id=y.weather_station_id;"
17/05/12 09:44:05 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/05/12 09:44:05 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
17/05/12 09:44:08 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
hive.exec.dynamic.partition true
Time taken: 1.874 seconds, Fetched 1 row(s)
hive.exec.dynamic.partition.mode nonstrict
Time taken: 0.67 seconds, Fetched 1 row(s)
hive.exec.max.dynamic.partitions 2000
Time taken: 0.047 seconds, Fetched 1 row(s)
17/05/12 09:58:30 ERROR SparkSQLDriver: Failed in [
insert into table weatherdata_part_rv.weather_data_daily_model_location_mapping_rv partition (model_id,record_date) select y.rec_id,x.municipal_id,y.temprature_min_in_celcius,y.temprature_max_in_celcius,y.rainfall_in_mm,y.relative_humidity_min,y.relative_humidity_max,y.radiation_max,y.wind_intensity,y.wind_direction,y.cloud_coverage,y.soil_temprature_in_celcius,y.water_quantity_in_soil,y.lmdt,y.icon,y.probablity_of_rainfall,y.rain_acc_20feb_onwards,x.model_id,y.record_date from (select * from weatherdata_part_rv.model_location_xref) x left outer join weatherdata_part_rv.weather_data_daily y on x.municipal_id=y.weather_station_id]
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.sql.hive.client.Shim_v1_2.loadDynamicPartitions(HiveShim.scala:823)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(HiveClientImpl.scala:689)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply(HiveClientImpl.scala:687)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply(HiveClientImpl.scala:687)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:283)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:230)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:229)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
at org.apache.spark.sql.hive.client.HiveClientImpl.loadDynamicPartitions(HiveClientImpl.scala:687)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(HiveExternalCatalog.scala:796)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply(HiveExternalCatalog.scala:784)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply(HiveExternalCatalog.scala:784)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:95)
at org.apache.spark.sql.hive.HiveExternalCatalog.loadDynamicPartitions(HiveExternalCatalog.scala:784)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:268)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:170)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:347)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:335)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:168)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Number of dynamic partitions created is 1536, which is more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to at least 1536.
at org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(Hive.java:1578)
... 48 more
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.sql.hive.client.Shim_v1_2.loadDynamicPartitions(HiveShim.scala:823)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(HiveClientImpl.scala:689)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply(HiveClientImpl.scala:687)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply(HiveClientImpl.scala:687)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:283)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:230)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:229)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
at org.apache.spark.sql.hive.client.HiveClientImpl.loadDynamicPartitions(HiveClientImpl.scala:687)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(HiveExternalCatalog.scala:796)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply(HiveExternalCatalog.scala:784)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply(HiveExternalCatalog.scala:784)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:95)
at org.apache.spark.sql.hive.HiveExternalCatalog.loadDynamicPartitions(HiveExternalCatalog.scala:784)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:268)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:170)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:347)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:335)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:168)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Number of dynamic partitions created is 1536, which is more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to at least 1536.
at org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(Hive.java:1578)
... 48 more
Is there any limit on the maximum number of Hive partitions in Spark?
If so, is there any way to increase it?
Can you add the property below to hive-site.xml, in both spark_home/conf/hive-site.xml and hive_home/conf/hive-site.xml:
hive.exec.max.dynamic.partitions=2000
<property>
  <name>hive.exec.max.dynamic.partitions</name>
  <value>2000</value>
  <description></description>
</property>
Hopefully this resolves the issue.
If the value is not being picked up, try restarting the HiveServer2 (hs2) process.
I resolved this error by keeping the partition columns at the end of the DataFrame.
Check your column order in the DataFrame and put the partition columns last when selecting in spark.sql, as in the sketch below.
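Applied to the insert in the question, that suggestion would look roughly like this (a rough sketch run from a Spark shell or application, so a SparkSession named spark is assumed; most non-partition columns are omitted for brevity):
spark.sql(
  """INSERT INTO TABLE weatherdata_part_rv.weather_data_daily_model_location_mapping_rv
    |PARTITION (model_id, record_date)
    |SELECT y.rec_id, x.municipal_id,   -- data columns first (remaining ones omitted here)
    |       x.model_id, y.record_date   -- partition columns last
    |FROM weatherdata_part_rv.model_location_xref x
    |LEFT OUTER JOIN weatherdata_part_rv.weather_data_daily y
    |  ON x.municipal_id = y.weather_station_id""".stripMargin)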

Hive throws ArrayIndexOutOfBoundsException when select count(1) on ORC table

I have a simple table with 9 fields, using ORCFile format (I followed the steps mentioned here). When I try to count the number of rows in that table (350 million rows, btw) by submitting:
select count(1) from my_orc_table;
I get an 'ArrayIndexOutOfBoundsException'. Let me copy the stack, just in case it provides more information:
Error: java.io.IOException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Thanks!!