How do I create a Hive Metastore table over Snappy-compressed Parquet files on S3? - hive

I have a Hive connector set up in Trino. The files are in S3 and I can start creating tables to query the data, but I am getting an error, and I suspect it could be the compression method. Does my table creation step look okay? Here is the query and the error I get:
SELECT * from ibtickers;
Query 20220925_104301_00008_h8me8, FAILED, 1 node
Splits: 3 total, 0 done (0.00%)
0.15 [0 rows, 0B] [0 rows/s, 0B/s]
Query 20220925_104301_00008_h8me8 failed: Failed to read Parquet file: s3a://MYBUCKET/data_dump/ib_tickers/20220918_192603-092364.snappy.parquet
Here are the statements I used to create the table:
CREATE TABLE IF NOT EXISTS hive.<MY_SCHEMA>.<MY_TABLE> (
    column_one VARCHAR,
    column_two VARCHAR,
    column_three VARCHAR,
    column_four DOUBLE,
    column_five VARCHAR,
    column_six VARCHAR,
    query_start_time TIMESTAMP)
WITH (
    external_location = 's3a://<MY_S3_BUCKET_NAME>/dir_one/dir_two',
    format = 'PARQUET'
);
The files open just fine locally when I view them in a viewer in PyCharm, so my guess is that my Trino or Hive setup needs fixing?
EDIT
I just tried the other compression types available in Meltano, and none of them seem to work. I've already confirmed that the S3 files are reachable and downloadable via boto3, so this leads me to think it has to do with how I create the tables. Any help here is appreciated.
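For what it's worth, the reachability check was roughly along these lines (a minimal sketch; it assumes AWS credentials are already configured, and the bucket/prefix match the paths shown in the errors below):
# Minimal sketch of the reachability check; assumes configured AWS credentials.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="MYBUCKET", Prefix="data_dump/ib_tickers/")
for obj in resp.get("Contents", []):
    # head_object fails if the object is not readable with these credentials
    head = s3.head_object(Bucket="MYBUCKET", Key=obj["Key"])
    print(obj["Key"], head["ContentLength"], "bytes")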
I found the fuller stack trace, and it looks like I just need to figure out how to add compression support to... the Hive Metastore?
Error when querying gzip-compressed Parquet:
io.trino.spi.TrinoException: Failed to read Parquet file: s3a://test-bucket-13549875235/data_dump/ib_tickers_gzip/20220925_173820-404835.gz.parquet
at io.trino.plugin.hive.parquet.ParquetPageSource.handleException(ParquetPageSource.java:169)
at io.trino.plugin.hive.parquet.ParquetPageSourceFactory.lambda$createPageSource$6(ParquetPageSourceFactory.java:271)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:75)
at io.trino.spi.block.LazyBlock$LazyData.load(LazyBlock.java:406)
at io.trino.spi.block.LazyBlock$LazyData.getFullyLoadedBlock(LazyBlock.java:385)
at io.trino.spi.block.LazyBlock.getLoadedBlock(LazyBlock.java:292)
at io.trino.spi.Page.getLoadedPage(Page.java:229)
at io.trino.operator.TableScanOperator.getOutput(TableScanOperator.java:314)
at io.trino.operator.Driver.processInternal(Driver.java:411)
at io.trino.operator.Driver.lambda$process$10(Driver.java:314)
at io.trino.operator.Driver.tryWithLock(Driver.java:706)
at io.trino.operator.Driver.process(Driver.java:306)
at io.trino.operator.Driver.processForDuration(Driver.java:277)
at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:736)
at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:164)
at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:515)
at io.trino.$gen.Trino_397____20220925_161652_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.UnsupportedOperationException: io.trino.spi.type.DoubleType
at io.trino.spi.type.AbstractType.writeLong(AbstractType.java:91)
at io.trino.parquet.reader.LongColumnReader.readValue(LongColumnReader.java:31)
at io.trino.parquet.reader.PrimitiveColumnReader.lambda$readValues$2(PrimitiveColumnReader.java:248)
at io.trino.parquet.reader.PrimitiveColumnReader.processValues(PrimitiveColumnReader.java:304)
at io.trino.parquet.reader.PrimitiveColumnReader.readValues(PrimitiveColumnReader.java:246)
at io.trino.parquet.reader.PrimitiveColumnReader.readPrimitive(PrimitiveColumnReader.java:235)
at io.trino.parquet.reader.ParquetReader.readPrimitive(ParquetReader.java:441)
at io.trino.parquet.reader.ParquetReader.readColumnChunk(ParquetReader.java:540)
at io.trino.parquet.reader.ParquetReader.readBlock(ParquetReader.java:523)
at io.trino.parquet.reader.ParquetReader.lambda$nextPage$3(ParquetReader.java:272)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:72)
... 17 more
Error when querying zstd-compressed Parquet:
io.trino.spi.TrinoException: Failed to read Parquet file: s3a://test-bucket-13549875235/data_dump/ib_tickers_zstd/20220925_175330-188177.zstd.parquet
at io.trino.plugin.hive.parquet.ParquetPageSource.handleException(ParquetPageSource.java:169)
at io.trino.plugin.hive.parquet.ParquetPageSourceFactory.lambda$createPageSource$6(ParquetPageSourceFactory.java:271)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:75)
at io.trino.spi.block.LazyBlock$LazyData.load(LazyBlock.java:406)
at io.trino.spi.block.LazyBlock$LazyData.getFullyLoadedBlock(LazyBlock.java:385)
at io.trino.spi.block.LazyBlock.getLoadedBlock(LazyBlock.java:292)
at io.trino.spi.Page.getLoadedPage(Page.java:229)
at io.trino.operator.TableScanOperator.getOutput(TableScanOperator.java:314)
at io.trino.operator.Driver.processInternal(Driver.java:411)
at io.trino.operator.Driver.lambda$process$10(Driver.java:314)
at io.trino.operator.Driver.tryWithLock(Driver.java:706)
at io.trino.operator.Driver.process(Driver.java:306)
at io.trino.operator.Driver.processForDuration(Driver.java:277)
at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:736)
at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:164)
at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:515)
at io.trino.$gen.Trino_397____20220925_161652_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.UnsupportedOperationException: io.trino.spi.type.DoubleType
at io.trino.spi.type.AbstractType.writeLong(AbstractType.java:91)
at io.trino.parquet.reader.LongColumnReader.readValue(LongColumnReader.java:31)
at io.trino.parquet.reader.PrimitiveColumnReader.lambda$readValues$2(PrimitiveColumnReader.java:248)
at io.trino.parquet.reader.PrimitiveColumnReader.processValues(PrimitiveColumnReader.java:304)
at io.trino.parquet.reader.PrimitiveColumnReader.readValues(PrimitiveColumnReader.java:246)
at io.trino.parquet.reader.PrimitiveColumnReader.readPrimitive(PrimitiveColumnReader.java:235)
at io.trino.parquet.reader.ParquetReader.readPrimitive(ParquetReader.java:441)
at io.trino.parquet.reader.ParquetReader.readColumnChunk(ParquetReader.java:540)
at io.trino.parquet.reader.ParquetReader.readBlock(ParquetReader.java:523)
at io.trino.parquet.reader.ParquetReader.lambda$nextPage$3(ParquetReader.java:272)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:72)
... 17 more
Error when querying Brotli-compressed Parquet:
io.trino.spi.TrinoException: Failed to read Parquet file: s3a://test-bucket-13549875235/data_dump/ib_tickers_br/20220925_181026-076690.br.parquet
at io.trino.plugin.hive.parquet.ParquetPageSource.handleException(ParquetPageSource.java:169)
at io.trino.plugin.hive.parquet.ParquetPageSourceFactory.lambda$createPageSource$6(ParquetPageSourceFactory.java:271)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:75)
at io.trino.spi.block.LazyBlock$LazyData.load(LazyBlock.java:406)
at io.trino.spi.block.LazyBlock$LazyData.getFullyLoadedBlock(LazyBlock.java:385)
at io.trino.spi.block.LazyBlock.getLoadedBlock(LazyBlock.java:292)
at io.trino.spi.Page.getLoadedPage(Page.java:223)
at io.trino.operator.TableScanOperator.getOutput(TableScanOperator.java:314)
at io.trino.operator.Driver.processInternal(Driver.java:411)
at io.trino.operator.Driver.lambda$process$10(Driver.java:314)
at io.trino.operator.Driver.tryWithLock(Driver.java:706)
at io.trino.operator.Driver.process(Driver.java:306)
at io.trino.operator.Driver.processForDuration(Driver.java:277)
at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:736)
at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:164)
at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:515)
at io.trino.$gen.Trino_397____20220925_161652_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.RuntimeException: Error reading dictionary page
at io.trino.parquet.reader.PageReader.readDictionaryPage(PageReader.java:118)
at io.trino.parquet.reader.PrimitiveColumnReader.setPageReader(PrimitiveColumnReader.java:188)
at io.trino.parquet.reader.ParquetReader.readPrimitive(ParquetReader.java:439)
at io.trino.parquet.reader.ParquetReader.readColumnChunk(ParquetReader.java:540)
at io.trino.parquet.reader.ParquetReader.readBlock(ParquetReader.java:523)
at io.trino.parquet.reader.ParquetReader.lambda$nextPage$3(ParquetReader.java:272)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:72)
... 17 more
Caused by: io.trino.parquet.ParquetCorruptionException: Codec not supported in Parquet: BROTLI
at io.trino.parquet.ParquetCompressionUtils.decompress(ParquetCompressionUtils.java:69)
at io.trino.parquet.reader.PageReader.readDictionaryPage(PageReader.java:113)
... 23 more
Error when querying Snappy-compressed Parquet:
io.trino.spi.TrinoException: Failed to read Parquet file: s3a://test-bucket-13549875235/data_dump/ib_tickers_snappy/20220918_194105-135895.snappy.parquet
at io.trino.plugin.hive.parquet.ParquetPageSource.handleException(ParquetPageSource.java:169)
at io.trino.plugin.hive.parquet.ParquetPageSourceFactory.lambda$createPageSource$6(ParquetPageSourceFactory.java:271)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:75)
at io.trino.spi.block.LazyBlock$LazyData.load(LazyBlock.java:406)
at io.trino.spi.block.LazyBlock$LazyData.getFullyLoadedBlock(LazyBlock.java:385)
at io.trino.spi.block.LazyBlock.getLoadedBlock(LazyBlock.java:292)
at io.trino.spi.Page.getLoadedPage(Page.java:229)
at io.trino.operator.TableScanOperator.getOutput(TableScanOperator.java:314)
at io.trino.operator.Driver.processInternal(Driver.java:411)
at io.trino.operator.Driver.lambda$process$10(Driver.java:314)
at io.trino.operator.Driver.tryWithLock(Driver.java:706)
at io.trino.operator.Driver.process(Driver.java:306)
at io.trino.operator.Driver.processForDuration(Driver.java:277)
at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:736)
at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:164)
at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:515)
at io.trino.$gen.Trino_397____20220925_161652_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.UnsupportedOperationException: io.trino.spi.type.DoubleType
at io.trino.spi.type.AbstractType.writeLong(AbstractType.java:91)
at io.trino.parquet.reader.LongColumnReader.readValue(LongColumnReader.java:31)
at io.trino.parquet.reader.PrimitiveColumnReader.lambda$readValues$2(PrimitiveColumnReader.java:248)
at io.trino.parquet.reader.PrimitiveColumnReader.processValues(PrimitiveColumnReader.java:304)
at io.trino.parquet.reader.PrimitiveColumnReader.readValues(PrimitiveColumnReader.java:246)
at io.trino.parquet.reader.PrimitiveColumnReader.readPrimitive(PrimitiveColumnReader.java:235)
at io.trino.parquet.reader.ParquetReader.readPrimitive(ParquetReader.java:441)
at io.trino.parquet.reader.ParquetReader.readColumnChunk(ParquetReader.java:540)
at io.trino.parquet.reader.ParquetReader.readBlock(ParquetReader.java:523)
at io.trino.parquet.reader.ParquetReader.lambda$nextPage$3(ParquetReader.java:272)
at io.trino.parquet.reader.ParquetBlockFactory$ParquetBlockLoader.load(ParquetBlockFactory.java:72)
... 17 more

I realized that a LONG (INT64) column in a Parquet file maps to BIGINT, not DOUBLE, when using Hive. Changing the type let me read the offending column:
CREATE TABLE IF NOT EXISTS hive.<MY_SCHEMA>.<MY_TABLE> (
    column_one VARCHAR,
    column_two VARCHAR,
    column_three VARCHAR,
    column_four BIGINT,  -- changed from DOUBLE
    column_five VARCHAR,
    column_six VARCHAR)
WITH (
    external_location = 's3a://<MY_S3_BUCKET_NAME>/dir_one/dir_two',
    format = 'PARQUET'
);
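To pick the right Hive/Trino types up front, you can inspect the Parquet schema directly, for example with pyarrow (not part of my original setup; a minimal sketch that assumes the file has been downloaded locally). An int64 physical type maps to BIGINT in Trino, not DOUBLE.
# Minimal sketch: print each Parquet column's Arrow type so the CREATE TABLE
# types can be matched (e.g. int64 -> BIGINT, double -> DOUBLE, string -> VARCHAR).
import pyarrow.parquet as pq

schema = pq.read_schema("20220918_192603-092364.snappy.parquet")
for field in schema:
    print(field.name, field.type)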

Related

Error when using ORDER BY in a Hive table built with Avro

I have created a twitter table to store Twitter data using AvroSerDe, with the following code:
CREATE TABLE twitter
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.url'='file:///home/siva/TwitterDataAvroSchema.avsc') ;
My table was created successfully. When I run SELECT * FROM twitter; it works normally, but when I try to execute this query with ORDER BY:
select user_screen_name as name , user_location as location ,user_followers_count as count
from twitter
order by count desc
limit 5;
it shows the following error:
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
But when I try to run another query with ORDER BY on another table created in the normal way, it works.
The full log file:
Diagnostic Messages for this Task:
Error: java.io.IOException: java.io.IOException: org.apache.avro.AvroRuntimeException: java.io.EOFException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:232)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:206)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:192)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:466)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:350)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:691)
at java.base/javax.security.auth.Subject.doAs(Subject.java:425)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
Caused by: java.io.IOException: org.apache.avro.AvroRuntimeException: java.io.EOFException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229)
... 11 more
Caused by: org.apache.avro.AvroRuntimeException: java.io.EOFException
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:222)
at org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader.next(AvroGenericRecordReader.java:164)
at org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader.next(AvroGenericRecordReader.java:54)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
... 15 more
Caused by: java.io.EOFException
at org.apache.avro.io.BinaryDecoder.ensureBounds(BinaryDecoder.java:473)
at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:128)
at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:259)
at org.apache.avro.io.ResolvingDecoder.readString(ResolvingDecoder.java:201)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:363)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:355)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:157)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:233)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:220)
... 18 more

Spark select query fails on a large dataset in a Hive table

My code below reads data from a Hive table using Spark.
The table has 100 million records in it. When I select this many records into my RDD and try to do a result.show(), it fails with a "serious problem" exception.
I basically want to insert records into another table by selecting just a few columns from this table, for the 100-million-record set.
Here is my code:
import org.apache.spark.sql.functions._
import org.apache.spark.sql._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val result=sqlContext.sql("Select * from ******reception.recp_customer")
result: org.apache.spark.sql.DataFrame = [data_source_id: smallint, customer_bkey: string ... 129 more fields]
result.show()
java.lang.RuntimeException: serious problem
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1064)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1091)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:311)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2773)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2803)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
at org.apache.spark.sql.Dataset.show(Dataset.scala:636)
at org.apache.spark.sql.Dataset.show(Dataset.scala:595)
at org.apache.spark.sql.Dataset.show(Dataset.scala:604)
... 52 elided
Caused by: java.util.concurrent.ExecutionException: java.lang.NumberFormatException: For input string: "0000312_0000"
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1041)
... 94 more
Caused by: java.lang.NumberFormatException: For input string: "0000312_0000"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.hive.ql.io.AcidUtils.parseDelta(AcidUtils.java:323)
at org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:394)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.callInternal(OrcInputFormat.java:658)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:648)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:645)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:421)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:645)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:626)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:748)
I am clueless about what is causing this. I understand the data set is huge; how should I process it?
It looks like your Hive table is an ACID table. ACID tables can only be queried with Hive; you cannot query them with Spark, as this feature is not yet supported in Spark.
You can follow the JIRA ticket below for reference:
https://issues.apache.org/jira/browse/SPARK-15348
Run a major compaction on your Hive table and then try to read the table from the Spark shell or from your code; it should work.
Run this after connecting to Hive:
ALTER TABLE .recp_customer COMPACT 'major';
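Once the major compaction has finished, a read along these lines should work again (a minimal sketch from PySpark; your_schema is a placeholder for your actual database name, and it assumes a Spark build with Hive support):
# Minimal sketch: re-read the compacted Hive table from Spark.
# your_schema is a placeholder; requires Hive support enabled in the session.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("read-compacted-table")
         .enableHiveSupport()
         .getOrCreate())

df = spark.sql("SELECT data_source_id, customer_bkey FROM your_schema.recp_customer")
df.show(5)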
The line:
Caused by: java.lang.NumberFormatException: For input string: "0000312_0000"
shows that a String value is being used as if it were numeric. Check that out.

SparkSQL fails to read a specific column from an ORC table in Hive

I am using SparkSQL 2.1.1 to read from an ORC table in Hive 1.2.1, stored in Google Cloud Storage. I can successfully select most of the columns, except for one (called col1 here) of type smallint. If I try to select that specific column with this code:
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
val result = hc.sql("SELECT col1 FROM table")
result.collect().foreach(println)
it fails with this exception:
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 24.0 failed 4 times, most recent failure: Lost task
0.3 in stage 24.0 (TID 378, , executor 42): java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot
be cast to org.apache.hadoop.hive.serde2.io.ShortWritable
at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableShortObjectInspector.get(WritableShortObjectInspector.java:36)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$4.apply(TableReader.scala:390)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$4.apply(TableReader.scala:390)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:435)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:426)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:232)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
I have already tried to cast that column to type short, without success:
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
val result = hc.sql("SELECT cast(col1 as short) FROM table")
result.collect().foreach(println)

Spark SQL Date cache exception

I've opened the spark-sql console and
created the table:
create table test1(date1 date, value int)
row format delimited fields terminated by ','
stored as textfile ;
loaded data into the table:
load data local inpath 'test1.csv' into table test1;
where test1.csv contains:
2015-01-01,10
2015-01-01,15
2015-01-02,10
I can execute select year(date1), month(date1), day(date1) from test1,
but if I run cache table test1;
I get this exception:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 17, 10.0.200.6): java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.MutableAny cannot be cast to org.apache.spark.sql.catalyst.expressions.MutableInt
at org.apache.spark.sql.catalyst.expressions.SpecificMutableRow.getInt(SpecificMutableRow.scala:248)
at org.apache.spark.sql.columnar.IntColumnStats.gatherStats(ColumnStats.scala:191)
at org.apache.spark.sql.columnar.NullableColumnBuilder$class.appendFrom(NullableColumnBuilder.scala:56)
at org.apache.spark.sql.columnar.NativeColumnBuilder.org$apache$spark$sql$columnar$compression$CompressibleColumnBuilder$$super$appendFrom(ColumnBuilder.scala:87)
at org.apache.spark.sql.columnar.compression.CompressibleColumnBuilder$class.appendFrom(CompressibleColumnBuilder.scala:78)
at org.apache.spark.sql.columnar.NativeColumnBuilder.appendFrom(ColumnBuilder.scala:87)
at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:141)
at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:117)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:249)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:172)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:79)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
It looks like a bug in the Spark cache implementation for dates:
https://issues.apache.org/jira/browse/SPARK-6967
Solved in Git.

Hive throws ArrayIndexOutOfBoundsException when selecting count(1) on an ORC table

I have a simple table with 9 fields, using the ORC file format (I followed the steps mentioned here). When I try to count the number of rows in that table (350 million rows, by the way) by submitting:
select count(1) from my_orc_table;
I get an 'ArrayIndexOutOfBoundsException'. Let me copy the stack trace, just in case it provides more information:
Error: java.io.IOException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Thanks!!