I have a column "category" which contains data like this:
"Failed extract of third-party root list from auto update cab at: <http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab> with error: The data is invalid."
I need to extract the URL portion between the "<" and ">" signs in the category column.
I have written a Hive query:
select level,category,regexp_extract(category,'http://[^\>]*') AS url from event where level='Error';
I got an exception:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201406122248_0014, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201406122248_0014
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201406122248_0014
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-06-13 02:13:35,696 Stage-1 map = 0%, reduce = 0%
2014-06-13 02:14:13,895 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201406122248_0014 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201406122248_0014
Examining task ID: task_201406122248_0014_m_000002 (and more) from job job_201406122248_0014
Task with the most failures(4):
-----
Task ID:
task_201406122248_0014_m_000000
URL:
http://localhost.localdomain:50030/taskdetails.jsp?jobid=job_201406122248_0014&tipid=task_201406122248_0014_m_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"level":"Error","datetimes":"6/13/2014 9:24:05 AM","source":"Microsoft-Windows-CAPI2","eventid":4107,"task":"None","category":"\"Failed extract of third-party root list from auto update cab at: <http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab> with error: The data is invalid."}
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:159)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
How can I fix this? Please help.
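For the extraction itself, here is a minimal sketch, assuming the URL always sits between literal "<" and ">" characters: give regexp_extract a capture group and ask for group 1, so it returns just the URL without the angle brackets.

select level,
       category,
       regexp_extract(category, '<(http[^>]*)>', 1) AS url   -- group 1 = the text between < and >
from event
where level = 'Error';

If the job itself keeps failing, the per-task logs linked in the job output above are the place to look for the row-level cause.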
Related
We are using Structured Streaming in a Databricks environment (DBR 6.6, Spark 2.4.5), reading from Kafka and writing to Cosmos DB. Every time we run this program, we get the same exception below just before the final joins that save the data to Cosmos DB. We haven't modified any Spark-specific settings and are leveraging the default Spark/DBR configurations.
Caused by: org.apache.spark.SparkException:
Job aborted due to stage failure:
Task 174 in stage 9353.0 failed 4 times, most recent failure:
Lost task 174.3 in stage 9353.0 (TID 60863, 10.139.64.9, executor 1):
java.lang.IllegalStateException:
Error reading delta file dbfs:/raw_zone/uffRetail_jointbl_dev_cp1/state/8/174/left-keyToNumValues/1.delta of HDFSStateStoreProvider[id = (op=8,part=174),dir = dbfs:/raw_zone/uffRetail_jointbl_dev_cp1/state/8/174/left-keyToNumValues]:
dbfs:/raw_zone/uffRetail_jointbl_dev_cp1/state/8/174/left-keyToNumValues/1.delta does not exist
Caused by: java.io.FileNotFoundException:
/6455647419774311/raw_zone/uffRetail_jointbl_dev_cp1/state/8/174/left-keyToNumValues/1.delta
I am using Hue to access the Hive service. I created a Hive table using:
create table tablename(colname type,.....)
row format delimited fields terminated by ',';
I uploaded the data, 300,000 records, with no problem. But while executing a query like:
select count(*) from tablename;
it creates a MapReduce job, and at this point I get the following warning. How do I resolve this warning?
WARN : Hadoop command-line option parsing not performed. Implement
the Tool interface and execute your application with ToolRunner to
remedy this.
Complete Log:
INFO : Number of reduce tasks determined at compile time: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
WARN : Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
INFO : number of splits:1
INFO : Submitting tokens for job: job_1442315442114_0017
INFO : The url to track the job: http://dwiclmaster:8088/proxy/application_1442315442114_0017/
INFO : Starting Job = job_1442315442114_0017, Tracking URL = http://dwiclmaster:8088/proxy/application_1442315442114_0017/
INFO : Kill Command = /opt/cloudera/parcels/CDH-5.4.4-1.cdh5.4.4.p0.4/lib/hadoop/bin/hadoop job -kill job_1442315442114_0017
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2015-09-15 18:29:06,910 Stage-1 map = 0%, reduce = 0%
INFO : 2015-09-15 18:29:15,257 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.65 sec
INFO : 2015-09-15 18:29:21,513 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.19 sec
INFO : MapReduce Total cumulative CPU time: 3 seconds 190 msec
INFO : Ended Job = job_1442315442114_0017
This is just a warning from MapReduce, since jobs submitted by Hive do not implement the Tool interface. It can be safely ignored.
More about ToolRunner.
We are testing a multi-node Hadoop cluster (2.4.0) with Hive (0.13.0). The cluster works fine, but when we run a query in Hive, the MapReduce jobs are always executed locally.
For example:
Without a hive-site.xml (in fact, without any configuration file other than the defaults), we set mapred.job.tracker:
hive> SET mapred.job.tracker=192.168.7.183:8032;
And run a query:
hive> select count(1) from suricata;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
OpenJDK 64-Bit Server VM warning: You have loaded library /hadoop/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/04/29 12:48:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/29 12:48:02 WARN conf.Configuration: file:/tmp/hadoopuser/hive_2014-04-29_12-47-57_290_2455239450939088471-1/-local-10003/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/29 12:48:02 WARN conf.Configuration: file:/tmp/hadoopuser/hive_2014-04-29_12-47-57_290_2455239450939088471-1/-local-10003/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Execution log at: /tmp/hadoopuser/hadoopuser_20140429124747_badfcce6-620e-4718-8c3b-e4ef76bdba7e.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2014-04-29 12:48:05,450 null map = 0%, reduce = 0%
.......
.......
2014-04-29 12:52:26,982 null map = 100%, reduce = 100%
Ended Job = job_local1983771849_0001
Execution completed successfully
MapredLocal task succeeded
OK
266559841
Time taken: 270.176 seconds, Fetched: 1 row(s)
What are we missing?
Set hive.exec.mode.local.auto to false, which will disable local mode execution in Hive.
For each query, the compiler generates a DAG of map-reduce jobs. If the job runs in local mode, check the properties below:
mapreduce.framework.name=local;
hive.exec.mode.local.auto=false;
If the auto option is enabled, Hive runs the job in local mode when:
Total input size < hive.exec.mode.local.auto.inputbytes.max
Total number of map tasks < hive.exec.mode.local.auto.tasks.max
Total number of reduce tasks is 0 or 1
These options have been available since Hive 0.7.
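To force execution onto the cluster, a minimal sketch from the Hive shell, assuming a YARN cluster as the question's port 8032 (the YARN ResourceManager port) suggests:

SET hive.exec.mode.local.auto=false;   -- never fall back to local execution
SET mapreduce.framework.name=yarn;     -- submit to the cluster, not the local runner
select count(1) from suricata;         -- job id should now read job_<timestamp>_NNNN, not job_local...

Note that mapred.job.tracker is the Hadoop 1.x property; on a 2.x cluster, mapreduce.framework.name (together with yarn.resourcemanager.address in the site configuration) is what routes the job to YARN.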
I use MySQL as the storage backend with Nutch.
The job failed when crawling some sites. I got the following exception, and Nutch exited when it reached this page: http://www.appchina.com/users.html
Exception in thread "main" java.lang.RuntimeException: job failed: name=parse, jobid=job_local_0004
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:47)
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:249)
at org.apache.nutch.crawl.Crawler.runTool(Crawler.java:68)
at org.apache.nutch.crawl.Crawler.run(Crawler.java:171)
at org.apache.nutch.crawl.Crawler.run(Crawler.java:250)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawler.main(Crawler.java:257)
So I modified ./src/java/org/apache/nutch/util/NutchJob.java, changing
if (getConfiguration().getBoolean("fail.on.job.failure", true)) {
to
if (getConfiguration().getBoolean("fail.on.job.failure", false)) {
After recompiling, I no longer get any exception, but the crawl restarts endlessly.
FetcherJob : timelimit set for : -1
FetcherJob: threads: 30
FetcherJob: parsing: false
FetcherJob: resuming: false
Using queue mode : byHost
Fetcher: threads: 30
Fetcher: throughput threshold: -1
Fetcher: throughput threshold sequence: 5
QueueFeeder finished: total 2 records. Hit by time limit :0
fetching http://www.appchina.com/
fetching http://www.appchina.com/users.html
-finishing thread FetcherThread0, activeThreads=29
-finishing thread FetcherThread29, activeThreads=28
...
0/0 spinwaiting/active, 2 pages, 0 errors, 0.4 0.4 pages/s, 137 137 kb/s, 0 URLs in 0 queues
-activeThreads=0
ParserJob: resuming: false
ParserJob: forced reparse: false
ParserJob: parsing all
Parsing http://www.appchina.com/
Parsing http://www.appchina.com/users.html
UPDATE
The error in hadoop.log:
2012-09-17 18:48:51,257 WARN mapred.LocalJobRunner - job_local_0004
java.io.IOException: java.sql.BatchUpdateException: Incorrect string value: '\xE7\x94\xA8\xE6\x88\xB7...' for column 'text' at row 1
at org.apache.gora.sql.store.SqlStore.flush(SqlStore.java:340)
at org.apache.gora.sql.store.SqlStore.close(SqlStore.java:185)
at org.apache.gora.mapreduce.GoraRecordWriter.close(GoraRecordWriter.java:55)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:651)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: java.sql.BatchUpdateException: Incorrect string value: '\xE7\x94\xA8\xE6\x88\xB7...' for column 'text' at row 1
at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:2028)
at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1451)
at org.apache.gora.sql.store.SqlStore.flush(SqlStore.java:328)
... 6 more
Caused by: java.sql.SQLException: Incorrect string value: '\xE7\x94\xA8\xE6\x88\xB7...' for column 'text' at row 1
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3609)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3541)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2002)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2163)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2427)
at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1980)
... 8 more
UPDATE again
I've dropped the table Gora created and created a similar table with a VARCHAR(128) id and utf8mb4 DEFAULT CHARSET. It works now. Why?
Can anyone help?
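"Incorrect string value" is MySQL rejecting bytes that the column's character set cannot store: '\xE7\x94\xA8\xE6\x88\xB7' is multi-byte UTF-8 (Chinese text), which a latin1 column (a common MySQL default) cannot hold, while utf8mb4 stores any Unicode text. As a sketch of the replacement table described above (the table name and column types are assumptions; match whatever Gora originally generated):

CREATE TABLE webpage (
  id   VARCHAR(128) PRIMARY KEY,   -- the VARCHAR(128) id mentioned above
  text TEXT                        -- the column that raised the error
  -- ... remaining columns copied from the original Gora schema ...
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;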
You need to add the Hadoop logs for the parse job; the stack trace attached does not show that info. After you made that code change, did parsing happen successfully?
I am trying to run Hive queries, but I am getting errors:
hive> FROM (
> FROM t1
> MAP t1.patient_mrn, t1.encounter_date
> USING 'retrieve'
> AS mp1, mp2
> CLUSTER BY mp1) map_output
> INSERT OVERWRITE TABLE t3
> REDUCE map_output.mp1, map_output.mp2
> USING 'q1.txt'
> AS reducef1, reducef2;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201112281627_0097, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201112281627_0097
Kill Command = /home/hadoop/hadoop-0.20.2-cdh3u2//bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201112281627_0097
2011-12-31 03:10:46,391 Stage-1 map = 0%, reduce = 0%
2011-12-31 03:11:29,794 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201112281627_0097 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
hive>
The best advice, without knowing a lot more, is about where to find the error logs: go to your JobTracker's web page, find the page for that job, and drill down to the error logs.
Look for any "failed" tasks, click there to get to the page for that specific task.
You'll eventually get to the page containing the task-specific log, and that should help you diagnose the problem.
This could happen in any number of scenarios. Rerun the query, check the JobTracker for the failed/killed attempts, and go through the logs for the exact reason.
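If those logs point at the transform scripts, one frequent cause with MAP/REDUCE queries like the one above is that the scripts were never shipped to the task nodes. A hedged sketch, with placeholder paths:

ADD FILE /path/to/retrieve;   -- map script named in USING 'retrieve'
ADD FILE /path/to/q1.txt;     -- reduce script named in USING 'q1.txt'

With the files added, rerun the FROM ... MAP ... REDUCE statement unchanged.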