Pig: Illustrate error 2997

The code below works fine and produces results in the Grunt shell (local mode), except that ILLUSTRATE on the last relation gives ERROR 2997.
/* Open Grunt in local mode pig -x local */
STOCK_A= LOAD '/media/sf_sand/NYSE_daily_prices_A.csv' USING PigStorage(',') AS (exchange:chararray,symbol:chararray,date:chararray,open:float,high:float,low:float,close:float,volume:int,adj_close:float);
describe STOCK_A;
illustrate STOCK_A;
b= LIMIT STOCK_A 100;
describe b;
illustrate b;
c = FOREACH b GENERATE *;
illustrate c;    -- this works
c = FOREACH b GENERATE symbol,date,close;
dump c;          -- this works
However, illustrate c on this last relation fails; below is the error (ERROR 2997: Encountered IOException):
2015-06-10 11:52:23,621 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2015-06-10 11:52:23,647 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2015-06-10 11:52:23,647 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[ConstantCalculator, LoadTypeCastInserter, PredicatePushdownOptimizer, StreamTypeCastInserter], RULES_DISABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter]}
2015-06-10 11:52:23,650 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-06-10 11:52:23,650 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-06-10 11:52:23,650 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-06-10 11:52:23,651 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-06-10 11:52:23,651 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-06-10 11:52:23,658 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2015-06-10 11:52:23,658 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2015-06-10 11:52:23,658 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Distributed cache not supported or needed in local mode. Setting key [pig.schematuple.local.dir] with code temp directory: /tmp/1433937143658-0
2015-06-10 11:52:23,667 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2015-06-10 11:52:23,669 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map - Aliases being processed per job phase (AliasName[line,offset]): M: STOCK_A[3,9] C: R:
2015-06-10 11:52:23,672 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2015-06-10 11:52:23,672 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2015-06-10 11:52:23,705 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-06-10 11:52:23,707 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-06-10 11:52:23,707 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-06-10 11:52:23,708 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-06-10 11:52:23,708 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-06-10 11:52:23,708 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Reduce phase detected, estimating # of required reducers.
2015-06-10 11:52:23,709 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2015-06-10 11:52:23,723 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2015-06-10 11:52:23,727 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map - Aliases being processed per job phase (AliasName[line,offset]): M: STOCK_A[3,9],STOCK_A[-1,-1],c[8,3] C: R: b[4,3]
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 111, 112, 101, 110] in field being converted to float, caught NumberFormatException <For input string: "stock_price_open"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 104, 105, 103, 104] in field being converted to float, caught NumberFormatException <For input string: "stock_price_high"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 108, 111, 119] in field being converted to float, caught NumberFormatException <For input string: "stock_price_low"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 99, 108, 111, 115, 101] in field being converted to float, caught NumberFormatException <For input string: "stock_price_close"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 118, 111, 108, 117, 109, 101] in field being converted to int, caught NumberFormatException <For input string: "stock_volume"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 97, 100, 106, 95, 99, 108, 111, 115, 101] in field being converted to float, caught NumberFormatException <For input string: "stock_price_adj_close"> field discarded
java.lang.ClassCastException
2015-06-10 11:52:23,727 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Encountered IOException. Exception

In the last lines of your log, you have the following error:
Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 97, 100, 106, 95, 99, 108, 111, 115, 101] in field being converted to float, caught NumberFormatException field discarded
java.lang.ClassCastException
Could you provide a sample of your CSV file? I suspect even STOCK_A is not okay: the discarded values decode to header names like "stock_price_open", which suggests the file still contains its header row.
You may also LIMIT the input to a few lines and show the results of DESCRIBE and DUMP on those lines.
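If that is the case, here is a minimal sketch of one way to drop the header row before projecting (STOCK_CLEAN is a made-up alias; the path and schema are copied from the script above):
STOCK_A = LOAD '/media/sf_sand/NYSE_daily_prices_A.csv' USING PigStorage(',') AS (exchange:chararray,symbol:chararray,date:chararray,open:float,high:float,low:float,close:float,volume:int,adj_close:float);
-- a failed cast leaves the field null, so the header row is exactly what this filter removes
STOCK_CLEAN = FILTER STOCK_A BY open IS NOT NULL;
b = LIMIT STOCK_CLEAN 100;
c = FOREACH b GENERATE symbol, date, close;
illustrate c;
Whether this actually clears ERROR 2997 is something only the sample lines can confirm, but it at least removes the rows that provoke the cast warnings.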

Related

Odoo v13: Could not uninstall crm App: Record does not exist or has been deleted (Record: ir.model.fields(9311,), User: 1)

Odoo version: 13.0.20210614
Way to reproduce: in Applications, CRM App > Uninstall
Behavior: the module can't be uninstalled and the following error is shown (screenshot attached in the original post):
('Record does not exist or has been deleted (Record: ir.model.fields(9311,), User: 1)', None)
The same bug was reported several times but is still not fixed:
https://github.com/odoo/odoo/issues/38008
How can I deal with this in order to uninstall the crm App?
**************** TRACEBACK **************
2021-06-18 14:21:52,779 6 INFO samadeva-oerp-brstaging-2702918 odoo.addons.base.models.ir_module: ALLOW access to module.module_uninstall on ['sale_crm', 'crm_enterprise', 'crm_sms', 'website_crm', 'website_crm_sms', 'mass_mailing_crm', 'crm'] to user __system__ #1 via 86.243.106.83
2021-06-18 14:21:52,800 6 WARNING samadeva-oerp-brstaging-2702918 odoo.modules.loading: Transient module states were reset
2021-06-18 14:21:52,801 6 ERROR samadeva-oerp-brstaging-2702918 odoo.modules.registry: Failed to load registry
Traceback (most recent call last):
File "/home/odoo/src/odoo/odoo/api.py", line 745, in get
def get(self, record, field, default=NOTHING):
value = self._data[field][record._ids[0]]
KeyError: 9311
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/odoo/src/odoo/odoo/fields.py", line 1037, in __get__
value = env.cache.get(record, self)
File "/home/odoo/src/odoo/odoo/api.py", line 751, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: ('ir.model.fields(9311,).model', None)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/odoo/src/odoo/odoo/modules/registry.py", line 86, in new
odoo.modules.load_modules(registry._db, force_demo, status, update_module)
File "/home/odoo/src/odoo/odoo/modules/loading.py", line 494, in load_modules
Module.browse(modules_to_remove.values()).module_uninstall()
File "<decorator-gen-61>", line 2, in module_uninstall
File "/home/odoo/src/odoo/odoo/addons/base/models/ir_module.py", line 73, in check_and_log
return method(self, *args, **kwargs)
File "/home/odoo/src/odoo/odoo/addons/base/models/ir_module.py", line 478, in module_uninstall
self.env['ir.model.data']._module_data_uninstall(modules_to_remove)
File "/home/odoo/src/odoo/odoo/addons/base/models/ir_model.py", line 1898, in _module_data_uninstall
model = self.pool.get(ir_field.model)
File "/home/odoo/src/odoo/odoo/fields.py", line 1050, in __get__
_("(Record: %s, User: %s)") % (record, env.uid),
odoo.exceptions.MissingError: ('Record does not exist or has been deleted (Record: ir.model.fields(9311,), User: 1)', None)
My inspection of the cause of this error (triggered by clicking uninstall on the crm module) shows that the database table ir_model_data had a record (foreign key res_id=9311) pointing to the table ir_model_fields, where the corresponding primary key is missing (no record with id=9311). To be able to uninstall the crm App, the only solution I found, after searching for hours for a way to solve it the Odoo way, was to delete the "orphan" record in ir_model_data. Because doing that was not allowed from the odoo-bin shell, I had to fire the deletion by putting this line at the end of a Python button-handler function (a def buttonchangestatus-style method) clickable from the UI:
self.env['ir.model.data'].search([('res_id','=',9311)],limit=1).unlink()
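For context, here is a minimal sketch of the kind of throwaway button handler described above. The class, model and method names are made up for illustration; only the unlink() call is the actual workaround (restricting the search to ('model', '=', 'ir.model.fields') is an extra precaution, not part of the original line):
from odoo import models

class OrphanXmlidCleanup(models.Model):
    _inherit = 'res.partner'  # any model whose form view can host a temporary button

    def button_delete_orphan_ir_model_data(self):
        # Remove the ir.model.data record that still points at the missing
        # ir.model.fields id 9311, so the crm App can then be uninstalled.
        self.env['ir.model.data'].search(
            [('model', '=', 'ir.model.fields'), ('res_id', '=', 9311)],
            limit=1,
        ).unlink()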

ERROR 1066: Unable to open iterator for alias

Command run (trying to get the maximum runs scored):
Run_M = foreach Run_Group_All generate (Match.Player, Match.Run) , MAX(Match.Run);
As per the log, the GROUP command is failing; can anybody help me find where the problem is?
java.lang.Exception: org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:489)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:556)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:84)
at org.apache.pig.builtin.AlgebraicLongMathBase.exec(AlgebraicLongMathBase.java:93)
at org.apache.pig.builtin.AlgebraicLongMathBase.exec(AlgebraicLongMathBase.java:37)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:326)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNextLong(POUserFunc.java:410)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:351)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:400)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:317)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:474)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:442)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:422)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:269)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:346)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: org.apache.pig.data.DataByteArray cannot be cast to java.lang.Number
at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:77)
... 20 more
2017-09-03 07:48:03,212 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2017-09-03 07:48:03,212 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local1294624349_0011 has failed! Stop running all dependent jobs
2017-09-03 07:48:03,212 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2017-09-03 07:48:03,213 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-09-03 07:48:03,214 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-09-03 07:48:03,214 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2017-09-03 07:48:03,215 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.8.1 0.15.0 goldi 2017-09-03 07:48:01 2017-09-03 07:48:03 GROUP_BY
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_local1294624349_0011 Cric,Match,Run_Group_All,Run_M GROUP_BY Message: Job failed! file:/tmp/temp-1949037811/tmp1601097545,
Input(s):
Failed to read data from "/home/goldi/Batting.csv"
Output(s):
Failed to produce result in "file:/tmp/temp-1949037811/tmp1601097545"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local1294624349_0011
2017-09-03 07:48:03,217 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2017-09-03 07:48:03,218 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias Run_M
Details at logfile: /home/goldi/pig_1504365116860.log
Replace '(Match.Player, Match.Run)' with 'group'.
Run_M = foreach Run_Group_All generate FLATTEN(group) as (player,run) , MAX(Match.Run);
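Separately, the "DataByteArray cannot be cast to java.lang.Number" line in the stack trace usually means the Run field reached MAX as an untyped bytearray. Below is a minimal sketch of the two usual remedies, with the file path taken from the log and the column positions only guessed; note that an AS clause with a type inside a FOREACH merely declares the type, it does not cast:
-- Option 1: give Run a numeric type when loading
Match = LOAD '/home/goldi/Batting.csv' USING PigStorage(',') AS (Player:chararray, Run:long);
-- Option 2: cast explicitly if Match is derived from another relation (Cric appears in your log)
Match = FOREACH Cric GENERATE (chararray)$0 AS Player, (long)$1 AS Run;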

How do you control text formatting when launching GAP scripts from the command line?

I would like to understand GAP's behaviour when I launch a script from the command line, for example
$ gap mytest.gap
as opposed to calling it from inside GAP
gap> Read("mytest.gap");
In particular, I've tried to suppress automatic formatting with line breaks and indentation. If the file mytest.gap is the following
SetPrintFormattingStatus( "*stdout*", false );
Print( Primes{[1..30]}, "\n" );
then I get the expected behaviour when calling it with Read(), namely
[ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113 ]
whereas launching it from the command line, I still get
[ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71,
73, 79, 83, 89, 97, 101, 103, 107, 109, 113 ]
Can somebody please explain this behaviour? Is GAP's treatment of scripts launched from the command line documented somewhere? I couldn't find it in the manual; the man page does say usage: gap [OPTIONS] [FILES], but documents only how the options are treated.
I am afraid that it is currently not possible to completely disable the output formatting of Print the way you tried.
However, you can work around the problem by using the newer stream APIs and PrintTo, like this:
s:=OutputTextUser();
SetPrintFormattingStatus( s, false );
PrintTo( s, Primes{[1..30]}, "\n" );
I logged this as a bug in the GAP issue tracker, and perhaps we can fix it in the next release (or perhaps somebody will explain why it's "not a bug but a feature" ;-).

Error: unexpected double quote occurs while parsing Apache common log file

I was trying to parse an Apache log:
159.142.136.231 - - [08/Aug/1995:21:56:04 -0400] "GET /shuttle/countdown/ HTTP/1.0" 200 4673
Code:
log = load "/myhdfs/project/TestLog.txt" USING org.apache.pig.piggybank.storage.apachelog.ApacheCommonLogLoader AS (address, logname, user, time,method, uri, proto,status, bytes);
Error :
<line 1, column 9> Unexpected character '"'
2015-12-12 00:49:10,187 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: <line 1, column 9> Unexpected character '"'.
I don't know why this error occurs.
Try using single quotes ' instead of double quotes when giving the HDFS path.
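In Pig Latin, string literals such as file paths go in single quotes; the double quote is the "unexpected character" the parser complains about. The same statement with only the quoting changed:
log = load '/myhdfs/project/TestLog.txt' USING org.apache.pig.piggybank.storage.apachelog.ApacheCommonLogLoader AS (address, logname, user, time, method, uri, proto, status, bytes);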

Elasticsearch Parse Exception error when attempting to index PDF

I'm just getting started with elasticsearch. Our requirement has us needing to index thousands of PDF files and I'm having a hard time getting just ONE of them to index successfully.
Installed the Attachment Type plugin and got response: Installed mapper-attachments.
Followed the Attachment Type in Action tutorial but the process hangs and I don't know how to interpret the error message. Also tried the gist which hangs in the same place.
$ curl -X POST "localhost:9200/test/attachment/" -d json.file
{"error":"ElasticSearchParseException[Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]]","status":400}
More details:
The json.file contains an embedded Base64 PDF file (as per instructions). The first line of the file appears correct (to me anyway): {"file":"JVBERi0xLjQNJeLjz9MNCjE1OCAwIG9iaiA8...
I'm not sure if maybe the json.file is invalid or if maybe elasticsearch just isn't set up to parse PDFs properly?!?
Encoding - Here's how we're encoding the PDF into json.file (as per tutorial):
coded=`cat fn6742.pdf | perl -MMIME::Base64 -ne 'print encode_base64($_)'`
json="{\"file\":\"${coded}\"}"
echo "$json" > json.file
also tried:
coded=`openssl base64 -in fn6742.pdf`
log:
[2012-06-07 12:32:16,742][DEBUG][action.index ] [Bailey, Paul] [test][0], node[AHLHFKBWSsuPnTIRVhNcuw], [P], s[STARTED]: Failed to execute [index {[test][attachment][DauMB-vtTIaYGyKD4P8Y_w], source[json.file]}]
org.elasticsearch.ElasticSearchParseException: Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]
at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:147)
at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:50)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:451)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:437)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:290)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:210)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:532)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:430)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Hoping someone can help me see what I'm missing or did wrong?
The following error points to the source of the problem.
Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]
The UTF-8 codes [106, 115, 111, ...] show that you are trying to index the string "json.file" instead of the content of the file.
To index the content of the file, add the character "@" in front of the file name so that curl reads the request body from that file:
curl -X POST "localhost:9200/test/attachment/" -d @json.file
Turns out it's necessary to export ES_JAVA_OPTS=-Djava.awt.headless=true before running a java app on a 'headless' server... who would'a thought!?!