I have this shell script which I run on my Linux server with Pentaho. On my local machine it works fine,
but if I run this command on my server
./incremental_job.sh
it returns this in the log output:
2022/08/19 14:17:32 - job_etl_incremental - Starting entry [Job failed alert]
2022/08/19 14:17:37 - job_etl_incremental - Finished job entry [Job failed alert] (result=[true])
2022/08/19 14:17:37 - job_etl_incremental - Finished job entry [Job ERP Stock] (result=[true])
2022/08/19 14:17:37 - job_etl_incremental - Job execution finished
2022/08/19 14:17:37 - Carte - Installing timer to purge stale objects after 1440 minutes.
2022/08/19 14:17:37 - Kitchen - Finished!
Any ideas on how to fix this? I run other shell scripts without having this issue.
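For reference, the script is just a thin wrapper around Kitchen. A minimal sketch of what it does (the install path and .kjb location below are placeholders, not my exact paths):

#!/bin/bash
# Sketch of incremental_job.sh: run the job with Kitchen and propagate its
# exit code (paths are assumed for illustration).
cd /home/pentaho/data-integration
./kitchen.sh -file=/home/pentaho/jobs/job_etl_incremental.kjb -level=Basic
status=$?
echo "Kitchen exited with status ${status}"
exit ${status}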
I'm running this job on Pentaho 8.2 and I get this error log:
2022/05/17 14:47:51 - job_etl_incremental - Finished job entry [Job User Sales Support] (result=[false])
2022/05/17 14:47:51 - job_etl_incremental - Finished job entry [Job User] (result=[false])
2022/05/17 14:47:51 - job_etl_incremental - Job execution finished
2022/05/17 14:47:51 - Kitchen - Finished!
2022/05/17 14:47:51 - Kitchen - ERROR (version 8.2.0.0-342, build 8.2.0.0-342 from 2018-11-14 10.30.55 by buildguy) : Finished with errors
2022/05/17 14:47:51 - Kitchen - Start=2022/05/17 14:46:04.982, Stop=2022/05/17 14:47:51.711
2022/05/17 14:47:51 - Kitchen - Processing ended after 1 minutes and 46 seconds (106 seconds total).
It does run some of the jobs, but not to completion; they finish with (result=[false]). Any ideas on how to fix this?
I do have pentaho-metadata-8.2.0.0-342.jar in /home/pentaho/data-integration/lib.
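One thing I can still try, sketched below: rerun the job with detailed logging and look for the first failing entry (the .kjb path is an assumption):

cd /home/pentaho/data-integration
# Rerun with detailed logging and keep the full output (job path assumed).
./kitchen.sh -file=/home/pentaho/jobs/job_etl_incremental.kjb -level=Detailed > /tmp/job_etl_incremental.log 2>&1
# Show the first failing entries and any ERROR lines.
grep -n 'result=\[false\]\|ERROR' /tmp/job_etl_incremental.log | head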
I have a CircleCI configuration that runs a script to deploy a Storybook site. It essentially cds into one of my frontend monorepo packages, runs yarn install, builds the Storybook, and syncs it to an S3 bucket.
(redacting a few things like names of packages and files)
It starts with a job in my CircleCI config file:
deploy-package-storybook:
  <<: *defaults
  working_directory: ~/root
  steps:
    - checkout
    - <<: *install_aws
    - attach_workspace:
        at: ~/
    - run:
        name: Deploy Storybook
        command: |
          ~/root/bin/deploy-storybook.sh PACKAGE
The script looks like this:
echo "${CYAN}deploying storybook\n"
cd packages/${TAG}
yarn build-storybook
aws s3 sync ./artifacts/storybook s3://storybook.website.us/${TAG} --delete || slack_alert storybook-deploy-fail
slack_alert "deploy-storybook-success"
When it runs in CircleCI, it seemingly finishes syncing and even sends a Slack alert to my channel that the deploy succeeded, but at the end of it, it shows this:
upload: artifacts/storybook/vendors~main.8c562e1c344f6a5f2073.bundle.js to s3://storybook.website.us/package/vendors~main.8c562e1c344f6a5f2073.bundle.js
0
ok
Exited with code exit status 1
CircleCI received exit code 1
However, I'm not entirely sure why it does this. It's successfully synced, so it should pass, right?
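My best guess so far, sketched below: a script exits with the status of its last command, so even after a successful sync, a non-zero return from slack_alert (a helper defined elsewhere in the script) would fail the step. A rewrite that makes each exit status explicit, assuming slack_alert stays as-is:

aws s3 sync ./artifacts/storybook "s3://storybook.website.us/${TAG}" --delete
sync_status=$?
if [ "${sync_status}" -ne 0 ]; then
  # Alert on failure, then fail the step with the sync's real status.
  slack_alert storybook-deploy-fail
  exit "${sync_status}"
fi
# Don't let the success alert's own return code fail the CircleCI step.
slack_alert "deploy-storybook-success" || true
exit 0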
Things I've done:
I've tried adding a --debug flag to aws s3 sync like so:
aws s3 sync ./artifacts/storybook s3://storybook.website.us/${TAG} --delete --debug
and it returns something like this:
2020-03-16 13:14:28,349 - ThreadPoolExecutor-0_2 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7fcc35212dd0>
2020-03-16 13:14:28,349 - ThreadPoolExecutor-0_2 - botocore.retryhandler - DEBUG - No retry needed.
2020-03-16 13:14:28,350 - ThreadPoolExecutor-0_2 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7fcc35212e10>>
2020-03-16 13:14:28,350 - ThreadPoolExecutor-0_2 - botocore.hooks - DEBUG - Event after-call.s3.PutObject: calling handler <function enhance_error_msg at 0x7fcc35c79320>
2020-03-16 13:14:28,350 - ThreadPoolExecutor-0_2 - s3transfer.utils - DEBUG - Releasing acquire 34/None
upload: artifacts/storybook/vendors~main.5f43fbfd82bbe3ed3177.bundle.js to s3://storybook.website.us/package/vendors~main.5f43fbfd82bbe3ed3177.bundle.js
2020-03-16 13:14:28,365 - Thread-1 - awscli.customizations.s3.results - DEBUG - Shutdown request received in result processing thread, shutting down result thread.
0
ok
Exited with code exit status 1
CircleCI received exit code 1
This isn't my area of expertise, so I'm really lost on what to do with errors like these. Could someone please help?
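For reference, one more thing I can try is tracing the script so the command that actually returns 1 becomes visible (script path as in the config above):

# Trace every command the script runs, then print its final exit status.
bash -x ~/root/bin/deploy-storybook.sh PACKAGE
echo "script exited with $?"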
Using Filebeat 7.5.2:
I'm using a Filebeat configuration with close_eof enabled and I run Filebeat with the --once flag. I can see the harvester reaching EOF, but Filebeat keeps running.
Filebeat conf:
filebeat.inputs:
  - type: log
    close_eof: true
    enabled: true
    paths:
      - "${LOGS_PATH}"
    scan_frequency: 1s
    fields: {
      machine: "${HOST}"
    }

output.logstash:
  hosts: ["192.168.41.6:5044"]
  bulk_max_size: 1024
  timeout: 30s
  pipelining: 1
  workers: 1
And I run it using:
filebeat run --once -v -c "PATH TO CONF..."
And some logs from the filebeat instance:
...
2020-02-04T18:30:16.950Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-04T18:30:17.059Z INFO [publisher] pipeline/module.go:97 Beat name: logstash
2020-02-04T18:30:17.167Z WARN beater/filebeat.go:152 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.168Z INFO instance/beat.go:429 filebeat start running.
2020-02-04T18:30:17.168Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-04T18:30:17.168Z INFO registrar/migrate.go:104 No registry home found. Create: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat
2020-02-04T18:30:17.179Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-04T18:30:17.192Z INFO registrar/registrar.go:108 No registry file found under: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:145 Loading registrar data from /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-04T18:30:17.193Z WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.193Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-04T18:30:17.194Z INFO log/input.go:152 Configured paths: [/tmp/tmp.BXJtfiaEzb/*.log]
2020-02-04T18:30:17.206Z INFO input/input.go:114 Starting input of type: log; ID: 13918413832820009056
2020-02-04T18:30:17.225Z INFO input/input.go:167 Stopping Input: 13918413832820009056
2020-02-04T18:30:17.225Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-04T18:30:17.225Z INFO log/harvester.go:251 Harvester started for file: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:384 Running filebeat once. Waiting for completion ...
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:386 All data collection completed. Shutting down.
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:139 Stopping Crawler
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:149 Stopping 1 inputs
2020-02-04T18:30:17.258Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:30:17.296Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... Only metrics here ...
2020-02-04T18:35:55.686Z INFO log/harvester.go:274 End of file reached: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log. Closing because close_eof is enabled.
2020-02-04T18:35:55.686Z INFO crawler/crawler.go:165 Crawler stopped
... MORE METRICS ...
2020-02-04T18:36:26.609Z ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 192.168.41.6:49662->192.168.41.6:5044: i/o timeout
2020-02-04T18:36:26.621Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-04T18:36:28.520Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-04T18:36:28.520Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:36:28.521Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... MORE METRICS ...
From this instance I'm outputting to Logstash 7.5.2 running in the same Ubuntu 18 VM. Running Logstash with log level trace does not show any errors.
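In case it matters, I can at least verify the config and the Logstash connection from the Filebeat side before the --once run (a sketch; the config path is a placeholder):

# Validate the configuration, then test the connection to the Logstash output.
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml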
I am trying to run a simple Pig script which I have scheduled via Oozie; however, I get the following Oozie error after the script runs.
I am using Cloudera Enterprise Data Hub Edition Trial 5.6.0 (#54 built by jenkins on 20160211-1910 git: 1c2be84380aa23bd5d6993ec300e144c78b37bf2).
> 2016-04-09 06:37:06,229 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
> 2016-04-09 06:37:06,237 [uber-SubtaskRunner] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2017: Internal error creating job configuration.
> <<< Invocation of Main class completed <<<
> Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]
> Oozie Launcher failed, finishing Hadoop job gracefully
> Oozie Launcher, uploading action data to HDFS sequence file: hdfs://node.xxxx.com:8020/user/admin/oozie-oozi/0000000-160409060732867-oozie-oozi-W/pig--pig/action-data.seq
EDIT:
Additional log info, obtained via the oozie command shell as follows:
oozie job -log 0000001-160409063446097-oozie-oozi-W -oozie http://xxxnode:11000/oozie
This gives only the following:
63446097-oozie-oozi-W] ACTION[0000001-160409063446097-oozie-oozi-W#FirstJob] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]
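For completeness, the launcher's full stdout/stderr usually says more than the exit code alone; a sketch of how I can pull it from YARN (the application id below is hypothetical and has to be read from the action's info or the web console):

# Find the external (YARN) application id of the failing action.
oozie job -info 0000001-160409063446097-oozie-oozi-W -oozie http://xxxnode:11000/oozie
# Fetch the launcher's full logs from YARN (application id is hypothetical).
yarn logs -applicationId application_1460182847584_0001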
I'm running a Kettle job which calls another job and then a transformation. The job runs smoothly in PDI Spoon, but when I call it via Kitchen it throws:
2016/03/07 20:51:50 - Purifier_success - Executing command : /home/ubuntu/dataintegration/data-integration/null/kettle_6a3db0e2-e4a6-11e5-b201-d7ec3590f1c5shell
2016/03/07 20:51:50 - Purifier_success - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : (stderr) /home/ubuntu/dataintegration/data-integration/null/kettle_6a3db0e2-e4a6-11e5-b201-d7ec3590f1c5shell: 1: /home/ubuntu/dataintegration/data-integration/null/kettle_6a3db0e2-e4a6-11e5-b201-d7ec3590f1c5shell:
2016/03/07 20:51:50 - Purifier_success - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : (stderr) : not found
But the job itself runs successfully; it's just awkward to see the shell output reporting an error.
Is there any way to solve this?
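For what it's worth, the literal null segment in the generated script path makes me think the Shell job entry's working directory is unset; a quick check on the server (path copied from the log above):

# If this directory exists, Kettle is writing its temp scripts under a
# literal "null" working directory; setting an explicit working directory
# in the Shell job entry should stop the (stderr) "not found" noise.
ls -ld /home/ubuntu/dataintegration/data-integration/null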