CircleCI deploy looks successful, but exits with code 1? - amazon-s3

I have a CircleCI configuration that runs a script to deploy a Storybook site. Essentially it cds into one of my frontend monorepo packages, runs yarn install, builds the Storybook, and syncs it to an S3 bucket.
(I'm redacting a few things like package and file names.)
It starts with a job in my CircleCI config file:
deploy-package-storybook:
  <<: *defaults
  working_directory: ~/root
  steps:
    - checkout
    - <<: *install_aws
    - attach_workspace:
        at: ~/
    - run:
        name: Deploy Storybook
        command: |
          ~/root/bin/deploy-storybook.sh PACKAGE
The script looks like this:
echo "${CYAN}deploying storybook\n"
cd packages/${TAG}
yarn build-storybook
aws s3 sync ./artifacts/storybook s3://storybook.website.us/${TAG} --delete || slack_alert storybook-deploy-fail
slack_alert "deploy-storybook-success"
When it runs in CircleCI, it seemingly finishes syncing and even sends a Slack alert to my channel that the deploy succeeded, but at the end it shows this:
upload: artifacts/storybook/vendors~main.8c562e1c344f6a5f2073.bundle.js to s3://storybook.website.us/package/vendors~main.8c562e1c344f6a5f2073.bundle.js
0
ok
Exited with code exit status 1
CircleCI received exit code 1
However, I'm not entirely sure why it does this. The sync succeeded, so the job should pass, right?
Things I've done:
I've tried adding a --debug flag to aws s3 sync like so:
aws s3 sync ./artifacts/storybook s3://storybook.website.us/${TAG} --delete --debug
and it returns something like this:
2020-03-16 13:14:28,349 - ThreadPoolExecutor-0_2 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7fcc35212dd0>
2020-03-16 13:14:28,349 - ThreadPoolExecutor-0_2 - botocore.retryhandler - DEBUG - No retry needed.
2020-03-16 13:14:28,350 - ThreadPoolExecutor-0_2 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7fcc35212e10>>
2020-03-16 13:14:28,350 - ThreadPoolExecutor-0_2 - botocore.hooks - DEBUG - Event after-call.s3.PutObject: calling handler <function enhance_error_msg at 0x7fcc35c79320>
2020-03-16 13:14:28,350 - ThreadPoolExecutor-0_2 - s3transfer.utils - DEBUG - Releasing acquire 34/None
upload: artifacts/storybook/vendors~main.5f43fbfd82bbe3ed3177.bundle.js to s3://storybook.website.us/package/vendors~main.5f43fbfd82bbe3ed3177.bundle.js
2020-03-16 13:14:28,365 - Thread-1 - awscli.customizations.s3.results - DEBUG - Shutdown request received in result processing thread, shutting down result thread.
0
ok
Exited with code exit status 1
CircleCI received exit code 1
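Since --debug only covers the aws CLI itself, another thing I've been considering is instrumenting the script to see which command actually produces the final status. A rough sketch of the extra lines (assuming the script runs under bash; set -x and the EXIT trap are standard bash, not anything CircleCI-specific):

set -x                                          # trace every command as it runs
trap 'echo "script exiting with status $?"' EXIT

# ...existing cd / yarn build-storybook steps unchanged...
aws s3 sync ./artifacts/storybook "s3://storybook.website.us/${TAG}" --delete
echo "sync exit status: $?"                     # status of the sync itself
slack_alert "deploy-storybook-success"
echo "slack_alert exit status: $?"              # status of the alert helper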
This isn't my area of expertise, so I'm really lost on what to do with errors like these. Could someone please help?

Related

AttributeError when using GitHub self-hosted runners to run unit tests

Hello~ I am trying to use a GitHub workflow to run the unit tests for the code in my repository.
So I wrote a YAML file whose purpose is, when I push code to my repository, to run my code on GitHub using my local (self-hosted) environment, so that the unit tests run automatically.
But I can't get this workflow to run successfully; this error message always appears. I'm curious why I can run the unit tests successfully in my local IDE, but they fail when the workflow runs them for me.
============================== warnings summary ===============================
..\..\..\..\anaconda3\lib\site-packages\pyreadline\py3k_compat.py:8
C:\Users\COA\anaconda3\lib\site-packages\pyreadline\py3k_compat.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
return isinstance(x, collections.Callable)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ===========================
ERROR test/test_createds_for_fnn.py - AttributeError: type object 'h5py.h5.H5...
ERROR test/test_fnn.py - AttributeError: type object 'h5py.h5.H5PYConfig' has...
ERROR test/test_unet.py - AttributeError: type object 'h5py.h5.H5PYConfig' ha...
!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!
======================== 1 warning, 3 errors in 2.76s =========================
Error: Process completed with exit code 1.
My environment:
Windows 10 OS
Python 3.8.8
Tensorflow-gpu 2.7.0
pytest 6.2.3
h5py 3.6.0
Workflow
# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: self-hosted

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3

      # Runs a single command using the runners shell
      - name: Run a one-line script
        run: echo Hello, world!

      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          echo Add other actions to build,
          echo test, and deploy your project
          echo train the fnn
          pytest
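One thing that might explain the difference between the IDE run and the workflow run is which Python environment the self-hosted runner actually picks up (the DeprecationWarning path above points at the Anaconda base install). A hedged sketch of diagnostic commands that could go in the run step right before pytest, just to confirm which interpreter and h5py build are being used:

python -c "import sys; print(sys.executable)"
python -c "import h5py; print(h5py.__version__, h5py.__file__)"
pip show h5py
pytest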

Filebeat does not complete on close_eof + --once

Using filebeat 7.5.2:
I'm using a filebeat configuration with close_eof enabled, and I run filebeat with the --once flag. I can see the harvester reaching EOF, but filebeat keeps running.
Filebeat conf:
filebeat.inputs:
- type: log
  close_eof: true
  enabled: true
  paths:
    - "${LOGS_PATH}"
  scan_frequency: 1s
  fields: {
    machine: "${HOST}"
  }

output.logstash:
  hosts: ["192.168.41.6:5044"]
  bulk_max_size: 1024
  timeout: 30s
  pipelining: 1
  workers: 1
And I run it using:
filebeat run --once -v -c "PATH TO CONF..."
And some logs from the filebeat instance:
...
2020-02-04T18:30:16.950Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-04T18:30:17.059Z INFO [publisher] pipeline/module.go:97 Beat name: logstash
2020-02-04T18:30:17.167Z WARN beater/filebeat.go:152 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.168Z INFO instance/beat.go:429 filebeat start running.
2020-02-04T18:30:17.168Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-04T18:30:17.168Z INFO registrar/migrate.go:104 No registry home found. Create: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat
2020-02-04T18:30:17.179Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-04T18:30:17.192Z INFO registrar/registrar.go:108 No registry file found under: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:145 Loading registrar data from /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-04T18:30:17.193Z WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.193Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-04T18:30:17.194Z INFO log/input.go:152 Configured paths: [/tmp/tmp.BXJtfiaEzb/*.log]
2020-02-04T18:30:17.206Z INFO input/input.go:114 Starting input of type: log; ID: 13918413832820009056
2020-02-04T18:30:17.225Z INFO input/input.go:167 Stopping Input: 13918413832820009056
2020-02-04T18:30:17.225Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-04T18:30:17.225Z INFO log/harvester.go:251 Harvester started for file: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:384 Running filebeat once. Waiting for completion ...
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:386 All data collection completed. Shutting down.
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:139 Stopping Crawler
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:149 Stopping 1 inputs
2020-02-04T18:30:17.258Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:30:17.296Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... Only metrics here ...
2020-02-04T18:35:55.686Z INFO log/harvester.go:274 End of file reached: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log. Closing because close_eof is enabled.
2020-02-04T18:35:55.686Z INFO crawler/crawler.go:165 Crawler stopped
... MORE METRICS ...
2020-02-04T18:36:26.609Z ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 192.168.41.6:49662->192.168.41.6:5044: i/o timeout
2020-02-04T18:36:26.621Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-04T18:36:28.520Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-04T18:36:28.520Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:36:28.521Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... MORE METRICS ...
From this instance I'm outputting to Logstash 7.5.2 running in the same Ubuntu 18 VM. Running Logstash with log level trace does not output any errors.
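To narrow down whether the shutdown is blocked on publishing (the i/o timeout above) rather than on the harvester, I could run the same command with publisher debug logging enabled. A hedged variant of the run command, with the same placeholder config path:

filebeat run --once -v -e -d "publish" -c "PATH TO CONF..."

(-e logs to stderr and -d "publish" enables the publisher debug selector, so the events being retried against Logstash should show up in the output.)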

How to get the application id when submitting Flink jobs to YARN using the command line interface?

My team is building a Flink-based realtime computation platform, and we submit Flink jobs to YARN.
We create a Process and run the submit command via the CLI. In order to get the YARN application id, we create a thread and parse the process output. The application id is used in other methods.
For example, we submit a job with this command:
nohup flink run -m yarn-cluster -d -yqu root.default \
  -ynm BDP_RTC_FLINK_10457_MultiOutputTestFrontEnd -yjm 1024 \
  -yn 2 -ytm 1024 -ys 2
The output is shown below:
2018-10-10 11:21:04 [info] 2018-10-10 11:21:04,629 INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Submitting application master application_1536669298614_67675
2018-10-10 11:21:04 [info] 2018-10-10 11:21:04,654 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1536669298614_67675
2018-10-10 11:21:04 [info] 2018-10-10 11:21:04,656 INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Deploying cluster, current state ACCEPTED
2018-10-10 11:21:12 [info] 2018-10-10 11:21:12,699 INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - YARN application has been deployed successfully.
2018-10-10 11:21:12 [info] 2018-10-10 11:21:12,700 INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - The Flink YARN client has been started in detached mode.
We parse process output and get application id: application_1536669298614_67675.
Are there any more elegant solutions for getting the application id in our situation?
Maybe you can work from the relation between the YARN application and the Flink job.
First, list the YARN applications:
yarn application -list
That gives you the application list, and then you can list the Flink jobs running on a given YARN application:
./bin/flink list -m yarn-cluster -yid <Yarn Application Id>
By the way, you can use ./bin/flink run -d instead of nohup.
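Putting that together, a hedged shell sketch that looks the application id up by the name passed via -ynm instead of parsing the submission log (the column layout of yarn application -list, with the id in the first field, is my assumption here):

# Name given to the job at submit time via -ynm
APP_NAME="BDP_RTC_FLINK_10457_MultiOutputTestFrontEnd"

# Pick the matching application id from the running YARN applications
APP_ID=$(yarn application -list 2>/dev/null | grep "$APP_NAME" | awk '{print $1}')
echo "$APP_ID"   # e.g. application_1536669298614_67675

# List the Flink jobs running on that YARN application
./bin/flink list -m yarn-cluster -yid "$APP_ID"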

How to get a slack notification when build fails?

When my build is successful I get a Slack notification; when it fails, I do not. Looking at the Drone web UI, it looks like it stops once the build fails and the Slack plugin is never run.
A successful build results in notify happening:
A failed build does not get to the notify stage:
The key parts of the .drone.yml are as follows:
build:
image: propheris/ruby:2.4.0
secrets: [gems_password]
commands:
- exit 0
notify:
image: plugins/slack
webhook: https://example.com/hooks/token
channel: dev
username: drone
icon_emoji: drone
I change exit 0 or exit 1 to simulate a successful or failed build.
Drone 0.7
plugin/slack
I've taken a look at the docs and it seems you're missing the following lines:
when:
  status: [ success, failure ]
The docs state:
Example configuration for success and failure messages:
pipeline:
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/...
    channel: dev
    when:
      status: [ success, failure ]
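Applied to the notify step from the question, that would look something like this (an untested sketch, keeping the webhook, channel, username and icon from the original config):

notify:
  image: plugins/slack
  webhook: https://example.com/hooks/token
  channel: dev
  username: drone
  icon_emoji: drone
  when:
    status: [ success, failure ]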
You can also add custom messages:
Example configuration with a custom message template:
pipeline:
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/...
    channel: dev
    template: >
      {{#success build.status}}
        build {{build.number}} succeeded. Good job.
      {{else}}
        build {{build.number}} failed. Fix me please.
      {{/success}}

Listen gem not forwarding events to guard

I can't get Guard to run any action.
I'm using:
Gentoo x64 (3.14.14)
rbx-2.5.2
guard 2.11.1
listen 2.8.5
The Guardfile is just the catch-it-all example from Understanding Guard:
guard :rspec, cmd: "bundle exec rspec" do
  watch(/(.*)/) { |m| Guard::UI.puts "Unknown file: #{m[1]}"; nil }
end
And here is the output of $ LISTEN_GEM_DEBUGGING=2 bundle exec guard -d
I, [2015-02-04T08:11:34.995518 #25370] INFO -- : Celluloid loglevel set to: 0
I, [2015-02-04T08:11:34.997566 #25370] INFO -- : Listen version: 2.8.5
08:11:35 - DEBUG - Notiffany: gntp not available (Please add "gem 'ruby_gntp'" to your Gemfile and run your app with "bundle exec".).
08:11:35 - DEBUG - Notiffany: growl not available (Unsupported platform "linux-gnu").
08:11:35 - DEBUG - Notiffany: terminal_notifier not available (Unsupported platform "linux-gnu").
08:11:35 - DEBUG - Notiffany: libnotify not available (Please add "gem 'libnotify'" to your Gemfile and run your app with "bundle exec".).
08:11:35 - DEBUG - Notiffany: notifysend not available (Please add "gem 'notify_send'" to your Gemfile and run your app with "bundle exec".).
08:11:35 - DEBUG - Notiffany: notifu not available (Unsupported platform "linux-gnu").
08:11:35 - DEBUG - Command execution: emacsclient --eval '1'
08:11:35 - DEBUG - Notiffany: emacs not available (Emacs client failed).
08:11:35 - DEBUG - Notiffany: tmux not available (:tmux notifier is only available inside a TMux session.).
08:11:35 - DEBUG - Notiffany: file not available (No :path option given).
08:11:35 - DEBUG - Notiffany is using TerminalTitle to send notifications.
08:11:35 - DEBUG - Command execution: hash stty
08:11:35 - DEBUG - Guard starts all plugins
08:11:35 - DEBUG - Hook :start_begin executed for Guard::RSpec
08:11:35 - INFO - Guard::RSpec is running
08:11:35 - DEBUG - Hook :start_end executed for Guard::RSpec
D, [2015-02-04T08:11:35.245730 #25370] DEBUG -- : Adapter: considering TCP ...
D, [2015-02-04T08:11:35.245850 #25370] DEBUG -- : Adapter: considering polling ...
D, [2015-02-04T08:11:35.245900 #25370] DEBUG -- : Adapter: considering optimized backend...
I, [2015-02-04T08:11:35.286148 #25370] INFO -- : Record.build(): 0.03905487060546875 seconds
08:11:35 - INFO - Guard is now watching at '/media/I/08projects/programming/rails/tests/guard_test'
08:11:35 - DEBUG - Start interactor
[1] guard(main)> D, [2015-02-04T08:11:44.639225 #25370] DEBUG -- : inotify: app/controllers/users_controller.rb ([:attrib])
D, [2015-02-04T08:11:44.639598 #25370] DEBUG -- : raw queue: [:file, #<Pathname:/media/I/08projects/programming/rails/tests/guard_test>, "app/controllers/users_controller.rb", {:change=>:modified}]
D, [2015-02-04T08:11:44.640532 #25370] DEBUG -- : inotify: app/controllers/users_controller.rb ([:close, :close_write])
D, [2015-02-04T08:11:44.640733 #25370] DEBUG -- : raw queue: [:file, #<Pathname:/media/I/08projects/programming/rails/tests/guard_test>, "app/controllers/users_controller.rb", {:change=>:modified}]
As far as I can see, listen detects the file change, but for some reason it won't forward it to guard.
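One thing I could try next to narrow this down (a hedged sketch, not something I've run yet): force the polling adapter so the inotify backend is taken out of the picture, and see whether Guard then receives the changes:

# --force-polling / --latency are standard guard start options
LISTEN_GEM_DEBUGGING=2 bundle exec guard -d --force-polling --latency 1

If the modification triggers the RSpec action under polling, the problem is likely in the optimized inotify adapter rather than in the Guardfile.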