Error using Brownie in VS Code while running scripts - Solidity

I get the following error when I try to run scripts with Brownie, using the following PowerShell command:
brownie run scripts/simple_collectible/deploy_simple
I have looked all over Stack Overflow and other pages for info on this and I can't seem to find much. I would really like to carry on with my project but I am stuck at this point. Any help would be wonderful.
Cheers!
Error output:
PS C:\Users\charl\OneDrive\Desktop\NFT Development\NFT-mix-main> brownie run scripts/simple_collectible/deploy_simple
INFO: Could not find files for the given pattern(s).
Brownie v1.17.2 - Python development framework for Ethereum
NftMixMainProject is the active project.
Launching 'ganache-cli.cmd --port 8545 --gasLimit 12000000 --accounts 10 --hardfork istanbul --mnemonic brownie'...
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie_cli_main_.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie_cli\run.py", line 46, in main
path, _ = _get_path(args[""])
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie\project\scripts.py", line 130, in _get_path
raise FileNotFoundError(f"Cannot find {path_str}")
FileNotFoundError: Cannot find scripts/simple_collectible/deploy_simple
Terminating local RPC client...
I have the following packages installed:
ganache-cli
pip
pipx
Brownie (installed through pipx, and initialized)
I have run the brownie command to make sure the install is good.
I have installed Python venv.
I have tried uninstalling all packages and reinstalling.
I have done the same with VS Code and VS Build Tools.
I have done the same with Python itself (reinstalled from the website).
The script I am trying to run is here:
#!/usr/bin/python3
import os
from brownie import SimpleCollectible, accounts, config, network
def main():
    dev = accounts.add(config["wallets"]["from_key"])
    print(network.show_active())
    publish_source = True if os.getenv("ETHERSCAN_TOKEN") else False
    SimpleCollectible.deploy({"from": dev}, publish_source=publish_source)
And finally, for your reference, here are the contents of my brownie-config.yaml:
# exclude SafeMath when calculating test coverage
# https://eth-brownie.readthedocs.io/en/v1.10.3/config.html#exclude_paths
reports:
  exclude_contracts:
    - SafeMath
dependencies:
  - smartcontractkit/chainlink-brownie-contracts@1.1.1
  - OpenZeppelin/openzeppelin-contracts@3.4.0
compiler:
  solc:
    remappings:
      - '@chainlink=smartcontractkit/chainlink-brownie-contracts@1.1.1'
      - '@openzeppelin=OpenZeppelin/openzeppelin-contracts@3.4.0'
# automatically fetch contract sources from Etherscan
autofetch_sources: True
dotenv: .env
# set a custom mnemonic for the development network
networks:
  default: development
  kovan:
    vrf_coordinator: '0xdD3782915140c8f3b190B5D67eAc6dc5760C46E9'
    link_token: '0xa36085F69e2889c224210F603D836748e7dC0088'
    keyhash: '0x6c3699283bda56ad74f6b855546325b68d482e983852a7a82979cc4807b641f4'
    fee: 100000000000000000
    oracle: '0x2f90A6D021db21e1B2A077c5a37B3C7E75D15b7e'
    jobId: '29fa9aa13bf1468788b7cc4a500a45b8'
    eth_usd_price_feed: '0x9326BFA02ADD2366b30bacB125260Af641031331'
  rinkeby:
    vrf_coordinator: '0xb3dCcb4Cf7a26f6cf6B120Cf5A73875B7BBc655B'
    link_token: '0x01be23585060835e02b77ef475b0cc51aa1e0709'
    keyhash: '0x2ed0feb3e7fd2022120aa84fab1945545a9f2ffc9076fd6156fa96eaff4c1311'
    fee: 100000000000000000
    oracle: '0x7AFe1118Ea78C1eae84ca8feE5C65Bc76CcF879e'
    jobId: '6d1bfe27e7034b1d87b5270556b17277'
    eth_usd_price_feed: '0x8A753747A1Fa494EC906cE90E9f37563A8AF630e'
  mumbai:
    eth_usd_price_feed: '0x0715A7794a1dc8e42615F059dD6e406A6594651A'
  binance:
    # link_token: ??
    eth_usd_price_feed: '0x9ef1B8c0E4F7dc8bF5719Ea496883DC6401d5b2e'
  binance-fork:
    eth_usd_price_feed: '0x9ef1B8c0E4F7dc8bF5719Ea496883DC6401d5b2e'
  mainnet-fork:
    eth_usd_price_feed: '0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419'
  matic-fork:
    eth_usd_price_feed: '0xF9680D99D6C9589e2a93a78A04A279e509205945'
wallets:
  from_key: ${PRIVATE_KEY}
  from_mnemonic: ${MNEMONIC}
# You'd have to change the accounts.add to accounts.from_mnemonic to use from_mnemonic

Change scripts/simple_collectible/deploy_simple -> scripts/simple_collectible/deploy_simple.py
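In other words, include the .py extension in the run command:
brownie run scripts/simple_collectible/deploy_simple.py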

Related

AttributeError when using GitHub self-hosted runners to run unit tests

Hello! I am trying to use a GitHub workflow to run the unit tests for the code in my repository.
So I wrote a YAML file; its function is that when I push my code to my repository, it uses my local environment (a self-hosted runner) to execute my code on GitHub, and the purpose of that code is to run the unit tests.
But I can't run this workflow successfully; this error message always appears. I'm curious why I can run the unit tests successfully in my local IDE, but they fail when the workflow executes them automatically for me.
============================== warnings summary ===============================
..\..\..\..\anaconda3\lib\site-packages\pyreadline\py3k_compat.py:8
C:\Users\COA\anaconda3\lib\site-packages\pyreadline\py3k_compat.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
return isinstance(x, collections.Callable)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ===========================
ERROR test/test_createds_for_fnn.py - AttributeError: type object 'h5py.h5.H5...
ERROR test/test_fnn.py - AttributeError: type object 'h5py.h5.H5PYConfig' has...
ERROR test/test_unet.py - AttributeError: type object 'h5py.h5.H5PYConfig' ha...
!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!
======================== 1 warning, 3 errors in 2.76s =========================
Error: Process completed with exit code 1.
My environment:
Windows 10 OS
Python 3.8.8
Tensorflow-gpu 2.7.0
pytest 6.2.3
h5py 3.6.0
Workflow:
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: self-hosted
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      # Runs a single command using the runners shell
      - name: Run a one-line script
        run: echo Hello, world!
      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          echo Add other actions to build,
          echo test, and deploy your project
          echo train the fnn
          pytest

Cannot install kernel-devsrc

I'm trying to set up my environment to use Yocto's generated SDK to compile my out-of-tree module, but for some reason, I'm getting an error.
cp: cannot stat 'arch/arm/kernel/module.lds': No such file or directory
I'm using the Poky distribution and meta-raspberrypi, which is needed because I'm using the RPi Zero W board.
Apart from this everything works fine. I'm able to compile the entire image and load it on the board.
Here is the line I've added to local.conf
TOOLCHAIN_TARGET_TASK_append = " kernel-devsrc"
as I've found in the documentation.
Also below you can find the whole log from the compilation.
DEBUG: Executing python function extend_recipe_sysroot
NOTE: Direct dependencies are ['virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-bsp/u-boot/u-boot-tools_2020.07.bb:do_populate_sysroot', '/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/binutils/binutils-cross_2.35.1.bb:do_populate_sysroot', '/home/pp/yocto-hh/sources/poky/meta/recipes-kernel/kmod/kmod-native_git.bb:do_populate_sysroot', '/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/gcc/gcc-cross_10.2.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-support/gmp/gmp_6.2.0.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/pkgconfig/pkgconfig_git.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-extended/xz/xz_5.2.5.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-connectivity/openssl/openssl_1.1.1k.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/bison/bison_3.7.2.bb:do_populate_sysroot', '/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/quilt/quilt-native_0.66.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-extended/bc/bc_1.07.1.bb:do_populate_sysroot', '/home/pp/yocto-hh/sources/poky/meta/recipes-core/glibc/glibc_2.32.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/patch/patch_2.7.6.bb:do_populate_sysroot', '/home/pp/yocto-hh/sources/poky/meta/recipes-kernel/kern-tools/kern-tools-native_git.bb:do_populate_sysroot', 'virtual:native:/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot', '/home/pp/yocto-hh/sources/poky/meta/recipes-devtools/gcc/gcc-runtime_10.2.bb:do_populate_sysroot']
NOTE: Installed into sysroot: []
NOTE: Skipping as already exists in sysroot: ['u-boot-tools-native', 'binutils-cross-arm', 'kmod-native', 'gcc-cross-arm', 'gmp-native', 'pkgconfig-native', 'xz-native', 'openssl-native', 'bison-native', 'quilt-native', 'bc-native', 'glibc', 'patch-native', 'kern-tools-native', 'pseudo-native', 'gcc-runtime', 'python3-native', 'gnu-config-native', 'autoconf-native', 'libtool-native', 'gtk-doc-native', 'automake-native', 'zlib-native', 'texinfo-dummy-native', 'readline-native', 'flex-native', 'attr-native', 'libmpc-native', 'mpfr-native', 'linux-libc-headers', 'gettext-minimal-native', 'libgcc', 'libnsl2-native', 'gdbm-native', 'libffi-native', 'bzip2-native', 'util-linux-native', 'sqlite3-native', 'libtirpc-native', 'm4-native', 'ncurses-native', 'libcap-ng-native', 'libpcre2-native']
DEBUG: Python function extend_recipe_sysroot finished
DEBUG: Executing shell function do_install
cp: cannot stat 'arch/arm/kernel/module.lds': No such file or directory
WARNING: exit code 1 from a shell command.
ERROR: Execution of '/home/pp/yocto-hh/build/tmp/work/hhctrl-poky-linux-gnueabi/kernel-devsrc/1.0-r0/temp/run.do_install.109942' failed with exit code 1:
cp: cannot stat 'arch/arm/kernel/module.lds': No such file or directory
WARNING: exit code 1 from a shell command
The command I am using to produce the SDK is:
bitbake name-of-my-image -c populate_sdk
What could be the problem here? Or how should I debug it? I've found a few threads touching on the subject and it seems it should already be fixed, but for some reason, in my environment, it still does not work.
The module.lds file is missing in the latest kernel. Apply the following patch to the kernel and build the image.
diff -Naur a/arch/arm/kernel/module.lds b/arch/arm/kernel/module.lds
--- a/arch/arm/kernel/module.lds 1970-01-01 05:30:00.000000000 +0530
+++ b/arch/arm/kernel/module.lds 2020-02-28 21:53:45.000000000 +0530
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+SECTIONS {
+ .plt : { BYTE(0) }
+ .init.plt : { BYTE(0) }
+}
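One way to apply it is through a kernel .bbappend in your own layer; this is a minimal sketch, assuming the kernel recipe is linux-raspberrypi and the patch is saved as module.lds.patch in a files/ directory next to the append:
# recipes-kernel/linux/linux-raspberrypi_%.bbappend (hypothetical path in your layer)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://module.lds.patch"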

Serverless python packaging numpy dependency error

I have been running into issues when making function calls to my deployed Python 3.7 Lambda function that, from the error message, are related to numpy. The error states that the package cannot be imported, and despite trying many of the solutions I have read about, I haven't had any success. I am wondering what to test out next or how to debug further.
I have tried the following:
Installed Docker, added the serverless-python-requirements plugin, and configured it in the yml
Installed packages in the app directory to be bundled and deployed: pip install -t src/vendor -r requirements.txt --no-cache-dir
Uninstalled setuptools and numpy and reinstalled them in that order
Error Message (Displayed after running sls invoke -f auth):
{
"errorMessage": "Unable to import module 'data': Unable to import required dependencies:\nnumpy: \n\nIMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!\n\nImporting the numpy c-extensions failed.\n- Try uninstalling and reinstalling numpy.\n- If you have already done that, then:\n 1. Check that you expected to use Python3.7 from \"/var/lang/bin/python3.7\",\n and that you have no directories in your PATH or PYTHONPATH that can\n interfere with the Python and numpy version \"1.18.1\" you're trying to use.\n 2. If (1) looks fine, you can open a new issue at\n https://github.com/numpy/numpy/issues. Please include details on:\n - how you installed Python\n - how you installed numpy\n - your operating system\n - whether or not you have multiple versions of Python installed\n - if you built from source, your compiler versions and ideally a build log\n\n- If you're working with a numpy git repository, try `git clean -xdf`\n (removes all files not under version control) and rebuild numpy.\n\nNote: this error has many possible causes, so please don't comment on\nan existing issue about this - open a new one instead.\n\nOriginal error was: No module named 'numpy.core._multiarray_umath'\n",
"errorType": "Runtime.ImportModuleError"
}
Here is my setup:
OS: Mac OS X
Local Python: /Users/me/miniconda3/bin/python
Local Python version: Python 3.7.4
Serverless Environment Information (Runtime = Python3.7):
Operating System: darwin
Node Version: 12.14.0
Framework Version: 1.67.3
Plugin Version: 3.6.6
SDK Version: 2.3.0
Components Version: 2.29.1
Docker:
Docker version 19.03.13, build 4484c46d9d
serverless.yml:
service: understand-your-sleep-api
plugins:
  - serverless-python-requirements
  - serverless-offline-python
custom:
  pythonRequirements:
    dockerizePip: true # non-linux
    slim: true
    useStaticCache: false
    useDownloadCache: false
    invalidateCaches: true
provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:us-east-1:*id*:parameter/*"
  environment:
    STAGE: ${self:provider.stage}
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
package:
  exclude:
    - env.yml
    - node_modules/**
requirements.txt:
pandas==1.0.0
fitbit==0.3.1
oauthlib==3.1.0
requests==2.22.0
requests-oauthlib==1.3.0
data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
import json
from datetime import timedelta, datetime, date
import math
import pandas as pd
from requests_oauthlib import OAuth2Session
from urllib.parse import urlparse, parse_qs
import fitbit
import requests
import webbrowser
import base64
import os
import logging
def auth(event, context):
    ...
Use a Lambda layer to package all your requirements. Make sure you have numpy in your requirements.txt file. Try it once.
This works only when the serverless-python-requirements plugin is listed in the plugins section.
Replace your custom key with this and give the function a reference to use that layer:
custom:
  pythonRequirements:
    layer: true
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
    layers:
      - { Ref: PythonRequirementsLambdaLayer }
I checked with zipinfo .requirements.zip and found that macOS dylibs were bundled instead of Linux .so files.
I fixed this by using dockerizePip: non-linux.
Be aware that this will not be triggered if a .requirements.zip already exists in the working directory, so run git clean -xfd before running sls deploy.
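For reference, a minimal sketch of how that looks in the custom section shown earlier (the other flags stay as they were):
custom:
  pythonRequirements:
    dockerizePip: non-linux  # build Linux wheels in Docker when deploying from macOS/Windows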
Since you are using the serverless-python-requirements plugin, it will package the libraries for you. In other words, you don't need to run pip install -t src/vendor -r requirements.txt --no-cache-dir and do all that manually.
To solve your problem, remove src/vendor and the following two lines in data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
Then sit back, and let serverless-python-requirements do the work for you.
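The top of data.py then starts directly at the real imports; a sketch of the trimmed file:
import json
from datetime import timedelta, datetime, date
import pandas as pd
from requests_oauthlib import OAuth2Session
import fitbit

def auth(event, context):
    ...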

What is the correct syntax/call to get the sls file working - salt

I'm trying to build a reactor sls file, which starts running when an event occurs.
The content of the sls file should do the same as the following CLI commands:
sudo salt minion git.add /srv/salt .
sudo salt minion git.commit /srv/salt test
sudo salt minion git.push /srv/salt origin master identity=/home/autogit/.ssh/id_rsa
If I run the code below, triggered by the reactor, I get the following error message:
[DEBUG ] Reactor is populating module client cache
[ERROR ] An un-handled exception from the multiprocessing process 'Reactor-9:1' was caught:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/utils/process.py", line 765, in _run
return self._original_run()
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 271, in run
self.call_reactions(chunks)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 228, in call_reactions
self.wrap.run(chunk)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 330, in run
self.populate_client_cache(low)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 324, in populate_client_cache
self.reaction_class[reaction_type](self.opts['conf_file'])
KeyError: u'module'
[CRITICAL] Engine 'reactor' could not be started!
I've tried different syntaxes (old style and new style) but couldn't figure out what the problem is. I always get a KeyError: u'module' or u'git'.
I also tried it with a runner function to run it locally on the master.
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
    - git.add:
      - cwd: /srv/salt
      - filename: .
    - git.commit:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
    - git.push:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
salt --versions-report
Salt Version:
Salt: 2019.2.0
Dependency Versions:
cffi: Not Installed
cherrypy: unknown
dateutil: 2.6.1
docker-py: Not Installed
gitdb: 2.0.3
gitpython: 2.1.8
ioflo: Not Installed
Jinja2: 2.10
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.7
msgpack-pure: Not Installed
msgpack-python: 0.5.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.15rc1 (default, Nov 12 2018, 14:31:15)
python-gnupg: 0.4.1
PyYAML: 3.12
PyZMQ: 16.0.2
RAET: Not Installed
smmap: 2.0.3
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.2.5
System Versions:
dist: Ubuntu 18.04 bionic
locale: UTF-8
machine: x86_64
release: 4.15.0-46-generic
system: Linux
version: Ubuntu 18.04 bionic
Since I'm quite new to Salt, hopefully you can give me a hint about what I'm doing wrong.
You didn't provide the master config.
About the module.run confusion: add this to your settings (the minion's, and maybe the master's too, since I don't know your use case):
use_superseded:
  - module.run
That will enable your syntax, more doc about this here: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html#salt.states.module.run
In general: you are calling execution modules from a place where only state modules are allowed (the term module is heavily overused in Salt...).
You didn't provide the full master config. The Reactor requires a dedicated config to match events to sls files:
https://docs.saltstack.com/en/latest/ref/configuration/master.html#master-reactor-settings
You can also check the doc I wrote some time ago about events and reactors:
https://github.com/kiemlicz/util/wiki/Salt-Events-and-Reactor
Assuming you've configured your event-to-sls-file matching in the master config, your provided sls:
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/
...
will not work.
Mind that the reaction happens on the Salt master, so the reaction sls file needs to specify the type of reaction (local, runner, etc.), since it's no longer the view of one minion but possibly of tons of minions!
First create a runner reaction type (which delegates to some orchestration sls file that will contain your logic, wrapped with (I think) salt.function).
Help yourself with the aforementioned GitHub link to my attempt at explaining the Reactor.
Refer to official doc as well: https://docs.saltstack.com/en/latest/topics/reactor/index.html
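A rough sketch of that runner-based shape, under stated assumptions (the event tag 'myorg/git/sync', the file paths, the target minion name, and the orchestration sls name are hypothetical placeholders, not taken from the question):
# master config: map an event tag to a reactor sls file
reactor:
  - 'myorg/git/sync':
    - /srv/reactor/git_sync.sls

# /srv/reactor/git_sync.sls: a runner reaction that delegates to orchestration
git_sync_orch:
  runner.state.orchestrate:
    - args:
        mods: orch.git_sync

# /srv/salt/orch/git_sync.sls: run the git execution module on the minion via salt.function
push_repo:
  salt.function:
    - name: git.push
    - tgt: minion
    - arg:
      - /srv/salt
    - kwarg:
        remote: origin
        ref: master
        identity: /home/autogit/.ssh/id_rsa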

TensorFlow Serving Error

I have installed TensorFlow Serving as outlined on the install page at https://tensorflow.github.io/serving/setup. However, when I follow the build instructions on the page I get the following error:
$ bazel build tensorflow_serving/...
ERROR: /home/**PATH**/external/org_tensorflow/third_party/py/python_configure.bzl:183:20: unexpected keyword 'environ' in call to repository_rule(implementation: function, *, attrs: dict or NoneType = None, local: bool = False).
ERROR: com.google.devtools.build.lib.packages.BuildFileContainsErrorsException: error loading package '': Extension file 'third_party/py/python_configure.bzl' has errors.
INFO: Elapsed time: 0.623s
I am running on Ubuntu with a TensorFlow 1.0.1 build. I am using Python 2.7 and have set up a virtualenv.
I can successfully build the Bazel hello example and am also able to complete the gRPC quick start found at http://www.grpc.io/docs/quickstart/python.html.
Any suggestions?
-Dave
The trouble was an old copy of Bazel. To determine your version:
$ bazel version
Build label: 0.4.5
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Mar 16 12:19:38 2017 (1489666778)
Build timestamp: 1489666778
Build timestamp as int: 1489666778
In my case it required manual removal of the old version:
rm -fr ~/.bazel ~/.bazelrc
Next, I chose to install using the installer for Ubuntu.
$ ./bazel-0.4.5-installer-linux-x86_64.sh
Bazel installer
---------------
Bazel is bundled with software licensed under the GPLv2 with Classpath exception.
You can find the sources next to the installer on our release page:
https://github.com/bazelbuild/bazel/releases
# Release 0.4.5 (2017-03-16)
There was still another trick to getting it to work.
$ cd ..
$ bazel test tensorflow_serving/...
Python Configuration Error: 'PYTHON_BIN_PATH' environment variable is not set
This error is also related to versioning, but in this case it was an issue with Serving. The solution was to revert to an earlier version and update the submodules from git (I had previously cloned the repository). From the serving directory:
$ git checkout 0.5.1
M tensorflow
M tf_models
Note: checking out '0.5.1'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 51bb356... Merge pull request #325 from kirilg/0.5.1
(tensorflow) $ git submodule update
Submodule path 'tensorflow': checked out '07bb8ea2379bd459832b23951fb20ec47f3fdbd4'
Submodule path 'tf_models': checked out '2fd3dcf3f31707820126a4d9ce595e6a1547385d'
(tensorflow) $ bazel test tensorflow_serving/...
Serving now reports success:
INFO: Found 199 targets and 57 test targets...
[1,299 / 4,037] Still waiting for 200 jobs to complete:
Running (standalone):