I use the Anaconda distribution and I'm trying to work with the pandas API on Spark, but it's not working.
I installed with pip:
pip install pyspark
pip install pyspark[sql]
pip install pyspark[pandas_on_spark] plotly
I import:
import os
os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"
import pandas as pd
import numpy as np
import pyspark.pandas as ps
from pyspark.sql import SparkSession
When I try to run:
psdf = ps.DataFrame(
    {'a': [1, 2, 3, 4, 5, 6],
     'b': [100, 200, 300, 400, 500, 600],
     'c': ["one", "two", "three", "four", "five", "six"]},
    index=[10, 20, 30, 40, 50, 60])
it just hangs and never finishes.
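A minimal way to narrow this down, assuming Java is available on the PATH: the pandas API on Spark starts a JVM-backed Spark session under the hood, so checking whether a bare SparkSession starts at all separates a Spark startup problem from a pandas-on-Spark problem. A sketch:
from pyspark.sql import SparkSession

# If this also hangs, the problem is Spark startup itself
# (typically Java/JAVA_HOME), not the pandas-on-Spark layer.
spark = SparkSession.builder.master("local[*]").appName("smoke-test").getOrCreate()
print(spark.range(3).collect())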
The following example doesn't work in VS Code. It does work with %matplotlib notebook in a Jupyter notebook in a web browser, though.
# creating a 3D plot using matplotlib in Python
# for a responsive plot, use %matplotlib widget in VS Code
#%matplotlib widget
%matplotlib notebook
# importing required libraries
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
# creating random dataset
xs = [14, 24, 43, 47, 54, 66, 74, 89, 12,
      44, 1, 2, 3, 4, 5, 9, 8, 7, 6, 5]
ys = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 6, 3,
      5, 2, 4, 1, 8, 7, 0, 5]
zs = [9, 6, 3, 5, 2, 4, 1, 8, 7, 0, 1, 2,
      3, 4, 5, 6, 7, 8, 9, 0]
# creating figure
fig = plt.figure()
ax = Axes3D(fig)
# creating the plot
plot_geeks = ax.scatter(xs, ys, zs, color='green')
# setting title and labels
ax.set_title("3D plot")
ax.set_xlabel('x-axis')
ax.set_ylabel('y-axis')
ax.set_zlabel('z-axis')
# displaying the plot
plt.show()
The expected result is a plot that can be rotated interactively with the mouse.
In VS Code you can click on the </> button and pick a renderer; I chose the JupyterIPWidget renderer. The other renderers show the plot but don't allow interactive manipulation.
Also a warning appears:
/var/folders/kc/5p61t70n0llbn05934gj4r_w0000gn/T/ipykernel_22590/1606073246.py:23:
MatplotlibDeprecationWarning: Axes3D(fig) adding itself to the figure is
deprecated since 3.4. Pass the keyword argument auto_add_to_figure=False
and use fig.add_axes(ax) to suppress this warning. The default value of
auto_add_to_figure will change to False in mpl3.5 and True values will
no longer work in 3.6. This is consistent with other Axes classes.
ax = Axes3D(fig)
This behavior is expected: %matplotlib notebook is not supported in VS Code. You should use %matplotlib widget instead. See https://github.com/microsoft/vscode-jupyter/wiki/Using-%25matplotlib-widget-instead-of-%25matplotlib-notebook,tk,etc
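For reference, a sketch of the example adapted along those lines, assuming the ipympl package is installed (it is required by %matplotlib widget); using fig.add_subplot(projection='3d') also avoids the Axes3D deprecation warning quoted above:
%matplotlib widget
import matplotlib.pyplot as plt

# sample data; substitute the xs, ys, zs from the question
xs, ys, zs = [14, 24, 43], [0, 1, 2], [9, 6, 3]

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # replaces the deprecated Axes3D(fig)
ax.scatter(xs, ys, zs, color='green')
ax.set_title("3D plot")
ax.set_xlabel('x-axis')
ax.set_ylabel('y-axis')
ax.set_zlabel('z-axis')
plt.show()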
Hello, I have followed all the steps to run inference and it worked with the model from this link: https://pjreddie.com/media/files/yolov3.weights
But when I tried it with a model I trained with darknet, I get this error:
[ INFO ] Creating Inference Engine...
[ INFO ] Loading network files:
newyolo.xml
newyolo.bin
[ INFO ] Preparing inputs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference...
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
To switch between sync/async modes, press TAB key in the output window
yolo_original.py:280: DeprecationWarning: shape property of IENetLayer is deprecated. Please use shape property of DataPtr instead objects returned by in_data or out_data property to access shape of input or output data on corresponding ports
out_blob = out_blob.reshape(net.layers[net.layers[layer_name].parents[0]].shape)
[ INFO ] Layer detector/yolo-v3/Conv_14/BiasAdd/YoloRegion parameters:
[ INFO ] classes : 10
[ INFO ] num : 3
[ INFO ] coords : 4
[ INFO ] anchors : [55.0, 56.0, 42.0, 87.0, 68.0, 81.0]
Traceback (most recent call last):
File "yolo_original.py", line 363, in <module>
sys.exit(main() or 0)
File "yolo_original.py", line 286, in main
args.prob_threshold)
File "yolo_original.py", line 153, in parse_yolo_region
h_scale=orig_im_h, w_scale=orig_im_w))
File "yolo_original.py", line 99, in scale_bbox
xmin = int((x - w / 2) * w_scale)
ValueError: cannot convert float NaN to integer
Note that I have provided the right shape and changed yolo_v3.json to match my model. Here is the content of my yolo_v3.json:
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 10,
      "anchors": [18, 22, 31, 33, 33, 50, 55, 56, 42, 87, 68, 81, 111, 98, 73, 158, 156, 202],
      "coords": 4,
      "num": 9,
      "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
      "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
    }
  }
]
I have tried multiple things to debug this, like not providing the JSON file, etc.
PS: yolo_original.py is the same demo that ships with OpenVINO, just renamed. I'm using OpenVINO version 2020.1.
Converting the NaNs to floats or skipping values with NaN didn't solve the problem.
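One quick thing worth ruling out is an inconsistent custom-attributes file; a minimal sketch of a self-consistency check for the JSON above (the file name is an assumption):
import json

# Each anchor is a (w, h) pair, so len(anchors) must equal 2 * num,
# and the masks together must cover exactly the indices 0..num-1.
with open('yolo_v3.json') as f:
    cfg = json.load(f)[0]['custom_attributes']

assert len(cfg['anchors']) == 2 * cfg['num'], "anchors/num mismatch"
mask_ids = sorted(i for mask in cfg['masks'] for i in mask)
assert mask_ids == list(range(cfg['num'])), "masks must cover 0..num-1"
print("config is self-consistent")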
My understanding is that I should be able to grab a TensorFlow model from Google's AI Hub, deploy it to TensorFlow Serving and use it to make predictions by POSTing images via REST requests using curl.
I could not find any bbox predictors on AI Hub at this time but I did find one on the TensorFlow model zoo:
http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
I have the model deployed to TensorFlow Serving, but the documentation is unclear with respect to exactly what should be included in the JSON of the REST request.
My understanding is that the SignatureDef of the model determines what the JSON should look like, and that I should base64-encode the images.
I was able to get the signature definition of the model like so:
>python tensorflow/tensorflow/python/tools/saved_model_cli.py show --dir /Users/alexryan/alpine/git/tfserving-tutorial3/model-volume/models/bbox/1/ --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['in'] tensor_info:
dtype: DT_UINT8
shape: (-1, -1, -1, 3)
name: image_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['out'] tensor_info:
dtype: DT_FLOAT
shape: unknown_rank
name: detection_boxes:0
Method name is: tensorflow/serving/predict
I think the shape info here is telling me that the model can handle images of any dimensions?
The input layer looks like this in TensorBoard (screenshot not included).
But how do I convert this SignatureDefinition to a valid JSON request?
I'm assuming that I'm supposed to use the predict API ...
and Google's doc says ...
URL
POST
http://host:port/v1/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]:predict
/versions/${MODEL_VERSION} is optional. If omitted the latest version
is used.
Request format
The request body for predict API must be JSON object
formatted as follows:
{
  // (Optional) Serving signature to use.
  // If unspecifed default serving signature is used.
  "signature_name": <string>,

  // Input Tensors in row ("instances") or columnar ("inputs") format.
  // A request can have either of them but NOT both.
  "instances": <value>|<(nested)list>|<list-of-objects>
  "inputs": <value>|<(nested)list>|<object>
}
Encoding binary values JSON uses UTF-8 encoding. If you have input
feature or tensor values that need to be binary (like image bytes),
you must Base64 encode the data and encapsulate it in a JSON object
having b64 as the key as follows:
{ "b64": "base64 encoded string" }
You can specify this object as a value for an input feature or tensor.
The same format is used to encode output response as well.
A classification request with image (binary data) and caption features
is shown below:
{ "signature_name": "classify_objects", "examples": [
{
"image": { "b64": "aW1hZ2UgYnl0ZXM=" },
"caption": "seaside"
},
{
"image": { "b64": "YXdlc29tZSBpbWFnZSBieXRlcw==" },
"caption": "mountains"
} ] }
Uncertainties include:
- should I use "instances" in my JSON?
- should I base64-encode a JPG or PNG or something else?
- should the image be a particular width and height?
In Serving Image-Based Deep Learning Models with TensorFlow-Serving’s RESTful API this format is suggested:
{
  "instances": [
    {"b64": "iVBORw"},
    {"b64": "pT4rmN"},
    {"b64": "w0KGg2"}
  ]
}
I used this image:
https://tensorflow.org/images/blogs/serving/cat.jpg
and base64 encoded it like so:
import base64
import requests

IMAGE_URL = 'https://tensorflow.org/images/blogs/serving/cat.jpg'

# Download the image
dl_request = requests.get(IMAGE_URL, stream=True)
dl_request.raise_for_status()

# Compose a JSON Predict request (send JPEG image in base64).
jpeg_bytes = base64.b64encode(dl_request.content).decode('utf-8')
predict_request = '{"instances" : [{"b64": "%s"}]}' % jpeg_bytes
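As an aside, building the request body with json.dumps instead of string interpolation avoids any manual escaping; a minimal equivalent of the last line:
import json

# jpeg_bytes as computed above
predict_request = json.dumps({"instances": [{"b64": jpeg_bytes}]})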
But when I use curl to POST the base64 encoded image like so:
{"instances" : [{"b64": "/9j/4AAQSkZJRgABAQAASABIAAD/4QBYRXhpZgAATU0AKgAA
...
KACiiigAooooAKKKKACiiigAooooA//Z"}]}
I get a response like this:
>./test_local_tfs.sh
HEADER=|Content-Type:application/json;charset=UTF-8|
URL=|http://127.0.0.1:8501/v1/models/saved_model/versions/1:predict|
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8501 (#0)
> POST /v1/models/saved_model/versions/1:predict HTTP/1.1
> Host: 127.0.0.1:8501
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type:application/json;charset=UTF-8
> Content-Length: 85033
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 400 Bad Request
< Content-Type: application/json
< Date: Tue, 17 Sep 2019 10:47:18 GMT
< Content-Length: 85175
<
{ "error": "Failed to process element: 0 of \'instances\' list. Error: Invalid argument: JSON Value: {\n \"b64\": \"/9j/4AAQSkZJRgABAQAAS
...
ooooA//Z\"\n} Type: Object is not of expected type: uint8" }
I've tried converting a local version of the same file to base64 like so (confirming that the dtype is uint8) ...
import base64
import cv2

img = cv2.imread('cat.jpg')
print('dtype: ' + str(img.dtype))
_, buf = cv2.imencode('.jpg', img)
jpeg_bytes = base64.b64encode(buf).decode('utf-8')
predict_request = '{"instances" : [{"b64": "%s"}]}' % jpeg_bytes
But posting this JSON generates the same error.
However, when the JSON is formatted like so ...
{'instances': [[[[112, 71, 48], [104, 63, 40], [107, 70, 20], [108, 72, 21], [109, 77, 0], [106, 75, 0], [92, 66, 0], [106, 80, 0], [101, 80, 0], [98, 77, 0], [100, 75, 0], [104, 80, 0], [114, 88, 17], [94, 68, 0], [85, 54, 0], [103, 72, 11], [93, 62, 0], [120, 89, 25], [131, 101, 37], [125, 95, 31], [119, 91, 27], [121, 93, 29], [133, 105, 40], [119, 91, 27], [119, 96, 56], [120, 97, 57], [119, 96, 53], [102, 78, 36], [132, 103, 44], [117, 88, 28], [125, 89, 4], [128, 93, 8], [133, 94, 0], [126, 87, 0], [110, 74, 0], [123, 87, 2], [120, 92, 30], [124, 95, 33], [114, 90, 32],
...
, [43, 24, 33], [30, 17, 36], [24, 11, 30], [29, 20, 38], [37, 28, 46]]]]}
... it works.
The problem is that this JSON file is >11 MB in size.
How do I make the base64 encoded version of the json work?
UPDATE: It seems that we have to edit the pretrained model to accept base64 images at the input layer.
This article describes how to edit the model:
Medium: Serving Image-Based Deep Learning Models with TensorFlow-Serving’s RESTful API
Unfortunately, it assumes that we have access to the code which generated the model.
user260826's solution provides a workaround using an estimator, but it assumes the model is a Keras model, which is not true in this case.
Is there a generic method to make a model ready for TensorFlow Serving REST interface with a base64 encoded image that works with any of the TensorFlow model formats?
The first step is to export the trained model in the appropriate format. Use export_inference_graph.py like this:
python export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path path/to/ssd_inception_v2.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_model_directory
In the snippet above, it is important to specify
--input_type encoded_image_string_tensor
After exporting the model, run TensorFlow Serving as usual with the newly exported model.
The inference code will look like this:
from __future__ import print_function
import base64
import requests

SERVER_URL = 'http://localhost:8501/v1/models/vedNet:predict'
IMAGE_URL = 'test_images/19_inp.jpg'


def main():
    with open(IMAGE_URL, "rb") as image_file:
        jpeg_bytes = base64.b64encode(image_file.read()).decode('utf-8')
        predict_request = '{"instances" : [{"b64": "%s"}]}' % jpeg_bytes
        response = requests.post(SERVER_URL, predict_request)
        response.raise_for_status()
        prediction = response.json()['predictions'][0]


if __name__ == '__main__':
    main()
As you mentioned, JSON is a very inefficient approach, as the payload normally exceeds the original file size. You need to convert the model so that it can process image bytes written to a Base64-encoded string:
{"b64": base64_encoded_string}
This conversion will reduce the prediction time and the bandwidth used to transfer the image from the prediction client to your infrastructure.
I recently used a Transfer Learning model with TF Hub and Keras which took JSON as input; as you mentioned, this is not optimal for prediction. I used the following approach to overwrite it: the code below adds a new serving function that can process Base64 encoded images.
Using a TF Estimator model:
import os
import shutil

import tensorflow as tf
from tensorflow import keras

# HEIGHT, WIDTH, CHANNELS and version are assumed to be defined elsewhere.

h5_model_path = os.path.join('models/h5/best_model.h5')
tf_model_path = os.path.join('models/tf')
estimator = keras.estimator.model_to_estimator(
    keras_model_path=h5_model_path,
    model_dir=tf_model_path)


def image_preprocessing(image):
    """
    This implements the standard preprocessing that needs to be applied to the
    image tensors before passing them to the model. This is used for all input
    types.
    """
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
    image = tf.squeeze(image, axis=[0])
    image = tf.cast(image, dtype=tf.uint8)
    return image


def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        return image_preprocessing(image)

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})


export_path = os.path.join('/tmp/models/json_b64', version)
if os.path.exists(export_path):  # clean up old exports with this version
    shutil.rmtree(export_path)
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
A good example here
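Once a model is exported this way, the REST request should carry the encoded bytes under the receiver tensor name from serving_input_receiver_fn above. A minimal request sketch (the endpoint URL and file name are assumptions):
import base64
import json

import requests

# Hypothetical endpoint; adjust the model name/version to your deployment.
url = 'http://localhost:8501/v1/models/json_b64:predict'
with open('image.jpg', 'rb') as f:
    b64 = base64.b64encode(f.read()).decode('utf-8')

# 'image_bytes' matches the receiver tensor name above; with a single named
# input, TensorFlow Serving also accepts the shorter {"b64": ...} row form.
payload = {"instances": [{"image_bytes": {"b64": b64}}]}
resp = requests.post(url, data=json.dumps(payload))
resp.raise_for_status()
print(resp.json()['predictions'])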
I have been struggling with the same problem. Finally I managed to make it work. I just had to add a new signature to the model:
import tensorflow as tf

model = tf.saved_model.load("/path/to/the/original/model")

# This is the current signature, which only accepts image tensors as input
signature = model.signatures["default"]


@tf.function()
def my_predict(image_b64):
    # The model doesn't support batching!
    img_dec = tf.image.decode_png(image_b64[0], channels=3)
    img_tensor = tf.image.convert_image_dtype(img_dec, tf.float32)[tf.newaxis, ...]
    prediction = signature(img_tensor)
    return prediction


# Create a new signature that reads b64 images
new_signature = my_predict.get_concrete_function(
    image_b64=tf.TensorSpec([None], dtype=tf.string, name="image_b64")
)

tf.saved_model.save(
    model,
    export_dir="/path/to/the/saved/model",
    signatures=new_signature
)
Finally, after serving I can make predictions passing an input like this:
{
  "instances": [
    {
      "b64": "yourBase64ImageHere"
    }
  ]
}
I just want to monitor my running spider's stats. I got the latest scrapy-plugins/scrapy-jsonrpc and configured the spider as follows:
EXTENSIONS = {
    'scrapy_jsonrpc.webservice.WebService': 500,
}
JSONRPC_ENABLED = True
JSONRPC_PORT = [60853]
But when I browse to http://localhost:60853/, it just returns:
{"resources": ["crawler"]}
I can only get the running spiders' names, not the stats.
Can anyone tell me what I set up wrong? Thanks!
http://localhost:60853/ returns the resources available, /crawler being the only top-level one.
If you want to get stats for a spider, you'll need to query the /crawler/stats endpoint and call get_stats().
Here's an example using python-jsonrpc (here I configured the web service to listen on localhost and port 6024):
>>> import pyjsonrpc
>>> http_client = pyjsonrpc.HttpClient('http://localhost:6024/crawler/stats')
>>> http_client.call('get_stats', 'httpbin')
{u'log_count/DEBUG': 4, u'scheduler/dequeued': 4, u'log_count/INFO': 9, u'downloader/response_count': 2, u'downloader/response_status_count/200': 2, u'log_count/WARNING': 1, u'scheduler/enqueued/memory': 4, u'downloader/response_bytes': 639, u'start_time': u'2016-09-28 08:49:57', u'scheduler/dequeued/memory': 4, u'scheduler/enqueued': 4, u'downloader/request_bytes': 862, u'response_received_count': 2, u'downloader/request_method_count/GET': 4, u'downloader/request_count': 4}
>>> http_client.call('get_stats')
{u'log_count/DEBUG': 4, u'scheduler/dequeued': 4, u'log_count/INFO': 9, u'downloader/response_count': 2, u'downloader/response_status_count/200': 2, u'log_count/WARNING': 1, u'scheduler/enqueued/memory': 4, u'downloader/response_bytes': 639, u'start_time': u'2016-09-28 08:49:57', u'scheduler/dequeued/memory': 4, u'scheduler/enqueued': 4, u'downloader/request_bytes': 862, u'response_received_count': 2, u'downloader/request_method_count/GET': 4, u'downloader/request_count': 4}
>>> from pprint import pprint
>>> pprint(http_client.call('get_stats'))
{u'downloader/request_bytes': 862,
u'downloader/request_count': 4,
u'downloader/request_method_count/GET': 4,
u'downloader/response_bytes': 639,
u'downloader/response_count': 2,
u'downloader/response_status_count/200': 2,
u'log_count/DEBUG': 4,
u'log_count/INFO': 9,
u'log_count/WARNING': 1,
u'response_received_count': 2,
u'scheduler/dequeued': 4,
u'scheduler/dequeued/memory': 4,
u'scheduler/enqueued': 4,
u'scheduler/enqueued/memory': 4,
u'start_time': u'2016-09-28 08:49:57'}
>>>
You can also use jsonrpc_client_call from scrapy_jsonrpc.jsonrpc.
>>> from scrapy_jsonrpc.jsonrpc import jsonrpc_client_call
>>> jsonrpc_client_call('http://localhost:6024/crawler/stats', 'get_stats', 'httpbin')
{u'log_count/DEBUG': 5, u'scheduler/dequeued': 4, u'log_count/INFO': 11, u'downloader/response_count': 3, u'downloader/response_status_count/200': 3, u'log_count/WARNING': 1, u'scheduler/enqueued/memory': 4, u'downloader/response_bytes': 870, u'start_time': u'2016-09-28 09:01:47', u'scheduler/dequeued/memory': 4, u'scheduler/enqueued': 4, u'downloader/request_bytes': 862, u'response_received_count': 3, u'downloader/request_method_count/GET': 4, u'downloader/request_count': 4}
This is what you get "on the wire" for a request made with a modified example-client.py (see the code a bit below; the example in https://github.com/scrapy-plugins/scrapy-jsonrpc is outdated as I write these lines):
POST /crawler/stats HTTP/1.1
Accept-Encoding: identity
Content-Length: 73
Host: localhost:6024
Content-Type: application/x-www-form-urlencoded
Connection: close
User-Agent: Python-urllib/2.7
{"params": ["httpbin"], "jsonrpc": "2.0", "method": "get_stats", "id": 1}
And the response
HTTP/1.1 200 OK
Content-Length: 504
Access-Control-Allow-Headers: X-Requested-With
Server: TwistedWeb/16.4.1
Connection: close
Date: Tue, 27 Sep 2016 11:21:43 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE
Content-Type: application/json
{"jsonrpc": "2.0", "result": {"log_count/DEBUG": 5, "scheduler/dequeued": 4, "log_count/INFO": 11, "downloader/response_count": 3, "downloader/response_status_count/200": 3, "log_count/WARNING": 3, "scheduler/enqueued/memory": 4, "downloader/response_bytes": 870, "start_time": "2016-09-27 11:16:25", "scheduler/dequeued/memory": 4, "scheduler/enqueued": 4, "downloader/request_bytes": 862, "response_received_count": 3, "downloader/request_method_count/GET": 4, "downloader/request_count": 4}, "id": 1}
Here's the modified client to query /crawler/stats, which I called with ./example-client.py -H localhost -P 6024 get-spider-stats httpbin (for a running "httpbin" spider, JSONRPC_PORT being 6024 for me):
#!/usr/bin/env python
"""
Example script to control a Scrapy server using its JSON-RPC web service.

It only provides a reduced functionality as its main purpose is to illustrate
how to write a web service client. Feel free to improve or write your own.

Also, keep in mind that the JSON-RPC API is not stable. The recommended way for
controlling a Scrapy server is through the execution queue (see the "queue"
command).
"""

from __future__ import print_function

import sys, optparse, urllib, json
from six.moves.urllib.parse import urljoin

from scrapy_jsonrpc.jsonrpc import jsonrpc_client_call, JsonRpcError


def get_commands():
    return {
        'help': cmd_help,
        'stop': cmd_stop,
        'list-available': cmd_list_available,
        'list-running': cmd_list_running,
        'list-resources': cmd_list_resources,
        'get-global-stats': cmd_get_global_stats,
        'get-spider-stats': cmd_get_spider_stats,
    }


def cmd_help(args, opts):
    """help - list available commands"""
    print("Available commands:")
    for _, func in sorted(get_commands().items()):
        print("  ", func.__doc__)


def cmd_stop(args, opts):
    """stop <spider> - stop a running spider"""
    jsonrpc_call(opts, 'crawler/engine', 'close_spider', args[0])


def cmd_list_running(args, opts):
    """list-running - list running spiders"""
    for x in json_get(opts, 'crawler/engine/open_spiders'):
        print(x)


def cmd_list_available(args, opts):
    """list-available - list name of available spiders"""
    for x in jsonrpc_call(opts, 'crawler/spiders', 'list'):
        print(x)


def cmd_list_resources(args, opts):
    """list-resources - list available web service resources"""
    for x in json_get(opts, '')['resources']:
        print(x)


def cmd_get_spider_stats(args, opts):
    """get-spider-stats <spider> - get stats of a running spider"""
    stats = jsonrpc_call(opts, 'crawler/stats', 'get_stats', args[0])
    for name, value in stats.items():
        print("%-40s %s" % (name, value))


def cmd_get_global_stats(args, opts):
    """get-global-stats - get global stats"""
    stats = jsonrpc_call(opts, 'crawler/stats', 'get_stats')
    for name, value in stats.items():
        print("%-40s %s" % (name, value))


def get_wsurl(opts, path):
    return urljoin("http://%s:%s/" % (opts.host, opts.port), path)


def jsonrpc_call(opts, path, method, *args, **kwargs):
    url = get_wsurl(opts, path)
    return jsonrpc_client_call(url, method, *args, **kwargs)


def json_get(opts, path):
    url = get_wsurl(opts, path)
    return json.loads(urllib.urlopen(url).read())


def parse_opts():
    usage = "%prog [options] <command> [arg] ..."
    description = "Scrapy web service control script. Use '%prog help' " \
                  "to see the list of available commands."
    op = optparse.OptionParser(usage=usage, description=description)
    op.add_option("-H", dest="host", default="localhost",
                  help="Scrapy host to connect to")
    op.add_option("-P", dest="port", type="int", default=6080,
                  help="Scrapy port to connect to")
    opts, args = op.parse_args()
    if not args:
        op.print_help()
        sys.exit(2)
    cmdname, cmdargs, opts = args[0], args[1:], opts
    commands = get_commands()
    if cmdname not in commands:
        sys.stderr.write("Unknown command: %s\n\n" % cmdname)
        cmd_help(None, None)
        sys.exit(1)
    return commands[cmdname], cmdargs, opts


def main():
    cmd, args, opts = parse_opts()
    try:
        cmd(args, opts)
    except IndexError:
        print(cmd.__doc__)
    except JsonRpcError as e:
        print(str(e))
        if e.data:
            print("Server Traceback below:")
            print(e.data)


if __name__ == '__main__':
    main()
In the example command above, I got this:
log_count/DEBUG 5
scheduler/dequeued 4
log_count/INFO 11
downloader/response_count 3
downloader/response_status_count/200 3
log_count/WARNING 3
scheduler/enqueued/memory 4
downloader/response_bytes 870
start_time 2016-09-27 11:16:25
scheduler/dequeued/memory 4
scheduler/enqueued 4
downloader/request_bytes 862
response_received_count 3
downloader/request_method_count/GET 4
downloader/request_count 4