Webrtc crashed and videoFrame has a width of 0 - webrtc

The crash occurs in one place in the WebRTC source code:
#
# Fatal error in: ../../api/video/i420_buffer.cc, line 52
# last system error: 0
# Check failed: width > 0 (0 vs. 0)
# (lldb)
WebRTC version: M79
I don't know what causes the empty frame. Under what conditions does WebRTC produce an empty frame?

Related

TFMA run_model_analysis not parsing TFRecord files properly

I am trying to use the run_model_analysis function of the TFMA library to evaluate my model.
The data has been written to a TFRecord following the tf.train.Example format.
The model called for the evaluation expects an input shape of (None, 1, 5).
Somehow, when run, the TFRecord bytestring saved by SerializeToString doesn't get parsed and is passed through as-is.
Running it gives this error:
WARNING:absl:Tensorflow version (2.8.1) found. Note that TFMA support for TF 2.0 is currently in beta
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.8 interpreter.
The thread 0x3 has exited with code 0 (0x0).
The thread 0x4 has exited with code 0 (0x0).
The thread 0x5 has exited with code 0 (0x0).
The thread 0x6 has exited with code 0 (0x0).
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
WARNING:absl:Large batch_size 1 failed with error Fail to call signature func with signature_name: serving_default.
the inputs are:
[b'\n\xab\x01\n\x1e\n\x15DistinctCurrencyCodes\x12\x05\x1a\x03\n\x01\x00\n\x1e\n\x12AvgInvoiceQuantity\x12\x08\x12\x06\n\x04\xb0\xc2\xd8=\n\x16\n\nAvgTaxRate\x12\x08\x12\x06\n\x04<\xbf1>\n\x1c\n\x10InvoiceTaxAmount\x12\x08\x12\x06\n\x04\x88\x10\x8d\xbb\n\x1f\n\x13AvgInvoiceUnitPrice\x12\x08\x12\x06\n\x04\xc0\x95\xf4>\n\x12\n\tIsAnomaly\x12\x05\x1a\x03\n\x01\x00'].
The input_specs are:
{'input_1': TensorSpec(shape=(None, 1, 5), dtype=tf.float32, name='input_1')}.. Attempting to run batch through serially. Note that this will significantly affect the performance.
The thread 0x7 has exited with code 0 (0x0).
The thread 0x2 has exited with code 0 (0x0).
Fail to call signature func with signature_name: serving_default.
the inputs are:
[b'\n\xab\x01\n\x1e\n\x15DistinctCurrencyCodes\x12\x05\x1a\x03\n\x01\x00\n\x1e\n\x12AvgInvoiceQuantity\x12\x08\x12\x06\n\x04\xb0\xc2\xd8=\n\x16\n\nAvgTaxRate\x12\x08\x12\x06\n\x04<\xbf1>\n\x1c\n\x10InvoiceTaxAmount\x12\x08\x12\x06\n\x04\x88\x10\x8d\xbb\n\x1f\n\x13AvgInvoiceUnitPrice\x12\x08\x12\x06\n\x04\xc0\x95\xf4>\n\x12\n\tIsAnomaly\x12\x05\x1a\x03\n\x01\x00'].
The input_specs are:
{'input_1': TensorSpec(shape=(None, 1, 5), dtype=tf.float32, name='input_1')}. [while running 'ExtractEvaluateAndWriteResults/ExtractAndEvaluate/ExtractPredictions/Predict']
Stack trace:
>
>During handling of the above exception, another exception occurred:
>
>
>The above exception was the direct cause of the following exception:
>
>
>During handling of the above exception, another exception occurred:
>
>
>During handling of the above exception, another exception occurred:
>
>
>The above exception was the direct cause of the following exception:
>
>
>During handling of the above exception, another exception occurred:
>
> File "C:\Users\t-ankbiswas\OneDrive - Microsoft\Desktop\EC.VL.CommerceTools\DataScience\AnomalyDetection\AnomalyDetector\AnomalyDetector\Evaluator\TFEvaluator.py", line 163, in evaluateModel
> evalResult = tfma.run_model_analysis(
> File "C:\Users\t-ankbiswas\OneDrive - Microsoft\Desktop\EC.VL.CommerceTools\DataScience\AnomalyDetection\AnomalyDetector\AnomalyDetector\Helpers\ExecutionManager.py", line 115, in CheckEvaluatorStages
> result = TFEvaluator(config).evaluateModel(config, dataProducer, dataPreprocessor,
> File "C:\Users\t-ankbiswas\OneDrive - Microsoft\Desktop\EC.VL.CommerceTools\DataScience\AnomalyDetection\AnomalyDetector\AnomalyDetector\Helpers\ExecutionManager.py", line 56, in Execute
> CheckEvaluatorStages(config)
> File "C:\Users\t-ankbiswas\OneDrive - Microsoft\Desktop\EC.VL.CommerceTools\DataScience\AnomalyDetection\AnomalyDetector\AnomalyDetector\Main.py", line 53, in <module> (Current frame)
> ExecutionManager.Execute()
Backend QtAgg is interactive backend. Turning interactive mode on.
Loaded 'tensorflow.python.eager.function'
Loaded 'tensorflow.python.eager.execute'
Loaded 'tensorflow.python.saved_model.load'
Loaded 'tensorflow_model_analysis.utils.model_util'
Loaded 'apache_beam.runners.common'
Loaded 'apache_beam.runners.worker.operations'
Loaded 'apache_beam.runners.worker.bundle_processor'
Loaded 'apache_beam.runners.worker.sdk_worker'
Loaded 'apache_beam.runners.portability.fn_api_runner.worker_handlers'
Loaded 'apache_beam.runners.portability.fn_api_runner.fn_runner'
Loaded 'apache_beam.runners.direct.direct_runner'
Loaded 'apache_beam.pipeline'
Loaded 'tensorflow_model_analysis.api.model_eval_lib'
Loaded 'Evaluator.TFEvaluator'
Loaded 'Helpers.ExecutionManager'
Loaded '__main__'
Loaded 'runpy'
The program 'python.exe' has exited with code 0 (0x0).
Any idea what causes this error and how it can be fixed?

How to terminate code upon RuntimeWarning

I am using scipy.optimize.fsolve to solve two nonlinear equations. When the boundary conditions cannot be satisfied, I would like the program to terminate and print a warning message. I have set the maximum number of iterations with maxfev=20:
sol = fsolve(f, [1e-6,1e-6], xtol=1e-6, maxfev=20, full_output=False, col_deriv=True)
How can I terminate the program when I get the following RuntimeWarning?
RuntimeWarning: The number of calls to function has reached maxfev = 20.
You could use warnings.simplefilter, for instance.
Here is an example that doesn't stop on a DeprecationWarning but does stop on a RuntimeWarning:
import warnings

def fn():
    warnings.warn('deprecation', DeprecationWarning)
    print('running after deprecation warning')
    warnings.warn('runtime', RuntimeWarning)
    print('running after runtime warning')

fn()  # ends normally
warnings.simplefilter('error', RuntimeWarning)
fn()  # raises an error on the RuntimeWarning
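Applied to the fsolve case, a minimal sketch could look like the following. The wrapper name run_or_abort and the printed message are my own, not scipy API; the idea is just to promote RuntimeWarning to an exception for the duration of one call:

```python
import warnings

def run_or_abort(func, *args, **kwargs):
    # Promote RuntimeWarning to an exception inside this call only,
    # so e.g. fsolve's "maxfev = 20 reached" warning terminates the run
    # instead of being printed and ignored.
    with warnings.catch_warnings():
        warnings.simplefilter('error', RuntimeWarning)
        try:
            return func(*args, **kwargs)
        except RuntimeWarning as w:
            print(f'Terminating: {w}')
            raise SystemExit(1)
```

Usage would then be something like sol = run_or_abort(fsolve, f, [1e-6, 1e-6], xtol=1e-6, maxfev=20). Using catch_warnings keeps the filter change local, so other RuntimeWarnings elsewhere in the program are unaffected.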

PIL loading single channel for tif image data type

I have a satellite image in TIFF format. When I try to open the file using PIL and then print its size, I get only one channel:
im = Image.open('1989.tif',mode='r')
print(im.size) -- > (687,1091)
If I try to open it with matplotlib, it loads all the channels, but I get a blank image when I use imshow (the R, G, B values of the image are all zero when I print them):
im=plt.imread("1989.tif")
print(im.shape) -- > (687,1091,4)
plt.imshow(im) -- > shows blank image
I don't know how to fix either of them.
Adding the link to the image :
https://drive.google.com/open?id=1uNQxyCplD7rYd_ZWfFntP1bN_Qg49ybU
Your image is an uncompressed 32-bit floating point single channel image. PIL/Pillow seems able to read it fine - it will have problems displaying it, but we can work on that next...
from PIL import Image
import numpy as np
# Load image and make into Numpy array
im = Image.open('a.tif')
n = np.array(im)
# Check max value
print(n.max()) # prints 0.54
# Make an 8-bit version for display
Image.fromarray((n*200).astype(np.uint8)).show()
You can inspect the image with tiffinfo that comes with libtiff:
tiffinfo a.tif
Output
TIFFReadDirectory: Warning, Unknown field with tag 33550 (0x830e) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 33922 (0x8482) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 34735 (0x87af) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 34736 (0x87b0) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 34737 (0x87b1) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 42112 (0xa480) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 42113 (0xa481) encountered.
TIFF Directory at offset 0x8 (8)
Image Width: 1091 Image Length: 687
Tile Width: 128 Tile Length: 128
Bits/Sample: 32
Sample Format: IEEE floating point
Compression Scheme: None
Photometric Interpretation: min-is-black
Samples/Pixel: 1
Planar Configuration: single image plane
Tag 33550: 30.000000,30.000000,0.000000
Tag 33922: 0.000000,0.000000,0.000000,357075.000000,2904735.000000,0.000000
Tag 34735: 1,1,0,16,1024,0,1,1,1025,0,1,1,1026,34737,24,0,2048,0,1,4326,2049,34737,84,24,2050,0,1,6326,2051,0,1,8901,2054,0,1,9102,2055,34736,1,0,2056,0,1,7030,2057,34736,1,1,2059,34736,1,2,2061,34736,1,3,3072,0,1,32646,3073,34737,410,108,3076,0,1,9001
Tag 34736: 0.017453,6378137.000000,298.257224,0.000000
Tag 34737: PCS Name = UTM_Zone_46N|GCS Name = GCS_WGS_1984|Datum = D_WGS_1984|Ellipsoid = WGS_1984|Primem = Greenwich||ESRI PE String = PROJCS["UTM_Zone_46N",GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",SPHEROID["WGS_1984",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],PARAMETER["False_Easting",500000.0],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",93.0],PARAMETER["Scale_Factor",0.9996],PARAMETER["Latitude_Of_Origin",0.0],UNIT["Meter",1.0]]|
Tag 42112: <GDALMetadata>
<Item name="STATISTICS_EXCLUDEDVALUES" sample="0"></Item>
<Item name="STATISTICS_MAXIMUM" sample="0">0.53153151273727</Item>
<Item name="STATISTICS_MEAN" sample="0">0.14108245105659</Item>
<Item name="STATISTICS_MINIMUM" sample="0">-0.48148149251938</Item>
<Item name="STATISTICS_SKIPFACTORX" sample="0">1</Item>
<Item name="STATISTICS_SKIPFACTORY" sample="0">1</Item>
<Item name="STATISTICS_STDDEV" sample="0">0.15760411626121</Item>
</GDALMetadata>
Tag 42113: -3.4028234663852886e+38
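Rather than the fixed ×200 scaling above, you can min/max-scale the valid pixels, masking out the GDAL nodata value that the tiffinfo dump reports (Tag 42113). A sketch; the helper name float_to_uint8 is my own:

```python
import numpy as np

NODATA = -3.4028234663852886e+38  # from Tag 42113 in the tiffinfo output

def float_to_uint8(n, nodata=NODATA):
    # Min/max-scale the valid pixels to 0..255; nodata pixels become 0.
    valid = n != nodata
    lo, hi = n[valid].min(), n[valid].max()
    out = np.zeros(n.shape, dtype=np.uint8)
    out[valid] = np.round((n[valid] - lo) / (hi - lo) * 255).astype(np.uint8)
    return out
```

With the array n from the earlier snippet, Image.fromarray(float_to_uint8(n)).show() should then display the full dynamic range instead of a mostly dark image.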

numpy memmap runtime error... 64-bit system with a 2 GB limit?

I'm trying to create a large file with numpy memmap
big_file = np.memmap(fnamemm, dtype=np.float32, mode='w+', shape=(np.prod(dims[1:]), len_im), order='F')
The system is 64-bit Windows 10 running 64-bit Python:
In [2]: sys.maxsize
Out[2]: 9223372036854775807
with plenty of virtual memory (a maximum of 120,000 MB).
However, every time I try to create a file whose resulting size would exceed 2 GB, I get a runtime error:
In [29]: big_file = np.memmap(fnamemm, dtype=np.int16, mode='w+', shape=(np.prod(dims[1:]), len_im), order=order)
C:\Users\nuria\AppData\Local\Continuum\anaconda3\envs\caiman\lib\site-packages\numpy\core\memmap.py:247: RuntimeWarning: overflow encountered in long_scalars
bytes = long(offset + size*_dbytes)
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-29-66578da2d3f6> in <module>()
----> 1 big_file = np.memmap(fnamemm, dtype=np.int16, mode='w+', shape=(np.prod(dims[1:]), len_im), order=order)
~\AppData\Local\Continuum\anaconda3\envs\caiman\lib\site-packages\numpy\core\memmap.py in __new__(subtype, filename, dtype, mode, offset, shape, order)
248
249 if mode == 'w+' or (mode == 'r+' and flen < bytes):
--> 250 fid.seek(bytes - 1, 0)
251 fid.write(b'\0')
252 fid.flush()
OSError: [Errno 22] Invalid argument
This error does not happen when the file size is under 2 GB...
I have reproduced the same problem on another machine running Windows 7, also 64-bit.
Have I forgotten something? Why is memmap acting as if I had a 32-bit system?
EDIT: The error is not exactly a runtime error. The variable "bytes" triggers a RuntimeWarning (overflow) when computing the length of the file, which I guess produces a bad argument that raises the Errno 22.
I had a similar error, and it turned out to be because one of the shape=(A, B) arguments was an int32 instead of an int64. Try the following:
len_im64 = np.array(len_im, dtype='int64')
big_file = np.memmap(fnamemm, dtype=np.float32, mode='w+', shape=(np.prod(dims[1:]).astype('int64'), len_im64), order='F')
It fixed it for me.
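The overflow that the warning reports can be reproduced directly. On Windows, NumPy of that era used a 32-bit default integer (the C long), so a product of shape values can silently wrap past 2**31 - 1. A sketch of the mechanism, not of the memmap internals:

```python
import numpy as np

# Two int32 shape values whose product exceeds 2**31 - 1 = 2147483647.
a = np.int32(50000)
b = np.int32(50000)

with np.errstate(over='ignore'):
    wrapped = a * b  # int32 * int32 stays int32 and wraps around

print(wrapped)                    # negative garbage instead of 2500000000
print(np.int64(a) * np.int64(b))  # 2500000000, the intended byte count
```

A negative or wrapped byte count is exactly the kind of bad argument that would make the subsequent fid.seek(bytes - 1, 0) fail with Errno 22, which matches the casting-to-int64 fix above.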
Even though the system is 64-bit, the problem may be that the application was built for a 32-bit target. Check your shell's execution mode (32-bit or 64-bit).
Such applications have to be made large-address aware; then 32-bit applications can access 4 GB of memory on 64-bit machines.
How to do that? Here is someone's write-up:
https://github.com/pyinstaller/pyinstaller/issues/1288
Note: if your application is already built for a 64-bit target, ignore this and leave a comment; I will delete this answer.

Errors using onehot_encode incorrect input format?

I'm trying to use the mx.nd.onehot_encode function, which should be straightforward, but I'm getting errors that are difficult to parse. Here is the example usage I'm trying.
m0 = mx.nd.zeros(15)
mx.nd.onehot_encode(mx.nd.array([0]), m0)
I expect this to return a 15-dimensional vector (at the same address as m0) with only the first element set to 1. Instead I get this error:
src/ndarray/./ndarray_function.h:73: Check failed: index.ndim() == 1 && proptype.ndim() == 2 OneHotEncode only support 1d index.
Neither ndarray is of dimension 2, so why am I getting this error? Is there some other input format I should be using?
It seems that mxnet.ndarray.onehot_encode requires the target ndarray to explicitly have the shape [1, X].
I tried:
m0 = mx.nd.zeros((1, 15))
mx.nd.onehot_encode(mx.nd.array([0]), m0)
It reported no error.
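For reference, the same one-hot encoding can be built with plain NumPy, which sidesteps the shape check entirely. This is a sketch of the equivalent computation, not the mxnet API:

```python
import numpy as np

def onehot_encode(indices, depth):
    # indices: 1-D sequence of class ids -> array of shape (len(indices), depth)
    indices = np.asarray(indices, dtype=np.int64)
    out = np.zeros((len(indices), depth), dtype=np.float32)
    out[np.arange(len(indices)), indices] = 1.0  # set one 1 per row
    return out

print(onehot_encode([0], 15))  # a 1x15 row with a 1 in the first position
```

The result can then be converted back with mx.nd.array(...) if an mxnet NDArray is needed.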