Incompatible function arguments - optimization

I am trying to optimize a custom model using torch2trt. Previously I was facing a NotImplementedError and tried the solution provided here. However, now I am facing this problem:
TypeError: add_matrix_multiply(): incompatible function arguments. The following argument types are supported: 1. (self: tensorrt.tensorrt.INetworkDefinition, input0: tensorrt.tensorrt.ITensor, op0: tensorrt.tensorrt.MatrixOperation, input1: tensorrt.tensorrt.ITensor, op1: tensorrt.tensorrt.MatrixOperation) -> tensorrt.tensorrt.IMatrixMultiplyLayer
According to the documentation, the inputs to add_matrix_multiply() should be of type tensorrt.tensorrt.ITensor.
I also printed the data types:
input_a_trt, input_b_trt = trt_(ctx.network, input_a, input_b)
print(type(input_a_trt))
print(type(input_b_trt))
Output:
<class 'tensorrt.tensorrt.ITensor'> <class 'tensorrt.tensorrt.ITensor'>
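For reference, the overload listed in the error message pairs each ITensor with a trt.MatrixOperation flag, so (reusing the names above) I would expect the call to look roughly like this minimal sketch; the NONE flags (plain matmul, no transpose) are an assumption:

import tensorrt as trt

# Sketch of the call shape implied by the signature in the error message:
# each input ITensor is paired with a MatrixOperation flag.
mm_layer = ctx.network.add_matrix_multiply(
    input_a_trt, trt.MatrixOperation.NONE,
    input_b_trt, trt.MatrixOperation.NONE,
)
out_trt = mm_layer.get_output(0)  # resulting ITensor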

Related

ValidationError in web3py

uniswap_v3_quoter_contract.functions.quoteExactInputSingle(wbtc_token, weth_token, web3.toWei(0.01, 'ether'), 3000, 0).call()
This raises the following error:
ValidationError:
Could not identify the intended function with name `quoteExactInputSingle`, positional argument(s) of type `(<class 'str'>, <class 'str'>, <class 'int'>, <class 'int'>, <class 'int'>)` and keyword argument(s) of type `{}`.
Found 1 function(s) with the name `quoteExactInputSingle`: ['quoteExactInputSingle(address,address,uint24,uint256,uint160)']
Function invocation failed due to no matching argument types.
I've tried many times, but it doesn't work.
You get this error because you are submitting plain strings instead of the address format.
If you are using web3.py (https://web3py.readthedocs.io/en/latest/web3.main.html#addresses), you can use is_checksum_address to verify the format, or to_checksum_address to convert your string to the address format.
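A minimal sketch, assuming wbtc_token and weth_token currently hold plain address strings (the function names follow the current web3.py docs; older releases use camelCase equivalents such as toChecksumAddress and toWei). Note that the signature found in the error message, quoteExactInputSingle(address,address,uint24,uint256,uint160), also puts the uint24 fee before the uint256 amount:

from web3 import Web3

# Convert plain strings to checksummed addresses before the call.
wbtc_token = Web3.to_checksum_address(wbtc_token)
weth_token = Web3.to_checksum_address(weth_token)
assert Web3.is_checksum_address(wbtc_token)

# Argument order per the ABI in the error message:
# (tokenIn, tokenOut, fee, amountIn, sqrtPriceLimitX96)
quote = uniswap_v3_quoter_contract.functions.quoteExactInputSingle(
    wbtc_token,
    weth_token,
    3000,                        # pool fee tier (uint24)
    Web3.to_wei(0.01, 'ether'),  # amount in (uint256)
    0,                           # sqrtPriceLimitX96, 0 = no limit (uint160)
).call()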

readNetFromDarknet assertion error separator_index < line.size()

I am failing to use the readNetFromDarknet function for reasons I do not understand. I am trying to use yolov3-spp with this configuration file: https://github.com/pjreddie/darknet/blob/master/cfg/yolov3-spp.cfg
import cv2
net = cv2.dnn.readNetFromDarknet("../models/yolov3-spp.weights", "../models/yolov3-spp.cfg")
However, this gives me the following error:
error: OpenCV(4.5.4) /tmp/pip-req-build-3129w7z7/opencv/modules/dnn/src/darknet/darknet_io.cpp:660: error: (-215:Assertion failed) separator_index < line.size() in function 'ReadDarknetFromCfgStream'
Interestingly, if I use readNet instead of readNetFromDarknet, it seems to work just fine. Should I stick to using readNet instead, and why is readNetFromDarknet not working?
Your problem is the order of the function arguments. readNetFromDarknet expects the config file first and the weights file second, whereas readNet detects the file types and swaps the arguments itself if they are passed in the wrong order.
So the correct code is:
net = cv2.dnn.readNetFromDarknet("../models/yolov3-spp.cfg", "../models/yolov3-spp.weights")
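As a quick sanity check (a minimal sketch), you can list a few layer names after loading to confirm the network parsed correctly:

import cv2

# Load with the corrected argument order: config first, weights second.
net = cv2.dnn.readNetFromDarknet("../models/yolov3-spp.cfg",
                                 "../models/yolov3-spp.weights")
# Print a few layer names to confirm the cfg parsed correctly.
print(net.getLayerNames()[:5])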

How to fix the VK_KHR_portability_subset error on Mac M1 while following the Vulkan tutorial

Hi, I'm getting a validation error when I run my program. Apparently I'm missing an extension:
validation layer: Validation Error: [ VUID-VkDeviceCreateInfo-pProperties-04451 ] Object 0: handle = 0x1055040c0, type = VK_OBJECT_TYPE_PHYSICAL_DEVICE; | MessageID = 0x3a3b6ca0 | vkCreateDevice: VK_KHR_portability_subset must be enabled because physical device VkPhysicalDevice 0x1055040c0[] supports it The Vulkan spec states: If the [VK_KHR_portability_subset] extension is included in pProperties of vkEnumerateDeviceExtensionProperties, ppEnabledExtensions must include "VK_KHR_portability_subset". (https://vulkan.lunarg.com/doc/view/1.2.176.1/mac/1.2-extensions/vkspec.html#VUID-VkDeviceCreateInfo-pProperties-04451)
I naively added "VK_KHR_portability_subset" to the deviceExtensions vector and then got a second error that seems similar to the previous one:
validation layer: Validation Error: [ VUID-vkCreateDevice-ppEnabledExtensionNames-01387 ] Object 0: VK_NULL_HANDLE, type = VK_OBJECT_TYPE_INSTANCE; | MessageID = 0x12537a2c | Missing extension required by the device extension VK_KHR_portability_subset: VK_KHR_get_physical_device_properties2. The Vulkan spec states: All required extensions for each extension in the VkDeviceCreateInfo::ppEnabledExtensionNames list must also be present in that list (https://vulkan.lunarg.com/doc/view/1.2.176.1/mac/1.2-extensions/vkspec.html#VUID-vkCreateDevice-ppEnabledExtensionNames-01387)
I then added "VK_KHR_get_physical_device_properties2" to the deviceExtensions vector as well and got a third error:
libc++abi: terminating with uncaught exception of type std::runtime_error: failed to find a suitable GPU!
The thing is, it previously recognized that I was using an M1 chip, but now no information about the device shows up.
I added "VK_KHR_get_physical_device_properties2" to the deviceExtensions vector
VK_KHR_get_physical_device_properties2 is an instance extension, and as such it belongs in the extension list passed to vkCreateInstance, not in the one passed to vkCreateDevice.
Reportedly, that fixed the issue.
I have encountered the same message on the same device, and after some research I understood it is not an error, just a warning. If you don't really need to handle this special case, you can just ignore it.

CSV file input not working together with set field value step in Pentaho Kettle

I have a very simple Pentaho Kettle transformation that causes a strange error. It consists of reading a field X from a CSV file, adding a field Y, setting Y=X, and finally writing the result back to another CSV file.
Here you can see the steps and the configuration for them:
You can also download the ktr file from here. The input data is just this:
1
2
3
When I run this transformation, I get this error message:
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
Error writing line
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:273)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.processRow(TextFileOutput.java:195)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.di.core.exception.KettleStepException:
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:435)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:249)
... 3 more
Caused by: org.pentaho.di.core.exception.KettleValueException:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.core.row.value.ValueMetaBase.getBinaryString(ValueMetaBase.java:2185)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.formatField(TextFileOutput.java:290)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:392)
... 4 more
All of the above lines start with 2015/09/23 12:51:18 - Text file output.0 -, but I edited it out for brevity. I think the relevant, and confusing, part of the error message is this:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
Some further notes:
If I bypass the Set field value step by using the lower hop instead, the transformation finishes without errors. This leads me to believe that it is the Set field value step that causes the problem.
If I replace the CSV file input with a Data Grid step containing the same data (1, 2, 3), everything works just fine.
If I replace the file output step with a dummy step, the transformation finishes without errors. However, if I preview the dummy, it causes a similar error, and the field Y has the value <null> on all three rows.
Before I created this MCVE I got the error on all sorts of seemingly random steps, even when there was no file output present. So I do not think this is related to the file output.
If I change the format from Number to Integer, nothing changes. But if I change it to String, the transformation finishes without errors, and I get this output:
X;Y
1;[B@49e96951
2;[B@7b016abf
3;[B@1a0760b0
Is this a bug? Am I doing something wrong? How can I make this work?
It's because of lazy conversion; turn it off. This is behaving exactly as designed, although admittedly the error and the user experience could be improved. (The [B@... strings in your output are Java's default toString for a byte[] array, i.e. the raw, still-unconverted bytes being written out.)
Lazy conversion must not be used when you need to access the field value in your transformation, and accessing the value is exactly what the Set field value step does. The default should probably be off rather than on.
If your field is going directly to a database, then use lazy conversion and it will be faster.
You can even have "partially lazy" streams, where you use lazy conversion for speed but then use a Select values step to "un-lazify" the fields you want to access, while the remainder stay lazy.
Cunning, huh?

NuPIC OPF Runtime error getOutputData unknown output categoriesOut

I'm trying to run a TemporalClassification model using the OPF to recognize patterns from a stream. I've adjusted the model params so it has two sensor inputs: a ScalarEncoder and an SDRCategoryEncoder. The latter is marked as classifierOnly and is also set as the predictedField in the inference settings.
When trying to feed the model with input data, I get:
RuntimeError: getOutputData unknown output 'categoriesOut' on region Classifier.
A NontemporalClassification model (only the inferenceType changed) runs without this error.
I've found 6 occurrences of categoriesOut in the NuPIC code: https://github.com/numenta/nupic/search?utf8=%E2%9C%93&q=categoriesOut
The error arises in nupic/frameworks/opf/clamodel.py at line 558:
classificationDist = classifier.getOutputData('categoriesOut')
It seems that the classifier region in the network is not properly prepared to output data.
Can anyone explain why the classifier region doesn't have 'categoriesOut'? I guess there's a misconfiguration in my model params, but there were no errors or warnings during model initialization. Are there any mandatory parameters or assignments (beyond those noted in the NuPIC documentation) necessary for a TemporalClassification model to run?
There are several types of classifier regions in NuPIC; you can find them in the nupic/regions folder. I've checked the sources and found that 'categoriesOut' is in the outputs dict of KNNClassifierRegion:
https://github.com/numenta/nupic/blob/469f6372082e95dd5d2a96181b745ba36d2e7a8a/nupic/regions/KNNClassifierRegion.py
outputs=dict(
    categoriesOut=dict(
        description='A vector representing, for each category '
                    'index, the likelihood that the input to the node belongs '
                    'to that category based on the number of neighbors of '
                    'that category that are among the nearest K.',
        dataType='Real32',
        count=0,
        regionLevel=True,
        isDefaultOutput=True),
Ensure you use KNNClassifierRegion when configuring your TemporalClassification model. The samples for NontemporalClassification use the CLAClassifier, but CLAClassifierRegion has no categoriesOut among its outputs, so the error described in your question will arise if you keep
'regionName' : 'CLAClassifierRegion'
for a TemporalClassification model.
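In OPF model params, that setting lives in the clParams section. A hypothetical fragment for illustration (the surrounding keys follow the usual OPF modelParams layout; all other classifier settings are omitted):

MODEL_PARAMS = {
    'model': 'CLA',
    'modelParams': {
        # ... sensorParams, spParams, tpParams elided ...
        'clParams': {
            # Use the KNN classifier region, which exposes 'categoriesOut',
            # instead of 'CLAClassifierRegion', which does not.
            'regionName': 'KNNClassifierRegion',
            # classifier-specific parameters go here
        },
    },
}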