I have a model file that I created in MATLAB and exported to ONNX. It loads fine in Python:
import cntk as C
z = C.Function.load("Net.onnx", format=C.ModelFormat.ONNX)
In C++, however, loading the same file throws an exception:

Selected CPU as the process wide default device.
About to throw exception:
'Gemm: Invalid shape, input A and B are expected to be rank=2 matrices'
I installed the NuGet packages CNTK.CPUOnly, CNTK.Deps.MKL, and CNTK.Deps.OpenCV.Zip.
#include <stdio.h>
#include "CNTKLibrary.h"

int main()
{
    std::wstring modelFile(L"Net.onnx");
    // this line throws the exception above
    CNTK::FunctionPtr modelFunc = CNTK::Function::Load(modelFile, CNTK::DeviceDescriptor::CPUDevice(), CNTK::ModelFormat::ONNX);
    return 0;
}
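As an aside, wrapping the load in a try/catch makes the message easier to capture than letting the exception escape. A minimal sketch; it assumes only that CNTK surfaces the failure as a std::exception subclass, which is how the C++ API reports errors:

#include <iostream>
#include "CNTKLibrary.h"

int main()
{
    try {
        CNTK::FunctionPtr modelFunc = CNTK::Function::Load(
            L"Net.onnx", CNTK::DeviceDescriptor::CPUDevice(), CNTK::ModelFormat::ONNX);
    } catch (const std::exception& e) {
        // prints e.g. the Gemm rank complaint instead of terminating
        std::cerr << "Load failed: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}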
Finally I found another solution: load the ONNX model in Python, save it in CNTK's native format, and then load that from C++ (the original model had already taken the long way from MATLAB to ONNX).

Python code:
import os
import cntk as C

z = C.Function.load("Net.onnx", format=C.ModelFormat.ONNX)
z.save(os.path.join("folder", "net" + ".dnn"))
C++ loading:
#include "CNTKLibrary.h"
std::wstring modelFile(L"net.dnn");
CNTK::FunctionPtr modelFunc = CNTK::Function::Load(modelFile, CNTK::DeviceDescriptor::CPUDevice());
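From there, evaluating the loaded model in C++ looks roughly like the sketch below. This is a minimal outline rather than production code: it assumes a single-input, single-output model, and the zero-filled input is a placeholder for real data.

#include "CNTKLibrary.h"
#include <unordered_map>
#include <vector>

int main()
{
    auto device = CNTK::DeviceDescriptor::CPUDevice();
    CNTK::FunctionPtr modelFunc = CNTK::Function::Load(L"net.dnn", device);

    // Assumes exactly one input and one output variable.
    CNTK::Variable inputVar = modelFunc->Arguments()[0];
    CNTK::Variable outputVar = modelFunc->Output();

    // Placeholder input: a single zero-filled sample of the right shape.
    std::vector<float> inputData(inputVar.Shape().TotalSize(), 0.0f);
    auto inputValue = CNTK::Value::CreateBatch<float>(inputVar.Shape(), inputData, device);

    std::unordered_map<CNTK::Variable, CNTK::ValuePtr> inputs = { { inputVar, inputValue } };
    std::unordered_map<CNTK::Variable, CNTK::ValuePtr> outputs = { { outputVar, nullptr } };
    modelFunc->Evaluate(inputs, outputs, device);

    // One inner vector per sequence in the batch.
    std::vector<std::vector<float>> results;
    outputs[outputVar]->CopyVariableValueTo(outputVar, results);
    return 0;
}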
I am trying to compile a TFLite Micro-based Arduino sketch that uses the MicroMutableOpResolver class (to include only the required operations and reduce memory usage).
I see similar usage in the TF Lite example here - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/micro_speech_test.cc
But I keep hitting the compilation error below.
IMU_Classifier_TinyML:22:1: error: 'micro_op_resolver' does not name a type
micro_op_resolver.AddFullyConnected();
^~~~~~~~~~~~~~~~~
IMU_Classifier_TinyML:23:1: error: 'micro_op_resolver' does not name a type
micro_op_resolver.AddSoftmax();
^~~~~~~~~~~~~~~~~
IMU_Classifier_TinyML:24:1: error: 'micro_op_resolver' does not name a type
micro_op_resolver.AddRelu();
^~~~~~~~~~~~~~~~~
Using library Arduino_LSM9DS1 at version 1.1.0 in folder: /home/balaji/Arduino/libraries/Arduino_LSM9DS1
Using library Wire in folder: /home/balaji/.arduino15/packages/arduino/hardware/mbed/1.3.2/libraries/Wire (legacy)
Using library Arduino_TensorFlowLite at version 2.4.0-ALPHA in folder: /home/balaji/Arduino/libraries/Arduino_TensorFlowLite
exit status 1
'micro_op_resolver' does not name a type
The code snippet looks like this:
#include <Arduino_LSM9DS1.h>
#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/micro_mutable_op_resolver.h>
#include <tensorflow/lite/micro/kernels/micro_ops.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include <tensorflow/lite/version.h>
// Include the TFlite converted model header file
#include "model.h"
const float accelThreshold = 2.5;
const int numOfSamples = 119; // acceleration sample-rate
int samplesRead = numOfSamples;
tflite::MicroErrorReporter tfLiteErrorReporter;
/*Import only the required ops to reduce the memory usage*/
static tflite::MicroMutableOpResolver<3> micro_op_resolver;
micro_op_resolver.AddFullyConnected();
micro_op_resolver.AddSoftmax();
micro_op_resolver.AddRelu();
Am I missing a dependency, or could this be due to a TF Lite version mismatch?
At the very least, function calls like micro_op_resolver.AddFullyConnected(); must be placed inside a function body. Something like this should compile:
#include <Arduino_LSM9DS1.h>
#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/micro_mutable_op_resolver.h>
#include <tensorflow/lite/micro/kernels/micro_ops.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include <tensorflow/lite/version.h>
// Include the TFlite converted model header file
#include "model.h"
const float accelThreshold = 2.5;
const int numOfSamples = 119; // acceleration sample-rate
int samplesRead = numOfSamples;
tflite::MicroErrorReporter tfLiteErrorReporter;
/*Import only the required ops to reduce the memory usage*/
static tflite::MicroMutableOpResolver<3> micro_op_resolver;
void setup() {
micro_op_resolver.AddFullyConnected();
micro_op_resolver.AddSoftmax();
micro_op_resolver.AddRelu();
}
void loop() {
// put your main code here, to run repeatedly:
}
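Note that the <3> template argument sizes the resolver for exactly the three ops registered in setup(). To actually run inference you still need a MicroInterpreter on top of the resolver; a rough continuation is sketched below, where the flatbuffer array name `model` from model.h and the arena size are assumptions you would adapt to your sketch:

#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/micro_mutable_op_resolver.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include <tensorflow/lite/version.h>
#include "model.h"  // assumed to define a flatbuffer array named `model`

tflite::MicroErrorReporter tfLiteErrorReporter;
static tflite::MicroMutableOpResolver<3> micro_op_resolver;

// Working memory for tensors; the size is a placeholder to tune per model.
constexpr int kTensorArenaSize = 8 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];
static tflite::MicroInterpreter* interpreter = nullptr;

void setup() {
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddRelu();

  const tflite::Model* tfModel = tflite::GetModel(model);
  static tflite::MicroInterpreter static_interpreter(
      tfModel, micro_op_resolver, tensor_arena, kTensorArenaSize,
      &tfLiteErrorReporter);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();
}

void loop() {
  // Fill interpreter->input(0), call interpreter->Invoke(),
  // then read interpreter->output(0).
}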
I want to use isWritable() from QFileInfo. According to the docs, you have to somehow set qt_ntfs_permission_lookup to 1 to get a meaningful result on Windows. The C++ code for this is
extern Q_CORE_EXPORT int qt_ntfs_permission_lookup;
qt_ntfs_permission_lookup++; // turn checking on
qt_ntfs_permission_lookup--; // turn it off again
How do I "translate" the extern statement into Python?
One possible solution is to create functions in C++ that change the state of that variable, and expose them to Python. To expose a C++ function to Python there are options such as pybind11, SWIG, sip, shiboken2, etc.

In this case, I implemented a small library using pybind11:
#include <pybind11/pybind11.h>
#include <QtCore/QtGlobal>  // needed for Q_OS_WIN and Q_CORE_EXPORT

namespace py = pybind11;

#ifdef Q_OS_WIN
QT_BEGIN_NAMESPACE
extern Q_CORE_EXPORT int qt_ntfs_permission_lookup;
QT_END_NAMESPACE
#endif

PYBIND11_MODULE(qt_ntfs_permission, m) {
    m.def("enable", []() {
#ifdef Q_OS_WIN
        qt_ntfs_permission_lookup = 1;
#endif
    });

    m.def("disable", []() {
#ifdef Q_OS_WIN
        qt_ntfs_permission_lookup = 0;
#endif
    });

#ifdef VERSION_INFO
    m.attr("__version__") = VERSION_INFO;
#else
    m.attr("__version__") = "dev";
#endif
}
and you can install it by following these steps:
Requirements:
Qt5
Visual Studio
cmake
git clone https://github.com/eyllanesc/qt_ntfs_permission_lookup.git
python setup.py install
Also, with the help of GitHub Actions I have created wheels for some versions of Qt and Python, so download one from here, extract the .whl, and run:
python -m pip install qt_ntfs_permission-0.1.0-cp38-cp38-win_amd64.whl
Then you use it like this:
from PyQt5.QtCore import QFileInfo

import qt_ntfs_permission

qt_ntfs_permission.enable()   # turn NTFS permission checking on
# ... query QFileInfo(...).isWritable() here ...
qt_ntfs_permission.disable()  # turn it off again
I am starting to learn PyCUDA on Google Colab. I’m trying to run the "printf" example.
Everything runs without errors, but I get no output from the kernel launch on the last line. How can I solve this?
import pycuda.driver as drv
import pycuda.autoinit
from pycuda.compiler import SourceModule
mod = SourceModule("""
#include <stdio.h>
__global__ void myfirst_kernel()
{
printf("Hello,PyCUDA!!!");
}
""")
function = mod.get_function("myfirst_kernel")
function(block=(4,4,1))
Just add this at the end and it will work. Kernel printf output is buffered on the device and only flushed to the host when the context synchronizes; and since the driver module was imported as drv, the call is drv.Context.synchronize(), not cuda.Context.synchronize():

# Flush the kernel's printf buffer
drv.Context.synchronize()
From tensorflow's gpu_device.cc:
// NOTE(tucker): We need to discriminate between Eigen GPU
// operations and all others. If an operation is Eigen
// implemented (or otherwise tries to launch a cuda kernel
// directly), we need to establish a stacked-scoped environment
// that directs it to execute on the proper device. Otherwise we
// expect the Op to use StreamExecutor directly and correctly. The
// way we make this discrimination is quite hacky: At the moment
// the only non-Eigen GPU Op is the recv-op, which is known to be
// asynchronous.
and gpu_device only waits when the op runs in a different context (sync_every_op being false).
But in argmax_op.h, for example,
template <typename Device, typename T>
struct ArgMin {
#define DECLARE_COMPUTE_SPEC(Dims) \
EIGEN_ALWAYS_INLINE static void Reduce##Dims( \
const Device& d, typename TTypes<T, Dims>::ConstTensor input, \
const int32 dimension, \
typename TTypes<int64, Dims - 1>::Tensor output) { \
output.device(d) = input.argmin(dimension).template cast<int64>(); \
}
it uses the device to compute directly. Is that correct?
I missed something: the CUDA stream is passed to the Eigen device, so there is no problem.
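For illustration, this is roughly how an Eigen GpuDevice gets bound to an existing CUDA stream, which is why expressions like output.device(d) = ... stay ordered with the rest of the stream's work. The sketch uses Eigen's own CudaStreamDevice (Eigen 3.3-era API, compiled with nvcc); TensorFlow wires this up through its own stream-interface class, so the code below is illustrative rather than TF's actual plumbing.

#define EIGEN_USE_GPU
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Bind an Eigen device to the existing stream; anything evaluated
    // with .device(gpu_device) is enqueued on that same stream.
    Eigen::CudaStreamDevice stream_device(&stream);
    Eigen::GpuDevice gpu_device(&stream_device);

    // e.g. output.device(gpu_device) = input.argmin(dim).cast<int64_t>();

    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
    return 0;
}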
I am generating PDF files using the libHaru library. My code is the following:
#include <iostream>
#include "hpdf.h"
using namespace std;
void error_handler(HPDF_STATUS error_no, HPDF_STATUS detail_no, void *user_data)
{
}
int main()
{
cout<<"Compression"<<endl;
HPDF_Doc pdf = HPDF_New(error_handler, NULL);
if (!pdf)
return 0;
HPDF_STATUS Status = HPDF_SetCompressionMode(pdf, HPDF_COMP_ALL);
return 0;
}
PROBLEM: I debugged the code and found that HPDF_SetCompressionMode() returns 4129, which is the error code for an invalid value being set when invoking HPDF_SetCompressionMode().
If you step into the code, you will see you are getting the error because the ZLIB compression library was not compiled into your copy of HaruPDF.
First: comment out this line in ..\win32\include\hpdf_config.h:
/* zlib is not available */
//#define LIBHPDF_HAVE_NOZLIB
Second: find, download, and unzip the ZLIB code. You can obtain the source from the following website:
http://www.zlib.net/
Third: tell HaruPDF where it can find the ZLIB code, and recompile HaruPDF.
You should now be able to use compression.
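As a sanity check after rebuilding, it also helps to make the error handler report failures instead of swallowing them. A minimal sketch (the printf format mirrors the libHaru demos):

#include <stdio.h>
#include "hpdf.h"

void error_handler(HPDF_STATUS error_no, HPDF_STATUS detail_no, void *user_data)
{
    /* report the error instead of ignoring it */
    printf("ERROR: error_no=0x%04X, detail_no=%u\n",
           (HPDF_UINT)error_no, (HPDF_UINT)detail_no);
}

int main()
{
    HPDF_Doc pdf = HPDF_New(error_handler, NULL);
    if (!pdf)
        return 1;

    if (HPDF_SetCompressionMode(pdf, HPDF_COMP_ALL) == HPDF_OK)
        printf("Compression enabled\n");

    HPDF_Free(pdf);
    return 0;
}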
Ain't Open Source grand?