How to configure ModelConfig.txt for TensorRT models in TensorFlow Serving

Currently, in TensorFlow Serving you can specify a ModelConfig.txt that maps to a ModelConfig.proto file. This file contains a list of configurations for multiple models that will run within the TensorFlow Serving instance.
For Example:
model_config_list: {
  config: {
    name: "ssd_mobilenet_v1_coco",
    base_path: "/test_models/ssd_mobilenet_v1_coco/",
    model_platform: "tensorflow"
  },
  config: {
    name: "faster_rcnn_inception_v2_coco",
    base_path: "/test_models/faster_rcnn_inception_v2_coco/",
    model_platform: "tensorflow"
  }
}
As it stands, when I attempt to place a TensorRT-optimized model into the ModelConfig.txt, the system fails.
How can I resolve this?
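For reference, a TF-TRT optimized model is usually produced by converting an existing SavedModel, and the converter writes out another standard SavedModel, which is what base_path in a config entry like the one above would point to. A minimal sketch, assuming the TensorFlow 2.x TF-TRT API and illustrative paths:
# Sketch only: TF-TRT conversion of an existing SavedModel (TF 2.x API).
# The input/output paths below are illustrative, not from the question.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="/test_models/ssd_mobilenet_v1_coco/1"
)
converter.convert()
# The result is still a regular SavedModel, so it can be listed in the
# model config with model_platform: "tensorflow".
converter.save("/test_models/ssd_mobilenet_v1_coco_trt/1")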

Related

DeepPavlov error loading the model from Tensorflow (from_tf=True)

I'm trying to load the ruBERT model into DeepPavlov as follows:
# is a dict
config_path = {
    "chainer": {
        "in": ["x"],
        "in_y": ["y"],
        "out": ["y_pred_labels", "y_pred_probas"],
        "pipe": [
            ...
        ]
    }
}
model = build_model(config_path, download=False)
At the same time, I have all the files of the original ruBERT model locally. However, an error is thrown when building the model:
OSError: Error no file named pytorch_model.bin found in directory ruBERT_hFace2 but there is a file for TensorFlow weights. Use `from_tf=True` to load this model from those weights.
At the same time, there is no clear explanation anywhere of how to pass this parameter through the build_model function.
How do I pass this parameter to build_model correctly?
UPDATE 1
At the moment, DeepPavlov 1.0.2 is installed.
The model checkpoint consists of the following files:
Currently there is no way to pass arbitrary parameters via build_model. If you need an additional parameter, you should adjust the configuration file accordingly. Alternatively, you can change it via Python code:
from deeppavlov import build_model, configs, evaluate_model
from deeppavlov.core.commands.utils import parse_config
config = parse_config("config.json")
...
model = build_model(config, download=True, install=True)
But first, please make sure that you are using the latest version of DeepPavlov. In addition, please take a look at our recent article on Medium. If you need further assistance, please provide more details.
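For example, since parse_config returns a plain nested dict (the same structure shown in the question), one way to add the parameter is to inject it into the relevant pipeline component before calling build_model. A rough sketch only; the component lookup below is an assumption and must be adapted to the actual pipeline in your config:
from deeppavlov import build_model
from deeppavlov.core.commands.utils import parse_config

config = parse_config("config.json")

# Assumption: the component that loads the transformer weights sits in
# config["chainer"]["pipe"]; match it by whatever key your config uses.
for component in config["chainer"]["pipe"]:
    if isinstance(component, dict) and "pretrained_bert" in component:
        component["from_tf"] = True

model = build_model(config, download=False)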

How to split Prisma Model into separate file?

I'm learning Prisma ORM from video tutorials and the official docs. They explain and write all the model code in one file called schema.prisma. That's fine, but as the application grows it becomes messy. So, how should I separate my model definitions into separate files?
At this point in time Prisma doesn't support file segmentation. I can recommend 3 solutions though.
Option 1: Prismix
Prismix utilizes models and enums to create relations across files for your Prisma schema via a prismix configuration file.
{
  "mixers": [
    {
      "input": [
        "base.prisma",
        "./modules/auth/auth.prisma",
        "./modules/posts/posts.prisma"
      ],
      "output": "prisma/schema.prisma"
    }
  ]
}
Place this inside a prismix.config.json file, which defines how you'd like to merge your Prisma schema segments.
Option 2: Schemix
Schemix utilizes TypeScript configurations to handle schema segmenting.
For example:
// _schema.ts
import { createSchema } from "schemix";

export const PrismaSchema = createSchema({
  datasource: {
    provider: "postgresql",
    url: {
      env: "DATABASE_URL"
    },
  },
  generator: {
    provider: "prisma-client-js",
  },
});

export const UserModel = PrismaSchema.createModel("User");

import "./models/User.model";

PrismaSchema.export("./", "schema");
Inside of User.model.ts:
// models/User.model.ts
import { UserModel, PostModel, PostTypeEnum } from "../_schema";

UserModel
  .string("id", { id: true, default: { uuid: true } })
  .int("registrantNumber", { default: { autoincrement: true } })
  .boolean("isBanned", { default: false })
  .relation("posts", PostModel, { list: true })
  .raw('@@map("service_user")');
This will then generate your prisma/schema.prisma containing your full schema. I used only one model as an example (taken from the documentation), but you should get the point.
Option 3: Cat -> Generate
Segment your schema into part files and run:
cat *.part.prisma > schema.prisma
yarn prisma generate
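If cat isn't available (for example on Windows), the same merge can be done with a short script before running prisma generate; a sketch, assuming the *.part.prisma naming used above:
# merge_schema.py - concatenate all *.part.prisma files into schema.prisma
# (sketch; adjust the glob pattern and output path to your project layout)
from pathlib import Path

parts = sorted(Path(".").glob("*.part.prisma"))
Path("schema.prisma").write_text("\n\n".join(p.read_text() for p in parts))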
Most of these, if not all of them, are referenced in the currently open issue regarding support for Prisma schema file segmentation: https://github.com/prisma/prisma/issues/2377
This is not yet possible with Prisma. See this outstanding issue for possible workarounds https://github.com/prisma/prisma/issues/2377.
There is a library called Prismix that allows you to write as many schema files as you'd like; here is the link.

Vue multiple pages with a webworker

Using Vue CLI 3, I have a project using harp.gl where I need a web worker to decode map tiles.
My vue.config.js has the following:
module.exports = {
  pages: {
    app: {
      entry: './src/main.js',
      filename: 'index.html',
      title: 'Contextual Map HARP.GL/Vue',
    },
    decoder: {
      target: "webworker",
      entry: "./src/decoder.js",
      output: {
        filename: "[name].bundle.js",
      },
      devtool: 'source-map',
      ...
When I run this I have both the app and decoder.js running as a web worker of type "script" (when inspecting it using Chrome).
However, after upgrading to Vue CLI 4 the above code does not work: when inspecting it using Chrome, the web worker's type is text/html and it appears to serve the default index.html. It is almost as if target: "webworker" is not working the same as it did with version 3.
I am at a loss as to how to fix this; the move from Vue CLI 3 to 4 changed something, but I cannot figure out what to change.

Error when restoring model (Multiple OpKernel registrations match NodeDef)

I'm getting an error when attempting to restore a model from a checkpoint.
This is with the nightly Windows GPU build for Python 3.5 from 2017-06-13.
InvalidArgumentError (see above for traceback):
Multiple OpKernel registrations match NodeDef 'Decoder/decoder/GatherTree = GatherTree[T=DT_INT32, _device="/device:CPU:0"](Decoder/decoder/TensorArrayStack_1/TensorArrayGatherV3, Decoder/decoder/TensorArrayStack_2/TensorArrayGatherV3, Decoder/decoder/while/Exit_18)': 'op: "GatherTree" device_type: "GPU" constraint { name: "T" allowed_values { list { type: DT_INT32 } } }' and 'op: "GatherTree" device_type: "GPU" constraint { name: "T" allowed_values { list { type: DT_INT32 } } }'
[[Node: Decoder/decoder/GatherTree = GatherTree[T=DT_INT32, _device="/device:CPU:0"](Decoder/decoder/TensorArrayStack_1/TensorArrayGatherV3, Decoder/decoder/TensorArrayStack_2/TensorArrayGatherV3, Decoder/decoder/while/Exit_18)]]
The model is using dynamic_decode with beam search, which otherwise works fine in training mode when not using beam search for decoding.
Any ideas on what this means or how to debug it?
I also faced the same issue a day ago. It turns out it was a bug in TensorFlow. It's resolved now, and BeamSearchDecoder should work with the latest build of TensorFlow.

Webpack2 rules 'test'-expression doesn't load '?'-modules after migration

I want to be able to use two different loaders for '*.less' files: the default one for all of them, but a different one for '*.less?module' files.
In webpack 1.x it was possible through:
module: {
  rules: [
    {
      test: /\.less$|\.css$/,
      use: ...
    },
    {
      test: /\.less\?module$/,
      use: ...
    },
How can I do this in webpack 2 or 3?