I'm trying to add multiple model_specs and their respective inputs to a single predict_pb2.PredictRequest(), as follows:
tmp = predict_pb2.PredictRequest()
tmp.model_spec.name = '1'
tmp.inputs['tokens'].CopyFrom(make_tensor_proto([1,2,3]))
tmp.model_spec.name = '2'
tmp.inputs['tokens'].CopyFrom(make_tensor_proto([4,5,6]))
But I'm only getting model '2's information:
>> tmp
model_spec {
  name: "2"
}
inputs {
  key: "tokens"
  value {
    dtype: DT_INT32
    tensor_shape {
      dim {
        size: 3
      }
    }
    tensor_content: "\004\000\000\000\005\000\000\000\006\000\000\000"
  }
}
How can I get a single PredictRequest() for multiple models with their respective inputs?
My aim is to create a single request and send it to TensorFlow Serving, which is serving two models. Is there another way around this? Creating two separate requests for the two models and getting results from tf_serving one after the other works, but I'm wondering whether I can combine the two requests into one.
I'm afraid it's not possible: in tensorflow_serving/apis/predict.proto, each PredictRequest has exactly one ModelSpec, so a single request can only target one model. You would have to add some client-side code to send one request per model yourself.
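If sequential calls are acceptable, the client-side loop is simple. Here is a minimal sketch, assuming a TensorFlow Serving gRPC endpoint at localhost:8500 (host, port, and the model names '1' and '2' are placeholders taken from the question):
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# One request per model: a PredictRequest can only carry a single ModelSpec.
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

responses = []
for model_name, tokens in [('1', [1, 2, 3]), ('2', [4, 5, 6])]:
    request = predict_pb2.PredictRequest()
    request.model_spec.name = model_name
    request.inputs['tokens'].CopyFrom(tf.make_tensor_proto(tokens))
    responses.append(stub.Predict(request, 10.0))  # 10-second timeout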
Did you try using a model config file?
The contents of the config file can be as shown below:
model_config_list {
  config {
    name: 'my_first_model'
    base_path: '/tmp/my_first_model/'
  }
  config {
    name: 'my_second_model'
    base_path: '/tmp/my_second_model/'
  }
}
For more information, you can refer to the link below:
https://www.tensorflow.org/tfx/serving/serving_config
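With both models served from that config, each client request still targets one model, selected by the configured name. A brief sketch, reusing the imports from the sketch above (model names taken from the config):
request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_first_model'  # or 'my_second_model'
request.inputs['tokens'].CopyFrom(tf.make_tensor_proto([1, 2, 3]))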
I need help with a Terraform module that I've created. It works perfectly, but I need to add some automation.
The module creates multiple private endpoints, but I always have to enter the variable values manually.
This is the module:
resource "azurerm_private_endpoint" "endpoint" {
for_each = try({ for endpoint in var.endpoints : endpoint.name => endpoint }, toset([]))
name = each.key
location = var.location
resource_group_name = var.resource_group_name
subnet_id = each.value.subnet_id
dynamic "private_service_connection" {
for_each = each.value.private_service_connection
content {
name = each.key
private_connection_resource_id = private_service_connection.value.private_connection_resource_id
is_manual_connection = false
subresource_names = var.subresource_name ### see values on : https://learn.microsoft.com/fr-fr/azure/private-link/private-endpoint-overview#private-link-resource
}
}
lifecycle {
ignore_changes = [
private_dns_zone_group
]
}
tags = var.tags
}
I need to have:
1 - For the private endpoint name, the value should be generated automatically: "pendp-(the subresource_name value in lower case)-(my resource name, e.g. a MySQL server)".
2 - For the private connection name, the value should be generated automatically: "connection-(the subresource_name value in lower case)-(my resource name, e.g. a MySQL server)".
3 - Some automation to detect the subresource_name automatically (if I create a private endpoint for a blob, a MariaDB, or a MySQL server, the module should detect it).
Terraform version:
terraform {
  required_version = "~> 1"
  required_providers {
    azurerm = "~> 3.0"
  }
}
The easiest way to combine values automatically would be to use the Terraform string join() function to join multiple strings together. For lower case strings, you can use the lower() function.
Some examples:
name = join("-", ["pandp", lower(var.subresource_name)])
...
name = join("-", ["connection", lower(var.subresource_name), lower(each.key)])
For your third rule, you want to use a conditional expression to determine if it's a blob, or mariadb, or mysqlserver.
In this example, we set an example_name local with a value some-blob-value if var.subresource_name contains a string that starts with "blob", and set it to something-else if the condition is false:
locals {
  example_name = startswith(lower(var.subresource_name), "blob") ? "some-blob-value" : "something-else"
}
There are many options for writing a conditional that checks whether a value matches what you expect and then determines a result based on it. What exactly you want isn't clear from the question, but hopefully this gets you pointed in the right direction.
Terraform also has several helper functions that can help when you only need part of a string, such as startswith(), endswith(), or contains(), depending on your needs.
I need a workaround for using count.index inside a module block for some input variables. I have a habit of over-complicating problems, so maybe there's a much easier solution.
File/Folder Structure:
modules/
  main.tf
  ignition/
    main.tf
    modules/
      files/
        main.tf
      template_files/
        main.tf
End Goal: Create an Ignition file for each instance I'm deploying. Each Ignition file has instance-specific info like hostname, IP address, etc.
All of this code works if I use a static value or a variable without count.index. I need help coming up with a workaround for the address, gateway, and hostname variables specifically. If I need to process count.index inside one of the child modules, that's totally fine, but I can't seem to wrap my brain around that. I've tried null_data_source and null_resource blocks from the child modules to achieve it, but so far no luck.
Variables:
workers = {
  Lab1 = {
    "lab1k8sc8r001" = "192.168.17.100/24"
  }
  Lab2 = {
    "lab2k8sc8r001" = "192.168.18.100/24"
  }
}
gateway = {
  Lab1 = [
    "192.168.17.1",
  ]
  Lab2 = [
    "192.168.18.1",
  ]
}
From modules/main.tf, I'm calling the ignition module:
module "ignition_workers" {
source = "./modules/ignition"
virtual_machines = var.workers[terraform.workspace]
ssh_public_keys = var.ssh_public_keys
files = [
"files_90-disable-auto-updates.yaml",
"files_90-disable-console-logs.yaml",
]
template_files = {
"files_eth0.nmconnection.yaml" = {
interface-name = "eth0",
address = element(values(var.workers[terraform.workspace]), count.index),
gateway = element(var.gateway, count.index % length(var.gateway)),
dns = join(";", var.dns_servers),
dns-search = var.domain,
}
"files_etc_hostname.yaml" = {
hostname = element(keys(var.workers[terraform.workspace]), count.index),
}
"files_chronyd.yaml" = {
ntp_server = var.ntp_server,
}
}
}
From modules/ignition/main.tf I take the files and template_files variables to build the Ignition config:
module "ingition_file_snippets" {
source = "./modules/files"
files = var.files
}
module "ingition_template_file_snippets" {
source = "./modules/template_files"
template_files = var.template_files
}
data "ct_config" "fedora-coreos-config" {
count = length(var.virtual_machines)
content = templatefile("${path.module}/assets/files_ssh_authorized_keys.yaml", {
ssh_public_keys = var.ssh_public_keys
})
pretty_print = true
snippets = setunion(values(module.ingition_file_snippets.files), values(module.ingition_template_file_snippets.files))
}
I am not quite sure what you are trying to achieve, so I cannot give detailed examples.
But modules in Terraform do not support count or for_each (until Terraform 0.13), so you also cannot use count.index inside a module block.
You might want to change your module to take lists/maps as input and create those lists/maps via for expressions, transforming them from other input variables.
You can combine for with if to create a filtered subset of your source list/map, as in:
[for s in var.list : upper(s) if s != ""]
I hope this helps you work around the missing count support.
I had written Karate tests for one environment only (staging). Since the tests have been successful at catching bugs (thanks a lot, Karate and Intuit team!), there is now a request to run them against production.
Our tests are GraphQL-based, and most of the requests are queries. Is it possible to switch variables based on the karate.env we pass on the terminal?
Most of our requests look like this:
And def variables = {objectID:"1234566", cursor:"1", cursorType:PAGE, size:'10', objectType:USER}
And request { query: '#(query)', variables: '#(variables)' }
When method POST
Then status 200
I tried reading the conditional-logic page on the GitHub docs but haven't had any success yet.
What I tried so far is:
* if (karate.env == 'staging') * def variables = {objectID:"1234566", cursor:"1", cursorType:PAGE, size:'10', objectType:USER}
But to no success.
Any help will be greatly appreciated. Thanks a lot!
We keep our GraphQL queries and variables in separate JSON files, but we're attempting to solve the same issue. Based on what Peter wrote, I came up with this, though it will likely get cleaned up before deployment.
Given def query = read('graphqlQuery.graphql')
And def prodVariable = read('prod-variables.json')
And def stageVariable = read('stage-variables.json')
And def variables = karate.env == 'prod' ? prodVariable : stageVariable
And path 'api/' + 'graphql'
And request { query: '#(query)', variables: '#(variables)' }
When method post
Then status 200
This should be easy:
* def variables = karate.env == 'staging' ? { objectID: "1234566", cursor: "1", cursorType: 'PAGE', size: '10', objectType: 'USER' } : { }
Here is another hint:
* def data = { staging: { foo: 'bar' }, production: { foo: 'baz' } }
* def variables = data[karate.env]
EDIT: also see this explanation: https://stackoverflow.com/a/59162760/143475
I have a cluster.json file that looks like this:
{
  "__default__": {
    "queue": "normal",
    "memory": "12288",
    "nCPU": "1",
    "name": "{rule}_{wildcards.sample}",
    "o": "logs/cluster/{wildcards.sample}/{rule}.o",
    "e": "logs/cluster/{wildcards.sample}/{rule}.e",
    "jvm": "10240m"
  },
  "aln_pe": {
    "memory": "61440",
    "nCPU": "16"
  },
  "GenotypeGVCFs": {
    "jvm": "102400m",
    "memory": "122880"
  }
}
In my Snakefile I have a few rules that try to access the cluster_config object in their params:
params:
    memory=cluster_config['__default__']['jvm']
But this will give me a 'KeyError'
KeyError in line 27 of home/bwubb/projects/Germline/S0330901/haplotype.snake:
'__default__'
Does this have something to do with '__default__' being a special object? It pprints as a visually appealing dictionary, whereas the others are labeled OrderedDict, but when I look at the JSON they all look the same.
If nothing is wrong with my JSON, should I refrain from accessing '__default__'?
The default values are accessed via the keyword "cluster", not "__default__".
See this example in the tutorial:
{
  "__default__": {
    "account": "my account",
    "time": "00:15:00",
    "n": 1,
    "partition": "core"
  },
  "compute1": {
    "time": "00:20:00"
  }
}
The JSON shown above is the one being accessed in this example; unfortunately, the config and the jobscript are not documented on the same page.
To access time, J.K. uses the following call:
#!/usr/bin/env python3
import os
import sys

from snakemake.utils import read_job_properties

jobscript = sys.argv[1]
job_properties = read_job_properties(jobscript)

# do something useful with the threads
threads = job_properties["threads"]

# access a property defined in the cluster configuration file (Snakemake >= 3.6.0)
time = job_properties["cluster"]["time"]

os.system("qsub -t {threads} {script}".format(threads=threads, script=jobscript))
I want to create and train a model, export it and run inference in C++.
I'm following the tutorial listed here: https://www.tensorflow.org/tutorials/wide_and_deep
I'm also trying to use the SavedModel approach as described here since this is the canonical way to export TensorFlow graphs for serving:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md.
At the very end, I export the saved model as follows:
feature_spec = tf.contrib.layers.create_feature_spec_for_parsing(feature_columns)
serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)
output = model.export_savedmodel(model_dir, serving_input_fn, as_text=True)
print('Model saved to {}'.format(output))
I see the saved_model.pbtxt has the following signature definition.
signature_def {
  key: "serving_default"
  value {
    inputs {
      key: "inputs"
      value {
        name: "input_example_tensor:0"
        dtype: DT_STRING
        tensor_shape {
          dim {
            size: -1
          }
        }
      }
    }
    outputs {
    ...
I can load the saved model on the C++ side:
SavedModelBundle bundle;
const std::string graph_path = "models/1498572863";
const std::unordered_set<std::string> tags = {"serve"};
Status status = LoadSavedModel(session_options, run_options,
                               graph_path, tags, &bundle);
I'm stuck at the last part where I need to feed the input into this model.
The Run function expects the input parameter to be of the form: std::vector<std::pair<string, Tensor>>.
I would have expected this to be a vector of pairs where the key is the feature name used in the python code and the Tensor is multiple values for that feature.
However, it seems to expect the string to be "input_example_tensor".
I'm not sure how I'm supposed to now feed the model with different features using a single Tensor.
std::vector<string> output_tensor_names = {
    "binary_logistic_head/_classification_output_alternatives/classes_tensor"};
// How do I create input_tensor?
status = bundle.session->Run({{"input_example_tensor", input_tensor}},
                             output_tensor_names, {}, &outputs);
Solution
I did something like this:
tensorflow::Example example;
auto& tf_feature_map = *(example.mutable_features()->mutable_feature());
tf_feature_map["name"].mutable_int64_list()->add_value(15);
const std::string& serialized = example.SerializeAsString();
tensorflow::Input input({serialized});
status = bundle.session->Run({{"input_example_tensor", input.tensor()}},
                             output_tensor_names, {}, &outputs);
Your model signature suggests that it is expecting a DT_STRING tensor as input. When using tensorflow::Example, this typically means that the protocol buffer needs to be serialized into a tensor with a string as the type of its elements.
To convert the tensorflow::Example object to a string, you can use protocol buffer serialization methods such as SerializeToString() or SerializeAsString().
Hope that helps.
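One way to sanity-check the feed before porting it to C++ is to run the same serialized tf.Example through the SavedModel in Python. A rough sketch, assuming the TF 1.x API, the tensor names from the signature above, and the illustrative feature name "name" from the C++ snippet:
import tensorflow as tf

# Build and serialize the same tf.Example the C++ code constructs.
example = tf.train.Example(features=tf.train.Features(feature={
    'name': tf.train.Feature(int64_list=tf.train.Int64List(value=[15])),
}))
serialized = example.SerializeToString()

# Load the exported SavedModel and feed the serialized example by tensor name.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'models/1498572863')
    outputs = sess.run(
        'binary_logistic_head/_classification_output_alternatives/classes_tensor:0',
        feed_dict={'input_example_tensor:0': [serialized]})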