How to create a model for a CSV data set and calculate a predicted result using TensorFlow.js

I am new to TensorFlow.js.
I followed the TensorFlow.js documentation to create a model, train it, and calculate a predicted result from it.
But I don't know how to train the model on a CSV file and calculate the predicted result for two or more columns in the CSV file.
Could anybody guide me with a sample that creates and trains a model from a CSV file and calculates the predicted result?
const csvUrl = 'https://storage.googleapis.com/tfjs-examples/multivariate-linear-regression/data/boston-housing-train.csv';

function save(model) {
  // 'downloads://' triggers a browser download of the model files.
  return model.save('downloads://boston_model');
}

function load() {
  // Note: this reads from IndexedDB, so the model must have been saved with
  // 'indexeddb://boston_model' for this call to find it.
  return tf.loadLayersModel('indexeddb://boston_model');
}

async function run() {
  // We want to predict the column "medv", which represents a median value of a
  // home (in $1000s), so we mark it as a label.
  const csvDataset = tf.data.csv(csvUrl, {
    columnConfigs: {
      medv: {
        isLabel: true
      }
    }
  });

  // Number of features is the number of column names minus one for the label
  // column.
  const numOfFeatures = (await csvDataset.columnNames()).length - 1;

  // Prepare the Dataset for training.
  const flattenedDataset =
    csvDataset
      .map(([rawFeatures, rawLabel]) =>
        // Convert rows from object form (keyed by column name) to array form.
        [Object.values(rawFeatures), Object.values(rawLabel)])
      .batch(10);

  // Define the model.
  const model = tf.sequential();
  model.add(tf.layers.dense({
    inputShape: [numOfFeatures],
    units: 1
  }));
  model.compile({
    optimizer: tf.train.sgd(0.000001),
    loss: 'meanSquaredError'
  });

  // Fit the model using the prepared Dataset. Await it so the save below only
  // happens after training has finished.
  await model.fitDataset(flattenedDataset, {
    epochs: 10,
    callbacks: {
      onEpochEnd: async (epoch, logs) => {
        console.log(epoch, logs.loss);
      }
    }
  });

  await save(model);
}
run().then(() => console.log('Done'));
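To compute a predicted result with the trained model, something along these lines should work. This is a minimal sketch assuming the model and numOfFeatures from the snippet above; predictMedv and the feature values you pass in are placeholders, not part of the original example.
async function predictMedv(model, featureRow) {
  // featureRow: plain array with one number per feature column
  // (numOfFeatures values, in the same column order as the CSV).
  const input = tf.tensor2d([featureRow]);   // shape [1, numOfFeatures]
  const output = model.predict(input);       // shape [1, 1]
  const [predictedMedv] = await output.data();
  input.dispose();
  output.dispose();
  return predictedMedv;
}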

Using tf.data.csv you can train a model with a CSV file.
However, the browser cannot directly read a local file, so you have to serve the CSV file from a local server.
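As a minimal sketch of such a server (assuming Node.js is available; the file name train.csv and port 8080 are placeholders), one way to serve the CSV with CORS enabled is:
const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
  // Always answer with the CSV file, whatever path is requested.
  res.writeHead(200, {
    'Content-Type': 'text/csv',
    'Access-Control-Allow-Origin': '*'   // let the page fetch it cross-origin
  });
  fs.createReadStream(path.join(__dirname, 'train.csv')).pipe(res);
}).listen(8080, () => console.log('CSV available at http://localhost:8080/'));
You would then point tf.data.csv at http://localhost:8080/ instead of the remote csvUrl.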
Update
Your model uses only a single perceptron. Using multiple perceptrons, i.e. adding multiple layers, can help improve the accuracy of the model, as sketched below. You can have a look here at how it is done.
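For instance, a sketch of the same model with two hidden layers; the layer sizes and activations here are arbitrary choices, not values tuned for this dataset:
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [numOfFeatures], units: 50, activation: 'sigmoid' }));
model.add(tf.layers.dense({ units: 50, activation: 'sigmoid' }));
model.add(tf.layers.dense({ units: 1 }));   // single output for the regressed "medv" value
model.compile({ optimizer: tf.train.sgd(0.000001), loss: 'meanSquaredError' });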

Related

Error when checking : expected input to have shape [null,300,300,3] but got array with shape [1,300,300,4]

I'm using tfjs-node for loading a model and predicting results in my Node.js application. It was providing decent results, but for some images the following error was shown:
Error when checking : expected input to have shape [null,300,300,3] but got array with shape [1,300,300,4].
Code for loading and predicting results:
const loadModel = async (imagePath) => {
  const image = fs.readFileSync(imagePath);
  let tensor = tf.node.decodeImage(image);
  const resizedImage = tensor.resizeNearestNeighbor([300, 300]);
  const batchedImage = resizedImage.expandDims(0);
  const input = batchedImage.toFloat().div(tf.scalar(255));
  const model = await tf.loadLayersModel(
    process.env.ML_MODEL_PATH || "file://./ml-model/model.json"
  );
  let predictions = await model.predict(input).data();
  predictions = Array.from(predictions);
};
How to fix this? Thanks.
Use
let tensor = tf.node.decodeImage(image, 3);
It will drop the alpha channel if the image format allows transparency, such as PNG.
TensorFlow documentation source.
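A quick way to see the difference is to log the decoded shapes; a sketch, where image is the buffer read from disk as in the question:
const raw = tf.node.decodeImage(image);      // a PNG with transparency decodes to [height, width, 4]
console.log(raw.shape);
const rgb = tf.node.decodeImage(image, 3);   // forcing 3 channels drops the alpha channel: [height, width, 3]
console.log(rgb.shape);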
Another possibility is that you had a CMYK image, for which I wouldn't know the answer, except that you may try converting it to RGB.

How to switch between policies in multiagent rllib?

For my thesis, I want to train a PPO model in a collaborative multi-agent setting. The 'teammates' are already-trained Behavior Cloning models, each representing a different kind of behavior. These models will not be trained simultaneously with the PPO. I want the PPO to encounter all these different models during training, meaning I want to switch out the 'teammate' of the PPO every x epochs. How do I do this?
I am using the overcooked-ai environment. All the code would be too much to include, but the generation of the trainer ends up returning this:
trainer = PPOTrainer(env="overcooked_multi_agent", config={
    "multiagent": multi_agent_config,
    "callbacks": TrainingCallbacks,
    "custom_eval_function": get_rllib_eval_function(evaluation_params, environment_params['eval_mdp_params'],
                                                    environment_params['env_params'],
                                                    environment_params["outer_shape"], 'ppo',
                                                    'ppo' if self_play else 'bc',
                                                    verbose=params['verbose']),
    "env_config": environment_params,
    "eager": False,
    **training_params
}, logger_creator=custom_logger_creator)
and the multi-agent config is created like this:
def gen_policy(policy_type="ppo"):
    # supported policy types thus far
    assert policy_type in ["ppo", "bc"]
    if policy_type == "ppo":
        config = {
            "model": {
                "custom_options": model_params,
                "custom_model": "MyPPOModel"
            }
        }
        return (None, env.ppo_observation_space, env.action_space, config)
    elif policy_type == "bc":
        bc_cls = bc_params['bc_policy_cls']
        bc_config = bc_params['bc_config']
        return (bc_cls, env.bc_observation_space, env.action_space, bc_config)

# Rllib compatible way of setting the directory we store agent checkpoints in
logdir_prefix = "{0}_{1}_{2}".format(params["experiment_name"], params['training_params']['seed'], timestr)

def custom_logger_creator(config):
    """Creates a Unified logger that stores results in <params['results_dir']>/<params["experiment_name"]>_<seed>_<timestamp>
    """
    results_dir = params['results_dir']
    if not os.path.exists(results_dir):
        try:
            os.makedirs(results_dir)
        except Exception as e:
            print("error creating custom logging dir. Falling back to default logdir {}".format(DEFAULT_RESULTS_DIR))
            results_dir = DEFAULT_RESULTS_DIR
    logdir = tempfile.mkdtemp(
        prefix=logdir_prefix, dir=results_dir)
    logger = UnifiedLogger(config, logdir, loggers=None)
    return logger

# Create rllib compatible multi-agent config based on params
multi_agent_config = {}
all_policies = ['ppo']

# Whether both agents should be learned
self_play = iterable_equal(multi_agent_params['bc_schedule'], OvercookedMultiAgent.self_play_bc_schedule)
if not self_play:
    all_policies.append('bc')

multi_agent_config['policies'] = {policy: gen_policy(policy) for policy in all_policies}

def select_policy(agent_id):
    if agent_id.startswith('ppo'):
        return 'ppo'
    if agent_id.startswith('bc'):
        return 'bc'

multi_agent_config['policy_mapping_fn'] = select_policy
multi_agent_config['policies_to_train'] = 'ppo'
Do I need to change my configuration upfront to include multiple BC models (the code above is built to handle one), or can I somehow swap out the models in the training loop?
Any ideas would be super helpful!

Forge Data Visualization not working on Revit rooms [ITA]

I followed the tutorials from the Forge Data Visualization extension documentation: https://forge.autodesk.com/en/docs/dataviz/v1/developers_guide/quickstart/ on a Revit file. I used the generateMasterViews option to translate the model and I can see the Rooms in the viewer; however, I have problems coloring the surfaces of the floors: it seems that the ModelStructureInfo has no rooms.
The result of the ModelStructureInfo on the viewer.model is:
t {model: d, rooms: null}
Here is my code; I added the Italian (ITA) localized version of Rooms as the 3rd parameter ("Locali"):
const dataVizExtn = await this.viewer.loadExtension("Autodesk.DataVisualization");

// Model Structure Info
let viewerDocument = this.viewer.model.getDocumentNode().getDocument();
const aecModelData = await viewerDocument.downloadAecModelData();
let levelsExt;
if (aecModelData) {
  levelsExt = await this.viewer.loadExtension("Autodesk.AEC.LevelsExtension", {
    doNotCreateUI: true
  });
}

// get FloorInfo
const floorData = levelsExt.floorSelector.floorData;
const floor = floorData[2];
levelsExt.floorSelector.selectFloor(floor.index, true);

const model = this.viewer.model;
const structureInfo = new Autodesk.DataVisualization.Core.ModelStructureInfo(model);
let levelRoomsMap = await structureInfo.getLevelRoomsMap();
let rooms = levelRoomsMap.getRoomsOnLevel("2 - P2", false);

// Generates `SurfaceShadingData` after assigning each device to a room (Rooms --> Locali).
const shadingData = await structureInfo.generateSurfaceShadingData(devices, undefined, "Locali");

// Use the resulting shading data to generate the heatmap from.
await dataVizExtn.setupSurfaceShading(model, shadingData, {
  type: "PlanarHeatmap",
  placePosition: "min",
  usingSlicing: true,
});

// Register a few color stops for sensor values in range [0.0, 1.0]
const sensorType = "Temperature";
const sensorColors = [0x0000ff, 0x00ff00, 0xffff00, 0xff0000];
dataVizExtn.registerSurfaceShadingColors(sensorType, sensorColors);

// Function that provides a [0,1] value for the planar heatmap
function getSensorValue(surfaceShadingPoint, sensorType, pointData) {
  const { x, y } = pointData;
  const sensorValue = computeSensorValue(x, y);
  return clamp(sensorValue, 0.0, 1.0);
}

dataVizExtn.renderSurfaceShading(floor.name, sensorType, getSensorValue);
How can I solve this issue? Is there something else to do when using a different localization?
Here is a snapshot of what I get from the console:
Which viewer version are you using? There was an issue causing ModelStructureInfo to produce an incorrect LevelRoomsMap, but it has been fixed now. Please use v7.43.0 and try again. Here is the snapshot of my test:
BTW, if you see t {model: d, rooms: null} while constructing the ModelStructureInfo, that's alright, since the room data will only be produced after you call ModelStructureInfo#getLevelRoomsMap or ModelStructureInfo#getRoomList.
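In other words, a minimal sketch of the expected flow (the level name is taken from the question, and this assumes you are inside an async context as in your own code):
const structureInfo = new Autodesk.DataVisualization.Core.ModelStructureInfo(this.viewer.model);
// `rooms` stays null until one of these async calls is awaited.
const levelRoomsMap = await structureInfo.getLevelRoomsMap();
console.log(levelRoomsMap.getRoomsOnLevel("2 - P2", false));
// or, to get a flat list of rooms:
const roomList = await structureInfo.getRoomList();
console.log(roomList);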

How to pass list/array of request objects to tensorflow serving in one server call?

After loading the wide and deep model, I was able to make a prediction for one request object, using a map of features and then serializing it to a string, as shown below.
Is there a way we can create a batch of request objects and send them to TensorFlow Serving for prediction in one call?
The code for a single prediction looks like this:
for (each feature in feature list) {
    Feature feature = null;
    feature = Feature.newBuilder()
        .setBytesList(BytesList.newBuilder().addValue(ByteString.copyFromUtf8("dummy string")))
        .build();
    if (feature != null) {
        inputFeatureMap.put(name, feature);
    }
}

// Converting features (in inputFeatureMap) corresponding to one request into a 'Features' proto object
Features features = Features.newBuilder().putAllFeature(inputFeatureMap).build();
inputStr = Example.newBuilder().setFeatures(features).build().toByteString();

TensorProto proto = TensorProto.newBuilder()
    .addStringVal(inputStr)
    .setTensorShape(TensorShapeProto.newBuilder().addDim(TensorShapeProto.Dim.newBuilder().setSize(1).build()).build())
    .setDtype(DataType.DT_STRING)
    .build();

PredictRequest req = PredictRequest.newBuilder()
    .setModelSpec(ModelSpec.newBuilder()
        .setName("your serving model name")
        .setSignatureName("serving_default")
        .setVersion(Int64Value.newBuilder().setValue(modelVer)))
    .putAllInputs(ImmutableMap.of("inputs", proto))
    .build();

PredictResponse response = stub.predict(req);
System.out.println(response.getOutputsMap());
Is there a way we can send a list of Features objects for prediction, something similar to this:
List<Features> = {some way to create an array/list of inputFeatureMaps which can be converted to serialized strings}
For anyone stumbling on this, I found a simple workaround with the Example proto to do a batch request. I will borrow the code from this question and modify it for the batch.
Features features =
    Features.newBuilder()
        .putFeature("Attribute1", feature("A12"))
        .putFeature("Attribute2", feature(12))
        .putFeature("Attribute3", feature("A32"))
        .putFeature("Attribute4", feature("A40"))
        .putFeature("Attribute5", feature(7472))
        .putFeature("Attribute6", feature("A65"))
        .putFeature("Attribute7", feature("A71"))
        .putFeature("Attribute8", feature(1))
        .putFeature("Attribute9", feature("A92"))
        .putFeature("Attribute10", feature("A101"))
        .putFeature("Attribute11", feature(2))
        .putFeature("Attribute12", feature("A121"))
        .putFeature("Attribute13", feature(24))
        .putFeature("Attribute14", feature("A143"))
        .putFeature("Attribute15", feature("A151"))
        .putFeature("Attribute16", feature(1))
        .putFeature("Attribute17", feature("A171"))
        .putFeature("Attribute18", feature(1))
        .putFeature("Attribute19", feature("A191"))
        .putFeature("Attribute20", feature("A201"))
        .build();
Example example = Example.newBuilder().setFeatures(features).build();

String pfad = System.getProperty("user.dir") + "\\1511523781";
try (SavedModelBundle model = SavedModelBundle.load(pfad, "serve")) {
    Session session = model.session();
    final String xName = "input_example_tensor";
    final String scoresName = "dnn/head/predictions/probabilities:0";

    try (Tensor<String> inputBatch = Tensors.create(new byte[][] {example.toByteArray(), example.toByteArray(), example.toByteArray(), example.toByteArray()});
         Tensor<Float> output =
             session
                 .runner()
                 .feed(xName, inputBatch)
                 .fetch(scoresName)
                 .run()
                 .get(0)
                 .expect(Float.class)) {
        System.out.println(Arrays.deepToString(output.copyTo(new float[4][2])));
    }
}
Essentially, you pass each example as one element of the byte[4][] (four serialized Example protos here), and you receive the result in the corresponding shape, float[4][2].

How to feed inputs into a loaded Tensorflow model using C++

I want to create and train a model, export it and run inference in C++.
I'm following the tutorial listed here: https://www.tensorflow.org/tutorials/wide_and_deep
I'm also trying to use the SavedModel approach as described here since this is the canonical way to export TensorFlow graphs for serving:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md.
At the very end, I export the saved model as follows:
feature_spec = tf.contrib.layers.create_feature_spec_for_parsing(feature_columns)
serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)
output = model.export_savedmodel(model_dir, serving_input_fn, as_text=True)
print('Model saved to {}'.format(output))
I see the saved_model.pbtxt has the following signature definition.
signature_def {
  key: "serving_default"
  value {
    inputs {
      key: "inputs"
      value {
        name: "input_example_tensor:0"
        dtype: DT_STRING
        tensor_shape {
          dim {
            size: -1
          }
        }
      }
    }
    outputs {
    ...
I can load the saved model on the C++ side
SavedModelBundle bundle;
const std::string graph_path = "models/1498572863";
const std::unordered_set<std::string> tags = {"serve"};
Status status = LoadSavedModel(session_options,
                               run_options, graph_path,
                               tags, &bundle);
I'm stuck at the last part where I need to feed the input into this model.
The Run function expects the input parameter to be of the form: std::vector<std::pair<string, Tensor>>.
I would have expected this to be a vector of pairs where the key is the feature name used in the python code and the Tensor is multiple values for that feature.
However, it seems to expect the string to be "input_example_tensor".
I'm not sure how I'm supposed to now feed the model with different features using a single Tensor.
std::vector<string> output_tensor_names = {
    "binary_logistic_head/_classification_output_alternatives/classes_tensor"};
// How do I create input_tensor?
status = bundle.session->Run({{"input_example_tensor", input_tensor}},
                             output_tensor_names, {}, &outputs);
Solution
I did something like this
tensorflow::Example example;
auto& tf_feature_map = *(example.mutable_features()->mutable_feature());
tf_feature_map["name"].mutable_int64_list()->add_value(15);
const std::string& serialized = example.SerializeAsString();
tensorflow::Input input({serialized});
status = bundle.session->Run({{"input_example_tensor", input.tensor()}},
                             output_tensor_names, {}, &outputs);
Your model signature suggests that it is expecting a DT_STRING tensor as input. When using tensorflow::Example, this typically means that the protocol buffer needs to be serialized into a tensor with a string as the type of its elements.
To convert the tensorflow::Example object to a string, you can use the protocol buffer methods such as SerializeToString or SerializeAsString.
Hope that helps.