I know that we can get schema information of tables using INFORMATION_SCHEMA.TABLES. Is there a similar method for BigQuery Models?
You can inspect everything related to a model's evaluation metrics by using ML.EVALUATE:
https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-evaluate-overview
The ML.TRIAL_INFO function displays information about the trials from a hyperparameter tuning model:
https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-trial-info
There are many other functions for inspecting a model using SQL; you can learn more about them here:
https://cloud.google.com/bigquery-ml/docs/reference#bigquery-ml-standard-sql-reference
BigQuery ML doesn't support INFORMATION_SCHEMA for models yet.
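For reference, a minimal sketch of calling ML.EVALUATE and ML.TRIAL_INFO from Python with the google-cloud-bigquery client; the dataset and model names (mydataset.mymodel) are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# Evaluation metrics computed on the data reserved during training.
evaluate_sql = """
SELECT *
FROM ML.EVALUATE(MODEL `mydataset.mymodel`)
"""
for row in client.query(evaluate_sql).result():
    print(dict(row))

# Per-trial details; only meaningful for hyperparameter tuning models.
trial_sql = """
SELECT *
FROM ML.TRIAL_INFO(MODEL `mydataset.mymodel`)
"""
for row in client.query(trial_sql).result():
    print(dict(row))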
I have some data in my MySQL database that I want to make predictions on. Now, I also have a model developed using TensorFlow on similar data to make these predictions.
I want to use the power of in-database machine learning to make my predictions, and I am thinking of using MindsDB for this purpose.
I have already used MindsDB for another use case, where I trained a model on the data in my database and then used it for making predictions. But is it possible to use my pre-developed model to make predictions? If so, how do I do it?
Some example code would be greatly appreciated.
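I can't speak to MindsDB's hooks for pre-trained models, but for orientation, this is roughly the flow you would be reproducing outside the database in plain Python; the connection string, table, and SavedModel path are all placeholders:

import pandas as pd
import tensorflow as tf
from sqlalchemy import create_engine

# Hypothetical MySQL connection and feature table.
engine = create_engine("mysql+pymysql://user:password@localhost/mydb")
features = pd.read_sql("SELECT feature_a, feature_b, feature_c FROM readings", engine)

# Load the pre-developed TensorFlow model (SavedModel / Keras format assumed).
model = tf.keras.models.load_model("./my_model")

predictions = model.predict(features.to_numpy(dtype="float32"))
print(predictions[:5])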
I am using the TF Object Detection API. I have a custom data set. I am training using SLURM jobs and calling the API scripts from within them. I am looking to tune the hyperparameters found in the pipeline.config files. Unfortunately, this kind of process is not outlined in the documentation. It seems like the process is to either use the sample configs or tune the hyperparameters by hand.
Tuning by hand is somewhat feasible: for example, sweeping two parameters (batch size and steps) over three values each results in nine different .config files, but adding another hyperparameter boosts that to twenty-seven files I need to keep track of. This does not seem like a good way to do it, particularly because it limits the values I can try and is clumsy.
It seems like there are libraries out there that hook into Keras and other more high-level frameworks, but I have found nothing that looks like it can take the results of the Object Detection API and actually optimize it.
Is it possible to do this with a pre-built library I don't know about? I would like to avoid having to edit the API implementation or coding this myself to minimize errors.
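For what it's worth, a minimal sketch of generating the grid of configs programmatically instead of maintaining them by hand, assuming the config_util helpers that ship with the Object Detection API; the base config path, output directory, and value grids are placeholders:

import itertools
import os

from object_detection.utils import config_util

BASE_CONFIG = "configs/base/pipeline.config"   # hypothetical starting point
OUT_ROOT = "configs/grid"

batch_sizes = [8, 16, 32]
step_counts = [25000, 50000, 100000]

for bs, steps in itertools.product(batch_sizes, step_counts):
    configs = config_util.get_configs_from_pipeline_file(BASE_CONFIG)
    configs["train_config"].batch_size = bs
    configs["train_config"].num_steps = steps

    pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
    out_dir = os.path.join(OUT_ROOT, f"bs{bs}_steps{steps}")
    os.makedirs(out_dir, exist_ok=True)
    # Writes <out_dir>/pipeline.config; each directory can double as the
    # model_dir for one SLURM array task.
    config_util.save_pipeline_config(pipeline_proto, out_dir)

Each SLURM array task can then pick its directory from SLURM_ARRAY_TASK_ID and pass the corresponding pipeline.config to the training script.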
I'm trying to incorporate image normalization in my Keras model to run on Google's Cloud TPU. Therefore I inserted a line into my code:
import tensorflow as tf
from tensorflow.keras.layers import Input

# `strategy` is the TPUStrategy created earlier in the script.
with strategy.scope():
    input_shape = (128, 128, 3)
    image_0 = Input(shape=input_shape)
    image_1 = tf.image.per_image_standardization(image_0)  # <-- the inserted line
    ...
No error was thrown, but according to Google's documentation, tf.image.per_image_standardization is not a supported function. Does anybody know if it works anyway, or have an idea how to check whether it works?
From the TensorFlow Model Garden reference for ResNet, the mean and standard deviation of a dataset are often calculated beforehand and each batch is standardized by subtracting the mean and dividing by the standard deviation. See here for a reference (this uses ImageNet statistics).
I would suggest creating a separate script that calculates the mean and standard deviation of your dataset and doing the same. Could you also point to the documentation where tf.image.per_image_standardization is listed as not supported? I don't see why it wouldn't work, but you shouldn't apply it as a layer like in the provided code snippet. It should be in the data preprocessing pipeline, like in the above reference.
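A minimal sketch of both options applied in the input pipeline rather than inside the model; the channel statistics below are the commonly quoted ImageNet values and the dataset is a placeholder:

import tensorflow as tf

# Commonly quoted ImageNet channel statistics, on a [0, 255] pixel scale.
MEAN_RGB = tf.constant([123.68, 116.78, 103.94], shape=[1, 1, 3])
STDDEV_RGB = tf.constant([58.40, 57.12, 57.38], shape=[1, 1, 3])

def preprocess(image, label):
    image = tf.cast(image, tf.float32)
    # Option 1: precomputed dataset statistics (swap in your own mean/std).
    image = (image - MEAN_RGB) / STDDEV_RGB
    # Option 2 (instead of the above): per-image standardization.
    # image = tf.image.per_image_standardization(image)
    return image, label

# Placeholder dataset of (image, label) pairs.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.zeros([8, 128, 128, 3], tf.uint8), tf.zeros([8], tf.int32)))
dataset = dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE).batch(8)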
I just got into Rasa to make a chatbot and wanted to know if there is some way to train Rasa Core on datasets (preferably subreddit datasets), either on its own or using TensorFlow or something.
Thanks in advance.
Some ideas:
Core training data in Rasa follows a specific story format, which to date requires a predefined, closed set of intents and actions to operate. You could use data from a dataset for NLU intent examples and train a model that way, provided you format it correctly. You could also use data from a dataset to define responses for retrieval actions, if you want to generate natural-sounding replies.
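As a concrete sketch of the NLU route, this is roughly how you could turn labelled subreddit posts into Rasa 2.x-style NLU training data; the example texts and intent names are made up:

# Hypothetical (text, intent) pairs pulled from a subreddit dump and hand-labelled.
labelled = [
    ("any tips for learning python quickly?", "ask_advice"),
    ("what laptop should I buy for coding?", "ask_advice"),
    ("thanks, that really helped!", "thank"),
]

by_intent = {}
for text, intent in labelled:
    by_intent.setdefault(intent, []).append(text)

# Emit Rasa 2.x-style NLU training data (e.g. data/nlu.yml).
lines = ['version: "2.0"', "nlu:"]
for intent, texts in by_intent.items():
    lines.append(f"- intent: {intent}")
    lines.append("  examples: |")
    lines.extend(f"    - {t}" for t in texts)

with open("nlu.yml", "w") as f:
    f.write("\n".join(lines) + "\n")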
I am trying to implement a dynamic network, one that is able to change its structure according to the input data. Here is an example: https://arxiv.org/pdf/1511.02799v3.pdf
I wonder if it is possible to use TensorFlow to implement a dynamic network?
I think we may need to use placeholders to control the network?
Thank you very much.
TensorFlow Fold was announced a few months after your question; it is a somewhat roundabout way to do this. I have heard other libraries like MXNet will support this too.
https://research.googleblog.com/2017/02/announcing-tensorflow-fold-deep.html
You might want to check out DyNet for true dynamic graphs.
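Neither TensorFlow Fold nor DyNet is shown here, but as a minimal sketch of the underlying idea in stock TensorFlow (written with current TF 2 APIs rather than the placeholder-era API in the question), here is a network whose depth is decided by a value carried in the data, via tf.while_loop; TensorFlow Fold and DyNet generalize this to arbitrary per-example structures such as parse trees:

import tensorflow as tf

# Weights shared across every application of the "layer".
W = tf.Variable(tf.random.normal([8, 8]), name="shared_weight")

@tf.function
def dynamic_forward(x, depth):
    """Apply the shared layer `depth` times, where `depth` is a tensor decided by the data."""
    i = tf.constant(0)

    def cond(i, h):
        return i < depth

    def body(i, h):
        return i + 1, tf.nn.relu(tf.matmul(h, W))

    _, h = tf.while_loop(cond, body, [i, x])
    return h

x = tf.random.normal([4, 8])                      # batch of 4 examples
print(dynamic_forward(x, tf.constant(3)).shape)   # (4, 8)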