Confused about Tensorflow Algorithm function - tensorflow

Colab notebook
Under the section on Feature Columns, there is this specific snippet of code:
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
    vocabulary = dftrain[feature_name].unique()
I'm struggling to understand what this is doing, and I don't really know what to search for either, as I'm still quite new to programming. Why is there a need for this line? I understand that it outputs all unique values of the specified feature_name, but I don't get how it's linked to the next line.

When you don't understand a function, just google the module name (TensorFlow) and the function name. I found the documentation for tf.feature_column.categorical_column_with_vocabulary_list described here. To quote the documentation:
Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored.
What this section of code is doing is going through each column and mapping each unique string value to a unique integer (its position in the vocabulary list). Transforming a column with this kind of mapping is common for categorical data. The reason unique is needed is that tf.feature_column.categorical_column_with_vocabulary_list requires a list of unique values as an argument before it can work its magic.
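For context, the next line in the notebook presumably appends a vocabulary column for each feature, something like this sketch (the exact notebook code may differ slightly):
import tensorflow as tf

feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
    # unique() gives the deduplicated vocabulary that the column requires
    vocabulary = dftrain[feature_name].unique()
    feature_columns.append(
        tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))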
In the future please put all necessary code in the question. It should not be required to visit another link to answer your question.

Related

Automatically detect security identifier columns using Visions

I'm interested in using the Visions library to automate the process of identifying certain types of security (stock) identifiers. The documentation mentions that it could be used in such a way for ISBN codes but I'm looking for a more concrete example of how to do it. I think the process would be pretty much identical for the fields I'm thinking of as they all have check digits (ISIN, SEDOL, CUSIP).
My general idea is that I would create custom types for the different identifier types and could use those types to:
1) Take a dataframe where the types are unknown and identify columns matching the types (even if it's not a 100% match)
2) Validate the types on a dataframe where the intended type is known
Great question and use-case! Unfortunately, the documentation on making new types probably needs a little love right now, as there were API-breaking changes with the 0.7.0 release. Both the previous link and this post from August 2020 should cover the conceptual idea of type creation in greater detail. If any of those examples break, then mea culpa and our apologies; we switched to a dispatch-based implementation to support different backends (pandas, numpy, dask, spark, etc.) for each type. You shouldn't have to worry about that for now, but if you're interested you can find the default type definitions here, with their backends here.
Building an ISBN Type
We need to make two basic decisions when defining a type:
What defines the type?
What other types is our new type related to?
For the ISBN use-case O'Reilly provides a validation regex to match ISBN-10 and ISBN-13 codes. So,
What defines a type?
We want every element in the sequence to be a string that matches the corresponding ISBN-10 or ISBN-13 regex
What other types is our new type related to?
Since ISBNs are themselves strings, we can use the default String type provided by visions.
Type Definition
from typing import Sequence
import pandas as pd
from visions.relations import IdentityRelation, TypeRelation
from visions.types.string import String
from visions.types.type import VisionsBaseType
isbn_regex = "^(?:ISBN(?:-1[03])?:? )?(?=[0-9X]{10}$|(?=(?:[0-9]+[- ]){3})[- 0-9X]{13}$|97[89][0-9]{10}$|(?=(?:[0-9]+[- ]){4})[- 0-9]{17}$)(?:97[89][- ]?)?[0-9]{1,5}[- ]?[0-9]+[- ]?[0-9]+[- ]?[0-9X]$"

class ISBN(VisionsBaseType):
    @staticmethod
    def get_relations() -> Sequence[TypeRelation]:
        relations = [
            IdentityRelation(String),
        ]
        return relations

    @staticmethod
    def contains_op(series: pd.Series, state: dict) -> bool:
        return series.str.contains(isbn_regex).all()
Looking at this closely, there are three things to take note of.
The new type inherits from VisionsBaseType.
We had to define a get_relations method, which is how we relate a new type to others we might want to use in a typeset. In this case, I've used an IdentityRelation to String, which means ISBNs are a subset of String. We can also use InferenceRelations when we want to support relations that change the underlying data (say, converting the string '4.2' to the float 4.2).
A contains_op: this is our definition of the type. In this case, we apply a regex to every element in the input and verify that each one matches the pattern provided by O'Reilly.
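As a quick usage sketch (assuming the in-style membership check that visions types expose; the exact behavior may vary by release), you can test a series against the new type directly:
series = pd.Series(["ISBN-13: 978-0-596-52068-7", "0-596-52068-9"])
# True only if every element matches the ISBN regex
print(series in ISBN)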
Extensions
In theory, ISBNs can be encoded in what looks like a 10- or 13-digit integer as well. To work with those, you might want to create an InferenceRelation between Integer and ISBN; a simple implementation would involve coercing Integers to string and applying the above regex, as in the sketch below.
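This is only the coercion logic, not the full InferenceRelation wiring (whose exact constructor arguments may vary between visions releases, so check the current API):
def integer_to_isbn_string(series: pd.Series) -> pd.Series:
    # Coerce integer-encoded ISBNs to strings so the regex can be applied.
    return series.astype(str)

def integer_could_be_isbn(series: pd.Series) -> bool:
    # The relationship test: do all coerced values match the ISBN regex?
    return integer_to_isbn_string(series).str.contains(isbn_regex).all()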

How to serialize data in example-in-example format for tensorflow-ranking?

I'm building a ranking model with tensorflow-ranking. I'm trying to serialize a data set in the TFRecord format and read it back at training time.
The tutorial doesn't show how to do this. There is some documentation here on an example-in-example data format, but it's hard for me to understand: I'm not sure what the serialized_context or serialized_examples fields are, how they fit into examples, or what the Serialize() function in the code block is.
Concretely, how can I write and read data in example-in-example format?
The context is a map from feature name to tf.train.Feature. The examples list is a list of maps from feature name to tf.train.Feature. Once you have these, the following code will create an "example-in-example":
import tensorflow as tf

context = {...}
examples = [{...}, {...}, ...]

serialized_context = tf.train.Example(features=tf.train.Features(feature=context)).SerializeToString()

serialized_examples = tf.train.BytesList()
for example in examples:
    tf_example = tf.train.Example(features=tf.train.Features(feature=example))
    serialized_examples.value.append(tf_example.SerializeToString())

example_in_example = tf.train.Example(features=tf.train.Features(feature={
    'serialized_context': tf.train.Feature(bytes_list=tf.train.BytesList(value=[serialized_context])),
    'serialized_examples': tf.train.Feature(bytes_list=serialized_examples)
}))
To read the examples back, you may call
tfr.data.parse_from_example_in_example(example_pb,
                                       context_feature_spec=context_feature_spec,
                                       example_feature_spec=example_feature_spec)
where context_feature_spec and example_feature_spec are maps from feature name to tf.io.FixedLenFeature or tf.io.VarLenFeature.
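For illustration, such specs might look like this (the feature names here are hypothetical; shapes and dtypes depend on your data):
context_feature_spec = {
    # Variable-length context feature, e.g. tokenized query text
    'query_tokens': tf.io.VarLenFeature(tf.string),
}
example_feature_spec = {
    # Per-item features, e.g. tokenized document text and a relevance label
    'document_tokens': tf.io.VarLenFeature(tf.string),
    'relevance': tf.io.FixedLenFeature([1], tf.int64, default_value=[0]),
}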
First of all, I recommend reading this article to ensure that you know how to create a tf.Example as well as a tf.SequenceExample (which, by the way, is the other data format supported by TF-Ranking):
Tensorflow Records? What they are and how to use them
In the second part of this article, you will see that a tf.SequenceExample has two components: 1) context and 2) sequence (or examples). This is the same idea that Example-in-Example is trying to implement. Basically, the context is the set of features that are independent of the items you want to rank (a search query in the case of search, or user features in the case of a recommendation system), and the sequence part is a list of items (aka examples). This could be a list of documents (in search) or movies (in recommendation).
Once you are comfortable with tf.Example, Example-in-Example will be easier to understand. Take a look at this piece of code for how to create an EIE instance:
https://www.gitmemory.com/issue/tensorflow/ranking/95/518480361
1) bundle context features together in a tf.Example object and serialize it
2) bundle sequence (example) features (each of which could contain a list of values) in another tf.Example object and serialize this one too
3) wrap these inside a parent tf.Example
4) (if you're writing to tfrecords) serialize the parent tf.Example object and write it to your tfrecord file (see the sketch below)
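A minimal sketch of step 4, assuming example_in_example was built as in the earlier answer and writing to a hypothetical train.tfrecord file:
with tf.io.TFRecordWriter("train.tfrecord") as writer:
    # One serialized example-in-example per record
    writer.write(example_in_example.SerializeToString())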

In tensorflow serving, how to store a list in feature dictionary?

I'm pretty new to TensorFlow Serving; right now I'm working on the client-side code.
With the basic tutorial, I know I need to build a feature dictionary like:
feature_dict = {
    'input_content': tf.train.Feature(...),
    'input_label': tf.train.Feature(...)
}
Then,
model_input = tf.train.Example(features=tf.train.Features(feature=feature_dict))
Now, my question is, how can I put a list into the feature_dict?
Say I have a 10-dimensional list and I want to set it as 'input_content'; how can I do that?
A tf.train.Feature wraps a list that may hold zero or more values. The list can be of type BytesList, FloatList, or Int64List.
The following code creates a tf.train.Feature from a single float element (float_element):
tf.train.Feature(float_list=tf.train.FloatList(value=[float_element]))
Notice that the float_element is surrounded by square brackets ([]), i.e., a list is being created with a single element.
When adding an entire list (float_list), however, one should not wrap it in square brackets; pass the list directly, as in the following code snippet.
tf.train.Feature(float_list=tf.train.FloatList(value=float_list))
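Putting it together for the asker's case, here is a sketch with a hypothetical 10-dimensional list as 'input_content' (the 'input_label' value is a placeholder):
import tensorflow as tf

input_content = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]  # 10-dimensional list
feature_dict = {
    # The whole list is passed directly, with no extra brackets.
    'input_content': tf.train.Feature(float_list=tf.train.FloatList(value=input_content)),
    'input_label': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'some_label'])),
}
model_input = tf.train.Example(features=tf.train.Features(feature=feature_dict))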

PostgreSQL full text search doesn't work in some case (Django)

I notice that in Django, when a sentence contains PLAZA/MASTERPIECE and I search for masterpiece, the sentence isn't found. Is this a limitation of PostgreSQL full-text search, or is there a way to solve this?
from django.contrib.postgres.search import SearchQuery, SearchRank, SearchVector

finalquery = SearchQuery("keyword")
vector = SearchVector('thefieldIwanttosearch')
self.search_results = (
    self.search_results
    .annotate(search=vector)
    .filter(search=finalquery)
    .annotate(rank=SearchRank(vector, finalquery))
)
Is there any document about this? Thanks!
Yes, this is all documented.
When you write filter(search=finalquery) you're not specifying a lookup type.
As a convenience when no lookup type is provided (like in Entry.objects.get(id=14)) the lookup type is assumed to be exact.
So you're filtering on an exact match for "masterpiece". What you probably want is contains or icontains, as sketched below.
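A sketch using the field name from the question (icontains does a case-insensitive substring match, so it will find masterpiece inside PLAZA/MASTERPIECE):
self.search_results = self.search_results.filter(
    thefieldIwanttosearch__icontains="masterpiece")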

how do you flatten and unflatten an array of doubles in labview?

I have created a simple LabView program shown below that attempts to flatten an array [1,0,3] and then unflatten it and print out the contents.
However, I am unsuccessful in doing so. What am I doing wrong?
What am I doing wrong?
You're not going through tutorials or you're not reading the context help for the unflatten function (Ctrl+H) or you're not reading the full help for the function (right click>>Help) or you're not looking at the examples (from the help or Help>>Find Examples). Take your pick (preferably all four).
If you want an actual answer, it's that LV is strictly typed, so you need to tell the unflatten function which data type you want it to output (a 1D DBL array), and you're not doing that. But the real answer is what's in the previous paragraph: you should use those tools to learn how to find such an answer yourself.
The string returned by Flatten to String only contains the data, not the description of what data type was passed in, so in order to unflatten it again you need to tell Unflatten from String what type it was. You do this by wiring some data of the appropriate type (any data - if it's an array it can be an empty one) to the Type terminal.
I don't think this is immediately obvious from the LabVIEW 2012 help, but I think it's fairly clear if you follow the link from the Unflatten from String help page to one of the examples. The Read Flattened Data.vi example has an array wired to the Type input.