How to deal with several numbers in camel case? - naming-conventions

I have a variable like model_1_5_7 and I need to rename it in camel case. I need this because my models used to be functions in Python, but now I need to make them classes.

You could use a letter instead of the underscore, for instance p (for point), which would give model1p5p7. If the model numbers are dense, you could also use a three-dimensional array named models and designate the individual models with three indices, for example models[1][5][7].
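A minimal Python sketch of both options (the names Model1p5p7 and Model are illustrative, not from the question):

```python
# Option 1: replace the underscores between digits with 'p' in the class name.
class Model1p5p7:
    pass

# Option 2: if the version numbers are dense, keep plain classes and look
# them up by an index tuple instead of encoding numbers into identifiers.
class Model:
    def __init__(self, version):
        self.version = version

models = {(1, 5, 7): Model((1, 5, 7))}
print(models[(1, 5, 7)].version)  # -> (1, 5, 7)
```

The dictionary variant also works when the indices are sparse, which a literal three-dimensional array would not handle gracefully.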

How to add user-defined words to spaCy

I am a novice with spaCy and am using it to process medical literature. I found that the Tokenizer divides a Latin name composed of two words into two independent tokens, which is inappropriate. In addition, I have thousands of customized words, which are basically biological names (usually composed of two words, such as Angelica sinensis). How can I add these customized words to spaCy so that the Tokenizer recognizes these multi-word terms as a single token without splitting them? Thank you.
If you have a list of multi-word expressions that you would like to treat as tokens, the easiest thing to do is use an EntityRuler to mark them as entities and then use the merge_entities component.
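A short sketch of that pipeline, assuming spaCy v3 and a blank English pipeline (the label "TAXON" is an arbitrary choice):

```python
import spacy

nlp = spacy.blank("en")

# EntityRuler: mark each multi-word expression as an entity.
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([{"label": "TAXON", "pattern": "Angelica sinensis"}])

# merge_entities: retokenize so each entity span becomes a single token.
nlp.add_pipe("merge_entities")

doc = nlp("Angelica sinensis is a medicinal herb.")
tokens = [t.text for t in doc]
print(tokens)  # -> ['Angelica sinensis', 'is', 'a', 'medicinal', 'herb', '.']
```

For thousands of names, build the pattern list programmatically from your word list and pass it to add_patterns in one call.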

Alternative to case statements when changing a lot of numeric controls

I'm pretty new to LabVIEW, but I do have experience in other programming languages like Python and C++. The code I'm going to ask about works, but there was a lot of manual work involved in putting it together. Basically, I read from a text file and change control values based on values in the text file; in this case it's 40 values.
I have set it up to pull from a text file and split the string by commas. Then I loop through all the values and set each indicator to read the corresponding value. I had to create 40 separate case statements to achieve this. I'm sure there is a better way of doing this. Does anyone have any suggestions?
A few improvements could be made (in addition to those suggested by sweber):
If the file contains just data, without a "label - value" format, you could read it in CSV (comma-separated values) format and read just the first row.
Currently, you set values based on order. In that case, you could create references to all the indicators, build them into an array in the proper order, and then assign values to the indicators in a For Loop via the Value property node.
Overall, I agree with sweber that if it is key-value data, it is better to use either the JSON format or the .ini file format, both of which support such a structure.
Let's start with some optimization:
It seems your data file contains nothing more than 40 numbers. You can wire a 1D DBL array to the default input of the string-to-array VI, and you will get just a 1D array out. No need for a 2D array.
Second, there is no need to convert the FOR index value to a string, the CASE accepts integers, too.
Now, about your question: The simplest solution is to display the values as array, just as they come from the string-to-array VI.
But I guess each value has a special meaning, and you would like to display its name/description somehow. In this case, create a cluster with 40 values, edit their labels as you like, and make sure their order in the cluster is the same as the order of the values in the file.
Then, wire the 1D array of values to this cluster via an array-to-cluster VI.
If you plan to use the text file to store and load the values, converting the cluster data to JSON and vice versa might be something for you, as it transports the labels of the cluster into the file, too. (However, changing labels then becomes an issue.)

Impute missing values in Tensorflow?

I know about sklearn.preprocessing.Imputer but does Tensorflow have built-in functions to do this as well?
In case your imputation cannot be the same for all entries, as suggested before, you may want to use tensorflow-transform.
For example, if you want to use the mean or the median as the value to impute for the missing values in the corresponding entries, you cannot do so with a default value, as such values are dynamic and depend on the whole dataset (or a subset, depending on your needs/rules).
Check out one of the examples on how you would do that in the official repository.
As far as I know, there isn't a handy function that does the same thing as sklearn.preprocessing.Imputer.
There are a few ways of dealing with missing values using built-in functions:
While reading in data: For example, you can set the default value for a missing value when reading in a CSV using the record_defaults field.
If you have the data already: You can replace the NaNs using tf.where (example)
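A small sketch of the tf.where approach, here imputing the mean of the observed values (the data is made up for illustration):

```python
import tensorflow as tf

# Replace NaNs with the mean of the non-missing values using built-in ops.
x = tf.constant([1.0, float("nan"), 3.0, float("nan"), 5.0])
mask = tf.math.is_nan(x)

# Mean over the observed entries only: (1 + 3 + 5) / 3 = 3.0
mean = tf.reduce_mean(tf.boolean_mask(x, tf.logical_not(mask)))

# Where the mask is True, take the mean; otherwise keep the original value.
imputed = tf.where(mask, tf.fill(tf.shape(x), mean), x)
print(imputed.numpy())  # -> [1. 3. 3. 3. 5.]
```

Note this computes the statistic per tensor; for a dataset-wide mean or median you are back to the tensorflow-transform territory described above.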

Using symbols as data

I am going to have a column in my table that will be binary (the two states are 'private' and 'public'). Is there a way to use symbols for this purpose (:public and :private)? I would prefer this to using binary (ones and zeroes), and I know symbols are less memory-intensive than strings.
By the way I'm doing this in a Rails app with active record.
You can't use symbols in the database.
If you're worried about memory usage of strings over symbols, you can do better than both: Just use a boolean column, private, and have a public? accessor which returns !private?
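A plain-Ruby sketch of that accessor pattern (Document is a hypothetical model name; in a real Rails app the boolean private column would come from a migration and ActiveRecord would generate private? for you):

```ruby
# Minimal stand-in for a model with a boolean `private` column.
class Document
  def initialize(private_flag)
    @private = private_flag
  end

  def private?
    @private
  end

  # `public?` is just the negation of `private?` -- no extra column needed.
  def public?
    !private?
  end
end

doc = Document.new(false)
puts doc.public?  # -> true
```

A boolean column costs one byte (or less) per row and compares faster than any string or symbol scheme.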

Equivalent of Python pickling in SWI Prolog?

I've got a Prolog program where I'm doing a brute-force search over all strings up to a certain length. I'm checking which strings match a certain pattern, and I keep adding patterns until hopefully I find a set of patterns that covers all strings. I would like to store the strings that don't match any of my patterns to a file, so that when I add a new pattern, I only need to check the leftovers instead of doing the entire brute-force search again.
If I were writing this in Python, I would just pickle the list of strings and load it from the file. Does anybody know how to do something similar in Prolog?
I have a good amount of Prolog programming experience, but very little with Prolog IO. I could probably write a predicate to read a file and parse it into a term, but I figured there might be a way to do it more easily.
If you want to write out a term and be able to read it back later at any time, barring variable names, use the ISO built-in write_canonical/1 or write_canonical/2. It is quite well supported by current systems. writeq/1 and write/1 often work too, but not always. writeq/1 uses operator syntax (so you need to read it back with the very same operators present) and write/1 does not use quotes. So they work "most of the time", until they break.
Alternatively, you may use the ISO write-options [quoted(true), ignore_ops(true), numbervars(false)] in write_term/2 or write_term/3. This might be interesting to you if you want to use further options like variable_names/1 to retain also the names of the variables.
Also note that the term written does not include a period at the end, so you have to write a space and a period manually. The space is needed to ensure that an atom consisting of graphic characters does not merge with the period at the end: think of writing the atom '---', which must be written as --- . and not as ---. You might write the space only in the case of an atom, or only for an atom that would otherwise "glue" to the period.
writeq and read do a similar job, but read the note on writeq about operators if you declare any.
Consider using read/1 to read a Prolog term. For more complex or different kinds of parsing, consider using DCGs and then phrase_from_file/2 with SWI's library(pio).
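Putting the advice above together, a minimal SWI-Prolog sketch of pickling a list of terms to a file and reading it back (save_terms/2 and load_terms/2 are hypothetical helper names, not library predicates):

```prolog
% Write each term with write_canonical, followed by the mandatory
% space and period, one term per line.
save_terms(File, Terms) :-
    setup_call_cleanup(open(File, write, S),
        forall(member(T, Terms),
               ( write_canonical(S, T), write(S, ' .'), nl(S) )),
        close(S)).

% Read terms back with read/2 until end_of_file.
load_terms(File, Terms) :-
    setup_call_cleanup(open(File, read, S),
        read_terms(S, Terms),
        close(S)).

read_terms(S, Terms) :-
    read(S, T),
    (   T == end_of_file -> Terms = []
    ;   Terms = [T|Rest], read_terms(S, Rest)
    ).
```

For a large leftover list, this round-trips in linear time and the file stays human-readable, unlike a binary pickle.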