What is the principle for setting the 'hash_bucket_size' parameter? - tensorflow

Question 1:
In wide_n_deep_tutorial.py, there is a hyperparameter named hash_bucket_size for both the tf.feature_column.categorical_column_with_hash_bucket and tf.feature_column.crossed_column methods, and its value is hash_bucket_size=1000.
But why 1000? How should this parameter be set?
Question 2:
The second question is about the crossed_columns, that is,
crossed_columns = [
    tf.feature_column.crossed_column(["education", "occupation"], hash_bucket_size=1000),
    tf.feature_column.crossed_column([age_buckets, "education", "occupation"], hash_bucket_size=1000),
    tf.feature_column.crossed_column(["native_country", "occupation"], hash_bucket_size=1000)]
in wide_n_deep_tutorial.py,
Why choose ["education", "occupation"], [age_buckets, "education", "occupation"] and ["native_country", "occupation"] as the crossed_columns? Is there any rule of thumb?

Regarding the hash_bucket_size: if you set it too low, there will be many hash collisions, where different categories will be mapped to the same bucket, forcing the neural network to use other features to distinguish them. If you set it too high then you will use a lot of RAM for nothing: I am assuming that you will wrap the categorical_column_with_hash_bucket() in an embedding_column() (as you generally should), in which case the hash_bucket_size will determine the number of rows of the embedding matrix.
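For instance, here is a minimal sketch of that wrapping (the embedding dimension of 8 is an arbitrary illustrative choice, not a value from the tutorial):
import tensorflow as tf

# Hash the raw string feature into 1000 buckets...
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    "occupation", hash_bucket_size=1000)

# ...then learn a dense vector per bucket. The embedding matrix has shape
# [hash_bucket_size, dimension], so hash_bucket_size directly drives the memory cost.
occupation_emb = tf.feature_column.embedding_column(occupation, dimension=8)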
The probability of a collision if there are k categories is approximately equal to: 1 - exp(-k*(k-1)/2/hash_bucket_size) (source), so if there are 40 categories and you use hash_bucket_size=1000, the probability is surprisingly high: about 54%! To convince yourself, try running len(np.unique(np.random.randint(1000, size=40))) several times (it picks 40 random numbers between 0 and 999 and counts how many unique numbers there are), and you will see that the result is quite often less than 40. You can use this equation to choose a value of hash_bucket_size that does not cause too many collisions.
That said, if there are just a couple collisions, it's probably not going to be too bad in practice, as the neural network will still be able to use other features to distinguish the colliding categories. The best option may be to experiment with different values of hash_bucket_size to find the value below which performance starts to degrade, then increase it by 10-20% to be safe.
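If you want to check the numbers yourself, here is a small NumPy sketch of both the approximation and the simulation mentioned above (nothing TensorFlow-specific):
import numpy as np

k, hash_bucket_size = 40, 1000

# Birthday-problem approximation of P(at least one collision).
p_collision = 1 - np.exp(-k * (k - 1) / (2 * hash_bucket_size))
print(f"approximate collision probability: {p_collision:.2f}")  # ~0.54

# Empirical check: draw 40 random buckets out of 1000 and count distinct ones.
distinct = [len(np.unique(np.random.randint(hash_bucket_size, size=k)))
            for _ in range(1000)]
print("average number of distinct buckets:", np.mean(distinct))  # usually < 40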

For the hash_bucket_size
The general idea is that, ideally, the hash function should not produce any collisions (otherwise you/the algorithm would not be able to distinguish between two cases). Hence the 1000 is in this case 'just' a value that is high enough. If you look at the unique entries for occupation and country (16 and 43), you'll see that this number is high enough:
edb#lapelidb:/tmp$ cat adult.data | cut -d , -f 7 | sort | uniq -c | wc -l
16
edb#lapelidb:/tmp$ cat adult.data | cut -d , -f 14 | sort | uniq -c | wc -l
43
Feature crossing
I think the rule of thumb there is that crossing makes sense if the combination of the features actually has meaning. In this example, education and occupation are linked. As for the second one, it probably makes sense to distinguish people like 'junior engineer with a PhD' from 'senior cleaning staff without a degree'. Another typical example you see quite often is the crossing of longitude and latitude, since they have more meaning together than individually.
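As an illustration of that last point, here is a sketch of a latitude/longitude cross (the column names, bucket boundaries and hash_bucket_size are made up for the example, not taken from the tutorial):
import tensorflow as tf

latitude = tf.feature_column.numeric_column("latitude")
longitude = tf.feature_column.numeric_column("longitude")

# Discretize each coordinate into 10-degree buckets.
lat_buckets = tf.feature_column.bucketized_column(
    latitude, boundaries=list(range(-90, 91, 10)))
lon_buckets = tf.feature_column.bucketized_column(
    longitude, boundaries=list(range(-180, 181, 10)))

# The cross identifies a cell on the map, which carries more signal
# than latitude or longitude taken separately.
lat_x_lon = tf.feature_column.crossed_column(
    [lat_buckets, lon_buckets], hash_bucket_size=10000)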


What's the best way for awk to check arbitrary integer precision?

From GNU gawk's page
https://www.gnu.org/software/gawk/manual/html_node/Checking-for-MPFR.html
they give a formula to check for arbitrary precision:
function adequate_math_precision(n) { return (1 != (1+(1/(2^(n-1))))) }
My question is: wouldn't it be more efficient to stay within the integer math domain with a formula such as
( 2^abs(n) - 1 ) % 2 # note 2^(n-1) vs. 2^|n| - 1
Since any power of 2 must also be even, then subtracting 1 must always be odd, then its modulo (%) over 2 becomes indicator function for is_odd() for n >= 0, while the abs(n) handles the cases where it's negative.
Or does the modulo necessitate a cast to floating point, thus nullifying any gains?
Good question. Let's tackle it.
The proposed snippet aims at checking whether gawk was invoked with the -M option.
I'll attach some digression on that option at the bottom.
The argument n of the function is the floating-point precision needed for whatever operation you'll have to perform. So, say your script is in a library somewhere and will get called, but you have no control over how. You'll run that function at the beginning of the script to promptly throw an exception and bail out, warning that the end result would be wrong due to a lack of bits to store the numbers.
Your code stays in the integer realm: an integer power of two is an integer. There is no need to use abs(n) here, because there is no point in specifying how many bits you'll need as a negative number in the first place.
Then you subtract one from an even integer. Now, unless n=0, in which case 2^0=1 and your code reads (1 - 1) % 2 = 0, your snippet will always return 1, because the remainder (%) of an odd number divided by two is 1.
The problem is that you are trying to calculate a potentially stupidly large number inside a function whose whole purpose is to check whether you are able to do so in the first place.
Since any power of 2 must also be even, then subtracting 1 must always
be odd, then its modulo (%) over 2 becomes indicator function for
is_odd() for n >= 0, while the abs(n) handles the cases where it's
negative.
Except when n=0, as we discussed above, you are right. The snippet will tell you that any power of 2 is even, and any power of 2, minus 1, is odd. But we were discussing another subject entirely.
Let's analyze the other function instead:
return (1 != (1+(1/(2^(n-1)))))
Remember that booleans in awk work like this: 0 = false, non-zero = true. So, while 1+x, where x is a very small number (typically the reciprocal of a large power of two, 1/2^122 in the example page), is mathematically guaranteed to be != 1, in the digital world that is not the case. At some point the floating-point computation hits its precision rock bottom, the sum is rounded down, and x is effectively treated as 0. At that point, the arbitrary-precision check will return 0, i.e. false: 1 is equal to 1.
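You can reproduce that saturation outside awk too, since gawk's ordinary numbers are C doubles; a minimal Python sketch (the only assumption is the 53-bit significand of an IEEE-754 double):
# A double has a 53-bit significand: 1 + 2^-52 is still representable,
# but 1 + 2^-53 rounds back down to exactly 1.
print(1 != 1 + 1 / 2**52)   # True  -> precision is adequate
print(1 != 1 + 1 / 2**53)   # False -> precision exhausted
This mirrors what adequate_math_precision(n) does: with plain doubles it holds up to n = 53 and fails beyond that.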
A larger discussion on types and data representation
The page you link explains precision for gawk invoked with the -M option. This sounds like technoblahblah, let's decipher it.
At some point, your OS architecture has to decide how to store data and how to represent it in memory so that it can be accessed again and displayed. Terms like Integer, Float, Double and Unsigned Integer are examples of data representations. Here we are addressing integer representation: how is an integer stored in memory?
A 32-bit system will use 4 bytes to represent an integer, which in turn determines how large the integer can be. The 32 bits are read from most significant (MSB) to least significant (LSB) and, if signed, one bit will represent the sign (typically the MSB, drastically reducing the maximum size of the integer).
If asked to compute a large number, a machine will try to fit it into the largest type available. If the end result is larger than that, you have an overflow and end up with a wrong result or an error. Many online challenges ask you to write code for arbitrarily long loops or large sums, then test it with inputs that break the 64-bit barrier, to see whether you picked the proper types for your indexes.
AWK is not a strongly typed language. Meaning, any variable can store data regardless of its type. The data type can change, and it is determined at runtime by the interpreter, so the developer doesn't need to care. For instance:
$ awk 'BEGIN {a="this is text"; print a; a=2; print a; print a+3.0*2}'
-| this is text
-| 2
-| 8
In the example, a holds text, then an integer, and can be added to a floating-point number and printed as an integer without any special type handling.
The Arbitrary Precision Page presents the following snippet:
$ gawk -M 'BEGIN {
> s = 2.0
> for (i = 1; i <= 7; i++)
> s = s * (s - 1) + 1
> print s
> }'
-| 113423713055421845118910464
There is some math voodoo behind it, which we will skip. Since s is interpreted as a floating-point number, the end result is computed in floating point.
Try to enter that number into the Windows calculator as a decimal and it will fail, although you can handle it as a binary number: you'll need the programmer view to see how many bits it takes to fit it as an unsigned integer.
53 is a magic number here: with the -M option, gawk uses arbitrary precision for numbers. In other words, it decides how many bits are necessary, tracks them, and breaks free of the native OS architecture. The default setting is that gawk will allocate 53 bits for any given arbitrary-precision number. Fun fact: the actual result of that snippet is wrong, and it would take up to 100 bits to compute it correctly.
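You can check that claim without gawk; a small Python sketch (Python integers are exact and arbitrarily large, while the float branch mimics a 53-bit significand):
s_exact, s_float = 2, 2.0
for _ in range(7):
    s_exact = s_exact * (s_exact - 1) + 1   # exact integer arithmetic
    s_float = s_float * (s_float - 1) + 1   # IEEE-754 double
print(s_exact)            # the exact value
print(f"{s_float:.0f}")   # a rounded value in the same ballpark as gawk's output above
The two printed numbers agree only in their leading digits, which is exactly the rounding the -M discussion is about.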
To implement handling of arbitrarily large numbers, gawk relies on an external library called MPFR. Provided with an arbitrarily large number, MPFR will handle the memory allocation and the bit requisition needed to store it. However, the interface between gawk and MPFR is not perfect, and gawk can't always control the type that MPFR will use. In the case of integers, that's not an issue. For floating-point numbers, it will result in rounding errors.
This brings us back to the snippet at the beginning: if gawk was called with the -M option, numbers up to 2^53 can be stored as integers. Floating-point numbers will be smaller than that (you'll need to make the decimal point disappear somehow, or rather represent it by spending some of the bits allocated for that number, just like the sign). Following the example on the page, and asking for an arbitrary precision larger than 32, the snippet will return TRUE only if the -M option was passed; otherwise 1/2^(n-1) will be rounded down to 0.

Is a /start/,/end/ range expression ever useful in awk?

I've always contended that you should never use a range expression like:
/start/,/end/
in awk because although it makes the trivial case where you only want to print matching text including the start and end lines slightly briefer than the alternative*:
/start/{f=1} f{print; if (/end/) f=0}
when you want to tweak it even slightly to do anything else, it requires a complete re-write or results in duplicated or otherwise undesirable code. e.g. if you want to print the matching text excluding the range delimiters using the second form above you'd just tweak it to move the components around:
f{if (/end/) f=0; else print} /start/{f=1}
but if you started with /start/,/end/ you'd need to abandon that approach in favor of what I just posted or you'd have to write something like:
/start/,/end/{ if (!/start|end/) print }
i.e. duplicate the conditions which is undesirable.
Then I saw a question posted that required identifying the LAST end in a file and where a range expression was used in the solution and I thought it seemed like that might have some value (see https://stackoverflow.com/a/21145009/1745001).
Now, though, I'm back to thinking that it's just not worth bothering with range expressions at all and a solution that doesn't use range expressions would have worked just as well for that case.
So - does anyone have an example where a range expression actually adds noticeable value to a solution?
*I used to use:
/start/{f=1} f; /end/{f=0}
but too many times I found I had to do something additional when f is true and /end/ is found (or to put it another way ONLY do something when /end/ is found IF f were true) so now I just try to stick to the slightly less brief but much more robust and extensible:
/start/{f=1} f{print; if (/end/) f=0}
Interesting. I also often start with a range expression and then later switch to using a variable.
I think a situation where this could be useful, aside from the pure range-only situations, is if you want to print a match, but only if it lies in a certain range. Also because it is immediately obvious what it does. For example:
awk '/start/,/end/{if(/ppp/)print}' file
with this input:
start
dfgd gd
ppp 1
gfdg
fd gfd
end
ppp 2
ppp 3
start
ppp 4
ppp 5
end
ppp 6
ppp 7
gfdgdgd
will produce:
ppp 1
ppp 4
ppp 5
--
One could of course also use:
awk '/start/{f=1} /ppp/ && f; /end/{f=0}' file
But it is longer and somewhat less readable.
While you are right that the /start/,/end/ range expression can easily be reimplemented with a conditional, it has many interesting use cases where it stands on its own. As you observe, it might have little value for processing tabular data, the main (but not the only) use case of awk.
So - does anyone have an example where a range expression actually adds noticeable value to a solution?
In the mentioned use cases, the range expression improves legibility. Here are a few examples, where the range expression accurately selects the text to be processed. These are only a handful of examples, but there are countless similar applications, demonstrating the incredible versatility of awk.
Filter logs within a time range
Assuming each log line starts with an ISO timestamp, the filter below selects all events in a given range of 1 hour:
awk '/^2015-06-30T12:00:00Z/,/^2015-06-30T13:00:00Z/'
Extract a document from a file
awk '/---- begin file.data ----/,/---- end file.data ----/'
This can be used to bundle resources with shell scripts (with cat), to extract parts of GPG-signed messages (prepared with --clearsign) or, more generally, parts of MIME messages.
Process LaTeX files
The range pattern can be used to match LaTeX environments, so for instance we can select the abstracts of all articles in our directory:
awk '/begin{abstract}/,/end{abstract}/' *.tex
or all the theorems, to prepare a theorem database!
awk '/begin{theorem}/,/end{theorem}/' *.tex
or write a linter ensuring that theorems do not contain citations (if we regard this as bad style):
awk '
/begin{theorem}/,/end{theorem}/ { if(/\\cite{/) { c+= 1 } }
END { printf("There were %d bad-style citations.\n", c) }
'
or preprocess tables, etc.

Tools for generating strong passphrases?

Passphrases seem like a good alternative to traditional guidelines for strong passwords. See http://xkcd.com/936/ for an entertaining take on passwords vs. passphrases.
There are many tools for generating more traditional passwords (e.g. pwgen).
Such tools are useful when, for example, providing users with good initial passwords.
What tools are available for generating good passphrases?
Do you have experience on using them or insight about their security or other features?
I've recently released a couple of Perl scripts, gen-password and gen-passphrase, on GitHub here.
The gen-passphrase script could suit your needs. It takes three arguments: a word used as a sequence of initials, a minimum word length, and a maximum word length. For example:
$ gen-passphrase abcde 6 8
acrimony borrowed chasten drifts educable
or you can ask for a number of words without specifying their initials (a new feature I just added):
$ gen-passphrase 5 6 8
poplin outbreak aconites academic azimuths
It requires a word list; it uses /usr/share/dict/words by default if it exists. It uses /dev/urandom by default, but can be told to use /dev/random. See this answer of mine on superuser.com for more information about /dev/urandom vs. /dev/random.
NOTE: So far, nobody other than me has tested these scripts. I've made my best effort to have them generate strong passwords/passphrases, but I guarantee nothing.
I wrote a command-line Perl script, passphrase-generator.
It defaults to only 3 words instead of the 4 suggested by XKCD, but uses a larger dictionary, found on many Linux-based systems at /usr/share/dict/words. It also provides estimates for the entropy of the generated passphrases. The randomization is based on /dev/urandom and SHA1.
Example run:
$ passphrase-generator
Random passphrase generator
Entropy per passphrase is 43.2 bits (per word: 14.4 bits.)
For reference, entropy of completely random 8 character (very hard to memorize)
password of upper and lowercase letters plus numbers is 47.6 bits
Entropy of a typical human generated "strong" 8 character password is in the
ballpark of 20 - 30 bits.
Below is a list of 16 passphrases.
Assuming you select one of these based on some non random preference
your new passphrase will have entropy of 39.2 bits.
Note that first letter is always capitalized and spaces are
replaced with '1' to meet password requirements of many systems.
Goatees1maneuver1pods
Aught1fuel1hungers
Flavor1knock1foreman
Holding1holster1smarts
Vitamin1mislead1abhors
Proverbs1lactose1brat
... and so on 10 more
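The entropy figures in that output are just log2 of the number of equally likely phrases; here is a small sketch of the arithmetic (the dictionary size of 21,600 words is an assumption chosen to match the 14.4 bits per word, not a value read from the script):
import math

words_in_dict = 21600         # assumed dictionary size, roughly 2**14.4
words_per_phrase = 3

bits_per_word = math.log2(words_in_dict)
total_bits = words_per_phrase * bits_per_word
print(f"{bits_per_word:.1f} bits per word, {total_bits:.1f} bits per phrase")

# Hand-picking one of the 16 displayed phrases gives away log2(16) = 4 bits.
print(f"after choosing 1 of 16: {total_bits - math.log2(16):.1f} bits")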
There are also some browser/javascript based tools:
http://preshing.com/20110811/xkcd-password-generator
http://passphra.se/
http://lightsecond.com/passphrase.html
CPAN hosts a Perl module for generating XKCD style passphrases:
http://metacpan.org/pod/Crypt::XkcdPassword

How is the input string represented on magnetic tapes?

I know that in Turing machines, the (different) tapes are used for both input and output, and for the stack too. In the problem of adding 2 numbers using a Turing machine, the input involves several symbols like 1, 0, B (blank), and +.
(Though this question is related to physics, I asked it here since I thought people there might not know about Turing machines and their inputs.)
And my doubt is:
If the input is BBBBB1111+111111BB,
then on the magnetic tape,
1 -> represented by north polarity (say),
0 -> represented by south polarity (say),
B -> represented by no polarity.
Then,
how will '+' be represented?
I don't think there will be codes (like ASCII) for special symbols, since the number and type of special symbols will be implementation-dependent. Also, special codes would make the algorithm more tedious.
Or is the input symbol representation on tapes entirely different from the method mentioned above? If yes, please explain.
You would probably do this by having each character encoded with multiple bits. For example:
B: 00
0: 01
1: 10
+: 11
Your read head would then have size two and would always move two steps to the left or the right when making a move.
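As a tiny sketch of that idea, here is the fixed-width (two-bit) encoding from the table above applied to the example tape (the codes themselves are illustrative, not a standard):
# Fixed-width, 2-bit code per tape symbol.
ENCODE = {"B": "00", "0": "01", "1": "10", "+": "11"}
DECODE = {code: symbol for symbol, code in ENCODE.items()}

def encode_tape(tape: str) -> str:
    return "".join(ENCODE[symbol] for symbol in tape)

def decode_tape(bits: str) -> str:
    # Read two bits at a time, i.e. the head "moves two cells" per symbol.
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

tape = "BBBBB1111+111111BB"
bits = encode_tape(tape)
assert decode_tape(bits) == tape
print(bits)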
Another option is a unary encoding:
Symbol: Representation
0: 1
1: 11
2: 111
n: n+1 ones
Blank: B

How do I create a sub-array in awk?

Given a list like:
Dog bone
Cat catnip
Human ipad
Dog collar
Dog collar
Cat collar
Human car
Human laptop
Cat catnip
Human ipad
How can I get results like this, using awk:
Dog bone 1
Dog collar 2
Cat catnip 2
Cat collar 1
Human car 1
Human laptop 1
Human ipad 2
Do I need a sub-array? It seems to me like I need an array of "owners" which is populated by arrays of "things."
I'd like to use awk to do this, as this is a subscript of another program in awk, and for now, I'd rather not create a separate program.
By the way, I can already do it using sort and grep -c, and a few other pipes, but I really won't be able to do that on gigantic data files, as it would be too slow. Awk is generally much faster for this kind of thing, I'm told.
Thanks,
Kevin
EDIT: Be aware that the columns are actually not next to each other like this; in the real file, they are more like columns $8 and $11. I say this because I suppose if they were next to each other I could use an awk regex like ~/Dog\ Collar/ or something, but I won't have that option. Thanks!
awk does not have multi-dimensional arrays, but you can manage by constructing 2D-ish array keys:
awk '{count[$1 " " $2]++} END {for (key in count) print key, count[key]}' | sort
which, from your input, outputs
Cat catnip 2
Cat collar 1
Dog bone 1
Dog collar 2
Human car 1
Human ipad 2
Human laptop 1
Here, I use a space to separate the key values. If your data contains spaces, you can use some other character that does not appear in your input. I typically use array[$a FS $b] when I have a specific field separator, since that's guaranteed not to appear in the field values.
GNU Awk has some support for multi-dimensional arrays, but it's really just cleverly concatenating keys to form a sort of compound key.
I'd recommend learning Perl, which will be fairly familiar to you if you like awk, but Perl supports true Lists of Lists. In general, Perl will take you much further than awk.
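For comparison, here is what the same grouping looks like with real nested containers; the sketch is in Python rather than Perl, purely to illustrate the structure, and the pets.txt file name is made up:
import collections

# Nested mapping: owner -> thing -> count, the structure awk has to fake
# with concatenated keys.
counts = collections.defaultdict(lambda: collections.defaultdict(int))

with open("pets.txt") as handle:          # hypothetical input file
    for line in handle:
        owner, thing = line.split()[:2]
        counts[owner][thing] += 1

for owner, things in counts.items():
    for thing, n in things.items():
        print(owner, thing, n)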
Re your comment:
I'm not trying to be superior. I understand you asked how to accomplish a task with a specific tool, awk. I did give a link to the documentation for simulating multi-dimensional arrays in awk. But awk doesn't do that task well, and it was effectively replaced by Perl nearly 20 years ago.
If you ask how to cross a lake on a bicycle, and I tell you it'll be easier in a boat, I don't think that's unreasonable. If I tell you it'll be easier to first build a bridge, or first invent a Star Trek transporter, then that would be unreasonable.