What's the difference between np.random.standard_normal and np.random.randn?

Both seem to draw samples from the standard normal distribution, except that standard_normal takes a size tuple and randn takes the dimensions as separate arguments. I'm still new to this, so can someone tell me if their uses are different? Will the results they give be different?

As documented in randn:
This is a convenience function. If you want an interface that takes a tuple as the first argument, use numpy.random.standard_normal instead.
The 1.17 documentation is even more explicit (note that randn was moved to the legacy numpy.random.RandomState interface in that version):
This is a convenience function for users porting code from Matlab, and wraps numpy.random.standard_normal.
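Since randn just wraps standard_normal, the two draw from the same generator and give identical numbers when seeded identically; only the call signature differs. A minimal sketch:

import numpy as np

# randn takes the dimensions as separate positional arguments...
a = np.random.randn(2, 3)

# ...while standard_normal takes a single size tuple.
b = np.random.standard_normal((2, 3))

# Same generator underneath: seeded identically, the output is identical.
x = np.random.RandomState(42).randn(2, 3)
y = np.random.RandomState(42).standard_normal((2, 3))
assert (x == y).all()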

Checking to see if an image format supports a usage in Vulkan?

If I want to see what an image format can be used for, I can call vkGetPhysicalDeviceImageFormatProperties2() and set the usage flags for the image format. I've noticed that if the format isn't supported for those usages and settings, the structure I pass in is set to all zeros, so I can tell whether the format supports those uses. So if I want to know whether VK_FORMAT_R8G8B8_UINT supports sampling from a shader, I set VK_IMAGE_USAGE_SAMPLED_BIT in the usage flags and call that function.
What I wanted to know is whether that's equivalent to calling another function, vkGetPhysicalDeviceFormatProperties2() (exactly the same name, but without 'Image' in it), giving that function the format, and checking whether the sampled bit is set in the flags it returns.
So with the first method I give the format and the usages I want from it, and then check whether the returned values (max width, max height, etc.) are zero, meaning those usages aren't supported; versus the second method of passing the format, getting back the flags, and then checking the flags.
Are these two methods equivalent?
TL;DR: Do your image format checking properly: ask how you can use the format, then ask what functionality is available from usable format&usage combinations.
If you call vkGetPhysicalDeviceImageFormatProperties2 with usage flags and the like that don't correspond to a supported image type, you get an error: VK_ERROR_FORMAT_NOT_SUPPORTED. It inherits this behavior because it is specified to "behave similarly to vkGetPhysicalDeviceImageFormatProperties", which has an explicit statement about this error:
If format is not a supported image format, or if the combination of format, type, tiling, usage, and flags is not supported for images, then vkGetPhysicalDeviceImageFormatProperties returns VK_ERROR_FORMAT_NOT_SUPPORTED.
Now normally, a function that fails with an error leaves its return values undefined. But there is a weird exception:
If the combination of parameters to vkGetPhysicalDeviceImageFormatProperties2 is not supported by the implementation for use in vkCreateImage, then all members of imageFormatProperties will be filled with zero.
However, there's an explicit note saying that this is old, bad behavior, preserved only for compatibility's sake. Being a compatibility feature means that you can rely on it, but you shouldn't. Also, it only applies to the imageFormatProperties data, not to any of the extension structures you can pass.
So it's best to just ignore this and ask your questions in the right order.

: after variables in Pascal

I know that using 0:3 in this Pascal code will print the result with 3 decimal places:
var
  a, b: real;
begin
  a := 23;
  b := 7;
  writeln(a/b:0:3);
  readln;
end.
What I would like to know is whether anyone has a source to learn what this : will do with other variable types, or whether adding, for example, 0:3:4 will make a difference. Basically, what : can do to a variable.
For the exact definition of write parameters take a look at ISO standards 7185 and 10206, “Standard Pascal” and “Extended Pascal” respectively. These references are useless though if your compiler’s documentation does not make a statement regarding compliance with them. Other compilers have their own non-standard extensions, so the only reliable source of reference is your compiler’s documentation or even its source code if available.
[…] what this : will do with other variables […] Basically what : can do to a variable
As MartynA already noted, this language is imprecise: the variables' values are only read by write/writeLn/writeStr, so the variables themselves are left unmodified.
[…] if adding for example 0:3:4 will make a difference.
To my knowledge a third write parameter is/was only allowed in PXSC, Pascal eXtensions for Scientific Computing. There the third parameter indicates the rounding mode (absent or 0: closest printable number; greater than zero: round up; less than zero: round down).
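For readers coming from printf-style languages, the width:precision pair maps closely onto a fixed-point format specification. A rough Python analogy (an illustration only, not a definition of the Pascal semantics):

# writeln(a/b:0:3) -> fixed-point, 3 decimals, no minimum field width
print(f"{23/7:0.3f}")   # 3.286

# writeln(a/b:10:3) -> right-justified in a field at least 10 characters wide
print(f"{23/7:10.3f}")  #      3.286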

What is the difference between FixedLenSequenceFeature and VarLenFeature?

I used to specify VarLenFeature when I wanted to decode a variable-length input feature, but recently I noticed that FixedLenSequenceFeature could do the same thing for me. So what's the difference between these two classes, and when should I use one instead of the other? I can get nearly nothing from the documentation.
VarLenFeature parses a variable-length feature into a tf.SparseTensor, so you get indices/values/dense_shape and have to densify it yourself (e.g. with tf.sparse.to_dense) if you need a regular tensor.
FixedLenSequenceFeature parses it into a dense Tensor. When used with parse_example (rather than parse_single_sequence_example) you must set allow_missing=True, and shorter entries within a batch are padded with default_value.
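A minimal sketch of the difference, using a single Example with a hypothetical variable-length int64 feature named "values":

import tensorflow as tf

# One serialized Example holding a variable-length int64 feature "values".
example = tf.train.Example(features=tf.train.Features(feature={
    "values": tf.train.Feature(int64_list=tf.train.Int64List(value=[1, 2, 3])),
})).SerializeToString()

# VarLenFeature -> tf.SparseTensor
sparse = tf.io.parse_single_example(
    example, {"values": tf.io.VarLenFeature(tf.int64)})["values"]
print(tf.sparse.to_dense(sparse))  # [1 2 3]

# FixedLenSequenceFeature -> dense Tensor; allow_missing=True is required
# when parsing Examples (as opposed to SequenceExamples).
dense = tf.io.parse_single_example(
    example,
    {"values": tf.io.FixedLenSequenceFeature([], tf.int64, allow_missing=True)},
)["values"]
print(dense)  # [1 2 3]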

TF-Agents: how to take invalid actions into account

I'm using the TF-Agents library for reinforcement learning, and I would like to take into account that, for a given state, some actions are invalid.
How can this be implemented?
Should I define an "observation_and_action_constraint_splitter" function when creating the DqnAgent?
If yes: do you know of any tutorial on this?
Yes, you need to define the function, pass it to the agent, and also change the environment output appropriately so that the function can work with it. I am not aware of any tutorials on this, but you can look at this repo I have been working on.
Note that it is very messy, a lot of the files in there are not actually being used, and the docstrings are terrible and often wrong (I forked this and didn't bother to sort everything out). However, it is definitely working correctly. The parts that are relevant to your question are:
rl_env.py, in HanabiEnv.__init__, where the _observation_spec is defined as a dictionary of ArraySpecs (here). You can ignore game_obs, hand_obs and knowledge_obs, which are used to run the environment verbosely; they are not fed to the agent.
rl_env.py, in HanabiEnv._reset at line 110, gives an idea of how the timestep observations are constructed and returned from the environment. legal_moves are passed through np.logical_not, since my specific environment marks legal moves with 0 and illegal ones with -inf, whilst TF-Agents expects 1/True for a legal move; my vector, when cast to bool, would therefore be the exact opposite of what TF-Agents expects.
These observations are then fed to the observation_and_action_constraint_splitter in utility.py (here), where a tuple containing the observations and the action constraints is returned. Note that game_obs, hand_obs and knowledge_obs are implicitly thrown away (and not fed to the agent, as previously mentioned).
Finally, this observation_and_action_constraint_splitter is fed to the agent in utility.py, in the create_agent function at line 198, for example.
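For reference, here is a minimal sketch of wiring a splitter into a DqnAgent, assuming a dict observation with a hypothetical "mask" entry of 0/1 action-validity flags (the spec shapes are made up for illustration):

import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.networks import q_network
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts

# Hypothetical specs: 4 discrete actions; the observation dict carries the
# actual observation vector plus a 0/1 mask of currently valid actions.
observation_spec = {
    "observations": tf.TensorSpec([8], tf.float32),
    "mask": tensor_spec.BoundedTensorSpec([4], tf.int32, 0, 1),
}
action_spec = tensor_spec.BoundedTensorSpec([], tf.int32, 0, 3)

def splitter(observation):
    # Return (network input, action mask); the agent only considers
    # actions whose mask entry is 1/True.
    return observation["observations"], observation["mask"]

q_net = q_network.QNetwork(observation_spec["observations"], action_spec)

agent = dqn_agent.DqnAgent(
    ts.time_step_spec(observation_spec),
    action_spec,
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    observation_and_action_constraint_splitter=splitter,
)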

Why does the official documentation prefer concatenate over hstack/vstack in NumPy?

I find that the latest documentation for hstack/vstack notes that "you should prefer np.concatenate or np.stack".
But I think their readability is better than that of concatenate(a, 0) or concatenate(a, 1).
All 3 'stack' functions use concatenate (as do np.append and column_stack). It's instructive to look at their code, e.g. np.source(np.hstack).
What they all do is massage the dimensions of the input arrays, making sure they are 1d or 2d etc., and then call concatenate with the appropriate axis. So in the long run it's a good idea to know how to use concatenate without the 'crutch' of the others.
But people will continue to use hstack and vstack where convenient. dstack and column_stack are less common. np.append is frequently misused and should be banished.
I think this 'preferred' note was added when np.stack was added. np.stack also uses concatenate, but in a somewhat more sophisticated way: it inserts a new axis (with expand_dims). I view it as a generalization of np.array. When given a list of matching arrays, np.array joins them on a new initial axis. np.stack does the same thing by default, but lets us specify a different 'new' axis for the join.
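To make these relationships concrete (a quick sketch):

import numpy as np

a = np.ones((2, 3))
b = np.zeros((2, 3))

# hstack/vstack are thin wrappers around concatenate:
assert (np.vstack([a, b]) == np.concatenate([a, b], axis=0)).all()
assert (np.hstack([a, b]) == np.concatenate([a, b], axis=1)).all()

# np.stack inserts a new axis instead of joining along an existing one;
# by default it matches np.array on a list of equal-shape arrays.
assert np.stack([a, b]).shape == (2, 2, 3)
assert (np.stack([a, b]) == np.array([a, b])).all()
assert np.stack([a, b], axis=2).shape == (2, 3, 2)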
I should qualify my answer. It is not official. Rather I'm making an educated guess based on knowledge of the code.