While using TensorBoard I found something interesting. I have worked out part of an explanation, but I want to understand more.
The problem is shown below.
I define a tensor equal to [-1, 28, 28, 1] and use TensorBoard to display the node, which has these attributes:
dtype {"type":"DT_INT32"}
value {"tensor":{"dtype":"DT_INT32","tensor_shape":{"dim":[{"size":4}]},"tensor_content":"\\377\\377\\377\\377\\034\\000\\000\\000\\034\\000\\000\\000\\001\\000\\000\\000"}}
Look at the tensor_content: each \NNN escape is an octal byte. Octal 377 is binary 011 111 111; keeping the last 8 bits gives 11111111, so \377\377\377\377 is 32 one-bits, which equals -1 as a signed 32-bit integer. Octal 034 is binary 000 011 100; keeping the last 8 bits gives 00011100, so \034\000\000\000 is 00011100 followed by 24 zero bits. Since the bytes are stored least-significant first (little-endian), we should read them from right to left, which gives 28. The remaining 28 and the 1 work the same way.
I want to ask whether my explanation is right. And if it is, why use octal escapes like 377 (three 3-bit digits) rather than hex like FF, i.e. 15,15 (two 4-bit digits)? Can anyone back this up with official materials?
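For reference, this is how the decoding can be checked in Python (a quick sketch; struct reads the escaped bytes as four little-endian int32 values):
import struct

raw = b"\377\377\377\377\034\000\000\000\034\000\000\000\001\000\000\000"
print(struct.unpack("<4i", raw))  # (-1, 28, 28, 1)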
So if I was given a sorted list/array, e.g. [1, 6, 8, 15, 40], the size of the array, and a requested number...
How would you find the minimum number of values required from that list to sum to the requested number?
For example, given the array [1, 6, 8, 15, 40] and the requested number 23, it would take 2 values from the list (8 and 15) to equal 23. The function would then return 2 (the number of values). Furthermore, there is an unlimited supply of 1s in the array, so the function will always return a value.
Any help is appreciated
The NP-complete subset-sum problem trivially reduces to your problem: given a set S of n integers and a target value s, we construct the set S' whose values are (n+1)·xk for each xk in S, and set the target equal to (n+1)·s. If there is a subset of the original set S summing to s, then there is a subset of size at most n in the new set summing to (n+1)·s, and such a subset cannot involve any extra 1s. If there is no such subset, then any subset produced as an answer must contain at least n+1 elements, since it needs enough 1s to reach a multiple of n+1.
So, the problem will not admit any polynomial-time solution without a revolution in computing. With that disclaimer out of the way, you can consider some pseudopolynomial-time solutions to the problem which work well in practice if the maximum size of the set is small.
Here's a Python algorithm that will do this:
import functools

S = [1, 6, 8, 15, 40]  # must contain only positive integers

@functools.lru_cache(maxsize=None)  # memoizing decorator
def min_subset(k, s):
    # returns the minimum size of a subset of S[:k] summing to s,
    # including any extra 1s needed to get there
    best = s  # use all ones
    for i, j in enumerate(S[:k]):
        if j <= s:
            sz = min_subset(i, s - j) + 1
            if sz < best:
                best = sz
    return best

print(min_subset(len(S), 23))  # prints 2
This is tractable even for fairly large lists (I tested a random list of n = 50 elements), provided their values are bounded. With S = [random.randint(1, 500) for _ in range(50)], min_subset(len(S), 8489) takes less than 10 seconds to run.
There may be a simpler solution, but if your lists are sufficiently short, you can just try every set of values, i.e.:
1 --> Not 23
6 --> Not 23
...
1 + 6 = 7 --> Not 23
1 + 8 = 9 --> Not 23
...
1 + 40 = 41 --> Not 23
6 + 8 = 14 --> Not 23
...
8 + 15 = 23 --> Oh look, it's 23, and we added 2 values
If you know your list is sorted, you can skip some tests: since the values are in increasing order, once 6 plus some value already exceeds 23, there's no need to test 6 against anything larger (e.g. 6 + 40).
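If it's useful, here is a direct way to code up that exhaustive search (my own sketch; it pads any shortfall with the unlimited 1s the question allows):
from itertools import combinations

S = [1, 6, 8, 15, 40]
target = 23

best = target  # worst case: target copies of 1
for size in range(len(S) + 1):
    for combo in combinations(S, size):
        total = sum(combo)
        if total <= target:
            # make up the difference with extra 1s, one value each
            best = min(best, size + (target - total))
print(best)  # 2, from 8 + 15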
I can do basic stuff in python/pandas, but I still struggle with the "no loops necessary" world of pandas. I tend to fall back to converting to lists, doing loops as in VBA, and then just bringing those lists back into dfs. I know there is a simpler way, but I can't figure it out.
A simple example is a very basic strategy: create a signal of -1 when a series goes above 70 and keep it at -1 until the series breaks below 30, at which point the signal changes to 1 and stays there until the next value above 70, and so on.
I can do this via simple list looping, but I know this is far from "Pythonic"! Can anyone help translate this to some nicer code without loops?
import pandas as pd

# rsi_list is just a list from a df column of numbers. Simple example:
rsi = {'rsi': [35, 45, 75, 56, 34, 29, 26, 34, 67, 78]}
rsi = pd.DataFrame(rsi)
rsi_list = rsi['rsi'].tolist()

signal_list = []
hasShort = 0
hasLong = 0
for i in range(len(rsi_list) - 1):
    if rsi_list[i] >= 70 or hasShort == 1:
        signal_list.append(-1)
        if rsi_list[i+1] >= 30:
            hasShort = 1
        else:
            hasShort = 0
    elif rsi_list[i] <= 30 or hasLong == 1:
        signal_list.append(1)
        if rsi_list[i+1] <= 70:
            hasLong = 1
        else:
            hasLong = 0
    else:
        signal_list.append(0)

# last part just so the list is the same length as the original df,
# since I put it back as a column
if rsi_list[-1] >= 70:
    signal_list.append(-1)
else:
    signal_list.append(1)
First clip the values to a lower bound of 30 and an upper bound of 70, then use where to set to NaN all the values that are not exactly 30 or 70, replace those by 1 and -1, propagate the values forward with ffill, and finally fillna with 0 for the rows before the first 30 or 70.
rsi['rsi_cut'] = (
    rsi['rsi'].clip(lower=30, upper=70)
    .where(lambda x: x.isin([30, 70]))
    .replace({30: 1, 70: -1})
    .ffill()
    .fillna(0)
)
print(rsi)
rsi rsi_cut
0 35 0.0
1 45 0.0
2 75 -1.0
3 56 -1.0
4 34 -1.0
5 29 1.0
6 26 1.0
7 34 1.0
8 67 1.0
9 78 -1.0
Edit: maybe a bit easier, use ge (greater than or equal) and le (less than or equal) and do a subtraction, then replace the 0s with the ffill method:
print((rsi['rsi'].le(30).astype(int) - rsi['rsi'].ge(70))
      .replace(to_replace=0, method='ffill'))
I want to pass a layer, say 9 x 1, through a kernel of size, say, 2 x 1.
Now what I want to do is convolve the following values together ->
1 and 2, 2 and 3, 4 and 5, 5 and 6, 7 and 8, 8 and 9
and then, of course, pad it.
What you can see from this example is that I am trying to make the stride in the width dimension follow the pattern ->
1, 2, 1, 2, 1, 2, ...
and after every '1' I want to pad, so that the final size doesn't change.
Put simply, I want to slice the main matrix into smaller matrices along one dimension, pass each of them separately through conv2d layers, pad them, and then concat them again along the same dimension, but I want to do all this without actually cutting the tensor up. I hope you understand what I am trying to ask. Is it possible?
Edit: Sorry, I should have mentioned this: I am using the TensorFlow libraries, and I am talking about the tf.nn.conv2d function.
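For illustration, one way I could imagine sketching it (not a working answer, just a stride-1 tf.nn.conv2d plus a gather over the positions my 1, 2, 1, 2 pattern would visit):
import tensorflow as tf

x = tf.reshape(tf.range(1.0, 10.0), [1, 9, 1, 1])  # the 9 x 1 layer, values 1..9
k = tf.ones([2, 1, 1, 1])                          # the 2 x 1 kernel

full = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='VALID')  # all 8 pair sums
keep = tf.gather(full, [0, 1, 3, 4, 6, 7], axis=1)  # pairs (1,2) (2,3) (4,5) (5,6) (7,8) (8,9)
# the six results can then be padded back out so the overall size doesn't change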
I'm wondering if it is possible to represent a number as a sequence of bits, each having approximately the same significance, such that if we flip one of the bits, the overall value does not change by much.
For example, we can use groups of 4 bits, where each group represents a value from 0 to 15 and the overall value is the sum of all these values.
0110 0101 1101 1010 1011 → 6 + 5 + 13 + 10 + 11 = 45
and now flipping any bit can only cause a maximum difference of 8 in the final value.
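As a quick check of that arithmetic (a small Python illustration):
groups = [0b0110, 0b0101, 0b1101, 0b1010, 0b1011]
print(sum(groups))  # 45
# flipping one bit in a group changes that group by 1, 2, 4, or 8,
# so the overall sum moves by at most 8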
Some drawbacks obviously exist with this approach:
values have multiple representations, with some values having more representations than others (for example, there are 39280 distinct representations for the number 38, and only 1 for the number 0);
the number of values that can be represented is greatly reduced (this representation allows integers from 0 to 75, while 20 bits could normally represent 2^20 ≈ 1 million different integers).
Are there any resources I can find concerning this problem? I can't seem to find anything online, but maybe I'm not searching with the right keywords. What other alternatives exist to my approach? Do they improve on its disadvantages?
I can easily calculate the values of the sinc(x) curve used in Lanczos, and I have read the previous explanations of Lanczos resizing, but being new to this area I do not understand how to actually apply them.
To resample with Lanczos, imagine you overlay the output and input over each other, with points signifying where the pixel locations are. For each output pixel location, you take a box +-3 output pixels from that point. For every input pixel that lies in that box, calculate the value of the Lanczos function at that location, with the distance from the output location (in output pixel coordinates) as the parameter. You then need to normalize the calculated values by scaling them so that they add up to 1. After that, multiply each input pixel value by the corresponding scaling value and add the results together to get the value of the output pixel.
For example, what does "overlay the input and output" actually mean in programming terms?
In the equation given
lanczos(x) = {
0 if abs(x) > 3,
1 if x == 0,
else sin(x*pi)/x
}
what is x?
As a simple example, suppose I have an input image with 14 values (i.e. in addresses In0-In13):
20 25 30 35 40 45 50 45 40 35 30 25 20 15
and I want to scale this up by 2, i.e. to an image with 28 values (i.e. in addresses Out0-Out27).
Clearly, the value in address Out13 is going to be similar to the value in address In7, but which values do I actually multiply to calculate the correct value for Out13?
What is x in the algorithm?
If the values in your input data are at t coordinates [0, 1, 2, 3, ...], then your output (which is scaled up by 2) has t coordinates at [0, .5, 1, 1.5, 2, 2.5, 3, ...]. So to get the first output value, you center your filter at 0 and weight the input values it overlaps. Then to get the second output, you center your filter at 1/2 and do the same. Etc.
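A minimal 1-D sketch of that procedure (my own illustration; note that the full Lanczos kernel is sinc(x) * sinc(x/a), so it includes a window term that the simplified formula quoted above leaves out):
import math

def lanczos3(x):
    # standard Lanczos kernel with a = 3 lobes: sinc(x) * sinc(x/3)
    if x == 0.0:
        return 1.0
    if abs(x) >= 3.0:
        return 0.0
    px = math.pi * x
    return 3.0 * math.sin(px) * math.sin(px / 3.0) / (px * px)

def upscale2x(src):
    # each output sample n sits at input coordinate n / 2
    out = []
    for n in range(2 * len(src)):
        center = n / 2.0
        lo = int(math.floor(center)) - 2  # input pixels within +-3 of center
        weights, taps = [], []
        for i in range(lo, lo + 6):
            weights.append(lanczos3(i - center))
            taps.append(src[min(max(i, 0), len(src) - 1)])  # clamp at the edges
        total = sum(weights)  # normalize so the weights add up to 1
        out.append(sum(w * t for w, t in zip(weights, taps)) / total)
    return out

print(upscale2x([20, 25, 30, 35, 40, 45, 50, 45, 40, 35, 30, 25, 20, 15]))
For Out13 the filter center lands at input coordinate 6.5, so this combines In4 through In9, weighted most heavily toward In6 and In7.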