Set a dynamic stride for tf.nn.conv2d layer - tensorflow

I want to pass a layer, say 9 x 1, through a kernel of size, say 2 x 1.
Now what I want to do is convolve the following values together ->
1 and 2, 2 and 3, 4 and 5, 5 and 6, 7 and 8, 8 and 9
and then, of course, pad it.
What you can see from this example is that I am trying to make the stride in the width dimension follow the pattern ->
1, 2, 1, 2, 1, 2, ...
and after every '1' I want to pad it so that in the end the size doesn't change.
To put it simply, I want to slice the main matrix into smaller matrices along a dimension, pass each of them separately through conv2d layers, pad them, and then concat them again along the same dimension, but I want to do all this without actually cutting it up. I hope you understand what I am trying to ask. Is it possible?
Edit: Sorry, I should have mentioned this: I am using the TensorFlow libraries and I am talking about the tf.nn.conv2d function.
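One way to sketch this (a rough illustration only; the input values, the group size of 3, and the padding placement are assumptions of mine, not something stated in the question) is to slice the input, convolve each slice with stride 1, pad each result back to its original length, and concatenate:
import tensorflow as tf

# Hypothetical sketch: emulate the 1, 2, 1, 2, ... stride pattern by slicing
# the 9 x 1 input into groups of 3, convolving each group with stride 1,
# padding it back to length 3, and concatenating the pieces again.
x = tf.reshape(tf.range(1.0, 10.0), [1, 9, 1, 1])  # [batch, height, width, channels]
kernel = tf.ones([2, 1, 1, 1])                      # the 2 x 1 kernel

pieces = []
for start in (0, 3, 6):                             # groups (1,2,3), (4,5,6), (7,8,9)
    window = x[:, start:start + 3, :, :]
    conv = tf.nn.conv2d(window, kernel, strides=[1, 1, 1, 1], padding='VALID')
    conv = tf.pad(conv, [[0, 0], [0, 1], [0, 0], [0, 0]])  # pad back to length 3
    pieces.append(conv)

result = tf.concat(pieces, axis=1)                  # 9 x 1 again: pairs (1,2), (2,3), (4,5), ...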

BigQuery ST_SIMPLIFY returns GEOMETRYCOLLECTION instead of POLYGON

I found an issue with the BigQuery function ST_SIMPLIFY.
I'm querying big geometries and some stats for them. When I need to visualize them, e.g. in Kepler, it is not possible, because Kepler does not consume the output of BigQuery ST_SIMPLIFY. I analyzed the results from ST_SIMPLIFY and found this:
The input I used is in all cases a POLYGON parsed from OSM.
When I call ST_SIMPLIFY I get results of mixed geometry types, like POLYGON and GEOMETRYCOLLECTION containing MULTILINESTRINGs, LINESTRINGs, and POLYGONs.
Maybe that is not so strange, but when I try to visualize these geometries they do not make sense, especially the LINESTRINGs inside GEOMETRYCOLLECTIONs, like in this geojson.
When I tried to use these geometries in the simplify function in Shapely, I got valid results containing only POLYGONs.
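A minimal sketch of such a Shapely check (the polygon and the tolerance below are arbitrary illustrative values, not the real OSM data):
from shapely import wkt

# Example polygon with a thin spike; Shapely's simplify (preserve_topology=True
# by default) keeps the result a Polygon rather than a GeometryCollection.
poly = wkt.loads('POLYGON((1 1, 2 1, 2 2, 1.5 2, 1.5 3, 1.499 2, 1 2, 1 1))')
simplified = poly.simplify(0.5)
print(simplified.geom_type)   # 'Polygon'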
Why does BigQuery ST_SIMPLIFY return GEOMETRYCOLLECTIONs with mixed geometry types instead of a single POLYGON or MULTIPOLYGON?
To reproduce this issue you can initialize a BQ table from these data.
When simplifying, BigQuery can reduce the dimension of (parts of) the shape if a lower-dimensional shape represents the original shape with the requested precision.
E.g.
with data as (
  select st_geogfromtext(
    'polygon((1 1, 2 1, 2 2, 1.5 2, 1.5 3, 1.499 2, 1 2, 1 1))') g
)
select g, st_simplify(g, 10000) s from data
Here the "spike" at the top of the shape is converted into a line, and we get
GEOMETRYCOLLECTION(LINESTRING(1.5 2, 1.5 3), POLYGON((1 1, 2 1, 2 2, 1.5 2, 1 2, 1 1))).
Use
with data as (
  select st_geogfromtext(
    'polygon((1 1, 2 1, 2 2, 1.5 2, 1.5 3, 1.499 2, 1 2, 1 1))') g
)
select st_union(st_dump(st_simplify(g, 10000), 2)) s from data
to extract only the polygon parts, if you are OK with ignoring such spikes.
I wrote a BigQuery function that solved my issue. Basically this function ignores all geometries except POLYGON and MULTIPOLYGON. The tested results can be visualized in Kepler and other tools.
CREATE OR REPLACE FUNCTION
  my_project.repair_simplified_geom(geom ANY TYPE) AS ((
    SELECT
      ST_UNION(ARRAY_AGG(geom))
    FROM
      UNNEST(ST_DUMP(geom)) AS geom
    WHERE
      STARTS_WITH(ST_ASTEXT(geom), 'POLYGON')
      OR STARTS_WITH(ST_ASTEXT(geom), 'MULTIPOLYGON')))
Usage
select my_project.repair_simplified_geom(ST_SIMPLIFY(geometry,100)) as geom from
`admin_levels_svk.admin_levels_3_svk`

Explain the difference in this while loop please

Example:
x = 0
while x <= 10:
    x += 2
    print(x)
The results for this will be 2, 4, 6, 8, 10, 12.
If I switch the positions of print(x) and x += 2, the results will be 0, 2, 4, 6, 8, 10.
Please explain to me the thought process for this.
Thanks
I suppose this is Python. Basically the code gets executed in the order it is written. I suggest you take a sheet of paper and follow the lines of code, trying to figure out what happens and what the value of your variable is at each step. The output will be the values that your variable had at the moments it was printed with the print function.
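To make that trace concrete, here is a small sketch of both orderings, with the printed values noted in comments:
x = 0
while x <= 10:
    x += 2      # x becomes 2, 4, 6, 8, 10, 12
    print(x)    # so this prints 2, 4, 6, 8, 10, 12

x = 0
while x <= 10:
    print(x)    # this prints 0, 2, 4, 6, 8, 10
    x += 2      # the final increment to 12 happens after the last print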

From tensorboard, how to explain tensor_content?

When I use TensorBoard, I noticed something interesting. I have worked out part of the explanation myself, but I want to know more.
The problem is shown below.
I define a tensor which equals [-1, 28, 28, 1] and use TensorBoard to display the node; it has these attributes:
dtype {"type":"DT_INT32"}
value {"tensor":{"dtype":"DT_INT32","tensor_shape":{"dim":[{"size":4}]},"tensor_content":"\\377\\377\\377\\377\\034\\000\\000\\000\\034\\000\\000\\000\\001\\000\\000\\000"}}
Look at the tensor_content: octal 377 in binary is 011 111 111, and we need the last 8 bits, so \377\377\377\377 = 111...111 (32 bits), which equals -1 in decimal. Octal 034 in binary is 000 011 100, and we need the last 8 bits 00011100, so \034\000\000\000 = 00011100 followed by 24 zero bits; the first byte is the least significant one, so we should read it from right to left, and it equals 28. The remaining 28 and 1 work the same way.
I want to ask whether my explanation is right. And if it is, why is the content printed with octal escapes like \377 (three 3-bit digits) rather than hex like FF (two 4-bit digits)? Can anyone point me to a statement in the official materials?
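To check the interpretation, here is a small sketch (assuming NumPy is available) that decodes those raw bytes as little-endian int32:
import numpy as np

# The octal escapes from tensor_content, written out as a raw byte string
content = b"\377\377\377\377\034\000\000\000\034\000\000\000\001\000\000\000"

# tensor_content holds the raw little-endian bytes of the values, so reading
# them back as int32 recovers the original tensor
print(np.frombuffer(content, dtype="<i4"))   # [-1 28 28  1]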

(KeyError): MultiIndex Slicing requires the index to be fully lexsorted tuple ... Why is this caused by a list, but not by a tuple?

This question is partially here to help me understand what lex-sorting is in the context of multi-indexes.
Say I have some MultiIndexed DataFrame df, and for the index I want to use:
a = (1, 1, 1)
So to pull the value from the dataframe I write:
df.loc[a, df.columns[i]]
Which works. But the following doesn't:
df.loc[list(a), df.columns[i]]
Giving me the error:
*** KeyError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (1), lexsort depth (0)'
Why is this?
Also, another question: what does the following performance warning mean?
PerformanceWarning: indexing past lexsort depth may impact performance.
I'll illustrate the difference between passing a tuple and a list to .loc, using the example with df being
              0  1  2
first second
bar   one     4  4  7
      two     3  4  7
foo   one     8  1  8
      two     7  5  4
Here df.loc[('foo', 'two')] returns the row indexed by this tuple, namely (7, 5, 4). The parameter specifies both levels of the multiindex.
But df.loc[['foo', 'two']] means you want all rows with the top level of the multiindex being either 'foo' or 'two'. A list means these are the options you want, and since only one level is provided in each option, the selection is based on the first (leftmost) level. The result:
              0  1  2
first second
foo   one     8  1  8
      two     7  5  4
(Since there are no multiindices that begin with 'two', only those with 'foo' are present.)
Without seeing your dataframe, I can't tell exactly how this difference leads to the KeyError, but I hope the difference itself is clear now.
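For completeness, a small sketch that rebuilds the example frame above (the construction is mine; only the values come from the tables shown):
import pandas as pd

# Rebuild the example frame shown above
index = pd.MultiIndex.from_product([['bar', 'foo'], ['one', 'two']],
                                   names=['first', 'second'])
df = pd.DataFrame([[4, 4, 7], [3, 4, 7], [8, 1, 8], [7, 5, 4]], index=index)

print(df.loc[('foo', 'two')])    # a tuple is one full key: the single row 7, 5, 4
print(df.loc[[('foo', 'two')]])  # a list of tuples still selects full keys
print(df.loc[['bar', 'foo']])    # a list of scalars selects on the first level only
Note that recent pandas versions may raise a KeyError for df.loc[['foo', 'two']] itself, because 'two' is not a label at the first level; the partial result shown above reflects the behaviour of older versions.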

Step through range in D

Is there a way to create a step in D ranges?
For example, in python,
range(1, 10, 2)
gives me
[1, 3, 5, 7, 9]
all odds within 1 .. 10
Is there a way to do this in D using foreach?
foreach(x; 1 .. 10) {
}
I know I can use iota(start, end, step), but I also want to add an int to the very beginning and I don't know how to convert type Result to an int.
import std.range : chain, iota;
chain([2], iota(3, 16, 2));
chain concatenates ranges lazily.
Or you can go the other way around with filter!q{a == 2 || a & 1}(iota(2, 16)) (filter comes from std.algorithm).