YAML root node !Null but size is 0 - yaml-cpp

I'm new to YAML and trying to get an example up and running. I have the Sprites_List YAML file from http://www.gamedev.net/page/resources/_/technical/apis-and-tools/yaml-basics-and-parsing-with-yaml-cpp-r3508. The root/base node is never Null, but its size is always 0, its type is Scalar, and trying to access a child node throws a YAML::BadSubscript exception (line 118 in impl.h of yaml-cpp 0.5.3). Why does the root node have size 0, and why can't I access a node?
YAML::Node root_node_ = YAML::Load( file );
if( root_node_.IsNull() )
{
    // Never entered
}
int sz = root_node_.size(); // Always 0
YAML::Node a_node = root_node_[ "Sprites_List" ]; // Exception
Edit:
Full file contents (pasted into a file called sprites.yml):
Sprites_List: [Player, Monster, Gem]
Player:
    SpriteSheet: /Resources/Textures/duotone.png
    Anim_Names: [run, idle, jump, die]
    run:
        Offset: {x: 0, y: 0}
        Size: {w: 32, h: 32}
        Frame_Durations: [80, 80, 80, 80, 80, 80]
    idle:
        Offset: {x: 0, y: 32}
        Size: {w: 32, h: 32}
        Frame_Durations: [80, 120, 80, 30, 30, 130] #Notice the different durations!
    jump:
        Offset: {x: 0, y: 64}
        Size: {w: 32, h: 32}
        Frame_Durations: [80, 80, 120, 80, 80, 0] #Can I say 0 means no skipping?
    die:
        Offset: {x: 0, y: 192} #192? Yup, it is the last row in that sheet.
        Size: {w: 32, h: 32}
        Frame_Durations: [80, 80, 80, 80, 80] #this one has only 5 frames.
Monster: #lol that lame name
    SpriteSheet: /Resources/Textures/duotone.png
    Anim_Names: [hover, die]
    hover:
        Offset: {x: 0, y: 128}
        Size: {w: 32, h: 32}
        Frame_Durations: [120, 80, 120, 80]
    die:
        Offset: {x: 0, y: 160}
        Size: {w: 32, h: 32}
        Frame_Durations: [80, 80, 80, 80, 80]
Gem:
    SpriteSheet: /Resources/Textures/duotone.png
    Anim_Names: [shine]
    shine:
        Offset: {x: 0, y: 96}
        Size: {w: 32, h: 32}
        Frame_Durations: [80, 80, 80, 80, 80, 80]
In the class declaration:
YAML::Node root_node_;
In the class definition:
CConfigFile( std::string const & file ) :
root_node_ ( YAML::Load( file ) )
The file path is a full absolute path, with each \ escaped as \\.

YAML::Load parses a YAML string, not a file. To load a file, you need to use YAML::LoadFile. Since you are passing a file path, yaml-cpp parses the path string itself as a single YAML scalar, which is exactly why the root node is not Null, has type Scalar, has size 0, and throws YAML::BadSubscript on subscript access.
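A minimal sketch of the fix (file name taken from the question):
#include <yaml-cpp/yaml.h>
#include <iostream>
#include <string>

int main()
{
    // LoadFile opens and parses the file at the given path;
    // Load would instead treat the path string itself as a YAML document.
    YAML::Node root = YAML::LoadFile( "sprites.yml" );

    YAML::Node sprites = root[ "Sprites_List" ]; // now a 3-element sequence
    for( std::size_t i = 0; i < sprites.size(); ++i )
        std::cout << sprites[ i ].as< std::string >() << "\n";
}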

Related

Error: <circle> attribute r: Expected length, "NaN" when using a Billboard.js bubble chart

I am trying to display a Billboard.js bubble chart using JavaScript and HTML.
I included the lib in my index.html file as:
src="/lib/billboard.pkgd.js"
The code is very simple, but when it runs I get the "NaN" error.
this.d3Chart = bb.generate({
  bindto: "#d3bubbleChart",
  data: {
    type: "bubble",
    label: false,
    xs: {
      data1: "x1",
      data2: "x2"
    },
    columns: [
      ["x1", 10, 20, 50, 100, 50],
      ["x2", 30, 40, 90, 100, 170],
      ["data1", [20, 50], [30, 10], [50, 28], [60, 70], [100, 80]],
      ["data2", [350, 50, 30], [230, 90], [200, 100], [250, 150], [200, 200]]
    ]
  },
  bubble: {
    maxR: 50
  },
  grid: { y: { show: true } },
  axis: {
    x: {
      label: { text: "AOV", position: "outer-left" },
      height: 50,
      tick: { format: function(x) { return ("$" + x); } }
    },
    y: {
      fit: false,
      min: { fit: true, value: 0 },
      max: 450,
      labels: "yes",
      label: { text: "Conversion Rate", position: "outer-bottom" },
      tick: { format: function(x) { return (x + "%"); } }
    }
  }
});
Check out the allowed data for the bubble type in the API doc:
For 'bubble' type, data can contain dimension value:
- an array of [y, z] data following the order
- or an object with 'y' and 'z' key value
'y' is for y axis coordination and 'z' is the bubble radius value
Looking at your example, the first data point of data2 contains an additional value. Removing it will make the chart work.
this.d3Chart = bb.generate({
  data: {
    ...
    columns: [
      // the length for each bubble should be 2: [y, z]
      ["data2", [350, 50, 30], ...]
    ]
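For reference, here is a self-contained version of the question's config with every bubble reduced to a [y, z] pair (only the first data2 entry changes):
this.d3Chart = bb.generate({
  bindto: "#d3bubbleChart",
  data: {
    type: "bubble",
    label: false,
    xs: {
      data1: "x1",
      data2: "x2"
    },
    columns: [
      ["x1", 10, 20, 50, 100, 50],
      ["x2", 30, 40, 90, 100, 170],
      // every bubble is [y, z]: y coordinate plus bubble radius value
      ["data1", [20, 50], [30, 10], [50, 28], [60, 70], [100, 80]],
      ["data2", [350, 50], [230, 90], [200, 100], [250, 150], [200, 200]]
    ]
  },
  bubble: { maxR: 50 }
});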

How to dot product (1, 10^13) and (10^13, 1) scipy sparse matrices

Basically what the title says.
The two matrices are mostly zeros: the first is 1 x 9999999999999 and the second is 9999999999999 x 1.
When I try to take the dot product I get this:
MemoryError: Unable to allocate 72.8 TiB for an array with shape (10000000000000,) and data type int64
Full traceback:
In [31]: imputed.dot(s)
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-31-670cfc69d4cf> in <module>
----> 1 imputed.dot(s)
~/.local/lib/python3.8/site-packages/scipy/sparse/base.py in dot(self, other)
357
358 """
--> 359 return self * other
360
361 def power(self, n, dtype=None):
~/.local/lib/python3.8/site-packages/scipy/sparse/base.py in __mul__(self, other)
478 if self.shape[1] != other.shape[0]:
479 raise ValueError('dimension mismatch')
--> 480 return self._mul_sparse_matrix(other)
481
482 # If it's a list or whatever, treat it like a matrix
~/.local/lib/python3.8/site-packages/scipy/sparse/compressed.py in _mul_sparse_matrix(self, other)
499
500 major_axis = self._swap((M, N))[0]
--> 501 other = self.__class__(other) # convert to this format
502
503 idx_dtype = get_index_dtype((self.indptr, self.indices,
~/.local/lib/python3.8/site-packages/scipy/sparse/compressed.py in __init__(self, arg1, shape, dtype, copy)
32 arg1 = arg1.copy()
33 else:
---> 34 arg1 = arg1.asformat(self.format)
35 self._set_self(arg1)
36
~/.local/lib/python3.8/site-packages/scipy/sparse/base.py in asformat(self, format, copy)
320 # Forward the copy kwarg, if it's accepted.
321 try:
--> 322 return convert_method(copy=copy)
323 except TypeError:
324 return convert_method()
~/.local/lib/python3.8/site-packages/scipy/sparse/csc.py in tocsr(self, copy)
135 idx_dtype = get_index_dtype((self.indptr, self.indices),
136 maxval=max(self.nnz, N))
--> 137 indptr = np.empty(M + 1, dtype=idx_dtype)
138 indices = np.empty(self.nnz, dtype=idx_dtype)
139 data = np.empty(self.nnz, dtype=upcast(self.dtype))
MemoryError: Unable to allocate 72.8 TiB for an array with shape (10000000000000,) and data type int64
It seems scipy is trying to create a temporary array.
I am using the .dot method that scipy provides.
I am also open to non-scipy solutions.
Thanks!
In [105]: from scipy import sparse
If I make a (100,1) csr matrix:
In [106]: A = sparse.random(100,1,format='csr')
In [107]: A
Out[107]:
<100x1 sparse matrix of type '<class 'numpy.float64'>'
with 1 stored elements in Compressed Sparse Row format>
The data and indices are:
In [109]: A.data
Out[109]: array([0.19060481])
In [110]: A.indices
Out[110]: array([0], dtype=int32)
In [112]: A.indptr
Out[112]:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)
So even with only 1 nonzero term, the indptr array is large (101 elements).
On the other hand, the csc format for the same array has much smaller storage. But a csc with (1,100) shape would look like the csr.
In [113]: Ac = A.tocsc()
In [114]: Ac.indptr
Out[114]: array([0, 1], dtype=int32)
In [115]: Ac.indices
Out[115]: array([88], dtype=int32)
Math, especially matrix products, is done with csr/csc formats, so it may be hard to avoid this 72.8 TiB memory use.
Looking at the traceback I see that it's trying to convert other to the format that matches self.
So with A.dot(B), where A is (1,N) csr (the small shape) and B is (N,1) csc (also the small shape), B.tocsr() requires the large (N+1,) shaped indptr.
Let's try an alternative to dot.
First, 2 matrices:
In [122]: A = sparse.random(1,100, .2,format='csr')
In [123]: B = sparse.random(100,1, .2,format='csc')
In [124]: A
Out[124]:
<1x100 sparse matrix of type '<class 'numpy.float64'>'
with 20 stored elements in Compressed Sparse Row format>
In [125]: B
Out[125]:
<100x1 sparse matrix of type '<class 'numpy.float64'>'
with 20 stored elements in Compressed Sparse Column format>
In [126]: A@B
Out[126]:
<1x1 sparse matrix of type '<class 'numpy.float64'>'
with 1 stored elements in Compressed Sparse Row format>
In [127]: _.A
Out[127]: array([[1.33661021]])
Their nonzero element indices. Only the ones that match matter.
In [128]: A.indices, B.indices
Out[128]:
(array([16, 20, 23, 28, 30, 37, 39, 40, 43, 49, 54, 59, 61, 63, 67, 70, 74,
91, 94, 99], dtype=int32),
array([ 5, 8, 15, 25, 34, 35, 40, 46, 47, 51, 53, 60, 68, 70, 75, 81, 87,
90, 91, 94], dtype=int32))
equality matrix:
In [129]: mask = A.indices[:,None]==B.indices
In [132]: np.nonzero(mask.any(axis=0))
Out[132]: (array([ 6, 13, 18, 19]),)
In [133]: np.nonzero(mask.any(axis=1))
Out[133]: (array([ 7, 15, 17, 18]),)
The matching indices:
In [139]: A.indices[Out[133]]
Out[139]: array([40, 70, 91, 94], dtype=int32)
In [140]: B.indices[Out[132]]
Out[140]: array([40, 70, 91, 94], dtype=int32)
The sum of the corresponding data products matches Out[127]:
In [141]: (A.data[Out[133]]*B.data[Out[132]]).sum()
Out[141]: 1.3366102138511582
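Putting those pieces together: a minimal sketch of a helper (my own name, not a scipy API) that forms the (1,N)·(N,1) product straight from the index arrays, assuming A is csr and B is csc so the huge (N+1,) indptr is never built:
import numpy as np
from scipy import sparse

def thin_dot(A, B):
    # A: (1, N) csr, B: (N, 1) csc.
    # Indices within a single row/column are unique, so assume_unique is
    # safe; intersect1d finds the positions where both have nonzeros.
    common, ia, ib = np.intersect1d(A.indices, B.indices,
                                    assume_unique=True, return_indices=True)
    return (A.data[ia] * B.data[ib]).sum()

A = sparse.random(1, 100, 0.2, format='csr')
B = sparse.random(100, 1, 0.2, format='csc')
print(thin_dot(A, B))  # matches (A @ B).toarray()[0, 0]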

File Upload to Local Storage using Arc in Elixir

I am using Arc.Definition (https://github.com/stavro/arc) for uploading an image to local storage.
My file_service.ex is below:
defmodule MyApp.FileService do
  use Arc.Definition
  use Arc.Ecto.Definition

  @image_types ~w(.jpg .jpeg .png .gif)
  @versions [:original]
  @default_filename "image.png"
  @heights %{
    medium: 400
  }
  @widths %{
    medium: 400
  }

  def __storage, do: Arc.Storage.Local

  def upload_image(%Plug.Upload{} = image, resource_type, resource_id) do
    store({%Plug.Upload{path: image.path, filename: @default_filename},
           %{resource_type: resource_type, resource_id: resource_id}})
  end

  def upload_base64_image(base64_image, resource_type, resource_id) do
    store({%{filename: @default_filename, binary: base64_image_to_binary(base64_image)}})
  end

  def delete_file(image_url, resource) do
    delete({image_url, resource})
  end

  defp base64_image_to_binary("data:image/" <> rest) do
    rest
    |> String.replace("\n", "")
    |> String.split(",")
    |> Enum.at(1)
    |> Base.decode64!
  end

  defp base64_image_to_binary(base64_image) do
    base64_image
    |> String.replace("\n", "")
    |> Base.decode64!
  end
end
But I am getting an error saying "no function clause matching in Arc.Actions.Store.store".
The stack trace is below:
** (FunctionClauseError) no function clause matching in Arc.Actions.Store.store/2
(arc) lib/arc/actions/store.ex:8: Arc.Actions.Store.store(MyApp.FileService, {%{binary: <<255, 216,
255, 225, 3, 48, 69, 120, 105, 102, 0, 0, 73, 73, 42, 0, 8, 0, 0, 0,
58, 0, 50, 1, 2, 0, 20, 0, 0, 0, 198, 2, 0, 0, 15, 1, 2, 0, 10, 0, 0,
0, 218, 2, 0, 0, 1, 1, ...>>, filename: "image.png"}})
Anyone, please help?
Your code
def upload_base64_image(base64_image, resource_type, resource_id) do
  store({%{filename: @default_filename, binary: base64_image_to_binary(base64_image)}})
end
is calling store wrongly: it wraps the map in a one-element tuple.
store only accepts a {file, scope} tuple or a file on its own (a path string or a map).
So it should be: store(%{filename: @default_filename, binary: base64_image_to_binary(base64_image)}).
See the example from the GitHub README:
# Store a file from a connection body
{:ok, data, _conn} = Plug.Conn.read_body(conn)
Avatar.store(%{filename: "file.png", binary: data})
I figured it out by reading the traceback and Arc's store implementation:
def store(definition, {file, scope}) when is_binary(file) or is_map(file) do
put(definition, {Arc.File.new(file), scope})
end
def store(definition, filepath) when is_binary(filepath) or is_map(filepath) do
store(definition, {filepath, nil})
end
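Applied to the module above, the corrected function looks like this (same names as the question; the unused arguments are underscored only to silence compiler warnings):
def upload_base64_image(base64_image, _resource_type, _resource_id) do
  # pass the map directly; wrapping it in a 1-tuple matches neither store/2 clause
  store(%{filename: @default_filename, binary: base64_image_to_binary(base64_image)})
end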

How does tf.contrib.layers.bucketized_column work?

I am trying to figure out how to use
age = tf.contrib.layers.real_valued_column("age")
age_buckets = tf.contrib.layers.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
as described in https://www.tensorflow.org/tutorials/wide.
One example usage is given as a comment in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py.
Copying from that file:
age_column = real_valued_column("age")
bucketized_age_column = bucketized_column(
source_column=age_column,
boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
first_layer = input_from_feature_columns(...,
feature_columns=[..., bucketized_age_column])
second_layer = fully_connected(first_layer, ...)
# For linear models
predictions, _, _ = weighted_sum_from_feature_columns(...,
feature_columns=[..., bucketized_age_column])
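For intuition about what bucketized_column computes: each real value is mapped to one of len(boundaries)+1 bucket ids (values below the first boundary go to bucket 0, values at or above the last boundary go to the final bucket, and each interior bucket is left-inclusive), and the bucket id is then one-hot encoded for the layer. A minimal numpy sketch of that mapping, not the TF implementation:
import numpy as np

boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]

def bucketize(ages):
    # side='right' puts a value equal to a boundary into the higher
    # bucket, so bucket 1 is the half-open interval [18, 25), etc.
    return np.searchsorted(boundaries, ages, side='right')

ages = np.array([15, 18, 33, 64, 80])
ids = bucketize(ages)                       # -> [ 0  1  3  9 10]
one_hot = np.eye(len(boundaries) + 1)[ids]  # what the dense layer consumes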

KineticJS line points array contents

What is the interpretation of the values in the points assignment? All I want is a "plain jane" line where I can manipulate the length and width. Any help will be greatly appreciated.
var line = new Kinetic.Line({
x: 100,
y: 50,
points: [73, 70, 340, 23, 450, 60, 500, 20],
stroke: 'red',
tension: 1
});
The points array is a series of x,y coordinates:
// [73, 70, 340, 23, 450, 60, 500, 20],
{x:73,y:70},
{x:340,y:23},
{x:450,y:60},
{x:500,y:20}
This is your "plain jane" line:
// draw a black line from 25,25 to 100,50 (shifted down by y:100), 5px wide
var line = new Kinetic.Line({
  points: [25,25, 100,50],
  y: 100,
  stroke: 'black',
  strokeWidth: 5
});
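To manipulate the length and width afterwards, update the points and strokeWidth and redraw; a sketch assuming KineticJS v5's getter/setter style (older versions use setPoints/setStrokeWidth) and that the line was added to a layer named layer:
// lengthen the line by moving its end point, and thicken it
line.points([25, 25, 200, 50]);
line.strokeWidth(10);
layer.draw();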