Error when simplifying graph after converting graph to gdfs and then back to graph - osmnx

I ran into this problem when converting a graph to gdfs, uploading the gdfs into a Postgres/PostGIS database, and then downloading them and reconstructing the graph. I think (!?) I have reduced the issue so it can be recreated easily. Basically, I convert a graph to gdfs and then reconstruct the graph. Although NO errors occur when I create the graph from the gdfs, I get an error when I run some operations (e.g., simplify_graph) on the reconstructed graph. Here is a simple example:
G = ox.graph_from_place('Encinitas, CA', simplify=False, network_type='drive_service')
gdf_nodes, gdf_edges = ox.graph_to_gdfs(G, nodes=True, edges=True, node_geometry=True, fill_edge_geometry=True)
G_new = ox.graph_from_gdfs(gdf_nodes, gdf_edges)
G_new_simplified = ox.simplify_graph(G_new)
This returns the following error:
...\AppData\Local\Continuum\anaconda3\envs\bananas\lib\site-packages\osmnx\simplification.py", line 273, in simplify_graph
    elif len(set(path_attributes[attr])) == 1:
TypeError: unhashable type: 'LineString'
I get no error if I simplify the graph before converting to gdfs and back:
G = ox.graph_from_place('Encinitas, CA', simplify=False, network_type='drive_service')
G_simplified = ox.simplify_graph(G)
This suggests it has something to do with converting to gdfs and then back to a graph.
This is similar to this previous gdfs-to-graph-and-vice-versa question, but I am using the newest version of OSMnx (i.e., 1.1.2).
It might also be related to this other previous post, but I'm still struggling with some of the specifics in that answer and with how the graph class is constructed (especially with regard to edge attributes and their relation to path_attributes in the simplify function).
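Here is a minimal sketch of a possible workaround (an assumption on my part: the unhashable value is the per-edge LineString geometry that graph_from_gdfs attaches during the round trip, since simplify_graph puts attribute values into a set and LineStrings are unhashable):
# Drop the geometry attribute from every edge before simplifying;
# simplify_graph rebuilds edge geometries itself.
for _, _, data in G_new.edges(data=True):
    data.pop("geometry", None)
G_new_simplified = ox.simplify_graph(G_new)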

In MediaPipe, is it possible to see augmented landmarks rendered in real time?

So I am using the MediaPipe Holistic solution to extract keypoints from the body, hands, and face, and the data from this extraction works fine for my calculations. The problem is that I want to check whether my data augmentation works, but I am unable to see it in real time. An example of how the keypoints are extracted:
lh_arr = np.array([[result.x, result.y, result.z] for result in results.left_hand_landmarks.landmark]).flatten()
If I then do, let's say, lh_arr[10:15] * 2, I can't use this new data in the draw_landmarks function, as lh_arr is not of class 'mediapipe.python.solution_base.SolutionOutputs'. Is there a way to get draw_landmarks() to use an np array instead, or can I convert the np array back into the correct format? I have tried to get the flattened array back into a dictionary with the same format as results, but it did not work. Nor can I augment the results directly, as they are unsupported operand types.
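One possible approach (a sketch, not from the original post; image and lh_arr are placeholders matching the snippet above) is to rebuild a NormalizedLandmarkList protobuf from the array, since draw_landmarks() accepts that type:
import numpy as np
import mediapipe as mp
from mediapipe.framework.formats import landmark_pb2
mp_drawing = mp.solutions.drawing_utils
mp_holistic = mp.solutions.holistic
# lh_arr is the flattened (63,) hand array from above, possibly after augmentation;
# the values are assumed to still be normalized coordinates in [0, 1]
points = lh_arr.reshape(-1, 3)
landmark_list = landmark_pb2.NormalizedLandmarkList(
    landmark=[landmark_pb2.NormalizedLandmark(x=float(x), y=float(y), z=float(z))
              for x, y, z in points])
# image is the frame being annotated (hypothetical variable)
mp_drawing.draw_landmarks(image, landmark_list, mp_holistic.HAND_CONNECTIONS)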

RuntimeError: HDF5File - describe

I want to reproduce the code of Cross Modal Focal Loss (CVPR 2021), but I ran into some difficulties and I don't know where to find the solution. The difficulty is the following:
File "/data/run01/scz1974/chenjiawei/bob.paper.cross_modal_focal_loss_cvpr2021/src/bob.io.stream/bob/io/stream/stream_file.py", line 117, in get_stream_shape
descriptor = self.hdf5_file.describe(data_path)
RuntimeError: HDF5File - describe ('/HOME/scz1974/run/yanghao/fasdata/HQ-WMCA/MC-PixBiS-224/preprocessed/face-station/26.02.19/1_01_0064_0000_00_00_000-48a8d5a0.hdf5'): C++ exception caught: 'Cannot find dataset BASLER_BGR' at /HOME/scz1974/run/yanghao/fasdata/HQ-WMCA/MC-PixBiS-224/preprocessed/face-station/26.02.19/1_01_0064_0000_00_00_000-48a8d5a0.hdf5:''
The instructions assume that you have obtained the raw dataset, which has all the data channels. The preprocessed files contain only grayscale and SWIR differences. If you want to use grayscale and one of the SWIR differences as two channels, you can skip the preprocessing part as described in the documentation.
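To see which datasets a given preprocessed file actually contains (and confirm that BASLER_BGR is absent), a quick inspection is possible; this is a sketch using h5py rather than the bob.io API from the traceback, and the file path is a placeholder:
import h5py
with h5py.File("preprocessed_file.hdf5", "r") as f:
    # recursively print the name and type of every object in the file
    f.visititems(lambda name, obj: print(name, type(obj).__name__))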

How to use tensorflow's FFT?

I am having some trouble reconciling my FFT results from MATLAB and TF. The results are actually very different. Here is what I have done:
1) I would attach my data file here, but I didn't find a way to do so. Anyway, my data is stored in a .mat file, and the variable we will work with is called 'TD'. In MATLAB, I first subtract the mean of the data and then perform the FFT:
f_hat = TD-mean(TD);
x = fft(f_hat);
2) In TF, I use tf.math.reduce_mean to calculate the mean, and it differs from MATLAB's mean only on the order of 10^-8. So in TF I have:
mean_TD = tf.reduce_mean(TD)
f_hat_int = TD - mean_TD
f_hat_tf = tf.dtypes.cast(f_hat_int,tf.complex64)
x_tf = tf.signal.fft(f_hat_tf)
So up until f_hat and f_hat_tf, the difference is very slight and is caused only by the difference in the means. However, x and x_tf are very different. I am wondering: did I not use TF's FFT correctly?
Thanks!
[Picture showing the difference]
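One check worth running (not from the original post; TD below is a toy stand-in for the real signal) is whether the gap is simply single versus double precision: tf.complex64 carries 32-bit floats, while MATLAB's fft works in double precision by default. Comparing both TF precisions against NumPy's double-precision FFT isolates this:
import numpy as np
import tensorflow as tf
TD = np.random.randn(1024)            # hypothetical stand-in for the .mat variable
f_hat = TD - TD.mean()
x_np = np.fft.fft(f_hat)              # double-precision reference (matches MATLAB closely)
x_tf64 = tf.signal.fft(tf.cast(f_hat, tf.complex64)).numpy()    # single precision, as in the post
x_tf128 = tf.signal.fft(tf.cast(f_hat, tf.complex128)).numpy()  # double precision
print(np.abs(x_np - x_tf64).max())    # noticeable single-precision error
print(np.abs(x_np - x_tf128).max())   # should be near machine epsilon
If the complex128 result matches NumPy (and MATLAB) while the complex64 result does not, the FFT call itself is being used correctly and the difference comes from the cast to complex64.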

H2OTwoDimTable seems to be missing functionality

I discovered that I can get a collection of eigenvectors from glrm_model (an H2OGeneralizedLowRankEstimator model; sorry, I can't put this in the tags) this way:
EV = glrm_model._model_json["output"]['eigenvectors']
However, the type of EV is H2OTwoDimTable, which is not very capable.
If I try to do (where M is an H2O Data Frame):
M.mult(EV)
I get the error
AttributeError: 'H2OTwoDimTable' object has no attribute 'nrows'
If I try to convert EV to a numpy matrix:
EV.as_matrix()
I get the error:
AttributeError: 'H2OTwoDimTable' object has no attribute 'as_matrix'
I can convert EV to a pandas DataFrame, then convert that to a numpy matrix, and then do the matrix multiplication, but that is an extra step (see the sketch below).
IMHO, it would be better if the eigenvector reference returned an H2O data frame.
Also, it would be good if H2OTwoDimTable better supported matrix multiplication, as either a left or right operand.
And EV.as_data_frame() has no use_pandas=False option.
Here's the Python code, which could be modified to better support matrix-type operations:
https://github.com/h2oai/h2o-3/blob/master/h2o-py/h2o/two_dim_table.py
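A minimal sketch of that extra-step workaround (my own names; it assumes the eigenvector table converts to an all-numeric frame, dropping any label column first, and that the shapes align):
import numpy as np
ev_mat = np.array(EV.as_data_frame())   # TwoDimTable -> pandas -> ndarray
m_mat = np.array(M.as_data_frame())     # H2OFrame -> pandas -> ndarray
product = m_mat @ ev_mat                # plain NumPy matrix multiplication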
The "TwoDimTable" class is used to store lightweight tabular data in a model. I am agreement with you about using H2OFrames instead of TwoDimTables, but it's a design choice that was made a long time ago (can't change it now).
Since H2OFrames can contain non-numeric data, there is an .as_data_frame() method to from an H2OFrame or TwoDimTable to a Pandas DataFrame. So you can chain .as_data_frame().as_matrix() together to get a matrix (numpy.ndarray) if that's what you're looking for. Here's an example:
import h2o
from h2o.estimators.glrm import H2OGeneralizedLowRankEstimator
h2o.init()
data = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/glrm_test/cancar.csv")
# Train a GLRM model with recover_svd=True to keep the eigenvectors
glrm = H2OGeneralizedLowRankEstimator(k=4,
                                      transform="NONE",
                                      loss="Quadratic",
                                      regularization_x="None",
                                      regularization_y="None",
                                      max_iterations=1000,
                                      recover_svd=True)
glrm.train(x=data.names, training_frame=data)
# Get the eigenvector TwoDimTable from the model
EV = glrm._model_json["output"]['eigenvectors']
# Convert to various formats
evdf = EV.as_data_frame()               # pandas.core.frame.DataFrame
evmat = evdf.as_matrix()                # numpy.ndarray
# or directly
evmat = EV.as_data_frame().as_matrix()
If you're interested in adding a .as_matrix() method to the TwoDimTable class, you could submit a pull request on the h2o-3 repo for that. I think that would be a useful extension. There's more info about how to contribute to H2O in our contributing guide.

MXNET build model error on r

When I try to use mxnet to build a feedforward model, the following error appears:
Error in mx.io.internal.arrayiter(as.array(data), as.array(label), unif.rnds, :
basic_string::_M_replace_aux
I followed the R regression example on the mxnet website, but I changed the data to my own dataset, which contains 109 examples and 1876 variables. The first several steps ran without error until the model-building step. I just can't understand what the error message means. I wonder whether it is because of my dataset or the way I handled the data.
Can you provide the code snippet you are using? That would give more details on the issue. Also, any stack trace would be useful.
You get this error message mainly due to invalid column/row access or a shape (dimension) mismatch. Can you verify that you are using correct "index" values when creating the matrix? Let me know if this fixes the issue.
That said, MXNet could be better at printing details about the error in the stack trace. I have created an issue to follow up on this: https://github.com/dmlc/mxnet/issues/4206