I've checked out the newest code on the GitHub master branch of kepler.gl and I'm running the demo, but I don't see the H3HexGrid layer option. How do I use it? The Git commits seem to indicate it can be used in the demo app. Also, how do I load the sample data from here: sampleH3Data from './data/sample-hex-id-csv';
Thanks!
You will need a dataset containing an H3 hexagon id column. Kepler.gl looks for a column named hex_id or hexagon_id and automatically creates an H3 layer from it.
You can save /data/sample-hex-id-csv.js as a .csv file by removing the export default ` at the beginning of the file and the `; at the end. Then drag and drop it into the kepler.gl app to see how it works.
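If you'd rather script that conversion, here is a minimal sketch, assuming the file has the shape export default `<csv contents>`; (i.e. the CSV is wrapped in a single JavaScript template literal):

```python
# Convert a JS module of the form `export default `...`;` into plain CSV text.
# Assumption: the file contains exactly one template literal holding the CSV.

def js_module_to_csv(js_text: str) -> str:
    """Strip the surrounding `export default \`` prefix and trailing `\`;`."""
    start = js_text.index("`") + 1   # first backtick opens the template literal
    end = js_text.rindex("`")        # last backtick closes it
    return js_text[start:end].strip() + "\n"

sample = 'export default `hex_id,value\n89283082c2fffff,64\n`;'
print(js_module_to_csv(sample))  # -> "hex_id,value\n89283082c2fffff,64\n"
```

You can then write the returned string to a .csv file and drag it into the app.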
I'm running a simple penguin pipeline in interactive mode with a train/eval split. The Transform step runs, but I can't get the post_transform_statistics artifacts.
Inside the dedicated artifacts folder /tmp/tfx-penguin_custom_INTERACTIVE-nq5dn56x/Transform/post_transform_stats/5, I have just one FeaturesStats.pb, but not the subfolders Split-train and Split-eval with a FeaturesStats.pb inside each.
However, I do have those subfolders inside the artifacts dedicated to the transformed examples (/tmp/tfx-penguin_custom_INTERACTIVE-nq5dn56x/Transform/transformed_examples/5/).
Here is how I define the Transform component, explicitly providing the splits and setting disable_statistics=False:
transform = tfx.components.Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    disable_statistics=False,
    splits_config=transform_pb2.SplitsConfig(
        analyze=['train'], transform=['train', 'eval']),
    module_file=_transformer_module_file)
I went through the docstring and even the __init__ of the component (https://github.com/tensorflow/tfx/blob/master/tfx/components/transform/component.py); it seems there is nothing I have forgotten or mistaken, but I was puzzled to read the following comment, which points to a stats location I cannot find:
disable_statistics: If True, do not invoke TFDV to compute pre-transform
and post-transform statistics. When statistics are computed, they will
be stored in the `pre_transform_feature_stats/` and
`post_transform_feature_stats/` subfolders of the `transform_graph`
export.
For now, the workaround is to explicitly disable statistics in the Transform component and add a dedicated StatisticsGen component next to it that works on the transformed feature splits, but it would have been great to have per-split statistics inside the Transform component directly.
Thanks for any help
This is expected: the statistics generation inside Transform currently works on the entire transformed dataset, regardless of split/span.
To generate separate statistics for different splits, please use the StatisticsGen component.
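The workaround mentioned in the question can be sketched like this, as a pipeline-wiring fragment; it assumes the same interactive pipeline and component names as above, and follows the standard TFX component API (not tested against your exact pipeline):

```python
from tfx import v1 as tfx
from tfx.proto import transform_pb2

# Skip Transform's built-in (unsplit) statistics entirely.
transform = tfx.components.Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    disable_statistics=True,  # stats are computed separately below
    splits_config=transform_pb2.SplitsConfig(
        analyze=['train'], transform=['train', 'eval']),
    module_file=_transformer_module_file)

# StatisticsGen computes statistics per split (Split-train, Split-eval)
# from the transformed examples produced by Transform.
post_transform_stats = tfx.components.StatisticsGen(
    examples=transform.outputs['transformed_examples'])
```

Running both components in the interactive context should then give you one FeaturesStats.pb per split under the StatisticsGen artifact folder.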
Thank you!
I looked everywhere, but there aren't any guides or explanations of how to use QSkyboxEntity.
I created the entity and gave it a transform (set the translation and 3D scale). I also set the base name and extension.
When I try to run the program, it says:
"Qt3D.Renderer.OpenGL.Backend: Unable to find suitable Texture Unit for "skyboxTexture""
I checked several times and tried different PNG files, but no luck.
My image (I know it's fake transparency, but that shouldn't change anything, right?)
And here's part of the code:
Qt3DCore::QEntity *resultEntity = new Qt3DCore::QEntity;
Qt3DExtras::QSkyboxEntity *skyboxEntity = new Qt3DExtras::QSkyboxEntity(resultEntity);
skyboxEntity->setBaseName("skybox"); //I tried using path as well
skyboxEntity->setExtension("png");
Qt3DCore::QTransform *skyTransform = new Qt3DCore::QTransform(skyboxEntity);
skyTransform->setTranslation(QVector3D(0.0f,0.0f,0.0f));
skyTransform->setScale3D(QVector3D(0.1f,0.1f,0.1f));
skyboxEntity->addComponent(skyTransform);
Looks like it's not finding the skybox texture. Did you use an absolute path when you say "I tried using path as well"? The path you set is resolved relative to the build directory, i.e. not relative to where your C++ file lies.
Alternatively, you could put the images in a resource file and then load them using
"qrc:/[prefix]/[filename without extension]"
You can also check out the Qt3D manual SkyBox test here:
https://github.com/qt/qt3d/tree/dev/tests/manual/skybox
For the skybox to work, it's important to name the files properly: QSkyboxEntity looks for six cube-map faces named baseName plus the suffixes _posx, _negx, _posy, _negy, _posz, _negz (with the extension you set). Storing them in a resource file is recommended.
I recommend .tga, but other formats should work as well.
You can read about it here:
https://doc.qt.io/qt-6/qml-qt3d-extras-skyboxentity.html
And here's an example of how it should look.
I struggle to understand how, using one AudioContext, I would achieve the following:
I use createMediaStreamSource to create the source of the context – works.
I then connect a volume (gain) node to the source – works.
I then want to create TWO outputs: one is the "standard" output (the speakers), and the second would be used to feed into a MediaRecorder.
I struggle with the third point. How do I specify a different output than the speakers? Is that output still a stream I can feed into MediaRecorder?
From what you describe, I assume that you have some code that looks like this:
mediaStreamAudioSourceNode
.connect(gainNode) // your volume node
.connect(audioContext.destination);
You just need to add a MediaStreamAudioDestinationNode and use that as an additional output.
const mediaStreamAudioDestinationNode = new MediaStreamAudioDestinationNode(audioContext);
gainNode.connect(mediaStreamAudioDestinationNode);
const mediaRecorder = new MediaRecorder(mediaStreamAudioDestinationNode.stream);
I want to use a different language pair with the example provided on the TensorFlow website; the Google Colab notebook only uses Spanish-English:
https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/text/nmt_with_attention.ipynb
I tried changing the link to the spa-eng data that it downloads, but that didn't help.
How can I try a different language pair without setting up the notebook locally? It does mention at the end of that page that I can try a different language pair.
The final note on using a different dataset refers to this website which includes tab-delimited files.
You mainly need to change the values in this cell to point at the link of the zip file you need:
import os
import tensorflow as tf

# Download the file
path_to_zip = tf.keras.utils.get_file(
    'spa-eng.zip',
    origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
    extract=True)
path_to_file = os.path.dirname(path_to_zip) + "/spa-eng/spa.txt"
You can try other datasets from:
OPUS
WMT
However, in these corpora the source and target are in two separate files, so you have to adjust the code that extracts the pairs: instead of split('\t'), open the two files and read the source and target line by line.
I have been trying to export data from DataSift to a Google BigQuery dataset, but except for 4 empty rows, no other relevant data has been pushed.
I followed the instructions from this link: http://dev.datasift.com/docs/push/connectors/bigquery. I'm not sure if the CSDL code that I used is the cause.
For example I configured a stream using:
wikipedia.title contains "Audi".
The live preview has no output. Also, the only data sources that I've set as active are Interaction and Wikipedia.
Please let me know what the reason for this may be. At the end of every stream recording I don't see any changes, except the creation of the table mentioned in the destination, with 4 empty rows (some rows have null values, and interaction.type is ds_verify).
Thank you!