What replaces deprecated batch scene queries in NVIDIA PhysX 3.4?

Apparently, "The batched query feature has been deprecated in PhysX version 3.4". Does anyone know what replaces batch queries going forward?

Batched scene queries are only marked as deprecated and will be replaced by a new system in a future release.
So for the time being you can still use them.
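Until that replacement arrives, the 3.4-era PxBatchQuery path still works. A minimal sketch of the 3.x API from memory (buffer sizes and the single hard-coded ray are illustrative):

#include <PxPhysicsAPI.h>
using namespace physx;

static const PxU32 kMaxRaycasts = 32;

// User-owned memory that receives the batched results
PxRaycastQueryResult gRaycastResults[kMaxRaycasts];
PxRaycastHit         gRaycastHits[kMaxRaycasts];

void runBatchedRaycasts(PxScene* scene)
{
    PxBatchQueryDesc desc(kMaxRaycasts, 0, 0); // raycasts, sweeps, overlaps
    desc.queryMemory.userRaycastResultBuffer = gRaycastResults;
    desc.queryMemory.userRaycastTouchBuffer  = gRaycastHits;
    desc.queryMemory.raycastTouchBufferSize  = kMaxRaycasts;

    PxBatchQuery* batchQuery = scene->createBatchQuery(desc);

    // Queue queries first, then execute the whole batch at once
    batchQuery->raycast(PxVec3(0, 10, 0), PxVec3(0, -1, 0), 100.0f);
    batchQuery->execute();

    if (gRaycastResults[0].hasBlock)
    {
        const PxRaycastHit& hit = gRaycastResults[0].block;
        // ... use hit.position, hit.actor, etc.
    }

    batchQuery->release();
}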

Related

BigTable to TensorFlow 2.x - Any connector?

Is there a direct BigTable connector for TensorFlow 2.x?
I haven't found any clear trace of such a package in the master branches of the tensorflow and tensorflow_io GitHub repos.
Thanks!
Bigtable support was added in tfio v0.5.0 but removed in v0.13.0. Here's the note:
BigTable has been broken from the beginning and was never maintained.
In addition, there is a lot of code duplication, as it implements 6 datasets
in C++ with very minimal differences. Due to recent changes in the upstream
TF API, it gets harder and harder to even make it compile.
This PR removes BigTable from the code tree. The plan is to add BigTable back in the future
with the right implementation, ideally utilizing the better-maintained Google Cloud C++ Bigtable API: https://github.com/googleapis/google-cloud-cpp/tree/master/google/cloud/bigtable
Check their commit log: https://github.com/tensorflow/io/commit/f08f7954631cd13b3ace059dbc05f0b71dcd857d
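Until Bigtable support returns to tensorflow-io, one workaround is to read rows with the google-cloud-bigtable client and feed them through tf.data. A minimal sketch, with hypothetical project/instance/table/column names:

import tensorflow as tf
from google.cloud import bigtable

# Hypothetical identifiers -- replace with your own
client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("my-table")

def row_generator():
    # Stream rows and yield one cell value (bytes) per row
    for row in table.read_rows():
        yield row.cells["cf1"][b"feature"][0].value

dataset = tf.data.Dataset.from_generator(
    row_generator,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.string),
)

for value in dataset.take(5):
    print(value)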

DM script/command for centerZLP under EFTEM mode

Is there any command that does something similar to the centerZLP function in EFTEM mode?
Or perhaps, would setting the energy loss to zero also work?
Thanks
From GMS 3.2 onward there should be a GIF tuning command that represents that functionality. It is not part of the officially documented script API though.
Number GT_CenterZLP()
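A minimal usage sketch in DM script; since the command is undocumented, treating its return value as a success flag is an assumption:

// Call the undocumented GIF tuning command (GMS 3.2+)
Number ok = GT_CenterZLP()
// Assumption: the return value indicates success
Result( "GT_CenterZLP returned: " + ok + "\n" )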

Parallel Write Operations in GraphDB Free vs. Standard Edition

I am trying to run several SPARQL queries in parallel in Ontotext GraphDB. The queries are identical except for the named graph they read from. I've attempted a multithreading solution in Scala to launch 3 queries against the database in parallel.
The issue is that I am using the Free Edition of GraphDB, which only supports a single core for write operations. This seems to mean that the queries which are supposed to run in parallel simply queue up for the single core. In my test, the first query completed 41,145 operations in 12 seconds while no operations completed on the other two; once the first query finishes, the second runs to completion, and then the third.
I understand this is likely expected behavior for the Free Edition. My question is: will upgrading to the Standard Edition fix this and allow the queries to actually run in parallel? From the documentation I've looked at, it seems the Standard Edition can use multiple cores for write operations. However, I also saw something implying that a single write query launched against the Standard Edition would automatically be processed over multiple cores, which might make the multithreading approach obsolete anyway.
Does anyone with experience launching parallel write operations against GraphDB want to weigh in?
You can find the difference in the official GraphDB 9.1 benchmark statistics page:
http://graphdb.ontotext.com/documentation/standard/benchmark.html.
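For what it's worth, the client-side launch pattern stays the same in either edition; only the server-side parallelism differs. A minimal Scala sketch of firing the updates in parallel over RDF4J (the repository URL, graph names, and update template are hypothetical):

import org.eclipse.rdf4j.query.QueryLanguage
import org.eclipse.rdf4j.repository.http.HTTPRepository
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object ParallelUpdates extends App {
  // Hypothetical GraphDB endpoint and repository id
  val repo = new HTTPRepository("http://localhost:7200/repositories/myRepo")
  repo.init()

  val graphs = Seq("urn:graph:a", "urn:graph:b", "urn:graph:c")

  // One Future per named graph, each with its own connection
  val jobs = graphs.map { g =>
    Future {
      val conn = repo.getConnection
      try {
        val update =
          s"INSERT { GRAPH <urn:graph:target> { ?s ?p ?o } } WHERE { GRAPH <$g> { ?s ?p ?o } }"
        conn.prepareUpdate(QueryLanguage.SPARQL, update).execute()
      } finally conn.close()
    }
  }

  jobs.foreach(Await.result(_, Duration.Inf))
  repo.shutDown()
}

On the Free Edition these three updates will still be serialized server-side; the client code only gains anything if the edition allows parallel writes.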

Which model should I use for TensorFlow (contrib or models)?

For example, if I want to use resnet_v2, there are two model files in TensorFlow:
one is here, another is here. Many TensorFlow models exist both in models/research and in tensorflow/contrib.
I am very confused: which model is better? Which one should I use?
In general, tf.contrib contains contributed code mostly by the community. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance.
The code in tf.contrib isn't supported by the Tensorflow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees.
The models/research folder contains machine learning models implemented by researchers in TensorFlow; they are maintained by their respective authors, not by the TensorFlow team.
The official models, on the other hand, are supported by the TensorFlow team and are generally preferred, as they have a lower chance of being deprecated in future releases. If a model is implemented in both places, you should generally avoid the contrib version with future compatibility in mind. That said, the community does some awesome work there, so you may find models that aren't in the main repository and are worth using directly from contrib.
Also note the phrase "generally avoid": the right choice is somewhat application-dependent.
Hope that answers your question; comment if anything is unclear.
With TensorFlow 2.0 (coming soon), tf.contrib will be removed.
Therefore, you should start using models/research if you want your project to stay up to date and keep working in the coming months.
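To make the difference concrete, here is a sketch of loading resnet_v2 both ways under TF 1.x; the second import assumes you have cloned tensorflow/models and added research/slim to your PYTHONPATH:

import tensorflow as tf
import tensorflow.contrib.slim as slim

# Deprecated path: removed entirely in TF 2.0
from tensorflow.contrib.slim.nets import resnet_v2 as contrib_resnet_v2

# Preferred path: the slim implementation from the tensorflow/models repo
# (assumes models/research/slim is on your PYTHONPATH)
from nets import resnet_v2

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(resnet_v2.resnet_arg_scope()):
    net, end_points = resnet_v2.resnet_v2_50(
        inputs, num_classes=1001, is_training=False)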

Does Vulkan have a TransformFeedback equivalent

Does Vulkan have support for capturing the vertices output by a pipeline stage? I've been looking but can't find any examples or references; does anyone know otherwise?
Transform Feedback didn't make the cut for the initial Vulkan release, and there is no direct equivalent to it.
So you have to do it yourself, e.g. by writing to an SSBO from a geometry shader using primitive IDs, or by going with compute shaders.
Note that the geometry shader version might not work on all devices, as it requires support for the vertexPipelineStoresAndAtomics feature.
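A minimal sketch of the geometry shader approach in GLSL (the set/binding and buffer layout are illustrative, not from the original answer):

#version 450
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// Destination for captured vertices; binding is illustrative
layout(std430, set = 0, binding = 0) buffer CaptureBuffer {
    vec4 capturedPositions[];
};

void main() {
    for (int i = 0; i < 3; ++i) {
        // gl_PrimitiveIDIn gives each input primitive a unique slot
        capturedPositions[gl_PrimitiveIDIn * 3 + i] = gl_in[i].gl_Position;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}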
Update
Support for transform feedback has since been made available through the VK_EXT_transform_feedback extension, introduced with spec version 1.1.88.