TensorBoard seems to select arbitrarily which nodes belong to the main graph and which do not. During graph visualization I can manually add/remove nodes, but it is tedious to redo this every run.
Is there a way to programmatically embed this information (which nodes belong on the main graph) when writing the graph summary?
According to this GitHub issue, it's not feasible at the moment.
And judging by the following quote and the status of the issue (contributions welcome), it's not something to be expected in the short term:
Thanks @lightcatcher for the suggestion. @danmane, please take a look
at this issue. If it is something we will not do in the short-term
maybe mark it contributions welcome. If it is something you are
planning to include in your plugin API anyways, please close the issue
to keep the backlog clear.
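In the meantime, a partial workaround: TensorFlow name scopes don't let you pin nodes to the main graph, but they do control how TensorBoard groups nodes into collapsible boxes, which cuts down the manual cleanup per run. A minimal sketch using the TF 1.x API (the scope names here are just examples):

import tensorflow as tf

# Name scopes group ops into collapsible boxes in TensorBoard's graph
# view. This doesn't force nodes onto the main graph, but it keeps the
# rendered layout organized across runs.
with tf.name_scope("inputs"):
    x = tf.placeholder(tf.float32, [None, 10], name="x")
with tf.name_scope("model"):
    w = tf.Variable(tf.zeros([10, 1]), name="w")
    y = tf.matmul(x, w, name="y")

with tf.Session() as sess:
    # Write the graph so TensorBoard can visualize it.
    tf.summary.FileWriter("./logs", sess.graph).close()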
I am trying to develop a model which detects the items that are picked up by a user from a basket. Is this achievable using TensorFlow? My doubt is: since the basket would contain the same items the user picks up (say fruits), is it possible to report the product in the user's hand (the products picked up by the user) in real time, rather than the items in the basket? Please advise on what would be a good starting point to achieve this.
I have read and watched general object detection methods using TensorFlow and various models, but nothing seems to deal with a similar solution, or I am unable to relate them to my case. If there are any tutorials on achieving this, links would be even more helpful. Thanks in advance. Please bear with me if my question is naive; I am still a newbie at ML and TensorFlow.
It is achievable using TensorFlow. First consider how you would approach the problem: since you need to distinguish between objects inside the basket, you will need a neural network that does some basic classification and object detection.
Here are some links to get you started:
https://www.tensorflow.org/tutorials/keras/basic_classification
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/
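As a minimal sketch of the classification piece only (not the full detection pipeline), assuming you have already cropped the hand region out of each frame and labeled some examples per product, a small tf.keras classifier could look like this; the class count and input size are placeholders:

import tensorflow as tf

# Tiny image classifier (tf.keras). Assumes cropped 64x64 RGB images of
# the item in the user's hand, with one integer label per product class.
num_classes = 5  # hypothetical number of products
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # your labeled data here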
Good luck!
I have a task: to determine the sound source location.
I have some experience working with TensorFlow, creating predictions on simple features and datasets. I assume that this task would require analyzing the sound frequencies and probably other related data during the training and prediction steps. The sound comes through a headset, so the human ear is able to detect the direction.
1) Has somebody already done this? (Unfortunately, I couldn't find any similar project.)
2) What kinds of caveats might I run into while trying to achieve this?
3) Can I do this with this technology/approach? Are there any other sound-processing frameworks, technologies, or open source projects that could help?
I am asking here since my research on Google, GitHub, and Stack Overflow didn't turn up any relevant results on this specific topic, so any help is highly appreciated!
This is typically done with more traditional DSP and multiple sensors. You might want to look into time difference of arrival (TDOA) and direction of arrival (DOA). Algorithms such as GCC-PHAT and MUSIC will be helpful.
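As a rough sketch of the GCC-PHAT idea (a common formulation; the function names and parameters here are mine, not from any specific library), estimating the TDOA between two microphone signals and converting it to a far-field angle:

import numpy as np

def gcc_phat(sig, ref, fs, interp=16):
    # Generalized cross-correlation with phase transform (GCC-PHAT):
    # whiten the cross-spectrum so only phase (i.e. delay) information
    # remains, then find the peak of the inverse transform.
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15                 # PHAT weighting
    cc = np.fft.irfft(R, n=interp * n)     # upsampled for sub-sample accuracy
    max_shift = interp * n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)      # TDOA in seconds

def doa_angle(tau, mic_distance, c=343.0):
    # Far-field DOA for a two-microphone array, in degrees.
    return np.degrees(np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0)))

MUSIC takes a different, subspace-based route and copes better with multiple simultaneous sources.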
Issues that you might encounter: DOA accuracy is a function of the direct-to-reverberant ratio of the source, i.e. the more reverberant the environment, the harder it is to determine the source location.
Also, you might want to consider the number of location dimensions you want to resolve. A point in 3D space is much more difficult to resolve than a direction relative to the sensors.
Using ML as an approach to this is not entirely without merit, but you will have to consider what it is you would be learning, i.e. you probably don't want to learn the test room's reverberant properties but instead the sensors' spatial properties.
Is it possible to customise TensorBoard with our own buttons, sliders and colours to create a sort of web application?
Thanks!
Yes, you are able to do this by creating a TensorBoard plugin. This blog post can give you a good idea of what capabilities you can add via a plugin. You can follow this tutorial to get started.
Broadly speaking, the three parts of a TensorBoard plugin are:
A summary op that gathers the data you need from the TensorFlow session.
A post-processing python script that serves that data to the web client.
Front-end code to display and interact with the data.
As it sounds like your interests mostly pertain to the presentation, it is likely that you can use data already gathered by TensorFlow, and steps 1 and 2 may be very small or nonexistent in your case.
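For a sense of scale, step 1 in its simplest form is just an ordinary summary op. A minimal sketch using the TF 1.x summary API (the tag and value here are placeholders):

import tensorflow as tf

# Step 1 in miniature: record a value from the session and write it
# somewhere TensorBoard (or a plugin backend) can read it.
loss = tf.placeholder(tf.float32, shape=(), name="loss")
tf.summary.scalar("loss", loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("./logs", sess.graph)
    summary = sess.run(merged, feed_dict={loss: 0.42})
    writer.add_summary(summary, global_step=0)
    writer.close()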
The documentation for TensorBoard plugins has moved here.
I understand that the Kinect uses some predefined skeleton model to return the skeleton based on the depth data. That's nice, but it will only allow you to get a skeleton for people. Is it possible to define a custom skeleton model? For example, maybe you want to track your dog while he's doing something. So, is there a way to define a model with four legs, a tail and a head, and to track it?
Short answer: no. Using the Microsoft Kinect for Windows SDK's skeleton tracker, you are stuck with the one they give you. There is no way to inject a new set of logic or rules.
Long answer: sure. You are not able to use the pre-built skeleton tracker, but you can write your own. The skeleton tracker uses the depth data to determine where a person's joints are. You could take that same data and process it against a different skeleton structure.
Microsoft does not provide access to all the internal functions that process and output the human skeleton, so we would be unable to use it as any type of reference for how the skeleton is built.
In order to track anything but a human skeleton you'd have to rebuild it all from the ground up. It would be a significant amount of work, but it is doable... just not easily.
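To make "a different skeleton structure" concrete, a custom skeleton is essentially just a joint hierarchy that you would then fit to the depth data yourself. A purely illustrative sketch (the joint names and fitting stub are hypothetical, not part of any Kinect API):

import numpy as np

# Illustrative only: a custom skeleton as a joint-to-parent hierarchy.
# Fitting it to real depth data is the hard part the SDK won't do for you.
DOG_SKELETON = {
    "spine": None,              # root joint
    "head": "spine",
    "tail": "spine",
    "front_left_leg": "spine",
    "front_right_leg": "spine",
    "back_left_leg": "spine",
    "back_right_leg": "spine",
}

def estimate_joint_position(point_cloud, joint):
    # Stub: a real implementation would segment the cloud and fit the
    # joint; here we just return the cloud centroid as a placeholder.
    return point_cloud.mean(axis=0)

def fit_skeleton(point_cloud, skeleton):
    # Estimate a 3D position for every joint in the hierarchy.
    return {joint: estimate_joint_position(point_cloud, joint)
            for joint in skeleton}

cloud = np.random.rand(1000, 3)  # fake depth-derived point cloud
print(fit_skeleton(cloud, DOG_SKELETON))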
You can learn a bit about this subject by looking at the Face Tracking DLL example among the SDK samples:
http://www.microsoft.com/en-us/kinectforwindows/develop/
We currently have a dynamically updated network graph with around 1,500 nodes and 2,000 edges. It's ever-growing. Our current layout engine uses Prefuse - the force directed layout in particular - and it takes about 10 minutes with a hefty server to get a nice, stable layout.
I've looked a little at Graphviz's sfdp algorithm, but haven't tested it yet...
Are there faster alternatives I should look at?
I don't care about the visual appearance of the nodes and edges - we process that separately - just putting x, y on the nodes.
We do need to be able to tinker with the layout properties for specific parts of the graph, for instance, applying special tighter or looser springs for certain nodes.
Thanks in advance, and please comment if you need more specific information to answer!
EDIT: I'm particularly looking for speed comparisons between the layout engine options. Benchmarks, specific examples, or just personal experience would suffice!
I wrote a JavaScript-based graph drawing library, VivaGraph.js.
It calculates the layout and renders a graph with 2K+ vertices and 8.5K edges in ~10-15 seconds. If you don't need the rendering part, it should be even faster.
Here is a video demonstrating it in action: WebGL Graph Rendering With VivaGraphJS.
An online demo is available here. WebGL is required to view the demo, but it is not needed to calculate graph layouts. The library also works under node.js and thus could be used as a service.
Example of API usage (layout only):
var graph = Viva.Graph.graph(),
    layout = Viva.Graph.Layout.forceDirected(graph);

graph.addLink(1, 2);
layout.run(50); // runs 50 iterations of graph layout

// print results:
graph.forEachNode(function(node) { console.log(node.position); });
Hope this helps :)
I would have a look at OGDF, specifically http://www.ogdf.net/doku.php/tech:howto:frcl
I have not used OGDF, but I do know that the Fast Multipole Multilevel method (FMMM) is a highly performant algorithm, and with the runtimes involved in force-directed layout at the number of nodes you have, that matters a lot.
Why, among other reasons, that algorithm is awesome: the fast multipole method. It approximates the pairwise (n-body) force computation that dominates each layout iteration, reducing the naive O(n²) cost to roughly O(n log n) with only a small loss of accuracy. Ideally, you'd have code from something like this: http://mgarland.org/files/papers/layoutgpu.pdf but I can't find it anywhere; maybe a CUDA solution isn't up your alley anyway.
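To see what's being approximated, here is the naive pairwise repulsion step in numpy (a sketch of my own for illustration, not OGDF code):

import numpy as np

def repulsive_forces_naive(pos):
    # O(n^2) pairwise repulsion -- the n-body step that multipole and
    # Barnes-Hut style methods approximate to speed up each iteration.
    diff = pos[:, None, :] - pos[None, :, :]             # (n, n, 2) displacements
    dist2 = (diff ** 2).sum(axis=-1) + np.eye(len(pos))  # eye() avoids divide-by-zero on the diagonal
    return (diff / dist2[..., None]).sum(axis=1)         # inverse-distance repulsion per node

# At n = 1500 nodes that's over 2 million pairs per iteration, which is
# why the asymptotics dominate total layout time.
pos = np.random.rand(1500, 2)
forces = repulsive_forces_naive(pos)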
Good luck.
The Gephi Toolkit might be what you need: some layouts are very fast yet of good quality: http://gephi.org/toolkit/
30 seconds to 2 minutes is enough to lay out such a graph, depending on your machine.
You can use the ForceAtlas layout or the Yifan Hu Multilevel layout.
For very large graphs (50K+ nodes and 500K links), the OpenOrd layout will be the better choice.
In a commercial scenario, you might also want to look at the family of yFiles graph layout and visualization libraries.
Even the JavaScript version can perform layouts for thousands of nodes and edges using different arrangement styles. The "organic" layout style is an implementation of a force-directed layout algorithm, similar in nature to the one used in Neo4j's browser application. But there are many more layout algorithms available that can give better visualizations for certain types of graph structures and diagrams. Depending on the settings and the structure of the problem, some of the algorithms take only seconds, while more complex implementations can also bring your JavaScript engine to its knees. The Java- and .NET-based variants still perform quite a bit better, as of today, but the JavaScript engines are catching up.
You can play with these algorithms and settings in this online demo.
Disclaimer: I work for yWorks, which is the maker of these libraries, but I do not represent my employer on SO.
I would take a look at http://neo4j.org/ - it's open source, which is beneficial in your case since you can customize it to your needs. The GitHub account can be found here.