I'm facing a problem with modelling diffusion in Dymola.
I want to have two separate volumes (filled with air), which can be joined and thus exchange heat via diffusion.
My approach was to use the Modelica.Fluid library and connect two ClosedVolumes with a Valve.
But as I found out, this library doesn't account for diffusion.
What would be the best way to accomplish such a model?
This limitation is due to the use of stream connectors in the Modelica.Fluid library.
One way to solve this is to develop a fluid connector that does not rely on stream connectors but only on potential and flow variables. Unfortunately, in that case you will have to deal with the numerical problems of flow reversal and the zero-flow singularity yourself.
One example is described in the paper "A physical solution for solving the zero-flow singularity in static thermal-hydraulics mixing models", presented at the Modelica Conference 2014. Basically, adding diffusion helps resolve the zero-flow singularity, and the authors use a regularized step function to handle flow reversal. Other regularization functions can be found in Modelica.Fluid.Utilities.
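To illustrate, here is a minimal Python sketch of a regularized step function modeled on Modelica.Fluid.Utilities.regStep (the Python translation is mine; in an actual Modelica model you would call the library function directly):

def reg_step(x, y1, y2, x_small=1e-5):
    # C1-continuous approximation of (y1 if x > 0 else y2),
    # following the polynomial blend used by Modelica.Fluid.Utilities.regStep.
    if x > x_small:
        return y1
    if x < -x_small:
        return y2
    xn = x / x_small  # normalized position inside the transition band
    return xn * (xn * xn - 3.0) * (y2 - y1) / 4.0 + (y1 + y2) / 2.0

Such a blend lets the model switch smoothly between upstream and downstream property values around zero flow instead of hitting a singularity.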
Hope this helps,
Best regards.
I am trying to understand the basic differences between TensorFlow's MirroredStrategy and Horovod's distribution strategy.
From the documentation and from investigating the source code, I found that Horovod (https://github.com/horovod/horovod) uses the Message Passing Interface (MPI) to communicate between multiple nodes. Specifically, it uses MPI's all_reduce and all_gather operations.
From my observation (I may be wrong), MirroredStrategy also uses an all_reduce algorithm (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute).
Both of them use a data-parallel, synchronous training approach.
So I am a bit confused about how they differ. Is the difference only in the implementation, or are there other (theoretical) differences?
And how does the performance of MirroredStrategy compare to Horovod?
MirroredStrategy has its own all_reduce algorithm, which uses remote procedure calls (gRPC) under the hood.
As you mentioned, Horovod uses MPI/GLOO to communicate between multiple processes.
Regarding performance, one of my colleagues performed experiments using 4 Tesla V100 GPUs, using the code from here. The results suggested that 3 settings work best: replicated with all_reduce_spec=nccl, collective_all_reduce with a properly tuned allreduce_merge_scope (e.g. 32), and horovod. I did not see significant differences among these 3.
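For reference, here is a minimal sketch of what the two setups look like in user code (the toy model and optimizer are placeholders of mine, not from the experiments above). The practical difference is that MirroredStrategy stays inside a single process, while Horovod wraps the optimizer and is launched as one process per GPU:

import tensorflow as tf

# Option 1: MirroredStrategy -- single process, one replica per local GPU.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
# model.fit(...) then runs synchronous all-reduce training across the GPUs.

# Option 2: Horovod -- launched as one process per GPU (e.g. via horovodrun).
import horovod.tensorflow.keras as hvd
hvd.init()
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
# hvd.DistributedOptimizer averages the gradients across processes
# with MPI/NCCL all-reduce before each weight update.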
I'm currently working on a large-scale timetabling problem from my university. I'm using CPLEX to create the model and solve it, but due to its size and processing time, I'm considering trying out a local search algorithm like a GA to solve it. However, I'm lost on how to do that properly. Is there a way of applying a local search to it without having to reformulate the whole model?
One possible way to tackle your problem is to use CPLEX callbacks.
You may implement a heuristic callback. In this callback, you can run your GA within the CPLEX model and use it to find a feasible solution (which I think is very difficult in many timetabling problems) or to improve your current solution.
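As a rough sketch using the legacy cplex Python API (my_genetic_algorithm is a hypothetical placeholder for your GA, and the exact argument format of set_solution is worth checking against the admipex2.py example shipped with CPLEX):

import cplex

class GAHeuristic(cplex.callbacks.HeuristicCallback):
    def __call__(self):
        # Current (possibly fractional) relaxation values, usable as a GA seed.
        relaxation = self.get_values()
        # Hypothetical helper: run your GA and return a feasible
        # integer assignment for all variables, or None.
        candidate = my_genetic_algorithm(relaxation)
        if candidate is not None:
            indices = list(range(len(candidate)))
            self.set_solution([indices, candidate])

cpx = cplex.Cplex("timetable.lp")  # assumed: your exported model
cpx.register_callback(GAHeuristic)
cpx.solve()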
I have a task: to determine the sound source location.
I have had some experience working with TensorFlow, creating predictions from some simple features and datasets. I assume that for this task it would be necessary to analyze the sound frequencies and probably other related data during the training and prediction steps. The sound comes from a headset, so the human ear is able to detect the direction.
1) Has somebody already done this? (Unfortunately, I couldn't find any similar project.)
2) What kind of caveats could I run into while trying to achieve this?
3) Can I do this at all with this technology? Are there any other sound processing frameworks / technologies / open source projects that could help me?
I am asking here because my research on Google, GitHub, and Stack Overflow didn't turn up any relevant results on this specific topic, so any help is highly appreciated!
This is typically done with more traditional DSP and multiple sensors. You might want to look into time difference of arrival (TDOA) and direction of arrival (DOA). Algorithms such as GCC-PHAT and MUSIC will be helpful.
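To give you a feel for GCC-PHAT, here is a minimal NumPy sketch that estimates the time delay between two microphone signals (the function name and parameters are my own; the direction then follows from the delay and the microphone spacing):

import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    # Estimate the delay (in seconds) of sig relative to ref.
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    # PHAT weighting: keep only the phase of the cross-spectrum.
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Reorder so that lag 0 sits in the middle of the window.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)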
Issues that you might encounter: DOA accuracy is a function of the direct-to-reverberant ratio of the source, i.e. the more reverberant the environment, the harder it is to determine the source location.
You might also want to consider the number of location dimensions you want to resolve. A point in 3D space is much more difficult to resolve than a direction relative to the sensors.
Using ML as an approach to this is not entirely without merit, but you will have to consider what it is you would be learning, i.e. you probably don't want to learn the test room's reverberant properties but rather the sensors' spatial properties.
I'm currently using a Processing Kinect library which supplies a depth map. I was wondering how I could take that and use it to create a 2D skeleton, if possible. Not looking for any code here, just a general process I could use to achieve those results.
Also, given that we've seen this in several of the Kinect games so far, would it be difficult to have multiple skeletons running at once?
Disclaimer: the reason you haven't gotten an answer to this question yet is probably that it's a current research problem. So I can't give you a direct answer, but I will try to help with some information and useful resources on this topic.
There are mainly two different approaches to creating a skeleton from a depth map: the first uses machine learning, the second is purely algorithmic.
For the machine learning one, you'd need many samples of people doing a predetermined move and use those samples to train your favorite learning algorithm. That's the approach that was taken and implemented by Microsoft in the Xbox (source); it works really well, BUT you need millions of samples to make it reliable... quite a drawback.
The "algorithmic" approach (understand without using a training set) can be done in many different ways and is a research problem. It's often based on modeling the possible body postures and trying to match that with the depth image received. That's the approach that was chosen by PrimeSense (the guys behind the kinect depth camera technology) for their skeleton tracking tool NITE.
The OpenKinect community maintains a wiki where they list some interesting research material about this topic. You might also be interested in this thread on the OpenNI mailing list.
If you're looking for an implementation of a skeleton tracking tool, PrimeSense released NITE (closed source), the one they made: it's part of the OpenNI framework. That's what's used in most of the videos you might have seen that involve skeleton tracking. I think it's able to handle up to 2 skeletons at the same time, but that requires confirmation.
The best solution is to use FAAST (http://projects.ict.usc.edu/mxr/faast/), which requires OpenNI. I have struggled to get OpenNI to work on my computer, though. I have not yet seen an approach using Code Laboratories' CL NUI.
An algorithmic approach is http://code.google.com/p/skeletonization/, but you may run into problems because your depth map only represents surfaces, not closed objects.
I guess those who have worked with communities and social networks might have some experience with this.
I am trying to plot a graph of all the friendships that exist on my site and, in doing so, identify clusters of strongly interconnected users.
Does anyone have any experience doing something like this? Also, does SQL Server 2008 BI have tools that allow for this type of modelling?
Thanks
Chapter 5 of Programming Collective Intelligence is dedicated to optimization and network visualization. Using the modules available here and the snippet below, I could produce the following image:
>>> import optimization
>>> import socialnetwork
>>> # Anneal the node layout to minimize the number of crossing edges (crosscount)
>>> sol = optimization.annealingoptimize(socialnetwork.domain, socialnetwork.crosscount, step=50, cool=0.99)
>>> socialnetwork.drawnetwork(sol)
The advantage of this approach is that you can easily change the cost function, use different optimization algorithms, or use another library to view the solution.
Take a look at neato from the Graphviz command line tool suite. As input it takes a so-called .dot file. The format is straightforward: you should just be able to iterate over all friendship relations in your system and write them into the file, as in the sketch below.
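A minimal Python sketch (the friendships list stands in for whatever query returns your friendship pairs):

friendships = [("alice", "bob"), ("bob", "carol"), ("alice", "carol")]

with open("friends.dot", "w") as f:
    f.write("graph friendships {\n")
    for a, b in friendships:
        f.write('  "%s" -- "%s";\n' % (a, b))
    f.write("}\n")

You can then render the layout with: neato -Tpng friends.dot -o friends.png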
For inspiration, take a look at the social graphs in the "Visual Complexity" collection.
Many of the visualizations have explanatory papers and articles mentioning the graphing tools, libraries, and algorithms used to obtain the images; see in particular the examples in its "Social Networks" category.
Your graph will probably be reasonably large, so GraphViz is a poor choice: it does a nice job on tiny graphs, but not on huge ones. I'd recommend that you try aiSee instead (here are some example graphs). It requires graphs to be specified in a simple human-readable format called GDL.
Sample social network: http://www.aisee.com/graph_of_the_month/pubmed5.gif (source: aisee.com)
For visualization, have a look at the JavaScript InfoVis Toolkit.
You might take a look at the Girvan-Newman algorithm, whose output gives you an idea of the community structure in the form of a dendrogram.
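If you happen to work in Python, NetworkX ships an implementation (NetworkX is my suggestion here, not something the original answer names):

import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()  # classic toy social network
splits = girvan_newman(G)   # iterator over successively finer partitions
first_partition = next(splits)
print([sorted(community) for community in first_partition])

Each step removes the highest-betweenness edges, so iterating further down the dendrogram yields smaller, more tightly knit communities.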
You should look at Mark Shepherd's SpringGraph, which is a neat and sexy way of showing big graphs.
Please take a look at the prefuse visualization toolkit.
Check out the Wikipedia article on social networks, which talks about social network analysis and graphing relations between users. I think the basic idea is that you use a graph to map all the relations, and the more shared relations there are, the stronger the interconnection.