Is it possible to model a pipe immersed in a fluid cavity using the OpenModelica standard library?

I would like to model a pipe immersed in a fluid cavity to study the heat transfer between the two fluids. I modeled this by using two DynamicPipes connected to the same WallConstProps, but I'm not sure this is the correct way to model it. My question is: is there a specific component available in the MSL to model such a configuration, or should I look in other libraries?
Best regards,
Maxime

There is no such component for the heat transfer of a pipe to a surrounding fluid in the Modelica Standard Library, as far as I know. If you only need heat transfer orthogonal to the flow direction through the wall, then modeling both fluids as pipes connected via a heat transfer element is a reasonable approximation. You can create your own heat transfer model, based e.g. on a Nusselt correlation, to model the heat transfer from the second pipe to the surrounding fluid.
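For illustration, here is a minimal textual sketch of that idea, assuming both fluids can be represented by DynamicPipes and the wall is lumped into one thermal conductor per segment. The conductance G is a placeholder value; replace it with WallConstProps or your own Nusselt-based heat transfer model, and note that sources and sinks are omitted:

    model ImmersedPipeSketch
      "Sketch: inner pipe thermally coupled to a cavity pipe through a wall"
      replaceable package Medium = Modelica.Media.Water.StandardWater;

      // Pipe carrying the first fluid
      Modelica.Fluid.Pipes.DynamicPipe innerPipe(
        redeclare package Medium = Medium,
        use_HeatTransfer = true,
        nNodes = 5,
        length = 1,
        diameter = 0.02);

      // Second pipe standing in for the fluid cavity
      Modelica.Fluid.Pipes.DynamicPipe cavityPipe(
        redeclare package Medium = Medium,
        use_HeatTransfer = true,
        nNodes = 5,
        length = 1,
        diameter = 0.1);

      // One conductor per segment; G = 50 W/K is a made-up value
      Modelica.Thermal.HeatTransfer.Components.ThermalConductor wall[5](each G = 50);
    equation
      connect(innerPipe.heatPorts, wall.port_a);
      connect(wall.port_b, cavityPipe.heatPorts);
      // Boundary conditions for both flow paths omitted for brevity
    end ImmersedPipeSketch;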
The MSL offers basic components that provide a common basis for all Modelica users and serve as a starting point. Specific applications can be covered by dedicated commercial or open-source libraries.

Related

Question about convection heat transfer in the DynamicPipe of the Modelica Standard Library Fluid package

So I'm new to the tool OpenModelica, and I'm trying to simulate the flow of a fluid in a pipe with heat transfer.
Here's the configuration: a fluid with a given inlet pressure and temperature flows through a pipe, and the outside temperature is higher than the inlet fluid temperature. The pressure at the outlet of the pipe is lower than the inlet pressure to drive the flow of the fluid.
I assigned a material to the wall of the pipe using WallConstProps. I want to simulate the convective heat transfer between the wall and the ambient, so I used the Convection component of the Thermal library. I have attached a picture of my current system.
My question is: to simulate the convective heat transfer between the wall and the fluid, do I just have to connect the wall to the heatPort of the DynamicPipe and set use_HeatTransfer to true?
Moreover, I don't really understand how the heat ports of WallConstProps and the DynamicPipe interact. When I connect them, I have to select which heat port to use, so does the heat transfer apply to the whole component or just to the segment of the component corresponding to that heat port?
Thanks in advance and have a great day,
Maxime
The convective heat transfer of the fluid inside the pipe can be specified with the parameter HeatTransfer in the Assumptions tab of the DynamicPipe model. The drop-down menu gives you different heat transfer correlations to choose from. The default, IdealHeatTransfer, has no (convective) thermal resistance; that is, the temperatures of the fluid volumes are equal to the heatPort temperatures. You might want to use LocalPipeFlowHeatTransfer, which models the fluid's convective heat transfer under both laminar and turbulent conditions.
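In textual form, the same selection looks roughly like this (the Medium and geometry values are placeholders):

    Modelica.Fluid.Pipes.DynamicPipe pipe(
      redeclare package Medium = Modelica.Media.Water.StandardWater,
      use_HeatTransfer = true,
      // choose a convective correlation instead of the default IdealHeatTransfer
      redeclare model HeatTransfer =
          Modelica.Fluid.Pipes.BaseClasses.HeatTransfer.LocalPipeFlowHeatTransfer,
      length = 1,
      diameter = 0.02);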
As for your second question: since the DynamicPipe model is discretized into nNodes segments, when you connect the pipe to the wall model (discretized into n segments) you are asked which segments to connect. If you accept the default [:], pipe segments 1:nNodes are connected to wall segments 1:n. The number of wall and pipe segments should therefore be equal; in the wall model you can set n=pipe.nNodes.
The default value of nNodes is 2. If you want to use only one segment, you also need to change the model structure of the pipe to something other than av_vb with the parameter modelStructure, which can be found in the Advanced tab.
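Putting the pieces together, a one-segment configuration could look like the sketch below. I am assuming a WallConstProps-style wall instance called wall with vectorized ports heatPort_a/heatPort_b; check the port names of your wall model:

    Modelica.Fluid.Pipes.DynamicPipe pipe(
      use_HeatTransfer = true,
      nNodes = 1,
      // av_vb needs at least two nodes, so pick another structure
      modelStructure = Modelica.Fluid.Types.ModelStructure.a_v_b,
      length = 1,
      diameter = 0.02);
    // in the wall model: n = pipe.nNodes
    connect(pipe.heatPorts, wall.heatPort_b);  // same as selecting [:] in the dialog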

Pipes and channels in Modelica

I am trying to model a microchannel for fluid transport in OpenModelica. The channel is non-circular, though. I am not an expert in Modelica, so if I may ask: where can I start? Are the static and dynamic pipes in the Fluid library suitable for my purpose with some modifications?
Here's a link showing what the channel looks like:
https://www.dropbox.com/s/7opdu6dwnvqfsfc/pipe.png?dl=0
Any help would be appreciated.
I don't think there is a direct way to model non-circular pipes with the MSL, but you can probably reduce the problem to a circular pipe that has the same properties as your non-circular one.
I am no expert on fluid dynamics, but if the flow can be assumed to always be orthogonal to the cross section of the pipe (no chaotic behavior), you should be able to map the cross-sectional area to an equivalent diameter of a circular pipe and use that instead.
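A common way to do that mapping is the hydraulic diameter D_h = 4*A/P, where A is the cross-sectional area and P the wetted perimeter. Note also that the MSL pipe models expose isCircular, crossArea, and perimeter parameters, so you may be able to enter the geometry directly. A minimal sketch for a rectangular channel (the dimensions are made-up example values):

    model MicrochannelSketch
      "Equivalent circular pipe for a rectangular microchannel"
      import Modelica.Units.SI;  // Modelica.SIunits in MSL 3.x
      parameter SI.Length w = 200e-6 "Channel width (example value)";
      parameter SI.Length h = 100e-6 "Channel height (example value)";
      // Hydraulic diameter of a rectangle: D_h = 4*A/P = 2*w*h/(w + h)
      final parameter SI.Diameter d_h = 2*w*h/(w + h);

      Modelica.Fluid.Pipes.DynamicPipe channel(
        redeclare package Medium = Modelica.Media.Water.StandardWater,
        length = 0.01,
        diameter = d_h);
      // Alternative: isCircular = false, crossArea = w*h, perimeter = 2*(w + h)
    end MicrochannelSketch;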

How can I create a 3D modeling app? What resources will I require?

I want to create an application which converts 2D images/video into a 3D model. While researching this, I found similar applications like Trnio, Scann3D, Qlone, and a few others (though some of them produce poor output 3D models). I also found out about a technology from Microsoft Research called MobileFusion, which showed the same vision I was hoping for with my application, but these apps were nothing like that.
Creating a 3D modelling app is a complex task, and achieving a high standard requires a lot of studying. To point you in the right direction, you most likely want to perform something called Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM).
If you want to program this yourself, OpenCV is a good place to start if you know C++ or Python. A typical pipeline involves feature extraction and matching, camera pose estimation, and triangulation, followed by optimisation with bundle adjustment. All pipelines for SfM and SLAM follow these general steps (with exceptions, of course). All of these steps are possible in OpenCV, although Google's Ceres Solver is an excellent open-source bundle adjuster. SfM generally goes on to dense matching, which is where you get very dense point clouds that are good for creating meshes. A free, open-source pipeline for this is OpenSfM. Another good source of tools is OpenMVG, which has everything you need to make a full pipeline.
SLAM is similar to SfM but has more of a focus on real-time application and less on absolute accuracy. Its applications are more centred around robotics, where a robot wants to know where it is relative to its environment but is not so concerned with absolute accuracy. The top SLAM algorithms are ORB-SLAM and LSD-SLAM. Both are open source and free for you to incorporate into your own software.
So really it depends on what you want: SfM for high accuracy, SLAM for real time. If you want a good 3D model, I would recommend using existing algorithms, as they are very good.
The best commercial software, in my opinion, is Agisoft PhotoScan; if you can make anything half as good as that, I'd be very impressed. To answer your question about what resources you will require: in my opinion, Python/C++ skills, the ability to Google well, and the spare time to read up properly on photogrammetry and SfM.

Robot odometry in LabVIEW

I am currently working on a (school) project involving a robot that has to navigate a corn field.
We need to build the complete software in NI LabVIEW.
Because of the tasks the robot has to be able to perform, it has to know its position.
As sensors we have a 6-DOF IMU, some unreliable wheel encoders, and a 2D laser scanner (SICK TIM351).
So far I have been unable to find any suitable algorithms or tutorials, and I am really stuck on this problem.
I am wondering if anyone has ever attempted to make SLAM work in LabVIEW, and if so, are there any examples or explanations of how to do this?
Or is there perhaps a toolkit for LabVIEW that contains this functionality?
Kind regards,
Jesse Bax
3rd-year mechatronics student
As Slavo mentioned, there's the LabVIEW Robotics module, which contains algorithms like A* for pathfinding. But there's not much in it that can help you solve the SLAM problem, as far as I am aware. The SLAM problem consists of the following parts: landmark extraction, data association, state estimation, and updating of state.
For landmark extraction, you have to pick one or multiple features that you want the robot to recognize. This can, for example, be a corner or a line (a wall in 3D). You could use clustering, split-and-merge, or the RANSAC algorithm. I believe your laser scanner extracts and stores the points in a list sorted by angle, which makes the split-and-merge algorithm very feasible. RANSAC is the most accurate of them but also has higher complexity. I recommend starting with some ideal data points for testing the line extraction: for example, put your laser scanner in a small room with straight walls, perform one scan, and save it to an array or a file. Make sure the contour is a bit more complex than just four walls, and remove noise either before or after the measurement.
I haven't read up on good methods for data association, but you could, for example, simply consider a landmark new if it is more than a certain distance away from any existing landmark, and update the existing landmark otherwise.
State estimation and updating of state can be achieved with the complementary filter or the extended Kalman filter (EKF). The EKF is the de facto standard for nonlinear state estimation [1] and tends to work very well in practice. The theory behind the EKF is quite tough, but implementing it should be a tad easier. I would recommend using the MathScript module if you are going to program the EKF. The point of these two filters is to estimate the position of the robot from the wheel encoders and the landmarks extracted from the laser scanner.
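For reference, the standard EKF predict/update equations are (f is the motion model driven by the wheel encoders, h the landmark measurement model, F_k and H_k their Jacobians, and Q_k, R_k the process and measurement noise covariances):

    \hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_k)
    P_{k|k-1}       = F_k P_{k-1|k-1} F_k^{\top} + Q_k
    K_k             = P_{k|k-1} H_k^{\top} \left( H_k P_{k|k-1} H_k^{\top} + R_k \right)^{-1}
    \hat{x}_{k|k}   = \hat{x}_{k|k-1} + K_k \left( z_k - h(\hat{x}_{k|k-1}) \right)
    P_{k|k}         = \left( I - K_k H_k \right) P_{k|k-1}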
As the SLAM problem is a big task, I would recommend programming it in multiple smaller subVIs so that you can properly test each part without too much added complexity.
There are also a lot of good papers and resources on SLAM:
http://www.cs.berkeley.edu/~pabbeel/cs287-fa09/readings/Durrant-Whyte_Bailey_SLAM-tutorial-I.pdf
http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf
The book "Probabilistic Robotics".
https://wiki.csem.flinders.edu.au/pub/CSEMThesisProjects/ProjectSmit0949/Thesis.pdf
NI provides the LabVIEW Robotics module, and there are plenty of templates for it. To start, you can check the Starter Kit 2.0 template, which provides a simple working self-driving robot project. You can build on such a template and develop your own application from a working model rather than from scratch.

Is it possible to track an arbitrary skeleton model with the Kinect?

I understand that the Kinect uses a predefined skeleton model to return the skeleton based on the depth data. That's nice, but it only allows you to get a skeleton for people. Is it possible to define a custom skeleton model? For example, maybe you want to track your dog while he's doing something. So, is there a way to define a model with four legs, a tail, and a head, and to track it?
Short answer: no. Using the Microsoft Kinect for Windows SDK's skeleton tracker, you are stuck with the one they give you. There is no way to inject a new set of logic or rules.
Long answer: sure. You are not able to use the pre-built skeleton tracker, but you can write your own. The skeleton tracker uses data from the depth stream to determine where a person's joints are. You could take that same data and process it for a different skeleton structure.
Microsoft does not provide access to the internal functions that process and output the human skeleton, so you would be unable to use it as any kind of reference for how the skeleton is built.
In order to track anything but a human skeleton you'd have to rebuild it all from the ground up. It would be a significant amount of work, but it is doable... just not easily.
There is a way to learn a bit about this subject by studying the Face Tracking DLL example from the SDK examples:
http://www.microsoft.com/en-us/kinectforwindows/develop/