Question about convection heat transfer in the DynamicPipe of the Modelica Standard Library (Modelica.Fluid)

So I'm new to the tool OpenModelica and I'm trying to simulate the flow of a fluid in a pipe with heat transfer.
Here's the configuration: a fluid with a given inlet pressure and temperature flows through a pipe, and the outside temperature is higher than the inlet fluid temperature. The pressure at the outlet of the pipe is lower than the inlet pressure so that the fluid flows.
I assigned a material to the wall of the pipe using WallConstProps. I want to simulate the convective heat transfer between the wall and the ambient, so I used the Convection component of the Thermal library. I have attached a picture of my current system.
My question is: to simulate the convective heat transfer between the wall and the fluid, do I just have to connect the wall to the heatPort of the DynamicPipe and set use_HeatTransfer to true?
Moreover, I don't really understand how the heatPorts of WallConstProps and the DynamicPipe work. When I connect them, I have to select which heatPort to use, so does the heat transfer apply to the whole component or just to the segment of the component corresponding to that heatPort?
Thanks in advance and have a great day,
Maxime

The convective heat transfer of the fluid inside the pipe can be specified with the parameter HeatTransfer in the Assumptions tab of the DynamicPipe model. The drop-down menu gives you different heat transfer correlations to choose from. The default, IdealHeatTransfer, has no (convective) thermal resistance; that is, the temperatures of the fluid volumes are equal to the heatPort temperatures. You might want to use LocalPipeFlowHeatTransfer instead, which models the fluid-side convective heat transfer under both laminar and turbulent conditions.
As for your second question: since the DynamicPipe model is discretized into nNodes segments, when you connect the pipe to the wall model (discretized into n segments) you are asked which segments to connect. Accepting the default [:], pipe segments 1:nNodes are connected to wall segments 1:n. Obviously, the number of wall and pipe segments should be equal; in the wall model you could set n=pipe.nNodes.
The default value of nNodes is 2. If you want to use only one segment, you also need to change the model structure of the pipe via the parameter modelStructure (found in the Advanced tab) to one different from av_vb.
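Put together, this amounts to something like the following rough Modelica sketch. This is not a complete model: the media, sources, the ambient side and the exact port names (heatPorts, heatPort_a) as well as the full class path of WallConstProps depend on your MSL version, so treat every name below as an assumption to be checked against your library browser.

```modelica
model PipeWithWall "Sketch: pipe segments coupled to a discretized wall"
  Modelica.Fluid.Pipes.DynamicPipe pipe(
    use_HeatTransfer = true,
    nNodes = 2,
    redeclare model HeatTransfer =
        Modelica.Fluid.Pipes.BaseClasses.HeatTransfer.LocalPipeFlowHeatTransfer);
  WallConstProps wall(n = pipe.nNodes);  // wall discretization matched to the pipe
equation
  // accepting the default [:] connects pipe segment i to wall segment i
  connect(pipe.heatPorts, wall.heatPort_a);
end PipeWithWall;
```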

Related

Is it possible to model a pipe immersed in a fluid cavity using the OpenModelica standard library?

I would like to model a pipe immersed in a fluid cavity to study the heat transfer between the two fluids. I modeled this using two DynamicPipes connected to the same WallConstProps, but I'm not sure this is a correct way to model it. My question is: is there a specific component available in the MSL to model such a configuration, or should I look in other libraries?
Best regards,
Maxime
As far as I know, there is no component in the Modelica Standard Library for the heat transfer from a pipe to a surrounding fluid. If you only need heat transfer orthogonal to the flow in the wall, then it is a good assumption to model both fluids as pipes connected via a heat transfer element. To model the heat transfer from the second pipe to the surrounding fluid, you can create your own heat transfer model based on, e.g., a Nusselt correlation.
The MSL offers basic components to provide a common basis for all Modelica users and serves as a starting point. Specific applications can be covered by specialized commercial or open-source libraries.
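To illustrate the Nusselt-correlation idea, here is a minimal Python sketch of how such a heat transfer coefficient could be computed, using the well-known Dittus-Boelter correlation for turbulent duct flow. The numbers in the example call are only illustrative (roughly water-like properties), not validated values.

```python
def htc_dittus_boelter(re, pr, k, d_h):
    """Heat transfer coefficient h [W/(m^2 K)] from Nu = 0.023 Re^0.8 Pr^0.4.

    re  : Reynolds number (correlation valid roughly for Re > 1e4)
    pr  : Prandtl number (heating case exponent 0.4)
    k   : fluid thermal conductivity [W/(m K)]
    d_h : hydraulic diameter [m]
    """
    nu = 0.023 * re**0.8 * pr**0.4   # Nusselt number
    return nu * k / d_h

# illustrative values: water-like fluid in a 20 mm pipe
h = htc_dittus_boelter(re=5e4, pr=7.0, k=0.6, d_h=0.02)
q_flux = h * 10.0  # resulting heat flux for a 10 K temperature difference [W/m^2]
```

In a Modelica heat transfer model the same formula would relate the heatPort heat flow to the wall-fluid temperature difference per pipe segment.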

How to save the GNU Radio waterfall plot?

I want to measure the spectrum occupancy of one of the GSM bands over 24 hours using GNU Radio and a USRP.
Is there any way to save the waterfall plot of GNU Radio to an image file or any other format?
If not, is there any other way to show the spectrum occupancy over a certain amount of time in one image or graph?
Is there any way to save the waterfall plot of GNU Radio to an image file or any other format?
Middle-mouse-button -> Save.
If not, is there any other way to show the spectrum occupancy over a certain amount of time in one image or graph?
This is a typical case for "offline processing and visualization". I'd recommend you build a GNU Radio flow graph that takes the samples from the USRP, applies decimating band-pass filters (best case: shaped like the GSM pulse shape), then calculates the power of the resulting sample streams (complex_to_mag_squared) and saves these power vectors.
You can then easily visualize them later with e.g. numpy/matplotlib, or whatever tool you prefer.
The real problem is that GSM spectrum access happens on the order of microseconds, while you want to observe for 24 hours; no visualization in this world can both represent accurately what's happening and still be compact. You will need to come up with some intelligent measure built atop the raw occupancy information.
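The offline part can be sketched in a few lines of numpy. Here synthetic noise with one strong burst stands in for a real capture (which you would read with np.fromfile("capture.cf32", dtype=np.complex64) from a GNU Radio file sink); the sample rate, block length and occupancy threshold are all assumptions for the example.

```python
import numpy as np

# Synthetic stand-in for samples from a GNU Radio file sink.
rng = np.random.default_rng(0)
fs = 1_000_000                               # assumed sample rate [Hz]
samples = (rng.standard_normal(fs)
           + 1j * rng.standard_normal(fs)).astype(np.complex64)
samples[fs // 2 : fs // 2 + fs // 100] *= 30  # a 10 ms strong burst

power = np.abs(samples) ** 2                 # same as complex_to_mag_squared

# Compress: average the power over 1 ms blocks so 24 h of data stays manageable.
block = fs // 1000
avg = power[: len(power) // block * block].reshape(-1, block).mean(axis=1)

# Crude occupancy measure: a block counts as "busy" if it is well above the
# noise floor (10x the median block power is an arbitrary choice here).
occupied = avg > 10 * np.median(avg)
duty_cycle = occupied.mean()                 # fraction of time the band was busy
```

The avg vector (one value per millisecond) is what you would plot with matplotlib, or reduce further to per-minute duty cycles for a 24-hour overview.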

Finding the centre line of a pipe using Python

I am currently writing code in Python for flows through pipes. In this regard, I have to find a centre line passing through a 3D pipe geometry defined by a Nastran mesh (cells with three or four edges whose coordinates I can access). I am using the pyNastran module in Python to get all the relevant data and functions.
My question is: what would be the most efficient way of finding the centre line of the pipe? The pipe is a 3D pipe with bends in all directions. (I have the coordinates of every single point on the mesh in an array.)
That is not such an easy topic :-/ The problem is that the centre line is not a local property.
For sure, each point on the centre line corresponds to one slice through the pipe, or more simply to one perimeter on the pipe surface. But for any attempt to define that correspondence from local characteristics alone, it is easy to construct an example with bends or a changing pipe diameter that produces a 'strange' result.
One way to solve it is to look for techniques with properties similar to what is needed here: we want the slices to be 'parallel' and to pass uniformly through the pipe. On one project we used heat diffusion/transfer to tackle such a problem. The idea is to set boundary conditions: on one end of the pipe set the value 1 and on the other end the value 0. Heat will diffuse from one side to the other, and the isotherms will have good properties.
After that, choose a discretization of the centre line (on [0, 1]); for each point, find the isotherm at that temperature and compute its centre of mass. Connecting these centres produces the centre line.
The diffusion can be computed in 3D (on the volume) or in 2D (on the surface); it is probably faster on the surface.
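The isotherm-centroid step can be sketched on a toy mesh as follows. The 'temperature' field here is faked with a normalized coordinate along a quarter-bend pipe; on a real mesh it would come from actually solving the diffusion problem with the boundary conditions described above, so the field construction is purely illustrative.

```python
import numpy as np

# Toy surface mesh: a pipe of radius 1 bent around a circle of radius 5.
theta = np.linspace(0, np.pi / 2, 200)   # position along the bend
phi = np.linspace(0, 2 * np.pi, 40)      # position around the perimeter
th, ph = np.meshgrid(theta, phi)
r = 5 + np.cos(ph)                       # distance from the bend axis
pts = np.column_stack([(r * np.cos(th)).ravel(),
                       (r * np.sin(th)).ravel(),
                       np.sin(ph).ravel()])
temp = (th / (np.pi / 2)).ravel()        # stand-in for the diffusion solution

def centre_line(points, temperature, n_slices=20, band=0.02):
    """Centre of mass of each isotherm band -> ordered centre-line points."""
    levels = np.linspace(band, 1 - band, n_slices)
    return np.array([points[np.abs(temperature - t) < band].mean(axis=0)
                     for t in levels])

line = centre_line(pts, temp)            # (n_slices, 3) array of centres
```

For this geometry the recovered centres lie close to the true bend axis (radius 5, z = 0), even though no slice plane was ever constructed explicitly.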

Dual Kinect calibration using a powerful IR LED illuminator

I am using multiple Kinects within the same scene, so I need to calibrate them and find the extrinsic parameters (translation and rotation) with respect to a world coordinate system. Once I have that information, I can reconstruct the scene at the highest level of accuracy. The important point is: I want to get submillimeter accuracy, and it might be nice if I could use a powerful IR projector in my system. But I do not have any background in IR sensors and calibration methods, so I need to know about two subjects: 1. Is it possible to add an IR LED illuminator to a Kinect and manage it? 2. If so, how do I calibrate my new system?
Calibration (determining the relative transforms: rotation, scale, position) is only part of the problem. You also need to consider whether each Kinect can handle the interference of the other Kinect's projected IR reference pattern.
"Shake n Sense" (by Microsoft Research) is a novel approach that has been demonstrated to work and that you may be able to use.
https://www.youtube.com/watch?v=CSBDY0RuhS4

Is it possible to recognize all objects in a room with the Microsoft Kinect?

I have a project where I have to recognize an entire room so I can calculate the distances between objects (big ones, e.g. bed, table, etc.) and a person in that room. Is something like that possible using the Microsoft Kinect?
Thank you!
Kinect provides you with the following:
Depth Stream
Color Stream
Skeleton information
It's up to you how you use this data.
To answer your question: the official Microsoft Kinect SDK doesn't provide shape detection out of the box. But it does provide skeleton data/face tracking, with which you can detect the distance of a user from the Kinect.
Also, by mapping the color stream to the depth stream, you can detect how far a particular pixel is from the Kinect. In your implementation, if you have unique characteristics of the different objects, like color, shape and size, you can probably detect them and also detect their distance.
OpenCV is one of the libraries I use for computer vision, etc.
Again, it's up to you how you use this data.
The Kinect camera provides depth and consequently 3D information (a point cloud) about matte objects in the range 0.5-10 meters. With this information it is possible to segment out the floor of the room (by fitting a plane) and possibly the walls and the ceiling. This step is important since these surfaces often connect separate objects, making them one big object.
The remaining parts of the point cloud can be segmented by depth if they don't touch each other physically. Using color, one can separate the objects even further. Note that we implicitly define an object as a 3D-dense and color-consistent entity, while other definitions are also possible.
As soon as you have your objects segmented, you can measure the distances between your segments, analyse their shape, recognize artifacts or humans, etc. To the best of my knowledge, however, the skeleton library can only recognize humans after they have moved for a few seconds. Below is a simple depth map that was broken into a few segments using depth but not color information.
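The measurement step above can be sketched with numpy on a synthetic depth map. Two objects are "segmented" simply by their depth values (a stand-in for real connected-component segmentation of a Kinect frame), back-projected to 3D with an assumed pinhole camera model, and the distance between their centroids is computed. All depth values and camera intrinsics below are made up for the example.

```python
import numpy as np

# Synthetic depth map [m]: a wall 4 m away with two objects in front of it.
depth = np.full((120, 160), 4.0)
depth[40:80, 20:60] = 1.5        # "person" 1.5 m from the camera
depth[50:90, 100:140] = 2.5      # "table" 2.5 m from the camera

fx = fy = 200.0                  # assumed focal length in pixels
cx, cy = 80.0, 60.0              # assumed principal point

def segment_centroid(mask):
    """Back-project the masked pixels to 3D and return their mean point."""
    v, u = np.nonzero(mask)      # pixel rows (v) and columns (u)
    z = depth[mask]
    x = (u - cx) * z / fx        # pinhole camera model
    y = (v - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])

person = segment_centroid(depth == 1.5)
table = segment_centroid(depth == 2.5)
dist = np.linalg.norm(person - table)   # metres between the two objects
```

With a real Kinect the depth map would come from the SDK's depth stream and the intrinsics from its coordinate-mapping API, but the geometry of the distance measurement is the same.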