Glass is opaque, won't show fluid inside (Blender 2.83)

I simply want to be able to see things inside my glass tube, including the fluid, a little like in this video.
Currently, the glass is reflective yet opaque, like a mirror. This is the tube in wireframe; you can clearly see the liquid and the inflow object inside. Rendered, however, they are both hidden. You can see I used the Glass BSDF shader.
I also seem to be having trouble getting my liquid to render as a mesh instead of rainbow dots, but I think that's a separate problem and doesn't explain why my glass is opaque.
Thanks in advance,
Yosef

There are a number of things that could be the issue here, but I'll provide some insight on the most likely culprits. This could be caused by:
If you're using Cycles to render, make sure that the number of bounces for transparent and transmissive materials is at least 2; otherwise the glass will render as black (see the Python sketch after this list).
Make sure the lighting in your scene is rendering as intended, as insufficient light or a poorly placed camera can make glass look just like a mirror surface.
Similarly, check the material properties of the liquid inside the tube. If the liquid is being rendered as black/opaque, it may be what's causing the tube to act as a mirror.
These possible causes all assume that the materials are set up correctly. It would be easier to diagnose the problem with a little more information on your material setup (the nodes), the render engine you're using (Cycles, Eevee, etc.), and the lighting in your scene.
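For the first point, here is a minimal sketch of raising the Cycles bounce counts through Blender's Python API (the values are illustrative, and the property names assume Blender 2.8x):

import bpy

# Raise Cycles light-path bounces so rays are not terminated
# inside the glass and shaded black.
scene = bpy.context.scene
scene.cycles.max_bounces = 12
scene.cycles.transmission_bounces = 8     # rays passing through the glass
scene.cycles.transparent_max_bounces = 8  # rays through transparent shaders
scene.cycles.glossy_bounces = 4           # reflections off the glass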

You go:
Step 1: Material tab.
Step 2: Principled BSDF.
Step 3: Turn Transmission up to 1.
Step 4: Render Properties tab.
Step 5: Tick Screen Space Reflections.
Step 6: Drop down and tick Refraction.
Step 7: Material tab.
Step 8: Scroll down and tick Screen Space Refraction.
Step 9: Turn Roughness down to 0 to make it clearer.
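These toggles are Eevee settings; here is a hedged sketch of the same steps through Blender's Python API (assumes Blender 2.8x and a material named "Glass" using the default Principled BSDF node):

import bpy

scene = bpy.context.scene
scene.eevee.use_ssr = True             # Step 5: Screen Space Reflections
scene.eevee.use_ssr_refraction = True  # Step 6: Refraction

mat = bpy.data.materials["Glass"]      # assumed material name
mat.use_screen_refraction = True       # Step 8: Screen Space Refraction
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Transmission"].default_value = 1.0  # Step 3
bsdf.inputs["Roughness"].default_value = 0.0     # Step 9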

Related

Metal multisampling results in darkened textures

So I'm trying to implement full-screen MSAA in my Metal app. I have it working, and when drawing solid-filled polygons the edges appear smooth as expected. However, my textured polygons appear dark, and get darker as I increase the number of samples, indicating that the shader might be taking only one sample of the texture per fragment and blending it with n - 1 samples of black, therefore making it darker.
However, in my app I also have textures that I render to and then draw to the screen. These textures show up perfectly fine. I can't really see a difference between the two kinds of textures that would change the behavior of multisampling.
Anyway, if anyone could maybe give me any clues as to what's going on, I would greatly appreciate it. I'm pretty stumped on this one.
EDIT:
Here is how I am setting up all my pipeline state(s)
Here is how the texture pipeline state is set up specifically
I figured it out. The problem was that I hadn't set my stencil draw pipeline state to be multisampled. Therefore it was only reading the value in the stencil buffer for 1 out of n samples and hence darkening the output. Works fine now.

Positioning items dynamically for different screen sizes

Good morning!
I'm currently trying to create a view similar to this mockup.
I have put down 3 different screen sizes so you can see the issue.
I have a header background image (grey box) with an angled bottom. On the right I want to display an image, which obviously needs to be positioned.
Positioning it horizontally is no issue, but how can I position the image vertically? I have it positioned with a fixed value for one screen size but obviously need to make it flexible.
Any ideas? Help would be much appreciated!
David
You can definitely use measure as @rajesh pointed out, or you can use Dimensions. As far as getting the layout consistent across devices, using absolute positioning and measuring the device height should allow you to get consistency across these devices.
Check out this example I set up, it should be a good starting point at least.
https://rnplay.org/apps/pSzCKg

Kivy: Depth Order not so in Depth

Now I could be wrong about this but after testing it all day, I have discovered...
When adding a widget and setting the z-index, the value "0" seems to be the magic depth.
If a widget's Z is at 0, it will be drawn on top of everything that's not at 0, Z-wise.
It doesn't matter if a widget has a z-index of 99, -999, 10, -2 or whatever... it will not appear on top of a widget whose z-index is set to 0.
It gets stranger though...
Any index less than -2 or greater than 2 seems to create an "index out of range" error. The funny thing is, when I was working with a background and a sprite widget, the background's Z was set to 999 with no errors. When I added another sprite widget, that's when the -2 to 2 z-index limitation appeared.
Yeah I know...sounds whacked!
My question is, am I right about "0" being the magic Z value?
If so, creating a simple 2.5D effect, like making a sprite move behind a big rock, will take some unwanted code.
Since you can only set Z when adding a widget, one must remove a widget and immediately add it back with the new Z value.
You'll have to do this with the moving sprite and the overlapping object in question. Hell, I already have that code practically written, but I want to find out from the Kivy pros: is there a way to set the z-index without removing and re-adding a widget?
If not, I'll have to settle for the painful way.
My version of Kivy is 1.9.0
What do you mean by z-order? Drawing order is determined entirely by the order in which widgets are added to the parent, and the index argument to add_widget is just the list index at which the widget will be inserted. The correct way to change drawing order amongst widgets is to remove and re-add them, as in the sketch below (you can actually mess with the canvases manually, but that's the same thing at a lower level, and not a better idea).
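A minimal sketch of that remove-and-re-add pattern (the helper name bring_to_front is my own, not a Kivy API; note that add_widget's default index of 0 draws the widget on top of its siblings):

def bring_to_front(widget):
    # Kivy has no settable z-index; drawing order follows the
    # children list, so re-adding the widget changes its depth.
    parent = widget.parent
    if parent is not None:
        parent.remove_widget(widget)
        parent.add_widget(widget)  # default index=0 draws on top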
I found a working solution using basic logic based on the fact widgets have to be removed and added again in order to control depth/draw order.
I knew the Main Character widget had to be removed along with the object in question...so I created a Main Character Parent widget, which defines and control the Main Character, apart from its Graphic widget.
My test involves the Main Character walking in front of a large rock, then behind it... creating a 2.5D effect.
I simply used the y-position theory (whatever is lower on screen draws in front) along with widget attach and detach code to create the desired effect; see the sketch below.
The only thing that caught me off guard was the fact that my Actor's Graphic widget was reloading its textures on every re-add. That was a big no-no because the fps died.
Simple fix: I moved the texture loading to the Main Character Parent widget, so the loading is done once for all time.
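A hedged sketch of that y-sort idea under the same remove-and-re-add constraint (the names y_sort, parent and sprites are illustrative, not from the post above):

def y_sort(parent, sprites):
    # Re-add sprites from highest y to lowest: each add_widget call
    # (default index=0) puts its widget on top, so the sprite lowest
    # on screen ends up drawn in front -- a simple 2.5D effect.
    for s in sorted(sprites, key=lambda w: w.y, reverse=True):
        parent.remove_widget(s)
        parent.add_widget(s)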
PS: if anyone knows how to hide the scrollbars and wishes to share that knowledge, it'll be much appreciated. I haven't looked for an API solution for it yet, but I will soon.
Right now I'm just trying to make sure I can do the basic operations necessary for creating a commercial 2.5D game (handhelds).
I'm a graphic artist and web developer, so coming up with lovely visuals won't be an issue. I'm more concerned with what'll be "under the hood", so to say. Hopefully that's enough, lol.

How to make a PhysicsBody based on Alpha Values

Suppose there is a scene as follows:
There is a scene the same size as the frame of the device. The scene has a red ball, which is able to move throughout the 'world'. This world is defined by black and white areas, where the ball is ONLY able to move in the area that is white. Here is a picture to help explain:
Parts of the black area can be erased, as if the user were drawing with white over the scene. This means that the area in which the ball can move is constantly changing. Now, how would one go about implementing a physicsBody for the edge between the white and black areas?
I tried redefining the physicsBody every time it changed, but once the shape becomes complex enough, that isn't a viable solution at all. I tried creating a two-dimensional array of invisible 'boxes' that specify whether most of the area within each box is white or black, so that if the ball touched a black box, it would be pushed back. However, this required heavy rendering and too much iterating over the array. Since my original array contained boxes a little bigger than a pixel, I tried making the boxes bigger to smooth the motion a little, but this eventually caused the ball to be stopped inside white areas, or to appear partly inside the black area. This was undesirable, since the user could feel invisible barriers that they seemed to be hitting.
I tried searching for other methods to implement this 'destructible terrain' type scene, but the solutions that I found and tried were using other game engines. To further clarify, I am using Objective-C and Apple's SpriteKit framework; and I am not looking for a detailed class full of code, but rather some pseudo-code or implementation ideas that would lead me to a solution.
Thank you.
If your deployment target is iOS 8, this may be what you're looking for...
+ bodyWithTexture:alphaThreshold:size:
Here's a description from Apple's documentation:
Creates a physics body from the contents of a texture. Only texels that exceed a certain transparency value are included in the physics body.
where a texel is a texture element. You will need to convert your image to a texture before creating the SKPhysicsBody.
I'm not sure if it will allow for a hole in the middle like your drawing. If not, I suspect you can connect two physics bodies, a left half and a right half, to form the hole.

Windows 8 Metro app: designing for multiple resolutions

I am designing a simple music app where the user gets to play instruments, i.e. drums, and the problem I am facing is with resolutions.
The drums are images, which I have converted into buttons. Everything looks great in the state I designed it at.
However, when I switch to other resolution states, the buttons (images) are distorted, e.g. skewed or scaled, and look nasty.
I have tried designing or arranging them after selecting 'Enable state Recording', but the specific designs for that state are not being saved.
Have you tried the approaches discussed here? http://msdn.microsoft.com/en-us/library/windows/apps/hh465362.aspx For the actual button sizes, make sure you are not fixing the width/height with pixel values. Use *-weighted rows and columns to lay out your grids and have the buttons autosize to fill a given cell in the grid. Then match with the appropriate image resource per the article.
Grids are great for dividing up available space but they can't account for changes in aspect ratios. If your items are still set to Stretch (or Fill) then they can end up out of aspect ratio. Another option is to design the entire layout at a fixed size (let's say 1024 x 768 or 1366 x 768) and wrap the entire thing in a ViewBox. ViewBox will scale all elements equally and maintain the aspect ratio, adding letterboxing (or empty space) on the sides / top & bottom if necessary. This might be a better approach for a drum kit.
Hope that helps.
I redid the whole design again.
This time, I put each image inside a specific grid cell, and that made things a lot better. :)