In my A-Frame scene I have two separate inverted-sphere models with two different materials.
To create the inverted sphere objects, I am using Blender. I apply the textures in the Texture panel, then apply them as materials, and export the models as .dae with materials included in the export settings.
Textures window for invertedsphere2:
Materials window for invertedsphere2:
In this photo, "models/invertedsphere.dae" properly shows "glyphs.png" as the texture.
Here's how my second inverted sphere appears in Blender, and how I assume it should look in A-Frame.
However, this is how it appears in A-Frame.
The first sphere is 5 units in every dimension and the second sphere is 4.7 units in every dimension, so I should be able to see the first sphere through the transparent areas of the second sphere. However, this is not happening.
How do I get the texture to show properly?
Additionally, my scene code:
<html>
<head>
  <meta charset="utf-8" />
  <title>Aetheria</title>
  <meta name="description" content="Aetheria" />
  <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
</head>
<body>
  <a-scene>
    <a-assets>
      <!-- Sky texture asset referenced by #skyTexture below; the image path is a placeholder. -->
      <img id="skyTexture" src="textures/sky.png" />
    </a-assets>
    <!-- Primitives. -->
    <a-box position="-1 0.6 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
    <a-sphere position="0 1.35 -5" radius="1.25" color="#EF2D5E"></a-sphere>
    <a-cylinder position="1 0.85 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
    <a-plane position="0 0.1 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
    <a-entity collada-model="model/invertedsphere/invertedsphere.dae" scale="5 5 5" position="0 1.441 -2.752"></a-entity>
    <a-entity collada-model="model/invertedsphere/invertedsphere2.dae" scale="4.7 4.7 4.7" position="0 1.441 -2.752"></a-entity>
    <!-- Background sky. -->
    <a-sky height="2048" radius="30" src="#skyTexture" theta-length="90" width="2048"></a-sky>
    <!-- Ground. -->
  </a-scene>
</body>
</html>
The solution I ended up finding involved using glTF and the process was much more complicated than I initially anticipated. I will do my best to write a condensed guide here.
1. Clone the glTF Blender Exporter.
2. Place the scripts in your Blender add-ons directory and enable the add-on.
3. In the model you wish to texture or add a material to, choose "File > Link", navigate to "(working directory)/glTF-Blender-Exporter/pbr_node/glTF2.blend/Node Tree/", and select "glTF Metallic Roughness" and/or "glTF Specular Glossiness".
4. Open the Node Editor and make sure you are displaying Shader Nodes and taking the Shader Nodes from the object, e.g.
5. In the Node Editor menu, select "Add > Group > glTF Metallic Roughness/glTF Specular Glossiness", then click anywhere in the Node Editor to place the group.
For reference, my object is set up like so:
Note that my Image Texture's Alpha output connects to the Alpha input on the shader. This is the key part that makes the texture transparent.
NOTE: When exporting to glTF, no shaders other than the glTF shaders will work at this time.
6. Export as either .gltf or .glb. If you export as .gltf, a .gltf file and a .bin file are created; the .bin file contains the material and texture. If you export as .glb, the material and texture are self-contained in the same file as your model.
7. Test using the glTF viewer. I prefer it to testing in A-Frame because it gives error output in a clear, readable fashion and lets you drag-and-drop models.
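Once the model looks right in the viewer, it can be dropped back into the A-Frame scene with the gltf-model component in place of collada-model. A minimal sketch (the file name and path are illustrative, and I believe glTF 2.0 files need A-Frame 0.7.0 or newer rather than the 0.6.0 build above):
<a-assets>
  <!-- Path is a placeholder; point it at your exported .glb. -->
  <a-asset-item id="sphere2" src="model/invertedsphere/invertedsphere2.glb"></a-asset-item>
</a-assets>
<a-entity gltf-model="#sphere2" scale="4.7 4.7 4.7" position="0 1.441 -2.752"></a-entity>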
I would like to know if the Assimp FBX loader supports PBR materials.
I am currently using it with glTF/.glb files, and it loads my PBR textures perfectly.
I am loading PBR textures via the "assimp/pbrmaterial.h" header file, but this file only defines glTF macros.
How can I load PBR textures when using the FBX file format with Assimp?
Regards.
I don't think it can. glTF 2.0 uses a single texture that stores metallic in the blue channel and roughness in the green channel. From my own testing with Blender v2.93.3 (the latest right now), if you use its Shader Editor to split that single texture into separate RGB channels, the FBX won't be saved with any paths to it. Even when you import the FBX back into Blender, it will only have the base color and normal map applied, nothing else. So I don't think you can expect Assimp (v5.0.1) to work with it either.
This might just be a bug in Blender, I'm not sure, because it seems that if metallic and roughness are individual textures, Blender can import the FBX back correctly. It's a pretty big oversight that you can't export (as FBX) models that use multi-channel textures; pretty much all PBR workflows have them merged into a single texture to reduce texture lookups.
I don't know... it seems like glTF 2.0 is just a much better format. It comes with a GPU-friendly binary (compared to something like Wavefront OBJ, which is very slow to parse), and you can even keep the textures separate if you choose the "glTF Separate" format when you export. You automatically get a merged PNG with both metallic and roughness, like I said before:
If you really want support for FBX files (I know I do; it's a popular format), what you could do is let Assimp correctly identify and load the base color and normal map, then manually load the merged "PBR" texture yourself before the render loop starts, bind it, and send it as a uniform to the fragment shader before drawing. I tested this and it works.
// Bind the manually loaded metallic-roughness texture to texture unit 2.
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, *pMyMetRoughTexture);
// Point the sampler2D in the fragment shader at texture unit 2.
pShader->SetInt("uMaterial.MetallicRoughnessTexture", 2);
pMyModel->Draw(pShader);
But having two of the three textures loaded automatically and one left for you to handle manually, for every single model, is just... bleh. I'm sorry if this isn't a "proper" answer. I'm really disappointed by the lack of PBR support for something used so ubiquitously, in (I think) all AAA games of the last few years.
I am developing an application that is basically a collaborative drawing tool.
I have a HUGE original SVG drawing (4.5 MB), and users can log in, pan, and draw their own paintings on top of this original drawing.
I'm using react-native-svg and react-native-svg-transformer. I have tried two approaches, and both failed or are not good enough:
The first one is adding an asset and importing it as:
import Earth from './FinalEarth.svg';
The problem is that it shows me this error:
RangeError: Maximum call stack size exceeded.
This error is located at:
in SvgComponent (at DrawView.js:265)
The other approach is loading it from an URI. Like this:
<Svg
  style={styles.drawSurface}
  width={this.props.width}
  height={this.props.height}
>
  <G
    transform={{
      translateX: left * resolution,
      translateY: top * resolution,
      scale: zoom
    }}
  >
    <SvgCssUri
      width="100%"
      height="100%"
      uri="myurl//EarthCompact.svg?alt=media&token=3435369a-6fb7-4d05-91ca-b6b7c7a2e28e"
    />
  </G>
</Svg>
With this approach I have two issues:
I cannot show a loader, so while the 4 MB file downloads from my server, only a blank screen is displayed.
And once it has loaded, performance is also very poor when zooming or panning.
Does anyone have a suggestion on the best approach for this case?
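For the loader issue, one possible workaround is to fetch the SVG XML yourself and render it with SvgCss once it arrives. A rough sketch, assuming your react-native-svg version exports SvgCss (the URL constant and component name are illustrative):
import React from 'react';
import { ActivityIndicator } from 'react-native';
import { SvgCss } from 'react-native-svg';

const EARTH_URL = 'https://example.com/EarthCompact.svg'; // placeholder URL

class EarthSvg extends React.Component {
  state = { xml: null };

  async componentDidMount() {
    // Download the SVG source ourselves so the loading state is under our control.
    const response = await fetch(EARTH_URL);
    const xml = await response.text();
    this.setState({ xml });
  }

  render() {
    // Show a spinner until the 4 MB file has downloaded.
    if (!this.state.xml) {
      return <ActivityIndicator size="large" />;
    }
    return <SvgCss xml={this.state.xml} width="100%" height="100%" />;
  }
}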
We're trying to do tiling textures for AR Quick Look (iOS, in Pixar's USDZ format), but we are running into a problem.
What we have:
A project in Blender, where we scale the texture via a Mapping node (screenshot below), and everything looks fine; the texture is tiled properly.
When I export to glTF 2.0, you can see that the texture is not scaled (the scale should be 100, 100), which is why it looks bad. Using untiled textures for things like roads is a bad idea, which is why I am using tiling.
The same goes for USDZ, but I think that is because of the glTF format.
I'm not sure whether I should be doing something differently when exporting from Blender to glTF.
This question might be better suited for Blender SE, not here.
The glTF exporter is looking for a shader node called "UV Map" instead of that "Texture Coordinate" node you have there. I realize the names are almost synonymous, but the "UV Map" node has a chooser for which UV Map, and that's what the exporter wants to find. (For more detail, there is documentation.)
Also, I don't know if glTF export supports that little splitter node you have in your graph. Try drawing individual links from the "Mapping" node to each of the image textures.
So I am new to this next-gen Flash application they call "Adobe Animate CC", and I am trying to create an interactive map scene... very basic. If you click on the USA it should zoom in. Click again and it should zoom out.
The issue I am having is that even though my map was imported from an SVG file, and from what I can tell it retains its vector data inside the "Adobe Animate CC" workspace, when I apply the scale tween using CreateJS the edges of the graphic become very pixelated.
Here's the code I am using:
var _this = this;
_this.stop();
_this.america.addEventListener("click", zoomMap);
function zoomMap(event) {
    createjs.Tween.get(exportRoot.world1).to({scaleX: 10, scaleY: 10, x: 4000, y: 1000}, 1000);
}
And here are some images of the pixelated result:
Even more disconcerting is that the blue-green circle is a native circle object inside a symbol, not an SVG. I would expect that, at least, to stay crisp under transformation.
Is this unavoidable? Is the application caching bitmap versions of my vector files on export? Can I stop this? Can I force a re-render of the vector file during and after my tween? Is there any way around this? Does this application even really support vector graphics?
Animate might be exporting as images, but it shouldn't unless you tell it to. What does your library JavaScript look like? Are any images exported? Maybe search the source for .cache to see if Adobe is doing anything funny under the hood.
If the map is an SVG source: unfortunately, the only SVG support in EaselJS (which underlies the Animate export) is for SVG as a "bitmap source". This means it is treated as an image with specific dimensions, and scaling it past 100% will interpolate the details.
It might be possible to load it as a larger bitmap, and scale it down to start, but that will:
make it much larger in memory
still only let you scale so much
Another option is to import the SVG asset into Adobe Animate, which should convert it to a vector graphic. If it is vector in EaselJS, you can scale it as much as you want, because it uses Canvas vector APIs to draw, instead of an image source.
You mentioned that the green circle is native (I assume a shape in Animate?). Are you sure it's not being exported as an image instead of a shape? Are you caching anything?
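If something is cached, note that EaselJS lets you pass a scale to cache(), so you can re-cache the vector at the zoom target before tweening and the cached bitmap stays crisp. A sketch; the bounds and scale factor here are illustrative:
// Redraw the vector into the cache at 10x before scaling up.
// The bounds (0, 0, 800, 600) are placeholders; use the symbol's real bounds.
exportRoot.world1.cache(0, 0, 800, 600, 10);
createjs.Tween.get(exportRoot.world1)
    .to({scaleX: 10, scaleY: 10, x: 4000, y: 1000}, 1000)
    .call(function() {
        // Optionally drop the cache afterwards to return to live vector drawing.
        exportRoot.world1.uncache();
    });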
Hope that helps!
I'm creating an animation with a simple rig in Blender and using it in A-Frame 0.7.0. I have a box with a lid that opens via an Euler rotation, but all children of the armature rotate with the lid, even those that are not supposed to animate. Wiring the flaps up to their respective bones also distorts the geometry of the flaps, but I am simplifying this problem to just the lid for now to try to understand what is happening.
Animation works fine in Blender and apparently works in UX3D.
Attempting to separate the mesh into pieces and de-parent them from the armature results in the de-parented meshes not rendering at all, despite exporting all objects.
I have tried Blender 2.78c and 2.79 and virtually all combinations of glTF export options with the latest Blender glTF 2.0 exporter from Khronos.
Blender Screenshot
A-Frame Demo
<a-gltf-model cursor-listener id="gift" src="#rigged-gift" animation-mixer=""></a-gltf-model>
Blender source included in CodePen link
Appreciate any direction I can get on this problem!
This is now resolved with Blender 2.8 and above.