How to transform an object from Cycles Render to Blender Render - blender

I have a Blender file whose scene loads fine in Cycles Render.
However, I need the textures to load correctly in Blender Render as well.
Any hints on how I can achieve that?
Thanks!

For simple materials it can be easier to re-create the same look using Blender Internal material settings. For more complex materials, the best you can do is bake the Cycles material to an image texture that you can then use in Blender Internal.
If you have trouble with baking, you may want to ask at blender.stackexchange, which offers Blender-specific help.
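If you want to script the bake, here is a minimal sketch using bpy. It assumes a 2.7x-era Blender with Cycles baking available, an active object that is already UV-unwrapped, and a node-based Cycles material; the image name, size, and output path are placeholders.

```python
import bpy

# Assumes: the object to bake is active, selected, and UV-unwrapped
obj = bpy.context.active_object
bpy.context.scene.render.engine = 'CYCLES'

# Create a target image for the bake (name/size are placeholders)
img = bpy.data.images.new("baked_diffuse", width=1024, height=1024)

# Add an Image Texture node and make it active so the bake writes into it
mat = obj.active_material
mat.use_nodes = True
node = mat.node_tree.nodes.new('ShaderNodeTexImage')
node.image = img
mat.node_tree.nodes.active = node

# Bake the full shading into the image (per-pass bake types also exist,
# but their names changed across 2.7x releases)
bpy.ops.object.bake(type='COMBINED')

# Save next to the .blend so Blender Internal can use it as a texture
img.filepath_raw = "//baked_diffuse.png"
img.file_format = 'PNG'
img.save()
```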

Related

Godot doesn't import glTF material shaders correctly

I have a model that I am using as my game world in Godot, and I am putting some textures on it in Blender. When exporting the model to .glb, it seems that some of the materials are not exported correctly:
the ones with an image texture that I scaled using a Mapping node.
The exporter just skips the first two nodes; when I disconnect them, the material in Blender looks like it does in Godot.
What it should look like (what it looks like in Blender):
What it looks like in Godot:
There are also some materials that use Mix Shader nodes to mix images together. These are skipped as well, and Godot just uses one of the two images instead of mixing them.
What it should look like:
What it looks like in Godot:
The F is from an image with text that is overlaid on top of the other image.
I am not really sure whether these issues lie with the glTF format (and certain shaders just don't work with it) or whether Godot fails to import them.
I hope someone can help me with this.
Thanks in advance and have a nice day,
Rover
In general, arbitrary node graphs cannot be exported from Blender — see https://blender.stackexchange.com/a/57541/43930 for the long explanation of that. To ensure that the material will export correctly, you'll need to refer to the documentation for the Blender glTF exporter and configure your material accordingly. For more complicated effects, you can also bake Blender nodes down to simpler textures.
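As a rough illustration of the "configure your material accordingly" part, the sketch below builds the node shape the glTF exporter understands: an Image Texture feeding a Principled BSDF. The texture path and material name are placeholders. For scaling, apply it to the UVs (or bake) rather than relying on a Mapping node.

```python
import bpy

# Build an export-friendly material: Image Texture -> Principled BSDF.
# "//texture.png" is a placeholder path.
mat = bpy.data.materials.new("gltf_safe")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

bsdf = nodes["Principled BSDF"]            # created by use_nodes
tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("//texture.png")
links.new(tex.outputs['Color'], bsdf.inputs['Base Color'])
```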

Cube won't render BLENDER

I am using Blender to make a character, but every time I add a mesh cube, it won't render. I've made several attempts to get it to work, but it just won't. I'm sure I'm not the only one with this problem.
If you can see it in the 3D viewport but not in the render, check the list in the Outliner where you have all your objects and see if the little camera icon next to it is enabled.
If you can't see it in the 3D viewport but its origin point is there, try scaling it up with S.
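For reference, the Outliner's camera icon corresponds to hide_render in Python. A quick sketch (the object name "Cube" is hypothetical, and the property names are the 2.8+ ones; older versions used obj.hide for the viewport):

```python
import bpy

obj = bpy.data.objects["Cube"]   # hypothetical object name

# The camera icon in the Outliner maps to hide_render
print("renders:", not obj.hide_render)

obj.hide_render = False          # make it render again
obj.hide_viewport = False        # make it visible in the viewport (2.8+)
```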

What is the most efficient way to control rotation?

Since a large number of models are rotating at random, what is the most efficient way to control the rotation? Is it to attach a script that rotates the model in Update() to each one?
PS: We ran into some problems using the Unity3D engine, version 3.5.0, with the Ellipsoid Particle Emitter from the Legacy Particles system, targeting iOS.
Creating a prefab with the model and script attached is the most common, and probably the most efficient, way to populate your scene with many similar objects.
You can use Update() to control the rotation, but you should use FixedUpdate() for anything involving physics.
You could also have a script - in another game object that serves only as a container for it - that instantiates those asset objects instead of dragging and dropping each one, if the number of objects is big enough to justify the extra work.
To explain:
Are the random models the same in texture and other attributes?
If yes, then instantiating prefabs will help: attach the rotation script to one prefab and every instance will use it, and you can make the rotation rate a public field so each instance can be randomized.
It will all depend on your code logic afterwards.

How to directly manipulate texels in OpenGL ES?

I want to use OpenGL ES to scale and display an image on the screen. The image is going to be updated about 20 times per second, so the idea was to paint directly into the texture. Scaling should be done by the graphics card, while the pixel format is guaranteed to be correct by my application, which needs to manipulate the image on a pixel-by-pixel basis. Due to the architecture of the application I would like to avoid calls like settexel(x,y,color) and instead write directly into memory.
Is it possible to directly access a texture in the (graphic card's?) memory and change it pixel-wise?
If not, is it possible to use something like settexel(x,y,color) to change a texture?
Thanks for any help!
OK, after asking some guys at my company, I found out that there is no clean way to access the graphics memory directly (solution 1) or to access main memory from within a shader (solution 2).
Thus, I will store the pixels in main memory and move the changed regions into graphics memory via glTexSubImage2D.
Thanks to everybody who helped me with this!
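That upload pattern looks roughly like the sketch below. It uses PyOpenGL on a desktop GL context for illustration; the same glTexImage2D/glTexSubImage2D calls exist in OpenGL ES. It assumes a current GL context and tightly packed RGBA8 pixel data.

```python
import numpy as np
from OpenGL.GL import (
    glGenTextures, glBindTexture, glTexImage2D, glTexSubImage2D,
    glTexParameteri, GL_TEXTURE_2D, GL_RGBA, GL_UNSIGNED_BYTE,
    GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER, GL_LINEAR,
)

W, H = 256, 256
pixels = np.zeros((H, W, 4), dtype=np.uint8)  # CPU-side copy of the image

# Allocate and upload the texture once
tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels)

def update_region(x, y, w, h):
    """Re-upload only the changed rectangle from main memory."""
    region = np.ascontiguousarray(pixels[y:y+h, x:x+w])
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, region)

# e.g. after manipulating a region of `pixels` on the CPU:
pixels[10:20, 10:20] = (255, 0, 0, 255)
update_region(10, 10, 10, 10)
```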

How to create JPEG image on iOS from scratch

I'm trying to create an Objective-C class for my iPad application which can convert a PowerPoint file to a JPEG file.
Accordingly, I have to dig into the pptx format to see how the file is structured and create an image, from scratch, in which I can say: this element goes there, this one here, this text there.
But I actually have no idea how to do this - whether the best way is to use an existing iOS framework or an additional library?
Thanks to everyone ;)
Bye
The fastest way to visualize elements is, to me, OpenGL ES. You can use the mobile GPU for rendering, and there is CIImage for managing images.
Take a look at Quartz 2D, the drawing engine used as the main workhorse for 2D graphics on iOS. It gives you all the primitives for drawing shapes, fills, text and other objects you need to render the presentation.
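To give a feel for the Quartz 2D drawing model, here is a rough sketch through the pyobjc bindings on macOS (chosen so the snippets on this page stay in one language); on iOS you would make the same CoreGraphics calls from Objective-C, typically inside a UIGraphicsImageContext. The sizes, colors, and file name are placeholders standing in for slide elements.

```python
from Quartz import (
    CGBitmapContextCreate, CGBitmapContextCreateImage,
    CGColorSpaceCreateDeviceRGB, CGContextFillRect,
    CGContextSetRGBFillColor, CGRectMake,
    CGImageDestinationCreateWithURL, CGImageDestinationAddImage,
    CGImageDestinationFinalize, kCGImageAlphaPremultipliedLast,
)
from CoreFoundation import CFURLCreateWithFileSystemPath, kCFURLPOSIXPathStyle

W, H = 1024, 768
ctx = CGBitmapContextCreate(None, W, H, 8, 0,
                            CGColorSpaceCreateDeviceRGB(),
                            kCGImageAlphaPremultipliedLast)

# White background, then a red rectangle standing in for a slide element
CGContextSetRGBFillColor(ctx, 1, 1, 1, 1)
CGContextFillRect(ctx, CGRectMake(0, 0, W, H))
CGContextSetRGBFillColor(ctx, 0.8, 0.1, 0.1, 1)
CGContextFillRect(ctx, CGRectMake(100, 100, 300, 200))

# Snapshot the context and write it out as a JPEG
image = CGBitmapContextCreateImage(ctx)
url = CFURLCreateWithFileSystemPath(None, "slide.jpg",
                                    kCFURLPOSIXPathStyle, False)
dest = CGImageDestinationCreateWithURL(url, "public.jpeg", 1, None)
CGImageDestinationAddImage(dest, image, None)
CGImageDestinationFinalize(dest)
```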