Cinema 4D export to Illustrator with texture - adobe-illustrator

I need a little help with exporting from Cinema 4D to Illustrator. The idea is that I want to use BodyPaint in C4D and then export the model to Illustrator with my BodyPaint work applied (not just the grey model). Is that possible? I'm creating 3D models for my game, and these models must have precise dimensions. Cinema 4D measures in centimeters, meters, etc., not pixels. Any ideas?
Thanks in advance

I'm not quite sure what exactly you're trying to do.
You want to render a textured model inside C4D, then use it inside Illustrator as an asset/sprite?
That's entirely possible; you just need to render the model with an alpha channel.
And yes, Cinema 4D has a measurement system:
Go to Edit -> User Preferences -> Units
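If you'd rather script the alpha step, here is a minimal sketch using Cinema 4D's Python API; the RDATA_* constant names are assumptions taken from the render-settings container, so check them against your SDK version:

    import c4d

    def enable_alpha_render():
        # Grab the open document and its active render settings.
        doc = c4d.documents.GetActiveDocument()
        rd = doc.GetActiveRenderData()
        # Write an alpha channel with the render so the sprite composites
        # cleanly over any background in Illustrator.
        rd[c4d.RDATA_ALPHACHANNEL] = True   # assumed constant name
        rd[c4d.RDATA_STRAIGHTALPHA] = True  # assumed constant name (unpremultiplied alpha)
        c4d.EventAdd()                      # refresh the UI

    if __name__ == "__main__":
        enable_alpha_render()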

Related

Train a model with the same image in different orientations

Is it a good idea to train the model with the same images, but in different orientations? I have a small set of images for training; that's why I'm trying to cover all the mobile camera/gallery user scenarios.
For example, the image example.png with 3 copies: example90.png, example180.png and example270.png with their different rotations. And also with different background colors, shadows, etc.
By the way, my test is to identify the type of animal.
Is that a good idea?
If you use Core ML with the Vision framework (and you probably should), Vision will automatically rotate the image so that "up" is really up. In that case it doesn't matter how the user held their camera when they took the picture (assuming the picture still has the EXIF data that describes its orientation).
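If you still want to pre-rotate copies of your training images yourself (for a pipeline that does not normalise orientation), a minimal sketch with Pillow, using the file names from the question, could look like this:

    from PIL import Image

    def make_rotated_copies(path="example.png"):
        # Save 90/180/270-degree rotated copies next to the original image.
        img = Image.open(path)
        for angle in (90, 180, 270):
            rotated = img.rotate(angle, expand=True)  # expand keeps the whole frame
            rotated.save(path.replace(".png", f"{angle}.png"))

    if __name__ == "__main__":
        make_rotated_copies()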

Get the 2D image that was used to render a specific face in a 3D model

I have a 3D cube model that is rendered with multiple images, similar to [this] tutorial.
When I click on a point in the 3D model, I want to get the image that corresponds to the face containing the selected point.
Is this possible?
Thanks

How to use toon shader to convert 3D models to patent drawings

The USPTO requires patent drawings to be black-and-white line images.
I'm using Blender to make 3D models. At first I got this:
The problem is that it's grayscale with no black lines. There's an answer suggesting the use of a toon shader: Convert 3D models to patent diagrams
I checked "Edge" and set "Threshold" to the maximum of 255 in the "Render" tab, and got:
It's getting better, but more edges need to be drawn. I searched and found a tutorial http://www.minimaexpresion.es/?p=1070&lang=en , and then I got:
It's too complicated for me and I don't know how to use render layers. So I tried another tutorial http://download.blender.org/documentation/oldsite/oldsite.blender3d.org/80_Blender%20tutorial%20Toon%20Shading.html , which says I should assign different materials with different colors to different objects, so I tried that and got this:
That leaves only one approach to try: render layers. Is there any simple method to make this work? I only want this, and then to convert it to indexed colors with a black-and-white palette:
And "Freestyle" only has one option, for line thickness:
You were close in the second image. Instead of using the Edge postprocessor, look in the Render panel and check the box labelled "Freestyle".
Then in the Render Layers panel there will be a list of configurable options for Freestyle, including how thick you want the lines and the minimum angle required to render an edge.
You get the best results if you use mostly shadeless materials with simple textures (just solid colour).
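If you prefer to set this up from a script, here is a minimal sketch for Blender 2.7x-era Python (bpy); in newer Blender versions the Freestyle settings live on view layers instead, so the property paths may differ:

    import bpy

    scene = bpy.context.scene
    scene.render.use_freestyle = True      # same as ticking "Freestyle" in the Render panel
    scene.render.line_thickness = 2.0      # base line thickness in pixels

    # Per-render-layer Freestyle options; a lower crease angle draws more edges.
    fs = scene.render.layers.active.freestyle_settings
    fs.crease_angle = 2.0                  # in radians (about 115 degrees)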

Blender export / one-sided polygons

I'm really new to 3D modeling, Blender, etc.
I created a model with Blender (a room). Now I've exported it (as .obj) so that I can import it into CopperCube (a tool for creating 3D scenes).
The problem is that the walls are only visible from the outside. Take a look at the pictures:
Blender:
http://imageshack.us/photo/my-images/341/blenderg.png/
CopperCube:
http://imageshack.us/photo/my-images/829/coppercube.png/
I asked on the CopperCube forum and they said that the polygons are one-sided (or have flipped normals). Is there a way to change this? Sorry, but I'm a total beginner at this...
Here's the answer from the CopperCube forum:
I don't know Blender, but are there any options you can change when exporting? It looks like your model just has one-sided polygons, or the normals are flipped for some of them.
Make sure you have the normals checkbox checked in the OBJ export options (on the left side; it's off by default):
You will need to model your room with slim cubes instead of planes wherever walls should be visible from both sides.
You can display the normals in Blender in Edit Mode: in the Properties panel (N), scroll down to Mesh Display and choose the type of normals you want to see and their length.
To recalculate the normals or flip their direction, go to the Normals section of the Tool Shelf (T).
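If you want to do the same thing from a script, here is a minimal sketch using Blender's Python API (bpy, 2.7x/2.8x-era; "room.obj" is just a placeholder path):

    import bpy

    # Recalculate normals on the active mesh so they point outwards.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.normals_make_consistent(inside=False)  # "Recalculate Outside"
    bpy.ops.object.mode_set(mode='OBJECT')

    # Export an OBJ with normals included ("Write Normals" in the export options).
    bpy.ops.export_scene.obj(filepath="room.obj", use_normals=True)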

iOS - turning a 2D image into 3D

I was checking out this cool app called Morfo. According to their product description -
Use Morfo to quickly turn a photo of your friend's face into a
talking, dancing, crazy 3D character! Once captured, you can make your
friend say anything you want in a silly voice, rock out, wear makeup,
sport a pair of huge green cat eyes, suddenly gain 300lbs, and more.
So if you take a normal 2D image of Steve Jobs and feed it to this app, it converts it into a 3D model of that image, and the user can interact with it.
My questions are as follows:
How are they doing this?
How is this possible on an iPad?
Isn't it computationally intensive to render and convert a 2D image into 3D?
Any pointers, or links to websites or Objective-C libraries that do this, would be very much appreciated.
UPDATE: the demo of this product shows how Morfo uses a template mechanism to do the conversion, i.e. after a 2D image is fed in, you need to set the boundaries of the face, where the eyes are located, and the size and length of the lips. Then it goes off and converts it into a 3D model. How is this part done? What frameworks or libraries might they be using?
This is a broad question, but I can point you in the right direction of how 3D rendering works. Trust me, this is a huge subject with decades of work behind it and too much to put here. I'm not sure how up to speed you are on 3D rendering techniques, so I'll give you a basic idea of texturing and point you to a good set of tutorials.
How are they doing this?
The idea is that in 3D rendering, models can be textured with a 2D image known as a texture map. You take a 2D image and wrap it around a 3D model, be it a simple primitive like a sphere or a cube, or something more advanced such as the classic teapot or the model of a human head, etc. A texture can come from anywhere: I have used the camera feed in the past to texture meshes with video from the camera stream, and I have used photos from the camera, which is how they're doing it. So this is how the face is rendered onto the 3D model.
Is this efficient?
On iOS and most mobile devices, 3D rendering uses hardware acceleration via OpenGL ES. In regard to your question, this is really fast, depending on how you implement your render code.
The mapping it uses (the scale/rotate template in the video), as mentioned by anticyclope, lets you make the texture fit the model and also place the eyes, which is part of their render code.
So if you want to pick this up, I recommend reading Jeff LaMarche's "OpenGL ES: From the Ground Up" tutorial series as a primer:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html
Beyond that, I have read about four books on OpenGL ES, covering general design and platform specifics. I recommend this book:
http://www.amazon.co.uk/iPhone-Programming-Developing-Graphical-Applications/dp/0596804822/ref=sr_1_1?ie=UTF8&qid=1331114559&sr=8-1
In my opinion, this is how they're doing it (just my thoughts; I haven't seen the application in real life).
They have a 3D model of a human head. When you click on certain points on the 2D image, they adjust the corresponding points in the 3D model, so that it represents the specific face's features like the distance between the eyes, the width of the lips, and so on. Next, the texture from the 2D image is applied to the 3D model using those control points, so we end up with a textured 3D model of a human head. Given that our perception is able to reconstruct a 3D shape from 2D images (say, we look at a 2D photo and still imagine a 3D person), there's no need to reconstruct the 3D shape accurately; the texture will do the work.
There is a step in rendering 3D images called UV mapping: it takes the 3D model, defines a set of seam edges along which the surface is unwrapped, and this produces a 2D layout that is used to apply textures to the model.
Now, if you notice in Morfo, you define the edges of the head, eyes, mouth and nose. With this information, Morfo knows how to place its texture onto the model it has defined.
The process of loading a texture onto a model is not very complex, and it can be done on any device that supports a technology such as OpenGL.
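To make the UV idea concrete, here is a tiny illustrative sketch in plain Python with Pillow (the file name and UV values are made up); sampling the texture at a vertex's (u, v) coordinate is essentially what the renderer does for every point on the surface:

    from PIL import Image

    def sample_texture(texture, u, v):
        # Map a (u, v) coordinate in [0, 1] to a pixel in the texture image.
        w, h = texture.size
        x = min(int(u * (w - 1)), w - 1)           # u runs along the horizontal axis
        y = min(int((1.0 - v) * (h - 1)), h - 1)   # v is conventionally flipped vertically
        return texture.getpixel((x, y))

    texture = Image.open("face_photo.png").convert("RGB")  # hypothetical photo
    print(sample_texture(texture, 0.5, 0.75))              # colour near the upper middle of the photo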
Isn't it computationally intensive to render and convert 2D image into 3D?
Apple is sinking billions of dollars into developing custom chipsets, and recent models have impressive performance, considering the battery life and low operating temperature (no fans).