I have a scene and a character in Cocos3D, but the background image of the scene appears much brighter than the character. There is a CC3Light in the scene in front of the character, here's how it looks with different settings:
The light in the first screenshot (isDirectionalOnly = YES): The character lighting is very dark but the background image is normal.
The second screenshot (isDirectionalOnly = NO): The character is visible and looks like it has shadow behind, but the background of the scene is too bright.
Questions:
How can I control the objects' lighting? The ambientColor and shininess properties don't have much influence. I would like the character and the background image to have, more or less, the same brightness, to give a realistic impression that the character is in the forest.
Why is the character so dark when the light is "directional only"? This darkness isn't present in the character's texture alone.
Based on the example project you sent, the main issue is that your light was located at the origin of the scene (global location 0,0,0). A directional light uses the global position vector of the light to determine the direction the light is coming from. That caused the light to basically have a zero vector for its direction, and the default shaders don't like that.
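For example, simply giving the light a non-zero global location fixes this. A minimal sketch (the node name, the location values, and the assumption that this runs in your CC3Scene's initializeScene are illustrative):

CC3Light* sun = [CC3Light nodeWithName: @"Sun"];
sun.isDirectionalOnly = YES;                  // keep the light directional
sun.location = cc3v(-100.0, 200.0, 150.0);    // any non-zero vector; this defines the light direction
[self addChild: sun];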
The reason your forest was being washed out is that its material has ambient, diffuse, and specular colors that, when combined with the corresponding light colors, add up to more than 1.0 in each component. For future reference, you can see this in the logs during POD loading:
[rez] Creating CC3PODMaterial at index 0 from: SPODMaterial named ForestMat
ambient: (0.80, 0.80, 0.80), diffuse: (0.80, 0.80, 0.80), specular: (1.00, 1.00, 1.00), opacity: 1.00, shininess: 0.10
src RGB blend: ePODBlendFunc_ONE, src alpha blend: ePODBlendFunc_ONE
dest RGB blend: ePODBlendFunc_ZERO, dest alpha blend: ePODBlendFunc_ZERO
operation RGB blend: ePODBlendOp_ADD, operation alpha blend: ePODBlendOp_ADD
blend color: (0.00, 0.00, 0.00, 0.00), blend factor: (0.00, 0.00, 0.00, 0.00)
texture indices: (diffuse: 0, ambient: -1, specular color: -1, specular level: -1, bump: -1, emissive: -1, gloss: -1, opacity: -1, reflection: -1, refraction: -1)
flags: 0, effect none in file none
There are a couple of ways to handle this. The first option is to remove the specular color from the material, so that only ambient and diffuse lighting are used (which, in your model, add up to 1.0 in each component; again, see the log listing above).
The other option, which can be good for backgrounds if you don't want them to be affected by lighting, is to set the shouldUseLighting property to NO. This causes the material to use only its emissionColor as its color. If you set the value of that to white, then the material will display the texture in its normal state, regardless of lighting.
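In code, the two options look roughly like this (a hedged sketch: the "Forest" node name is illustrative, kCCC4FBlack/kCCC4FWhite are the Cocos3D color constants, and you would pick one option or the other):

CC3MeshNode* forest = (CC3MeshNode*)[self getNodeNamed: @"Forest"];

// Option 1: remove the specular contribution, leaving only ambient + diffuse lighting.
forest.material.specularColor = kCCC4FBlack;

// Option 2: ignore lighting entirely and display the texture as-is.
forest.shouldUseLighting = NO;
forest.emissionColor = kCCC4FWhite;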
Related
I have a scene = QGraphicsScene() and I added an ellipse via scene.addEllipse(100, 100, 10, 10, greenPen, greenBrush). The brush and the pen are set before. I add the QGraphicsScene right after to a QGraphicsView with MyGraphicsView.setScene(scene). All of this works except the position of the ellipse is always the center. The first 2 parameters in the addEllipse() function should be the coordinates (in this case 100, 100), but no matter what I put there, the ellipse is always in the center. Any ideas?
EDIT: now I added 3 ellipses like this (the one in the description deleted):
scene.addEllipse(10, 10, 10, 10, greenPen, greenBrush)
scene.addEllipse(-100, -10, 30, 30, bluePen, blueBrush)
scene.addEllipse(-100, -100, 60, 60, bluePen, blueBrush)
and my result is this:
So clearly the coordinates work somehow, but I still don't get how exactly. Do I have to set an origin to the scene?
And if I do this:
particleList = scene.items()
print(particleList[0].x())
print(particleList[1].x())
print(particleList[2].x())
I get:
0.0
0.0
0.0
At this point I'm totally confused and I'd really appreciate some help.
An important thing that must be always kept in mind is that the position of a QGraphicsItem does not reflect its "top left" coordinates.
In fact, you can have a QGraphicsRectItem that has a QRectF positioned at (100, 100) but its position at (50, 50). This means that the rectangle will be shown at (150, 150). The position of the shape is relative to the position of the item.
All add[Shape]() functions of QGraphicsScene have this important note in their documentation:
Note that the item's geometry is provided in item coordinates, and its position is initialized to (0, 0).
Even if you create a QGraphicsEllipseItem with coordinates (-100, -100), it will still be positioned at (0, 0): the values passed to addEllipse() (as with all the other functions) only describe the coordinates of the shape.
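A minimal sketch of the difference (assuming PyQt5, which the syntax in the question suggests):

from PyQt5.QtWidgets import QApplication, QGraphicsScene

app = QApplication([])
scene = QGraphicsScene()

item = scene.addEllipse(-100, -100, 60, 60)  # shape rectangle, in item coordinates
print(item.x(), item.y())                    # 0.0 0.0 -> the item's position is still the default
item.setPos(50, 50)                          # now the shape is rendered at (-50, -50) in the scene
print(item.x(), item.y())                    # 50.0 50.0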
Then, when a QGraphicsScene is created, its sceneRect() is not explicitly set, and by default it corresponds to the bounding rectangle of all items. When the scene is added to a view, the view automatically positions the scene according to the alignment(), which defaults to Qt.AlignCenter:
If the whole scene is visible in the view, (i.e., there are no visible scroll bars,) the view's alignment will decide where the scene will be rendered in the view. For example, if the alignment is Qt::AlignCenter, which is default, the scene will be centered in the view, and if the alignment is (Qt::AlignLeft | Qt::AlignTop), the scene will be rendered in the top-left corner of the view.
This also means that if you have items at negative coordinates or with their shapes at negative coordinates, the view will still show the scene centered to the center of the bounding rect of all items.
So, you either set the scene's sceneRect or the view's sceneRect, depending on your needs. If the view's sceneRect is not set, it defaults to the scene's sceneRect.
If you want to display the items according to their position while also ensuring that negative coordinates are correctly "outside" the center, you must decide the size of the visible sceneRect and set it accordingly:
boundingRect = scene.itemsBoundingRect()
scene.setSceneRect(0, 0, boundingRect.right(), boundingRect.bottom())
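Alternatively, if the goal is simply to keep (0, 0) anchored at the view's top-left corner, the alignment quoted above can be changed instead (a small sketch, reusing the MyGraphicsView name from the question and assuming PyQt5):

from PyQt5.QtCore import Qt

MyGraphicsView.setAlignment(Qt.AlignLeft | Qt.AlignTop)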
I create an offscreen texture with an alpha component; the sample count in the offscreen renderPass is VK_SAMPLE_COUNT_1_BIT.
When I use this texture in a VK_SAMPLE_COUNT_4_BIT renderPass, I get a pattern of black dots wherever the alpha component is present.
I understand why, but I don't know how to solve this problem.
I thought I could use an offscreen renderPass with VK_SAMPLE_COUNT_4_BIT and another VK_SAMPLE_COUNT_1_BIT attachment to resolve into, but my texture would still be VK_SAMPLE_COUNT_1_BIT.
So I'm not sure that would solve the black-dot issue, and I'd rather seek advice before making big changes.
-------- edit 1 --------
The dots appear when I use these VkPipelineMultisampleStateCreateInfo settings:
.alphaToCoverageEnable = VK_TRUE;
.alphaToOneEnable = VK_TRUE;
If these two settings are set to VK_FALSE, I get a transparent texture but without any gradient: it's either fully transparent or fully opaque.
Here are my VkPipelineColorBlendAttachmentState settings for both the offscreen and the texturing renderPass:
.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
.dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_DST_ALPHA;
.colorBlendOp = VK_BLEND_OP_ADD;
.alphaBlendOp = VK_BLEND_OP_ADD;
When I look at the offscreen texture with a green "clear color", I can see the gradient of transparency, so I suppose the offscreen renderPass is correctly set.
-------- edit 2 --------
I modified the offscreen renderPass to use the same sample count as the onscreen (final) renderPass, by adding a resolve attachment with VK_SAMPLE_COUNT_1_BIT.
I still get dots if I set .alphaToCoverageEnable and .alphaToOneEnable to VK_TRUE.
When they are set to VK_FALSE, the full image is semi-transparent, even the opaque parts.
I want to draw tiled images and then transform them with the usual panning and zooming gestures. The problem that brings me here is that, whenever the scaling transformation has many decimal places, a thin line of pixels (1 or 2 wide) appears in the middle of the tiles. I managed to isolate the problem like this:
CGContextSaveGState(UIGraphicsGetCurrentContext());
CGContextSetFillColor(UIGraphicsGetCurrentContext(), CGColorGetComponents([UIColor redColor].CGColor));
CGContextFillRect(UIGraphicsGetCurrentContext(), rect);//rect from drawRect:
float scale = 0.7;
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(50, 50, 100, 100), testImage);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(150, 50, 100, 100), testImage);
CGContextRestoreGState(UIGraphicsGetCurrentContext());
With a 0.7 scale, the two images appear correctly tiled:
With a 0.777777 scale (changing the scale line to float scale = 0.777777;), the visual artifact appears:
Is there any way to avoid this problem? This happens with CGImage, CGLayer and primitive forms such as a rectangle. It also happens on Mac OS X.
Thanks for the help!
edit: Added that this also happens with a primitive form, like CGContextFillRect
edit2: It also happens on Mac OS X!
Quartz has a floating point coordinate system, so scaling may result in values that are not on pixel boundaries, resulting in visible antialiasing at the edges. If you don't want that, you have two options:
Adjust your scale factor so that all your scaled coordinates are integral. This may not always be possible, especially if you're drawing lots of things.
Disable anti-aliasing for your graphics context using CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), false);. This will result in crisp pixel boundaries, but anything but straight lines might not look very good.
When all is said and done, iOS is dealing with discrete pixels on integer boundaries. When your frames are scaled by 0.7, the 50 becomes 35, right on a pixel boundary. At 0.777777 it does not, so iOS adapts and moves/shrinks/blends as best it can.
You really have two choices. If you want to scale the context, then round the desired value up or down so that it results in integral scaled frame values (your code uses 50 as the value being scaled.)
Otherwise, don't scale the context; scale the content item by item, and use CGRectIntegral to round all dimensions up or down as needed.
EDIT: If my suspicion is right, there is yet another option for you. Let's say you want a scale factor of 0.777777 and a frame of (50, 50, 100, 100). Take the 50, multiply it by the scale, and round the result up or down to an integer. Then recompute the frame by dividing that integer by the scale; this gives a fractional value that, when scaled by 0.777777, lands back on an integer. Quartz is really good at figuring out that you mean an integral value, so small rounding errors are ignored. I'd bet anything this will work just fine for you.
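A small sketch of that idea (the numbers and variable names are just illustrative):

CGFloat scale = 0.777777;
CGFloat desired = 50.0;
CGFloat snapped = round(desired * scale);   // nearest whole device pixel after scaling
CGFloat adjusted = snapped / scale;         // fractional user-space value to pass to CGRectMake
// adjusted * scale is now (very nearly) an integer, so the edge lands on a pixel boundary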
Hi, I am working on an OBJ loader for use in iOS programming. I have managed to load the vertices and the faces, but I have an issue with the transparency of the faces.
For the vertex colours, for now I have simply made them vary from 0 to 1, so each vertex gradually changes from black to white. The problem is that the white vertices and faces appear over the black ones; the darker the vertices, the more covered they appear.
For an illustration of this see the video I posted here < http://youtu.be/86Sq_NP5jrI >
The model here consists of two cubes, one large cube with a smaller one attached to a corner.
How do you assign a color to a vertex? I assume that you have an RGBA render target, so you need to set up the color like this:
struct color
{
    u8 r, g, b, a;   // u8 = 8-bit unsigned byte (e.g. uint8_t)
};

color newColor;
newColor.a = 255;//opaque vertex, 0 - transparent
//other colors setup
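If you keep the 0 - 1 greyscale values from the question, they just need to be converted into this byte layout with an explicit opaque alpha (illustrative sketch; t stands for the per-vertex grey value):

color vertexColor;
vertexColor.r = vertexColor.g = vertexColor.b = (u8)(t * 255.0f);
vertexColor.a = 255;   // fully opaque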
I'm trying to draw a "conical"/"arcing" gradient (I don't know what the correct term for this would be; Photoshop calls it an "angle" gradient) using Objective-C (iOS), pretty much exactly like the image shown in the following thread.
After days of googling and searching the internet to no avail, I've decided to ask for help here.
A little background on what I'm trying to do. My objective is to create a custom UIView which is a circular progress bar, basically a ring, somewhat similar to the activity indicator seen in the TweetBot iPhone app (displayed when you drag to refresh, which can be seen in action here, around 17-18 seconds into the video, at the top of the iPhone screen). I want the progress indicator (the fill of the ring) to be a simple two-color gradient that can be set programmatically, and the view to be resizable.
Filling the ring shape with a gradient that "follows" the arc of the ring is where I'm stuck. The answers that I get from googling, reading Apple's Core Graphics documentation on gradients and searching on SO are either about radial gradients or linear/axial gradients, which is not what I'm trying to achieve.
The thread linked above suggests using pre-made images, but this isn't an option because the colors of the gradient should be settable, the view should be resizable and the fill of the progress bar isn't always 100% full obviously (which would be the state of the gradient as shown in the picture in the thread above).
The only solution that I've come up with is to draw the gradient "manually", so without using a CGGradientRef, clipping small slices of the gradient with single solid color fills within a circular path. I don't know exactly how well this will perform when the bar is being animated though, it shouldn't be that bad, but it might be a problem.
So my first question:
Is there an easier/different solution for drawing a conical/arcing gradient in Objective-C (iOS) than the one I've come up with?
Second question:
If I have to draw the gradient manually in my view using the solution I came up with, how can I determine or calculate (if this is even possible) the value (HEX or RGBA) of each color "slice" of the gradient that I'm trying to draw, as illustrated in the image below.
(Can't link image) gradient slice illustration
Looks to me like a job for a pixel shader. I remember seeing a Quartz Composer example that simulated a radar sweep, and that used a pixel shader to produce an effect like you're describing.
Edit:
Found it. This shader was written by Peter Graffignino:
kernel vec4 radarSweep(sampler image, __color color1, __color color2, float angle, vec4 rect)
{
    vec4 val = sample(image, samplerCoord(image));
    vec2 locCart = destCoord();
    float theta, r, frac, angleDist;

    locCart.x = (locCart.x - rect.z/2.0) / (rect.z/2.0);
    locCart.y = (locCart.y - rect.w/2.0) / (rect.w/2.0);
    // locCart is now normalized
    theta = degrees(atan(locCart.y, locCart.x));
    theta = (theta < 0.0) ? theta + 360.0 : theta;
    r = length(locCart);
    angleDist = theta - angle;
    angleDist = (angleDist < 0.0) ? angleDist + 360.0 : angleDist;
    frac = 1.0 - angleDist/360.0;

    // sum up 3 decaying phosphors with different time constants
    val = val*exp2(-frac/.005) + (val+.1)*exp2(-frac/.25)*color1 + val*exp2(-frac/.021)*color2;
    val = r > 1.0 ? vec4(0.0, 0.0, 0.0, 0.0) : val; // constrain to circle
    return val;
}
The thread linked above suggests using pre-made images, but this isn't an option because the colors of the gradient should be settable, the view should be resizable and the fill of the progress bar isn't always 100% full obviously (which would be the state of the gradient as shown in the picture in the thread above).
Not a problem!
Use the very black-to-white image from the other question (or a bigger version if you need one), in the following fashion:
1. Clip to whatever shape you want to draw the gradient in.
2. Fill with the color at the end of the gradient.
3. Use the black-to-white gradient image as a mask.
4. Fill with the color at the start of the gradient.
You can rotate the gradient by rotating the mask image.
This only supports the simplest case of a gradient with a color at each extreme end; it doesn't scale to three or more colors and doesn't support unusual gradient stop positioning.
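A rough Core Graphics sketch of those four steps (ringPath, maskImage, startColor, endColor and bounds are all placeholders for your own values):

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);

// 1. Clip to the shape the gradient should appear in.
CGContextAddPath(ctx, ringPath);
CGContextClip(ctx);

// 2. Fill with the color at the end of the gradient.
CGContextSetFillColorWithColor(ctx, endColor.CGColor);
CGContextFillRect(ctx, bounds);

// 3. Use the black-to-white angle-gradient image as a mask.
CGContextClipToMask(ctx, bounds, maskImage);

// 4. Fill with the color at the start of the gradient.
CGContextSetFillColorWithColor(ctx, startColor.CGColor);
CGContextFillRect(ctx, bounds);

CGContextRestoreGState(ctx);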
FYI: here's also a good tutorial for creating a circular progress bar using Quartz drawing.
http://www.turnedondigital.com/blog/quartz-tutorial-how-to-draw-in-quartz/