In Blender I'm trying to use hair particles and I'd like to increase their size.
There are many options in Blender 2.8 where size is explicitly mentioned, but none of them actually modifies the size of my hair.
I'd just like to increase the radius of the hair, nothing more... I really couldn't find anything online after a lot of searching.
You can set the initial hair length in the particle tab. In Particle Edit mode you can grow/shrink hair as well.
The problem is completely solved by sambler's answer. You can change whatever you want about the hair size, but you won't see any of it (except the hair length). What you have to do is instruct your render engine, as he suggested:
Select Strip and enjoy modifying the size (this is done in Eevee; in Cycles the procedure is similar, as sambler said).
I started to write a small engine to render a 2D isometric map. A friend of mine made a small, basic image of a train station to use as example art for my engine. I tried to import the .png into Tiled and create a tileset from it, to then use the information for the rendering of that house.
When I import the image, Tiled cuts off the edges of the picture (see attachment "tiled .png import to tileset") on the right and bottom side. I looked into the menus and tried to find information about it, but I couldn't find any helpful advice on why it happens.
Another thing I find curious is the information within the .tsx file:
<?xml version="1.0" encoding="UTF-8"?>
<tileset version="1.2" tiledversion="1.2.1" name="bAHNHOF" tilewidth="30"
tileheight="30" tilecount="195" columns="13">
<image source="bAHNHOF.png" width="401" height="468"/>
</tileset>
Shouldn't columns (13) multiplied by tile width (30) equal the width of the imported image (i.e. 401)? It is only 390 though, so roughly 11 pixels less than the original width.
I probably made a mistake somewhere or am confusing something. Maybe someone can help me?
Thanks in advance :)
Seems like whatever editor you are using wants "whole tile" sizes. This is not uncommon. Increase the size of your base image so that the X and Y dimensions align to tile-size boundaries to prevent this. 30 is also a very unusual tile size; I'd expect a power of 2 like 32 or 16.
In short, your importer is culling tiles that are not full size. I'd expect it to display a warning about the image size before doing this, but who knows, as you didn't state which programs you're using.
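To put numbers on it: 401 / 30 gives 13 full columns with 11 px left over on the right, and 468 / 30 gives 15 full rows with 18 px left over on the bottom, which is exactly the strip being cut off. 13 × 15 = 195 also matches the tilecount in your .tsx.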
When this goes onto whatever platform you are targeting, a power-of-2 tile size will also help in terms of efficiency, so consider making that change sooner rather than later.
Finally, tiling is often done to save memory. If, when you divide up your image into tiles (tile it), you can create identical tiles, the computer can use that knowledge to lessen the amount of memory that is needed.
I'm having a terrible time trying to figure out what's going on with my baked lighting. It appears that only Realtime lights affect my model. I've attached two images to demonstrate the problem. I have several point lights in the interior of my model. If I set them to Realtime, everything looks great. However, if I set them to Baked and change the GI accordingly, they don't seem to interact with the model at all. Oddly enough, the Directional Light on the exterior (you can see it poking through the hallway door) seems to display fine when set to Baked.
The model is generated in Blender and I do have the "Generate Lightmap UVs" import option selected. I've tried just about every combination of settings I can think of.
It turns out the interior lights were just a few pixels above the surface of my ceiling cube, causing the light to never reach the interior of the room :/
So I have my game, made with SpriteKit and Obj-C. I want to know a couple things.
1) What is the best way to make scroll-views in SpriteKit?
2) How do I get this special kind of scroll-view to work?
The kind of scroll-view I'd like to use is one that, without prior knowledge, seems like it could be pretty complicated. You're scrolling through the objects in it, and when they get close to the center of the screen, they get larger. When they're being scrolled away from the center of the screen, they get smaller and smaller until, when their limit is met, they stop minimizing. That limitation goes for getting bigger when getting closer to the center of the screen, too.
Also, I should probably note that I have tried a few different solutions for cheap remakes of scroll views, like merely adding the objects to an SKNode and moving the SKNode's position relative to the finger's movement... but that is not what I want. If there is no real way to add a scroll-view to my game, this is what I'm asking: will I simply have to use some sort of formula? Make the images bigger when they get closer to a certain spot, and maybe run that formula each time -touchesMoved is called? If so, what sort of formula would that be? Some math equation subtracting the node's position from the center of the screen and sizing it accordingly? Something like that? If that's the case, will you please give me a smart math formula to do that, and give it to me in code (possibly a full-out function)?
If ALL else fails, and there is no good way to do this, what would some other way be?
It is possible to use UIScrollViews with your SpriteKit scenes, but there's a bit of a workaround involved. My recommendation is to take a look at this GitHub project; it is what I based my UIScrollView on in my own projects. From the looks of it, most of the stuff you'd want has actually been converted to Swift now, rather than the Objective-C it was in when I first looked at the project, so I don't know how that'll fare with you.
The project linked above would result in your SKScene being larger than the screen (I assume that is why it would need to be scrolled), so determining what is and is not close to the center of the scene won't be difficult. One thing you can do is use the update loop in SpriteKit to constantly update the size of sprites (perhaps just those on-screen) based on their distance from a fixed, known center point. For instance, if you have a screen of width and height 10, then the midpoint would be x,y = 5,5. You could then say that size = 1.0 - (2 * distance_from_midpoint). If you are at the midpoint, the size will be 1.0 (1.0 - (2 * 0)); the farther away you get, the smaller your scale will be. This is a crude example that does not account for a fixed max or min size, so you will need to work with it.
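To make that concrete, here is a minimal Swift sketch of the idea (the scrollNode container name, the 0.2 minimum scale, and using the scene's own size for the midpoint are my assumptions, not something taken from the linked project):
import SpriteKit

class ScrollScene: SKScene {
    // Assumed container node holding the scrollable sprites.
    let scrollNode = SKNode()

    override func update(_ currentTime: TimeInterval) {
        let midX = size.width / 2
        for node in scrollNode.children {
            // Normalised horizontal distance from the midpoint: 0 at the centre, 0.5 at the edge.
            let distance = abs(node.position.x - midX) / size.width
            // Crude scale: 1.0 at the centre, shrinking with distance, clamped to a minimum of 0.2.
            node.setScale(max(0.2, 1.0 - 2.0 * distance))
        }
    }
}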
Good luck with your project.
Edit:
Alright, I'll go a bit out of my way here and help you out with the equation, although mine still isn't perfect.
Now, this doesn't really give you a minimum scale, but it will give you a maximum one (Basically at the midpoint). This equation here does have some flaws though. For one, you might use this to find the x and y scale of your objects based on their distance from a midpoint. However, you don't really want two different components to your scale. What if your Sprite is right next to the x midpoint, and the x_scale spits out 0.95? Well, that's almost full-sized. But if it is far away from the midpoint on the y axis, and it gives you a y scale of, say 0.20, then you have a problem.
To solve that, I just take the magnitude, or hypotenuse, of the vector between the midpoint and the coordinate of the current sprite. That hypotenuse gives me a number that represents the true distance, which eliminates the problem of clashing scale values.
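In code that boils down to something like this (just a sketch; minScale, maxScale and the divisor of 200 are placeholder values I picked, not the ones used in the playground example):
import SpriteKit

let minScale: CGFloat = 0.2   // placeholder lower limit
let maxScale: CGFloat = 1.0   // placeholder upper limit, reached at the midpoint

// Scale a node by its true (hypotenuse) distance from the midpoint,
// so the x and y distances can no longer give clashing scales.
func applyScale(to node: SKNode, midpoint: CGPoint) {
    let dx = node.position.x - midpoint.x
    let dy = node.position.y - midpoint.y
    let distance = (dx * dx + dy * dy).squareRoot()
    node.setScale(max(minScale, maxScale - distance / 200.0))
}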
I've made an example of how to calculate this inside Google's Go Playground, so you can run the code and see what different scales you get based on what coordinate you plug in. Also, the equation used there is slightly modified; it's basically the same thing as above but without the "maxscale -" part at the front of the equation.
Hope this helps out!
I'm trying hard to nicely blur a red circle, but every time I get gradient levels of red and the image looks choppy.
Before:
http://i.imgur.com/6yzMhFI.png
After:
http://i.imgur.com/2dZl4ph.png
How can I achieve a smooth blur?
If you are referring to the visible circles that separate the gradation levels, that is called banding. Here are some ways to fix it:
Increase your document's bit level from 8-bit to 16-bit
This will increase the amount of colors your file can represent, creating more colors that can be used to represent the gradient, making it smoother in appearance.
In Photoshop navigate to Image>Mode>16-Bits/Channel
In GIMP 2.10 (or higher), navigate to Image>Precision>16-bit integer
Display or system settings might be unable to display enough colors
If changing the bit depth does not fix the issue then you might have a hardware or system settings issue.
If it's a hardware issue, your monitor might not have the capability to display enough colors to render the gradient smoothly.
If it's a system settings issue, you will need to go to your operating system's color depth setting, usually located under the system's display settings. It could say something like Millions of Colors, or True Color (32-bit).
The last settings-related possibility is a bad color profile set in your system or in your image editing software. That is beyond the scope of this answer; if you don't know how to color calibrate your monitor, it most likely isn't the cause and you can skip this.
If you have to have 8-bit
If you absolutely have to keep your document in an 8-bit color space, then you will have to use dithering or add some noise to your image to trick the viewer's brain into seeing a smooth gradient.
Noise or dithering tricks the viewer's brain into seeing a smoother gradient by drawing some of its focus to the imperfections of the noise/grain/dithering. This doesn't exactly answer your question, but it is about the only option you have if you want to keep your ultra-smooth gradient in 8-bit mode.
Good Luck!
I think you are applying the Gaussian Blur to the entire image. Try selecting the red circle and applying the Gaussian Blur filter to just that selection.
We have a system where people have a face shot taken with a DSLR camera. We need the people's images with a transparent background. What we're currently doing is taking the image into Photoshop, editing and cropping it, and removing the background with the Magic Eraser tool.
What I am looking for is a way to parse the image and automatically erase the semi-white background we have, along with the resizing and cropping. Is there some kind of library or code sample that does this without requiring manual intervention?
This is a really complex problem. Like the answer below suggested, you'll need to do a fuzzy match on each pixel and set it to be transparent, but you also need to check other nearby pixels to make sure they are not close in color. A white tag on a shirt, white eyelids, hair, pale skin reflecting the flash: all are candidates to be removed by any greedy fuzzy logic.
Think about the Magic Wand tool in Photoshop. How good is it at detecting the edges of the person in the picture? Yeah, and that's the top standard of image editing software with thousands of engineering hours behind it.
This is not a feasible request for a Q&A format, and this is one of those things that humans just do better than machine. BUT, that doesn't mean it's not possible, and who knows, you might be the one to do it. Just don't do it in VB.NET please :)
Some pseudo-code to get an idea of what you need to do:
Bitmap faceShot = Bitmap.FromFile(filepath)
foreach pixel in faceShot
    //the following line is where the magic happens, you can do any fuzzy match on the color that suits you
    //figure out your color range and do a fuzzy match percentage wise
    if (pixel between RGB(255,255,255) and RGB(250,235,215)) //white and antique white
        pixel.setAlpha = 0
    endif
end foreach
You could use this as a starting point for processing a single image:
http://www.java2s.com/Code/VB/2D/ProcessanImageinvertPixel.htm
Basically, if you have a constant background color (like the TV green-screen), it's just a matter of selecting pixels close to the color you are erasing and setting their Alpha level to 0 (transparent). Treating the RGB values like XYZ coordinates, you can do a 3d distance from your background color, and make everything within a certain threshold transparent.
As an improvement, you could also make everything within another threshold semi-transparent so the edges right around hair and stuff like that look softer and less harsh.
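As a rough sketch of that distance-plus-threshold idea (shown in Swift just to illustrate the math; the 0-255 scale and the threshold values are assumptions you would tune against your actual background colour, not anything from the linked sample):
import Foundation

// Distance between two colours treated as points in 3D RGB space (0-255 per channel).
func colorDistance(_ c1: (r: Double, g: Double, b: Double),
                   _ c2: (r: Double, g: Double, b: Double)) -> Double {
    let dr = c1.r - c2.r, dg = c1.g - c2.g, db = c1.b - c2.b
    return (dr * dr + dg * dg + db * db).squareRoot()
}

// Map the distance from the background colour to an alpha value: fully transparent
// inside hardThreshold, fading up to fully opaque at softThreshold (softer edges).
func alpha(forDistance d: Double, hardThreshold: Double = 30, softThreshold: Double = 60) -> Double {
    if d <= hardThreshold { return 0 }
    if d >= softThreshold { return 1 }
    return (d - hardThreshold) / (softThreshold - hardThreshold)
}

// A pixel close to the assumed antique-white background lands inside the hard
// threshold and gets alpha 0 (fully transparent).
let a = alpha(forDistance: colorDistance((r: 240, g: 228, b: 210), (r: 250, g: 235, b: 215)))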
Alternatively, you could probably do the same exact thing with good results in Photoshop, as it should support batch processing.
Edit: thinking about it some more, you may want to use a green-screen type background instead of the off-white one you described, as otherwise you may make the whites of people's eyes transparent. I would definitely try to batch it in Photoshop/Gimp/etc.