I'm trying to change the opacity of an offscreen graphics buffer in p5, and can't figure it out.
consider:
var pg = createGraphics(...);
var img = loadImage(...);
tint(255, 50);
image(img, 0, 0); // <----- this works
image(pg, 0, 0);  // <----- but this doesn't
working example here
tint(255, x) should leave the coloring unchanged and set the opacity to x, but it seems to have no effect. It works fine on images, though... what am I missing?
Update: It seems as though in P5 (though not in Processing), tint() does in fact work only on p5.Image objects. So, a workaround is to create a p5.Image from the offscreen graphics buffer using get() to grab pixels from the buffer (which returns an image). Confusingly, the reference article for get() also uses images, making it hard to understand what's actually happening.
An updated (working) example is here.
To reiterate, the reason this is worth doing at all is to render complex shapes (like text) only once to a buffer, then draw / manipulate that buffer as needed. This drastically reduces CPU load and speeds up the sketch.
(credit for figuring this out goes to Ian)
I'm trying to optimize a falling sand simulation, and I'm implementing optimizations that the Noita devs talked about in their GDC talk. At around 10:45 they talk about how they use dirty rects, and I've started trying to implement a similar system.
Currently, I am able to create a dirty rect that covers the particles that need updating. Every time a valid particle (one that is not air or a solid like a wall) is set inside a chunk, I call a function that updates the dirty rect, passing the placed particle's position as an argument. From there, I can easily calculate the new min/max of the rectangle from this position.
Here's a gif of that working.
and here's the code for updating the rect:
public void UpdateDirtyRect(int2 newPos)
{
    minX = Math.Min(minX, newPos.x);
    minY = Math.Min(minY, newPos.y);
    maxX = Math.Max(maxX, newPos.x);
    maxY = Math.Max(maxY, newPos.y);
    dirtyrect = .(.(minX, minY), .(maxX, maxY));
    // Inflate by two pixels. Not doing this will cause the rect to not change size as particles update.
    dirtyrect = dirtyrect.Inflate(2);
}
The problem, as can be seen in the gif, is that I currently have no way to shrink the dirty rect. I could do a few things, such as detecting when a particle on the boundary edge of the dirty rect is erased or replaced with an air/solid particle, but I'm unsure what to do from there.
Here's one approach that might work for you (a rough code sketch follows the list):
1. Keep the dirty rectangle computed during the previous frame.
2. Compute the dirty rectangle updated during the current frame only.
3. Combine these two rectangles into a single one that contains both of them.
4. Use the rectangle from step 3 to update the screen.
5. Replace the previous-frame rectangle with the one you computed in step 2, not the combined one from step 3; using the combined one would cause the same problem you're describing.
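A minimal sketch of those steps in C++, assuming a hypothetical Rect type, Chunk structure, and Redraw() call (this is illustration only, not your engine's code):

#include <algorithm>

// Hypothetical axis-aligned rectangle; substitute your own type.
struct Rect {
    int  minX, minY, maxX, maxY;
    bool valid;   // false = "nothing dirty"
};

// Step 3: smallest rectangle containing both inputs.
static Rect Union(const Rect& a, const Rect& b) {
    if (!a.valid) return b;
    if (!b.valid) return a;
    return { std::min(a.minX, b.minX), std::min(a.minY, b.minY),
             std::max(a.maxX, b.maxX), std::max(a.maxY, b.maxY), true };
}

struct Chunk {
    Rect previousFrameRect = { 0, 0, 0, 0, false };    // step 1

    void Update() {
        Rect currentFrameRect = { 0, 0, 0, 0, false }; // step 2: rebuilt from scratch each frame
        // ... run the particle simulation, growing currentFrameRect exactly
        //     the way UpdateDirtyRect() does today ...

        Rect drawRect = Union(previousFrameRect, currentFrameRect); // step 3
        Redraw(drawRect);                                           // step 4: repaint only this region

        previousFrameRect = currentFrameRect; // step 5: keep the per-frame rect, NOT drawRect
    }

    void Redraw(const Rect& /*r*/) { /* copy the dirty region to the screen */ }
};

The key point is step 5: because the stored rectangle is rebuilt every frame, regions that stop changing drop out of it after one frame, which is exactly the shrinking behavior you're after.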
So I'm still trying to figure out color spaces for render textures, and how to create images without color banding. Within my G-buffer I use VK_FORMAT_R8G8B8A8_UNORM with VK_IMAGE_TILING_OPTIMAL for my albedo texture, and VK_FORMAT_A2B10G10R10_UNORM_PACK32 with VK_IMAGE_TILING_OPTIMAL for my normal and emission render textures. For my brightness texture (which holds pixel values that are considered "bright" within a scene) and my glow render texture (the final texture for bloom effects to be added later onto the final scene), I use VK_FORMAT_R8G8B8A8_SRGB with VK_IMAGE_TILING_OPTIMAL (although looking at this, I should probably make my bright and final textures R16G16B16A16 float formats instead). What I got was definitely not what I had in mind:
Changing the tiling for my normal, emission, glow, and brightness textures to VK_IMAGE_TILING_LINEAR, however, got me nice results instead, but at the cost of performance:
The nice thing about it, though (and sorry about the weird border on the top left; that came from cropping in MS Paint), is that the image doesn't suffer from color banding, unlike when I instead use VK_FORMAT_R8G8B8A8_UNORM with VK_IMAGE_TILING_OPTIMAL for these textures:
Here you can see banding occurring on the top left of the helmet, as well as underneath it (where the black tubes are). Of course, I've heard that VK_IMAGE_TILING_LINEAR should be avoided, from this post.
In general, I'm having trouble figuring out the best way to avoid VK_IMAGE_TILING_LINEAR when using sRGB textures. I would still like to keep the nice crisp images that sRGB gives me, but I'm unsure how to solve this issue. The link might actually have the solution, but I'm not sure whether there's a way to apply it to my G-buffer.
I would also like to clarify that VK_IMAGE_TILING_OPTIMAL works fine on Nvidia GPUs (well, tested on a GTX 870M), which instead complain about VK_IMAGE_TILING_LINEAR with the sRGB format; Intel GPUs, however, work fine with VK_IMAGE_TILING_LINEAR and produce something like the first image at the top of this post when using VK_IMAGE_TILING_OPTIMAL.
The engine is custom-made; feel free to check it out at this link.
If you fancy some code, I use a function called SetUpRenderTextures() inside the Engine/Renderer/Private/Renderer.cpp file, around line 1396:
VkImageCreateInfo cImageInfo = { };
VkImageViewCreateInfo cViewInfo = { };
// TODO(): Need to make this more adaptable, as intel chips have trouble with srgb optimal tiling.
cImageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
cImageInfo.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
cImageInfo.imageType = VK_IMAGE_TYPE_2D;
cImageInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
cImageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
cImageInfo.mipLevels = 1;
cImageInfo.extent.depth = 1;
cImageInfo.arrayLayers = 1;
cImageInfo.extent.width = m_pWindow->Width();
cImageInfo.extent.height = m_pWindow->Height();
cImageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
cImageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
cImageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
cViewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
cViewInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
cViewInfo.image = nullptr; // No need to set the image, texture->Initialize() handles this for us.
cViewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
cViewInfo.subresourceRange = { };
cViewInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
cViewInfo.subresourceRange.baseArrayLayer = 0;
cViewInfo.subresourceRange.baseMipLevel = 0;
cViewInfo.subresourceRange.layerCount = 1;
cViewInfo.subresourceRange.levelCount = 1;
gbuffer_Albedo->Initialize(cImageInfo, cViewInfo);
gbuffer_Emission->Initialize(cImageInfo, cViewInfo);
cImageInfo.format = VK_FORMAT_R8G8B8A8_SRGB;
cViewInfo.format = VK_FORMAT_R8G8B8A8_SRGB;
cImageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
GlowTarget->Initialize(cImageInfo, cViewInfo);
// It's probably best that these be 64bit float formats as well...
pbr_Bright->Initialize(cImageInfo, cViewInfo);
pbr_Final->Initialize(cImageInfo, cViewInfo);
cImageInfo.format = VK_FORMAT_A2B10G10R10_UNORM_PACK32;
cImageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
cViewInfo.format = VK_FORMAT_A2B10G10R10_UNORM_PACK32;
gbuffer_Normal->Initialize(cImageInfo, cViewInfo);
gbuffer_Position->Initialize(cImageInfo, cViewInfo);
So yes, the rundown: how do I avoid using linear image tiling for sRGB textures? Is this a hardware-specific thing, and is support for optimal tiling mandatory? Also, I apologize for any ignorance I have on this subject.
Thank you :3
Support for this combination is mandatory, so the corruption in your first image is either an application or a driver bug.
So VK_FORMAT_R8G8B8A8_SRGB is working with VK_IMAGE_TILING_OPTIMAL on your Nvidia GPU but not on your Intel GPU? But VK_FORMAT_R8G8B8A8_SRGB does work with VK_IMAGE_TILING_LINEAR on the Intel GPU?
If so, that sounds like you've got some missing or incorrect image layout transitions. Intel is more sensitive to getting those right than Nvidia is. Do the validation layers complain about anything? You need to make sure the image is VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL when rendering to it, and VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when sampling from it.
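For illustration only (this is not code from the engine in question; cmd and gbufferImage are placeholder names), the transition from rendering to sampling is typically recorded with a pipeline barrier along these lines:

// Assumes #include <vulkan/vulkan.h>, a recording VkCommandBuffer cmd,
// and the VkImage gbufferImage used as the G-buffer color attachment.
VkImageMemoryBarrier barrier = { };
barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image = gbufferImage;
barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
barrier.subresourceRange.baseMipLevel = 0;
barrier.subresourceRange.levelCount = 1;
barrier.subresourceRange.baseArrayLayer = 0;
barrier.subresourceRange.layerCount = 1;
barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;

vkCmdPipelineBarrier(cmd,
                     VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0, 0, nullptr, 0, nullptr, 1, &barrier);

If the attachment is written inside a render pass, the same transition can instead be expressed through the attachment description's finalLayout plus a subpass dependency, which avoids the explicit barrier.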
A little bit new to EmguCV here
Just want to ask a quick question about the CopyMakeBorder function.
Are the pixel values of the destination image accessible?
I want to process the destination image further, but when I tried to access pixel values from it, I only got 0 (even at locations that are not supposed to be 0, for example the central pixel). When I used Imshow, it showed that the image borders are perfectly processed, but the problem persists when I try to access the pixel values: I get 0 no matter which pixel location I read.
This is not a problem when I use destination images from other EmguCV functions, such as the Threshold function.
Can anyone clarify? Thanks a lot!!
I am using VB.NET. Here is the code (I am away from my workstation for the weekend, so I'm writing this from memory; some capitalization here and there is probably mistyped, but I hope you get the gist).
First I initialize the source image and the destination image:
Dim img As Image(Of Gray, Byte) = New Image(Of Gray, Byte)("myimage.jpg")
Dim img1 As Image(Of Gray, Byte) = New Image(Of Gray, Byte)(img.Size)
Then the CopyMakeBorder call: extend 1 pixel to the top, bottom, left and right, border type Constant with value 0:
CvInvoke.CopyMakeBorder(img, img1, 1, 1, 1, 1, BorderType.Constant, New MCvScalar(0))
Then I access pixel values from the destination image, for example the pixel at x = 100, y = 100, channel 0 (as it is a grayscale image):
Console.WriteLine(img1.Data(100, 100, 0))
This writes 0 to the debug output, and no matter where I take the pixel value from, it is still 0, even though when I show the image that specific pixel should not be 0 (it is not black):
CvInvoke.Imshow("test", img1)
You are trying to access the data through Image.Data; however, this doesn't include the added border(s), just the original bitmap.
The added border is in the Mat property, however. Through it, the individual pixels can be accessed:
' returns data from original bitmap
Console.WriteLine(img1.Data(100, 100, 0))
' returns data from modified bitmap
Console.WriteLine(img1.Mat.GetData(100, 100)(0))
I have a webcam directly over a chicken nest. This camera takes images and uploads them to a folder on a server. I'd like to detect from these images whether an egg has been laid.
I'm thinking the best method would be to compare the contrast, as the egg will be much more reflective than the straw nest. (The camera has infrared, so the image is partly greyscale.)
I'd like to do this in .NET if possible.
Try resizing your image to a smaller size, maybe 10 x 10 pixels. This averages out any small distracting details.
' Needs Imports System.Drawing and Imports System.Drawing.Drawing2D at the top of the file.
Const N As Integer = 10
Dim newImage As New Bitmap(N, N)
Dim fromCamera As Image = Nothing ' Get the image from the camera here
Using gr As Graphics = Graphics.FromImage(newImage)
gr.SmoothingMode = SmoothingMode.HighSpeed
gr.InterpolationMode = InterpolationMode.Bilinear
gr.PixelOffsetMode = PixelOffsetMode.HighSpeed
gr.DrawImage(fromCamera, New Rectangle(0, 0, N, N))
End Using
Note: you do not need high quality, but you do need good averaging. Maybe you will have to test different quality settings.
Since a pixel now covers a large area of your original image, a bright pixel is very likely part of an egg. It might also be a good idea to compare the brightness of the brightest pixel to the average image brightness, since that reduces problems due to global illumination changes.
EDIT (in response to comment):
Your code is well structured and makes sense. Here are some thoughts:
Calculate the gray value from the color value with:
Dim grayValue = c.R * 0.3 + c.G * 0.59 + c.B * 0.11
... instead of comparing the three color components separately. The different weights reflect the fact that we perceive green more strongly than red, and red more strongly than blue. Again, we do not want a beautiful thumbnail; we want good contrast. Therefore, you might want to do some experiments here as well. Maybe it is sufficient to use only the red component; depending on the lighting conditions, one color component might yield better contrast than the others. I would recommend making the gray conversion part of the thumbnail creation and writing the thumbnails to a file or to the screen. This would allow you to play with the different settings (size of the thumbnail, resizing parameters, color-to-gray conversion, etc.) and compare the (intermediate) results visually. Creating a bitmap (bmp) with the (end) result is a very good idea.
The Using statement does the Dispose() for you. It does so even if an exception occurs before End Using (there is a hidden Try/Finally involved).
Many of you may know the classic Windows 3D Pipes screensaver. Does anyone have any idea how it was programmed, in 3D or 2D? No real code necessary; just an overall explanation of the algorithm would be great.
This is a screenshot from Chrome. In Chrome it is programmed very simply:
file_util::AppendToPath(&path, L"sspipes.scr");
CreateProcess(NULL, ...
In other words: open the Windows Pipes screensaver file and run it.
The source of xscreensaver's version is xscreensaver-4.16/hacks/glx/pipes.c in xscreensaver-4.16.tar.bz2 (or another version of the same package). An online version of the file is here.
UPDATE: How it works: it uses OpenGL to make things look nice.
Each tube addition is drawn as a cylinder plus a sphere:
glBegin(GL_QUAD_STRIP);
for (an = 0.0; an <= 2.0 * M_PI; an += M_PI / 12.0) {
    glNormal3f((COSan_3 = cos(an) / 3.0), (SINan_3 = sin(an) / 3.0), 0.0);
    glVertex3f(COSan_3, SINan_3, one_third);
    glVertex3f(COSan_3, SINan_3, -one_third);
}
glEnd();
Rotation in space is done by glRotatef before glBegin. All rotations are 90 degrees only.
The end sphere is a GLU object:
quadObj = gluNewQuadric();
gluQuadricDrawStyle(quadObj, (GLenum) GLU_FILL);
gluSphere(quadObj, radius, 16, 16);
gluDeleteQuadric(quadObj);
For bends, a fair amount of code is used for the drawing (the function myElbow).
To keep pipes from intersecting, a 3D array of flags is used ("this point of space contains a pipe"). All pipes have integer coordinates and run parallel to the axes. Perspective correction comes from the 3D library (OpenGL/Direct3D).
The main function with the logic is draw_pipes.
It draws a sphere, selects a random direction, and the pipe run begins. At every step there is a random chance (about 20% probability) of bending. Neighboring cells are also checked at every step to prevent collisions. If there is no free space to continue a pipe, or the pipe is long enough (which may be random too), it stops and a new pipe begins from a random point.
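Not the actual pipes.c code, but a rough C++ sketch of that grow-a-pipe loop (the grid size, bend probability, and all names here are assumptions for illustration):

#include <cstdlib>

const int N = 16;                 // size of the occupancy grid (assumed)
bool occupied[N][N][N] = {};      // "this point of space contains a pipe"

struct Vec3 { int x, y, z; };
const Vec3 kDirections[6] = {
    { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 }, { 0, -1, 0 }, { 0, 0, 1 }, { 0, 0, -1 }
};

bool InBounds(Vec3 p) {
    return p.x >= 0 && p.x < N && p.y >= 0 && p.y < N && p.z >= 0 && p.z < N;
}

// Grow one pipe until it gets stuck or reaches maxLength.
void RunPipe(Vec3 pos, int maxLength) {
    occupied[pos.x][pos.y][pos.z] = true;          // the starting sphere goes here
    Vec3 dir = kDirections[rand() % 6];

    for (int step = 0; step < maxLength; ++step) {
        if (rand() % 100 < 20)                     // ~20% chance of bending
            dir = kDirections[rand() % 6];

        Vec3 next = { pos.x + dir.x, pos.y + dir.y, pos.z + dir.z };

        // If that cell is blocked, try the other axis-aligned directions.
        int tries = 0;
        while ((!InBounds(next) || occupied[next.x][next.y][next.z]) && tries < 6) {
            dir = kDirections[tries++];
            next = { pos.x + dir.x, pos.y + dir.y, pos.z + dir.z };
        }
        if (!InBounds(next) || occupied[next.x][next.y][next.z])
            return;                                // stuck: the caller starts a new pipe elsewhere

        occupied[next.x][next.y][next.z] = true;
        // Draw a straight cylinder from pos to next here, or an elbow if dir changed.
        pos = next;
    }
}

When RunPipe() returns, the caller simply picks a fresh random empty cell and starts the next pipe, which is what produces the familiar tangle.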
It was done using OpenGL (back when MS was excited about OpenGL on Windows). While I can't speak authoritatively about the rest (I'm not sure I've ever seen the source code), it looks like a pretty straightforward matter of choosing a direction (up, down, left, right, forward, backward) and a distance, with some bounds to keep it all in a cube.
The pipe has some particular diameter, and you can select a bitmap to be textured onto the pipe if you want. If you don't use a texture, it can/will choose colors. It's old enough, I believe it's written to use only the 20 (16?) colors defined by Windows as the basic palette normally supported on almost any graphics adapter -- but it's been quite a while since mainstream hardware was nearly that restricted.