Does adding more shapes slow the FPS in CreateJS?

I am adding 37 shapes, and is this the right way to add their mousedown and mouseover event handlers? With these 37 shapes performance is not noticeably slower, but the slowdown grows once I add another 100 shapes.
for (var i = 1; i < 37; i++)
{
    Independent_Bet_Shape = new createjs.Shape();
    Independent_Bet_Shape.graphics.beginFill("#FFFFFF").drawRect(0, 0, Independent_Bet_Width, Independent_Bet_Height);
    Independent_Bet_Shape.alpha = 0.8;
    Independent_Bet_Shape.cache(0, 0, Independent_Bet_Width, Independent_Bet_Height);
    Independent_Bet_Container.name = "Bet_Container" + i;
    Independent_Bet_Container.addChild(Independent_Bet_Shape);
    s_oStage.addChild(Independent_Bet_Container);
    if (i % 3 == 0) // Splitting them in column for every 3 bets from bottom to top.
    {
        Current_Bet_X = Current_Bet_X + Independent_Bet_Width + 0.1;
        Current_Bet_Y = Start_Bet_Y;
        for (var j = 0; j < 3; j++)
        {
            Independent_Column_Bets_Array[columnCount][j] = j + (last_J + 1);
        }
        columnCount += 1;
        last_J = j * columnCount;
    }
}
for (var i = 1; i < 37; i++)
{
    Selection_Bet_Container = new createjs.Container();
    Selection_Bet_Container.x = 5; // -700
    Selection_Bet_Container.y = 5; // -210
    Selection_Bets_Array.push(Selection_Bet_Container);
    Independent_Bets_Array[i].addChild(Selection_Bet_Container);
    Selection_Bet_Container.cache(0, 0, Independent_Bet_Width, Independent_Bet_Height);
    Independent_Chips_Array.push(Selection_Bet_Container);
}
for (var i = 1; i < 37; i++)
{
    Independent_Bets_Array[i].on("mousedown", Independent_TableBetFun);
    Independent_Bets_Array[i].cursor = 'pointer';
}

Here is a quick overview:
The stage has to redraw everything every single frame.
The more content you draw each frame, the slower it will go.
Graphics and text are not hardware accelerated. This means you are limited to CPU rendering, which is going to be considerably slower than drawing bitmaps.
There are lots of things you can do to get better performance:
Cache your shapes. By caching them, they can be rendered using the GPU, which is much faster. Lots of caches are not ideal, but they will still run faster than piling up Graphics. See the cache demo.
Group things that don't change, and cache that instead. If you are adding things to the stage that don't change, or that change as a group, then caching that will offer huge performance benefits. We use this approach a lot with drawing demos, which draw only new content to the stage, and don't clear the stage.
Check out these examples:
Using updateCache to "blit" new content to a cache
A drawing example using stage.autoClear=false
Dynamic drawing as a simple filter.
Typically, particle systems and other high-performance content use bitmaps, or even SpriteSheets, which let you show a bunch of different elements with one image and give you huge performance advantages.
If you are able to move to Bitmap or cached content, check out StageGL, which supports most things (some stuff like masks won't work - because they use vectors).
Cheers,

Related

Why am I getting such a large alignment memory requirement for an image?

I create an image in Vulkan and I get an alignment requirement in the memory requirements of 131072. This seems like an enormous alignment and I'm not sure why anything bigger than 128 or 256 may be needed. It's so big that my memory allocation algorithm can't even handle it, and will never be able to practically handle it given that each allocation of this strict an alignment will waste too much space. What's the deal behind this? Here is how I create the image:
VkImageCreateInfo image_create_info{};
image_create_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
image_create_info.imageType = VK_IMAGE_TYPE_2D;
image_create_info.pNext = nullptr;
image_create_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
image_create_info.samples = VkSampleCountFlagBits::VK_SAMPLE_COUNT_1_BIT;
image_create_info.queueFamilyIndexCount = 0;
image_create_info.extent.width = 1716;
image_create_info.extent.height = 1731;
image_create_info.extent.depth = 1;
image_create_info.usage = VkImageUsageFlagBits::VK_IMAGE_USAGE_SAMPLED_BIT;
image_create_info.tiling = VkImageTiling::VK_IMAGE_TILING_OPTIMAL;
image_create_info.initialLayout = VkImageLayout::VK_IMAGE_LAYOUT_UNDEFINED;
image_create_info.flags = 0;
image_create_info.mipLevels = 1;
image_create_info.format = VK_FORMAT_R8G8B8A8_UINT;
image_create_info.arrayLayers = 1;
VkImage vk_image;
VkResult result = vkCreateImage((VkDevice)VK::logicalDevice, &image_create_info, nullptr, &vk_image);
VkMemoryRequirements requirements;
vkGetImageMemoryRequirements(VK::logicalDevice, vk_image, &requirements);
Another interesting thing about the requirements returned by the function is that the memory size requirement for format VK_FORMAT_R8G8B8A8_UINT is about 12 MB, which makes sense, but with a format of VK_FORMAT_R8G8B8_UINT (so without the alpha channel) it gives a size requirement of only 3 MB, about a quarter of the size. Have I run into some sort of bug?
I know the dimensions of the image I created aren't power of two, but surely this shouldn't lead to such strange behaviour, should it?
It's so big that my memory allocation algorithm can't even handle it and will never be able to practically handle it given that each allocation of this strict an alignment will waste too much space.
Then fix that.
Implementations are allowed to require all kinds of alignments, especially for optimally-tiled images. 128KiB alignment is hardly unreasonable for images. So your sub-allocator needs to be able to account for this.
As for "waste too much space," perhaps you should take another look at those numbers. The example texture must take up at least 11'881'584 bytes. 128KiB is slightly more than 1% of that storage. That's not a lot of waste.

UICollectionViewLayout with dynamic heights - but NOT using a flow layout

Say you have a UICollectionView with a normal custom UICollectionViewLayout.
So that is >>> NOT <<< a flow layout - it's a normal custom layout.
Custom layouts are trivial: in the prepare call you simply walk down the data and lay out each rectangle. So say it's a vertically scrolling collection...
override func prepare() {
    cache = []
    var y: CGFloat = 0
    let k = collectionView?.numberOfItems(inSection: 0) ?? 0
    // or indeed, just get that count directly from your data
    for i in 0 ..< k {
        // say you have three cell types ...
        let h: CGFloat = 100 // ... depending on the cell type, say 100, 200 or 300
        let f = CGRect(
            origin: CGPoint(x: 0, y: y),
            size: CGSize(width: collectionView?.bounds.width ?? 0, height: h)
        )
        let attributes = UICollectionViewLayoutAttributes(forCellWith: IndexPath(item: i, section: 0))
        attributes.frame = f
        y += h
        y += 8 // your gap between cells
        cache.append(attributes)
    }
}
In the example the cell height is just fixed for each of the say three cell types - all no problem.
Handling dynamic cell heights if you are using a flow layout is well-explored and indeed relatively simple. (Example, also see many explanations on the www.)
However, what if you want dynamic cell heights with a (NON-flow) completely normal everyday UICollectionViewLayout?
Where's the estimatedItemSize ?
As far as I can tell, there is NO estimatedItemSize concept in UICollectionViewLayout?
So what the heck do you do?
You could naively just, in the code above, calculate the final height of each cell one way or another (for example, by measuring the height of any text blocks, etc.). But that seems terribly inefficient: nothing in the collection view can be drawn until the sizes of hundreds of cells have been calculated. You would not be using any of iOS's dynamic-height machinery, and nothing would be just-in-time.
I guess you could program an entire just-in-time system from scratch. (So, something like: make the item count just 1, calculate that height manually, send it along to the collection view; calculate item 2's height, send that along, and so on.) But that's pretty lame.
Is there any way to achieve dynamic height cells with a custom UICollectionViewLayout - NOT a flow layout?
(Again, obviously you could just do it manually, so in the code above calculate all 1000 heights at once and you're done, but that would be pretty lame.)
Like I say above, the first puzzle is: where the hell is the "estimated size" concept in (normal, non-flow) UICollectionViewLayout?
Just a warning: custom layouts are FAR from trivial, they may deserve a research paper on their own ;)
You can implement size estimation and dynamic sizing in your own layouts. Actually, estimated sizes are nothing special; rather, dynamic sizes are. Because custom layouts give you total control of everything, however, this involves many steps. You will need to implement three methods in your layout subclass and one method in your cells.
First, you need to implement preferredLayoutAttributesFitting(_:) in your cells (or, more generally, your reusable view subclass). Here you can use whatever calculations you want. Chances are you will use Auto Layout in your cells: if so, you will need to add all of the cell's subviews to its contentView, constrain them to the edges, and then call systemLayoutSizeFitting(_:withHorizontalFittingPriority:verticalFittingPriority:) within this "preferred attributes" method. For example, if you want your cell to resize vertically while being constrained horizontally, you would write:
override func preferredLayoutAttributesFitting(_ layoutAttributes: UICollectionViewLayoutAttributes) -> UICollectionViewLayoutAttributes {
    // Ensures that cell expands horizontally while adjusting itself vertically.
    let preferredSize = systemLayoutSizeFitting(layoutAttributes.size,
                                                withHorizontalFittingPriority: .required,
                                                verticalFittingPriority: .fittingSizeLevel)
    layoutAttributes.size = preferredSize
    return layoutAttributes
}
After the cell is asked for its preferred attributes, shouldInvalidateLayout(forPreferredLayoutAttributes:withOriginalAttributes:) will be called on the layout object. Importantly, you can't just return true, since the system would re-ask the cell indefinitely. This is actually very clever: many cells may react to each other's changes, so it's the layout that ultimately decides when it's done satisfying the cells' wishes. Usually, for resizing, you would write something like this:
override func shouldInvalidateLayout(forPreferredLayoutAttributes preferredAttributes: UICollectionViewLayoutAttributes, withOriginalAttributes originalAttributes: UICollectionViewLayoutAttributes) -> Bool {
    if preferredAttributes.size.height.rounded() != originalAttributes.size.height.rounded() {
        return true
    }
    return false
}
Just after that, invalidationContext(forPreferredLayoutAttributes:withOriginalAttributes:) will be called. You would usually want to customize the context class to store information specific to your layout. One important, rather unintuitive caveat, though, is that you should not call context.invalidateItems(at:), because this causes the layout to invalidate only those items among the provided index paths that are actually visible. Just skip that call, and the layout will requery the visible rectangle.
However! You need to think carefully about whether to set contentOffsetAdjustment and contentSizeAdjustment: if something resizes, your collection view's content as a whole will probably shrink or expand. If you do not account for that, you will get jumpy reloads when scrolling.
Lastly, invalidateLayout(with:) will be called. This is the step in which you actually adjust your section/row heights, move anything affected by the resizing cell, and so on. If you override it, you need to call super.
PS: This is a really hard topic; I have only scratched the surface. You can look here to see how complicated it gets (but this repo is also a very rich learning tool).

Maya: Having trouble writing a script to cut a mesh into equal pieces

I want to split a mesh into sections based on a number of vertices. Essentially, I want a mesh cut into sections of 300 verts each with a remainder section of whatever is left over.
I've done this for the most part (I can get verts/faces, etc.) but I'm having trouble figuring out a graceful way of iterating through the extracted meshes.
I'm using polyChipOff, which has no return value for the faces it chipped off, so entirely new objects are created that I have no handle to, and I can't just continue chipping away at the previous piece because it no longer exists.
Any advice on how to go about this better?
I've thought of either iterating through all the meshes in the scene looking for new ones (caching them at the start) or using a scriptJob to detect new objects being made. Both of those seem very hacky, so I was curious whether anyone had advice.
You can try this method:
import maya.cmds as cmds

# Run with the faces you want to split off selected.
shape = cmds.listRelatives(p=True)            # shape node of the current selection
object = cmds.listRelatives(shape, p=True)    # its transform node
selectedFace = cmds.ls(sl=True)
# Toggle-select every face of the object so that only the unselected faces stay selected.
cmds.select(object[0] + '.f[:]', tgl=True)
unselectedFace = cmds.ls(sl=True)
# Duplicate the whole object, then delete the chosen faces from the original
# and the complementary faces from the duplicate.
duplicated = cmds.duplicate(object, un=True)[0]
cmds.delete(duplicated, ch=True)
cmds.delete(selectedFace)
for i in range(len(unselectedFace)):
    unselectedFace[i] = unselectedFace[i].replace(object[0], duplicated)
cmds.delete(unselectedFace)
cmds.select(duplicated)

Faster calculation for large amounts of data / inner loop

So, I am programming a simple Mandelbrot renderer.
My inner loop (which is executed up to ~100,000,000 times each time I draw on screen) looks like this:
Complex position = {re, im};
Complex z = {0.0, 0.0};
uint32_t it = 0;
for (; it < maxIterations; it++)
{
    // Square z
    double old_re = z.re;
    z.re = z.re*z.re - z.im*z.im;
    z.im = 2*old_re*z.im;
    // Add c
    z.re = z.re + position.re;
    z.im = z.im + position.im;
    // Exit condition (mod(z) > 5)
    if (sqrt(z.re*z.re + z.im*z.im) > 5.0f)
        break;
}
// Color in the pixel according to value of 'it'
Just some very simple calculations. This takes between 0.5 seconds and a couple of seconds, depending on the zoom and so on, but I need it to be much faster to enable (almost) smooth scrolling.
My question is: What is my best bet to achieve the maximum possible calculation speed?
OpenCL to use the GPU? Coding it in assembly? Dividing the image into small pieces and dispatching the calculation of each piece to another thread? A combination of those?
Any help is appreciated!
I have written a Mandelbrot set renderer several times... and here are the things that you should keep in mind...
1. The points that take the longest are the ones that never escape and use up all the iterations.
a. So you can carve a region out of the middle with a few rectangles and check that first.
b. For example, any starting point inside the main cardioid or the period-2 bulb will never escape, and both have cheap closed-form tests.
c. You can also cache recent points (20 or 30) in a rolling buffer; if a point you just calculated already appears in the buffer, you have a cycle and the point will never escape.
2. You can use a more general test that doesn't require a square root: if either the real or the imaginary part is less than -2 or greater than 2, the point will race out of control and can be considered escaped.
3. But you can also break the work up, because each point is independent of the others, so you can use a separate thread or GCD dispatch or whatever for each row or quadrant... it is a very easy problem to divide up and run in parallel (see the sketch below this list).
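A minimal, self-contained sketch of the per-row split from point 3, written with std::thread, might look like the following; the image size, iteration limit, and view rectangle are made-up values for illustration, and the escape test compares the squared modulus so no square root is needed.

#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Iteration count for one point c = re + im*i, using the squared-modulus
// escape test (|z| > 5 compared as |z|^2 > 25, so no sqrt is needed).
static uint32_t iterate(double re, double im, uint32_t maxIterations)
{
    double zr = 0.0, zi = 0.0;
    uint32_t it = 0;
    for (; it < maxIterations; ++it)
    {
        const double old_zr = zr;
        zr = zr * zr - zi * zi + re;
        zi = 2.0 * old_zr * zi + im;
        if (zr * zr + zi * zi > 25.0)
            break;
    }
    return it;
}

// Render rows [firstRow, lastRow) of a width x height grid covering the
// rectangle [minRe, maxRe] x [minIm, maxIm] into 'iterations'.
static void renderRows(std::vector<uint32_t>& iterations, int width, int height,
                       double minRe, double maxRe, double minIm, double maxIm,
                       uint32_t maxIterations, int firstRow, int lastRow)
{
    for (int y = firstRow; y < lastRow; ++y)
    {
        const double im = minIm + (maxIm - minIm) * y / (height - 1);
        for (int x = 0; x < width; ++x)
        {
            const double re = minRe + (maxRe - minRe) * x / (width - 1);
            iterations[static_cast<size_t>(y) * width + x] = iterate(re, im, maxIterations);
        }
    }
}

int main()
{
    const int width = 800, height = 600;   // illustrative values
    const uint32_t maxIterations = 1000;
    std::vector<uint32_t> iterations(static_cast<size_t>(width) * height);

    // One contiguous block of rows per hardware thread; blocks are independent.
    const unsigned threadCount = std::max(1u, std::thread::hardware_concurrency());
    const int rowsPerThread = (height + static_cast<int>(threadCount) - 1) / static_cast<int>(threadCount);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threadCount; ++t)
    {
        const int first = static_cast<int>(t) * rowsPerThread;
        const int last = std::min(height, first + rowsPerThread);
        if (first >= last)
            break;
        workers.emplace_back(renderRows, std::ref(iterations), width, height,
                             -2.5, 1.0, -1.2, 1.2, maxIterations, first, last);
    }
    for (std::thread& worker : workers)
        worker.join();

    // Color each pixel from its iteration count here.
}

Handing out rows (or smaller tiles from a work queue) also balances the load better than quadrants, since rows through the interior of the set take far longer than rows near the edge.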
In addition to the comments by @Grady Player, you could start just by optimising your code:
// Add c
z.re += position.re;
z.im += position.im;
// Exit condition (mod(z) > 5)
if (z.re*z.re + z.im*z.im > 25.0f)
    break;
The compiler may optimise the first, but the second will certainly help.
Why are you coding your own complex type rather than using complex.h?
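For what it's worth, a sketch of the same inner loop written with the standard complex type (std::complex from <complex>, the C++ counterpart of complex.h) might look like this; whether it beats the hand-rolled version depends on the compiler and flags, so it is worth benchmarking both.

#include <complex>
#include <cstdint>

uint32_t iterate(std::complex<double> position, uint32_t maxIterations)
{
    std::complex<double> z(0.0, 0.0);
    uint32_t it = 0;
    for (; it < maxIterations; ++it)
    {
        z = z * z + position;        // square z and add c in one step
        if (std::norm(z) > 25.0)     // std::norm is the squared modulus, so no sqrt
            break;
    }
    return it;
}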

DirectX 11.1 Disable the depth buffer

This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." They each have a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this to work without much issue, but when I went to test it, I see only the top-most texture, and anywhere it has transparency there is just the clear color.
At first, I thought it was an issue with how I was loading the image and that I was somehow disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
The second thing I tried was enabling blending - this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass that are all the same size and all in a row. Each piece of glass has one image somewhere on it. When you look through all the glass panes, you get one awesome image of all the smaller images combined. For me, DirectX or the pixel shader is only drawing the first glass pane and filling all of its transparent areas with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code sits right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming that this is what creates the depth buffer.
I think you might need to sort the order in which you render your geometry (back to front) if you want to render semi-transparency with a depth buffer. If you don't want to use a depth buffer at all - perhaps just don't define/create/set it?
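As a minimal sketch of the second suggestion (not the asker's actual renderer: Quad, DrawQuad, and the blend state here are placeholders), bind the render target with no depth-stencil view at all, so depth testing cannot discard anything, and draw the 48 layers from back to front with alpha blending enabled.

#include <d3d11.h>
#include <vector>

// Placeholder for whatever per-layer data the real renderer uses
// (vertex buffer, shader resource view for the texture, etc.).
struct Quad { /* ... */ };

// Assumed to exist elsewhere: issues the draw call for one textured quad.
void DrawQuad(ID3D11DeviceContext* context, const Quad& quad);

void RenderTransparentLayers(ID3D11DeviceContext* context,
                             ID3D11RenderTargetView* renderTargetView,
                             ID3D11BlendState* alphaBlendState,
                             const std::vector<Quad>& quadsSortedFarToNear)
{
    // Bind the render target with no depth-stencil view, so no depth
    // buffer is involved at all.
    context->OMSetRenderTargets(1, &renderTargetView, nullptr);

    // Enable an alpha blend state (created elsewhere, e.g. with
    // SrcBlend = SRC_ALPHA and DestBlend = INV_SRC_ALPHA).
    const float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    context->OMSetBlendState(alphaBlendState, blendFactor, 0xFFFFFFFF);

    // Draw the layers from farthest to nearest so each one blends over
    // the ones behind it.
    for (const Quad& quad : quadsSortedFarToNear)
        DrawQuad(context, quad);
}

The same back-to-front ordering is what makes the first suggestion work if you keep a depth buffer around with depth writes disabled.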