How do I make multiple copies of a set of polygons in a Vertex Buffer Array? - vb.net

In OpenGL 1, in Visual Basic with OpenTK, if I want a hundred cubes all arranged in a circle I'd write
glRef = GL.GenLists(1)
GL.NewList(glRef, ListMode.Compile)
GL.Begin(PrimitiveType.Triangles)
GL.Vertex3....for the vertices of a cube
GL.End()
GL.EndList()
which would give me glRef as a handle with which I could do
For i = 0 to 100
GL.PushMatrix()
GL.Rotate(3.6*i, 0, 0, 1)
GL.Translate(5.0, 0.0, 0.0)
GL.CallList(glRef)
GL.PopMatrix()
Next
and get a hundred cubes all arranged in a circle.
How do I do the same sort of thing in OpenGL 2.0 or higher with Vertex Buffer Objects?
I start off with
GL.GenBuffer(VBOid)
Dim VertexArray() As Single = {....for the vertices of a cube }
then do some binding of it to a vertex buffer
GL.BindBuffer(BufferTarget.ArrayBuffer, VBOid(0))
GL.BufferData(BufferTarget.ArrayBuffer, SizeOf(GetType(Single)) * VertexArray.Count, VertexArray, BufferUsageHint.StaticDraw)
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, False, 0, VertexArray)
and then in my draw routine I do something along the lines of
GL.EnableClientState(ArrayCap.VertexArray)
GL.BindBuffer(BufferTarget.ArrayBuffer, PrimitiveID(0))
GL.DrawElements(PrimitiveType.Triangles)
but at this point adding a second DrawBuffer command together with transforms doesn't seem to give me a second cube. I've been bashing my head against a wall and looking all over the internet, and I can't find a straightforward reference that tells me how to do it, or even confirmation that it's possible.
Is this not the way it's supposed to work? Am I just supposed to send a hundred sets of cube vertices, or is there a way to copy a vertex buffer object and apply transforms to it? (Or am I probably just doing it wrong somewhere and need to go on a bug hunt - any tips for that would be helpful.)

I don't think GL.DrawBuffer is the correct command here. In the context of FBOs, it is used to specify which attachment points are written to.
Since you are trying to draw a VBO here, I would expect a call to GL.DrawArrays or GL.DrawElements instead.
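To make that concrete, here is a minimal sketch of the draw loop, written against the C OpenGL API (the OpenTK calls map over almost one-to-one). cubeVbo stands in for the buffer you filled with GL.BufferData, a cube is assumed to be 36 vertices (12 triangles), and GL_MODELVIEW is assumed to be the current matrix mode. In GL 2.x the matrix stack is still available, so you can keep the Push/Rotate/Translate/Pop loop and simply replace CallList with a draw call:
glBindBuffer(GL_ARRAY_BUFFER, cubeVbo);
glEnableClientState(GL_VERTEX_ARRAY);
// with a VBO bound, the last argument is a byte offset into the buffer, not a client-memory array
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
for (int i = 0; i < 100; ++i) {
    glPushMatrix();
    glRotatef(3.6f * i, 0.0f, 0.0f, 1.0f);
    glTranslatef(5.0f, 0.0f, 0.0f);
    glDrawArrays(GL_TRIANGLES, 0, 36); // one cube per iteration, reusing the same buffer
    glPopMatrix();
}
If you later move to a core profile (GL 3.1+), the same idea becomes a model-matrix uniform updated per draw, or a single glDrawArraysInstanced call with the transforms supplied through a uniform array or an instanced attribute.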

Related

createjs combine 2 shapes in a mask

I have a symbol S1 with two shapes (let's say sh0 and sh1). On the stage I have an instance of another symbol mc. At run time, I will create an instance mc1 of the symbol S1. Using CreateJS, how can I use mc1 as a mask for mc?
I assume when you say "symbol", you mean a graphic or MovieClip in Adobe Animate. Unfortunately, you can only use a CreateJS "Shape" as a mask directly. There are a few options:
Combine the shapes into one yourself.
Combine the instructions. This is a bit dirty, but you could in theory concat the graphic instructions from one shape into another. I suspect this would have issues if the Shape instances have different x/y positions.
// concat returns a new array, so assign the result back
symbol.shape1.graphics._instructions = symbol.shape1.graphics._instructions.concat(symbol.shape2.graphics._instructions);
Cache the symbol and use it as a mask with AlphaMaskFilter. The example in the docs should get you what you want.
var box = yourSymbol;
box.cache(0, 0, 100, 100);
var bmp = new createjs.Bitmap("path/to/image.jpg");
bmp.filters = [
new createjs.AlphaMaskFilter(box.cacheCanvas)
];
bmp.cache(0, 0, 100, 100);
The third is probably your best option, but it is limiting and can be performance-intensive due to the use of the filter (especially if the content changes and you have to update the cache constantly).
Feel free to post more details on what you are working with in order to get a better recommendation.
Cheers.

Vulkan, what variables does an object need? As in a separate mesh that can be updated individually

So I have been experimenting, and I can add a new "object" by adding every model in the scene to the same vertex buffer, but this isn't good for a voxel game because I don't want to have to reorganize the entire world's vertices every time a player destroys a block.
And it appears I can also add a new "object" by creating a new vertex and index buffer for it, and simply binding both it and all other vertex buffers to the command buffers array at the same time like this:
vkCmdBeginRenderPass(commandBuffers[i], &renderPassInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(commandBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, graphicsPipeline);
vkCmdBindDescriptorSets(commandBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descriptorSets[i], 0, nullptr);
// mesh 1
VkBuffer vertexBuffers[] = { vertexBuffer };
VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(commandBuffers[i], 0, 1, vertexBuffers, offsets);
vkCmdBindIndexBuffer(commandBuffers[i], indexBuffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(commandBuffers[i], static_cast<uint32_t>(indices.size()), 1, 0, 0, 0);
// mesh 2
VkBuffer vertexBuffers2[] = { vertexBuffer2 };
vkCmdBindVertexBuffers(commandBuffers[i], 0, 1, vertexBuffers2, offsets);
vkCmdBindIndexBuffer(commandBuffers[i], indexBuffer2, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(commandBuffers[i], static_cast<uint32_t>(indices.size()), 1, 0, 0, 0);
vkCmdEndRenderPass(commandBuffers[i]);
But then this requires me to bind ALL vertex buffers to the command buffers array every time even when only a single one of those meshes is updated or created/destroyed. So how would I "add" a new "game object," the vertices and indices of which can be updated without having to loop through everything else in the scene too? Or is it relatively quick to bind to an already calculated vertex and index buffer and this is standard?
And I have tried this with a command buffer per object:
VkSubmitInfo submits[] = { submitInfo, submitInfo2 };
if (vkQueueSubmit(graphicsQueue, 2, submits, inFlightFences[currentFrame]) != VK_SUCCESS) {
throw std::runtime_error("failed to submit draw command buffer!");
}
But it only renders the last object in the queue (it will render the first object if I say the submit size is 1).
I have tried adding a separate descriptor set, descriptor pool, and pipeline as well, and it still only renders the last command buffer in the queue. I tried adding a new command pool for each object, but commandPool is used by dozens of other functions and it really seems like there is supposed to be only one of those.
You split your world into chunks, and draw one chunk at a time. All chunks have some space reserved for them in (a single) vertex buffer, and when something has changed, you only update that one chunk. If a chunk grows too large... Well, you will probably need some sort of a memory allocation system.
Do NOT create separate buffers for every little thing. Buffers just hold data. Any data. You can even store different vertex formats for different pipelines in one and the same buffer - just in different places within it, binding it with an offset. Do not rebind just to draw a different mesh if all your vertices are packed neatly into one array (they most likely are). If you only want to draw a part of a buffer, just use what the draw commands give you.
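As a rough sketch of that "one buffer, many draws" idea (ChunkDrawInfo, chunks, sharedVertexBuffer and sharedIndexBuffer are hypothetical bookkeeping on my side, not Vulkan API): record where each chunk's data was placed in the shared buffers, then issue one vkCmdDrawIndexed per chunk using firstIndex and vertexOffset.
struct ChunkDrawInfo {
    uint32_t indexCount;   // how many indices this chunk uses
    uint32_t firstIndex;   // where this chunk's indices start in the shared index buffer
    int32_t  vertexOffset; // added to every index, i.e. where its vertices start in the shared vertex buffer
};
// bind the shared buffers once...
VkBuffer vertexBuffers[] = { sharedVertexBuffer };
VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(commandBuffers[i], 0, 1, vertexBuffers, offsets);
vkCmdBindIndexBuffer(commandBuffers[i], sharedIndexBuffer, 0, VK_INDEX_TYPE_UINT32);
// ...then draw many times, each draw addressing only that chunk's region of the buffers
for (const ChunkDrawInfo& chunk : chunks) {
    vkCmdDrawIndexed(commandBuffers[i], chunk.indexCount, 1, chunk.firstIndex, chunk.vertexOffset, 0);
}
When a chunk changes, you only rewrite its region of the buffers (or its reserved slot) and leave every other chunk's data and draw call untouched.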
Command buffers are just blocks of instructions for the GPU. You don't need one per object. However, one cannot be executed and written to at the same time, so you will need at least one per frame in flight, plus one to write into. Pipelines (descriptor sets, and pretty much whatever else you bind) are just a bunch of state that your GPU starts using once you bind it. At the start of a command buffer that state is undefined - it is NOT inherited between command buffers in any way.
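A sketch of that last point, assuming the usual MAX_FRAMES_IN_FLIGHT setup from the Vulkan tutorial and a command pool created with VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT: one command buffer per frame in flight, re-recorded each frame with everything visible, rather than one command buffer per object.
uint32_t frame = currentFrame % MAX_FRAMES_IN_FLIGHT;
// wait until the GPU is done with this frame's command buffer before rewriting it
vkWaitForFences(device, 1, &inFlightFences[frame], VK_TRUE, UINT64_MAX);
vkResetCommandBuffer(commandBuffers[frame], 0);
// re-record commandBuffers[frame]: begin the render pass, bind the pipeline and descriptor
// sets again (nothing is inherited from previous command buffers), draw every visible mesh,
// end the render pass, then submit it with a single vkQueueSubmit.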

Making cylindrical space in Repast Simphony?

I am trying to model the interior of an epithelial space and am stuck on movement around the interior edges of a cylindrical space. Basically, I'm trying to implement StickyBorders and keep agents on those borders in a cylindrical space that I am creating.
Is there a way to use cylindrical coordinates in Repast Simphony? I found this example (https://www.researchgate.net/publication/259695792_An_Agent-Based_Model_of_Vascular_Disease_Remodeling_in_Pulmonary_Arterial_Hypertension) where they seem to have done something similar, but the paper doesn't explain the methods in much depth, and I don't believe this is one of the example models shipped with Repast Simphony.
Currently, I have a class of epithelial cells that are set up to form a cylinder, and other agents start just inside that cylinder. To move, an agent chooses its most desired spot (similar to the Zombies example code) and then picks a new location within one grid square of its current location, in the direction of that desired spot. It checks that new point before moving to it and makes sure that there are at least two other epithelial cells in the immediate Moore neighborhood, to ensure it stays against the wall.
GridPoint intendedpt = new GridPoint((int)Math.rint(alongX),(int)Math.rint(alongY),(int)Math.rint(alongZ));
GridCellNgh<EpithelialCell> nearEpithelium = new GridCellNgh<EpithelialCell>(mac_grid, intendedpt, EpithelialCell.class, 1,1,1);
List<GridCell<EpithelialCell>> EpiCells = nearEpithelium.getNeighborhood(false);
int nearbyEpiCellsCount=0;
for (GridCell<EpithelialCell> cell: EpiCells) {
nearbyEpiCellsCount++;
}
if (nearbyEpiCellsCount<2) {
System.out.println(this + " leaving epithelial wall /r");
RunEnvironment.getInstance().pauseRun();
//TODO: where to go if false
}
I am wondering if there is a way to either set the boundaries of the space to be a cylinder or to check which side of the agent is against the wall and restrict its movement in that direction.
The sticky border code (StickyBorders.java) essentially just checks whether the point the agent moves to is beyond any of the space's dimensions, and if so the point is clamped to that dimension. So, for example, if the space is 3x4 and an agent's movement would take it to 4,2, then that point becomes 3,2 and the agent is placed there. Can you do something like that in this case? If not, can you edit your question to explain why not? Maybe that will help us understand better.
The approach we took in that model was to use a 3D grid space with custom borders and query methods. The space itself was still Cartesian - we just visualized it as a cylinder using custom display code. Using the Cartesian grid was a reasonable approximation for this application since the cell dimensions were significantly smaller than the vessel radius, so curvature effects were neglected. The boundary conditions on the vessel space were wrap-around in the angular dimension, so that cells could move continuously around the circumference of the vessel, and the axial boundary conditions were also wrapped, as we assumed the vessel was long enough for this to be reasonable. The wall-thickness dimension had hard boundaries at the basement membrane (y=0) and at the fluid interface (y=wall thickness).
Depending on which type of space you are using, you will need to implement a PointTranslator or GridPointTranslator that performs the border functions. If you want specific examples of the code, I suggest you reach out to the authors directly.
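If it helps, here is a framework-free sketch of the coordinate translation such a custom border class would perform (plain C++ rather than the Repast Java API, and the dimension names and order are purely illustrative): wrap the angular and axial dimensions, clamp the wall-thickness dimension.
#include <algorithm>

struct VesselBorders {
    int angularSize; // grid cells around the circumference (wrapped)
    int axialSize;   // grid cells along the vessel (wrapped)
    int wallSize;    // grid cells through the wall thickness (clamped, i.e. "sticky")

    void translate(int& x, int& y, int& z) const {
        x = ((x % angularSize) + angularSize) % angularSize; // wrap around the circumference
        z = ((z % axialSize) + axialSize) % axialSize;       // wrap along the vessel axis
        y = std::clamp(y, 0, wallSize - 1);                  // stick at the basement membrane and fluid interface
    }
};
In Repast itself, the equivalent logic would live in your GridPointTranslator (or PointTranslator, for continuous space) implementation.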

DirectX 11.1 Disable the depth buffer

This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." They each have a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this would work without much issue, but when I went to test it, I see only the top-most texture, and anywhere it would have transparency, it is just the clear color.
At first, I thought it was an issue with how I was loading the image and somehow was disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
The second thing I tried was to enable blending - this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass that are all the same size and are all in a row. Each pane of glass has one image somewhere on it. When you look through all the glass panes, you get one awesome composite image of all the smaller images combined. For me, DirectX or the pixel shader is only drawing the first glass pane and filling all the transparency of the first pane with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code is literally right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming that this is what creates the depth buffer.
I think you might need to sort the order in which you render your geometry (back to front) if you want to render semi-transparencies with a depth buffer. If you don't want to use a depth buffer at all - perhaps just don't define/create/set it?
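For the second option, a minimal sketch (renderTargetView stands in for whatever ID3D11RenderTargetView you already create in your window-dependent resources): bind the render target with no depth-stencil view at all, so nothing can be rejected by a depth test, then draw the 48 quads back to front with your blend state enabled.
ID3D11RenderTargetView* rtvs[] = { renderTargetView.Get() };
// passing nullptr for the depth-stencil view means no depth buffer is bound at all
direct3d.context->OMSetRenderTargets(1, rtvs, nullptr);
// ...then draw the 48 textured quads in back-to-front order with blending enabled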

Cocos2d / CCDrawNode - How to draw a line?

I see there is functionality to draw circles, polys, dots, and segments. I don't see one for drawing an A-B line (with a given thickness), like ccDrawLine() (which seems to be deprecated).
I need to draw a 'network' between connected nodes. I have the code to draw the network; however, ccDrawLine doesn't seem to support anti-aliasing or opacity the way CCDrawNode does. It also, without manual intervention, doesn't seem to support batching.
Any suggestions? Would I need to do a load of maths to draw a two-triangle rectangle at the right angle between the points?
UPDATE:
Based on the comments below... I have an idea of how to do a 'line' from 0,0 to 10,0 with thickness 2: I'd have to do a rect at {0,0.5}, {10,0.5}, {10,-0.5}, {0,-0.5}... I can work out the clockwise triangle points to make a polygon from that easily, so I could even do horizontal/vertical lines easily. But how do you do that between {4,5} and {10,7}? Would you do a normal rectangle and apply a transformation matrix to it? Or would you still precalculate the 4 points and then make 2 triangles from them?
UPDATE:
Maybe it'd be better to use a scaled "line" sprite?! Eg: https://stackoverflow.com/a/8760462/224707
UPDATE:
How about a Ribbon? Would that work? Eg: https://stackoverflow.com/a/8178729/224707
Not sure a Ribbon would work for a "network" of points though...
CLARIFICATION:
Imagine this image, but with straight lines and no intersections... Something like this:
[image: example network of connected nodes] (source: relenet.com)
UPDATE:
Apparently, my post to the forum did go through last night, just before it went down... http://www.cocos2d-iphone.org/forum/topic/224498
A line is a segment. You can take it from here... ;)
Update:
CCDrawNode can draw segments. Segments are lines with defined start and end points.
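For example (sketched with the cocos2d-x 2.x C++ names; the cocos2d-iphone Objective-C equivalent should be CCDrawNode's drawSegmentFrom:to:radius:color: with the same parameters), a roughly 2-point-thick, alpha-blended line from {4,5} to {10,7}:
CCDrawNode *draw = CCDrawNode::create();
this->addChild(draw); // assumes this runs inside a layer or scene
// the radius is half the line thickness; segments are drawn with smoothed, rounded ends
draw->drawSegment(ccp(4, 5), ccp(10, 7), 1.0f, ccc4f(1.0f, 1.0f, 1.0f, 0.5f));
Because all the segments for your network can go into one CCDrawNode, they end up batched into a single draw call.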