I have a Neptune DB with a graph representation of my test network. I'm generating files and using the bulk loader to update the graph. Now let's say a device moved from one switch to another. When I reload the graph, the edge between the device and the old switch is retained. Is there a way to drop all the edges for updated vertices before the load to deal with that?
Neptune's bulk loader is append-only; for Property Graph data it can also update single-cardinality properties, but it has no means of deleting data. If you need to remove edges, you'll need to do that via a query language. Using Gremlin, you could do:
updatedNodeList = [<your list of vertex IDs>]
g.V(updatedNodeList).bothE().drop()
This would drop all edges related to the vertices you want to update. If you want to be more specific, you can add filters to select specific edges based on label or properties:
g.V(updatedNodeList).bothE("label1","label2").
has("propertyKey1","propertyValue1").drop()
If the updatedNodeList is large (more than a few thousand IDs), you'll want to split it into separate batches and issue a separate drop() query per batch. In Neptune you can take advantage of concurrency and use threading/multi-processing to issue the batched queries in parallel to drive faster drops.
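For illustration, here's a minimal C++ sketch of that batching idea using Neptune's Gremlin HTTP endpoint via libcurl. The endpoint name, port constant, and batch size are placeholders, and the query string isn't JSON-escaped; in practice you'd use whatever Gremlin driver your stack already has, and you could fan the batches out across threads for the parallelism mentioned above.
// Sketch: drop edges for updated vertices in batches against Neptune's
// Gremlin HTTP endpoint. NEPTUNE_ENDPOINT and kBatchSize are placeholders.
#include <curl/curl.h>
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

static const std::string kGremlinUrl =
    "https://NEPTUNE_ENDPOINT:8182/gremlin";  // placeholder cluster endpoint

// Builds: g.V('id1','id2',...).bothE().drop()
std::string buildDropQuery(const std::vector<std::string>& ids) {
    std::string q = "g.V(";
    for (size_t i = 0; i < ids.size(); ++i) {
        if (i) q += ",";
        q += "'" + ids[i] + "'";
    }
    return q + ").bothE().drop()";
}

bool postGremlin(const std::string& query) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;
    const std::string body = "{\"gremlin\": \"" + query + "\"}";
    curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, kGremlinUrl.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    const CURLcode rc = curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    std::vector<std::string> updatedNodeList = {/* your vertex IDs */};
    const size_t kBatchSize = 1000;  // tune to your workload
    // One drop() per batch; these could also be issued from parallel threads.
    for (size_t i = 0; i < updatedNodeList.size(); i += kBatchSize) {
        const size_t end = std::min(i + kBatchSize, updatedNodeList.size());
        std::vector<std::string> batch(updatedNodeList.begin() + i,
                                       updatedNodeList.begin() + end);
        if (!postGremlin(buildDropQuery(batch)))
            std::cerr << "drop batch starting at " << i << " failed\n";
    }
    curl_global_cleanup();
}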
I'm trying to draw geometry for multiple models using a single draw call. All the geometry, thusly, resizes within the same vertex/index buffers. The geometry for the different models share the same vertex format, but the vertex amounts for each model can be different.
In the vertex/fragment shaders, what's a technique that can be used to differentiate between the different models, to access their appropriate transforms/textures/etc ?
Are these static models? For traditional static batching:
You only need a single transform relative to the batch origin (position the individual models relative to the batch origin as part of the offline data packaging step).
You can batch your textures into a single atlas (either a single 2D image with different coordinates for each object, or a texture array with a different layer for each object).
If you do it this way you don't need to differentiate between the component models at all - they are effectively just "one large model", which has nice performance properties ...
For more modern methods, you can try indirect draws with a drawCount greater than one, using the draw index to look up the per-model settings you want. This allows variable buffer offsets and triangle counts to be used, but the rest of the pipeline state needs to be the same.
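A rough C++ sketch of that indirect-draw route (not a complete program: it assumes the models' indices and vertices are already packed into shared buffers, that indirectBuffer is a mapped host-visible buffer created elsewhere, and that the multiDrawIndirect feature is enabled for drawCount > 1; the names are placeholders):
// Sketch: one vkCmdDrawIndexedIndirect call covering several models that
// live in shared vertex/index buffers.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

struct PackedModel {
    uint32_t indexCount;
    uint32_t firstIndex;    // where this model's indices start in the shared index buffer
    int32_t  vertexOffset;  // where its vertices start in the shared vertex buffer
};

void recordBatchedDraw(VkCommandBuffer cmd,
                       VkBuffer indirectBuffer,
                       void* indirectMapped,  // mapped memory of indirectBuffer
                       const std::vector<PackedModel>& models) {
    std::vector<VkDrawIndexedIndirectCommand> draws(models.size());
    for (uint32_t i = 0; i < models.size(); ++i) {
        draws[i].indexCount    = models[i].indexCount;
        draws[i].instanceCount = 1;
        draws[i].firstIndex    = models[i].firstIndex;
        draws[i].vertexOffset  = models[i].vertexOffset;
        // firstInstance is visible as gl_InstanceIndex in the vertex shader, so it
        // can double as a per-model index into arrays of transforms/material data.
        draws[i].firstInstance = i;
    }
    std::memcpy(indirectMapped, draws.data(),
                draws.size() * sizeof(VkDrawIndexedIndirectCommand));

    // One call, drawCount draws. With the shaderDrawParameters feature the
    // shader could use gl_DrawID instead of firstInstance for the lookup.
    vkCmdDrawIndexedIndirect(cmd, indirectBuffer, 0,
                             static_cast<uint32_t>(draws.size()),
                             sizeof(VkDrawIndexedIndirectCommand));
}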
As an alternative to texture arrays, with bindless texturing you can just programmatically select which texture to use in the shader at runtime. BUT you generally still want it to be at least warp-uniform to avoid a performance hit.
I have been following different tutorials and I don't understand why I need resources per swapchain image instead of per frame in flight.
This tutorial:
https://vulkan-tutorial.com/Uniform_buffers
has a uniform buffer per swapchain image. Why would I need that if different images are not in flight at the same time? Can I not start rewriting it once the previous frame has completed?
Also, the LunarG tutorial on depth buffers says:
And you need only one for rendering each frame, even if the swapchain has more than one image. This is because you can reuse the same depth buffer while using each image in the swapchain.
This doesn't explain anything, it basically says you can because you can. So why can I reuse the depth buffer but not other resources?
It is to minimize synchronization in the case of the simple Hello Cube app.
Let's say your uniforms change each frame. That means the main loop is something like:
Poll (or simulate)
Update (e.g. your uniforms)
Draw
Repeat
If step #2 did not have its own uniform buffer, it would need to write to a uniform the previous frame is still reading. That means it would have to synchronize on a fence first, and waiting on that fence means the previous frame is no longer considered "in flight".
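A minimal C++ sketch of that pattern, assuming a conventional two-frames-in-flight setup; buffer/fence creation and the rest of the frame are elided, and the names are placeholders:
// Sketch: one persistently mapped uniform buffer and one fence per frame in
// flight, so the update step never writes a buffer the GPU is still reading.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <cstring>

constexpr int MAX_FRAMES_IN_FLIGHT = 2;

struct FrameResources {
    VkFence  inFlightFence;   // signaled when this slot's GPU work has finished
    void*    uniformMapped;   // persistently mapped uniform buffer memory
    VkCommandBuffer cmd;
};

struct Uniforms { float mvp[16]; };

void drawFrame(VkDevice device, FrameResources frames[MAX_FRAMES_IN_FLIGHT],
               uint32_t& currentFrame, const Uniforms& uniforms) {
    FrameResources& f = frames[currentFrame];

    // Wait only for *this* slot's previous use; the other frame stays in flight.
    vkWaitForFences(device, 1, &f.inFlightFence, VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &f.inFlightFence);

    // Safe to overwrite: the GPU is done with this slot's uniform buffer.
    std::memcpy(f.uniformMapped, &uniforms, sizeof(Uniforms));

    // ... acquire swapchain image, record f.cmd, submit with f.inFlightFence ...

    currentFrame = (currentFrame + 1) % MAX_FRAMES_IN_FLIGHT;
}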
It all depends on how you are using your resources and the performance you want to achieve.
If, after each frame, you are willing to wait for the rendering to finish and you are still happy with the resulting performance, you can use only one copy of each resource. Waiting is the simplest form of synchronization: you are sure the resources are not used anymore, so you can reuse them for the next frame. But if you want to utilize both the CPU and the GPU efficiently, and you don't want to wait after each frame, then you need to look at how each resource is being used.
A depth buffer is usually only used temporarily. If you don't perform any post-processing, and your render pass uses the depth data only internally (you don't specify STORE for its storeOp), then you can use a single depth buffer (depth image) all the time. When rendering is done, the depth data isn't used anymore and can be safely discarded. This applies to all other resources that don't need to persist between frames.
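For example, such a render pass can declare its depth attachment with a throw-away storeOp. A minimal sketch (the format and layouts are just typical choices):
// Sketch: depth attachment whose contents are discarded after the render pass,
// so a single depth image can be shared by every frame.
#include <vulkan/vulkan.h>

VkAttachmentDescription makeTransientDepthAttachment() {
    VkAttachmentDescription depth{};
    depth.format         = VK_FORMAT_D32_SFLOAT;                 // typical choice
    depth.samples        = VK_SAMPLE_COUNT_1_BIT;
    depth.loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR;          // cleared each frame
    depth.storeOp        = VK_ATTACHMENT_STORE_OP_DONT_CARE;     // nothing persists
    depth.stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    depth.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    depth.initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED;
    depth.finalLayout    = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
    return depth;
}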
But if different data needs to be used for each frame, or if data generated in one frame is used in the next, then you usually need another copy of the given resource. Updating data requires synchronization; to avoid waiting in such situations you need an additional copy of the resource. So in the case of uniform buffers, you update the data in a given buffer and use it for a given frame. You cannot modify its contents until that frame is finished, so to prepare another frame of animation while the previous one is still being processed on the GPU, you need to use another copy.
The same applies if the generated data is required for the next frame (for example a framebuffer used for screen-space reflections). Reusing the same resource would cause its contents to be overwritten, which is why you need another copy.
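The usual pattern for that case is to ping-pong between two copies, writing one while sampling the other. A trivial sketch with placeholder names (image creation elided):
// Sketch: two copies of a render target whose previous-frame contents are
// sampled this frame (e.g. for screen-space reflections). frameIndex is
// assumed to increment once per frame.
#include <vulkan/vulkan.h>
#include <cstdint>

struct PingPongTarget {
    VkImage     image[2];
    VkImageView view[2];
};

void selectTargets(const PingPongTarget& t, uint64_t frameIndex,
                   VkImageView& writeView, VkImageView& readView) {
    uint32_t write = static_cast<uint32_t>(frameIndex % 2);
    uint32_t read  = 1u - write;   // last frame's output
    writeView = t.view[write];     // render into this one
    readView  = t.view[read];      // sample this one in the shader
}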
You can find more information here: https://software.intel.com/en-us/articles/api-without-secrets-the-practical-approach-to-vulkan-part-1
In graph-tool, I have a forest of graphs and I want to add special edges across the graphs from/to specific nodes without encapsulating all the graphs into a new bigger multi-graph. Is there a way to do that?
That is not possible as it would invalidate the definition of a graph. You can, however, merge graphs together with graph_union(), and connect their vertices with edges. If necessary, you can differentiate the types of edges with property maps.
I am trying to use a compute shader for image processing. Being new to Vulkan I have some (possibly naive) questions:
I'm trying to look at the neighborhood of a pixel. AFAIK I have two possibilities:
a) Pass one image to the compute shader and sample the neighborhood pixels directly (x +/- i, y +/- j)
b) Pass multiple images to the compute shader (each offset) and sample only the current position (x, y)
Is there any difference in sampling performance between a and b (aside from b needing far more memory to be passed to the GPU)?
I need to pass pixel information (+ meta info) from one pipeline stage to another (and read it back out once the command is done).
a) Can I do this any other way than by passing an image with the storage bit set?
b) When reading the information back on the host, do I need to use a framebuffer?
Using a single image and sampling at offsets (maybe using textureGather?) is going to be more efficient, probably by a lot. Each texturing operation has a cost, and this uses fewer. More importantly, the texture cache in GPUs generally loads a small region around your sample point, so sampling the adjacent pixels is likely going to hit in the cache.
Even better would be to load all the pixels once into shared memory, and then work from there. Then instead of fetching pixel (i,j) from thread (i,j) and all of that thread's eight neighbors, you only fetch it once. You still need extra fetches on the edge of the region handled by a single workgroup. (For what it's worth, this technique is not Vulkan specific: you'll see it used in CUDA, OpenCL, D3D Compute, and GL Compute too).
The only way to persist data out of a compute shader is to write it to a storage buffer or storage image. To read that on the CPU, use vkCmdCopyImageToBuffer or vkCmdCopyBuffer to copy it into a host-readable resource, and then map that memory.
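A rough sketch of that readback path, assuming the compute result is in a storage image that has already been transitioned to TRANSFER_SRC_OPTIMAL and that readbackBuffer/readbackMemory belong to a host-visible buffer created elsewhere (all names are placeholders):
// Sketch: copy a storage image written by a compute shader into a
// host-visible buffer, then map it on the CPU after the work has finished.
#include <vulkan/vulkan.h>
#include <cstdint>

void recordReadback(VkCommandBuffer cmd, VkImage storageImage,
                    VkBuffer readbackBuffer, uint32_t width, uint32_t height) {
    VkBufferImageCopy region{};
    region.bufferOffset      = 0;
    region.bufferRowLength   = 0;  // tightly packed
    region.bufferImageHeight = 0;
    region.imageSubresource  = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1};
    region.imageOffset       = {0, 0, 0};
    region.imageExtent       = {width, height, 1};

    // The image must already be in TRANSFER_SRC_OPTIMAL (barrier after the dispatch).
    vkCmdCopyImageToBuffer(cmd, storageImage,
                           VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                           readbackBuffer, 1, &region);
}

void readOnHost(VkDevice device, VkDeviceMemory readbackMemory, VkDeviceSize size) {
    // Only valid after the submitted work has completed (fence / vkQueueWaitIdle).
    void* data = nullptr;
    vkMapMemory(device, readbackMemory, 0, size, 0, &data);
    // ... consume the pixel data ...
    vkUnmapMemory(device, readbackMemory);
}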
So, say I want to create SKNodes with textures from a texture atlas. Every node will be built from multiple parts layered on top of each other, some of which will never change, some will. There will be many nodes, some of which will be created from the same set of parts, and others will be made from different sets of parts.
Instead of keeping all the images in the project separately, I want to create a texture atlas, but I've never used one before. What is the best setup for this? Here are the things I could come up with:
1. Throw all of it in one texture atlas
2. All changing parts in one atlas, static parts not in an atlas
3. All parts for one "type" of node in one atlas
Put all sprites used in the same scene(s) in the same atlas. If you don't expect high texture memory usage (i.e. all textures combined fit into 3-4 atlases of 4096x4096), you need not consider splitting atlases, so a single atlas is perfectly fine.
Static/dynamic and grouping by "type" (however defined) should not be a consideration at all.