What does "VkPipelineColorBlendAttachmentState contains the configuration per attached framebuffer" mean? - vulkan

I read the Color blending chapter of the Vulkan tutorial, and this page says:
The first struct, VkPipelineColorBlendAttachmentState contains the configuration per attached framebuffer and the second struct, VkPipelineColorBlendStateCreateInfo contains the global color blending settings. In our case we only have one framebuffer
The second structure references the array of structures for all of the framebuffers
However, in the Framebuffers chapter, as many framebuffers were created as there are image views, yet the color blend code stayed the same.
And the supposedly per-framebuffer struct has no framebuffer-related members.
So how is a color blend attachment state attached to a framebuffer?
My guess is that it is matched automatically against VkFramebufferCreateInfo::pAttachments during command recording (at render pass begin). Is that right?
Or against VkSubpassDescription::pColorAttachments?
I ask because the specification says:
The value of attachmentCount must be greater than the index of all color attachments that are not VK_ATTACHMENT_UNUSED in VkSubpassDescription::pColorAttachments or VkSubpassDescription2::pColorAttachments for the subpass in which this pipeline is used.

Sometimes, tutorials do not use proper wording. This is one of those times.
Recall that a pipeline is built against a specific subpass of a specific render pass. Also recall that subpasses have a list of (among other things) color attachments that represent the render targets for rendering operations in that subpass.
What the tutorial means is that VkPipelineColorBlendAttachmentState defines the blend state for a particular attachment in the subpass designated by the pipeline. The array of VkPipelineColorBlendAttachmentState structs mirrors the array of color attachments used in the subpass the pipeline is being built for. So the third element of VkPipelineColorBlendStateCreateInfo::pAttachments corresponds to the third element in VkSubpassDescription::pColorAttachments for the subpass the pipeline is being built for.
For some reason, this tutorial refers to these attachments as the "attached framebuffer", which is absolutely the wrong term to use. They're just attachments.
Framebuffers provide the images that will be used as attachments when you begin a render pass. But the pipeline doesn't (really) care what image object you use. It cares about what color attachment in the subpass you're talking about.
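To make the correspondence concrete, here is a minimal sketch in C. The attachment count and the blend settings are invented for illustration; this is not the tutorial's code:

// Hypothetical subpass with two color attachments: blendStates[i]
// applies to VkSubpassDescription::pColorAttachments[i].
VkPipelineColorBlendAttachmentState blendStates[2] = {0};
blendStates[0].colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                                VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;
blendStates[0].blendEnable = VK_FALSE;        // attachment 0: no blending
blendStates[1] = blendStates[0];
blendStates[1].blendEnable = VK_TRUE;         // attachment 1: alpha blending
blendStates[1].srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
blendStates[1].dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
blendStates[1].colorBlendOp = VK_BLEND_OP_ADD;

VkPipelineColorBlendStateCreateInfo colorBlending = {0};
colorBlending.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
colorBlending.attachmentCount = 2;            // covers every color attachment in the subpass
colorBlending.pAttachments = blendStates;

Note that no framebuffer object appears anywhere: the pipeline only knows about attachment slots in the subpass it was built against.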

Related

Exploding only some part of an object in Blender 2.82a (using Explode modifier with Vertex Group?)

I am trying to explode/destroy only some part of an object.
The Blender 2.82 manual page
https://docs.blender.org/manual/en/2.82/physics/particles/emitter/emission.html
says "You may use vertex groups to confine the emission, that is done in the Vertex Groups panel."
So, it must be possible.
As a test, I created a Blender file, attempting to explode/destroy only the left ear of Suzanne, using Explode modifier.
I tried the following:
Added a monkey object ("Suzanne").
Applied "Subdivision Modifier" with "Simple" subdivision algorithm.
Created a vertex group named "VtxGroup_Suzanne__All_vertices_in_left_ear", which contains all vertices in Suzanne's left ear.
Created particle system setting.
Enabled Rotation.
In "Density" field under "Vertex Groups", entered "VtxGroup_Suzanne__All_vertices_in_left_ear".
In "Render As" filed under "Render", chose "Object".
Added "Explode" modifier.
This modifier has "Vertex Group" field, but it seems it does not make any difference in the result (probably because I do not know how to use it properly???)
At this point, when I play the animation, particles erupt out of Suzanne's left ear, breaking down Suzanne little by little.
However, the destruction is not limited to the left ear. The entire Suzanne starts breaking down.
Some destruction pieces are really big or unnaturally long, such as almost half of Suzanne's face shown in the screenshot.
Is there any way I can limit the destruction to the left ear only (i.e., to the vertex group "VtxGroup_Suzanne__All_vertices_in_left_ear")?
Also, can I adjust the sizes of destruction particles, so that some of them would not be too big, nor too long?
I tried setting a whole bunch of settings, but I could not find the solution. Maybe I am attempting this completely wrong? Is there some way to accomplish this in a completely different way?
This test file is found here (zipped):
Test file for Explode modifier with Vertex Group
Thank you in advance for help.
Try splitting the object? If you want to animate the object beforehand, construct it out of two separate objects grouped together, and then ungroup them at or before the keyframe where you want them to explode. I hope this helps! :)

cytoscape show traffic between nodes along an animated path

I need to show things moving between nodes along their connection paths similar to this project. I haven't been able to find any examples of it in cytoscape, but I have used cytoscape in the past and prefer to keep using it for this as well. I would appreciate recommendations on how to approach this problem.
You've got a few options...
The easiest is the Marquee visual style. It produces a "marching ants" illusion in the direction of directed edges. Simply go to the Styles tab in the Control Panel and select the "Marquee" style. In the EDGE tab, you can choose from 3 different Marquee Line Types. You could imagine mapping these 3 line types to 3 categories (or bins) of traffic density, for example. Or you could use color, thickness and/or transparency in combination with a marquee style to represent traffic density. You can see an example here:
https://youtu.be/MF0zsxEPoPc?t=44
There's also an app for animation! This takes the approach of interpolating any visual style (including position and existence) between any set of key frames you provide. So, for example, you would have a start and finish frame and then CyAnimator would make a movie file for you:
http://apps.cytoscape.org/apps/cyanimator
And yet another completely different approach: with the scripting capabilities of Cytoscape, you can pretty much do whatever you want. The unit tests for the RCy3 package, for example, end up being an almost psychedelic display of data-vis potential (and the unit tests aren't even at full coverage, shame). So you could direct your own animations in real time with a bit of scripting in R or Python. Here's the RCy3 unit test demo, plus links to the scripting libs:
https://www.youtube.com/watch?v=IXqbdlUnzUE&t=1s (caution: flashing graphics)
https://bioconductor.org/packages/release/bioc/html/RCy3.html
https://py2cytoscape.readthedocs.io/en/latest/
I'm using cytoscape.js with meteor.js. My elements, stylesheet and vehicles (shown as red dots) are stored in Mongo, and can be updated via an external process or edited on-screen. The graph can be restructured or reshaped on the fly, and the vehicles will discover the new least-cost route to reach their target. Moves are queued with eles.animate(), and routing is handled by eles.floydWarshall().path(). This might be similar to what you had in mind.

Using materials on 3D models created in Blender

I want to make a shape that can't be constructed using SceneKit's built-in geometry models, so I want to use some other 3D modeling program for that. I would like to know whether models created in, for example, Blender can act like models created directly in SceneKit: I want to be able to apply materials and change the object's color in code, and I want to know beforehand whether this is possible with imported models.
I know I can export the model as .dae (a Collada file), and that way I can certainly use the model, but it seems I can't change its material.
If it is possible to change it some other way, I would appreciate a brief explanation of how the object should be exported from Blender (and in which format).
Actually, yes, you can change the material of a model imported from a Collada (.dae) file.
Materials are represented by the SCNMaterial class.
Here are the methods you can use to access the material:
First, you have probably the easiest method of material access:
node.geometry.firstMaterial
This gives you the first material that the object is using (here and below, node is an SCNNode).
Next you have entire material access:
node.geometry.materials
This method gives you an NSArray containing all the materials that the object is using.
Then finally you have the good ol' access by name:
[node.geometry materialWithName:@"aName"]
This gives you the first material with the specified name.
And in the Apple docs:
What is SCNNode.geometry? See the SCNGeometry class reference.
For material access and manipulation, see the SCNMaterial class reference.
A side note:
To actually control the color/image of an SCNMaterial, you need to use SCNMaterialProperty.
An SCNMaterial is made up of several SCNMaterialProperty objects (its diffuse, specular, emission, and so on).
For more info, please read the docs.

Set windows size of QuickLook Plugin

I'm building a QuickLook plugin. I want to change the width of the window that pops up when the user hits the spacebar.
I've read there are two keys in the Info.plist file of the project where height and width are customisable. But even when I change those values, I can't get the preview window to the size I want.
I don't know what else to try. Any ideas?
Thanks!
Thought I'd dig a little on this. I have not tried any of the following suggestions, so nobody should get their hopes up. I'll assume you're using the generator callback:
OSStatus (*GeneratePreviewForURL)(
    void *thisInterface,
    QLPreviewRequestRef preview,
    CFURLRef url,
    CFStringRef contentTypeUTI,
    CFDictionaryRef options
);
Before anything else, you might manually check the options dictionary argument and verify that the kQLPreviewPropertyWidthKey and kQLPreviewPropertyHeightKey keys are indeed mapped to the desired CFNumber values.
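For example, a minimal check inside GeneratePreviewForURL might look like this (a sketch, with error handling omitted and the logging destination left to you):

// Sketch: inspect the size Quick Look actually requested.
CFNumberRef widthRef  = CFDictionaryGetValue(options, kQLPreviewPropertyWidthKey);
CFNumberRef heightRef = CFDictionaryGetValue(options, kQLPreviewPropertyHeightKey);
if (widthRef != NULL && heightRef != NULL) {
    CGFloat width = 0, height = 0;
    CFNumberGetValue(widthRef,  kCFNumberCGFloatType, &width);
    CFNumberGetValue(heightRef, kCFNumberCGFloatType, &height);
    fprintf(stderr, "requested preview size: %.0f x %.0f\n", (double)width, (double)height);
}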
Referring to each of these properties, the Apple QuickLook programming guide says:
Note that this property is a hint; Quick Look might set the width
automatically for some types of previews. The value must be
encapsulated in a CFNumber object.
(Edit: If your preview representation is flexible, you might try finding a preview type for which QuickLook honors your size hints, as per the statement above. Just a thought.)
Running nm on the QuickLook framework binary revealed some undocumented kQLPreviewProperty* constants as well as the aforementioned width and height keys. One that caught my attention was kQLPreviewPropertyAutoSizeKey. Recalling Apple's statement about ignoring the hints and setting the size automatically, this might be significant? Following the convention in QuickLook.framework/Headers/QLBase.h, you might try declaring
extern const CFStringRef kQLPreviewPropertyAutoSizeKey;
Then you could try associating a CFNumber 0 with that property key in the preview properties dictionary your generator supplies (as opposed to the read-only options dictionary it receives). There are other undocumented keys of note, such as kQLPreviewPropertyAttributesKey.
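A sketch of that idea (entirely speculative, since the key is undocumented and could change or vanish in any release):

// Speculative: hand-declared undocumented symbol, following QLBase.h style.
extern const CFStringRef kQLPreviewPropertyAutoSizeKey;

CFMutableDictionaryRef props = CFDictionaryCreateMutable(
    kCFAllocatorDefault, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
int autoSize = 0;
CFNumberRef autoSizeRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &autoSize);
CFDictionarySetValue(props, kQLPreviewPropertyAutoSizeKey, autoSizeRef);
CFRelease(autoSizeRef);
// ... then hand `props` to whichever QLPreviewRequest call you use, e.g.
// QLPreviewRequestSetDataRepresentation(preview, data, contentTypeUTI, props);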
Back to the Info.plist you mentioned, Apple says about those keys QLPreviewWidth and QLPreviewHeight:
This number gives Quick Look a hint for the width (in points) of
previews. It uses these values if the generator takes too long to
produce the preview. (emphasis added)
This is where someone makes the terrible suggestion of calling sleep() in your generator. But I'm perplexed as to why Apple would make following the size hints dependent on the generator latency. (?)
Edit: Also note the above statement says the Info.plist hints must be expressed in points (not pixels), a unit dependent on the user's screen resolution.
Recently I was developing a Quick Look plugin myself, one that uses HTML+CSS, and faced the same problem.
The solution for me was to test the plugin not within Xcode with qlmanage as the executable, but instead to try the real .qlgenerator from my user library.
When the generator was invoked from my user library, the Quick Look window was resized exactly the way I specified in the *-Info.plist.
I've run into the same problem, and may offer some clues: in my case I'm generating an image Quick Look preview for my custom file format. I create the preview context to draw my preview into using:
CGContextRef QLPreviewRequestCreateContext(QLPreviewRequestRef preview, CGSize size, Boolean isBitmap, CFDictionaryRef properties);
The curious thing is that if I set isBitmap to true, Quick Look adjusts the preview panel size to the size specified for the context (up to a certain size, at least). But if you set isBitmap to false, it seems to disregard the context size and instead always shows a full-size preview panel with the vector-graphics image scaled to cover the entire panel.
So, if you use a bitmap graphical preview context, it seems the preview panel will be set to the size of the context you specify. However, I haven't found any way to set the size of the panel when using a vector graphic preview context (which is what I want).
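For reference, here is a condensed sketch of the bitmap variant described above (the size is an arbitrary example, and the drawing itself is omitted):

// Sketch: a bitmap-backed context makes Quick Look size the panel to match.
CGSize size = CGSizeMake(600.0, 400.0);   // arbitrary example size
CGContextRef ctx = QLPreviewRequestCreateContext(preview, size, true, NULL);
if (ctx != NULL) {
    // ... draw the preview into ctx with ordinary Core Graphics calls ...
    QLPreviewRequestFlushContext(preview, ctx);
    CGContextRelease(ctx);
}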

.mesh file "ref" var explanation

I am trying to understand the .mesh files usually generated for mesh visualization with Medit.
The documentation is here, but it is in French.
What I understand is that after every line describing an object in the file (vertex, triangle, tetrahedron, etc.) comes a ref variable; in the example files I have, its values are usually 0, 1, 2 or 3, and I don't understand what their purpose is.
Can somebody please explain this?
You can get a .mesh example here.
Each reference corresponds to a color in Medit. The colors are arbitrary and can be changed in Medit (using the GUI or by editing a configuration file).
The reference values in the .mesh file refer to a color index. Maybe the program uses this to display the vertices, triangles and tetrahedra in certain colors. You can ignore this value for all practical purposes.
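For illustration, here is a tiny hand-written .mesh fragment (coordinates invented). Each vertex line is "x y z ref" and each triangle line is "v1 v2 v3 ref", so the first two vertices below carry reference (color index) 1, the third carries 2, and the triangle carries 0:

MeshVersionFormatted 1
Dimension 3
Vertices
3
0.0 0.0 0.0 1
1.0 0.0 0.0 1
0.0 1.0 0.0 2
Triangles
1
1 2 3 0
End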