I am trying to use the Point Source "Own Particles" feature in the Cell Fracture module of Blender.
I am loading a simple object with Import Wavefront (.obj)
I would now like to import a list of defined points to use in the cell fracture algorithm.
How do I do this?
The 'Own Particles' option allows you to use points generated by a particle system attached to the object, not a list of points imported separately.
The 'Child Verts' option uses the vertices of child objects as the point source. This lets you import your points as a separate object and parent it to the original mesh: select the point object(s), then Shift-select the original mesh, press Ctrl+P, and choose Object. Parenting is explained in more detail here.
Once you have set up the parenting, choose 'Child Verts' in the cell fracture options.
In this video, https://youtu.be/klBvssJE5Qg I show how to spawn enemies outside of a fixed camera (this is in GDScript, by the way). How could I make this work with a moving camera? I want to make a zombie-fighting game with a moving camera and zombies spawning outside the view.
I would really appreciate help with this.
I've tried researching on the internet about how to do it, but I just didn't find it.
After looking at the video, I see they are using this line to spawn:
Global.instance_node(enemy_1, enemy_position, self)
This suggests a couple of things to me:
The position is probably either relative to the self passed as an argument, or global.
There must be an Autoload called Global; I need to check it to be sure.
And the answer is in another video.
In the video Godot Wave Shooter Tutorial #2 - Player Shooting we find this code:
extends Node
func instance_node(node, location, parent):
    var node_instance = node.instance()
    parent.add_child(node_instance)
    node_instance.global_position = location
    return node_instance
So we are working with global coordinates (global_position), which means enemy_position is used as global coordinates.
OK, so instead of using enemy_position as global coordinates, we are going to use it as local coordinates of the Camera2D (or of a child of it). That means you need a reference to the Camera2D (where you keep that reference depends on your scene setup).
You could put your code in a child of the Camera2D, or copy the transform of the Camera2D with a RemoteTransform2D. Either way, you could then work in its local coordinates. Thus you would do this:
Global.instance_node(enemy_1, to_global(enemy_position), self)
Or you could hold a reference by exporting a NodePath (or, in newer Godot versions, by exporting a Camera2D directly) from your script and setting it via the inspector. Then you can do this:
Global.instance_node(enemy_1, camera.to_global(enemy_position), self)
Where camera is your reference to the Camera2D.
In the following section of Arena.gd:
func _on_Enemy_spawn_timer_timeout():
    var enemy_position = Vector2(rand_range(-160, 670), rand_range(-90, 390))
I believe you can add the camera's X and Y coordinates to the corresponding random ranges in the enemy_position Vector2. This displaces the spawn point to wherever the camera is currently located.
You can get the position of the camera with this:
get_parent().get_node("Name of your camera").position
When this is all put together:
func _on_Enemy_spawn_timer_timeout():
    var camera_position = get_parent().get_node("Name of your camera").position
    var enemy_position = Vector2(rand_range(-160, 670) + camera_position.x, rand_range(-90, 390) + camera_position.y)
Keep in mind that you might need to displace the values in the following while loop as well. I hope this helps.
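Both answers boil down to the same arithmetic: a spawn point chosen in camera-local space is shifted by the camera's position to get a global position. A minimal sketch of that idea in plain Python (the names and the simple translation-only camera are illustrative, not Godot API):

```python
import random

def to_global(camera_position, local_point):
    # For a 2D camera with no rotation or zoom, converting a point from
    # camera-local space to global space is just a translation.
    return (camera_position[0] + local_point[0],
            camera_position[1] + local_point[1])

def spawn_position(camera_position):
    # Pick a point in the same local ranges the tutorial uses...
    local = (random.uniform(-160, 670), random.uniform(-90, 390))
    # ...then shift it by the camera so the spawn area follows a moving camera.
    return to_global(camera_position, local)

# With the camera at the origin, local and global coincide.
assert to_global((0, 0), (100, 50)) == (100, 50)
# With the camera moved, the spawn area moves with it.
assert to_global((300, 200), (100, 50)) == (400, 250)
```

In Godot, Node2D's to_global() does this conversion for you, including rotation and scale, which is why calling it on the camera (or a child of it) is the more robust option.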
I have been trying to create some graph visualizations using Ontotext GraphDB. I would like the colors to be consistent between various visualizations that I make of the same data. I understand that the coloring is based on the type, but it does not seem to be consistent. For example, if I create a visual graph with only nodes of type A, the color assigned to the nodes may be red, but if I create a visual graph with nodes of type A and type B, then it does not appear that the color of nodes of type A are guaranteed to still be red.
I would like to understand the mechanism by which the visualization system assigns colors based on types.
As a side note, I am also having an issue with larger networks where the nodes of the graph become larger than the size of the window, so that I cannot view all of the nodes at once, even if I zoom out all the way.
Colors are based on the type of the node and the colors for types are generated each time (we do not persist them).
Unfortunately, you cannot specify the Visual Graph node colors in GraphDB Workbench without touching the source code. You need to clone GraphDB Workbench from GitHub and set the colors for your types in the source, but I will guide you through it; it is very straightforward.
Clone or fork the project from here https://github.com/Ontotext-AD/graphdb-workbench
(there is a good guide there on how to run your workbench against a running GraphDB)
Open src/js/angular/graphexplore/controllers/graphs-visualizations.controller.js and find the function $scope.getColor.
You can specify your colors and types there, e.g.:
$scope.getColor = function (type) {
    if (type === 'http://myBarType') {
        return "#6495ED";
    }
    if (type === 'http://myFooType') {
        return "#90EE90";
    }
    if (angular.isUndefined(type2color[type])) {
        type2color[type] = colorIndex;
        colorIndex++;
    }
    // ... the function's existing color lookup continues here
I had the idea of creating a fantasy city. To avoid having the same house over and over, but without manually creating hundreds of houses, I was thinking of creating collections like "windows", "doors", "roofs", etc., then creating objects with vertices assigned to vertex groups with the same names ("windows" vertex groups, "doors" vertex groups, etc.), and having Blender pick, for each instance of a house, a random window for each vertex in the group, and the same for doors, roofs, etc.
Is there a way of doing this? (I couldn't find anything online.) Or do I need to create a custom add-on? If so, is there any good reference or starting point where something close to this is done?
I've thought of particle systems and child objects, but couldn't find a way to attach a random part of a collection to a vertex. I also thought of booleans, but they have no option to attach to specific vertices, nor to use collections. So I'm out of ideas on how to approach this.
What I have in mind:
Create the basic shape, and assign vertices to the "windows" vertex group:
https://i.imgur.com/DAkgDR3.png
And then have random objects from the "Windows" collection attached to those vertices, as either a particle or a modifier:
https://i.imgur.com/rl5BDQL.png
Thanks for any help :)
Ok, I've found a way of doing this.
I'm using three particle systems (doors, roofs, and windows), each emitting from vertices, and using vertex groups to define where to display one of the different options.
To keep each particle emitter from putting more than one object per vertex, I created a small script that counts the vertices in each vertex group and updates the corresponding particle system's emission number accordingly.
import bpy

o = bpy.data.objects["buildings"]
groups = ["windows", "doors", "roofs"]

for group in groups:
    # Index of this vertex group on the object
    vid = o.vertex_groups.find(group)
    # Vertices that belong to the group
    verts = [v for v in o.data.vertices if vid in [vg.group for vg in v.groups]]
    # One particle per vertex in the group
    bpy.data.particles[group].count = len(verts)
I used someone's code from Stack Overflow for counting the vertices in a vertex group, but I can't find the link to the specific question again, so if you see your code here, please comment and I'll update my answer with proper credit.
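The filtering step is easy to see without Blender. Here is the same comprehension with bpy's vertex data mocked by plain objects (entirely illustrative; in Blender the real vertices come from o.data.vertices and record their group memberships the same way):

```python
from types import SimpleNamespace

# Mock of bpy's vertex data: each vertex records the indices of the
# vertex groups it belongs to, just like MeshVertex.groups in Blender.
def vertex(*group_indices):
    return SimpleNamespace(groups=[SimpleNamespace(group=g) for g in group_indices])

vertices = [vertex(0), vertex(0, 1), vertex(1), vertex()]

# Count vertices belonging to group index 0 -- the same comprehension
# the script runs against o.data.vertices.
vid = 0
members = [v for v in vertices if vid in [vg.group for vg in v.groups]]
print(len(members))  # 2
```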
I have a large number of different image stimuli presented in various locations on screen.
When the participant clicks on a stimulus, I need the name of that stimulus item available for use in the rest of the script.
For example, you can achieve this in E-Prime with the SlideState.HitTest(x, y) method, where x and y are mouse coordinates.
The only similar thing I've been able to find in PsychoPy is the mouse.isPressedIn(shape) method. However, because you must supply a specific object as an argument, it seems you would need an if clause for each of the stimuli, which seems messy (especially for larger numbers of items).
Is there a better way of doing this? I'm still learning so I might be missing something.
Thanks!
No, I think not. However, if you just add all objects to a list and loop over them, the code will be neat enough.
# List of (pointers to) stimulus objects
shapes = [shape1, shape2, shape3, shape4, shape5]

# Loop over them and check whether the mouse is pressed in each one
for shape in shapes:
    if mouse.isPressedIn(shape):
        # Point this variable at the latest pressed shape
        pressed_shape = shape

# Now you can do stuff with this object
pressed_shape.fillColor = 'red'
pressed_shape.draw()
print(pressed_shape.name)
Note that if the mouse clicks at a location where two objects overlap, pressed_shape ends up as the later one in the list with this solution.
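That overlap behavior is easy to verify with the mouse stubbed out. In this plain-Python sketch, FakeMouse.isPressedIn is a stand-in for the PsychoPy method and the shape classes are purely illustrative:

```python
class FakeShape:
    def __init__(self, name, contains_click):
        self.name = name
        self.contains_click = contains_click

class FakeMouse:
    # Stand-in for PsychoPy's mouse.isPressedIn(shape)
    def isPressedIn(self, shape):
        return shape.contains_click

mouse = FakeMouse()
# The first two shapes overlap at the click location; the third does not.
shapes = [FakeShape("circle", True), FakeShape("square", True), FakeShape("star", False)]

pressed_shape = None
for shape in shapes:
    if mouse.isPressedIn(shape):
        pressed_shape = shape

print(pressed_shape.name)  # "square": the later overlapping shape wins
```

If you wanted the first match instead (e.g. the topmost stimulus when your list is in draw order), add a break after the assignment, or iterate the list in reversed() order.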
I need to add a batch of arrow annotations to an image; I know all the start and end points of the arrows.
I've put them into an image (2 columns, many rows) which I use as a data sheet. How do I do this in a script?
I noticed in the DM help manual that the line annotation has the attributes start point and end point.
But the function to create an arrow annotation just looks like this:
Component NewArrowAnnotation( Number top, Number left, Number bottom, Number right )
Does that mean that top and left define the start point, and bottom and right the end point?
I also need to change the color of the annotations, and add some text next to them (either side is OK, but please show me how to control it).
What isn't always clear from the current documentation is that annotations belong to the component object. You therefore find all required commands documented in the component section of the help documentation.
Note that imageDisplays are themselves a subclass of the component object, so you can add "annotations" (components) to an "imageDisplay" (component) using the ComponentAddChild... commands.
A script to add a simple arrow-annotation (pointing at 200/200) can therefore look like the following:
image test := RealImage( "Test", 4, 512, 512 )
test.ShowImage()
ImageDisplay disp = test.ImageGetImageDisplay(0)
component arrow = NewArrowAnnotation(100,100,200,200)
arrow.ComponentSetForegroundColor( 0, 1, 1 )
disp.ComponentAddChildAtEnd( arrow )
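For the text next to the arrows: if I remember the API correctly there is also a NewTextAnnotation command in the same component family, so treat the signature below as an assumption and verify it in the component section of your help files. Positioning the text near the arrow's end point puts the label next to the arrow:

```
// Assumed signature ( x, y, text, fontsize ) -- check NewTextAnnotation
// in your help files, as it may differ between GMS versions.
component label = NewTextAnnotation( 210, 190, "my arrow", 12 )
label.ComponentSetForegroundColor( 0, 1, 1 )
disp.ComponentAddChildAtEnd( label )
```

Since all of these are components, looping over the rows of your data-sheet image and calling NewArrowAnnotation plus a label per row is then straightforward.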