How to create an object with gdscript? - game-engine

As mentioned in the title, I want to know how to create a 3D object with GDScript in Godot 3.1. I am new to Godot. I have searched and followed some tutorials, and that really helped.
I want to know how to:
1. create a cube
2. add an image texture to it
3. attach a script to it
all with GDScript.
So far, this is all I know:
var cube1 = MeshInstance.new()
I know a little about the scene-based approach, but I want to follow this one if possible.
Many thanks in advance

Mirza,
Initially, your script will need to be attached to a parent node. I'm using a plain Node object. Change this to whatever type your parent node is.
The code is commented, explaining each step required to fulfill your points, to the best of my knowledge. I've swapped the order in which steps 1 and 3 are done, as a script needs to be set before its node is added as a child to a parent. Otherwise, it follows the same sequence you stated.
Before creating the mesh, you need a physics body. Which one you use is up to you.
extends Node # or whatever object type it's attached to

# Preloads the script to be attached
const my_script = preload("res://Scripts/your_script.gd")

func _ready(): # Runs when the scene is initialized
    # STEP 1: add a cube to the scene.
    # Step 1.1, create a physics body.
    # I'm using a static body, but this can be any
    # other type of physics body.
    var cube = StaticBody.new()
    cube.transform.origin = Vector3(0, 0, 0) # change the initial position here
    # STEP 3: attach the script.
    # The script actually needs to be attached
    # before the node is added as a child to the
    # parent, so your step 3 goes here.
    cube.set_script(my_script)
    self.add_child(cube) # Add as a child node to self
    # Step 1.2, add a collision shape to the
    # physics body, defining its shape as a box (cube).
    var coll = CollisionShape.new()
    coll.shape = BoxShape.new()
    cube.add_child(coll) # Add as a child node to cube
    # Step 1.3, add a mesh so that the cube is visible.
    var mesh = MeshInstance.new()
    mesh.mesh = CubeMesh.new()
    cube.add_child(mesh) # Add as a child node to cube
    # STEP 2: apply the texture.
    # Step 2.1, load your texture from the project.
    # load() goes through Godot's resource importer,
    # which also works in exported builds.
    var new_texture = load("res://path/to/new_texture.png")
    # Step 2.2, create a material for the cube.
    # A fresh CubeMesh has no material of its own, so
    # get_surface_material(0) would return null here;
    # create a SpatialMaterial instead.
    var cube_material = SpatialMaterial.new()
    # Step 2.3, set the material's albedo to your new
    # texture and apply the material to the mesh.
    cube_material.albedo_texture = new_texture
    mesh.material_override = cube_material
Hope this helps. Any questions, let me know.

Related

Is there any way to access blender defined vertex group in Godot?

I'm trying to automate SoftBody creation in Godot. Everything is working perfectly except for the part where I'm supposed to supply pinned points for the mesh.
var node = SoftBody.new()
node.pinned_points = [] # This is the array where I'm supposed to supply vertex IDs, like: [1, 45, 1142]
Those are the vertices that keep the mesh holding its position. Since I'll use the script for multiple models, I can't use the Godot editor for that purpose. So I thought I'd make an additional vertex group in Blender and access those vertices in Godot, but I don't know the HOW part. Is there any alternative way? I'm open to ideas. Thanks!

Can we get the list of followed edges of the current edge?

In SUMO, can we get the list of next edges (if they exist) given the current edge? Also, can we get the four incoming approaches to a normal intersection?
It depends on whether you want to use TraCI and need the following edge on the route of a specific vehicle, or whether you just want a static network analysis. For the latter you can use sumolib (at least if you can use Python):
# import the library
import sumolib
# parse the net
net = sumolib.net.readNet('myNet.net.xml')
# retrieve the successor edges of an edge
nextEdges = net.getEdge('myEdgeID').getOutgoing()
# retrieve the incoming edges of the destination of an edge
incomingEdges = net.getEdge('myEdgeID').getToNode().getIncoming()
see https://sumo.dlr.de/docs/Tools/Sumolib.html
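For the TraCI side (the following edge on the route of a specific vehicle), here is a minimal sketch; the config file name and vehicle ID are made-up examples and the simulation is assumed to be running:
import traci
traci.start(["sumo", "-c", "myConfig.sumocfg"]) # assumed config file
route = traci.vehicle.getRoute("myVehicleID") # edge IDs along the vehicle's route
idx = traci.vehicle.getRouteIndex("myVehicleID") # index of the current edge in that route
if idx + 1 < len(route):
    nextEdge = route[idx + 1]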

How to detect only one specified class instead of all classes in tensorflow object detection?

I trained my dataset with six classes and it works fine in detecting the different classes. Is it possible to modify the object detector script to detect only one specified class instead of all six, or must I retrain my dataset for one class from scratch? Thanks a lot for any recommendation.
Here is the drawing part of my object detector script:
vis_util.visualize_boxes_and_labels_on_image_array(
    image,
    np.squeeze(boxes),
    np.squeeze(classes).astype(np.int32),
    np.squeeze(scores),
    category_index,
    use_normalized_coordinates=True,
    line_thickness=1,
    agnostic_mode=False,
    groundtruth_box_visualization_color='black',
    skip_scores=False,
    skip_labels=False,
    min_score_thresh=0.80)
Unless you change the code, you're going to get probabilities for all the classes. Of course, you can select the highest-scoring one among them.
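For instance, a minimal sketch of that idea, keeping only the single highest-scoring detection (it reuses the boxes, scores and classes arrays from the snippet above):
best = int(np.argmax(np.squeeze(scores))) # index of the top-scoring detection
best_box = np.squeeze(boxes)[best]
best_class = int(np.squeeze(classes)[best])
best_score = float(np.squeeze(scores)[best])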
It might not be the best solution to this problem, but you could try making a copy of the label_map.pbtxt file (one to alter and one for safe-keeping) and deleting all labels but the one you are interested in from one of them.
Then you can lower min_score_thresh to maybe 0.1 or so (or not modify this parameter at all), and only the one label you kept in the label_map.pbtxt file will be detected.
If you are using the Object Detection API from GitHub, the mscoco_label_map.pbtxt file can be found in models-master/research/object_detection/data/ (remember to open it with a text editor).
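For illustration, a trimmed label map keeping a single class would look something like this (the id and name here are made-up examples):
item {
  id: 1
  name: 'person'
}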
Before you call the visualization function, add the following code:
objectOfInterest = 1 # class number of the object of interest, as per the label file
box = np.asarray(boxes)
cls = np.asarray(classes).astype(np.int32)
scr = np.asarray(scores)
bl = (cls == objectOfInterest) # boolean mask for the class of interest
classes = np.extract(bl, cls)
scores = np.extract(bl, scr)
boxes = np.extract(bl, box)
The code suggested above by Suman was almost perfect, but the "boxes" array needs to keep its 4-element tuples (box coordinates), and np.extract flattens them. To select a specific class, the matching box coordinate tuples have to be selected as well. So I've added some lines before the code suggested by Suman. Check the code below:
objectOfInterest = 1 # class number of the object of interest, as per the label file
box = np.asarray(boxes)
cls = np.asarray(classes).astype(np.int32)
scr = np.asarray(scores)
boxes = []
for i in range(len(cls)):
    if cls[i] == objectOfInterest:
        boxes.append(box[i])
boxes = np.array(boxes)
bl = (cls == objectOfInterest)
classes = np.extract(bl, cls)
scores = np.extract(bl, scr)
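For what it's worth, the same filtering can be written more compactly with boolean indexing, which keeps the (N, 4) shape of the box array intact; a sketch using the same variable names as above:
mask = np.squeeze(classes).astype(np.int32) == objectOfInterest
boxes = np.squeeze(boxes)[mask]
classes = np.squeeze(classes).astype(np.int32)[mask]
scores = np.squeeze(scores)[mask]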

MDAnalysis selects atoms from the PBC box but does not shift the coordinates

MDAnalysis distance selection commands like 'around' and 'sphzone' select atoms from the periodic image (I am using a rectangular box).
universe.select_atoms("name OW and around 4 (resid 20 and name O2)")
However, the coordinates of the atoms from the PBC box reside on the other side of the box. In other words, I have to manually translate the atoms to ensure that they actually are within the 4 Angstrom distance.
Is there a selection feature to achieve this using the select_atoms function?
If I understand correctly, you would like to get the atoms around a given selection, in the image that is closest to that selection.
universe.select_atoms does not modify the coordinates, and I am not aware of a function that gives you what you want. The following function could work for an orthorhombic box like yours:
def pack_around(atom_group, center):
    """
    Translate atoms to their periodic image closest to a given point.

    The function assumes that the center is in the main periodic image.
    """
    # Get the box for the current frame
    box = atom_group.universe.dimensions
    # The next steps assume that all the atoms are in the same
    # periodic image, so let's make sure it is the case
    atom_group.pack_into_box()
    # AtomGroup.positions is a property rather than a simple attribute.
    # It does not always propagate changes very well, so let's work with
    # a copy of the coordinates for now.
    positions = atom_group.positions.copy()
    # Identify the *coordinates* to translate: any component that is
    # further than half a box length from the center.
    sub = positions - center
    culprits = numpy.where(numpy.abs(sub) > box[:3] / 2)
    # Actually translate those coordinates by one box length.
    positions[culprits] -= (box[culprits[1]]
                            * numpy.sign(sub[culprits]))
    # Propagate the new coordinates.
    atom_group.positions = positions
Using that function, I got the expected behavior on one of the MDAnalysis test files. You need MDAnalysisTests to be installed to run the following piece of code:
import numpy
import MDAnalysis as mda
from MDAnalysisTests.datafiles import PDB_sub_sol
u = mda.Universe(PDB_sub_sol)
selection = u.select_atoms('around 15 resid 32')
center = u.select_atoms('resid 32').center_of_mass()
# Save the initial file for later comparison
u.atoms.write('original.pdb')
selection.write('selection_original.pdb')
# Translate the coordinates
pack_around(selection, center)
# Save the new coordinates
u.atoms.write('modified.pdb')
selection.write('selection_modified.pdb')
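As a quick sanity check (my own addition, not part of the original answer), after packing no atom of the selection should sit more than half a box length from the center along any axis:
shift = numpy.abs(selection.positions - center)
print((shift <= u.dimensions[:3] / 2).all()) # expected to print True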

Efficient way to draw a continuous line in PsychoPy

I'm looking for a more efficient way to draw continuous lines in PsychoPy. This is what I've come up with, for now...
edit: the only improvement I could think of is to add a new line only if the mouse has really moved, by adding if (mspos1-mspos2).any():
ms = event.Mouse(myWin)
lines = []
mspos1 = ms.getPos()
while True:
    mspos2 = ms.getPos()
    if (mspos1-mspos2).any():
        lines.append(visual.Line(myWin, start=mspos1, end=mspos2))
    for j in lines:
        j.draw()
    myWin.flip()
    mspos1 = mspos2
edit: I tried it with ShapeStim (code below), hoping that it would work better, but it gets edgy even more quickly...
vertices = [ms.getPos()]
con_line = visual.ShapeStim(myWin,
                            lineColor='red',
                            closeShape=False)
myclock.reset()
i = 0
while myclock.getTime() < 15:
    new_pos = ms.getPos()
    if (vertices[i]-new_pos).any():
        vertices.append(new_pos)
        i += 1
        con_line.vertices = vertices
    con_line.draw()
    myWin.flip()
The problem is that it becomes too resource-demanding to draw that many visual.Lines, or to manipulate that many vertices in the visual.ShapeStim, on each iteration of the loop. The draw (for Lines) or the vertex assignment (for ShapeStim) then takes so long that the mouse has moved an appreciable distance in the meantime, making the line show discontinuities ("edgy").
So it's a performance issue. Here are two ideas:
Set a threshold for the minimum distance the mouse must travel before you add a new coordinate to the line. In the example below I impose the criterion that the mouse position should be at least 10 pixels away from the previous vertex to be recorded. In my testing, this compressed the number of vertices recorded per second to about a third. This strategy alone will postpone the performance issue but not prevent it, so on to...
Use the ShapeStim solution but regularly start a new ShapeStim, each with fewer vertices, so that the stimulus to be updated never gets too complex. In the example below I set the limit at 500 vertices before shifting to a new stimulus. There might be a small glitch while generating the new stimulus, but nothing I've noticed.
So combining these two strategies, starting and ending mouse drawing with a press on the keyboard:
# Setting things up
from psychopy import visual, event, core
import numpy as np

# The crucial controls for performance. Adjust to your system/liking.
distance_to_record = 10  # number of pixels between coordinate recordings
screenshot_interval = 500  # number of coordinate recordings before shifting to a new ShapeStim

# Stimuli
myWin = visual.Window(units='pix')
ms = event.Mouse()
myclock = core.Clock()

# The initial ShapeStim in the "stimuli" list. We can refer to the latest
# as stimuli[-1] and will do that throughout the script. The others are
# "finished" and will only be used for drawing.
stimuli = [visual.ShapeStim(myWin,
                            lineColor='white',
                            closeShape=False,
                            vertices=np.empty((0, 2)))]

# Wait for a key, then start with this mouse position
event.waitKeys()
stimuli[-1].vertices = np.array([ms.getPos()])
myclock.reset()
while not event.getKeys():
    # Get the mouse position
    new_pos = ms.getPos()
    # Calculate the distance moved since the last recorded vertex.
    # Pure Pythagoras. Index -1 is the last row.
    distance_moved = np.sqrt((stimuli[-1].vertices[-1][0] - new_pos[0])**2
                             + (stimuli[-1].vertices[-1][1] - new_pos[1])**2)
    # If the mouse has moved the minimum required distance,
    # add the new vertex to the ShapeStim...
    if distance_moved > distance_to_record:
        stimuli[-1].vertices = np.append(stimuli[-1].vertices, np.array([new_pos]), axis=0)
        # ... and show it (along with any "full" ShapeStims)
        for stim in stimuli:
            stim.draw()
        myWin.flip()
        # Add a new ShapeStim once the old one is too full
        if len(stimuli[-1].vertices) > screenshot_interval:
            print("new ShapeStim now!")
            stimuli.append(visual.ShapeStim(myWin,
                                            lineColor='white',
                                            closeShape=False,
                                            vertices=[stimuli[-1].vertices[-1]]))  # start from the last vertex