Counting pigs crossing a line using OpenCV - numpy

I'm trying to count the number of piglets that enter and leave a zone. This is important because, in my project, there is a scale underneath the zone that measures the weight of the animals. My goal is to find the pig's weight, so I will count the number of piglets that enter the zone: when that count is zero, I have the pig's weight, and from the number of piglets that got in I can also calculate the weight of each one.
But the weight calculation is for later; right now I need help with the counting process.
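To make the weight bookkeeping concrete, this is roughly what I have in mind (a hypothetical sketch; estimate_weights, scale_reading, pig_weight, and piglet_count are made-up names):

def estimate_weights(scale_reading, piglet_count, pig_weight):
    # hypothetical sketch of the intended bookkeeping, not working code
    if piglet_count == 0:
        # only the pig is on the scale, so the reading is her weight
        return scale_reading, None
    # otherwise split the extra weight among the piglets currently in the zone
    piglet_avg = (scale_reading - pig_weight) / piglet_count
    return pig_weight, piglet_avg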
The video can be seen here. The entrance happens from 00:40 until 02:00, and the exit starts at 03:54 and continues to the end of the video, because from that point on the piglets keep entering and leaving the zone.
I've successfully counted the entrances with the code below. I defined a very small region of interest and filter the pigs according to their colors. It works fine until the piglets start to move around and get very active, leaving and entering the zone all the time.
I'm out of ideas for how to proceed with this challenge. If you have any suggestions, please tell me!
Thanks!!
import cv2

FULL_VIDEO_PATH = "PATH TO THE FULL VIDEO"

MAX_COLOR = (225, 215, 219)
MIN_COLOR = (158, 141, 148)


def get_centroid(x, y, w, h):
    x1 = int(w / 2)
    y1 = int(h / 2)
    cx = x + x1
    cy = y + y1
    return cx, cy


def filter_mask(frame):
    # create a copy from the ROI to be filtered
    ROI = (frame[80:310, 615:620]).copy()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

    # create a green rectangle on the structure that creates noise
    thicker_line_filtered = cv2.rectangle(ROI, (400, 135), (0, 165), (20, 200, 20), -1)
    closing = cv2.morphologyEx(thicker_line_filtered, cv2.MORPH_CLOSE, kernel)
    opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
    dilation = cv2.dilate(opening, kernel, iterations=2)

    # Filter the image according to the colors
    segmented_line = cv2.inRange(dilation, MIN_COLOR, MAX_COLOR)

    # Resize segmented line only for plot
    copy = cv2.resize(segmented_line, (200, 400))
    cv2.imshow('ROI', copy)
    return segmented_line


def count_pigs():
    cap = cv2.VideoCapture(FULL_VIDEO_PATH)
    ret, frame = cap.read()

    total_pigs = 0
    frames_not_seen = 0
    last_center = 0
    is_position_ok = False
    is_size_ok = False
    total_size = 0
    already_counted = False

    while ret:
        # Window interval used for counting
        count_window_interval = (615, 0, 620, 400)

        # Filter frame
        fg_mask = filter_mask(frame)

        # Draw a line on the frame, which represents when the pigs will be counted
        frame_with_line = cv2.line(frame, count_window_interval[0:2], count_window_interval[2:4], (0, 0, 255), 1)

        contours, _ = cv2.findContours(fg_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

        # If no contour is found, increment the variable
        if len(contours) == 0:
            frames_not_seen += 1

        # If no contours are found within 5 frames, set last_center to 0 to generate the position difference when
        # a new contour is found.
        if frames_not_seen > 5:
            last_center = 0

        for c in contours:
            frames_not_seen = 0

            # Find the contour coordinates
            (x, y, w, h) = cv2.boundingRect(c)

            # Calculate the rectangle's center
            centroid = get_centroid(x, y, w, h)

            # Get the moments from the contour to calculate its size
            moments = cv2.moments(c)

            # Get contour's size
            size = moments['m00']

            # Sum the size until the current pig is counted
            if not already_counted:
                total_size += size

            # If the difference between the last center and the current one is bigger than 80 - which means a new pig
            # entered the counting zone - set the position ok and set already_counted to False to mitigate noise
            # with significant differences being counted
            if abs(last_center - centroid[1]) > 80:
                is_position_ok = True
                already_counted = False

            # Impose limits on the size to evaluate whether the contour is consistent
            # Min and max values determined experimentally
            if 1300 < total_size < 5500:
                is_size_ok = True

            # If all conditions are True, count the pig and reset all of them.
            if is_position_ok and is_size_ok and not already_counted:
                is_position_ok = False
                is_size_ok = False
                already_counted = True
                total_size = 0
                total_pigs += 1

            last_center = centroid[1]

        frame_with_line = cv2.putText(frame_with_line, f'Pigs: {total_pigs}', (100, 370), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 0), 2)

        cv2.imshow('Frame', frame_with_line)
        cv2.moveWindow('ROI', 1130, 0)
        cv2.moveWindow('Frame', 0, 0)

        k = cv2.waitKey(15) & 0xff
        if k == 27:
            break
        elif k == 32:
            cv2.waitKey() & 0xff

        ret, frame = cap.read()

    cv2.destroyAllWindows()
    cap.release()


if __name__ == '__main__':
    count_pigs()

Related

Non Linear MPC optimization of a 2 dimensional drone

I am trying to simulate a drone on a 2-dimensional lunar surface. The drone can apply thrust along the z-axis of its body, and it can change its body angle from -90 degrees to +90 degrees.
The first planned acceleration in the y direction that the MPC function gives is a negative value that exceeds the lunar accel_g, which I set to 1.635 m/s^2; thus, the drone cancels out the initial velocity really quickly. This should not happen, since I set the body-angle constraints such that the thrust can never reduce the vertical velocity: the drone's vertical velocity should be reduced only by lunar gravity. I cannot find what is wrong with the code.
Also, is there a way I can apply rotation to the plot marker? I want to rotate the cross marker so that it represents the changes in attitude.
function run_mpc(initial_position, initial_velocity, initial_angle)

    model = Model(Ipopt.Optimizer)

    Δt = 0.1
    num_time_steps = 20        # Change this -> Affects Optimization
    max_acceleration_Thr = 3   # Max Thrust / Mass
    max_pitch_angle = 90
    accel_g = 1.635            # 1/6 of Earth G
    des_pos = [-1, 0]

    @variables model begin
        position[1:2, 1:num_time_steps]
        velocity[1:2, 1:num_time_steps]
        acceleration[1:2, 1:num_time_steps]
        -max_pitch_angle <= angle[1:num_time_steps] <= max_pitch_angle
        0 <= accel_Thr[1:num_time_steps] <= max_acceleration_Thr
    end

    # Dynamics constraints
    @NLconstraint(model, [i=2:num_time_steps, j=[1]],
                  acceleration[j, i] == accel_Thr[i-1]*sind(angle[i-1]))
    @NLconstraint(model, [i=2:num_time_steps, j=[2]],
                  acceleration[j, i] == (accel_Thr[i-1]*cosd(angle[i-1])) - accel_g)
    @NLconstraint(model, [i=2:num_time_steps, j=1:2],
                  velocity[j, i] == velocity[j, i - 1] + (acceleration[j, i - 1]) * Δt)
    @NLconstraint(model, [i=2:num_time_steps, j=1:2],
                  position[j, i] == position[j, i - 1] + velocity[j, i - 1] * Δt)

    # Cost function: minimize final position and final velocity
    # For moving to [-2,0] with min. vertical velocity:
    # sum(([-2,0]-position[:, end]).^2) + sum(velocity[[2], end].^2)
    @NLobjective(model, Min,
        100 * sum((des_pos[i] - position[i, num_time_steps])^2 for i in 1:2) + sum(velocity[i, num_time_steps]^2 for i in 1:2))

    # Initial conditions:
    @NLconstraint(model, [i=1:2], position[i, 1] == initial_position[i])
    @NLconstraint(model, [i=1:2], velocity[i, 1] == initial_velocity[i])
    @NLconstraint(model, angle[1] == initial_angle)

    optimize!(model)
    return value.(position), value.(velocity), value.(acceleration), value.(angle[2:end])
end;

begin
    # The robot's starting position and velocity
    q = [1.0, 0.0]
    v = [-2.0, 2.0]
    ang = 45
    Δt = 0.1

    # Recording Position, Acceleration, Attitude, Planned Positions
    qs_x = []
    qs_y = []
    as_x = []
    as_y = []
    angs = []
    q_plans = []
    u_plans = []

    anim = @animate for i in 1:90   # This determines the number of MPC runs
        # Plot the current position & attitude
        plot(label = "Drone", [q[1]], [q[2]], marker=(:rect, 10), xlim=(-2, 2), ylim=(-2, 2))
        plot!(label = "Body Axis", [q[1]], [q[2]], marker=(:cross, 18, :grey))
        push!(qs_x, q[1])
        push!(qs_y, q[2])

        # Run the MPC control optimization
        q_plan, v_plan, u_plan, ang_plan = run_mpc(q, v, ang)

        # Draw the planned future states from the MPC optimization
        plot!(label = "Opt. Path", q_plan[1, :], q_plan[2, :], linewidth=5, arrow=true, c=:orange)

        # Draw the planned acceleration
        plot!(label = "Opt. Accel", u_plan[1, 1:2], u_plan[2, 1:2], linewidth=3, arrow=true, c=:red)

        # Save Acceleration & Angle Data to csv
        u = u_plan[:, 1]
        push!(as_x, u[1])
        push!(as_y, u[2])
        push!(angs, ang)
        push!(u_plans, u_plan)

        # Apply the planned acceleration & attitude and simulate one step in time
        global ang = ang_plan[1]
        global v += u * Δt
        global q += v * Δt
    end
    gif(anim, "~/Downloads/NLmpc_angle.gif", fps=60)
end

Calculating the size of an object using opencv and numpy poly1d

I'm looking to use a small numpy array to generate a curve that I can use to predict the height measurement at unknown points. I have several points that I am using to create a poly1d. I know it's possible: we use software at work that does it just fine, and when I used a different image as a tester, plugging the values into Excel and getting the polynomial worked fine. But on this other calibratable image I'm getting drastically different results.
Here is the image that I'm trying to measure.
The stick on the front of the pole contains known measurements. From bottom to top, they are 3'6" (42"), 6'6" (78"), 9'8" (116"), and 13' (156").
The picture has been through opencv undistort with a calibrated camera.
This is the function that actually performs the logic. x and y are gathered by a cv2 EVENT_LBUTTONUP callback and sent to this function.
Checking the lengths of the array is just to help me figure out why this isn't working, and to try to generate a line showing the curve fit.
dist = self.firstClick - y
self.yData.append(dist)
if len(self.yData) > 4:
    print(self.poly(dist))
if len(self.yData) == 4:
    array = np.array(self.xData)
    array = np.expand_dims(array, axis=0)
    print(self.xData)
    print(self.yData)
    array = np.append(array, [self.yData], axis=0)
    print(array)
    x = array[:, 0]
    y = array[:, 1]
    self.poly = np.poly1d(np.polyfit(x, y, 2))
    poly1d = np.poly1d(self.poly)
    xp = np.linspace(-2, 20, 1)
    _ = plt.plot(x, y, '.', xp, self.poly(xp), '-', xp, self.poly(xp), '--')
    plt.ylim(0, 200)
    plt.show()
When I run this code, my values tend to quickly go into the tens of thousands when I'm attempting to collect the measurement at 18'11" (the lowest wire).
Any help would be appreciated, I've been up all night trying to fit this curve.
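For reference, here is a minimal, self-contained sketch of the fit I'm after, using the four known stick heights against made-up pixel offsets (the pixel values are placeholders, not my real clicks):

import numpy as np
import matplotlib.pyplot as plt

heights = np.array([42.0, 78.0, 116.0, 156.0])    # known marks on the stick, in inches
pixels = np.array([180.0, 340.0, 480.0, 610.0])   # hypothetical pixel offsets from the pole base

# fit pixel offset -> height with a quadratic and evaluate it at a new click
poly = np.poly1d(np.polyfit(pixels, heights, 2))
print(poly(800))                                  # predicted height 800 px above the base

# plot the fit over the sampled range to sanity-check its shape
xp = np.linspace(0, 900, 100)
plt.plot(pixels, heights, '.', xp, poly(xp), '-')
plt.ylim(0, 250)
plt.show()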
Edit:
Sorry, I should have included the code used to display and scale the image.
self.img = cv2.imread(imagePath, cv2.IMREAD_ANYCOLOR)
self.scale_percent = 30
self.width = int(self.img.shape[1] * self.scale_percent/100)
self.height = int(self.img.shape[0] * self.scale_percent/100)
dsize = (self.width, self.height)
self.output = cv2.resize(self.img, dsize)
img = self.output
cv2.imshow('image', img)
cv2.setMouseCallback('image', self.click_event)
cv2.waitKey()
I just called this function to display the image and the below code to calibrate the values.
if self.firstClick == 0:
    self.firstClick = y
    cv2.putText(self.output, "Pole Base", (x, y), font, 1, (255, 255, 0), 2)
    cv2.imshow('image', self.output)
elif self.firstClick != 0 and self.secondClick == 0:
    self.secondClick = y
    print("The difference in first and second clicks is", self.firstClick - self.secondClick)
    first = self.firstClick - self.secondClick
    inch = first / 42
    foot = inch * 12
    self.foot = foot
    print("One foot is currently: ", foot)
    self.firstLine = 3.5 * 12
    self.secondLine = 6.5 * 12
    self.thirdLine = 9.67 * 12
    self.fourthLine = 13 * 12
    self.xData = np.array([self.firstLine, self.secondLine, self.thirdLine, self.fourthLine])
    self.yData.append(self.firstLine)
    print(self.firstLine)
    print(self.secondLine)
    print(self.thirdLine)
    print(self.fourthLine)

Find 7 vertices of a box using openCV

I don't know if this question has been asked here before. If it has, I'm sorry.
I have a box positioned so that its height, width, and length are all visible. I understand the steps to get vertices; however, most examples on the net only describe how to get 4 vertices from a 2D plane. So my question is: how do we get 7 vertices (like in the picture above) and handle them in numpy? How do we differentiate between the upper and lower points?
I will be using Python to determine this.
Here's my attempt to get the 8 corners of the 3d rectangle. I masked on the saturation channel of the HSV color space since that separates out white.
I used findContours to get the contour of the box and then used approxPolyDP to get a six-point approximation (the six visible corners).
From there I approximated the two "hidden" corners via a parallelogram approximation. For each point I looked two points behind and created a fourth point that would make a parallelogram with that side. I then took the centroid of these parallelogram points to guess the corner. I hoped that taking the centroid of the points would help even out the error between the parallelogram assumption and the perspective warping, but it did a poor job.
If you need a better approximation there are probably ways to estimate the perspective warping to get the corners.
import cv2
import numpy as np
import random

def tup(point):
    return (int(point[0]), int(point[1]));

# load image
img = cv2.imread("box.jpg");

# reduce size to fit on screen
scale = 0.25;
h, w = img.shape[:2];
h = int(scale*h);
w = int(scale*w);
img = cv2.resize(img, (w,h));
copy = np.copy(img);

# convert to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV);
h,s,v = cv2.split(hsv);

# make mask
mask = cv2.inRange(s, 30, 255);

# dilate and erode to get rid of small holes
kernel = np.ones((5,5), np.uint8);
mask = cv2.dilate(mask, kernel, iterations = 1);
mask = cv2.erode(mask, kernel, iterations = 1);

# contours # OpenCV 3.4, in OpenCV 2 or 4 it returns (contours, _)
_, contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
contour = contours[0]; # just take the first one

# approx until 6 points
num_points = 999999;
step_size = 0.01;
percent = step_size;
while num_points >= 6:
    # get number of points
    epsilon = percent * cv2.arcLength(contour, True);
    approx = cv2.approxPolyDP(contour, epsilon, True);
    num_points = len(approx);
    # increment
    percent += step_size;

# step back and get the points
# there could be more than 6 points if our step size misses it
percent -= step_size * 2;
epsilon = percent * cv2.arcLength(contour, True);
approx = cv2.approxPolyDP(contour, epsilon, True);

# draw contour
cv2.drawContours(img, [approx], -1, (0,0,200), 2);

# draw points
for point in approx:
    point = point[0]; # drop extra layer of brackets
    center = (int(point[0]), int(point[1]));
    cv2.circle(img, center, 4, (150, 200, 0), -1);

# do parallelogram approx to get the two "hidden" corners to complete our 3d rectangle
proposals = [];
size = len(approx);
for a in range(size):
    # get points backwards
    two = approx[a - 2][0];
    one = approx[a - 1][0];
    curr = approx[a][0];

    # get vector from one -> two
    dx = two[0] - one[0];
    dy = two[1] - one[1];
    hidden = [curr[0] + dx, curr[1] + dy];
    proposals.append([hidden, curr, a, two]);

    # debug draw
    c = np.copy(copy);
    cv2.circle(c, tup(two), 4, (255, 0, 0), -1);
    cv2.circle(c, tup(one), 4, (0,255,0), -1);
    cv2.circle(c, tup(curr), 4, (0,0,255), -1);
    cv2.circle(c, tup(hidden), 4, (255,255,0), -1);
    cv2.line(c, tup(two), tup(one), (0,0,200), 1);
    cv2.line(c, tup(curr), tup(hidden), (0,0,200), 1);
    cv2.imshow("Mark", c);
    cv2.waitKey(0);

# draw proposals
for point in proposals:
    point = point[0];
    center = (point[0], point[1]);
    cv2.circle(img, center, 4, (200, 100, 0), -1);

# group points and sum up points
hidden_corners = [[0,0], [0,0]];
for point in proposals:
    # get index and update hidden corners
    index = point[2] % 2;
    pos = point[0];
    hidden_corners[index][0] += pos[0];
    hidden_corners[index][1] += pos[1];

# divide to get centroid
hidden_corners[0][0] /= 3.0;
hidden_corners[0][1] /= 3.0;
hidden_corners[1][0] /= 3.0;
hidden_corners[1][1] /= 3.0;

# draw new points
for point in proposals:
    # unpack
    pos = point[0];
    parent = point[1];
    index = point[2] % 2;
    source = point[3];

    # draw
    color = [random.randint(0, 150) for a in range(3)];
    cv2.line(img, tup(hidden_corners[index]), tup(parent), (0,0,200), 2);
    cv2.line(img, tup(pos), tup(parent), color, 1);
    cv2.line(img, tup(pos), tup(source), color, 1);
    cv2.circle(img, tup(hidden_corners[index]), 4, (200, 200, 0), -1);

# show
cv2.imshow("Image", img);
cv2.imshow("Mask", mask);
cv2.waitKey(0);

Why does OpenCV's Meanshift tracking algorithm only track object the first time?

I am running the meanshift tracking algorithm to track objects in a live stream (with a webcam) in OpenCV; however, the algorithm only works the first time it is run and does not work when I run the program again unless I restart my computer. Why is this so?
Algorithm taken from: https://docs.opencv.org/trunk/db/df8/tutorial_py_meanshift.html
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# take first frame of the video
ret, frame = cap.read()

# setup initial location of window
r, h, c, w = 250, 90, 400, 125  # simply hardcoded the values
track_window = (c, r, w, h)

# set up the ROI for tracking
roi = frame[r:r+h, c:c+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Setup the termination criteria, either 10 iterations or move by at least 1 pt
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while(1):
    ret, frame = cap.read()
    if ret == True:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

        # apply meanshift to get the new location
        ret, track_window = cv2.meanShift(dst, track_window, term_crit)

        # Draw it on image
        x, y, w, h = track_window
        img2 = cv2.rectangle(frame, (x, y), (x+w, y+h), 255, 2)
        cv2.imshow('img2', img2)

        k = cv2.waitKey(60) & 0xff
        if k == 27:
            break
        else:
            cv2.imwrite(chr(k)+".jpg", img2)
    else:
        break

cv2.destroyAllWindows()
cap.release()

Can this PyGame code run 60fps for >40 critters?

In my other question, some of the posters asked to see the code and suggested I make a new question. As requested, here is most of the code I'm using. I've removed the Vector class, simply because it's a lot of code. It's well-understood math that I got from someone else (https://gist.github.com/mcleonard/5351452), and cProfile didn't have much to say about any of the functions there. I've provided a link in the code, if you want to make this runnable.
This code should run if you paste the vector class where indicated in the code.
The problem is, once I get above 20 critters, the framerate drops rapidly from 60fps to 11fps around 50 critters.
Please excuse the spaghetti-code. Much of this is diagnostic kludging or pre-code that I intend to either remove, or turn into a behavior (instead of a hard-coded value).
This app is basically composed of 4 objects.
A Vector object provides abstracted vector operations.
A Heat Block is able to track its own "heat" level, increase it, and decrease it. It can also draw itself.
A Heat Map is composed of heat blocks which are tiled across the screen. When given coordinates, it can choose the block that those coordinates fall within.
A Critter has many features that make it able to wander around the screen, bump off of the walls and other critters, choose a new random direction, and die.
The main loop iterates through each critter in the "flock" and updates its "condition" (whether or not it's "dying"), its location, its orientation, and the heat block on which it is currently standing. The loop also iterates over each heat block to let it "cool down."
Then the main loop asks the heat map to draw itself, and then each critter in the flock to draw itself.
import pygame
from pygame import gfxdraw
import pygame.locals
import os
import math
import random
import time
(I got a nice vector class from someone else. It's large, and most likely not the problem.)
(INSERT CONTENTS OF VECTOR.PY FROM https://gist.github.com/mcleonard/5351452 HERE)
pygame.init()

# some global constants
BLUE = (0, 0, 255)
WHITE = (255, 255, 255)
diagnostic = False
SPAWN_TIME = 1      # number of seconds between creating new critters
FLOCK_LIMIT = 30    # number of critters at which the flock begins being culled
GUIDs = [0]         # list of guaranteed unique IDs for identifying each critter

# Set the position of the OS window
position = (30, 30)
os.environ['SDL_VIDEO_WINDOW_POS'] = str(position[0]) + "," + str(position[1])

# Set the position, width and height of the screen [width, height]
size_x = 1000
size_y = 500
size = (size_x, size_y)
FRAMERATE = 60
SECS_FOR_DYING = 1

screen = pygame.display.set_mode(size)
screen.set_alpha(None)
pygame.display.set_caption("My Game")

# Used to manage how fast the screen updates
clock = pygame.time.Clock()

def random_float(lower, upper):
    num = random.randint(lower*1000, upper*1000)
    return num/1000

def new_GUID():
    num = GUIDs[-1]
    num = num + 1
    while num in GUIDs:
        num += 1
    GUIDs.append(num)
    return num

class HeatBlock:
    def __init__(self, _tlx, _tly, h, w):
        self.tlx = int(_tlx)
        self.tly = int(_tly)
        self.height = int(h)+1
        self.width = int(w)
        self.heat = 255.0
        self.registered = False

    def register_tresspasser(self):
        self.registered = True
        self.heat = max(self.heat - 1, 0)

    def cool_down(self):
        if not self.registered:
            self.heat = min(self.heat + 0.1, 255)
        self.registered = False

    def hb_draw_self(self):
        screen.fill((255, int(self.heat), int(self.heat)), [self.tlx, self.tly, self.width, self.height])

class HeatMap:
    def __init__(self, _h, _v):
        self.h_freq = _h                  # horizontal frequency
        self.h_rez = size_x/self.h_freq   # horizontal resolution
        self.v_freq = _v                  # vertical frequency
        self.v_rez = size_y/self.v_freq   # vertical resolution
        self.blocks = []

    def make_map(self):
        h_size = size_x/self.h_freq
        v_size = size_y/self.v_freq
        for h_count in range(0, self.h_freq):
            TLx = h_count * h_size        # TopLeft corner, x
            col = []
            for v_count in range(0, self.v_freq):
                TLy = v_count * v_size    # TopLeft corner, y
                col.append(HeatBlock(TLx, TLy, v_size, h_size))
            self.blocks.append(col)

    def hm_draw_self(self):
        for col in self.blocks:
            for block in col:
                block.cool_down()
                block.hb_draw_self()

    def register(self, x, y):
        # convert the given coordinates of the trespasser into a col/row block index
        col = max(int(math.floor(x / self.h_rez)), 0)
        row = max(int(math.floor(y / self.v_rez)), 0)
        self.blocks[col][row].register_tresspasser()

class Critter:
    def __init__(self):
        self.color = (random.randint(1, 200), random.randint(1, 200), random.randint(1, 200))
        self.linear_speed = random_float(20, 100)
        self.radius = int(round(10 * (100/self.linear_speed)))
        self.angular_speed = random_float(0.1, 2)
        self.x = int(random.randint(self.radius*2, size_x - (self.radius*2)))
        self.y = int(random.randint(self.radius*2, size_y - (self.radius*2)))
        self.orientation = Vector(0, 1).rotate(random.randint(-180, 180))
        self.sensor = Vector(0, 20)
        self.sensor_length = 20
        self.new_orientation = self.orientation
        self.draw_bounds = False
        self.GUID = new_GUID()
        self.condition = 0  # 0 = alive, [1-fps] = dying, >fps = dead
        self.delete_me = False

    def c_draw_self(self):
        # if we're alive and not dying, draw our normal self
        if self.condition == 0:
            # diagnostic
            if self.draw_bounds:
                pygame.gfxdraw.rectangle(screen, [int(self.x), int(self.y), 1, 1], BLUE)
                temp = self.orientation * (self.linear_speed * 20)
                pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + temp[0]), int(self.y + temp[1]), BLUE)
            # if there's a new orientation, match it gradually
            temp = self.new_orientation * self.linear_speed
            # draw my body
            pygame.gfxdraw.aacircle(screen, int(self.x), int(self.y), self.radius, self.color)
            # draw a line indicating my new direction
            pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + temp[0]), int(self.y + temp[1]), BLUE)
            # draw my sensor (a line pointing forward)
            self.sensor = self.orientation.normalize() * self.sensor_length
            pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + self.sensor[0]), int(self.y + self.sensor[1]), BLUE)
        # otherwise we're dying, draw our dying animation
        elif 1 <= self.condition <= FRAMERATE*SECS_FOR_DYING:
            # draw some lines in a spinning circle
            for num in range(0, 10):
                line = Vector(0, 1).rotate((num*(360/10))+(self.condition*23))
                line = line*self.radius
                pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x+line[0]), int(self.y+line[1]), self.color)

    def print_self(self):
        # diagnostic
        print("==============")
        print("radius:", self.radius)
        print("color:", self.color)
        print("linear_speed:", self.linear_speed)
        print("angular_speed:", self.angular_speed)
        print("x:", self.x)
        print("y:", int(self.y))
        print("orientation:", self.orientation)

    def avoid_others(self, _flock):
        for _critter in _flock:
            # if the critter isn't ME...
            if _critter.GUID is not self.GUID and _critter.condition == 0:
                # and it's touching me...
                if self.x - _critter.x <= self.radius + _critter.radius:
                    me = Vector(self.x, int(self.y))
                    other_guy = Vector(_critter.x, _critter.y)
                    distance = me - other_guy
                    # give me a new orientation that's away from the other guy
                    if distance.norm() <= ((self.radius) + (_critter.radius)):
                        new_direction = me - other_guy
                        self.orientation = self.new_orientation = new_direction.normalize()

    def update_location(self, elapsed):
        boundary = '?'
        while boundary != 'X':
            boundary = self.out_of_bounds()
            if boundary == 'N':
                self.orientation = self.new_orientation = Vector(0, 1).rotate(random.randint(-20, 20))
                self.y = (self.radius) + 2
            elif boundary == 'S':
                self.orientation = self.new_orientation = Vector(0, -1).rotate(random.randint(-20, 20))
                self.y = (size_y - (self.radius)) - 2
            elif boundary == 'E':
                self.orientation = self.new_orientation = Vector(-1, 0).rotate(random.randint(-20, 20))
                self.x = (size_x - (self.radius)) - 2
            elif boundary == 'W':
                self.orientation = self.new_orientation = Vector(1, 0).rotate(random.randint(-20, 20))
                self.x = (self.radius) + 2

            point = Vector(self.x, self.y)
            self.x, self.y = (point + (self.orientation * (self.linear_speed*(elapsed/1000))))
            boundary = self.out_of_bounds()

    def update_orientation(self, elapsed):
        # randomly choose a new direction, from time to time
        if random.randint(0, 100) > 98:
            self.choose_new_orientation()
        difference = self.orientation.argument() - self.new_orientation.argument()
        self.orientation = self.orientation.rotate((difference * (self.angular_speed*(elapsed/1000))))

    def still_alive(self, elapsed):
        return_value = True  # I am still alive
        if self.condition == 0:
            return_value = True
        elif self.condition <= FRAMERATE*SECS_FOR_DYING:
            self.condition = self.condition + (elapsed/17)
            return_value = True
        if self.condition > FRAMERATE*SECS_FOR_DYING:
            return_value = False
        return return_value

    def choose_new_orientation(self):
        if self.new_orientation:
            if (self.orientation.argument() - self.new_orientation.argument()) < 5:
                rotation = random.randint(-300, 300)
                self.new_orientation = self.orientation.rotate(rotation)

    def out_of_bounds(self):
        if self.x >= (size_x - (self.radius)):
            return 'E'
        elif self.y >= (size_y - (self.radius)):
            return 'S'
        elif self.x <= (0 + (self.radius)):
            return 'W'
        elif self.y <= (0 + (self.radius)):
            return 'N'
        else:
            return 'X'

# -------- Main Program Loop -----------
# generate critters
flock = [Critter()]
# generate heat map
heatMap = HeatMap(60, 40)
heatMap.make_map()
# set some settings
last_spawn = time.clock()
run_time = time.perf_counter()
frame_count = 0
max_time = 0
ms_elapsed = 1
avg_fps = [1]

# Loop until the user clicks the close button.
done = False
while not done:
    # --- Main event loop only processes one event
    frame_count = frame_count + 1
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            done = True

    # --- Game logic should go here
    # check if it's time to make another critter
    if time.clock() - last_spawn > SPAWN_TIME:
        flock.append(Critter())
        last_spawn = time.clock()

    if len(flock) >= FLOCK_LIMIT:
        # if we're over the flock limit, cull the herd
        counter = FLOCK_LIMIT
        for critter in flock[0:len(flock)-FLOCK_LIMIT]:
            # this code allows a critter to be "dying" for a while, to play an animation
            if critter.condition == 0:
                critter.condition = 1
            elif not critter.still_alive(ms_elapsed):
                critter.delete_me = True

    counter = 0
    # delete all the critters that have finished dying
    while counter < len(flock):
        if flock[counter].delete_me:
            del flock[counter]
        else:
            counter = counter+1

    # ----loop on all critters once, doing all functions for each critter
    for critter in flock:
        if critter.condition == 0:
            critter.avoid_others(flock)
        if critter.condition == 0:
            heatMap.register(critter.x, critter.y)
        critter.update_location(ms_elapsed)
        critter.update_orientation(ms_elapsed)
        if diagnostic:
            critter.print_self()

    # ----alternately, loop for each function. Speed seems to be similar either way
    # for critter in flock:
    #     if critter.condition == 0:
    #         critter.update_location(ms_elapsed)
    # for critter in flock:
    #     if critter.condition == 0:
    #         critter.update_orientation(ms_elapsed)

    # --- Screen-clearing code goes here
    # Here, we clear the screen to white. Don't put other drawing commands
    screen.fill(WHITE)

    # --- Drawing code should go here
    # draw the heat_map
    heatMap.hm_draw_self()
    for critter in flock:
        critter.c_draw_self()

    # draw the framerate
    myfont = pygame.font.SysFont("monospace", 15)
    # average the framerate over 60 frames
    temp = sum(avg_fps)/float(len(avg_fps))
    text = str(round(((1/temp)*1000), 0))+"FPS | "+str(len(flock))+" Critters"
    label = myfont.render(text, 1, (0, 0, 0))
    screen.blit(label, (5, 5))

    # --- Go ahead and update the screen with what we've drawn.
    pygame.display.update()

    # --- Limit to 60 frames per second
    # only run for 30 seconds
    if time.perf_counter()-run_time >= 30:
        done = True
    # limit to 60fps
    # add this frame's time to the list
    avg_fps.append(ms_elapsed)
    # remove any old frames
    while len(avg_fps) > 60:
        del avg_fps[0]
    ms_elapsed = clock.tick(FRAMERATE)
    # track longest frame
    if ms_elapsed > max_time:
        max_time = ms_elapsed

# print some stats once the program is finished
print("Count:", frame_count)
print("Max time since last flip:", str(max_time)+"ms")
print("Total Time:", str(int(time.perf_counter()-run_time))+"s")
print("Average time for a flip:", str(int(((time.perf_counter()-run_time)/frame_count)*1000))+"ms")

# Close the window and quit.
pygame.quit()
One thing you can already do to improve the performance is to use pygame.math.Vector2 instead of your Vector class, because it's implemented in C and therefore faster. Before I switched to pygame's vector class, I could have ~50 critters on the screen before the frame rate dropped below 60, and after the change up to ~100.
pygame.math.Vector2 doesn't have that argument method, so you need to extract it from the class and turn it into a function:
def argument(vec):
    """Returns the argument of the vector, the angle clockwise from +y."""
    arg_in_rad = math.acos(Vector(0, 1)*vec/vec.length())
    arg_in_deg = math.degrees(arg_in_rad)
    if vec.x < 0:
        return 360 - arg_in_deg
    else:
        return arg_in_deg
And change .norm() to .length() everywhere in the program.
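A minimal usage sketch of that swap, assuming the rest of the program keeps calling the name Vector:

from pygame.math import Vector2 as Vector  # stand-in for the old custom class

direction = Vector(0, 1).rotate(45)    # rotate() still takes degrees
unit = direction.normalize()           # normalize() behaves the same way
length = unit.length()                 # was .norm() with the old class
heading = argument(unit)               # module-level function from above, not a method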
Also, define the font object (myfont) before the while loop. That's only a minor improvement, but every frame counts.
Another change that made a significant improvement was to streamline my collision-detection algorithm.
Formerly, I had been looping through every critter in the flock and measuring the distance between it and every other critter in the flock. If that distance was small enough, I did something. That's n^2 checks, which is not awesome.
I'd thought about using a quadtree, but it didn't seem efficient to rebalance the whole tree every frame, because it will change every time a critter moves.
Well, I finally actually tried it, and it turns out that building a brand-new quadtree at the beginning of each frame is actually plenty fast. Once I have the tree, I pass it to the avoidance function where I just extract an intersection of any of the critters in that tree within a bounding box I care about. Then I just iterate on those neighbors to measure distances and update directions and whatnot.
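The sketch below shows the idea rather than my exact code: a tiny quadtree that gets rebuilt from scratch every frame and then queried with a bounding box around each critter. The node capacity of 4 and the query-box size are placeholder values, not tuned numbers.

class QuadTree:
    """Minimal quadtree: insert (x, y, payload) points, query with a box."""
    def __init__(self, x, y, w, h, capacity=4):
        self.bounds = (x, y, w, h)   # top-left corner plus width/height
        self.capacity = capacity     # points held before a node splits
        self.points = []
        self.children = None

    def insert(self, px, py, payload):
        x, y, w, h = self.bounds
        if not (x <= px < x + w and y <= py < y + h):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py, payload))
                return True
            self._split()
        return any(child.insert(px, py, payload) for child in self.children)

    def _split(self):
        x, y, w, h = self.bounds
        hw, hh = w / 2, h / 2
        self.children = [QuadTree(x,      y,      hw, hh, self.capacity),
                         QuadTree(x + hw, y,      hw, hh, self.capacity),
                         QuadTree(x,      y + hh, hw, hh, self.capacity),
                         QuadTree(x + hw, y + hh, hw, hh, self.capacity)]
        for px, py, payload in self.points:   # push existing points down a level
            for child in self.children:
                if child.insert(px, py, payload):
                    break
        self.points = []

    def query(self, qx, qy, qw, qh, found=None):
        """Return the payloads of all points inside the query box."""
        if found is None:
            found = []
        x, y, w, h = self.bounds
        if qx > x + w or qx + qw < x or qy > y + h or qy + qh < y:
            return found              # this node doesn't overlap the box
        for px, py, payload in self.points:
            if qx <= px <= qx + qw and qy <= py <= qy + qh:
                found.append(payload)
        if self.children is not None:
            for child in self.children:
                child.query(qx, qy, qw, qh, found)
        return found

# each frame: rebuild the tree, then ask it for the neighbors near each critter
tree = QuadTree(0, 0, size_x, size_y)
for critter in flock:
    tree.insert(critter.x, critter.y, critter)
for critter in flock:
    reach = critter.radius * 4       # placeholder reach for the neighborhood box
    nearby = tree.query(critter.x - reach, critter.y - reach, reach * 2, reach * 2)
    critter.avoid_others(nearby)     # distances only get measured against nearby critters

Because the tree is thrown away and rebuilt each frame, there is nothing to rebalance; each frame pays one cheap build plus a handful of small box queries instead of the full n^2 distance check.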
Now I'm up to 150 or so critters before I start dropping frames (up from 40).
So the moral of the story is, trust evidence instead of intuition.