Object tracking on a custom dataset - object-detection

I'm new to the computer vision field. I'm working on a project to detect and track trash.
I use YOLOv5 to detect objects; now I want to count each object that appears in the video. I'd appreciate suggestions for models I can use with my own dataset.

This repo contains everything you need for tracking and counting objects: https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet. If you already have a set of weights, you can start tracking with:
python track.py --yolo-weights /path/to/your/weights.pt
Adding the counting functionality should be straightforward.
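Once the tracker assigns persistent IDs, counting reduces to collecting the unique track IDs per class. A minimal sketch, assuming a MOT-style text output where each comma-separated line starts with the frame number and track ID and (in this hypothetical layout) the class index sits in column 8 — adjust the indices to whatever format your tracker actually writes:

```python
from collections import defaultdict

def count_unique_objects(result_lines):
    """Count distinct track IDs per class from MOT-style tracker output.

    Assumes each line looks like: frame,id,x,y,w,h,conf,class,...
    (a hypothetical layout; adjust the indices to your tracker's format).
    """
    seen = defaultdict(set)  # class index -> set of track IDs
    for line in result_lines:
        fields = line.strip().split(",")
        track_id = int(fields[1])
        cls = int(fields[7])
        seen[cls].add(track_id)
    return {cls: len(ids) for cls, ids in seen.items()}
```

Because a track ID persists across frames, the same piece of trash seen in 300 consecutive frames is still counted once.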

Related

How to get the list of objects used in reCAPTCHA v2+ (e.g. cat, dog, bike, etc.)?

Basically, I'm creating an ML model to identify reCAPTCHA images. To do that, I need to load a dataset of objects that are frequently used in reCAPTCHA.
If I can get a list of the objects used in reCAPTCHA, it will be easy for me to build a proper dataset and train my ML model.

Data preprocessing of click stream data in real time

I am working on a project to detect anomalies in web users' activity in real time. Any ill-intentioned or malicious activity by a user has to be detected in real time. The input is users' clickstream data. Each click record contains a user ID (unique user ID), click URL (URL of the web page), click text (the text/function on the website that the user clicked) and information (any text typed by the user). This project is similar to an intrusion detection system (IDS). I am using Python 3.6, and I have the following queries:
What is the best approach to data preprocessing, considering that all the attributes in the dataset are categorical values?
Encoding methods like one-hot encoding or label encoding could be applied, but the data has to be processed in real time, which makes them difficult to apply.
As per the project requirements, three columns (click URL, click text and typed information) are considered feature columns.
I am really confused about how to approach the data preprocessing. Any insight or suggestions would be appreciated.
In some recent personal and professional projects, when faced with the challenge of applying ML to streaming data, I have had success with the Python library River: https://github.com/online-ml/river.
Some online algorithms (like Hoeffding trees) can handle labelled values, so depending on what you want to achieve you may not need to do any preprocessing at all.
If you do need preprocessing, label encoding and one-hot encoding can both be applied incrementally. Below is some code to get you started. River also has a number of classes to help with feature extraction and feature selection, e.g. TF-IDF, bag of words or frequency aggregations.
online_label_enc = {}
for click in click_stream:
    # the raw categorical value for this record, e.g. the click URL
    value = click[feature_of_interest]
    try:
        label_enc = online_label_enc[value]
    except KeyError:
        # first time this category is seen: assign the next integer code
        online_label_enc[value] = len(online_label_enc)
        label_enc = online_label_enc[value]
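One-hot encoding can be handled in the same incremental way. A minimal stdlib-only sketch (the feature names here are hypothetical): represent each click as a sparse dict keyed by "feature=value", so a category never seen before simply becomes a new key rather than forcing a refit. This dict-of-features representation is also the form that River's online models consume:

```python
def one_hot(click, feature):
    """Sparse one-hot encoding: {"<feature>=<value>": 1} for a single record."""
    return {f"{feature}={click[feature]}": 1}

# Each incoming click yields its own sparse vector; unseen categories
# need no refitting, they just introduce new keys downstream.
x = one_hot({"click_url": "/checkout", "click_text": "Pay now"}, "click_url")
```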
I am not sure exactly what you are asking, but if you are approaching the problem online/incrementally, then extract the features you want and pass them to your online algorithm of choice, which should then update and learn at every data increment.

How to add two models in a single store or single grid based on _ref and date range in a customized Rally app?

I am a beginner at developing customized Rally apps. I have a portfolio item and revisions for every marketable feature. I have two models, PortfolioItem/MarketableFeature and Revision, and using these two models I want to build a grid.
I am facing an issue combining the two models (or two stores); in a customized Rally app, the models config does not work for any grid. I am confused about how to get the revision description and PortfolioItem/MarketableFeature into one grid, based on a date range and according to MF ID.
I used
Rally.ui.grid.Grid
to create the grid,
Rally.data.wsapi.Store
to create the store, and
Rally.ui.combobox.FieldValueComboBox
for the date range (start date and end date).
You'll want to create your grid with the MarketableFeature model. Is there another way to get the data you need other than loading all the revisions? That will be very expensive and slow, since you'll be making one request per grid row in addition to the initial request that populates the portfolio items.
If you do need to do that then you'll probably want to do a combination of these two examples:
Fetching subcollections:
https://help.rallydev.com/apps/2.1/doc/#!/guide/collections_in_v2-section-collection-fetching
Custom renderer, you can use this to render the Revisions data:
https://help.rallydev.com/apps/2.1/doc/#!/example/custom-data-grid

How to add a list of keywords to a wit.ai entity?

I'm new to wit.ai. I'm building a bot that takes voice input for an order-processing pipeline. In one field I need to input the client location, and this client_location entity has a keyword search strategy attached to it.
Now I want to add all cities, towns and villages to this entity as keywords, because only one of these will be considered a valid value for the client_location entity.
But there are a couple of thousand of them, and adding them by hand, one by one, in the wit.ai UI doesn't make much sense.
I want to use a CLI tool or a Node package or something to do it programmatically.
How can I do this? Also, is it OK to have so many keywords for one entity?
For now, the Node.js library can't do training.
But you can still do it using the HTTP REST API, by POSTing to the /samples path:
POST /samples
The docs are here.
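To script this, you can build the samples in bulk and POST them yourself. A sketch in Python (the endpoint and payload shape follow the /samples route mentioned above; the token, city list, and sample text are placeholders — check the current wit.ai docs before relying on this, since the API has changed over time):

```python
import json

WIT_SERVER_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # placeholder, not a real token

def build_sample(city):
    """One training sample tagging `city` as a client_location value."""
    return {
        "text": f"deliver the order to {city}",
        "entities": [{"entity": "client_location", "value": city}],
    }

cities = ["Springfield", "Rivertown", "Lakeview"]  # load your real list from a file
payload = json.dumps([build_sample(c) for c in cities])

# POST the batch, e.g. with requests (uncomment to actually send):
# import requests
# requests.post("https://api.wit.ai/samples",
#               headers={"Authorization": f"Bearer {WIT_SERVER_TOKEN}",
#                        "Content-Type": "application/json"},
#               data=payload)
```

Posting in batches of a few hundred per request, rather than one call per keyword, keeps the script fast and well within typical rate limits.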

Kinect V2 SDK 2 re-identify bodies

Is there a way to re-identify bodies that exit the scene and re-enter it?
SDK 2 assigns new IDs on re-entry.
Is there a library for that? Or is it practical to save body data (arm length, etc.) and compare it to the bodies re-entering?
You would have to devise your own method of tracking body information locally on the PC the Kinect is hooked up to. You would need to set up an analysis step where the person stands in a fixed position, and either use facial recognition or measure their body and find the closest matching saved data. Then you can temporarily assign that data to the body ID and use it to track them while they are in view.
There is no way to keep track of a body after it leaves the view without going through this process of analyzing and comparing the body or face.
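The comparison step described above can be sketched as a nearest-neighbour match over the saved measurements. A hypothetical Python sketch (the field names, units, and acceptance threshold are all assumptions; a real implementation would average measurements over many frames and likely combine them with face data):

```python
import math

def closest_profile(body, profiles, max_distance=0.05):
    """Return the saved profile closest to `body`, or None if nothing is near.

    body: measurements in metres, e.g. {"forearm": 0.26, "upper_arm": 0.31}
    profiles: {person_id: measurements_dict} captured while each body was tracked
    max_distance: reject matches farther than this (metres, Euclidean)
    """
    best_id, best_d = None, float("inf")
    for person_id, ref in profiles.items():
        d = math.sqrt(sum((body[k] - ref[k]) ** 2 for k in ref))
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d <= max_distance else None
```

On re-entry you would measure the new body, call this against the saved profiles, and re-assign the matched identity to the fresh Kinect body ID (or treat it as a new person when nothing is close enough).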