
Reference for ultralytics/trackers/byte_tracker.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/trackers/byte_tracker.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!



ultralytics.trackers.byte_tracker.STrack

Bases: BaseTrack

Single object tracking representation that uses Kalman filtering for state estimation.

This class is responsible for storing all the information regarding individual tracklets; it performs state updates and predictions based on the Kalman filter.

Attributes:

Name (Type): Description
shared_kalman (KalmanFilterXYAH): Shared Kalman filter used across all STrack instances for prediction.
_tlwh (ndarray): Private attribute storing the top-left corner coordinates and the width and height of the bounding box.
kalman_filter (KalmanFilterXYAH): Instance of the Kalman filter used for this particular object track.
mean (ndarray): Mean state estimate vector.
covariance (ndarray): Covariance of the state estimate.
is_activated (bool): Boolean flag indicating whether the track has been activated.
score (float): Confidence score of the track.
tracklet_len (int): Length of the tracklet.
cls (any): Class label for the object.
idx (int): Index or identifier for the object.
frame_id (int): Current frame ID.
start_frame (int): Frame where the object was first detected.

Methods:

Name: Description
predict: Predict the next state of the object using the Kalman filter.
multi_predict: Predict the next states for multiple tracks.
multi_gmc: Update multiple track states using a homography matrix.
activate: Activate a new tracklet.
re_activate: Reactivate a previously lost tracklet.
update: Update the state of a matched track.
convert_coords: Convert the bounding box to x-y-aspect-height format.
tlwh_to_xyah: Convert a tlwh bounding box to xyah format.

Source code in ultralytics/trackers/byte_tracker.py
class STrack(BaseTrack):
    """
    Single object tracking representation that uses Kalman filtering for state estimation.

    This class is responsible for storing all the information regarding individual tracklets and performs state updates
    and predictions based on Kalman filter.

    Attributes:
        shared_kalman (KalmanFilterXYAH): Shared Kalman filter that is used across all STrack instances for prediction.
        _tlwh (np.ndarray): Private attribute to store top-left corner coordinates and width and height of bounding box.
        kalman_filter (KalmanFilterXYAH): Instance of Kalman filter used for this particular object track.
        mean (np.ndarray): Mean state estimate vector.
        covariance (np.ndarray): Covariance of state estimate.
        is_activated (bool): Boolean flag indicating if the track has been activated.
        score (float): Confidence score of the track.
        tracklet_len (int): Length of the tracklet.
        cls (any): Class label for the object.
        idx (int): Index or identifier for the object.
        frame_id (int): Current frame ID.
        start_frame (int): Frame where the object was first detected.

    Methods:
        predict(): Predict the next state of the object using Kalman filter.
        multi_predict(stracks): Predict the next states for multiple tracks.
        multi_gmc(stracks, H): Update multiple track states using a homography matrix.
        activate(kalman_filter, frame_id): Activate a new tracklet.
        re_activate(new_track, frame_id, new_id): Reactivate a previously lost tracklet.
        update(new_track, frame_id): Update the state of a matched track.
        convert_coords(tlwh): Convert bounding box to x-y-aspect-height format.
        tlwh_to_xyah(tlwh): Convert tlwh bounding box to xyah format.
    """

    shared_kalman = KalmanFilterXYAH()

    def __init__(self, xywh, score, cls):
        """Initialize new STrack instance."""
        super().__init__()
        # xywh+idx or xywha+idx
        assert len(xywh) in {5, 6}, f"expected 5 or 6 values but got {len(xywh)}"
        self._tlwh = np.asarray(xywh2ltwh(xywh[:4]), dtype=np.float32)
        self.kalman_filter = None
        self.mean, self.covariance = None, None
        self.is_activated = False

        self.score = score
        self.tracklet_len = 0
        self.cls = cls
        self.idx = xywh[-1]
        self.angle = xywh[4] if len(xywh) == 6 else None

    def predict(self):
        """Predicts mean and covariance using Kalman filter."""
        mean_state = self.mean.copy()
        if self.state != TrackState.Tracked:
            mean_state[7] = 0
        self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance)

    @staticmethod
    def multi_predict(stracks):
        """Perform multi-object predictive tracking using Kalman filter for given stracks."""
        if len(stracks) <= 0:
            return
        multi_mean = np.asarray([st.mean.copy() for st in stracks])
        multi_covariance = np.asarray([st.covariance for st in stracks])
        for i, st in enumerate(stracks):
            if st.state != TrackState.Tracked:
                multi_mean[i][7] = 0
        multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance)
        for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):
            stracks[i].mean = mean
            stracks[i].covariance = cov

    @staticmethod
    def multi_gmc(stracks, H=np.eye(2, 3)):
        """Update state tracks positions and covariances using a homography matrix."""
        if len(stracks) > 0:
            multi_mean = np.asarray([st.mean.copy() for st in stracks])
            multi_covariance = np.asarray([st.covariance for st in stracks])

            R = H[:2, :2]
            R8x8 = np.kron(np.eye(4, dtype=float), R)
            t = H[:2, 2]

            for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):
                mean = R8x8.dot(mean)
                mean[:2] += t
                cov = R8x8.dot(cov).dot(R8x8.transpose())

                stracks[i].mean = mean
                stracks[i].covariance = cov

    def activate(self, kalman_filter, frame_id):
        """Start a new tracklet."""
        self.kalman_filter = kalman_filter
        self.track_id = self.next_id()
        self.mean, self.covariance = self.kalman_filter.initiate(self.convert_coords(self._tlwh))

        self.tracklet_len = 0
        self.state = TrackState.Tracked
        if frame_id == 1:
            self.is_activated = True
        self.frame_id = frame_id
        self.start_frame = frame_id

    def re_activate(self, new_track, frame_id, new_id=False):
        """Reactivates a previously lost track with a new detection."""
        self.mean, self.covariance = self.kalman_filter.update(
            self.mean, self.covariance, self.convert_coords(new_track.tlwh)
        )
        self.tracklet_len = 0
        self.state = TrackState.Tracked
        self.is_activated = True
        self.frame_id = frame_id
        if new_id:
            self.track_id = self.next_id()
        self.score = new_track.score
        self.cls = new_track.cls
        self.angle = new_track.angle
        self.idx = new_track.idx

    def update(self, new_track, frame_id):
        """
        Update the state of a matched track.

        Args:
            new_track (STrack): The new track containing updated information.
            frame_id (int): The ID of the current frame.
        """
        self.frame_id = frame_id
        self.tracklet_len += 1

        new_tlwh = new_track.tlwh
        self.mean, self.covariance = self.kalman_filter.update(
            self.mean, self.covariance, self.convert_coords(new_tlwh)
        )
        self.state = TrackState.Tracked
        self.is_activated = True

        self.score = new_track.score
        self.cls = new_track.cls
        self.angle = new_track.angle
        self.idx = new_track.idx

    def convert_coords(self, tlwh):
        """Convert a bounding box's top-left-width-height format to its x-y-aspect-height equivalent."""
        return self.tlwh_to_xyah(tlwh)

    @property
    def tlwh(self):
        """Get current position in bounding box format (top left x, top left y, width, height)."""
        if self.mean is None:
            return self._tlwh.copy()
        ret = self.mean[:4].copy()
        ret[2] *= ret[3]
        ret[:2] -= ret[2:] / 2
        return ret

    @property
    def xyxy(self):
        """Convert bounding box to format (min x, min y, max x, max y), i.e., (top left, bottom right)."""
        ret = self.tlwh.copy()
        ret[2:] += ret[:2]
        return ret

    @staticmethod
    def tlwh_to_xyah(tlwh):
        """Convert bounding box to format (center x, center y, aspect ratio, height), where the aspect ratio is width /
        height.
        """
        ret = np.asarray(tlwh).copy()
        ret[:2] += ret[2:] / 2
        ret[2] /= ret[3]
        return ret

    @property
    def xywh(self):
        """Get current position in bounding box format (center x, center y, width, height)."""
        ret = np.asarray(self.tlwh).copy()
        ret[:2] += ret[2:] / 2
        return ret

    @property
    def xywha(self):
        """Get current position in bounding box format (center x, center y, width, height, angle)."""
        if self.angle is None:
            LOGGER.warning("WARNING ⚠️ `angle` attr not found, returning `xywh` instead.")
            return self.xywh
        return np.concatenate([self.xywh, self.angle[None]])

    @property
    def result(self):
        """Get current tracking results."""
        coords = self.xyxy if self.angle is None else self.xywha
        return coords.tolist() + [self.track_id, self.score, self.cls, self.idx]

    def __repr__(self):
        """Return a string representation of the BYTETracker object with start and end frames and track ID."""
        return f"OT_{self.track_id}_({self.start_frame}-{self.end_frame})"

result property

Get current tracking results.

tlwh property

Get current position in bounding box format (top left x, top left y, width, height).

xywh property

Get current position in bounding box format (center x, center y, width, height).

xywha property

Get current position in bounding box format (center x, center y, width, height, angle).

xyxy property

Convert bounding box to format (min x, min y, max x, max y), i.e., (top left, bottom right).

__init__(xywh, score, cls)

Initialize a new STrack instance.

Source code in ultralytics/trackers/byte_tracker.py
def __init__(self, xywh, score, cls):
    """Initialize new STrack instance."""
    super().__init__()
    # xywh+idx or xywha+idx
    assert len(xywh) in {5, 6}, f"expected 5 or 6 values but got {len(xywh)}"
    self._tlwh = np.asarray(xywh2ltwh(xywh[:4]), dtype=np.float32)
    self.kalman_filter = None
    self.mean, self.covariance = None, None
    self.is_activated = False

    self.score = score
    self.tracklet_len = 0
    self.cls = cls
    self.idx = xywh[-1]
    self.angle = xywh[4] if len(xywh) == 6 else None

__repr__()

Return a string representation of the BYTETracker object with start and end frames and track ID.

Source code in ultralytics/trackers/byte_tracker.py
def __repr__(self):
    """Return a string representation of the BYTETracker object with start and end frames and track ID."""
    return f"OT_{self.track_id}_({self.start_frame}-{self.end_frame})"

activate(kalman_filter, frame_id)

Start a new tracklet.

Source code in ultralytics/trackers/byte_tracker.py
def activate(self, kalman_filter, frame_id):
    """Start a new tracklet."""
    self.kalman_filter = kalman_filter
    self.track_id = self.next_id()
    self.mean, self.covariance = self.kalman_filter.initiate(self.convert_coords(self._tlwh))

    self.tracklet_len = 0
    self.state = TrackState.Tracked
    if frame_id == 1:
        self.is_activated = True
    self.frame_id = frame_id
    self.start_frame = frame_id

convert_coords(tlwh)

Convert a bounding box's top-left-width-height format to its x-y-aspect-height equivalent.

Source code in ultralytics/trackers/byte_tracker.py
def convert_coords(self, tlwh):
    """Convert a bounding box's top-left-width-height format to its x-y-aspect-height equivalent."""
    return self.tlwh_to_xyah(tlwh)

multi_gmc(stracks, H=np.eye(2, 3)) staticmethod

Update track state positions and covariances using a homography matrix.

Source code in ultralytics/trackers/byte_tracker.py
@staticmethod
def multi_gmc(stracks, H=np.eye(2, 3)):
    """Update state tracks positions and covariances using a homography matrix."""
    if len(stracks) > 0:
        multi_mean = np.asarray([st.mean.copy() for st in stracks])
        multi_covariance = np.asarray([st.covariance for st in stracks])

        R = H[:2, :2]
        R8x8 = np.kron(np.eye(4, dtype=float), R)
        t = H[:2, 2]

        for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):
            mean = R8x8.dot(mean)
            mean[:2] += t
            cov = R8x8.dot(cov).dot(R8x8.transpose())

            stracks[i].mean = mean
            stracks[i].covariance = cov
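
The camera-motion update rotates each 2-D block of the 8-D state with R = H[:2, :2] and shifts only the position by t = H[:2, 2]. A small NumPy sketch of that algebra, with a made-up warp:

import numpy as np

# Made-up 2x3 affine warp [R | t]: identity rotation, 5-pixel shift along x
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 0.0]])
R, t = H[:2, :2], H[:2, 2]
R8x8 = np.kron(np.eye(4), R)         # block-diagonal: R applied to (x, y), (a, h), (vx, vy), (va, vh)

mean = np.zeros(8)
mean[:4] = [100.0, 50.0, 0.5, 80.0]  # cx, cy, aspect ratio, height
warped = R8x8 @ mean
warped[:2] += t                      # translation affects only the position components
print(warped[:4])                    # approximately [105., 50., 0.5, 80.]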

multi_predict(stracks) staticmethod

Perform multi-object predictive tracking using the Kalman filter for the given stracks.

Source code in ultralytics/trackers/byte_tracker.py
@staticmethod
def multi_predict(stracks):
    """Perform multi-object predictive tracking using Kalman filter for given stracks."""
    if len(stracks) <= 0:
        return
    multi_mean = np.asarray([st.mean.copy() for st in stracks])
    multi_covariance = np.asarray([st.covariance for st in stracks])
    for i, st in enumerate(stracks):
        if st.state != TrackState.Tracked:
            multi_mean[i][7] = 0
    multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance)
    for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):
        stracks[i].mean = mean
        stracks[i].covariance = cov

predict()

Predicts mean and covariance using the Kalman filter.

Source code in ultralytics/trackers/byte_tracker.py
def predict(self):
    """Predicts mean and covariance using Kalman filter."""
    mean_state = self.mean.copy()
    if self.state != TrackState.Tracked:
        mean_state[7] = 0
    self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance)

re_activate(new_track, frame_id, new_id=False)

Reactivates a previously lost track with a new detection.

Source code in ultralytics/trackers/byte_tracker.py
def re_activate(self, new_track, frame_id, new_id=False):
    """Reactivates a previously lost track with a new detection."""
    self.mean, self.covariance = self.kalman_filter.update(
        self.mean, self.covariance, self.convert_coords(new_track.tlwh)
    )
    self.tracklet_len = 0
    self.state = TrackState.Tracked
    self.is_activated = True
    self.frame_id = frame_id
    if new_id:
        self.track_id = self.next_id()
    self.score = new_track.score
    self.cls = new_track.cls
    self.angle = new_track.angle
    self.idx = new_track.idx

tlwh_to_xyah(tlwh) staticmethod

Convert bounding box to format (center x, center y, aspect ratio, height), where the aspect ratio is width / height.

Source code in ultralytics/trackers/byte_tracker.py
@staticmethod
def tlwh_to_xyah(tlwh):
    """Convert bounding box to format (center x, center y, aspect ratio, height), where the aspect ratio is width /
    height.
    """
    ret = np.asarray(tlwh).copy()
    ret[:2] += ret[2:] / 2
    ret[2] /= ret[3]
    return ret
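
A quick numeric check of the conversion; the box values are arbitrary.

import numpy as np
from ultralytics.trackers.byte_tracker import STrack

# A 50x100 box whose top-left corner sits at (10, 20)
tlwh = np.array([10.0, 20.0, 50.0, 100.0])
print(STrack.tlwh_to_xyah(tlwh))  # -> [35. 70. 0.5 100.], i.e. (cx, cy, w/h, h)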

update(new_track, frame_id)

Update the state of a matched track.

Parameters:

Name (Type): Description [Default]
new_track (STrack): The new track containing updated information. Required.
frame_id (int): The ID of the current frame. Required.

Source code in ultralytics/trackers/byte_tracker.py
def update(self, new_track, frame_id):
    """
    Update the state of a matched track.

    Args:
        new_track (STrack): The new track containing updated information.
        frame_id (int): The ID of the current frame.
    """
    self.frame_id = frame_id
    self.tracklet_len += 1

    new_tlwh = new_track.tlwh
    self.mean, self.covariance = self.kalman_filter.update(
        self.mean, self.covariance, self.convert_coords(new_tlwh)
    )
    self.state = TrackState.Tracked
    self.is_activated = True

    self.score = new_track.score
    self.cls = new_track.cls
    self.angle = new_track.angle
    self.idx = new_track.idx
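
A minimal sketch of updating a matched track, with invented boxes and scores; the second STrack stands in for the detection wrapper that BYTETracker would normally pass in, and the shared class-level filter is used for simplicity.

import numpy as np
from ultralytics.trackers.byte_tracker import STrack

track = STrack(np.array([100.0, 100.0, 40.0, 80.0, 0.0]), score=0.90, cls=0)
track.activate(STrack.shared_kalman, frame_id=1)

# Hypothetical matched detection on the next frame, shifted slightly to the right
det = STrack(np.array([104.0, 100.0, 40.0, 80.0, 1.0]), score=0.88, cls=0)
track.update(det, frame_id=2)
print(track.tlwh, track.score, track.tracklet_len)  # Kalman-corrected box, refreshed score, length 1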



ultralytics.trackers.byte_tracker.BYTETracker

BYTETracker: A tracking algorithm built on top of YOLOv8 for object detection and tracking.

The class is responsible for initializing, updating, and managing the tracks for detected objects in a video sequence. It maintains the state of tracked, lost, and removed tracks over frames, uses Kalman filtering to predict new object locations, and performs data association.

Attributes:

Name (Type): Description
tracked_stracks (list[STrack]): List of successfully activated tracks.
lost_stracks (list[STrack]): List of lost tracks.
removed_stracks (list[STrack]): List of removed tracks.
frame_id (int): The current frame ID.
args (namespace): Command-line arguments.
max_time_lost (int): The maximum number of frames for a track to be considered 'lost'.
kalman_filter (object): Kalman Filter object.

Methods:

Name: Description
update: Updates the object tracker with new detections.
get_kalmanfilter: Returns a Kalman filter object for tracking bounding boxes.
init_track: Initialize object tracking with detections.
get_dists: Calculates the distance between tracks and detections.
multi_predict: Predicts the location of tracks.
reset_id: Resets the ID counter of STrack.
joint_stracks: Combines two lists of stracks.
sub_stracks: Filters out the stracks present in the second list from the first list.
remove_duplicate_stracks: Removes duplicate stracks based on IoU.

Source code in ultralytics/trackers/byte_tracker.py
class BYTETracker:
    """
    BYTETracker: A tracking algorithm built on top of YOLOv8 for object detection and tracking.

    The class is responsible for initializing, updating, and managing the tracks for detected objects in a video
    sequence. It maintains the state of tracked, lost, and removed tracks over frames, utilizes Kalman filtering for
    predicting the new object locations, and performs data association.

    Attributes:
        tracked_stracks (list[STrack]): List of successfully activated tracks.
        lost_stracks (list[STrack]): List of lost tracks.
        removed_stracks (list[STrack]): List of removed tracks.
        frame_id (int): The current frame ID.
        args (namespace): Command-line arguments.
        max_time_lost (int): The maximum frames for a track to be considered as 'lost'.
        kalman_filter (object): Kalman Filter object.

    Methods:
        update(results, img=None): Updates object tracker with new detections.
        get_kalmanfilter(): Returns a Kalman filter object for tracking bounding boxes.
        init_track(dets, scores, cls, img=None): Initialize object tracking with detections.
        get_dists(tracks, detections): Calculates the distance between tracks and detections.
        multi_predict(tracks): Predicts the location of tracks.
        reset_id(): Resets the ID counter of STrack.
        joint_stracks(tlista, tlistb): Combines two lists of stracks.
        sub_stracks(tlista, tlistb): Filters out the stracks present in the second list from the first list.
        remove_duplicate_stracks(stracksa, stracksb): Removes duplicate stracks based on IoU.
    """

    def __init__(self, args, frame_rate=30):
        """Initialize a YOLOv8 object to track objects with given arguments and frame rate."""
        self.tracked_stracks = []  # type: list[STrack]
        self.lost_stracks = []  # type: list[STrack]
        self.removed_stracks = []  # type: list[STrack]

        self.frame_id = 0
        self.args = args
        self.max_time_lost = int(frame_rate / 30.0 * args.track_buffer)
        self.kalman_filter = self.get_kalmanfilter()
        self.reset_id()

    def update(self, results, img=None):
        """Updates object tracker with new detections and returns tracked object bounding boxes."""
        self.frame_id += 1
        activated_stracks = []
        refind_stracks = []
        lost_stracks = []
        removed_stracks = []

        scores = results.conf
        bboxes = results.xywhr if hasattr(results, "xywhr") else results.xywh
        # Add index
        bboxes = np.concatenate([bboxes, np.arange(len(bboxes)).reshape(-1, 1)], axis=-1)
        cls = results.cls

        remain_inds = scores >= self.args.track_high_thresh
        inds_low = scores > self.args.track_low_thresh
        inds_high = scores < self.args.track_high_thresh

        inds_second = inds_low & inds_high
        dets_second = bboxes[inds_second]
        dets = bboxes[remain_inds]
        scores_keep = scores[remain_inds]
        scores_second = scores[inds_second]
        cls_keep = cls[remain_inds]
        cls_second = cls[inds_second]

        detections = self.init_track(dets, scores_keep, cls_keep, img)
        # Add newly detected tracklets to tracked_stracks
        unconfirmed = []
        tracked_stracks = []  # type: list[STrack]
        for track in self.tracked_stracks:
            if not track.is_activated:
                unconfirmed.append(track)
            else:
                tracked_stracks.append(track)
        # Step 2: First association, with high score detection boxes
        strack_pool = self.joint_stracks(tracked_stracks, self.lost_stracks)
        # Predict the current location with KF
        self.multi_predict(strack_pool)
        if hasattr(self, "gmc") and img is not None:
            warp = self.gmc.apply(img, dets)
            STrack.multi_gmc(strack_pool, warp)
            STrack.multi_gmc(unconfirmed, warp)

        dists = self.get_dists(strack_pool, detections)
        matches, u_track, u_detection = matching.linear_assignment(dists, thresh=self.args.match_thresh)

        for itracked, idet in matches:
            track = strack_pool[itracked]
            det = detections[idet]
            if track.state == TrackState.Tracked:
                track.update(det, self.frame_id)
                activated_stracks.append(track)
            else:
                track.re_activate(det, self.frame_id, new_id=False)
                refind_stracks.append(track)
        # Step 3: Second association, with low score detection boxes association the untrack to the low score detections
        detections_second = self.init_track(dets_second, scores_second, cls_second, img)
        r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked]
        # TODO
        dists = matching.iou_distance(r_tracked_stracks, detections_second)
        matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.5)
        for itracked, idet in matches:
            track = r_tracked_stracks[itracked]
            det = detections_second[idet]
            if track.state == TrackState.Tracked:
                track.update(det, self.frame_id)
                activated_stracks.append(track)
            else:
                track.re_activate(det, self.frame_id, new_id=False)
                refind_stracks.append(track)

        for it in u_track:
            track = r_tracked_stracks[it]
            if track.state != TrackState.Lost:
                track.mark_lost()
                lost_stracks.append(track)
        # Deal with unconfirmed tracks, usually tracks with only one beginning frame
        detections = [detections[i] for i in u_detection]
        dists = self.get_dists(unconfirmed, detections)
        matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7)
        for itracked, idet in matches:
            unconfirmed[itracked].update(detections[idet], self.frame_id)
            activated_stracks.append(unconfirmed[itracked])
        for it in u_unconfirmed:
            track = unconfirmed[it]
            track.mark_removed()
            removed_stracks.append(track)
        # Step 4: Init new stracks
        for inew in u_detection:
            track = detections[inew]
            if track.score < self.args.new_track_thresh:
                continue
            track.activate(self.kalman_filter, self.frame_id)
            activated_stracks.append(track)
        # Step 5: Update state
        for track in self.lost_stracks:
            if self.frame_id - track.end_frame > self.max_time_lost:
                track.mark_removed()
                removed_stracks.append(track)

        self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked]
        self.tracked_stracks = self.joint_stracks(self.tracked_stracks, activated_stracks)
        self.tracked_stracks = self.joint_stracks(self.tracked_stracks, refind_stracks)
        self.lost_stracks = self.sub_stracks(self.lost_stracks, self.tracked_stracks)
        self.lost_stracks.extend(lost_stracks)
        self.lost_stracks = self.sub_stracks(self.lost_stracks, self.removed_stracks)
        self.tracked_stracks, self.lost_stracks = self.remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks)
        self.removed_stracks.extend(removed_stracks)
        if len(self.removed_stracks) > 1000:
            self.removed_stracks = self.removed_stracks[-999:]  # clip remove stracks to 1000 maximum

        return np.asarray([x.result for x in self.tracked_stracks if x.is_activated], dtype=np.float32)

    def get_kalmanfilter(self):
        """Returns a Kalman filter object for tracking bounding boxes."""
        return KalmanFilterXYAH()

    def init_track(self, dets, scores, cls, img=None):
        """Initialize object tracking with detections and scores using STrack algorithm."""
        return [STrack(xyxy, s, c) for (xyxy, s, c) in zip(dets, scores, cls)] if len(dets) else []  # detections

    def get_dists(self, tracks, detections):
        """Calculates the distance between tracks and detections using IoU and fuses scores."""
        dists = matching.iou_distance(tracks, detections)
        # TODO: mot20
        # if not self.args.mot20:
        dists = matching.fuse_score(dists, detections)
        return dists

    def multi_predict(self, tracks):
        """Returns the predicted tracks using the YOLOv8 network."""
        STrack.multi_predict(tracks)

    @staticmethod
    def reset_id():
        """Resets the ID counter of STrack."""
        STrack.reset_id()

    def reset(self):
        """Reset tracker."""
        self.tracked_stracks = []  # type: list[STrack]
        self.lost_stracks = []  # type: list[STrack]
        self.removed_stracks = []  # type: list[STrack]
        self.frame_id = 0
        self.kalman_filter = self.get_kalmanfilter()
        self.reset_id()

    @staticmethod
    def joint_stracks(tlista, tlistb):
        """Combine two lists of stracks into a single one."""
        exists = {}
        res = []
        for t in tlista:
            exists[t.track_id] = 1
            res.append(t)
        for t in tlistb:
            tid = t.track_id
            if not exists.get(tid, 0):
                exists[tid] = 1
                res.append(t)
        return res

    @staticmethod
    def sub_stracks(tlista, tlistb):
        """DEPRECATED CODE in https://github.com/ultralytics/ultralytics/pull/1890/
        stracks = {t.track_id: t for t in tlista}
        for t in tlistb:
            tid = t.track_id
            if stracks.get(tid, 0):
                del stracks[tid]
        return list(stracks.values())
        """
        track_ids_b = {t.track_id for t in tlistb}
        return [t for t in tlista if t.track_id not in track_ids_b]

    @staticmethod
    def remove_duplicate_stracks(stracksa, stracksb):
        """Remove duplicate stracks with non-maximum IoU distance."""
        pdist = matching.iou_distance(stracksa, stracksb)
        pairs = np.where(pdist < 0.15)
        dupa, dupb = [], []
        for p, q in zip(*pairs):
            timep = stracksa[p].frame_id - stracksa[p].start_frame
            timeq = stracksb[q].frame_id - stracksb[q].start_frame
            if timep > timeq:
                dupb.append(q)
            else:
                dupa.append(p)
        resa = [t for i, t in enumerate(stracksa) if i not in dupa]
        resb = [t for i, t in enumerate(stracksb) if i not in dupb]
        return resa, resb
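
A hedged end-to-end sketch of driving the tracker outside the Ultralytics pipeline. The argument namespace carries only the fields read by __init__() and update() in the source above, and the detections object is a stand-in exposing the conf / xywh / cls attributes that update() expects; all values are illustrative.

from types import SimpleNamespace
import numpy as np
from ultralytics.trackers.byte_tracker import BYTETracker

# Only the fields consulted by __init__()/update() above (values are arbitrary)
args = SimpleNamespace(
    track_high_thresh=0.5, track_low_thresh=0.1, new_track_thresh=0.6,
    track_buffer=30, match_thresh=0.8,
)
tracker = BYTETracker(args, frame_rate=30)

# Stand-in for one frame of detector output: confidences, center-format boxes, class IDs
frame_dets = SimpleNamespace(
    conf=np.array([0.90, 0.40]),
    xywh=np.array([[100.0, 100.0, 40.0, 80.0],
                   [300.0, 200.0, 60.0, 60.0]]),
    cls=np.array([0, 0]),
)
online = tracker.update(frame_dets)
print(online)  # rows of [x1, y1, x2, y2, track_id, score, cls, idx] for activated tracks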

__init__(args, frame_rate=30)

Initialize a YOLOv8 object to track objects with the given arguments and frame rate.

Source code in ultralytics/trackers/byte_tracker.py
def __init__(self, args, frame_rate=30):
    """Initialize a YOLOv8 object to track objects with given arguments and frame rate."""
    self.tracked_stracks = []  # type: list[STrack]
    self.lost_stracks = []  # type: list[STrack]
    self.removed_stracks = []  # type: list[STrack]

    self.frame_id = 0
    self.args = args
    self.max_time_lost = int(frame_rate / 30.0 * args.track_buffer)
    self.kalman_filter = self.get_kalmanfilter()
    self.reset_id()
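
The lost-track budget scales the configured buffer by the actual frame rate; a quick worked example of the formula above, with arbitrary numbers:

# At 60 FPS with track_buffer=30, a lost track survives twice as many frames as at 30 FPS
frame_rate, track_buffer = 60, 30
max_time_lost = int(frame_rate / 30.0 * track_buffer)
print(max_time_lost)  # -> 60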

get_dists(tracks, detections)

Calculates the distance between tracks and detections using IoU and fuses scores.

Source code in ultralytics/trackers/byte_tracker.py
def get_dists(self, tracks, detections):
    """Calculates the distance between tracks and detections using IoU and fuses scores."""
    dists = matching.iou_distance(tracks, detections)
    # TODO: mot20
    # if not self.args.mot20:
    dists = matching.fuse_score(dists, detections)
    return dists

get_kalmanfilter()

Returns a Kalman filter object for tracking bounding boxes.

Source code in ultralytics/trackers/byte_tracker.py
def get_kalmanfilter(self):
    """Returns a Kalman filter object for tracking bounding boxes."""
    return KalmanFilterXYAH()

init_track(dets, scores, cls, img=None)

Initialize object tracking with detections and scores using the STrack algorithm.

Source code in ultralytics/trackers/byte_tracker.py
def init_track(self, dets, scores, cls, img=None):
    """Initialize object tracking with detections and scores using STrack algorithm."""
    return [STrack(xyxy, s, c) for (xyxy, s, c) in zip(dets, scores, cls)] if len(dets) else []  # detections

joint_stracks(tlista, tlistb) staticmethod

Combine two lists of stracks into a single one.

Source code in ultralytics/trackers/byte_tracker.py
@staticmethod
def joint_stracks(tlista, tlistb):
    """Combine two lists of stracks into a single one."""
    exists = {}
    res = []
    for t in tlista:
        exists[t.track_id] = 1
        res.append(t)
    for t in tlistb:
        tid = t.track_id
        if not exists.get(tid, 0):
            exists[tid] = 1
            res.append(t)
    return res

multi_predict(tracks)

Returns the predicted tracks using the YOLOv8 network.

Source code in ultralytics/trackers/byte_tracker.py
def multi_predict(self, tracks):
    """Returns the predicted tracks using the YOLOv8 network."""
    STrack.multi_predict(tracks)

remove_duplicate_stracks(stracksa, stracksb) staticmethod

Remove duplicate stracks with non-maximum IoU distance.

Source code in ultralytics/trackers/byte_tracker.py
@staticmethod
def remove_duplicate_stracks(stracksa, stracksb):
    """Remove duplicate stracks with non-maximum IoU distance."""
    pdist = matching.iou_distance(stracksa, stracksb)
    pairs = np.where(pdist < 0.15)
    dupa, dupb = [], []
    for p, q in zip(*pairs):
        timep = stracksa[p].frame_id - stracksa[p].start_frame
        timeq = stracksb[q].frame_id - stracksb[q].start_frame
        if timep > timeq:
            dupb.append(q)
        else:
            dupa.append(p)
    resa = [t for i, t in enumerate(stracksa) if i not in dupa]
    resb = [t for i, t in enumerate(stracksb) if i not in dupb]
    return resa, resb

reset()

Reset the tracker.

Source code in ultralytics/trackers/byte_tracker.py
def reset(self):
    """Reset tracker."""
    self.tracked_stracks = []  # type: list[STrack]
    self.lost_stracks = []  # type: list[STrack]
    self.removed_stracks = []  # type: list[STrack]
    self.frame_id = 0
    self.kalman_filter = self.get_kalmanfilter()
    self.reset_id()

reset_id() staticmethod

Resets the ID counter of STrack.

Source code in ultralytics/trackers/byte_tracker.py
@staticmethod
def reset_id():
    """Resets the ID counter of STrack."""
    STrack.reset_id()

sub_stracks(tlista, tlistb) staticmethod

DEPRECATED CODE in https://github.com/ultralytics/ultralytics/pull/1890/ stracks = {t.track_id: t for t in tlista} for t in tlistb: tid = t.track_id if stracks.get(tid, 0): del stracks[tid] return list(stracks.values())

Source code in ultralytics/trackers/byte_tracker.py
@staticmethod
def sub_stracks(tlista, tlistb):
    """DEPRECATED CODE in https://github.com/ultralytics/ultralytics/pull/1890/
    stracks = {t.track_id: t for t in tlista}
    for t in tlistb:
        tid = t.track_id
        if stracks.get(tid, 0):
            del stracks[tid]
    return list(stracks.values())
    """
    track_ids_b = {t.track_id for t in tlistb}
    return [t for t in tlista if t.track_id not in track_ids_b]
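
A small illustration of the two list utilities using lightweight stand-ins that only carry a track_id; real usage passes STrack instances.

from types import SimpleNamespace
from ultralytics.trackers.byte_tracker import BYTETracker

a = [SimpleNamespace(track_id=i) for i in (1, 2, 3)]
b = [SimpleNamespace(track_id=i) for i in (2, 4)]

merged = BYTETracker.joint_stracks(a, b)  # union keyed by track_id
diff = BYTETracker.sub_stracks(a, b)      # members of a whose IDs are absent from b
print([t.track_id for t in merged], [t.track_id for t in diff])  # [1, 2, 3, 4] [1, 3]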

update(results, img=None)

Updates the object tracker with new detections and returns tracked object bounding boxes.

Source code in ultralytics/trackers/byte_tracker.py
def update(self, results, img=None):
    """Updates object tracker with new detections and returns tracked object bounding boxes."""
    self.frame_id += 1
    activated_stracks = []
    refind_stracks = []
    lost_stracks = []
    removed_stracks = []

    scores = results.conf
    bboxes = results.xywhr if hasattr(results, "xywhr") else results.xywh
    # Add index
    bboxes = np.concatenate([bboxes, np.arange(len(bboxes)).reshape(-1, 1)], axis=-1)
    cls = results.cls

    remain_inds = scores >= self.args.track_high_thresh
    inds_low = scores > self.args.track_low_thresh
    inds_high = scores < self.args.track_high_thresh

    inds_second = inds_low & inds_high
    dets_second = bboxes[inds_second]
    dets = bboxes[remain_inds]
    scores_keep = scores[remain_inds]
    scores_second = scores[inds_second]
    cls_keep = cls[remain_inds]
    cls_second = cls[inds_second]

    detections = self.init_track(dets, scores_keep, cls_keep, img)
    # Add newly detected tracklets to tracked_stracks
    unconfirmed = []
    tracked_stracks = []  # type: list[STrack]
    for track in self.tracked_stracks:
        if not track.is_activated:
            unconfirmed.append(track)
        else:
            tracked_stracks.append(track)
    # Step 2: First association, with high score detection boxes
    strack_pool = self.joint_stracks(tracked_stracks, self.lost_stracks)
    # Predict the current location with KF
    self.multi_predict(strack_pool)
    if hasattr(self, "gmc") and img is not None:
        warp = self.gmc.apply(img, dets)
        STrack.multi_gmc(strack_pool, warp)
        STrack.multi_gmc(unconfirmed, warp)

    dists = self.get_dists(strack_pool, detections)
    matches, u_track, u_detection = matching.linear_assignment(dists, thresh=self.args.match_thresh)

    for itracked, idet in matches:
        track = strack_pool[itracked]
        det = detections[idet]
        if track.state == TrackState.Tracked:
            track.update(det, self.frame_id)
            activated_stracks.append(track)
        else:
            track.re_activate(det, self.frame_id, new_id=False)
            refind_stracks.append(track)
    # Step 3: Second association, with low score detection boxes association the untrack to the low score detections
    detections_second = self.init_track(dets_second, scores_second, cls_second, img)
    r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked]
    # TODO
    dists = matching.iou_distance(r_tracked_stracks, detections_second)
    matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.5)
    for itracked, idet in matches:
        track = r_tracked_stracks[itracked]
        det = detections_second[idet]
        if track.state == TrackState.Tracked:
            track.update(det, self.frame_id)
            activated_stracks.append(track)
        else:
            track.re_activate(det, self.frame_id, new_id=False)
            refind_stracks.append(track)

    for it in u_track:
        track = r_tracked_stracks[it]
        if track.state != TrackState.Lost:
            track.mark_lost()
            lost_stracks.append(track)
    # Deal with unconfirmed tracks, usually tracks with only one beginning frame
    detections = [detections[i] for i in u_detection]
    dists = self.get_dists(unconfirmed, detections)
    matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7)
    for itracked, idet in matches:
        unconfirmed[itracked].update(detections[idet], self.frame_id)
        activated_stracks.append(unconfirmed[itracked])
    for it in u_unconfirmed:
        track = unconfirmed[it]
        track.mark_removed()
        removed_stracks.append(track)
    # Step 4: Init new stracks
    for inew in u_detection:
        track = detections[inew]
        if track.score < self.args.new_track_thresh:
            continue
        track.activate(self.kalman_filter, self.frame_id)
        activated_stracks.append(track)
    # Step 5: Update state
    for track in self.lost_stracks:
        if self.frame_id - track.end_frame > self.max_time_lost:
            track.mark_removed()
            removed_stracks.append(track)

    self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked]
    self.tracked_stracks = self.joint_stracks(self.tracked_stracks, activated_stracks)
    self.tracked_stracks = self.joint_stracks(self.tracked_stracks, refind_stracks)
    self.lost_stracks = self.sub_stracks(self.lost_stracks, self.tracked_stracks)
    self.lost_stracks.extend(lost_stracks)
    self.lost_stracks = self.sub_stracks(self.lost_stracks, self.removed_stracks)
    self.tracked_stracks, self.lost_stracks = self.remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks)
    self.removed_stracks.extend(removed_stracks)
    if len(self.removed_stracks) > 1000:
        self.removed_stracks = self.removed_stracks[-999:]  # clip remove stracks to 1000 maximum

    return np.asarray([x.result for x in self.tracked_stracks if x.is_activated], dtype=np.float32)
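
The two-stage BYTE association starts from the confidence split at the top of update(): detections at or above track_high_thresh feed the first association, while those between track_low_thresh and track_high_thresh are held back for the low-score rescue pass. A tiny sketch of those masks, with made-up thresholds and scores:

import numpy as np

scores = np.array([0.92, 0.45, 0.08, 0.61])
track_high_thresh, track_low_thresh = 0.5, 0.1  # illustrative values

first_pass = scores >= track_high_thresh                                   # high-score detections
second_pass = (scores > track_low_thresh) & (scores < track_high_thresh)   # low-score rescue set
print(first_pass, second_pass)  # [ True False False  True] [False  True False False]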




