μ½˜ν…μΈ λ‘œ κ±΄λ„ˆλ›°κΈ°

Reference for ultralytics/data/dataset.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/dataset.py. If you spot a problem, please help fix it by contributing a Pull Request πŸ› οΈ. Thank you πŸ™!



ultralytics.data.dataset.YOLODataset

Bases: BaseDataset

Dataset class for loading object detection and/or segmentation labels in YOLO format.

Parameters:

    data (dict, optional): A dataset YAML dictionary. Defaults to None.
    task (str): An explicit argument pointing to the current task. Defaults to 'detect'.

Returns:

    (torch.utils.data.Dataset): A PyTorch dataset object that can be used for training an object detection model.

Source code in ultralytics/data/dataset.py
class YOLODataset(BaseDataset):
    """
    Dataset class for loading object detection and/or segmentation labels in YOLO format.

    Args:
        data (dict, optional): A dataset YAML dictionary. Defaults to None.
        task (str): An explicit arg to point current task, Defaults to 'detect'.

    Returns:
        (torch.utils.data.Dataset): A PyTorch dataset object that can be used for training an object detection model.
    """

    def __init__(self, *args, data=None, task="detect", **kwargs):
        """Initializes the YOLODataset with optional configurations for segments and keypoints."""
        self.use_segments = task == "segment"
        self.use_keypoints = task == "pose"
        self.use_obb = task == "obb"
        self.data = data
        assert not (self.use_segments and self.use_keypoints), "Can not use both segments and keypoints."
        super().__init__(*args, **kwargs)

    def cache_labels(self, path=Path("./labels.cache")):
        """
        Cache dataset labels, check images and read shapes.

        Args:
            path (Path): Path where to save the cache file. Default is Path('./labels.cache').

        Returns:
            (dict): labels.
        """
        x = {"labels": []}
        nm, nf, ne, nc, msgs = 0, 0, 0, 0, []  # number missing, found, empty, corrupt, messages
        desc = f"{self.prefix}Scanning {path.parent / path.stem}..."
        total = len(self.im_files)
        nkpt, ndim = self.data.get("kpt_shape", (0, 0))
        if self.use_keypoints and (nkpt <= 0 or ndim not in {2, 3}):
            raise ValueError(
                "'kpt_shape' in data.yaml missing or incorrect. Should be a list with [number of "
                "keypoints, number of dims (2 for x,y or 3 for x,y,visible)], i.e. 'kpt_shape: [17, 3]'"
            )
        with ThreadPool(NUM_THREADS) as pool:
            results = pool.imap(
                func=verify_image_label,
                iterable=zip(
                    self.im_files,
                    self.label_files,
                    repeat(self.prefix),
                    repeat(self.use_keypoints),
                    repeat(len(self.data["names"])),
                    repeat(nkpt),
                    repeat(ndim),
                ),
            )
            pbar = TQDM(results, desc=desc, total=total)
            for im_file, lb, shape, segments, keypoint, nm_f, nf_f, ne_f, nc_f, msg in pbar:
                nm += nm_f
                nf += nf_f
                ne += ne_f
                nc += nc_f
                if im_file:
                    x["labels"].append(
                        {
                            "im_file": im_file,
                            "shape": shape,
                            "cls": lb[:, 0:1],  # n, 1
                            "bboxes": lb[:, 1:],  # n, 4
                            "segments": segments,
                            "keypoints": keypoint,
                            "normalized": True,
                            "bbox_format": "xywh",
                        }
                    )
                if msg:
                    msgs.append(msg)
                pbar.desc = f"{desc} {nf} images, {nm + ne} backgrounds, {nc} corrupt"
            pbar.close()

        if msgs:
            LOGGER.info("\n".join(msgs))
        if nf == 0:
            LOGGER.warning(f"{self.prefix}WARNING ⚠️ No labels found in {path}. {HELP_URL}")
        x["hash"] = get_hash(self.label_files + self.im_files)
        x["results"] = nf, nm, ne, nc, len(self.im_files)
        x["msgs"] = msgs  # warnings
        save_dataset_cache_file(self.prefix, path, x, DATASET_CACHE_VERSION)
        return x

    def get_labels(self):
        """Returns dictionary of labels for YOLO training."""
        self.label_files = img2label_paths(self.im_files)
        cache_path = Path(self.label_files[0]).parent.with_suffix(".cache")
        try:
            cache, exists = load_dataset_cache_file(cache_path), True  # attempt to load a *.cache file
            assert cache["version"] == DATASET_CACHE_VERSION  # matches current version
            assert cache["hash"] == get_hash(self.label_files + self.im_files)  # identical hash
        except (FileNotFoundError, AssertionError, AttributeError):
            cache, exists = self.cache_labels(cache_path), False  # run cache ops

        # Display cache
        nf, nm, ne, nc, n = cache.pop("results")  # found, missing, empty, corrupt, total
        if exists and LOCAL_RANK in {-1, 0}:
            d = f"Scanning {cache_path}... {nf} images, {nm + ne} backgrounds, {nc} corrupt"
            TQDM(None, desc=self.prefix + d, total=n, initial=n)  # display results
            if cache["msgs"]:
                LOGGER.info("\n".join(cache["msgs"]))  # display warnings

        # Read cache
        [cache.pop(k) for k in ("hash", "version", "msgs")]  # remove items
        labels = cache["labels"]
        if not labels:
            LOGGER.warning(f"WARNING ⚠️ No images found in {cache_path}, training may not work correctly. {HELP_URL}")
        self.im_files = [lb["im_file"] for lb in labels]  # update im_files

        # Check if the dataset is all boxes or all segments
        lengths = ((len(lb["cls"]), len(lb["bboxes"]), len(lb["segments"])) for lb in labels)
        len_cls, len_boxes, len_segments = (sum(x) for x in zip(*lengths))
        if len_segments and len_boxes != len_segments:
            LOGGER.warning(
                f"WARNING ⚠️ Box and segment counts should be equal, but got len(segments) = {len_segments}, "
                f"len(boxes) = {len_boxes}. To resolve this only boxes will be used and all segments will be removed. "
                "To avoid this please supply either a detect or segment dataset, not a detect-segment mixed dataset."
            )
            for lb in labels:
                lb["segments"] = []
        if len_cls == 0:
            LOGGER.warning(f"WARNING ⚠️ No labels found in {cache_path}, training may not work correctly. {HELP_URL}")
        return labels

    def build_transforms(self, hyp=None):
        """Builds and appends transforms to the list."""
        if self.augment:
            hyp.mosaic = hyp.mosaic if self.augment and not self.rect else 0.0
            hyp.mixup = hyp.mixup if self.augment and not self.rect else 0.0
            transforms = v8_transforms(self, self.imgsz, hyp)
        else:
            transforms = Compose([LetterBox(new_shape=(self.imgsz, self.imgsz), scaleup=False)])
        transforms.append(
            Format(
                bbox_format="xywh",
                normalize=True,
                return_mask=self.use_segments,
                return_keypoint=self.use_keypoints,
                return_obb=self.use_obb,
                batch_idx=True,
                mask_ratio=hyp.mask_ratio,
                mask_overlap=hyp.overlap_mask,
                bgr=hyp.bgr if self.augment else 0.0,  # only affect training.
            )
        )
        return transforms

    def close_mosaic(self, hyp):
        """Sets mosaic, copy_paste and mixup options to 0.0 and builds transformations."""
        hyp.mosaic = 0.0  # set mosaic ratio=0.0
        hyp.copy_paste = 0.0  # keep the same behavior as previous v8 close-mosaic
        hyp.mixup = 0.0  # keep the same behavior as previous v8 close-mosaic
        self.transforms = self.build_transforms(hyp)

    def update_labels_info(self, label):
        """
        Custom your label format here.

        Note:
            cls is not with bboxes now, classification and semantic segmentation need an independent cls label
            Can also support classification and semantic segmentation by adding or removing dict keys there.
        """
        bboxes = label.pop("bboxes")
        segments = label.pop("segments", [])
        keypoints = label.pop("keypoints", None)
        bbox_format = label.pop("bbox_format")
        normalized = label.pop("normalized")

        # NOTE: do NOT resample oriented boxes
        segment_resamples = 100 if self.use_obb else 1000
        if len(segments) > 0:
            # list[np.array(1000, 2)] * num_samples
            # (N, 1000, 2)
            segments = np.stack(resample_segments(segments, n=segment_resamples), axis=0)
        else:
            segments = np.zeros((0, segment_resamples, 2), dtype=np.float32)
        label["instances"] = Instances(bboxes, segments, keypoints, bbox_format=bbox_format, normalized=normalized)
        return label

    @staticmethod
    def collate_fn(batch):
        """Collates data samples into batches."""
        new_batch = {}
        keys = batch[0].keys()
        values = list(zip(*[list(b.values()) for b in batch]))
        for i, k in enumerate(keys):
            value = values[i]
            if k == "img":
                value = torch.stack(value, 0)
            if k in {"masks", "keypoints", "bboxes", "cls", "segments", "obb"}:
                value = torch.cat(value, 0)
            new_batch[k] = value
        new_batch["batch_idx"] = list(new_batch["batch_idx"])
        for i in range(len(new_batch["batch_idx"])):
            new_batch["batch_idx"][i] += i  # add target image index for build_targets()
        new_batch["batch_idx"] = torch.cat(new_batch["batch_idx"], 0)
        return new_batch

__init__(*args, data=None, task='detect', **kwargs)

Initializes the YOLODataset with optional configurations for segments and keypoints.

Source code in ultralytics/data/dataset.py
def __init__(self, *args, data=None, task="detect", **kwargs):
    """Initializes the YOLODataset with optional configurations for segments and keypoints."""
    self.use_segments = task == "segment"
    self.use_keypoints = task == "pose"
    self.use_obb = task == "obb"
    self.data = data
    assert not (self.use_segments and self.use_keypoints), "Can not use both segments and keypoints."
    super().__init__(*args, **kwargs)
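
For orientation, a minimal construction sketch (not part of the source above): it assumes the inherited BaseDataset constructor accepts img_path and imgsz keyword arguments and that a parsed dataset YAML is already available as a plain dict; all paths and class names below are placeholders.

from ultralytics.data.dataset import YOLODataset

# Hypothetical example values -- the image path and names mapping are placeholders.
data = {"names": {0: "person", 1: "car"}, "nc": 2}  # stand-in for a parsed data YAML dict
dataset = YOLODataset(
    img_path="datasets/coco8/images/train",  # assumed BaseDataset keyword argument
    imgsz=640,                               # assumed BaseDataset keyword argument
    data=data,
    task="detect",
)
print(len(dataset))  # number of images found under img_path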

build_transforms(hyp=None)

νŠΈλžœμŠ€νΌμ„ λΉŒλ“œν•˜κ³  λͺ©λ‘μ— μΆ”κ°€ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def build_transforms(self, hyp=None):
    """Builds and appends transforms to the list."""
    if self.augment:
        hyp.mosaic = hyp.mosaic if self.augment and not self.rect else 0.0
        hyp.mixup = hyp.mixup if self.augment and not self.rect else 0.0
        transforms = v8_transforms(self, self.imgsz, hyp)
    else:
        transforms = Compose([LetterBox(new_shape=(self.imgsz, self.imgsz), scaleup=False)])
    transforms.append(
        Format(
            bbox_format="xywh",
            normalize=True,
            return_mask=self.use_segments,
            return_keypoint=self.use_keypoints,
            return_obb=self.use_obb,
            batch_idx=True,
            mask_ratio=hyp.mask_ratio,
            mask_overlap=hyp.overlap_mask,
            bgr=hyp.bgr if self.augment else 0.0,  # only affect training.
        )
    )
    return transforms

cache_labels(path=Path('./labels.cache'))

데이터 μ„ΈνŠΈ λ ˆμ΄λΈ”μ„ μΊμ‹œν•˜κ³ , 이미지λ₯Ό ν™•μΈν•˜κ³ , λ„ν˜•μ„ μ½μŠ΅λ‹ˆλ‹€.

λ§€κ°œλ³€μˆ˜:

이름 μœ ν˜• μ„€λͺ… κΈ°λ³Έκ°’
path Path

μΊμ‹œ νŒŒμΌμ„ μ €μž₯ν•  κ²½λ‘œμž…λ‹ˆλ‹€. 기본값은 Path('./labels.cache')μž…λ‹ˆλ‹€.

Path('./labels.cache')

λ°˜ν™˜ν•©λ‹ˆλ‹€:

μœ ν˜• μ„€λͺ…
dict

λ ˆμ΄λΈ”.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def cache_labels(self, path=Path("./labels.cache")):
    """
    Cache dataset labels, check images and read shapes.

    Args:
        path (Path): Path where to save the cache file. Default is Path('./labels.cache').

    Returns:
        (dict): labels.
    """
    x = {"labels": []}
    nm, nf, ne, nc, msgs = 0, 0, 0, 0, []  # number missing, found, empty, corrupt, messages
    desc = f"{self.prefix}Scanning {path.parent / path.stem}..."
    total = len(self.im_files)
    nkpt, ndim = self.data.get("kpt_shape", (0, 0))
    if self.use_keypoints and (nkpt <= 0 or ndim not in {2, 3}):
        raise ValueError(
            "'kpt_shape' in data.yaml missing or incorrect. Should be a list with [number of "
            "keypoints, number of dims (2 for x,y or 3 for x,y,visible)], i.e. 'kpt_shape: [17, 3]'"
        )
    with ThreadPool(NUM_THREADS) as pool:
        results = pool.imap(
            func=verify_image_label,
            iterable=zip(
                self.im_files,
                self.label_files,
                repeat(self.prefix),
                repeat(self.use_keypoints),
                repeat(len(self.data["names"])),
                repeat(nkpt),
                repeat(ndim),
            ),
        )
        pbar = TQDM(results, desc=desc, total=total)
        for im_file, lb, shape, segments, keypoint, nm_f, nf_f, ne_f, nc_f, msg in pbar:
            nm += nm_f
            nf += nf_f
            ne += ne_f
            nc += nc_f
            if im_file:
                x["labels"].append(
                    {
                        "im_file": im_file,
                        "shape": shape,
                        "cls": lb[:, 0:1],  # n, 1
                        "bboxes": lb[:, 1:],  # n, 4
                        "segments": segments,
                        "keypoints": keypoint,
                        "normalized": True,
                        "bbox_format": "xywh",
                    }
                )
            if msg:
                msgs.append(msg)
            pbar.desc = f"{desc} {nf} images, {nm + ne} backgrounds, {nc} corrupt"
        pbar.close()

    if msgs:
        LOGGER.info("\n".join(msgs))
    if nf == 0:
        LOGGER.warning(f"{self.prefix}WARNING ⚠️ No labels found in {path}. {HELP_URL}")
    x["hash"] = get_hash(self.label_files + self.im_files)
    x["results"] = nf, nm, ne, nc, len(self.im_files)
    x["msgs"] = msgs  # warnings
    save_dataset_cache_file(self.prefix, path, x, DATASET_CACHE_VERSION)
    return x
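
The kpt_shape check above expects the dataset YAML to declare the keypoint layout. A small illustrative snippet mirroring that validation (the dict values are hypothetical):

# Pose datasets must declare how many keypoints each instance has and whether each
# keypoint is (x, y) or (x, y, visible). This mirrors the check in cache_labels().
data = {"names": {0: "person"}, "kpt_shape": [17, 3]}  # e.g. 17 keypoints with x,y,visible

nkpt, ndim = data.get("kpt_shape", (0, 0))
if nkpt <= 0 or ndim not in {2, 3}:
    raise ValueError("'kpt_shape' in data.yaml missing or incorrect")
print(nkpt, ndim)  # 17 3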

close_mosaic(hyp)

λͺ¨μžμ΄ν¬, 볡사_λΆ™μ—¬λ„£κΈ° 및 λ―ΉμŠ€μ—… μ˜΅μ…˜μ„ 0.0으둜 μ„€μ •ν•˜κ³  λ³€ν™˜μ„ λΉŒλ“œν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def close_mosaic(self, hyp):
    """Sets mosaic, copy_paste and mixup options to 0.0 and builds transformations."""
    hyp.mosaic = 0.0  # set mosaic ratio=0.0
    hyp.copy_paste = 0.0  # keep the same behavior as previous v8 close-mosaic
    hyp.mixup = 0.0  # keep the same behavior as previous v8 close-mosaic
    self.transforms = self.build_transforms(hyp)

collate_fn(batch) staticmethod

데이터 μƒ˜ν”Œμ„ 일괄 μ²˜λ¦¬ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
@staticmethod
def collate_fn(batch):
    """Collates data samples into batches."""
    new_batch = {}
    keys = batch[0].keys()
    values = list(zip(*[list(b.values()) for b in batch]))
    for i, k in enumerate(keys):
        value = values[i]
        if k == "img":
            value = torch.stack(value, 0)
        if k in {"masks", "keypoints", "bboxes", "cls", "segments", "obb"}:
            value = torch.cat(value, 0)
        new_batch[k] = value
    new_batch["batch_idx"] = list(new_batch["batch_idx"])
    for i in range(len(new_batch["batch_idx"])):
        new_batch["batch_idx"][i] += i  # add target image index for build_targets()
    new_batch["batch_idx"] = torch.cat(new_batch["batch_idx"], 0)
    return new_batch
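
Because collate_fn is a staticmethod, its per-key behaviour can be checked directly on a toy batch. The sketch below only assumes torch and the class shown above are importable; the sample dicts are fabricated to mimic what the Format transform emits (an img tensor plus per-instance cls, bboxes and batch_idx entries).

import torch
from ultralytics.data.dataset import YOLODataset

# Two fake samples: 2 instances in the first image, 3 in the second.
batch = [
    {"img": torch.zeros(3, 64, 64), "cls": torch.zeros(2, 1), "bboxes": torch.rand(2, 4), "batch_idx": torch.zeros(2)},
    {"img": torch.zeros(3, 64, 64), "cls": torch.ones(3, 1), "bboxes": torch.rand(3, 4), "batch_idx": torch.zeros(3)},
]
out = YOLODataset.collate_fn(batch)
print(out["img"].shape)     # torch.Size([2, 3, 64, 64])   -> images are stacked
print(out["bboxes"].shape)  # torch.Size([5, 4])           -> instances are concatenated
print(out["batch_idx"])     # tensor([0., 0., 1., 1., 1.]) -> which image each instance belongs to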

get_labels()

Returns a dictionary of labels for YOLO training.

Source code in ultralytics/data/dataset.py
def get_labels(self):
    """Returns dictionary of labels for YOLO training."""
    self.label_files = img2label_paths(self.im_files)
    cache_path = Path(self.label_files[0]).parent.with_suffix(".cache")
    try:
        cache, exists = load_dataset_cache_file(cache_path), True  # attempt to load a *.cache file
        assert cache["version"] == DATASET_CACHE_VERSION  # matches current version
        assert cache["hash"] == get_hash(self.label_files + self.im_files)  # identical hash
    except (FileNotFoundError, AssertionError, AttributeError):
        cache, exists = self.cache_labels(cache_path), False  # run cache ops

    # Display cache
    nf, nm, ne, nc, n = cache.pop("results")  # found, missing, empty, corrupt, total
    if exists and LOCAL_RANK in {-1, 0}:
        d = f"Scanning {cache_path}... {nf} images, {nm + ne} backgrounds, {nc} corrupt"
        TQDM(None, desc=self.prefix + d, total=n, initial=n)  # display results
        if cache["msgs"]:
            LOGGER.info("\n".join(cache["msgs"]))  # display warnings

    # Read cache
    [cache.pop(k) for k in ("hash", "version", "msgs")]  # remove items
    labels = cache["labels"]
    if not labels:
        LOGGER.warning(f"WARNING ⚠️ No images found in {cache_path}, training may not work correctly. {HELP_URL}")
    self.im_files = [lb["im_file"] for lb in labels]  # update im_files

    # Check if the dataset is all boxes or all segments
    lengths = ((len(lb["cls"]), len(lb["bboxes"]), len(lb["segments"])) for lb in labels)
    len_cls, len_boxes, len_segments = (sum(x) for x in zip(*lengths))
    if len_segments and len_boxes != len_segments:
        LOGGER.warning(
            f"WARNING ⚠️ Box and segment counts should be equal, but got len(segments) = {len_segments}, "
            f"len(boxes) = {len_boxes}. To resolve this only boxes will be used and all segments will be removed. "
            "To avoid this please supply either a detect or segment dataset, not a detect-segment mixed dataset."
        )
        for lb in labels:
            lb["segments"] = []
    if len_cls == 0:
        LOGGER.warning(f"WARNING ⚠️ No labels found in {cache_path}, training may not work correctly. {HELP_URL}")
    return labels
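
Each element of the list returned above is a plain dict built in cache_labels; a single illustrative entry (with made-up values) looks like this:

import numpy as np

# One label entry for an image with two normalized xywh boxes and no segments/keypoints.
label = {
    "im_file": "datasets/coco8/images/train/000000000009.jpg",  # hypothetical path
    "shape": (480, 640),   # (height, width)
    "cls": np.array([[0.0], [1.0]], dtype=np.float32),  # (n, 1) class indices
    "bboxes": np.array([[0.5, 0.5, 0.2, 0.3],
                        [0.3, 0.4, 0.1, 0.1]], dtype=np.float32),  # (n, 4) normalized xywh
    "segments": [],        # polygons, only populated for segmentation datasets
    "keypoints": None,     # only populated for pose datasets
    "normalized": True,
    "bbox_format": "xywh",
}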

update_labels_info(label)

μ—¬κΈ°μ—μ„œ 라벨 ν˜•μ‹μ„ μ‚¬μš©μž μ§€μ •ν•˜μ„Έμš”.

μ°Έκ³ 

λΆ„λ₯˜ 및 의미둠적 μ„ΈλΆ„ν™”μ—λŠ” 독립적인 cls λ ˆμ΄λΈ”μ΄ ν•„μš”ν•©λ‹ˆλ‹€. 거기에 λ”•μ…”λ„ˆλ¦¬ ν‚€λ₯Ό μΆ”κ°€ν•˜κ±°λ‚˜ μ œκ±°ν•˜μ—¬ λΆ„λ₯˜ 및 의미둠적 μ„ΈλΆ„ν™”λ₯Ό 지원할 μˆ˜λ„ μžˆμŠ΅λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def update_labels_info(self, label):
    """
    Custom your label format here.

    Note:
        cls is not with bboxes now, classification and semantic segmentation need an independent cls label
        Can also support classification and semantic segmentation by adding or removing dict keys there.
    """
    bboxes = label.pop("bboxes")
    segments = label.pop("segments", [])
    keypoints = label.pop("keypoints", None)
    bbox_format = label.pop("bbox_format")
    normalized = label.pop("normalized")

    # NOTE: do NOT resample oriented boxes
    segment_resamples = 100 if self.use_obb else 1000
    if len(segments) > 0:
        # list[np.array(1000, 2)] * num_samples
        # (N, 1000, 2)
        segments = np.stack(resample_segments(segments, n=segment_resamples), axis=0)
    else:
        segments = np.zeros((0, segment_resamples, 2), dtype=np.float32)
    label["instances"] = Instances(bboxes, segments, keypoints, bbox_format=bbox_format, normalized=normalized)
    return label
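
update_labels_info resamples every polygon to a fixed number of points (1000, or 100 for OBB) so the segments of one image can be stacked into a single (N, n, 2) array. A standalone numpy sketch of that idea, not the library's resample_segments implementation:

import numpy as np

def resample_polygon(points: np.ndarray, n: int = 1000) -> np.ndarray:
    """Linearly interpolate a closed polygon (m, 2) to exactly n points."""
    points = np.concatenate([points, points[:1]], axis=0)  # close the polygon
    t = np.linspace(0, len(points) - 1, n)                  # fractional indices along the outline
    idx = np.arange(len(points))
    x = np.interp(t, idx, points[:, 0])
    y = np.interp(t, idx, points[:, 1])
    return np.stack([x, y], axis=1)                         # (n, 2)

triangle = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.8]], dtype=np.float32)
print(resample_polygon(triangle, n=1000).shape)  # (1000, 2)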



ultralytics.data.dataset.ClassificationDataset

YOLO λΆ„λ₯˜ μž‘μ—…μ„ μ§€μ›ν•˜λ„λ‘ ν† μΉ˜λΉ„μ „ 이미지 폴더λ₯Ό ν™•μž₯ν•˜μ—¬ 이미지 증강, 캐싱 및 검증과 같은 κΈ°λŠ₯을 μ œκ³΅ν•©λ‹ˆλ‹€. λ”₯λŸ¬λ‹ λͺ¨λΈ ν•™μŠ΅μ„ μœ„ν•œ λŒ€κ·œλͺ¨ 데이터 μ„ΈνŠΈλ₯Ό 효율적으둜 μ²˜λ¦¬ν•˜λ„λ‘ μ„€κ³„λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 이미지 λ³€ν™˜ 및 캐싱 λ©”μ»€λ‹ˆμ¦˜μ„ μ˜΅μ…˜μœΌλ‘œ μ œκ³΅ν•˜μ—¬ ν•™μŠ΅ 속도λ₯Ό 높일 수 μžˆμŠ΅λ‹ˆλ‹€.

이 ν΄λž˜μŠ€λŠ” ν† μΉ˜λΉ„μ „κ³Ό μ•Œλ²„λ©˜ν…Œμ΄μ…˜ 라이브러리λ₯Ό λͺ¨λ‘ μ‚¬μš©ν•˜μ—¬ 증강할 수 있으며, 이미지 캐싱을 μ§€μ›ν•©λ‹ˆλ‹€. λ₯Ό RAM λ˜λŠ” λ””μŠ€ν¬μ— μΊμ‹±ν•˜μ—¬ ν›ˆλ ¨ 쀑 IO μ˜€λ²„ν—€λ“œλ₯Ό 쀄일 수 μžˆμŠ΅λ‹ˆλ‹€. λ˜ν•œ κ°•λ ₯ν•œ 검증 ν”„λ‘œμ„ΈμŠ€λ₯Ό κ΅¬ν˜„ν•˜μ—¬ λ₯Ό κ΅¬ν˜„ν•˜μ—¬ 데이터 무결성과 일관성을 보μž₯ν•©λ‹ˆλ‹€.

속성:

이름 μœ ν˜• μ„€λͺ…
cache_ram bool

RAM 캐싱이 ν™œμ„±ν™”λ˜μ–΄ μžˆλŠ”μ§€ μ—¬λΆ€λ₯Ό λ‚˜νƒ€λƒ…λ‹ˆλ‹€.

cache_disk bool

λ””μŠ€ν¬ 캐싱이 ν™œμ„±ν™”λ˜μ–΄ μžˆλŠ”μ§€ μ—¬λΆ€λ₯Ό λ‚˜νƒ€λƒ…λ‹ˆλ‹€.

samples list

이미지 경둜, 클래슀 인덱슀, .npy μΊμ‹œ 경둜(λ””μŠ€ν¬μ— μžˆλŠ” 경우) 및 λ‘œλ“œλœ 이미지 배열을 ν¬ν•¨ν•˜λŠ” νŠœν”Œ λͺ©λ‘μž…λ‹ˆλ‹€. 파일 경둜(λ””μŠ€ν¬μ— μΊμ‹±ν•˜λŠ” 경우), μ„ νƒμ μœΌλ‘œ λ‘œλ“œλœ 이미지 λ°°μ—΄(RAM에 μΊμ‹±ν•˜λŠ” 경우)을 ν¬ν•¨ν•˜λŠ” νŠœν”Œ λͺ©λ‘μž…λ‹ˆλ‹€.

torch_transforms callable

PyTorch νŠΈλžœμŠ€νΌμ„ 이미지에 μ μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
class ClassificationDataset:
    """
    Extends torchvision ImageFolder to support YOLO classification tasks, offering functionalities like image
    augmentation, caching, and verification. It's designed to efficiently handle large datasets for training deep
    learning models, with optional image transformations and caching mechanisms to speed up training.

    This class allows for augmentations using both torchvision and Albumentations libraries, and supports caching images
    in RAM or on disk to reduce IO overhead during training. Additionally, it implements a robust verification process
    to ensure data integrity and consistency.

    Attributes:
        cache_ram (bool): Indicates if caching in RAM is enabled.
        cache_disk (bool): Indicates if caching on disk is enabled.
        samples (list): A list of tuples, each containing the path to an image, its class index, path to its .npy cache
                        file (if caching on disk), and optionally the loaded image array (if caching in RAM).
        torch_transforms (callable): PyTorch transforms to be applied to the images.
    """

    def __init__(self, root, args, augment=False, prefix=""):
        """
        Initialize YOLO object with root, image size, augmentations, and cache settings.

        Args:
            root (str): Path to the dataset directory where images are stored in a class-specific folder structure.
            args (Namespace): Configuration containing dataset-related settings such as image size, augmentation
                parameters, and cache settings. It includes attributes like `imgsz` (image size), `fraction` (fraction
                of data to use), `scale`, `fliplr`, `flipud`, `cache` (disk or RAM caching for faster training),
                `auto_augment`, `hsv_h`, `hsv_s`, `hsv_v`, and `crop_fraction`.
            augment (bool, optional): Whether to apply augmentations to the dataset. Default is False.
            prefix (str, optional): Prefix for logging and cache filenames, aiding in dataset identification and
                debugging. Default is an empty string.
        """
        import torchvision  # scope for faster 'import ultralytics'

        # Base class assigned as attribute rather than used as base class to allow for scoping slow torchvision import
        self.base = torchvision.datasets.ImageFolder(root=root)
        self.samples = self.base.samples
        self.root = self.base.root

        # Initialize attributes
        if augment and args.fraction < 1.0:  # reduce training fraction
            self.samples = self.samples[: round(len(self.samples) * args.fraction)]
        self.prefix = colorstr(f"{prefix}: ") if prefix else ""
        self.cache_ram = args.cache is True or str(args.cache).lower() == "ram"  # cache images into RAM
        self.cache_disk = str(args.cache).lower() == "disk"  # cache images on hard drive as uncompressed *.npy files
        self.samples = self.verify_images()  # filter out bad images
        self.samples = [list(x) + [Path(x[0]).with_suffix(".npy"), None] for x in self.samples]  # file, index, npy, im
        scale = (1.0 - args.scale, 1.0)  # (0.08, 1.0)
        self.torch_transforms = (
            classify_augmentations(
                size=args.imgsz,
                scale=scale,
                hflip=args.fliplr,
                vflip=args.flipud,
                erasing=args.erasing,
                auto_augment=args.auto_augment,
                hsv_h=args.hsv_h,
                hsv_s=args.hsv_s,
                hsv_v=args.hsv_v,
            )
            if augment
            else classify_transforms(size=args.imgsz, crop_fraction=args.crop_fraction)
        )

    def __getitem__(self, i):
        """Returns subset of data and targets corresponding to given indices."""
        f, j, fn, im = self.samples[i]  # filename, index, filename.with_suffix('.npy'), image
        if self.cache_ram:
            if im is None:  # Warning: two separate if statements required here, do not combine this with previous line
                im = self.samples[i][3] = cv2.imread(f)
        elif self.cache_disk:
            if not fn.exists():  # load npy
                np.save(fn.as_posix(), cv2.imread(f), allow_pickle=False)
            im = np.load(fn)
        else:  # read image
            im = cv2.imread(f)  # BGR
        # Convert NumPy array to PIL image
        im = Image.fromarray(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
        sample = self.torch_transforms(im)
        return {"img": sample, "cls": j}

    def __len__(self) -> int:
        """Return the total number of samples in the dataset."""
        return len(self.samples)

    def verify_images(self):
        """Verify all images in dataset."""
        desc = f"{self.prefix}Scanning {self.root}..."
        path = Path(self.root).with_suffix(".cache")  # *.cache file path

        with contextlib.suppress(FileNotFoundError, AssertionError, AttributeError):
            cache = load_dataset_cache_file(path)  # attempt to load a *.cache file
            assert cache["version"] == DATASET_CACHE_VERSION  # matches current version
            assert cache["hash"] == get_hash([x[0] for x in self.samples])  # identical hash
            nf, nc, n, samples = cache.pop("results")  # found, missing, empty, corrupt, total
            if LOCAL_RANK in {-1, 0}:
                d = f"{desc} {nf} images, {nc} corrupt"
                TQDM(None, desc=d, total=n, initial=n)
                if cache["msgs"]:
                    LOGGER.info("\n".join(cache["msgs"]))  # display warnings
            return samples

        # Run scan if *.cache retrieval failed
        nf, nc, msgs, samples, x = 0, 0, [], [], {}
        with ThreadPool(NUM_THREADS) as pool:
            results = pool.imap(func=verify_image, iterable=zip(self.samples, repeat(self.prefix)))
            pbar = TQDM(results, desc=desc, total=len(self.samples))
            for sample, nf_f, nc_f, msg in pbar:
                if nf_f:
                    samples.append(sample)
                if msg:
                    msgs.append(msg)
                nf += nf_f
                nc += nc_f
                pbar.desc = f"{desc} {nf} images, {nc} corrupt"
            pbar.close()
        if msgs:
            LOGGER.info("\n".join(msgs))
        x["hash"] = get_hash([x[0] for x in self.samples])
        x["results"] = nf, nc, len(samples), samples
        x["msgs"] = msgs  # warnings
        save_dataset_cache_file(self.prefix, path, x, DATASET_CACHE_VERSION)
        return samples

__getitem__(i)

주어진 μΈλ±μŠ€μ— ν•΄λ‹Ήν•˜λŠ” 데이터 및 λŒ€μƒμ˜ ν•˜μœ„ 집합을 λ°˜ν™˜ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def __getitem__(self, i):
    """Returns subset of data and targets corresponding to given indices."""
    f, j, fn, im = self.samples[i]  # filename, index, filename.with_suffix('.npy'), image
    if self.cache_ram:
        if im is None:  # Warning: two separate if statements required here, do not combine this with previous line
            im = self.samples[i][3] = cv2.imread(f)
    elif self.cache_disk:
        if not fn.exists():  # load npy
            np.save(fn.as_posix(), cv2.imread(f), allow_pickle=False)
        im = np.load(fn)
    else:  # read image
        im = cv2.imread(f)  # BGR
    # Convert NumPy array to PIL image
    im = Image.fromarray(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
    sample = self.torch_transforms(im)
    return {"img": sample, "cls": j}

__init__(root, args, augment=False, prefix='')

Initialize YOLO object with root, image size, augmentations, and cache settings.

Parameters:

    root (str, required): Path to the dataset directory where images are stored in a class-specific folder structure.
    args (Namespace, required): Configuration containing dataset-related settings such as image size, augmentation parameters, and cache settings. It includes attributes like imgsz (image size), fraction (fraction of data to use), scale, fliplr, flipud, cache (disk or RAM caching for faster training), auto_augment, hsv_h, hsv_s, hsv_v, and crop_fraction.
    augment (bool, optional): Whether to apply augmentations to the dataset. Defaults to False.
    prefix (str, optional): Prefix for logging and cache filenames, aiding in dataset identification and debugging. Defaults to an empty string.

Source code in ultralytics/data/dataset.py
def __init__(self, root, args, augment=False, prefix=""):
    """
    Initialize YOLO object with root, image size, augmentations, and cache settings.

    Args:
        root (str): Path to the dataset directory where images are stored in a class-specific folder structure.
        args (Namespace): Configuration containing dataset-related settings such as image size, augmentation
            parameters, and cache settings. It includes attributes like `imgsz` (image size), `fraction` (fraction
            of data to use), `scale`, `fliplr`, `flipud`, `cache` (disk or RAM caching for faster training),
            `auto_augment`, `hsv_h`, `hsv_s`, `hsv_v`, and `crop_fraction`.
        augment (bool, optional): Whether to apply augmentations to the dataset. Default is False.
        prefix (str, optional): Prefix for logging and cache filenames, aiding in dataset identification and
            debugging. Default is an empty string.
    """
    import torchvision  # scope for faster 'import ultralytics'

    # Base class assigned as attribute rather than used as base class to allow for scoping slow torchvision import
    self.base = torchvision.datasets.ImageFolder(root=root)
    self.samples = self.base.samples
    self.root = self.base.root

    # Initialize attributes
    if augment and args.fraction < 1.0:  # reduce training fraction
        self.samples = self.samples[: round(len(self.samples) * args.fraction)]
    self.prefix = colorstr(f"{prefix}: ") if prefix else ""
    self.cache_ram = args.cache is True or str(args.cache).lower() == "ram"  # cache images into RAM
    self.cache_disk = str(args.cache).lower() == "disk"  # cache images on hard drive as uncompressed *.npy files
    self.samples = self.verify_images()  # filter out bad images
    self.samples = [list(x) + [Path(x[0]).with_suffix(".npy"), None] for x in self.samples]  # file, index, npy, im
    scale = (1.0 - args.scale, 1.0)  # (0.08, 1.0)
    self.torch_transforms = (
        classify_augmentations(
            size=args.imgsz,
            scale=scale,
            hflip=args.fliplr,
            vflip=args.flipud,
            erasing=args.erasing,
            auto_augment=args.auto_augment,
            hsv_h=args.hsv_h,
            hsv_s=args.hsv_s,
            hsv_v=args.hsv_v,
        )
        if augment
        else classify_transforms(size=args.imgsz, crop_fraction=args.crop_fraction)
    )
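
A construction sketch, assuming a torchvision ImageFolder-style directory (one sub-folder per class) and showing only the args attributes the constructor is documented to read; every value below is a placeholder.

from types import SimpleNamespace
from ultralytics.data.dataset import ClassificationDataset

# Hypothetical hyperparameters covering the attributes read in __init__ above.
args = SimpleNamespace(
    imgsz=224, fraction=1.0, scale=0.5, fliplr=0.5, flipud=0.0,
    cache=False, erasing=0.4, auto_augment=None,
    hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, crop_fraction=1.0,
)
dataset = ClassificationDataset(root="datasets/imagenette160/train", args=args, augment=False)
sample = dataset[0]
print(len(dataset), sample["img"].shape, sample["cls"])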

__len__()

데이터 μ„ΈνŠΈμ˜ 총 μƒ˜ν”Œ 수λ₯Ό λ°˜ν™˜ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def __len__(self) -> int:
    """Return the total number of samples in the dataset."""
    return len(self.samples)

verify_images()

데이터 μ„ΈνŠΈμ˜ λͺ¨λ“  이미지λ₯Ό ν™•μΈν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def verify_images(self):
    """Verify all images in dataset."""
    desc = f"{self.prefix}Scanning {self.root}..."
    path = Path(self.root).with_suffix(".cache")  # *.cache file path

    with contextlib.suppress(FileNotFoundError, AssertionError, AttributeError):
        cache = load_dataset_cache_file(path)  # attempt to load a *.cache file
        assert cache["version"] == DATASET_CACHE_VERSION  # matches current version
        assert cache["hash"] == get_hash([x[0] for x in self.samples])  # identical hash
        nf, nc, n, samples = cache.pop("results")  # found, missing, empty, corrupt, total
        if LOCAL_RANK in {-1, 0}:
            d = f"{desc} {nf} images, {nc} corrupt"
            TQDM(None, desc=d, total=n, initial=n)
            if cache["msgs"]:
                LOGGER.info("\n".join(cache["msgs"]))  # display warnings
        return samples

    # Run scan if *.cache retrieval failed
    nf, nc, msgs, samples, x = 0, 0, [], [], {}
    with ThreadPool(NUM_THREADS) as pool:
        results = pool.imap(func=verify_image, iterable=zip(self.samples, repeat(self.prefix)))
        pbar = TQDM(results, desc=desc, total=len(self.samples))
        for sample, nf_f, nc_f, msg in pbar:
            if nf_f:
                samples.append(sample)
            if msg:
                msgs.append(msg)
            nf += nf_f
            nc += nc_f
            pbar.desc = f"{desc} {nf} images, {nc} corrupt"
        pbar.close()
    if msgs:
        LOGGER.info("\n".join(msgs))
    x["hash"] = get_hash([x[0] for x in self.samples])
    x["results"] = nf, nc, len(samples), samples
    x["msgs"] = msgs  # warnings
    save_dataset_cache_file(self.prefix, path, x, DATASET_CACHE_VERSION)
    return samples



ultralytics.data.dataset.YOLOMultiModalDataset

Bases: YOLODataset

Dataset class for loading object detection and/or segmentation labels in YOLO format.

Parameters:

    data (dict, optional): A dataset YAML dictionary. Defaults to None.
    task (str): An explicit argument pointing to the current task. Defaults to 'detect'.

Returns:

    (torch.utils.data.Dataset): A PyTorch dataset object that can be used for training an object detection model.

Source code in ultralytics/data/dataset.py
class YOLOMultiModalDataset(YOLODataset):
    """
    Dataset class for loading object detection and/or segmentation labels in YOLO format.

    Args:
        data (dict, optional): A dataset YAML dictionary. Defaults to None.
        task (str): An explicit arg to point current task, Defaults to 'detect'.

    Returns:
        (torch.utils.data.Dataset): A PyTorch dataset object that can be used for training an object detection model.
    """

    def __init__(self, *args, data=None, task="detect", **kwargs):
        """Initializes a dataset object for object detection tasks with optional specifications."""
        super().__init__(*args, data=data, task=task, **kwargs)

    def update_labels_info(self, label):
        """Add texts information for multi modal model training."""
        labels = super().update_labels_info(label)
        # NOTE: some categories are concatenated with its synonyms by `/`.
        labels["texts"] = [v.split("/") for _, v in self.data["names"].items()]
        return labels

    def build_transforms(self, hyp=None):
        """Enhances data transformations with optional text augmentation for multi-modal training."""
        transforms = super().build_transforms(hyp)
        if self.augment:
            # NOTE: hard-coded the args for now.
            transforms.insert(-1, RandomLoadText(max_samples=min(self.data["nc"], 80), padding=True))
        return transforms

__init__(*args, data=None, task='detect', **kwargs)

Initializes a dataset object for object detection tasks with optional specifications.

Source code in ultralytics/data/dataset.py
def __init__(self, *args, data=None, task="detect", **kwargs):
    """Initializes a dataset object for object detection tasks with optional specifications."""
    super().__init__(*args, data=data, task=task, **kwargs)

build_transforms(hyp=None)

λ©€ν‹° λͺ¨λ‹¬ κ΅μœ‘μ„ μœ„ν•œ ν…μŠ€νŠΈ 증강 μ˜΅μ…˜μœΌλ‘œ 데이터 λ³€ν™˜μ„ κ°œμ„ ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def build_transforms(self, hyp=None):
    """Enhances data transformations with optional text augmentation for multi-modal training."""
    transforms = super().build_transforms(hyp)
    if self.augment:
        # NOTE: hard-coded the args for now.
        transforms.insert(-1, RandomLoadText(max_samples=min(self.data["nc"], 80), padding=True))
    return transforms

update_labels_info(label)

λ©€ν‹° λͺ¨λ‹¬ λͺ¨λΈ ν•™μŠ΅μ„ μœ„ν•œ ν…μŠ€νŠΈ 정보λ₯Ό μΆ”κ°€ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def update_labels_info(self, label):
    """Add texts information for multi modal model training."""
    labels = super().update_labels_info(label)
    # NOTE: some categories are concatenated with its synonyms by `/`.
    labels["texts"] = [v.split("/") for _, v in self.data["names"].items()]
    return labels
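
The texts field is built straight from the names mapping in the dataset YAML, where a class name may bundle synonyms separated by "/". A tiny standalone illustration of that line:

# names values may concatenate synonyms with '/', as noted in the code above.
names = {0: "person/pedestrian", 1: "car"}
texts = [v.split("/") for _, v in names.items()]
print(texts)  # [['person', 'pedestrian'], ['car']]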



ultralytics.data.dataset.GroundingDataset

Bases: YOLODataset

Source code in ultralytics/data/dataset.py
class GroundingDataset(YOLODataset):
    def __init__(self, *args, task="detect", json_file, **kwargs):
        """Initializes a GroundingDataset for object detection, loading annotations from a specified JSON file."""
        assert task == "detect", "`GroundingDataset` only support `detect` task for now!"
        self.json_file = json_file
        super().__init__(*args, task=task, data={}, **kwargs)

    def get_img_files(self, img_path):
        """The image files would be read in `get_labels` function, return empty list here."""
        return []

    def get_labels(self):
        """Loads annotations from a JSON file, filters, and normalizes bounding boxes for each image."""
        labels = []
        LOGGER.info("Loading annotation file...")
        with open(self.json_file, "r") as f:
            annotations = json.load(f)
        images = {f'{x["id"]:d}': x for x in annotations["images"]}
        imgToAnns = defaultdict(list)
        for ann in annotations["annotations"]:
            imgToAnns[ann["image_id"]].append(ann)
        for img_id, anns in TQDM(imgToAnns.items(), desc=f"Reading annotations {self.json_file}"):
            img = images[f"{img_id:d}"]
            h, w, f = img["height"], img["width"], img["file_name"]
            im_file = Path(self.img_path) / f
            if not im_file.exists():
                continue
            self.im_files.append(str(im_file))
            bboxes = []
            cat2id = {}
            texts = []
            for ann in anns:
                if ann["iscrowd"]:
                    continue
                box = np.array(ann["bbox"], dtype=np.float32)
                box[:2] += box[2:] / 2
                box[[0, 2]] /= float(w)
                box[[1, 3]] /= float(h)
                if box[2] <= 0 or box[3] <= 0:
                    continue

                cat_name = " ".join([img["caption"][t[0] : t[1]] for t in ann["tokens_positive"]])
                if cat_name not in cat2id:
                    cat2id[cat_name] = len(cat2id)
                    texts.append([cat_name])
                cls = cat2id[cat_name]  # class
                box = [cls] + box.tolist()
                if box not in bboxes:
                    bboxes.append(box)
            lb = np.array(bboxes, dtype=np.float32) if len(bboxes) else np.zeros((0, 5), dtype=np.float32)
            labels.append(
                {
                    "im_file": im_file,
                    "shape": (h, w),
                    "cls": lb[:, 0:1],  # n, 1
                    "bboxes": lb[:, 1:],  # n, 4
                    "normalized": True,
                    "bbox_format": "xywh",
                    "texts": texts,
                }
            )
        return labels

    def build_transforms(self, hyp=None):
        """Configures augmentations for training with optional text loading; `hyp` adjusts augmentation intensity."""
        transforms = super().build_transforms(hyp)
        if self.augment:
            # NOTE: hard-coded the args for now.
            transforms.insert(-1, RandomLoadText(max_samples=80, padding=True))
        return transforms

__init__(*args, task='detect', json_file, **kwargs)

Initializes a GroundingDataset for object detection, loading annotations from a specified JSON file.

Source code in ultralytics/data/dataset.py
def __init__(self, *args, task="detect", json_file, **kwargs):
    """Initializes a GroundingDataset for object detection, loading annotations from a specified JSON file."""
    assert task == "detect", "`GroundingDataset` only support `detect` task for now!"
    self.json_file = json_file
    super().__init__(*args, task=task, data={}, **kwargs)
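
A construction sketch with placeholder paths; it assumes the inherited BaseDataset constructor accepts an img_path keyword argument and that the JSON file follows the COCO-style grounding format read by get_labels below.

from ultralytics.data.dataset import GroundingDataset

dataset = GroundingDataset(
    img_path="datasets/flickr30k/images",             # hypothetical image directory
    json_file="datasets/flickr30k/annotations.json",  # hypothetical grounding annotations
    task="detect",
)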

build_transforms(hyp=None)

선택적 ν…μŠ€νŠΈ λ‘œλ”©μ„ 톡해 ꡐ윑용 증강 κΈ°λŠ₯을 κ΅¬μ„±ν•©λ‹ˆλ‹€; hyp 증강 강도λ₯Ό μ‘°μ •ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def build_transforms(self, hyp=None):
    """Configures augmentations for training with optional text loading; `hyp` adjusts augmentation intensity."""
    transforms = super().build_transforms(hyp)
    if self.augment:
        # NOTE: hard-coded the args for now.
        transforms.insert(-1, RandomLoadText(max_samples=80, padding=True))
    return transforms

get_img_files(img_path)

이미지 νŒŒμΌμ€ λ‹€μŒμ—μ„œ μ½μŠ΅λ‹ˆλ‹€. get_labels ν•¨μˆ˜λ₯Ό ν˜ΈμΆœν•˜λ©΄ 여기에 빈 λͺ©λ‘μ„ λ°˜ν™˜ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def get_img_files(self, img_path):
    """The image files would be read in `get_labels` function, return empty list here."""
    return []

get_labels()

JSON νŒŒμΌμ—μ„œ 주석을 λ‘œλ“œν•˜κ³  각 μ΄λ―Έμ§€μ˜ λ°”μš΄λ”© λ°•μŠ€λ₯Ό ν•„ν„°λ§ν•˜κ³  μ •κ·œν™”ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
def get_labels(self):
    """Loads annotations from a JSON file, filters, and normalizes bounding boxes for each image."""
    labels = []
    LOGGER.info("Loading annotation file...")
    with open(self.json_file, "r") as f:
        annotations = json.load(f)
    images = {f'{x["id"]:d}': x for x in annotations["images"]}
    imgToAnns = defaultdict(list)
    for ann in annotations["annotations"]:
        imgToAnns[ann["image_id"]].append(ann)
    for img_id, anns in TQDM(imgToAnns.items(), desc=f"Reading annotations {self.json_file}"):
        img = images[f"{img_id:d}"]
        h, w, f = img["height"], img["width"], img["file_name"]
        im_file = Path(self.img_path) / f
        if not im_file.exists():
            continue
        self.im_files.append(str(im_file))
        bboxes = []
        cat2id = {}
        texts = []
        for ann in anns:
            if ann["iscrowd"]:
                continue
            box = np.array(ann["bbox"], dtype=np.float32)
            box[:2] += box[2:] / 2
            box[[0, 2]] /= float(w)
            box[[1, 3]] /= float(h)
            if box[2] <= 0 or box[3] <= 0:
                continue

            cat_name = " ".join([img["caption"][t[0] : t[1]] for t in ann["tokens_positive"]])
            if cat_name not in cat2id:
                cat2id[cat_name] = len(cat2id)
                texts.append([cat_name])
            cls = cat2id[cat_name]  # class
            box = [cls] + box.tolist()
            if box not in bboxes:
                bboxes.append(box)
        lb = np.array(bboxes, dtype=np.float32) if len(bboxes) else np.zeros((0, 5), dtype=np.float32)
        labels.append(
            {
                "im_file": im_file,
                "shape": (h, w),
                "cls": lb[:, 0:1],  # n, 1
                "bboxes": lb[:, 1:],  # n, 4
                "normalized": True,
                "bbox_format": "xywh",
                "texts": texts,
            }
        )
    return labels
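
The box handling above converts COCO-style [x_min, y_min, width, height] pixel boxes to normalized center-xywh. A small numeric check of those three lines, using made-up values:

import numpy as np

w, h = 640, 480                                         # image width and height
box = np.array([100, 50, 200, 100], dtype=np.float32)   # COCO bbox: x_min, y_min, w, h

box[:2] += box[2:] / 2   # top-left corner -> box center: [200, 100, 200, 100]
box[[0, 2]] /= float(w)  # normalize x and width
box[[1, 3]] /= float(h)  # normalize y and height
print(box)               # approximately [0.3125 0.2083 0.3125 0.2083]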



ultralytics.data.dataset.YOLOConcatDataset

Bases: ConcatDataset

Dataset as a concatenation of multiple datasets.

This class is useful to assemble different existing datasets.

Source code in ultralytics/data/dataset.py
class YOLOConcatDataset(ConcatDataset):
    """
    Dataset as a concatenation of multiple datasets.

    This class is useful to assemble different existing datasets.
    """

    @staticmethod
    def collate_fn(batch):
        """Collates data samples into batches."""
        return YOLODataset.collate_fn(batch)

collate_fn(batch) staticmethod

데이터 μƒ˜ν”Œμ„ 일괄 μ²˜λ¦¬ν•©λ‹ˆλ‹€.

의 μ†ŒμŠ€ μ½”λ“œ ultralytics/data/dataset.py
@staticmethod
def collate_fn(batch):
    """Collates data samples into batches."""
    return YOLODataset.collate_fn(batch)
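
A usage sketch, assuming ds1 and ds2 are two already-constructed YOLODataset instances with compatible tasks; the concatenated dataset is handed to a DataLoader together with the shared collate_fn:

from torch.utils.data import DataLoader
from ultralytics.data.dataset import YOLOConcatDataset

# ds1 and ds2 are assumed to be YOLODataset instances built elsewhere.
combined = YOLOConcatDataset([ds1, ds2])
loader = DataLoader(combined, batch_size=16, shuffle=True,
                    collate_fn=YOLOConcatDataset.collate_fn)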



ultralytics.data.dataset.SemanticDataset

Bases: BaseDataset

Semantic Segmentation Dataset.

This class is responsible for handling datasets used for semantic segmentation tasks. It inherits functionalities from the BaseDataset class.

Note

This class is currently a placeholder and needs to be populated with methods and attributes for supporting semantic segmentation tasks.

Source code in ultralytics/data/dataset.py
class SemanticDataset(BaseDataset):
    """
    Semantic Segmentation Dataset.

    This class is responsible for handling datasets used for semantic segmentation tasks. It inherits functionalities
    from the BaseDataset class.

    Note:
        This class is currently a placeholder and needs to be populated with methods and attributes for supporting
        semantic segmentation tasks.
    """

    def __init__(self):
        """Initialize a SemanticDataset object."""
        super().__init__()

__init__()

Initialize a SemanticDataset object.

Source code in ultralytics/data/dataset.py
def __init__(self):
    """Initialize a SemanticDataset object."""
    super().__init__()





Created 2023-11-12, Updated 2024-03-31
Authors: Laughing-q (1), glenn-jocher (3)