Reference for ultralytics/engine/model.py

Note

Full source code for this file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/model.py. Help us fix any issues you see by submitting a Pull Request 🛠️. Thank you 🙏!


ultralytics.engine.model.Model

Bases: Module

A base class to unify APIs for all models.

Parameters:

    model (str | Path): Path to the model file to load or create. Defaults to 'yolov8n.pt'.
    task (Any, optional): Task type for the YOLO model. Defaults to None.

Attributes:

    predictor (Any): The predictor object.
    model (Any): The model object.
    trainer (Any): The trainer object.
    task (str): The type of model task.
    ckpt (Any): The checkpoint object if the model was loaded from a *.pt file.
    cfg (str): The model configuration if loaded from a *.yaml file.
    ckpt_path (str): The checkpoint file path.
    overrides (dict): Overrides for the trainer object.
    metrics (Any): The data for metrics.

Methods:

    __call__(source=None, stream=False, **kwargs): Alias for the predict method.
    _new(cfg: str, verbose: bool = True) -> None: Initializes a new model and infers the task type from the model definitions.
    _load(weights: str, task: str = '') -> None: Initializes a new model and infers the task type from the model head.
    _check_is_pytorch_model() -> None: Raises TypeError if the model is not a PyTorch model.
    reset() -> None: Resets the model modules.
    info(verbose: bool = False) -> None: Logs the model info.
    fuse() -> None: Fuses the model for faster inference.
    predict(source=None, stream=False, **kwargs) -> List[Results]: Performs prediction using the YOLO model.

Returns:

    (List[ultralytics.engine.results.Results]): The prediction results.

Source code in ultralytics/engine/model.py
class Model(nn.Module):
    """
    A base class to unify APIs for all models.

    Args:
        model (str, Path): Path to the model file to load or create.
        task (Any, optional): Task type for the YOLO model. Defaults to None.

    Attributes:
        predictor (Any): The predictor object.
        model (Any): The model object.
        trainer (Any): The trainer object.
        task (str): The type of model task.
        ckpt (Any): The checkpoint object if the model loaded from *.pt file.
        cfg (str): The model configuration if loaded from *.yaml file.
        ckpt_path (str): The checkpoint file path.
        overrides (dict): Overrides for the trainer object.
        metrics (Any): The data for metrics.

    Methods:
        __call__(source=None, stream=False, **kwargs):
            Alias for the predict method.
        _new(cfg:str, verbose:bool=True) -> None:
            Initializes a new model and infers the task type from the model definitions.
        _load(weights:str, task:str='') -> None:
            Initializes a new model and infers the task type from the model head.
        _check_is_pytorch_model() -> None:
            Raises TypeError if the model is not a PyTorch model.
        reset() -> None:
            Resets the model modules.
        info(verbose:bool=False) -> None:
            Logs the model info.
        fuse() -> None:
            Fuses the model for faster inference.
        predict(source=None, stream=False, **kwargs) -> List[ultralytics.engine.results.Results]:
            Performs prediction using the YOLO model.

    Returns:
        list(ultralytics.engine.results.Results): The prediction results.
    """

    def __init__(self, model: Union[str, Path] = 'yolov8n.pt', task=None) -> None:
        """
        Initializes the YOLO model.

        Args:
            model (Union[str, Path], optional): Path or name of the model to load or create. Defaults to 'yolov8n.pt'.
            task (Any, optional): Task type for the YOLO model. Defaults to None.
        """
        super().__init__()
        self.callbacks = callbacks.get_default_callbacks()
        self.predictor = None  # reuse predictor
        self.model = None  # model object
        self.trainer = None  # trainer object
        self.ckpt = None  # if loaded from *.pt
        self.cfg = None  # if loaded from *.yaml
        self.ckpt_path = None
        self.overrides = {}  # overrides for trainer object
        self.metrics = None  # validation/training metrics
        self.session = None  # HUB session
        self.task = task  # task type
        model = str(model).strip()  # strip spaces

        # Check if Ultralytics HUB model from https://hub.ultralytics.com
        if self.is_hub_model(model):
            from ultralytics.hub.session import HUBTrainingSession
            self.session = HUBTrainingSession(model)
            model = self.session.model_file

        # Load or create new YOLO model
        suffix = Path(model).suffix
        if not suffix and Path(model).stem in GITHUB_ASSETS_STEMS:
            model, suffix = Path(model).with_suffix('.pt'), '.pt'  # add suffix, i.e. yolov8n -> yolov8n.pt
        if suffix in ('.yaml', '.yml'):
            self._new(model, task)
        else:
            self._load(model, task)

    def __call__(self, source=None, stream=False, **kwargs):
        """Calls the 'predict' function with given arguments to perform object detection."""
        return self.predict(source, stream, **kwargs)

    @staticmethod
    def is_hub_model(model):
        """Check if the provided model is a HUB model."""
        return any((
            model.startswith(f'{HUB_WEB_ROOT}/models/'),  # i.e. https://hub.ultralytics.com/models/MODEL_ID
            [len(x) for x in model.split('_')] == [42, 20],  # APIKEY_MODELID
            len(model) == 20 and not Path(model).exists() and all(x not in model for x in './\\')))  # MODELID

    def _new(self, cfg: str, task=None, model=None, verbose=True):
        """
        Initializes a new model and infers the task type from the model definitions.

        Args:
            cfg (str): model configuration file
            task (str | None): model task
            model (BaseModel): Customized model.
            verbose (bool): display model info on load
        """
        cfg_dict = yaml_model_load(cfg)
        self.cfg = cfg
        self.task = task or guess_model_task(cfg_dict)
        self.model = (model or self._smart_load('model'))(cfg_dict, verbose=verbose and RANK == -1)  # build model
        self.overrides['model'] = self.cfg
        self.overrides['task'] = self.task

        # Below added to allow export from YAMLs
        self.model.args = {**DEFAULT_CFG_DICT, **self.overrides}  # combine default and model args (prefer model args)
        self.model.task = self.task

    def _load(self, weights: str, task=None):
        """
        Initializes a new model and infers the task type from the model head.

        Args:
            weights (str): model checkpoint to be loaded
            task (str | None): model task
        """
        suffix = Path(weights).suffix
        if suffix == '.pt':
            self.model, self.ckpt = attempt_load_one_weight(weights)
            self.task = self.model.args['task']
            self.overrides = self.model.args = self._reset_ckpt_args(self.model.args)
            self.ckpt_path = self.model.pt_path
        else:
            weights = check_file(weights)
            self.model, self.ckpt = weights, None
            self.task = task or guess_model_task(weights)
            self.ckpt_path = weights
        self.overrides['model'] = weights
        self.overrides['task'] = self.task

    def _check_is_pytorch_model(self):
        """
        Raises TypeError if the model is not a PyTorch model.
        """
        pt_str = isinstance(self.model, (str, Path)) and Path(self.model).suffix == '.pt'
        pt_module = isinstance(self.model, nn.Module)
        if not (pt_module or pt_str):
            raise TypeError(
                f"model='{self.model}' should be a *.pt PyTorch model to run this method, but is a different format. "
                f"PyTorch models can train, val, predict and export, i.e. 'model.train(data=...)', but exported "
                f"formats like ONNX, TensorRT etc. only support 'predict' and 'val' modes, "
                f"i.e. 'yolo predict model=yolov8n.onnx'.\nTo run CUDA or MPS inference please pass the device "
                f"argument directly in your inference command, i.e. 'model.predict(source=..., device=0)'")

    def reset_weights(self):
        """
        Resets the model modules parameters to randomly initialized values, losing all training information.
        """
        self._check_is_pytorch_model()
        for m in self.model.modules():
            if hasattr(m, 'reset_parameters'):
                m.reset_parameters()
        for p in self.model.parameters():
            p.requires_grad = True
        return self

    def load(self, weights='yolov8n.pt'):
        """
        Transfers parameters with matching names and shapes from 'weights' to model.
        """
        self._check_is_pytorch_model()
        if isinstance(weights, (str, Path)):
            weights, self.ckpt = attempt_load_one_weight(weights)
        self.model.load(weights)
        return self

    def info(self, detailed=False, verbose=True):
        """
        Logs model info.

        Args:
            detailed (bool): Show detailed information about model.
            verbose (bool): Controls verbosity.
        """
        self._check_is_pytorch_model()
        return self.model.info(detailed=detailed, verbose=verbose)

    def fuse(self):
        """Fuse PyTorch Conv2d and BatchNorm2d layers."""
        self._check_is_pytorch_model()
        self.model.fuse()

    def predict(self, source=None, stream=False, predictor=None, **kwargs):
        """
        Perform prediction using the YOLO model.

        Args:
            source (str | int | PIL | np.ndarray): The source of the image to make predictions on.
                          Accepts all source types accepted by the YOLO model.
            stream (bool): Whether to stream the predictions or not. Defaults to False.
            predictor (BasePredictor): Customized predictor.
            **kwargs : Additional keyword arguments passed to the predictor.
                       Check the 'configuration' section in the documentation for all available options.

        Returns:
            (List[ultralytics.engine.results.Results]): The prediction results.
        """
        if source is None:
            source = ASSETS
            LOGGER.warning(f"WARNING ⚠️ 'source' is missing. Using 'source={source}'.")

        is_cli = (sys.argv[0].endswith('yolo') or sys.argv[0].endswith('ultralytics')) and any(
            x in sys.argv for x in ('predict', 'track', 'mode=predict', 'mode=track'))

        custom = {'conf': 0.25, 'save': is_cli}  # method defaults
        args = {**self.overrides, **custom, **kwargs, 'mode': 'predict'}  # highest priority args on the right
        prompts = args.pop('prompts', None)  # for SAM-type models

        if not self.predictor:
            self.predictor = (predictor or self._smart_load('predictor'))(overrides=args, _callbacks=self.callbacks)
            self.predictor.setup_model(model=self.model, verbose=is_cli)
        else:  # only update args if predictor is already setup
            self.predictor.args = get_cfg(self.predictor.args, args)
            if 'project' in args or 'name' in args:
                self.predictor.save_dir = get_save_dir(self.predictor.args)
        if prompts and hasattr(self.predictor, 'set_prompts'):  # for SAM-type models
            self.predictor.set_prompts(prompts)
        return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)

    def track(self, source=None, stream=False, persist=False, **kwargs):
        """
        Perform object tracking on the input source using the registered trackers.

        Args:
            source (str, optional): The input source for object tracking. Can be a file path or a video stream.
            stream (bool, optional): Whether the input source is a video stream. Defaults to False.
            persist (bool, optional): Whether to persist the trackers if they already exist. Defaults to False.
            **kwargs (optional): Additional keyword arguments for the tracking process.

        Returns:
            (List[ultralytics.engine.results.Results]): The tracking results.
        """
        if not hasattr(self.predictor, 'trackers'):
            from ultralytics.trackers import register_tracker
            register_tracker(self, persist)
        # ByteTrack-based method needs low confidence predictions as input
        kwargs['conf'] = kwargs.get('conf') or 0.1
        kwargs['mode'] = 'track'
        return self.predict(source=source, stream=stream, **kwargs)

    def val(self, validator=None, **kwargs):
        """
        Validate a model on a given dataset.

        Args:
            validator (BaseValidator): Customized validator.
            **kwargs : Any other args accepted by the validators. To see all args check 'configuration' section in docs
        """
        custom = {'rect': True}  # method defaults
        args = {**self.overrides, **custom, **kwargs, 'mode': 'val'}  # highest priority args on the right
        args['imgsz'] = check_imgsz(args['imgsz'], max_dim=1)

        validator = (validator or self._smart_load('validator'))(args=args, _callbacks=self.callbacks)
        validator(model=self.model)
        self.metrics = validator.metrics
        return validator.metrics

    def benchmark(self, **kwargs):
        """
        Benchmark a model on all export formats.

        Args:
            **kwargs : Any other args accepted by the validators. To see all args check 'configuration' section in docs
        """
        self._check_is_pytorch_model()
        from ultralytics.utils.benchmarks import benchmark

        custom = {'verbose': False}  # method defaults
        args = {**DEFAULT_CFG_DICT, **self.model.args, **custom, **kwargs, 'mode': 'benchmark'}
        return benchmark(
            model=self,
            data=kwargs.get('data'),  # if no 'data' argument passed set data=None for default datasets
            imgsz=args['imgsz'],
            half=args['half'],
            int8=args['int8'],
            device=args['device'],
            verbose=kwargs.get('verbose'))

    def export(self, **kwargs):
        """
        Export model.

        Args:
            **kwargs : Any other args accepted by the Exporter. To see all args check 'configuration' section in docs.
        """
        self._check_is_pytorch_model()
        from .exporter import Exporter

        custom = {'imgsz': self.model.args['imgsz'], 'batch': 1, 'data': None, 'verbose': False}  # method defaults
        args = {**self.overrides, **custom, **kwargs, 'mode': 'export'}  # highest priority args on the right
        return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)

    def train(self, trainer=None, **kwargs):
        """
        Trains the model on a given dataset.

        Args:
            trainer (BaseTrainer, optional): Customized trainer.
            **kwargs (Any): Any number of arguments representing the training configuration.
        """
        self._check_is_pytorch_model()
        if self.session:  # Ultralytics HUB session
            if any(kwargs):
                LOGGER.warning('WARNING ⚠️ using HUB training arguments, ignoring local training arguments.')
            kwargs = self.session.train_args
        check_pip_update_available()

        overrides = yaml_load(check_yaml(kwargs['cfg'])) if kwargs.get('cfg') else self.overrides
        custom = {'data': TASK2DATA[self.task]}  # method defaults
        args = {**overrides, **custom, **kwargs, 'mode': 'train'}  # highest priority args on the right
        if args.get('resume'):
            args['resume'] = self.ckpt_path

        self.trainer = (trainer or self._smart_load('trainer'))(overrides=args, _callbacks=self.callbacks)
        if not args.get('resume'):  # manually set model only if not resuming
            self.trainer.model = self.trainer.get_model(weights=self.model if self.ckpt else None, cfg=self.model.yaml)
            self.model = self.trainer.model
        self.trainer.hub_session = self.session  # attach optional HUB session
        self.trainer.train()
        # Update model and cfg after training
        if RANK in (-1, 0):
            ckpt = self.trainer.best if self.trainer.best.exists() else self.trainer.last
            self.model, _ = attempt_load_one_weight(ckpt)
            self.overrides = self.model.args
            self.metrics = getattr(self.trainer.validator, 'metrics', None)  # TODO: no metrics returned by DDP
        return self.metrics

    def tune(self, use_ray=False, iterations=10, *args, **kwargs):
        """
        Runs hyperparameter tuning, optionally using Ray Tune. See ultralytics.utils.tuner.run_ray_tune for Args.

        Returns:
            (dict): A dictionary containing the results of the hyperparameter search.
        """
        self._check_is_pytorch_model()
        if use_ray:
            from ultralytics.utils.tuner import run_ray_tune
            return run_ray_tune(self, max_samples=iterations, *args, **kwargs)
        else:
            from .tuner import Tuner

            custom = {}  # method defaults
            args = {**self.overrides, **custom, **kwargs, 'mode': 'train'}  # highest priority args on the right
            return Tuner(args=args, _callbacks=self.callbacks)(model=self, iterations=iterations)

    def _apply(self, fn):
        """Apply to(), cpu(), cuda(), half(), float() to model tensors that are not parameters or registered buffers."""
        self._check_is_pytorch_model()
        self = super()._apply(fn)  # noqa
        self.predictor = None  # reset predictor as device may have changed
        self.overrides['device'] = str(self.device)  # i.e. device(type='cuda', index=0) -> 'cuda:0'
        return self

    @property
    def names(self):
        """Returns class names of the loaded model."""
        return self.model.names if hasattr(self.model, 'names') else None

    @property
    def device(self):
        """Returns device if PyTorch model."""
        return next(self.model.parameters()).device if isinstance(self.model, nn.Module) else None

    @property
    def transforms(self):
        """Returns transform of the loaded model."""
        return self.model.transforms if hasattr(self.model, 'transforms') else None

    def add_callback(self, event: str, func):
        """Add a callback."""
        self.callbacks[event].append(func)

    def clear_callback(self, event: str):
        """Clear all event callbacks."""
        self.callbacks[event] = []

    def reset_callbacks(self):
        """Reset all registered callbacks."""
        for event in callbacks.default_callbacks.keys():
            self.callbacks[event] = [callbacks.default_callbacks[event][0]]

    @staticmethod
    def _reset_ckpt_args(args):
        """Reset arguments when loading a PyTorch model."""
        include = {'imgsz', 'data', 'task', 'single_cls'}  # only remember these arguments when loading a PyTorch model
        return {k: v for k, v in args.items() if k in include}

    # def __getattr__(self, attr):
    #    """Raises error if object has no requested attribute."""
    #    name = self.__class__.__name__
    #    raise AttributeError(f"'{name}' object has no attribute '{attr}'. See valid attributes below.\n{self.__doc__}")

    def _smart_load(self, key):
        """Load model/trainer/validator/predictor."""
        try:
            return self.task_map[self.task][key]
        except Exception as e:
            name = self.__class__.__name__
            mode = inspect.stack()[1][3]  # get the function name.
            raise NotImplementedError(
                emojis(f"WARNING ⚠️ '{name}' model does not support '{mode}' mode for '{self.task}' task yet.")) from e

    @property
    def task_map(self):
        """
        Map head to model, trainer, validator, and predictor classes.

        Returns:
            task_map (dict): The map of model task to mode classes.
        """
        raise NotImplementedError('Please provide task map for your model!')

device property

Returns device if PyTorch model.

names property

Returns class names of the loaded model.

task_map property

Map head to model, trainer, validator, and predictor classes.

Returns:

    task_map (dict): The map of model task to mode classes.
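Subclasses are expected to override `task_map` with a nested dict of task names to mode classes, which `_smart_load` then indexes by `self.task` and the requested key. A minimal stdlib-only sketch of this dispatch pattern (the class and predictor names below are illustrative placeholders, not the actual ultralytics classes):

```python
class DetectionPredictor:  # placeholder stand-in for a real predictor class
    pass


class MiniModel:
    """Toy sketch of the task_map / _smart_load dispatch in Model."""

    def __init__(self, task):
        self.task = task

    @property
    def task_map(self):
        # Subclasses of Model override this property; the base class raises.
        return {'detect': {'predictor': DetectionPredictor}}

    def _smart_load(self, key):
        try:
            return self.task_map[self.task][key]
        except KeyError as e:
            raise NotImplementedError(
                f"model does not support '{key}' for '{self.task}' task yet.") from e


m = MiniModel('detect')
print(m._smart_load('predictor').__name__)  # DetectionPredictor
```

An unknown task falls through to `NotImplementedError`, mirroring the behavior documented for `_smart_load` above.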

transforms property

Returns transform of the loaded model.

__call__(source=None, stream=False, **kwargs)

Calls the 'predict' function with given arguments to perform object detection.

Source code in ultralytics/engine/model.py
def __call__(self, source=None, stream=False, **kwargs):
    """Calls the 'predict' function with given arguments to perform object detection."""
    return self.predict(source, stream, **kwargs)

__init__(model='yolov8n.pt', task=None)

Initializes the YOLO model.

Parameters:

    model (str | Path, optional): Path or name of the model to load or create. Defaults to 'yolov8n.pt'.
    task (Any, optional): Task type for the YOLO model. Defaults to None.
Source code in ultralytics/engine/model.py
def __init__(self, model: Union[str, Path] = 'yolov8n.pt', task=None) -> None:
    """
    Initializes the YOLO model.

    Args:
        model (Union[str, Path], optional): Path or name of the model to load or create. Defaults to 'yolov8n.pt'.
        task (Any, optional): Task type for the YOLO model. Defaults to None.
    """
    super().__init__()
    self.callbacks = callbacks.get_default_callbacks()
    self.predictor = None  # reuse predictor
    self.model = None  # model object
    self.trainer = None  # trainer object
    self.ckpt = None  # if loaded from *.pt
    self.cfg = None  # if loaded from *.yaml
    self.ckpt_path = None
    self.overrides = {}  # overrides for trainer object
    self.metrics = None  # validation/training metrics
    self.session = None  # HUB session
    self.task = task  # task type
    model = str(model).strip()  # strip spaces

    # Check if Ultralytics HUB model from https://hub.ultralytics.com
    if self.is_hub_model(model):
        from ultralytics.hub.session import HUBTrainingSession
        self.session = HUBTrainingSession(model)
        model = self.session.model_file

    # Load or create new YOLO model
    suffix = Path(model).suffix
    if not suffix and Path(model).stem in GITHUB_ASSETS_STEMS:
        model, suffix = Path(model).with_suffix('.pt'), '.pt'  # add suffix, i.e. yolov8n -> yolov8n.pt
    if suffix in ('.yaml', '.yml'):
        self._new(model, task)
    else:
        self._load(model, task)
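The constructor's file resolution can be isolated as a small stand-alone sketch: a bare known stem gains a `.pt` suffix, a YAML suffix routes to `_new()`, and everything else routes to `_load()`. The `GITHUB_ASSETS_STEMS` set below is a tiny stand-in for the real constant in ultralytics:

```python
from pathlib import Path

# Stand-in for the real GITHUB_ASSETS_STEMS set from ultralytics.utils.downloads.
GITHUB_ASSETS_STEMS = {'yolov8n', 'yolov8s', 'yolov8m'}


def resolve(model: str) -> tuple:
    """Return (resolved_name, branch): 'new' for YAML configs, 'load' for
    weight files, mirroring the _new() vs _load() split in Model.__init__."""
    model = str(model).strip()  # strip spaces
    suffix = Path(model).suffix
    if not suffix and Path(model).stem in GITHUB_ASSETS_STEMS:
        # add suffix, i.e. yolov8n -> yolov8n.pt
        model, suffix = str(Path(model).with_suffix('.pt')), '.pt'
    return model, ('new' if suffix in ('.yaml', '.yml') else 'load')


print(resolve('yolov8n'))       # ('yolov8n.pt', 'load')
print(resolve('yolov8n.yaml'))  # ('yolov8n.yaml', 'new')
```

Note that any unrecognized suffix (e.g. `.onnx`) also takes the `_load()` branch, where format-specific handling happens later.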

add_callback(event, func)

Add a callback.

Source code in ultralytics/engine/model.py
def add_callback(self, event: str, func):
    """Add a callback."""
    self.callbacks[event].append(func)
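The registry behind `add_callback` / `clear_callback` is essentially a mapping of event names to lists of callables fired in order. A toy sketch of that pattern (the event and function names here are illustrative, not the actual ultralytics callback events):

```python
from collections import defaultdict

# Each event name maps to a list of callables the engine fires in order.
callbacks = defaultdict(list)


def on_predict_start(predictor):
    return f'starting: {predictor}'


callbacks['on_predict_start'].append(on_predict_start)  # add_callback
results = [fn('p') for fn in callbacks['on_predict_start']]
print(results)  # ['starting: p']
callbacks['on_predict_start'] = []  # clear_callback
```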

benchmark(**kwargs)

Benchmark a model on all export formats.

Parameters:

    **kwargs: Any other args accepted by the validators. See the 'configuration' section in the docs for all available options.
Source code in ultralytics/engine/model.py
def benchmark(self, **kwargs):
    """
    Benchmark a model on all export formats.

    Args:
        **kwargs : Any other args accepted by the validators. To see all args check 'configuration' section in docs
    """
    self._check_is_pytorch_model()
    from ultralytics.utils.benchmarks import benchmark

    custom = {'verbose': False}  # method defaults
    args = {**DEFAULT_CFG_DICT, **self.model.args, **custom, **kwargs, 'mode': 'benchmark'}
    return benchmark(
        model=self,
        data=kwargs.get('data'),  # if no 'data' argument passed set data=None for default datasets
        imgsz=args['imgsz'],
        half=args['half'],
        int8=args['int8'],
        device=args['device'],
        verbose=kwargs.get('verbose'))

clear_callback(event)

Clear all event callbacks.

Source code in ultralytics/engine/model.py
def clear_callback(self, event: str):
    """Clear all event callbacks."""
    self.callbacks[event] = []

export(**kwargs)

Export model.

Parameters:

    **kwargs: Any other args accepted by the Exporter. See the 'configuration' section in the docs for all available options.
Source code in ultralytics/engine/model.py
def export(self, **kwargs):
    """
    Export model.

    Args:
        **kwargs : Any other args accepted by the Exporter. To see all args check 'configuration' section in docs.
    """
    self._check_is_pytorch_model()
    from .exporter import Exporter

    custom = {'imgsz': self.model.args['imgsz'], 'batch': 1, 'data': None, 'verbose': False}  # method defaults
    args = {**self.overrides, **custom, **kwargs, 'mode': 'export'}  # highest priority args on the right
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)

fuse()

Fuse PyTorch Conv2d and BatchNorm2d layers.

Source code in ultralytics/engine/model.py
def fuse(self):
    """Fuse PyTorch Conv2d and BatchNorm2d layers."""
    self._check_is_pytorch_model()
    self.model.fuse()

info(detailed=False, verbose=True)

Logs model info.

Parameters:

    detailed (bool): Show detailed information about the model. Defaults to False.
    verbose (bool): Controls verbosity. Defaults to True.
Source code in ultralytics/engine/model.py
def info(self, detailed=False, verbose=True):
    """
    Logs model info.

    Args:
        detailed (bool): Show detailed information about model.
        verbose (bool): Controls verbosity.
    """
    self._check_is_pytorch_model()
    return self.model.info(detailed=detailed, verbose=verbose)

is_hub_model(model) staticmethod

Check if the provided model is a HUB model.

Source code in ultralytics/engine/model.py
@staticmethod
def is_hub_model(model):
    """Check if the provided model is a HUB model."""
    return any((
        model.startswith(f'{HUB_WEB_ROOT}/models/'),  # i.e. https://hub.ultralytics.com/models/MODEL_ID
        [len(x) for x in model.split('_')] == [42, 20],  # APIKEY_MODELID
        len(model) == 20 and not Path(model).exists() and all(x not in model for x in './\\')))  # MODELID
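The three HUB-identifier checks can be exercised standalone. The sketch below copies the heuristic with an assumed value for the `HUB_WEB_ROOT` constant (the real value lives in ultralytics' settings):

```python
from pathlib import Path

HUB_WEB_ROOT = 'https://hub.ultralytics.com'  # assumed value of the real constant


def is_hub_model(model: str) -> bool:
    """Standalone copy of the three HUB checks: a full HUB model URL,
    an APIKEY_MODELID pair (42 + 20 chars), or a bare 20-char model ID."""
    return any((
        model.startswith(f'{HUB_WEB_ROOT}/models/'),       # HUB URL
        [len(x) for x in model.split('_')] == [42, 20],     # APIKEY_MODELID
        len(model) == 20 and not Path(model).exists() and
        all(x not in model for x in './\\')))               # bare MODELID


print(is_hub_model(f'{HUB_WEB_ROOT}/models/abc'))  # True
print(is_hub_model('a' * 42 + '_' + 'b' * 20))     # True
print(is_hub_model('yolov8n.pt'))                  # False
```

The third check deliberately excludes strings that look like paths (containing `.`, `/`, or `\`) or that exist on disk, so a 20-character local filename is not mistaken for a model ID.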

load(weights='yolov8n.pt')

Transfers parameters with matching names and shapes from 'weights' to model.

Source code in ultralytics/engine/model.py
def load(self, weights='yolov8n.pt'):
    """
    Transfers parameters with matching names and shapes from 'weights' to model.
    """
    self._check_is_pytorch_model()
    if isinstance(weights, (str, Path)):
        weights, self.ckpt = attempt_load_one_weight(weights)
    self.model.load(weights)
    return self

predict(source=None, stream=False, predictor=None, **kwargs)

Perform prediction using the YOLO model.

Parameters:

    source (str | int | PIL | np.ndarray, optional): The source of the image to make predictions on. Accepts all source types accepted by the YOLO model. Defaults to None.
    stream (bool): Whether to stream the predictions or not. Defaults to False.
    predictor (BasePredictor, optional): Customized predictor. Defaults to None.
    **kwargs: Additional keyword arguments passed to the predictor. Check the 'configuration' section in the documentation for all available options.

Returns:

    (List[Results]): The prediction results.

Source code in ultralytics/engine/model.py
def predict(self, source=None, stream=False, predictor=None, **kwargs):
    """
    Perform prediction using the YOLO model.

    Args:
        source (str | int | PIL | np.ndarray): The source of the image to make predictions on.
                      Accepts all source types accepted by the YOLO model.
        stream (bool): Whether to stream the predictions or not. Defaults to False.
        predictor (BasePredictor): Customized predictor.
        **kwargs : Additional keyword arguments passed to the predictor.
                   Check the 'configuration' section in the documentation for all available options.

    Returns:
        (List[ultralytics.engine.results.Results]): The prediction results.
    """
    if source is None:
        source = ASSETS
        LOGGER.warning(f"WARNING ⚠️ 'source' is missing. Using 'source={source}'.")

    is_cli = (sys.argv[0].endswith('yolo') or sys.argv[0].endswith('ultralytics')) and any(
        x in sys.argv for x in ('predict', 'track', 'mode=predict', 'mode=track'))

    custom = {'conf': 0.25, 'save': is_cli}  # method defaults
    args = {**self.overrides, **custom, **kwargs, 'mode': 'predict'}  # highest priority args on the right
    prompts = args.pop('prompts', None)  # for SAM-type models

    if not self.predictor:
        self.predictor = (predictor or self._smart_load('predictor'))(overrides=args, _callbacks=self.callbacks)
        self.predictor.setup_model(model=self.model, verbose=is_cli)
    else:  # only update args if predictor is already setup
        self.predictor.args = get_cfg(self.predictor.args, args)
        if 'project' in args or 'name' in args:
            self.predictor.save_dir = get_save_dir(self.predictor.args)
    if prompts and hasattr(self.predictor, 'set_prompts'):  # for SAM-type models
        self.predictor.set_prompts(prompts)
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
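The `args = {**self.overrides, **custom, **kwargs, 'mode': 'predict'}` merge is the precedence rule used throughout `Model` (also in `val`, `export`, and `train`): later dicts win, so user kwargs override the method defaults, which override the stored overrides, and `'mode'` is always forced last. The values below are illustrative:

```python
# Later dicts win in a ** merge, so precedence rises left to right.
overrides = {'conf': 0.5, 'imgsz': 640}  # e.g. restored from a checkpoint
custom = {'conf': 0.25, 'save': False}   # predict() method defaults
kwargs = {'conf': 0.9}                   # user-supplied call arguments

args = {**overrides, **custom, **kwargs, 'mode': 'predict'}
print(args)  # {'conf': 0.9, 'imgsz': 640, 'save': False, 'mode': 'predict'}
```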

reset_callbacks()

Reset all registered callbacks.

Source code in ultralytics/engine/model.py
def reset_callbacks(self):
    """Reset all registered callbacks."""
    for event in callbacks.default_callbacks.keys():
        self.callbacks[event] = [callbacks.default_callbacks[event][0]]

reset_weights()

Resets the model modules parameters to randomly initialized values, losing all training information.

Source code in ultralytics/engine/model.py
def reset_weights(self):
    """
    Resets the model modules parameters to randomly initialized values, losing all training information.
    """
    self._check_is_pytorch_model()
    for m in self.model.modules():
        if hasattr(m, 'reset_parameters'):
            m.reset_parameters()
    for p in self.model.parameters():
        p.requires_grad = True
    return self

track(source=None, stream=False, persist=False, **kwargs)

Perform object tracking on the input source using the registered trackers.

Parameters:

    source (str, optional): The input source for object tracking. Can be a file path or a video stream. Defaults to None.
    stream (bool, optional): Whether the input source is a video stream. Defaults to False.
    persist (bool, optional): Whether to persist the trackers if they already exist. Defaults to False.
    **kwargs (optional): Additional keyword arguments for the tracking process.

Returns:

Type Description
List[Results]

The tracking results.

Source code in ultralytics/engine/model.py
def track(self, source=None, stream=False, persist=False, **kwargs):
    """
    Perform object tracking on the input source using the registered trackers.

    Args:
        source (str, optional): The input source for object tracking. Can be a file path or a video stream.
        stream (bool, optional): Whether the input source is a video stream. Defaults to False.
        persist (bool, optional): Whether to persist the trackers if they already exist. Defaults to False.
        **kwargs (optional): Additional keyword arguments for the tracking process.

    Returns:
        (List[ultralytics.engine.results.Results]): The tracking results.
    """
    if not hasattr(self.predictor, 'trackers'):
        from ultralytics.trackers import register_tracker
        register_tracker(self, persist)
    # ByteTrack-based method needs low confidence predictions as input
    kwargs['conf'] = kwargs.get('conf') or 0.1
    kwargs['mode'] = 'track'
    return self.predict(source=source, stream=stream, **kwargs)
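The `kwargs.get('conf') or 0.1` line ensures the tracker receives low-confidence detections: the `or` fallback treats a missing key, an explicit `None`, and a zero value all as "unset". A minimal sketch:

```python
# conf fallback used by track(): any falsy value falls back to 0.1.
kwargs = {}                               # conf not given
conf_default = kwargs.get('conf') or 0.1  # -> 0.1

kwargs = {'conf': 0.6}                    # conf given by the user
conf_user = kwargs.get('conf') or 0.1     # -> 0.6
```

Note that because `or` checks truthiness, an explicit `conf=0` would also fall back to 0.1.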

train(trainer=None, **kwargs)

Trains the model on a given dataset.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `trainer` | `BaseTrainer` | Customized trainer. | `None` |
| `**kwargs` | `Any` | Any number of arguments representing the training configuration. | `{}` |
Source code in ultralytics/engine/model.py
def train(self, trainer=None, **kwargs):
    """
    Trains the model on a given dataset.

    Args:
        trainer (BaseTrainer, optional): Customized trainer.
        **kwargs (Any): Any number of arguments representing the training configuration.
    """
    self._check_is_pytorch_model()
    if self.session:  # Ultralytics HUB session
        if any(kwargs):
            LOGGER.warning('WARNING ⚠️ using HUB training arguments, ignoring local training arguments.')
        kwargs = self.session.train_args
    check_pip_update_available()

    overrides = yaml_load(check_yaml(kwargs['cfg'])) if kwargs.get('cfg') else self.overrides
    custom = {'data': TASK2DATA[self.task]}  # method defaults
    args = {**overrides, **custom, **kwargs, 'mode': 'train'}  # highest priority args on the right
    if args.get('resume'):
        args['resume'] = self.ckpt_path

    self.trainer = (trainer or self._smart_load('trainer'))(overrides=args, _callbacks=self.callbacks)
    if not args.get('resume'):  # manually set model only if not resuming
        self.trainer.model = self.trainer.get_model(weights=self.model if self.ckpt else None, cfg=self.model.yaml)
        self.model = self.trainer.model
    self.trainer.hub_session = self.session  # attach optional HUB session
    self.trainer.train()
    # Update model and cfg after training
    if RANK in (-1, 0):
        ckpt = self.trainer.best if self.trainer.best.exists() else self.trainer.last
        self.model, _ = attempt_load_one_weight(ckpt)
        self.overrides = self.model.args
        self.metrics = getattr(self.trainer.validator, 'metrics', None)  # TODO: no metrics returned by DDP
    return self.metrics
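Two details of the config handling above are worth isolating: the same "later entries win" merge as in `predict()`, and the substitution of the boolean `resume` flag with the actual checkpoint path. A sketch with hypothetical values:

```python
# Config layering in train(): overrides < method defaults < user kwargs < fixed mode.
overrides = {'imgsz': 640}              # model-level overrides (hypothetical)
custom = {'data': 'coco8.yaml'}         # TASK2DATA default for the task (hypothetical value)
kwargs = {'epochs': 3, 'resume': True}  # user call arguments
args = {**overrides, **custom, **kwargs, 'mode': 'train'}

ckpt_path = 'last.pt'                   # hypothetical checkpoint path (self.ckpt_path)
if args.get('resume'):
    args['resume'] = ckpt_path          # truthy flag is replaced by the checkpoint path
```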

tune(use_ray=False, iterations=10, *args, **kwargs)

Runs hyperparameter tuning, optionally using Ray Tune. See ultralytics.utils.tuner.run_ray_tune for Args.

Returns:

| Type | Description |
| --- | --- |
| `dict` | A dictionary containing the results of the hyperparameter search. |

Source code in ultralytics/engine/model.py
def tune(self, use_ray=False, iterations=10, *args, **kwargs):
    """
    Runs hyperparameter tuning, optionally using Ray Tune. See ultralytics.utils.tuner.run_ray_tune for Args.

    Returns:
        (dict): A dictionary containing the results of the hyperparameter search.
    """
    self._check_is_pytorch_model()
    if use_ray:
        from ultralytics.utils.tuner import run_ray_tune
        return run_ray_tune(self, max_samples=iterations, *args, **kwargs)
    else:
        from .tuner import Tuner

        custom = {}  # method defaults
        args = {**self.overrides, **custom, **kwargs, 'mode': 'train'}  # highest priority args on the right
        return Tuner(args=args, _callbacks=self.callbacks)(model=self, iterations=iterations)
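Both backends sample candidate configurations and keep the one with the best fitness. The following is an illustrative random-search loop in that spirit, not the actual `Tuner` or Ray Tune logic; the `fitness` objective and learning-rate bounds are hypothetical:

```python
import random

def fitness(lr):
    """Hypothetical objective that peaks at lr = 0.01."""
    return -(lr - 0.01) ** 2

random.seed(0)  # reproducible sampling
candidates = [random.uniform(1e-4, 1e-1) for _ in range(10)]
best_lr = max(candidates, key=fitness)
```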

val(validator=None, **kwargs)

Validate a model on a given dataset.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `validator` | `BaseValidator` | Customized validator. | `None` |
| `**kwargs` | `Any` | Any other arguments accepted by the validators; see the Configuration section of the docs for the full list. | `{}` |
Source code in ultralytics/engine/model.py
def val(self, validator=None, **kwargs):
    """
    Validate a model on a given dataset.

    Args:
        validator (BaseValidator): Customized validator.
        **kwargs (Any): Any other args accepted by the validators. To see all args check the 'Configuration' section in the docs.
    """
    custom = {'rect': True}  # method defaults
    args = {**self.overrides, **custom, **kwargs, 'mode': 'val'}  # highest priority args on the right
    args['imgsz'] = check_imgsz(args['imgsz'], max_dim=1)

    validator = (validator or self._smart_load('validator'))(args=args, _callbacks=self.callbacks)
    validator(model=self.model)
    self.metrics = validator.metrics
    return validator.metrics
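The `check_imgsz(..., max_dim=1)` call collapses the image size to a single integer and snaps it to a valid multiple of the model stride. A simplified stand-in for that behavior (not the library function itself; the real `check_imgsz` also warns on adjustment):

```python
def check_imgsz_sketch(imgsz, stride=32):
    """Collapse imgsz to one int and round it to the nearest stride multiple."""
    if not isinstance(imgsz, int):
        imgsz = max(imgsz)  # max_dim=1: keep the largest dimension only
    return max(round(imgsz / stride) * stride, stride)  # never below one stride
```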




Created 2023-07-16, Updated 2023-08-07
Authors: glenn-jocher (5), Laughing-q (2)