์ฝ˜ํ…์ธ ๋กœ ๊ฑด๋„ˆ๋›ฐ๊ธฐ

Machine Learning Best Practices and Tips for Model Training

Introduction

One of the most important steps when working on a computer vision project is model training. Before reaching this step, you need to define your goals and collect and annotate your data. After preprocessing the data to make sure it is clean and consistent, you can move on to training your model.



Watch: Model Training Tips | How to Handle Large Datasets | Batch Size, GPU Utilization and [Mixed Precision](https://www.ultralytics.com/glossary/mixed-precision)

So, what is model training? Model training is the process of teaching your model to recognize visual patterns and make predictions based on your data. It directly impacts the performance and accuracy of your application. In this guide, we'll cover best practices, optimization techniques, and troubleshooting tips to help you train your computer vision models effectively.

How to Train a Machine Learning Model

์ปดํ“จํ„ฐ ๋น„์ „ ๋ชจ๋ธ์€ ์˜ค๋ฅ˜๋ฅผ ์ตœ์†Œํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋‚ด๋ถ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์กฐ์ •ํ•˜์—ฌ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ฒ˜์Œ์—๋Š” ๋ชจ๋ธ์— ๋ผ๋ฒจ์ด ์ง€์ •๋œ ๋Œ€๊ทœ๋ชจ ์ด๋ฏธ์ง€ ์„ธํŠธ๊ฐ€ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ์ด ์ด๋ฏธ์ง€์— ๋ฌด์—‡์ด ์žˆ๋Š”์ง€ ์˜ˆ์ธกํ•˜๊ณ  ์˜ˆ์ธก์„ ์‹ค์ œ ๋ผ๋ฒจ์ด๋‚˜ ๋‚ด์šฉ๊ณผ ๋น„๊ตํ•˜์—ฌ ์˜ค์ฐจ๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์˜ค์ฐจ๋Š” ๋ชจ๋ธ์˜ ์˜ˆ์ธก์ด ์‹ค์ œ ๊ฐ’๊ณผ ์–ผ๋งˆ๋‚˜ ์ฐจ์ด๊ฐ€ ๋‚˜๋Š”์ง€๋ฅผ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค.

During training, the model iteratively makes predictions, calculates errors, and updates its parameters through a process called backpropagation. In this process, the model adjusts its internal parameters (weights and biases) to reduce the errors. By repeating this cycle many times, the model gradually improves its accuracy. Over time, it learns to recognize complex patterns such as shapes, colors, and textures.

What is Backpropagation?

This learning process makes it possible for the computer vision model to perform various tasks, including object detection, instance segmentation, and image classification. The ultimate goal is to create a model that can generalize its learning to new, unseen images so that it can accurately understand visual data in real-world applications.

์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ๋•Œ ๋’ค์—์„œ ์–ด๋–ค ์ผ์ด ์ผ์–ด๋‚˜๋Š”์ง€ ์•Œ์•˜์œผ๋‹ˆ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ๋•Œ ๊ณ ๋ คํ•ด์•ผ ํ•  ์‚ฌํ•ญ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ๊ต์œก

๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋ ค๋Š” ๊ฒฝ์šฐ ๊ณ ๋ คํ•ด์•ผ ํ•  ๋ช‡ ๊ฐ€์ง€ ์ธก๋ฉด์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•˜๊ณ , GPU ํ™œ์šฉ๋„๋ฅผ ์ œ์–ดํ•˜๊ณ , ๋ฉ€ํ‹ฐ์Šค์ผ€์ผ ํ•™์Šต์„ ์‚ฌ์šฉํ•˜๋„๋ก ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ ์˜ต์…˜์„ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

๋ฐฐ์น˜ ํฌ๊ธฐ ๋ฐ GPU ํ™œ์šฉ๋„

๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ๋ชจ๋ธ์„ ํ•™์Šตํ•  ๋•Œ๋Š” GPU ์„ ํšจ์œจ์ ์œผ๋กœ ํ™œ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋ฐฐ์น˜ ํฌ๊ธฐ๋Š” ์ค‘์š”ํ•œ ์š”์†Œ์ž…๋‹ˆ๋‹ค. ๋ฐฐ์น˜ ํฌ๊ธฐ๋Š” ๋จธ์‹ ๋Ÿฌ๋‹ ๋ชจ๋ธ์ด ํ•œ ๋ฒˆ์˜ ํ›ˆ๋ จ ๋ฐ˜๋ณต์—์„œ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐ์ดํ„ฐ ์ƒ˜ํ”Œ์˜ ์ˆ˜์ž…๋‹ˆ๋‹ค. GPU ์—์„œ ์ง€์›ํ•˜๋Š” ์ตœ๋Œ€ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๊ธฐ๋Šฅ์„ ์ตœ๋Œ€ํ•œ ํ™œ์šฉํ•˜๊ณ  ๋ชจ๋ธ ํ•™์Šต์— ๊ฑธ๋ฆฌ๋Š” ์‹œ๊ฐ„์„ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ถ€์กฑํ•ด์ง€์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”๋ชจ๋ฆฌ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด ๋ชจ๋ธ์ด ์›ํ™œํ•˜๊ฒŒ ํ•™์Šตํ•  ๋•Œ๊นŒ์ง€ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ์ ์ง„์ ์œผ๋กœ ์ค„์ด์„ธ์š”.

With respect to YOLO11, you can set the batch_size parameter in the training configuration to match your GPU capacity. Also, setting batch=-1 in your training script will automatically determine the batch size that can be efficiently processed based on your device's capabilities. By fine-tuning the batch size, you can make the most of your GPU resources and improve the overall training process.
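As an illustration, assuming the ultralytics Python package is installed and using the small coco8.yaml demo dataset, automatic batch sizing might look like this:

```python
from ultralytics import YOLO

# Load a YOLO11 model (yolo11n.pt is assumed to be available or downloadable)
model = YOLO("yolo11n.pt")

# batch=-1 asks the trainer to pick the largest batch size that fits in GPU memory
model.train(data="coco8.yaml", epochs=100, batch=-1)
```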

ํ•˜์œ„ ์ง‘ํ•ฉ ๊ต์œก

ํ•˜์œ„ ์ง‘ํ•ฉ ํ•™์Šต์€ ๋” ํฐ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ๋‚˜ํƒ€๋‚ด๋Š” ์ž‘์€ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์œผ๋กœ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋Š” ํ˜„๋ช…ํ•œ ์ „๋žต์ž…๋‹ˆ๋‹ค. ํŠนํžˆ ์ดˆ๊ธฐ ๋ชจ๋ธ ๊ฐœ๋ฐœ ๋ฐ ํ…Œ์ŠคํŠธ ์ค‘์— ์‹œ๊ฐ„๊ณผ ๋ฆฌ์†Œ์Šค๋ฅผ ์ ˆ์•ฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‹œ๊ฐ„์ด ๋ถ€์กฑํ•˜๊ฑฐ๋‚˜ ๋‹ค์–‘ํ•œ ๋ชจ๋ธ ๊ตฌ์„ฑ์„ ์‹คํ—˜ํ•˜๋Š” ๊ฒฝ์šฐ ํ•˜์œ„ ์ง‘ํ•ฉ ํ•™์Šต์ด ์ข‹์€ ์˜ต์…˜์ž…๋‹ˆ๋‹ค.

When it comes to YOLO11, you can easily implement subset training by using the fraction parameter. This parameter lets you specify what fraction of your dataset to use for training. For example, setting fraction=0.1 trains the model on 10% of the data. You can use this technique for quick iterations and model tuning before committing to training with a full dataset. Subset training helps you make rapid progress and identify potential issues early.
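A quick sketch of subset training, again assuming the ultralytics package and the coco8.yaml demo dataset:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# fraction=0.1 trains on 10% of the dataset; useful for fast experiments
# before committing to a full training run
model.train(data="coco8.yaml", epochs=50, fraction=0.1)
```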

Multiscale Training

๋ฉ€ํ‹ฐ์Šค์ผ€์ผ ํŠธ๋ ˆ์ด๋‹์€ ๋‹ค์–‘ํ•œ ํฌ๊ธฐ์˜ ์ด๋ฏธ์ง€๋กœ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜์—ฌ ์ผ๋ฐ˜ํ™” ๋Šฅ๋ ฅ์„ ํ–ฅ์ƒ์‹œํ‚ค๋Š” ๊ธฐ์ˆ ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ๋‹ค์–‘ํ•œ ๊ทœ๋ชจ์™€ ๊ฑฐ๋ฆฌ์—์„œ ๋ฌผ์ฒด๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์—ฌ ๋”์šฑ ๊ฐ•๋ ฅํ•ด์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

For example, when you train YOLO11, you can enable multiscale training by setting the scale parameter. This parameter adjusts the size of training images by a specified factor, simulating objects at different distances. For example, scale=0.5 reduces the image size by half, while scale=2.0 doubles it. Configuring this parameter allows the model to experience a variety of image scales and improve its detection capabilities across different object sizes and scenarios.

์บ์‹ฑ

์บ์‹ฑ์€ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ชจ๋ธ ํ•™์Šต์˜ ํšจ์œจ์„ฑ์„ ํ–ฅ์ƒ์‹œํ‚ค๋Š” ์ค‘์š”ํ•œ ๊ธฐ์ˆ ์ž…๋‹ˆ๋‹ค. ์บ์‹ฑ์€ ์‚ฌ์ „ ์ฒ˜๋ฆฌ๋œ ์ด๋ฏธ์ง€๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ์ €์žฅํ•จ์œผ๋กœ์จ GPU ์—์„œ ๋””์Šคํฌ์—์„œ ๋ฐ์ดํ„ฐ๊ฐ€ ๋กœ๋“œ๋˜๊ธฐ๋ฅผ ๊ธฐ๋‹ค๋ฆฌ๋Š” ์‹œ๊ฐ„์„ ์ค„์—ฌ์ค๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ๋””์Šคํฌ I/O ์ž‘์—…์œผ๋กœ ์ธํ•œ ์ง€์—ฐ ์—†์ด ์ง€์†์ ์œผ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์ˆ˜์‹ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

Caching can be controlled when training YOLO11 using the cache parameter:

  • cache=True: Stores dataset images in RAM, providing the fastest access speed but at the cost of increased memory usage.
  • cache='disk': Stores the images on disk; slower than RAM but faster than loading fresh data each time.
  • cache=False: Disables caching entirely, relying only on disk I/O, which is the slowest option.
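For example, a training run with RAM caching enabled might be configured like this (ultralytics package and coco8.yaml demo dataset assumed):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# cache=True loads the dataset into RAM once; switch to cache="disk" or
# cache=False if RAM is limited
model.train(data="coco8.yaml", epochs=100, cache=True)
```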

ํ˜ผํ•ฉ ์ •๋ฐ€ ๊ต์œก

Mixed precision training uses both 16-bit (FP16) and 32-bit (FP32) floating-point types. The strengths of both FP16 and FP32 are leveraged by using FP16 for faster computation and FP32 to maintain precision where needed. Most of the neural network's operations are done in FP16 to benefit from faster computation and lower memory usage. However, a master copy of the model's weights is kept in FP32 to ensure accuracy during the weight update steps. You can handle larger models or larger batch sizes within the same hardware constraints.

ํ˜ผํ•ฉ ์ •๋ฐ€ ๊ต์œก ๊ฐœ์š”

To implement mixed precision training, you'll need to modify your training scripts and ensure your hardware (like GPUs) supports it. Many modern deep learning frameworks, such as TensorFlow, offer built-in support for mixed precision.

Mixed precision training is straightforward when working with YOLO11. You can use the amp flag in your training configuration. Setting amp=True enables Automatic Mixed Precision (AMP) training. Mixed precision training is a simple yet effective way to optimize your model training process.
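A minimal sketch of enabling AMP (ultralytics package and coco8.yaml demo dataset assumed):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# amp=True runs most computation in FP16 while keeping FP32 master weights
model.train(data="coco8.yaml", epochs=100, amp=True)
```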

์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜

Using pretrained weights is a smart way to speed up your model's training process. Pretrained weights come from models already trained on large datasets, giving your model a head start. Transfer learning adapts pretrained models to new, related tasks. Fine-tuning a pre-trained model involves starting with these weights and then continuing training on your specific dataset. This method of training results in faster training times and often better performance because the model starts with a solid understanding of basic features.

Also, the pretrained parameter makes transfer learning easy with YOLO11. Setting pretrained=True uses the default pretrained weights, or you can specify a path to a custom pretrained model. Using pretrained weights and transfer learning effectively boosts your model's capabilities and reduces training costs.

๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์ฒ˜๋ฆฌํ•  ๋•Œ ๊ณ ๋ คํ•ด์•ผ ํ•  ๊ธฐํƒ€ ๊ธฐ์ˆ 

๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์ฒ˜๋ฆฌํ•  ๋•Œ ๊ณ ๋ คํ•ด์•ผ ํ•  ๋ช‡ ๊ฐ€์ง€ ๋‹ค๋ฅธ ๊ธฐ์ˆ ์ด ์žˆ์Šต๋‹ˆ๋‹ค:

  • Learning Rate Schedulers: Implementing learning rate schedulers dynamically adjusts the learning rate during training. A well-tuned learning rate can prevent the model from overshooting minima and improve stability. When training YOLO11, the lrf parameter helps manage learning rate scheduling by setting the final learning rate as a fraction of the initial rate.
  • Distributed Training: For handling large datasets, distributed training can be a game-changer. You can reduce the training time by spreading the training workload across multiple GPUs or machines.
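To make the lrf idea concrete, here is a small plain-Python sketch of one common schedule: a linear decay from an initial rate lr0 down to lr0 * lrf over the course of training. The exact schedule a trainer applies depends on its settings, so treat this as an illustrative assumption; the names lr0 and lrf are borrowed from the training arguments.

```python
def linear_lr(epoch: int, epochs: int, lr0: float = 0.01, lrf: float = 0.01) -> float:
    """Linearly decay the learning rate from lr0 at epoch 0 to lr0 * lrf at the last epoch."""
    frac = epoch / max(epochs - 1, 1)  # training progress, 0.0 -> 1.0
    return lr0 * ((1.0 - frac) + frac * lrf)

# With the defaults above, a 300-epoch run starts at 0.01
# and finishes at 0.01 * 0.01 = 1e-4
for epoch in (0, 150, 299):
    print(f"epoch {epoch:3d}: lr = {linear_lr(epoch, 300):.6f}")
```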

The Number of Epochs to Train For

๋ชจ๋ธ์„ ํ•™์Šตํ•  ๋•Œ ์—ํฌํฌ๋Š” ์ „์ฒด ํ•™์Šต ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ•œ ๋ฒˆ ์™„์ „ํžˆ ํ†ต๊ณผํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์—ํฌํฌ ๋™์•ˆ ๋ชจ๋ธ์€ ํ•™์Šต ์„ธํŠธ์˜ ๊ฐ ์˜ˆ์‹œ๋ฅผ ํ•œ ๋ฒˆ ์ฒ˜๋ฆฌํ•˜๊ณ  ํ•™์Šต ์•Œ๊ณ ๋ฆฌ์ฆ˜์— ๋”ฐ๋ผ ๋งค๊ฐœ ๋ณ€์ˆ˜๋ฅผ ์—…๋ฐ์ดํŠธํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์ด ์‹œ๊ฐ„์ด ์ง€๋‚จ์— ๋”ฐ๋ผ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํ•™์Šตํ•˜๊ณ  ๊ฐœ์„ ํ•˜๋ ค๋ฉด ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์—ํฌํฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

A common question that comes up is how to determine the number of epochs to train the model for. A good starting point is 300 epochs. If the model overfits early, you can reduce the number of epochs. If overfitting does not occur after 300 epochs, you can extend the training to 600, 1200, or more epochs.

However, the ideal number of epochs can vary based on your dataset's size and project goals. Larger datasets might require more epochs for the model to learn effectively, while smaller datasets might need fewer epochs to avoid overfitting. With respect to YOLO11, you can set the epochs parameter in your training script.

Early Stopping

Early stopping is a valuable technique for optimizing model training. By monitoring validation performance, you can halt training once the model stops improving. You can save computational resources and prevent overfitting.

The process involves setting a patience parameter that determines how many epochs to wait for an improvement in validation metrics before stopping training. If the model's performance does not improve within these epochs, training is stopped to avoid wasting time and resources.
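As a plain-Python sketch (not the actual trainer implementation), the patience logic amounts to a small tracker that counts epochs without improvement and signals when to stop:

```python
class EarlyStopping:
    """Stop training when a validation metric hasn't improved for `patience` epochs.

    A minimal sketch of the idea behind the patience parameter; real trainers
    also save the best checkpoint and may require a minimum improvement delta.
    """

    def __init__(self, patience: int = 5):
        self.patience = patience
        self.best = float("-inf")  # assumes a higher metric (e.g. mAP) is better
        self.bad_epochs = 0

    def step(self, metric: float) -> bool:
        """Record this epoch's metric; return True when training should stop."""
        if metric > self.best:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Example: the metric plateaus after epoch 3, so training stops 5 epochs later
stopper = EarlyStopping(patience=5)
for epoch, map50 in enumerate([0.40, 0.52, 0.58, 0.61, 0.61, 0.60, 0.61, 0.60, 0.59]):
    if stopper.step(map50):
        print(f"early stop at epoch {epoch}")  # prints: early stop at epoch 8
        break
```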

Overview of Early Stopping

For YOLO11, you can enable early stopping by setting the patience parameter in your training configuration. For example, patience=5 means training will stop if there's no improvement in validation metrics for 5 consecutive epochs. Using this method ensures the training process remains efficient and achieves optimal performance without excessive computation.

ํด๋ผ์šฐ๋“œ ๊ต์œก๊ณผ ๋กœ์ปฌ ๊ต์œก ์ค‘ ์„ ํƒ

๋ชจ๋ธ ํŠธ๋ ˆ์ด๋‹์—๋Š” ํด๋ผ์šฐ๋“œ ํŠธ๋ ˆ์ด๋‹๊ณผ ๋กœ์ปฌ ํŠธ๋ ˆ์ด๋‹์˜ ๋‘ ๊ฐ€์ง€ ์˜ต์…˜์ด ์žˆ์Šต๋‹ˆ๋‹ค.

ํด๋ผ์šฐ๋“œ ํŠธ๋ ˆ์ด๋‹์€ ํ™•์žฅ์„ฑ๊ณผ ๊ฐ•๋ ฅํ•œ ํ•˜๋“œ์›จ์–ด๋ฅผ ์ œ๊ณตํ•˜๋ฉฐ ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ๋ณต์žกํ•œ ๋ชจ๋ธ์„ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐ ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. Google Cloud, AWS, Azure์™€ ๊ฐ™์€ ํ”Œ๋žซํผ์€ ๊ณ ์„ฑ๋Šฅ GPU์™€ TPU์— ๋Œ€ํ•œ ์˜จ๋””๋งจ๋“œ ์•ก์„ธ์Šค๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํŠธ๋ ˆ์ด๋‹ ์‹œ๊ฐ„์„ ๋‹จ์ถ•ํ•˜๊ณ  ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ์‹คํ—˜ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํด๋ผ์šฐ๋“œ ํŠธ๋ ˆ์ด๋‹์€ ํŠนํžˆ ์žฅ๊ธฐ๊ฐ„์— ๊ฑธ์ณ ๋น„์šฉ์ด ๋งŽ์ด ๋“ค ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋ฐ์ดํ„ฐ ์ „์†ก์œผ๋กœ ์ธํ•ด ๋น„์šฉ๊ณผ ์ง€์—ฐ ์‹œ๊ฐ„์ด ๋Š˜์–ด๋‚  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๋กœ์ปฌ ๊ต์œก์„ ํ†ตํ•ด ์ œ์–ด ๋ฐ ์‚ฌ์šฉ์ž ์ง€์ • ๊ธฐ๋Šฅ์„ ๊ฐ•ํ™”ํ•˜์—ฌ ํŠน์ • ์š”๊ตฌ ์‚ฌํ•ญ์— ๋งž๊ฒŒ ํ™˜๊ฒฝ์„ ์กฐ์ •ํ•˜๊ณ  ์ง€์†์ ์ธ ํด๋ผ์šฐ๋“œ ๋น„์šฉ์„ ํ”ผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์žฅ๊ธฐ ํ”„๋กœ์ ํŠธ์— ๋” ๊ฒฝ์ œ์ ์ผ ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋ฐ์ดํ„ฐ๊ฐ€ ์˜จํ”„๋ ˆ๋ฏธ์Šค์— ์œ ์ง€๋˜๋ฏ€๋กœ ๋” ์•ˆ์ „ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋กœ์ปฌ ํ•˜๋“œ์›จ์–ด์—๋Š” ๋ฆฌ์†Œ์Šค ์ œํ•œ์ด ์žˆ๊ณ  ์œ ์ง€ ๊ด€๋ฆฌ๊ฐ€ ํ•„์š”ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ๊ต์œก ์‹œ๊ฐ„์ด ๊ธธ์–ด์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์ตœ์ ํ™” ๋„๊ตฌ ์„ ํƒ

An optimizer is an algorithm that adjusts the weights of your neural network to minimize the loss function, which measures how well the model is performing. In simpler terms, the optimizer helps the model learn by tweaking its parameters to reduce errors. Choosing the right optimizer directly affects how quickly and accurately the model learns.

์ตœ์ ํ™” ๋„๊ตฌ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ๋ชจ๋ธ ์„ฑ๋Šฅ์„ ๊ฐœ์„ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•™์Šต ์†๋„๋ฅผ ์กฐ์ •ํ•˜๋ฉด ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์—…๋ฐ์ดํŠธํ•  ๋•Œ ๋‹จ๊ณ„์˜ ํฌ๊ธฐ๊ฐ€ ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ์•ˆ์ •์„ฑ์„ ์œ„ํ•ด ์ ๋‹นํ•œ ํ•™์Šต ์†๋„๋กœ ์‹œ์ž‘ํ•˜์—ฌ ์‹œ๊ฐ„์ด ์ง€๋‚จ์— ๋”ฐ๋ผ ์ ์ฐจ์ ์œผ๋กœ ํ•™์Šต ์†๋„๋ฅผ ๋‚ฎ์ถ”์–ด ์žฅ๊ธฐ์ ์ธ ํ•™์Šต์„ ๊ฐœ์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ชจ๋ฉ˜ํ…€์„ ์„ค์ •ํ•˜๋ฉด ๊ณผ๊ฑฐ ์—…๋ฐ์ดํŠธ๊ฐ€ ํ˜„์žฌ ์—…๋ฐ์ดํŠธ์— ์–ผ๋งˆ๋‚˜ ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š”์ง€ ๊ฒฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ฉ˜ํ…€์˜ ์ผ๋ฐ˜์ ์ธ ๊ฐ’์€ ์•ฝ 0.9์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ ์ ˆํ•œ ๊ท ํ˜•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

์ผ๋ฐ˜์ ์ธ ์ตœ์ ํ™” ๋„๊ตฌ

์ตœ์ ํ™” ๋„๊ตฌ๋งˆ๋‹ค ๋‹ค์–‘ํ•œ ์žฅ๋‹จ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ช‡ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ์ตœ์ ํ™” ๋„๊ตฌ์— ๋Œ€ํ•ด ๊ฐ„๋žตํžˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

  • SGD (Stochastic Gradient Descent):

    • Updates model parameters using the gradient of the loss function with respect to the parameters.
    • Simple and efficient, but may converge slowly and can get stuck in local minima.
  • Adam (Adaptive Moment Estimation):

    • Combines the benefits of SGD with momentum and RMSProp.
    • Adjusts the learning rate for each parameter based on estimates of the first and second moments of the gradients.
    • Well-suited for noisy data and sparse gradients.
    • Efficient and generally requires less tuning, making it a recommended optimizer for YOLO11.
  • RMSProp (Root Mean Square Propagation):

    • Adjusts the learning rate for each parameter by dividing the gradient by a running average of the magnitudes of recent gradients.
    • Helps in handling the vanishing gradient problem and is effective for recurrent neural networks.

For YOLO11, the optimizer parameter lets you choose from a variety of optimizers, including SGD, Adam, AdamW, NAdam, RAdam, and RMSProp, or you can set it to auto for automatic selection based on the model configuration.
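For instance, letting the trainer pick the optimizer automatically could look like this (ultralytics package and coco8.yaml demo dataset assumed):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# optimizer="auto" lets the trainer choose an optimizer and learning rate
# based on the model configuration; names like "AdamW" or "SGD" can be
# passed explicitly instead
model.train(data="coco8.yaml", epochs=100, optimizer="auto")
```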

Connecting with the Community

Being part of a community of computer vision enthusiasts can help you solve problems and learn faster. Here are some ways to connect, get help, and share ideas.

Community Resources

  • GitHub Issues: Visit the YOLO11 GitHub repository and use the Issues tab to ask questions, report bugs, and suggest new features. The community and maintainers are very active and ready to help.
  • Ultralytics Discord Server: Join the Ultralytics Discord server to chat with other users and developers, get support, and share your experiences.

๊ณต์‹ ๋ฌธ์„œ

  • Ultralytics YOLO11 Documentation: Check out the official YOLO11 documentation for detailed guides and helpful tips on various computer vision projects.

Using these resources will help you solve challenges and stay up-to-date with the latest trends and practices in the computer vision community.

Key Takeaways

Training computer vision models involves following good practices, optimizing your strategies, and solving problems as they arise. Techniques like adjusting batch sizes, mixed precision training, and starting with pre-trained weights can make your models work better and train faster. Methods like subset training and early stopping help you save time and resources. Staying connected with the community and keeping up with new trends will help you keep improving your model training skills.

์ž์ฃผ ๋ฌป๋Š” ์งˆ๋ฌธ

How can I improve GPU utilization when training a large dataset with Ultralytics YOLO?

GPU ํ™œ์šฉ๋„๋ฅผ ๋†’์ด๋ ค๋ฉด batch_size parameter in your training configuration to the maximum size supported by your GPU. This ensures that you make full use of the GPU's capabilities, reducing training time. If you encounter memory errors, incrementally reduce the batch size until training runs smoothly. For YOLO11, setting batch=-1 ๋ฅผ ๊ต์œก ์Šคํฌ๋ฆฝํŠธ์— ์ถ”๊ฐ€ํ•˜๋ฉด ํšจ์œจ์ ์ธ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•œ ์ตœ์ ์˜ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ์ž๋™์œผ๋กœ ๊ฒฐ์ •๋ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๊ต์œก ๊ตฌ์„ฑ.

What is mixed precision training, and how do I enable it in YOLO11?

Mixed precision training utilizes both 16-bit (FP16) and 32-bit (FP32) floating-point types to balance computational speed and precision. This approach speeds up training and reduces memory usage without sacrificing model accuracy. To enable mixed precision training in YOLO11, set the amp parameter to True in your training configuration. This activates Automatic Mixed Precision (AMP) training. For more details on this optimization technique, see the training configuration.

How does multiscale training enhance YOLO11 model performance?

Multiscale training enhances model performance by training on images of varying sizes, allowing the model to better generalize across different scales and distances. In YOLO11, you can enable multiscale training by setting the scale parameter. For example, scale=0.5 reduces the image size by half, while scale=2.0 doubles it. This technique simulates objects at different distances, making the model more robust across various scenarios. For settings and more details, check out the training configuration.

How can I use pre-trained weights to speed up training in YOLO11?

Using pre-trained weights can significantly reduce training times and improve model performance by starting from a model that already understands basic features. In YOLO11, you can set the pretrained parameter to True or specify a path to custom pre-trained weights in your training configuration. This approach, known as transfer learning, leverages knowledge from large datasets to adapt to your specific task. Learn more about pre-trained weights and their advantages here.

What is the recommended number of epochs for model training, and how do I set this in YOLO11?

The number of epochs refers to the complete passes through the training dataset during model training. A typical starting point is 300 epochs. If your model overfits early, you can reduce the number. Alternatively, if overfitting isn't observed, you might extend training to 600, 1200, or more epochs. To set this in YOLO11, use the epochs parameter in your training script. For additional help on determining the ideal number of epochs, refer to the section on the number of epochs.


๐Ÿ“… Created 3 months ago โœ๏ธ Updated 10 days ago
