# Use case examples

> Use case examples - Sample use cases for how and when to use train-time image augmentation in image
> datasets.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.608970+00:00` (UTC).

## Primary page

- [Use case examples](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/visual-ai/tti-augment/ttia-examples.html): Full documentation for this topic (HTML).

## Sections on this page

- [Identifying types of plankton](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/visual-ai/tti-augment/ttia-examples.html#identifying-types-of-plankton): In-page section heading.
- [Classifying groceries](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/visual-ai/tti-augment/ttia-examples.html#classifying-groceries): In-page section heading.
- [Finding powerlines](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/visual-ai/tti-augment/ttia-examples.html#finding-powerlines): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [Modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/index.html): Linked from this page.
- [Specialized workflows](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/index.html): Linked from this page.
- [Visual AI](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/visual-ai/index.html): Linked from this page.
- [Train-time image augmentation](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/visual-ai/tti-augment/index.html): Linked from this page.
- [Advanced Tuning](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/adv-tuning.html): Linked from this page.

## Documentation content

# Use case examples

Below are example use cases illustrating how you might leverage domain knowledge of your dataset to craft a beneficial augmentation strategy. You can try the suggestions and then modify the settings using the [Advanced Tuning](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/adv-tuning.html) tab. For each use case, the first screenshot explores the images by expanding the image feature in the Data tab; the second shows previews from the **Advanced options > Image Augmentation** tab.

## Identifying types of plankton

This dataset contains tens of thousands of images of microscopic life and aquatic debris, taken with the ISIIS underwater imaging system.

To classify them into 24 classes:

- Because of the way that floating plankton and debris move through water, they can be in any orientation, irrespective of gravity. This example supports enabling **Horizontal Flip** and **Vertical Flip** and setting **Rotation** to a high maximum value.
- Because of the way the images were cropped when the dataset was prepared and labelled, most images are centered with a similar margin. For this reason, you would not enable **Shift** or **Scale**.
- The images vary in blurriness. Enable a slight **Blur** to match.
- There are not many instances of shapes that occlude the plankton intended to be identified. In addition, since the images are very low resolution, there is probably a low chance of overfitting to specific small patterns or pixels. For these two reasons, do not enable **Cutout**.
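The flip-and-rotation policy above is simple to picture in code. Below is a minimal NumPy sketch of what such transforms do to an image array; it only illustrates the idea (restricted to quarter-turn rotations for simplicity, and the function name `augment_plankton` is invented here), not DataRobot's implementation.

```python
import random

import numpy as np


def augment_plankton(img: np.ndarray, rng: random.Random) -> np.ndarray:
    """Apply random flips and a rotation, mimicking an orientation-free policy.

    Illustrative sketch: real Rotation settings allow arbitrary angles;
    np.rot90 keeps this example dependency-free and exact.
    """
    if rng.random() < 0.5:
        img = np.fliplr(img)  # Horizontal Flip
    if rng.random() < 0.5:
        img = np.flipud(img)  # Vertical Flip
    # "High maximum rotation": pick any quarter turn (0, 90, 180, or 270 degrees).
    return np.rot90(img, k=rng.randrange(4))
```

Because flips and right-angle rotations only rearrange pixels, the augmented image contains exactly the same values as the original — the label-preserving property that makes this policy safe for orientation-free subjects like plankton.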

## Classifying groceries

This dataset contains a few thousand images—taken with a hand-held camera—of fruits, vegetables, and dairy products found in a grocery store.

Configuration suggestions to classify them into 83 classes:

- Although the fruits and vegetables can be in any orientation in the bins, photos are always taken with the ground at the bottom of the photo (right-side up), so it's best not to enable **Vertical Flip**.
- While **Horizontal Flip** might be reasonable for fruits and vegetables, what about the dairy cartons? Does the model need to recognize specific text or a logo on the carton that would be harder to recognize if it were flipped? Use **Horizontal Flip** for the benefits it might provide to most other classes, but also experiment and compare with a model without **Horizontal Flip** (via **Advanced Tuning**).
- Most photos are taken from approximately an arm's length away, so there is probably no need to enable **Scale**.
- Notice that the photos come from a wide variety of angles and are not always centered. To address this, apply **Rotation** and **Shift**.
- The photo resolution seems consistent, and very small details might be necessary to distinguish among varieties of the same fruit. For that reason, don't enable **Blur**.
- In addition, because there isn't obvious occlusion of the grocery items, first try without **Cutout**. Consider also trying with **Cutout** using **Advanced Tuning**.
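Of the transforms suggested above, **Shift** is perhaps the least obvious, so here is a minimal NumPy sketch of a random translation that fills vacated pixels with zeros. It is illustrative only — the helper name `random_shift` and the zero-padding choice are assumptions for this example, not how DataRobot implements the setting.

```python
import random

import numpy as np


def random_shift(img: np.ndarray, max_frac: float, rng: random.Random) -> np.ndarray:
    """Translate img by up to max_frac of each dimension, zero-padding the rest."""
    h, w = img.shape[:2]
    dy = rng.randint(-int(h * max_frac), int(h * max_frac))
    dx = rng.randint(-int(w * max_frac), int(w * max_frac))
    out = np.zeros_like(img)
    # Copy the region where the original and shifted frames overlap.
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    out[max(0, dy):max(0, dy) + src.shape[0],
        max(0, dx):max(0, dx) + src.shape[1]] = src
    return out
```

Shifting teaches the model that the subject need not sit at the image center — exactly the property the grocery photos lack.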

## Finding powerlines

This dataset contains a few thousand aerial images of the countryside. The example helps identify which images contain powerlines.

Consider:

- Since the photos are taken from above and could capture the ground at many angles depending on how the airplane is flying, enable **Horizontal Flip**, **Vertical Flip**, and a large maximum **Rotation**.
- Because the photos are taken from a variety of altitudes, enable **Scale**.
- There is no centering or consistent margin in the photos, so enable **Shift**.
- Enable **Blur** since the photos have a variety of blurriness/resolution levels.
- Birds, trees, or discolorations in the ground can decrease the contrast between the powerlines and the ground, which might make it hard for the model to detect the powerlines. Enable **Cutout** to simulate more instances where part of the powerline might be difficult to detect, in the hopes that the model will more robustly detect any part of the powerline.
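**Cutout** simulates occlusion by masking a random patch so the model cannot rely on any single region being visible. A minimal NumPy sketch follows; the square patch shape and zero fill are common choices in the augmentation literature, not necessarily DataRobot's exact behavior.

```python
import random

import numpy as np


def cutout(img: np.ndarray, size: int, rng: random.Random) -> np.ndarray:
    """Zero out one random size x size square, simulating an occluder."""
    h, w = img.shape[:2]
    y = rng.randrange(h - size + 1)
    x = rng.randrange(w - size + 1)
    out = img.copy()  # leave the original image untouched
    out[y:y + size, x:x + size] = 0
    return out
```

Applied during training, this forces the model to learn from whatever parts of the powerline remain visible, rather than depending on any one segment.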
