Create a CoreML Vision request (coreml.new_vision_request)

coreml.new_vision_request loads a compiled CoreML model and creates a Vision-based inference request for image recognition.
Use it when the model consumes an image directly and you want Vision to handle the request pipeline.

Declaration

vnreq, err = coreml.new_vision_request(compiled_model_path)

or

vnreq, err = coreml.new_vision_request({
    compiled_model_path = compiled_model_path,
    uses_cpu_only = use_cpu_only,
    compute_units = compute_units,
    image_crop_and_scale_option = crop_and_scale_mode,
})
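For the simple form, only the model path is required. A minimal sketch, assuming a hypothetical model location (point the path at your own compiled .mlmodelc directory):

```lua
-- Hypothetical path; substitute your own compiled .mlmodelc directory.
local model_path = "/var/mobile/Media/models/MobileNetV2.mlmodelc"

local vnreq, err = coreml.new_vision_request(model_path)
if not vnreq then
    -- err carries the failure message (e.g. bad path or incompatible model)
    error("failed to create Vision request: " .. tostring(err))
end
```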

Parameters

  • compiled_model_path
    String. Path to a compiled .mlmodelc model directory. This API is primarily designed for image classification and object detection models.

  • uses_cpu_only
    Boolean, optional. Restricts inference to the CPU when true. Defaults to false.

  • compute_units
    String, optional. Effective on iOS 12 and later. Supported values:

    • "all"
    • "cpu_only" or "cpu"
    • "cpu_and_gpu" or "gpu"
    • "cpu_and_neural_engine", "ane", or "neural_engine" (iOS 16+)
  • image_crop_and_scale_option
    Integer, optional. Vision crop-and-scale mode (a VNImageCropAndScaleOption value) applied when the input image is resized to the model's expected input size.
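The table form lets you set all of the options above in one call. A hedged sketch, assuming the string values listed for compute_units; the integer 0 for the crop-and-scale option is shown only as a placeholder, since the concrete VNImageCropAndScaleOption values are not documented on this page:

```lua
-- Hypothetical model path; adjust to your installation.
local vnreq, err = coreml.new_vision_request({
    compiled_model_path = "/var/mobile/Media/models/MobileNetV2.mlmodelc",
    uses_cpu_only = false,            -- allow GPU / Neural Engine
    compute_units = "cpu_and_gpu",    -- effective on iOS 12+
    image_crop_and_scale_option = 0,  -- placeholder VNImageCropAndScaleOption value
})
if not vnreq then
    error(err)
end
```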

Returns

  • vnreq
    Vision request object, or nil on failure.
  • err
    String. nil on success; error message on failure.

Notes

  • Available in app version 1.3.8 and later.
  • Not supported on iOS versions below 12.
  • This function creates a Vision inference request from a compiled .mlmodelc model for image-based recognition.
  • Use this API for Vision-based image models.
  • If the model expects general image / MLMultiArray feature inputs instead of a Vision pipeline, consider coreml.new_model_request instead.
  • In addition to :predict() / :run(), the request object also exposes methods such as :results(), :is_done(), :compute_units(), :image_crop_and_scale_option(), and :class_labels().
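Putting the methods above together, a hedged sketch of one inference pass. The argument accepted by :predict() (an image path here) and the synchronous completion are assumptions not confirmed by this page; see the linked example for the actual calling convention:

```lua
local vnreq, err = coreml.new_vision_request("/var/mobile/Media/models/MobileNetV2.mlmodelc")
assert(vnreq, err)

-- Assumed: :predict() accepts an image path (hypothetical file shown).
vnreq:predict("/var/mobile/Media/photos/cat.jpg")

if vnreq:is_done() then
    local results = vnreq:results()      -- recognition results
    local labels = vnreq:class_labels()  -- labels the model can emit
    print(vnreq:compute_units())         -- effective compute units
end
```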

Example

See Run image inference with a CoreML Vision request