CoreML Module (coreml)

Available since app version 1.3.8+
The CoreML module provides on-device machine learning inference. In addition to Vision-based inference, it includes:

  • general-purpose model requests / sessions
  • MLMultiArray creation, conversion, and math helpers
  • image/tensor conversion helpers
  • type-selectable text tokenizers
  • generic post-processing helpers for detection / OBB / masks / keypoints / tracking
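The capabilities above combine into a typical round trip: create a session, build an input tensor, run inference. The sketch below assumes plausible argument shapes (a model path string, a shape-plus-dtype tensor constructor, and a `predict` method taking a feature table); only the function names `coreml.session`, `coreml.tensor`, and the general request/session concept come from this document.

```lua
-- Hedged sketch of a general-purpose model session; argument shapes
-- and the :predict() signature are assumptions, not confirmed API.
local coreml = require("coreml")

-- coreml.session(...) is an alias of coreml.new_model_request(...)
local session = coreml.session("model.mlmodelc")

-- coreml.tensor(...) is an alias of coreml.new_multi_array(...);
-- the shape/dtype arguments here are illustrative.
local input = coreml.tensor({1, 3, 224, 224}, "float32")

-- Assumed: inputs and outputs are tables keyed by feature name.
local output = session:predict({ image = input })
```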

SentencePiece support uses a lightweight, compatibility-oriented implementation intended for on-device inference, and accepts both .vocab and .model sources. After require("onnxruntime") has been called, native copy bridges between MLMultiArray and ORT tensors are also injected.
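Since the tokenizers are type-selectable and SentencePiece accepts either source format, constructing one might look like the sketch below. The constructor name `coreml.tokenizer` and the option keys are assumptions for illustration; only the type-selectable design and the .vocab/.model support come from this document.

```lua
-- Hedged sketch: constructor name and option keys are assumptions.
local coreml = require("coreml")

-- SentencePiece from a .model source; a .vocab path should work too.
local tok = coreml.tokenizer({
  type  = "sentencepiece",
  model = "tokenizer.model",
})

-- Assumed encode/decode surface, common to most tokenizer APIs.
local ids = tok:encode("hello world")
local text = tok:decode(ids)
```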

Platform support notes

The CoreML module requires iOS 11 or later overall, but individual APIs have different minimum system requirements:

  • new_model_request / session, MLMultiArray APIs, image tensorization, tokenizer APIs, and most geometry / post-processing helpers: iOS 11+
  • compile_model, new_vision_request, predict_batch() / run_batch(), and compute_units creation/query APIs: iOS 12+
  • Passing an image_object into a general-purpose model request as an image feature input: iOS 13+
  • class_labels(): iOS 14+
  • compute_units = "cpu_and_neural_engine": iOS 16+
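Because the "cpu_and_neural_engine" compute-units value is only available on iOS 16+, callers targeting older systems need a fallback. The sketch below assumes an options table on session creation and a pcall-style failure on unsupported values; the option name compute_units comes from this document, while the options-table shape and the "all" fallback value are assumptions.

```lua
-- Hedged sketch: options-table shape and failure behavior are assumptions.
local coreml = require("coreml")

-- "cpu_and_neural_engine" requires iOS 16+.
local ok, session = pcall(coreml.session, "model.mlmodelc",
                          { compute_units = "cpu_and_neural_engine" })
if not ok then
  -- Fall back on older systems; "all" is an assumed default value.
  session = coreml.session("model.mlmodelc", { compute_units = "all" })
end
```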

Additional notes

  • coreml.session(...) is an alias of coreml.new_model_request(...)
  • coreml.tensor(...) is an alias of coreml.new_multi_array(...)
  • coreml.tensor_from_table(...) is an alias of coreml.multi_array_from_table(...)
  • coreml.image_to_multi_array(...) remains available, but only as a compatibility alias of coreml.tensor_from_image(...)
  • coreml.multi_array_from_quad(...) is an alias of coreml.tensor_from_quad(...)
  • coreml.multi_array_from_quads(...) is an alias of coreml.tensor_from_quads(...)
  • coreml.project_masks(...) is an alias of coreml.proto_masks(...)
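Each alias pair above names the same underlying function, so the old and new spellings are interchangeable. The arguments in the sketch below are illustrative assumptions; only the alias relationship comes from this document.

```lua
-- Hedged sketch: the two calls in each pair are equivalent; the
-- shape/dtype arguments shown are assumptions.
local coreml = require("coreml")

local a = coreml.tensor({2, 3}, "float32")          -- new spelling
local b = coreml.new_multi_array({2, 3}, "float32") -- same function

-- Prefer the new names in new code; the aliases exist so that
-- scripts written against the old names keep working.
```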