CoreML Module (coreml)
Available since app version 1.3.8+
The CoreML module provides on-device machine learning inference. In addition to Vision-based inference, it includes:
- general-purpose model requests / sessions
- `MLMultiArray` creation, conversion, and math helpers
- image/tensor conversion helpers
- type-selectable text tokenizers
- generic post-processing helpers for detection / OBB / masks / keypoints / tracking
The SentencePiece tokenizer is a lightweight, compatibility-oriented implementation intended for on-device inference; it supports both `.vocab` and `.model` sources. After `require("onnxruntime")`, native copy bridges between `MLMultiArray` and ORT tensors are also injected.
Platform support notes
The CoreML module is based on iOS 11+ overall, but different APIs have different minimum system requirements:
- `new_model_request`/`session`, `MLMultiArray` APIs, image tensorization, tokenizer APIs, and most geometry / post-processing helpers: iOS 11+
- `compile_model`, `new_vision_request`, `predict_batch()`/`run_batch()`, and `compute_units` creation/query APIs: iOS 12+
- Passing an `image_object` into a general-purpose model request as an image feature input: iOS 13+
- `class_labels()`: iOS 14+
- `compute_units = "cpu_and_neural_engine"`: iOS 16+
- Compile a CoreML model
- Create a CoreML Vision request
- Run image inference with a CoreML Vision request
- Create a general-purpose CoreML request
- Methods on a general-purpose CoreML request
- ML multi-array module
- Text tokenizer module
- ONNX Runtime module
Additional notes
- `coreml.session(...)` is an alias of `coreml.new_model_request(...)`
- `coreml.tensor(...)` is an alias of `coreml.new_multi_array(...)`
- `coreml.tensor_from_table(...)` is an alias of `coreml.multi_array_from_table(...)`
- `coreml.image_to_multi_array(...)` is still available, but it is now only a compatibility alias of `coreml.tensor_from_image(...)`
- `coreml.multi_array_from_quad(...)` is an alias of `coreml.tensor_from_quad(...)`
- `coreml.multi_array_from_quads(...)` is an alias of `coreml.tensor_from_quads(...)`
- `coreml.project_masks(...)` is an alias of `coreml.proto_masks(...)`
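
As a minimal sketch of how the aliases above behave, the two calls below should produce the same kind of request object. The model path `"model.mlmodelc"` and the single-argument call shape are assumptions for illustration, not taken from this page:

```lua
-- Hypothetical sketch; assumes a compiled model at "model.mlmodelc".
-- Both names refer to the same underlying function, so either spelling works.
local req_a = coreml.new_model_request("model.mlmodelc")
local req_b = coreml.session("model.mlmodelc")

-- If the aliases are exposed as the same Lua function object
-- (an implementation detail not stated here), this also holds:
assert(coreml.tensor == coreml.new_multi_array)
```

In practice, prefer the canonical names (`new_model_request`, `new_multi_array`, `tensor_from_image`, ...) in new scripts; the aliases exist mainly for backward compatibility with older scripts.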