Create a general-purpose CoreML request (coreml.new_model_request)
coreml.new_model_request loads a compiled CoreML model and creates a general-purpose request object that does not depend on Vision.
It is suitable for text models, tensor-input models, multi-input/multi-output models, embedding models, and any model whose input is not simply a regular image flowing through Vision.
If the model is primarily an image-recognition workflow and you want to pass an image object directly into a Vision pipeline, prefer new_vision_request.
If you need to prepare input features yourself, or the model expects MLMultiArray, token IDs, or multiple named inputs, this is the correct API.
coreml.session(...) is an alias of this function.
Declaration
request, err = coreml.new_model_request(compiled_model_path)
or
request, err = coreml.new_model_request({
    compiled_model_path = compiled_model_path,
    uses_cpu_only = uses_cpu_only,
    compute_units = compute_units,
})
Parameters
- compiled_model_path
String. Path to a compiled .mlmodelc model directory.
- uses_cpu_only
Boolean, optional. Whether to use CPU only. Defaults to false.
- compute_units
String, optional. Effective on iOS 12+. Supported values: "all"; "cpu_only" or "cpu"; "cpu_and_gpu" or "gpu"; "cpu_and_neural_engine", "ane", or "neural_engine" (iOS 16+).
When uses_cpu_only = true, CPU-only execution takes precedence over compute_units.
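As a quick illustration, the two declaration forms are interchangeable; the sketch below assumes a model shipped at the demo path used elsewhere in this page:

local model_path = XXT_HOME_PATH .. "/models/demo_text.mlmodelc"

-- Positional form: path only, default compute settings.
local req1, err1 = coreml.new_model_request(model_path)

-- Table form: request CPU + Neural Engine execution ("ane" is an
-- alias of "cpu_and_neural_engine"; requires iOS 16+).
local req2, err2 = coreml.new_model_request({
    compiled_model_path = model_path,
    uses_cpu_only = false,
    compute_units = "ane",
})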
Returns
- request
A general-purpose CoreML request object, or nil on failure.
- err
String. nil on success; error message on failure.
Notes
- This function is available in versions released after 20260319.
- The base capability is available on iOS 11+.
- compute_units configuration requires iOS 12+.
- If the model declares an image feature and you want to pass an image_object directly into this general-purpose request, iOS 13+ is required.
- Inputs must be passed as a table keyed by input name, for example { input_ids = ids }.
- Input values can be numbers, strings, booleans, dictionaries, MLMultiArray, and image objects when the model declares an image feature input.
- This object only runs model inference; image preprocessing and text tokenization should be prepared separately in Lua.
- The same request object can be reused across repeated predict()/run() calls.
- If older devices or some models are unstable on the default backend, you can set uses_cpu_only = true during creation.
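The reuse note above can be sketched as follows. Here batches is a hypothetical list of token-ID tables prepared by your own tokenizer, and input_ids stands in for whatever input name your model actually declares:

local req = assert(coreml.new_model_request(
    XXT_HOME_PATH .. "/models/demo_text.mlmodelc"))

-- One request object, many inference calls; each input is a table
-- keyed by the model's declared input names.
for _, ids in ipairs(batches) do
    local out = assert(req:predict({ input_ids = ids }))
    -- consume out here
end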
Type checks
Declaration
is_request = coreml.is_model_request(value)
or
is_request = coreml.is_session(value)
Parameters
- value
Any value to test.
Returns
- is_request
Boolean. Returns true if the value is a general-purpose CoreML request object; false otherwise.
Notes
- coreml.is_session() and coreml.is_model_request() are equivalent aliases.
- Useful in generic Lua wrappers for type guarding and argument validation.
- If you already know the value came from new_model_request(...) / session(...), you usually do not need this check.
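For instance, a generic wrapper might use the check to accept either a ready request object or a model path. The helper name ensure_request and its error message are illustrative, not part of the API:

-- Accept either an existing request object or a compiled model path.
local function ensure_request(value)
    if coreml.is_model_request(value) then
        return value
    elseif type(value) == "string" then
        return assert(coreml.new_model_request(value))
    end
    error("expected a CoreML request object or a compiled model path")
end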
Example
-- Path to a compiled .mlmodelc model directory shipped with the script.
local compiled_model_path = XXT_HOME_PATH.."/models/demo_text.mlmodelc"

local req, err = coreml.new_model_request({
    compiled_model_path = compiled_model_path,
    uses_cpu_only = false,
    compute_units = "all",
})
if not req then
    error(err)
end

-- ids: token IDs prepared separately in Lua (see Notes);
-- the table key must match the model's declared input name.
local out = assert(req:predict({
    input_ids = ids,
}))

print(req:input_names())
print(req:output_names())
print(out[1])