Tizen Native API  7.0
Machine Learning

Overview

API Description
Pipeline Provides interfaces to create and execute stream pipelines with neural networks and sensors.
Service Provides interfaces to store and fetch the pipeline description for AI application developers.
Single Provides interfaces to invoke a neural network model with a single instance of input data.
Trainer Provides interfaces to create and train Machine Learning models on the device locally.

Functions

int ml_tensors_info_create (ml_tensors_info_h *info)
 Creates a tensors information handle with default value.
int ml_tensors_info_create_extended (ml_tensors_info_h *info)
 Creates an extended tensors information handle with default value.
int ml_tensors_info_destroy (ml_tensors_info_h info)
 Frees the given handle of a tensors information.
int ml_tensors_info_validate (const ml_tensors_info_h info, bool *valid)
 Validates the given tensors information.
int ml_tensors_info_clone (ml_tensors_info_h dest, const ml_tensors_info_h src)
 Copies the tensors information.
int ml_tensors_info_set_count (ml_tensors_info_h info, unsigned int count)
 Sets the number of tensors with given handle of tensors information.
int ml_tensors_info_get_count (ml_tensors_info_h info, unsigned int *count)
 Gets the number of tensors with given handle of tensors information.
int ml_tensors_info_set_tensor_name (ml_tensors_info_h info, unsigned int index, const char *name)
 Sets the tensor name with given handle of tensors information.
int ml_tensors_info_get_tensor_name (ml_tensors_info_h info, unsigned int index, char **name)
 Gets the tensor name with given handle of tensors information.
int ml_tensors_info_set_tensor_type (ml_tensors_info_h info, unsigned int index, const ml_tensor_type_e type)
 Sets the tensor type with given handle of tensors information.
int ml_tensors_info_get_tensor_type (ml_tensors_info_h info, unsigned int index, ml_tensor_type_e *type)
 Gets the tensor type with given handle of tensors information.
int ml_tensors_info_set_tensor_dimension (ml_tensors_info_h info, unsigned int index, const ml_tensor_dimension dimension)
 Sets the tensor dimension with given handle of tensors information.
int ml_tensors_info_get_tensor_dimension (ml_tensors_info_h info, unsigned int index, ml_tensor_dimension dimension)
 Gets the tensor dimension with given handle of tensors information.
int ml_tensors_info_get_tensor_size (ml_tensors_info_h info, int index, size_t *data_size)
 Gets the size of tensors data in the given tensors information handle in bytes.
int ml_tensors_data_create (const ml_tensors_info_h info, ml_tensors_data_h *data)
 Creates a tensor data frame with the given tensors information.
int ml_tensors_data_destroy (ml_tensors_data_h data)
 Frees the given tensors' data handle.
int ml_tensors_data_get_tensor_data (ml_tensors_data_h data, unsigned int index, void **raw_data, size_t *data_size)
 Gets a tensor data of given handle.
int ml_tensors_data_set_tensor_data (ml_tensors_data_h data, unsigned int index, const void *raw_data, const size_t data_size)
 Copies a tensor data to given handle.
const char * ml_error (void)
 Returns a human-readable string describing the last error.
const char * ml_strerror (int error_code)
 Returns a human-readable string describing an error code.
int ml_option_create (ml_option_h *option)
 Creates ml-option instance.
int ml_option_destroy (ml_option_h option)
 Destroys the ml-option instance.
int ml_option_set (ml_option_h option, const char *key, void *value, ml_data_destroy_cb destroy)
 Sets a new key-value in ml-option instance.
int ml_option_get (ml_option_h option, const char *key, void **value)
 Gets a value of key in ml-option instance.

Typedefs

typedef unsigned int ml_tensor_dimension [ML_TENSOR_RANK_LIMIT]
 The dimensions of a tensor that NNStreamer supports.
typedef void * ml_tensors_info_h
 A handle of a tensors metadata instance.
typedef void * ml_tensors_data_h
 A handle of input or output frames. ml_tensors_info_h is the handle for tensors metadata.
typedef enum _ml_tensor_type_e ml_tensor_type_e
 Possible data element types of tensor in NNStreamer.
typedef void(* ml_data_destroy_cb )(void *data)
 The function to be called when destroying the data in machine learning API.
typedef int(* ml_custom_easy_invoke_cb )(const ml_tensors_data_h in, ml_tensors_data_h out, void *user_data)
 Callback to execute the custom-easy filter in NNStreamer pipelines.
typedef void * ml_option_h
 A handle of a ml-option instance.

Defines

#define ML_TENSOR_RANK_LIMIT   (16)
 The maximum rank that NNStreamer supports with Tizen APIs.
#define ML_TENSOR_SIZE_LIMIT   (16)
 The maximum number of other/tensor instances that other/tensors may have.

Define Documentation

#define ML_TENSOR_RANK_LIMIT   (16)

The maximum rank that NNStreamer supports with Tizen APIs.

Since :
5.5
Remarks:
The maximum rank in Tizen APIs is 4 until Tizen 7.0 and 16 since Tizen 8.0.
#define ML_TENSOR_SIZE_LIMIT   (16)

The maximum number of other/tensor instances that other/tensors may have.

Since :
5.5

Typedef Documentation

typedef int(* ml_custom_easy_invoke_cb)(const ml_tensors_data_h in, ml_tensors_data_h out, void *user_data)

Callback to execute the custom-easy filter in NNStreamer pipelines.

Note that if ml_custom_easy_invoke_cb() returns a negative error value, the constructed pipeline no longer works properly. In that case, developers should release the pipeline handle and recreate the pipeline.

Since :
6.0
Remarks:
The in handle can be used only inside the callback. To use it outside, make a copy.
The out handle can be used only inside the callback. To use it outside, make a copy.
Parameters:
[in] in The handle of the tensor input (a single frame, tensor/tensors).
[out] out The handle of the tensor output to be filled (a single frame, tensor/tensors).
[in,out] user_data User application's private data.
Returns:
0 on success. 1 to ignore the input data. Otherwise a negative error value.
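The contract above can be sketched as a minimal pass-through custom-easy filter. The callback name pass_through_cb and the single-tensor, equal-size assumption are illustrative only; error handling is reduced for brevity.

```c
/* A minimal pass-through custom-easy callback, sketched under the
 * assumption of a single input/output tensor of equal size.
 * The name pass_through_cb is illustrative only. */
#include <nnstreamer.h>
#include <string.h>

static int
pass_through_cb (const ml_tensors_data_h in, ml_tensors_data_h out,
    void *user_data)
{
  void *in_raw = NULL, *out_raw = NULL;
  size_t in_size = 0, out_size = 0;
  int status;

  /* Borrow the internal buffers; they are valid only inside the callback. */
  status = ml_tensors_data_get_tensor_data (in, 0U, &in_raw, &in_size);
  if (status != ML_ERROR_NONE)
    return status; /* Negative value: the pipeline must be recreated. */

  status = ml_tensors_data_get_tensor_data (out, 0U, &out_raw, &out_size);
  if (status != ML_ERROR_NONE)
    return status;

  memcpy (out_raw, in_raw, in_size < out_size ? in_size : out_size);
  return 0; /* 0 passes the frame on; 1 would ignore this input. */
}
```

Such a callback is registered when constructing the pipeline (via the pipeline API's custom-easy filter registration) and is invoked once per input frame.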
typedef void(* ml_data_destroy_cb)(void *data)

The function to be called when destroying the data in machine learning API.

Since :
7.0
Parameters:
[in] data The data to be destroyed.
typedef void* ml_option_h

A handle of a ml-option instance.

Since :
7.0
typedef unsigned int ml_tensor_dimension[ML_TENSOR_RANK_LIMIT]

The dimensions of a tensor that NNStreamer supports.

Since :
5.5

typedef enum _ml_tensor_type_e ml_tensor_type_e

Possible data element types of tensor in NNStreamer.

Since :
5.5
typedef void* ml_tensors_data_h

A handle of input or output frames. ml_tensors_info_h is the handle for tensors metadata.

Since :
5.5
typedef void* ml_tensors_info_h

A handle of a tensors metadata instance.

Since :
5.5

Enumeration Type Documentation

enum _ml_tensor_type_e

Possible data element types of tensor in NNStreamer.

Since :
5.5
Enumerator:
ML_TENSOR_TYPE_INT32 

Integer 32bit

ML_TENSOR_TYPE_UINT32 

Unsigned integer 32bit

ML_TENSOR_TYPE_INT16 

Integer 16bit

ML_TENSOR_TYPE_UINT16 

Unsigned integer 16bit

ML_TENSOR_TYPE_INT8 

Integer 8bit

ML_TENSOR_TYPE_UINT8 

Unsigned integer 8bit

ML_TENSOR_TYPE_FLOAT64 

Float 64bit

ML_TENSOR_TYPE_FLOAT32 

Float 32bit

ML_TENSOR_TYPE_INT64 

Integer 64bit

ML_TENSOR_TYPE_UINT64 

Unsigned integer 64bit

ML_TENSOR_TYPE_FLOAT16 

FP16, IEEE 754. Note that this type is supported only in aarch64/arm devices. (Since 7.0)

ML_TENSOR_TYPE_UNKNOWN 

Unknown type

enum ml_error_e

Enumeration for the error codes of NNStreamer.

Since :
5.5
Enumerator:
ML_ERROR_NONE 

Success!

ML_ERROR_INVALID_PARAMETER 

Invalid parameter

ML_ERROR_STREAMS_PIPE 

Cannot create or access the pipeline.

ML_ERROR_TRY_AGAIN 

The pipeline is not ready, yet (not negotiated, yet)

ML_ERROR_UNKNOWN 

Unknown error

ML_ERROR_TIMED_OUT 

Time out

ML_ERROR_NOT_SUPPORTED 

The feature is not supported

ML_ERROR_PERMISSION_DENIED 

Permission denied

ML_ERROR_OUT_OF_MEMORY 

Out of memory (Since 6.0)

ML_ERROR_IO_ERROR 

I/O error for database and filesystem (Since 7.0)

enum ml_nnfw_hw_e

Types of hardware resources to be used for NNFWs. Note that if the affinity (nnn) is not supported by the driver or hardware, it is ignored.

Since :
5.5
Enumerator:
ML_NNFW_HW_ANY 

Hardware resource is not specified.

ML_NNFW_HW_AUTO 

Try to schedule and optimize if possible.

ML_NNFW_HW_CPU 

0x1000: any CPU. 0x1nnn: CPU # nnn-1.

ML_NNFW_HW_CPU_SIMD 

0x1100: SIMD in CPU. (Since 6.0)

ML_NNFW_HW_CPU_NEON 

0x1100: NEON (alias for SIMD) in CPU. (Since 6.0)

ML_NNFW_HW_GPU 

0x2000: any GPU. 0x2nnn: GPU # nnn-1.

ML_NNFW_HW_NPU 

0x3000: any NPU. 0x3nnn: NPU # nnn-1.

ML_NNFW_HW_NPU_MOVIDIUS 

0x3001: Intel Movidius Stick. (Since 6.0)

ML_NNFW_HW_NPU_EDGE_TPU 

0x3002: Google Coral Edge TPU (USB). (Since 6.0)

ML_NNFW_HW_NPU_VIVANTE 

0x3003: VeriSilicon's Vivante. (Since 6.0)

ML_NNFW_HW_NPU_SLSI 

0x3004: Samsung S.LSI. (Since 6.5)

ML_NNFW_HW_NPU_SR 

0x13000: any SR (Samsung Research) made NPU. (Since 6.0)

enum ml_nnfw_type_e

Types of NNFWs.

To check whether an NNFW type is supported on a system, an application may call ml_check_nnfw_availability().

Since :
5.5
Enumerator:
ML_NNFW_TYPE_ANY 

NNFW is not specified (Try to determine the NNFW with file extension).

ML_NNFW_TYPE_CUSTOM_FILTER 

Custom filter (Independent shared object).

ML_NNFW_TYPE_TENSORFLOW_LITE 

Tensorflow-lite (.tflite).

ML_NNFW_TYPE_TENSORFLOW 

Tensorflow (.pb).

ML_NNFW_TYPE_NNFW 

Neural Network Inference framework, which is developed by SR (Samsung Research).

ML_NNFW_TYPE_MVNC 

Intel Movidius Neural Compute SDK (libmvnc). (Since 6.0)

ML_NNFW_TYPE_OPENVINO 

Intel OpenVINO. (Since 6.0)

ML_NNFW_TYPE_VIVANTE 

VeriSilicon's Vivante. (Since 6.0)

ML_NNFW_TYPE_EDGE_TPU 

Google Coral Edge TPU (USB). (Since 6.0)

ML_NNFW_TYPE_ARMNN 

Arm Neural Network framework (support for caffe and tensorflow-lite). (Since 6.0)

ML_NNFW_TYPE_SNPE 

Qualcomm SNPE (Snapdragon Neural Processing Engine) (.dlc). (Since 6.0)

ML_NNFW_TYPE_PYTORCH 

PyTorch (.pt). (Since 6.5)

ML_NNFW_TYPE_NNTR_INF 

Inference supported by NNTrainer, the SR on-device training framework. (Since 6.5)

ML_NNFW_TYPE_VD_AIFW 

Inference framework for Samsung Tizen TV (Since 6.5)

ML_NNFW_TYPE_TRIX_ENGINE 

TRIxENGINE accesses TRIV/TRIA NPU low-level drivers directly (.tvn). (Since 6.5) You may need to use high-level drivers wrapping this low-level driver in some devices: e.g., AIFW

ML_NNFW_TYPE_MXNET 

Apache MXNet (Since 7.0)

ML_NNFW_TYPE_TVM 

Apache TVM (Since 7.0)

ML_NNFW_TYPE_SNAP 

SNAP (Samsung Neural Acceleration Platform), only for Android. (Since 6.0)


Function Documentation

const char* ml_error ( void  )

Returns a human-readable string describing the last error.

This returns a human-readable, null-terminated string describing the most recent error that occurred from a call to one of the functions in the Machine Learning API since the last call to ml_error(). The returned string should *not* be freed or overwritten by the caller.

Since :
7.0
Returns:
NULL if no error to be reported. Otherwise the error description.
int ml_option_create ( ml_option_h *  option )

Creates ml-option instance.

Since :
7.0
Remarks:
The option should be released using ml_option_destroy().
Parameters:
[out] option Newly created option handle is returned.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
int ml_option_destroy ( ml_option_h  option)

Destroys the ml-option instance.

Note that the user should free the allocated values in the ml-option instance if no destroy function was given.

Since :
7.0
Parameters:
[in] option The option handle to be destroyed.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
int ml_option_get ( ml_option_h  option,
const char *  key,
void **  value 
)

Gets a value of key in ml-option instance.

This returns a pointer to memory held in the handle. Do not deallocate the returned value. If you modify the returned memory (value), the contents stored in the handle are updated.

Since :
8.0
Parameters:
[in] option The handle of ml-option.
[in] key The key to get the corresponding value.
[out] value The value of the key.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
int ml_option_set ( ml_option_h  option,
const char *  key,
void *  value,
ml_data_destroy_cb  destroy 
)

Sets a new key-value in ml-option instance.

Note that the value should remain valid while it is in use and should be freed after the ml-option instance is destroyed, unless a proper destroy function is given. When a duplicate key is given, the corresponding value is updated with the new one.

Since :
7.0
Parameters:
[in] option The handle of ml-option.
[in] key The key to be set.
[in] value The value to be set.
[in] destroy The function to destroy the value. It is called when the ml-option instance is destroyed.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
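The four ml-option functions combine into a create/set/get/destroy lifecycle, sketched below. The key name "timeout" is illustrative only, and GLib allocation (g_new0/g_free) is an assumption, used so the stored value is released automatically when the instance is destroyed.

```c
/* Sketch of the ml-option lifecycle: create, set, get, destroy.
 * The key "timeout" and the GLib helpers are illustrative assumptions. */
#include <nnstreamer.h>
#include <glib.h>
#include <stdio.h>

static void
option_example (void)
{
  ml_option_h option = NULL;
  unsigned int *timeout;
  void *value = NULL;

  if (ml_option_create (&option) != ML_ERROR_NONE)
    return;

  timeout = g_new0 (unsigned int, 1);
  *timeout = 500U;
  /* g_free is the destroy callback, invoked on ml_option_destroy(). */
  ml_option_set (option, "timeout", timeout, g_free);

  /* The returned pointer references memory owned by the instance;
   * do not deallocate it. */
  if (ml_option_get (option, "timeout", &value) == ML_ERROR_NONE)
    printf ("timeout: %u\n", *(unsigned int *) value);

  ml_option_destroy (option);
}
```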
const char* ml_strerror ( int  error_code)

Returns a human-readable string describing an error code.

This returns a human-readable, null-terminated string describing the error code of machine learning API. The returned string should *not* be freed or overwritten by the caller.

Since :
7.0
Parameters:
[in] error_code The error code of machine learning API.
Returns:
NULL for invalid error code. Otherwise the error description.
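A typical use of ml_strerror() is logging a failed return value. This is a minimal sketch; the helper name report_ml_status is illustrative only, and the returned string is owned by the library and must not be freed.

```c
/* Minimal error-reporting helper built on ml_strerror(). */
#include <nnstreamer.h>
#include <stdio.h>

static void
report_ml_status (const char *what, int status)
{
  if (status != ML_ERROR_NONE)
    fprintf (stderr, "%s failed: %s\n", what, ml_strerror (status));
}
```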

int ml_tensors_data_create ( const ml_tensors_info_h  info,
ml_tensors_data_h *  data 
)

Creates a tensor data frame with the given tensors information.

Since :
5.5
Remarks:
Before 6.0, this function returned ML_ERROR_STREAMS_PIPE in case of an internal error. Since 6.0, ML_ERROR_OUT_OF_MEMORY is returned in such cases, so ML_ERROR_STREAMS_PIPE is not returned by this function anymore.
Parameters:
[in] info The handle of tensors information for the allocation.
[out] data The handle of tensors data. The caller is responsible for freeing the allocated data with ml_tensors_data_destroy().
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.

int ml_tensors_data_destroy ( ml_tensors_data_h  data )

Frees the given tensors' data handle.

Note that, when using the Single API, the opened handle should be closed before calling this function. Otherwise, the inference engine might try to access data that has already been freed, which causes a segmentation fault.

Since :
5.5
Parameters:
[in] data The handle of tensors data.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_data_get_tensor_data ( ml_tensors_data_h  data,
unsigned int  index,
void **  raw_data,
size_t *  data_size 
)

Gets a tensor data of given handle.

This returns a pointer to the memory block inside the handle; the returned pointer (raw_data) directly points to the internal data of data. Do not deallocate the returned tensor data. If you modify the returned memory block (raw_data), the contents of data are updated.

Since :
5.5
Parameters:
[in] data The handle of tensors data.
[in] index The index of the tensor.
[out] raw_data Raw tensor data in the handle.
[out] data_size Byte size of tensor data.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_data_set_tensor_data ( ml_tensors_data_h  data,
unsigned int  index,
const void *  raw_data,
const size_t  data_size 
)

Copies a tensor data to given handle.

Since :
5.5
Parameters:
[in] data The handle of tensors data.
[in] index The index of the tensor.
[in] raw_data Raw tensor data to be copied.
[in] data_size Byte size of raw data.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
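Putting the data functions together, the sketch below allocates a data frame from a tensors information handle, copies a frame in, and borrows the internal buffer. The single 3x224x224 uint8 tensor is an illustrative assumption; error handling is reduced for brevity.

```c
/* Sketch combining the tensors data functions: allocate, set, get,
 * destroy. The 3x224x224 uint8 layout is an illustrative assumption. */
#include <nnstreamer.h>
#include <stdint.h>

static int
data_frame_example (void)
{
  static uint8_t frame[3 * 224 * 224]; /* zero-initialized input frame */
  ml_tensors_info_h info = NULL;
  ml_tensors_data_h data = NULL;
  ml_tensor_dimension dim = { 3, 224, 224, 1 };
  void *raw = NULL;
  size_t size = 0;
  int status;

  ml_tensors_info_create (&info);
  ml_tensors_info_set_count (info, 1U);
  ml_tensors_info_set_tensor_type (info, 0U, ML_TENSOR_TYPE_UINT8);
  ml_tensors_info_set_tensor_dimension (info, 0U, dim);

  status = ml_tensors_data_create (info, &data);
  if (status == ML_ERROR_NONE) {
    ml_tensors_data_set_tensor_data (data, 0U, frame, sizeof (frame));
    /* raw points into data; modifying it updates the frame in place. */
    ml_tensors_data_get_tensor_data (data, 0U, &raw, &size);
    ml_tensors_data_destroy (data);
  }
  ml_tensors_info_destroy (info);
  return status;
}
```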

int ml_tensors_info_clone ( ml_tensors_info_h  dest,
const ml_tensors_info_h  src 
)

Copies the tensors information.

Since :
5.5
Parameters:
[out] dest A destination handle of tensors information.
[in] src The tensors information to be copied.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid. Note that src should be a valid tensors info handle and dest should be a created (allocated) tensors info handle.

int ml_tensors_info_create ( ml_tensors_info_h *  info )

Creates a tensors information handle with default value.

Since :
5.5
Remarks:
The info should be released using ml_tensors_info_destroy().
Parameters:
[out] info The handle of tensors information.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.

int ml_tensors_info_create_extended ( ml_tensors_info_h *  info )

Creates an extended tensors information handle with default value.

An extended tensors information handle supports a higher rank limit.

Since :
8.0
Remarks:
The info should be released using ml_tensors_info_destroy().
Parameters:
[out] info The handle of tensors information.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.

int ml_tensors_info_destroy ( ml_tensors_info_h  info )

Frees the given handle of a tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_get_count ( ml_tensors_info_h  info,
unsigned int *  count 
)

Gets the number of tensors with given handle of tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[out] count The number of tensors.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_get_tensor_dimension ( ml_tensors_info_h  info,
unsigned int  index,
ml_tensor_dimension  dimension 
)

Gets the tensor dimension with given handle of tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[in] index The index of the tensor.
[out] dimension The tensor dimension.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_get_tensor_name ( ml_tensors_info_h  info,
unsigned int  index,
char **  name 
)

Gets the tensor name with given handle of tensors information.

Since :
5.5
Remarks:
Before 6.0, this function returned an internal pointer, so application developers did not need to free it. Since 6.0, the name string is internally copied and returned, so if the function succeeds, name should be released using g_free().
Parameters:
[in] info The handle of tensors information.
[in] index The index of the tensor.
[out] name The tensor name.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_get_tensor_size ( ml_tensors_info_h  info,
int  index,
size_t *  data_size 
)

Gets the size of tensors data in the given tensors information handle in bytes.

If an application needs to get the total byte size of all tensors, set index to -1. Note that the maximum number of tensors is 16 (ML_TENSOR_SIZE_LIMIT).

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[in] index The index of the tensor.
[out] data_size The byte size of tensor data.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_get_tensor_type ( ml_tensors_info_h  info,
unsigned int  index,
ml_tensor_type_e *  type 
)

Gets the tensor type with given handle of tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[in] index The index of the tensor.
[out] type The tensor type.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_set_count ( ml_tensors_info_h  info,
unsigned int  count 
)

Sets the number of tensors with given handle of tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[in] count The number of tensors.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_set_tensor_dimension ( ml_tensors_info_h  info,
unsigned int  index,
const ml_tensor_dimension  dimension 
)

Sets the tensor dimension with given handle of tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[in] index The index of the tensor to be updated.
[in] dimension The tensor dimension to be set.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_set_tensor_name ( ml_tensors_info_h  info,
unsigned int  index,
const char *  name 
)

Sets the tensor name with given handle of tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[in] index The index of the tensor to be updated.
[in] name The tensor name to be set.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_set_tensor_type ( ml_tensors_info_h  info,
unsigned int  index,
const ml_tensor_type_e  type 
)

Sets the tensor type with given handle of tensors information.

Since :
5.5
Parameters:
[in] info The handle of tensors information.
[in] index The index of the tensor to be updated.
[in] type The tensor type to be set.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported. E.g., in a machine without fp16 support, trying FLOAT16 is not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
int ml_tensors_info_validate ( const ml_tensors_info_h  info,
bool *  valid 
)

Validates the given tensors information.

If the function returns an error, valid may not be changed.

Since :
5.5
Parameters:
[in] info The handle of tensors information to be validated.
[out] valid true if it is valid; false if it is invalid.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
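The info functions above combine naturally with validation. The sketch below describes a single FLOAT32 tensor and validates the description; the tensor name "input" and the dimensions are illustrative assumptions, and error handling is reduced for brevity.

```c
/* Sketch: describe a single FLOAT32 tensor and validate the
 * description. The name "input" and the dimensions are illustrative. */
#include <nnstreamer.h>
#include <stdbool.h>

static bool
describe_and_validate (void)
{
  ml_tensors_info_h info = NULL;
  ml_tensor_dimension dim = { 3, 224, 224, 1 };
  bool valid = false;

  ml_tensors_info_create (&info);
  ml_tensors_info_set_count (info, 1U);
  ml_tensors_info_set_tensor_name (info, 0U, "input");
  ml_tensors_info_set_tensor_type (info, 0U, ML_TENSOR_TYPE_FLOAT32);
  ml_tensors_info_set_tensor_dimension (info, 0U, dim);

  if (ml_tensors_info_validate (info, &valid) != ML_ERROR_NONE)
    valid = false;

  ml_tensors_info_destroy (info);
  return valid;
}
```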