Tizen Native API  5.5
Media Vision Inference

Image classification, object detection, face detection, and facial landmark detection.

Required Header

#include <mv_inference.h>

Related Features

This API is related to the following features:

It is recommended to use features in your application for reliability.
You can check if the device supports the related features for this API by using System Information, and control your application's actions accordingly.
To ensure your application is only running on devices with specific features, please define the features in your manifest file using the manifest editor in the SDK.
More details on using features in your application can be found in Feature Element.

Overview

Media Vision Inference provides the mv_inference_h handle to perform image classification, object detection, and face and facial landmark detection. An inference handle is created with mv_inference_create() and destroyed with mv_inference_destroy(). The handle is configured by calling mv_inference_configure() and then prepared by calling mv_inference_prepare(), which loads models and sets the required parameters. After preparation, call mv_inference_image_classify() to classify images on an mv_source_h; the callback mv_inference_image_classified_cb() is invoked to process the results. The module also provides mv_inference_object_detect() to detect objects on an mv_source_h, with mv_inference_object_detected_cb() to process the detection results, and mv_inference_face_detect() and mv_inference_facial_landmark_detect() to detect faces and their landmarks on an mv_source_h, with the callbacks mv_inference_face_detected_cb() and mv_inference_facial_landmark_detected_cb() to process the detection results.
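The workflow above can be sketched in a short classification example. This is a minimal sketch with abbreviated error handling; the model and label file paths are hypothetical placeholders, and the engine configuration helpers (mv_create_engine_config() and friends) come from the Media Vision common module:

```c
#include <mv_common.h>
#include <mv_inference.h>
#include <stdio.h>

/* Invoked once per mv_inference_image_classify() call with the results. */
static void
on_classified(mv_source_h source, int number_of_classes, const int *indices,
              const char **names, const float *confidences, void *user_data)
{
    for (int i = 0; i < number_of_classes; i++)
        printf("class %d: %s (%.2f)\n", indices[i], names[i], confidences[i]);
}

static int
classify(mv_source_h source)
{
    mv_inference_h infer = NULL;
    mv_engine_config_h cfg = NULL;

    int ret = mv_inference_create(&infer);
    if (ret != MEDIA_VISION_ERROR_NONE)
        return ret;

    /* Paths below are placeholders; point them at a real model. */
    mv_create_engine_config(&cfg);
    mv_engine_config_set_string_attribute(cfg,
        MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, "/path/to/model.tflite");
    mv_engine_config_set_string_attribute(cfg,
        MV_INFERENCE_MODEL_USER_FILE_PATH, "/path/to/labels.txt");
    mv_engine_config_set_int_attribute(cfg,
        MV_INFERENCE_BACKEND_TYPE, MV_INFERENCE_BACKEND_TFLITE);

    ret = mv_inference_configure(infer, cfg);
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_prepare(infer);   /* loads the model */
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_image_classify(source, infer, NULL,
                                          on_classified, NULL);

    mv_destroy_engine_config(cfg);
    mv_inference_destroy(infer);
    return ret;
}
```

mv_inference_image_classify() is synchronous, so on_classified() has already run by the time the call returns.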

Functions

int mv_inference_create (mv_inference_h *infer)
 Creates inference handle.
int mv_inference_destroy (mv_inference_h infer)
 Destroys inference handle and releases all its resources.
int mv_inference_configure (mv_inference_h infer, mv_engine_config_h engine_config)
 Configures the network of the inference.
int mv_inference_prepare (mv_inference_h infer)
 Prepares inference.
int mv_inference_foreach_supported_engine (mv_inference_h infer, mv_inference_supported_engine_cb callback, void *user_data)
 Traverses the list of supported engines for inference.
int mv_inference_image_classify (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_image_classified_cb classified_cb, void *user_data)
 Performs image classification on the source.
int mv_inference_object_detect (mv_source_h source, mv_inference_h infer, mv_inference_object_detected_cb detected_cb, void *user_data)
 Performs object detection on the source.
int mv_inference_face_detect (mv_source_h source, mv_inference_h infer, mv_inference_face_detected_cb detected_cb, void *user_data)
 Performs face detection on the source.
int mv_inference_facial_landmark_detect (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_facial_landmark_detected_cb detected_cb, void *user_data)
 Performs facial landmark detection on the source.

Typedefs

typedef bool(* mv_inference_supported_engine_cb )(const char *engine, bool supported, void *user_data)
 Called to provide information for supported engines for inference.
typedef void(* mv_inference_image_classified_cb )(mv_source_h source, int number_of_classes, const int *indices, const char **names, const float *confidences, void *user_data)
 Called when source is classified.
typedef void(* mv_inference_object_detected_cb )(mv_source_h source, int number_of_objects, const int *indices, const char **names, const float *confidences, const mv_rectangle_s *locations, void *user_data)
 Called when objects in source are detected.
typedef void(* mv_inference_face_detected_cb )(mv_source_h source, int number_of_faces, const float *confidences, const mv_rectangle_s *locations, void *user_data)
 Called when faces in source are detected.
typedef void(* mv_inference_facial_landmark_detected_cb )(mv_source_h source, int number_of_landmarks, const mv_point_s *locations, void *user_data)
 Called when facial landmarks in source are detected.
typedef void * mv_inference_h
 The inference handle.

Defines

#define MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH   "MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH"
 Defines MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH to set inference model's configuration file attribute of the engine configuration.
#define MV_INFERENCE_MODEL_WEIGHT_FILE_PATH   "MV_INFERENCE_MODEL_WEIGHT_FILE_PATH"
 Defines MV_INFERENCE_MODEL_WEIGHT_FILE_PATH to set inference model's weight file attribute of the engine configuration.
#define MV_INFERENCE_MODEL_USER_FILE_PATH   "MV_INFERENCE_MODEL_USER_FILE_PATH"
 Defines MV_INFERENCE_MODEL_USER_FILE_PATH to set inference model's category file attribute of the engine configuration.
#define MV_INFERENCE_MODEL_MEAN_VALUE   "MV_INFERENCE_MODEL_MEAN_VALUE"
 Defines MV_INFERENCE_MODEL_MEAN_VALUE to set inference model's mean attribute of the engine configuration.
#define MV_INFERENCE_MODEL_STD_VALUE   "MV_INFERENCE_MODEL_STD_VALUE"
 Defines MV_INFERENCE_MODEL_STD_VALUE to set an input image's standard deviation attribute of the engine configuration.
#define MV_INFERENCE_BACKEND_TYPE   "MV_INFERENCE_BACKEND_TYPE"
 Defines MV_INFERENCE_BACKEND_TYPE to set the inference backend type attribute of the engine configuration.
#define MV_INFERENCE_TARGET_TYPE   "MV_INFERENCE_TARGET_TYPE"
 Defines MV_INFERENCE_TARGET_TYPE to set the inference target device attribute of the engine configuration.
#define MV_INFERENCE_INPUT_TENSOR_WIDTH   "MV_INFERENCE_INPUT_TENSOR_WIDTH"
 Defines MV_INFERENCE_INPUT_TENSOR_WIDTH to set the width of input tensor.
#define MV_INFERENCE_INPUT_TENSOR_HEIGHT   "MV_INFERENCE_INPUT_TENSOR_HEIGHT"
 Defines MV_INFERENCE_INPUT_TENSOR_HEIGHT to set the height of input tensor.
#define MV_INFERENCE_INPUT_TENSOR_CHANNELS   "MV_INFERENCE_INPUT_TENSOR_CHANNELS"
 Defines MV_INFERENCE_INPUT_TENSOR_CHANNELS to set the channels, for example 3 in case of RGB colorspace, of input tensor.
#define MV_INFERENCE_INPUT_NODE_NAME   "MV_INFERENCE_INPUT_NODE_NAME"
 Defines MV_INFERENCE_INPUT_NODE_NAME to set the input node name.
#define MV_INFERENCE_OUTPUT_NODE_NAMES   "MV_INFERENCE_OUTPUT_NODE_NAMES"
 Defines MV_INFERENCE_OUTPUT_NODE_NAMES to set the output node names.
#define MV_INFERENCE_OUTPUT_MAX_NUMBER   "MV_INFERENCE_OUTPUT_MAX_NUMBER"
 Defines MV_INFERENCE_OUTPUT_MAX_NUMBER to set the maximum number of output attributes of the engine configuration.
#define MV_INFERENCE_CONFIDENCE_THRESHOLD   "MV_INFERENCE_CONFIDENCE_THRESHOLD"
 Defines MV_INFERENCE_CONFIDENCE_THRESHOLD to set the threshold value for the confidence of inference results.

Define Documentation

#define MV_INFERENCE_BACKEND_TYPE   "MV_INFERENCE_BACKEND_TYPE"

Defines MV_INFERENCE_BACKEND_TYPE to set the inference backend type attribute of the engine configuration.

Switches the backend used for neural network model inference. Possible values of the attribute are:
MV_INFERENCE_BACKEND_OPENCV,
MV_INFERENCE_BACKEND_TFLITE.
The default type is MV_INFERENCE_BACKEND_OPENCV.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
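For example, the backend can be switched to TensorFlow-Lite before configuring the inference. This is a sketch; cfg is an engine configuration created with mv_create_engine_config() from the Media Vision common module:

```c
mv_engine_config_h cfg = NULL;
mv_create_engine_config(&cfg);
/* Select the TFLite backend instead of the OpenCV default. */
mv_engine_config_set_int_attribute(cfg, MV_INFERENCE_BACKEND_TYPE,
                                   MV_INFERENCE_BACKEND_TFLITE);
/* ... pass cfg to mv_inference_configure() ... */
```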
#define MV_INFERENCE_CONFIDENCE_THRESHOLD   "MV_INFERENCE_CONFIDENCE_THRESHOLD"

Defines MV_INFERENCE_CONFIDENCE_THRESHOLD to set the threshold value for the confidence of inference results.

The default value is 0.6, and the valid range is 0.0 to 1.0.

Since :
5.5
See also:
mv_engine_config_set_double_attribute()
mv_engine_config_get_double_attribute()
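To filter out more low-confidence results, the threshold can be raised above the 0.6 default. A sketch, assuming cfg is an existing engine configuration handle:

```c
/* Report only results with a confidence of at least 0.8. */
mv_engine_config_set_double_attribute(cfg, MV_INFERENCE_CONFIDENCE_THRESHOLD,
                                      0.8);
```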
#define MV_INFERENCE_INPUT_NODE_NAME   "MV_INFERENCE_INPUT_NODE_NAME"
#define MV_INFERENCE_INPUT_TENSOR_CHANNELS   "MV_INFERENCE_INPUT_TENSOR_CHANNELS"

Defines MV_INFERENCE_INPUT_TENSOR_CHANNELS to set the channels, for example 3 in case of RGB colorspace, of input tensor.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
#define MV_INFERENCE_INPUT_TENSOR_HEIGHT   "MV_INFERENCE_INPUT_TENSOR_HEIGHT"
#define MV_INFERENCE_INPUT_TENSOR_WIDTH   "MV_INFERENCE_INPUT_TENSOR_WIDTH"
#define MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH   "MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH"

Defines MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH to set inference model's configuration file attribute of the engine configuration.

Set this attribute to the path of the inference model's configuration file.

Since :
5.5
See also:
mv_engine_config_set_string_attribute()
mv_engine_config_get_string_attribute()
#define MV_INFERENCE_MODEL_MEAN_VALUE   "MV_INFERENCE_MODEL_MEAN_VALUE"

Defines MV_INFERENCE_MODEL_MEAN_VALUE to set inference model's mean attribute of the engine configuration.

Since :
5.5
See also:
mv_engine_config_set_double_attribute()
mv_engine_config_get_double_attribute()
#define MV_INFERENCE_MODEL_STD_VALUE   "MV_INFERENCE_MODEL_STD_VALUE"

Defines MV_INFERENCE_MODEL_STD_VALUE to set an input image's standard deviation attribute of the engine configuration.

Since :
5.5
See also:
mv_engine_config_set_double_attribute()
mv_engine_config_get_double_attribute()
#define MV_INFERENCE_MODEL_USER_FILE_PATH   "MV_INFERENCE_MODEL_USER_FILE_PATH"

Defines MV_INFERENCE_MODEL_USER_FILE_PATH to set inference model's category file attribute of the engine configuration.

Set this attribute to the path of the inference model's category file.

Since :
5.5
See also:
mv_engine_config_set_string_attribute()
mv_engine_config_get_string_attribute()
#define MV_INFERENCE_MODEL_WEIGHT_FILE_PATH   "MV_INFERENCE_MODEL_WEIGHT_FILE_PATH"

Defines MV_INFERENCE_MODEL_WEIGHT_FILE_PATH to set inference model's weight file attribute of the engine configuration.

Set this attribute to the path of the inference model's weight file.

Since :
5.5
See also:
mv_engine_config_set_string_attribute()
mv_engine_config_get_string_attribute()
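The three file path attributes are typically set together on one engine configuration. A sketch, assuming cfg is an existing engine configuration handle; the paths are hypothetical placeholders:

```c
/* All three path attributes are string attributes; paths are placeholders. */
mv_engine_config_set_string_attribute(cfg,
    MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH, "/path/to/model.prototxt");
mv_engine_config_set_string_attribute(cfg,
    MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, "/path/to/model.caffemodel");
mv_engine_config_set_string_attribute(cfg,
    MV_INFERENCE_MODEL_USER_FILE_PATH, "/path/to/labels.txt");
```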
#define MV_INFERENCE_OUTPUT_MAX_NUMBER   "MV_INFERENCE_OUTPUT_MAX_NUMBER"

Defines MV_INFERENCE_OUTPUT_MAX_NUMBER to set the maximum number of output attributes of the engine configuration.

The default value is 5. Values above 10 are clamped to 10, and values below 1 are clamped to 1.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
#define MV_INFERENCE_OUTPUT_NODE_NAMES   "MV_INFERENCE_OUTPUT_NODE_NAMES"
#define MV_INFERENCE_TARGET_TYPE   "MV_INFERENCE_TARGET_TYPE"

Defines MV_INFERENCE_TARGET_TYPE to set the inference target device attribute of the engine configuration.

Switches the inference target among CPU, GPU, and custom devices:
MV_INFERENCE_TARGET_CPU,
MV_INFERENCE_TARGET_GPU,
MV_INFERENCE_TARGET_CUSTOM.
The default type is MV_INFERENCE_TARGET_CPU.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()

Typedef Documentation

typedef void(* mv_inference_face_detected_cb)(mv_source_h source, int number_of_faces, const float *confidences, const mv_rectangle_s *locations, void *user_data)

Called when faces in source are detected.

This callback is invoked each time mv_inference_face_detect() is called to provide the results of face detection.

Since :
5.5
Remarks:
The confidences and locations should not be released by the app. They can be used only in the callback. The number of elements in confidences and locations is equal to number_of_faces.
Parameters:
[in]sourceThe handle to the source of the media where faces were detected. source is the same object for which mv_inference_face_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in]number_of_facesThe number of faces
[in]confidencesConfidences of the detected faces.
[in]locationsLocations of the detected faces.
[in]user_dataThe user data passed from callback invoking code
Precondition:
Call mv_inference_face_detect() function to perform detection of the faces in source and to invoke this callback as a result
See also:
mv_inference_face_detect()
typedef void(* mv_inference_facial_landmark_detected_cb)(mv_source_h source, int number_of_landmarks, const mv_point_s *locations, void *user_data)

Called when facial landmarks in source are detected.

This callback is invoked each time mv_inference_facial_landmark_detect() is called to provide the results of the landmark detection.

Since :
5.5
Remarks:
The locations should not be released by the app. They can be used only in the callback. The number of elements in locations is equal to number_of_landmarks.
Parameters:
[in]sourceThe handle to the source of the media where landmarks were detected. source is the same object for which mv_inference_facial_landmark_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in]number_of_landmarksThe number of landmarks
[in]locationsLocations of the detected facial landmarks.
[in]user_dataThe user data passed from callback invoking code
Precondition:
Call mv_inference_facial_landmark_detect() function to perform detection of the facial landmarks in source and to invoke this callback as a result
See also:
mv_inference_facial_landmark_detect()
typedef void* mv_inference_h

The inference handle.

Since :
5.5
typedef void(* mv_inference_image_classified_cb)(mv_source_h source, int number_of_classes, const int *indices, const char **names, const float *confidences, void *user_data)

Called when source is classified.

This callback is invoked each time mv_inference_image_classify() is called to provide the results of image classification.

Since :
5.5
Remarks:
The indices, names, and confidences should not be released by the app. They can be used only in the callback. The number of elements in indices, names, and confidences is equal to number_of_classes.
Parameters:
[in]sourceThe handle to the source of the media where an image was classified. source is the same object for which mv_inference_image_classify() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in]number_of_classesThe number of classes
[in]indicesThe indices of the classified image.
[in]namesNames corresponding to the indices.
[in]confidencesEach element is the confidence that the corresponding image belongs to the corresponding class.
[in]user_dataThe user data passed from callback invoking code
Precondition:
Call mv_inference_image_classify() function to perform classification of the image and to invoke this callback as a result
See also:
mv_inference_image_classify()
typedef void(* mv_inference_object_detected_cb)(mv_source_h source, int number_of_objects, const int *indices, const char **names, const float *confidences, const mv_rectangle_s *locations, void *user_data)

Called when objects in source are detected.

This callback is invoked each time mv_inference_object_detect() is called to provide the results of object detection.

Since :
5.5
Remarks:
The indices, names, confidences, and locations should not be released by the app. They can be used only in the callback. The number of elements in indices, names, confidences, and locations is equal to number_of_objects.
Parameters:
[in]sourceThe handle to the source of the media where objects were detected. source is the same object for which mv_inference_object_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in]number_of_objectsThe number of objects
[in]indicesThe indices of objects.
[in]namesNames corresponding to the indices.
[in]confidencesConfidences of the detected objects.
[in]locationsLocations of the detected objects.
[in]user_dataThe user data passed from callback invoking code
Precondition:
Call mv_inference_object_detect() function to perform detection of the objects in source and to invoke this callback as a result
See also:
mv_inference_object_detect()
typedef bool(* mv_inference_supported_engine_cb)(const char *engine, bool supported, void *user_data)

Called to provide information for supported engines for inference.

Since :
5.5
Parameters:
[in]engineThe supported engine. The engine can be used only in the callback. To use it outside the callback, make a copy.
[in]supportedThe flag whether the engine is supported or not
[in]user_dataThe user data passed from mv_inference_foreach_supported_engine()
Returns:
true to continue with the next iteration of the loop, otherwise false to break out of the loop
Precondition:
mv_inference_foreach_supported_engine()
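A minimal iteration callback might look like this (a sketch; infer is assumed to be a handle created with mv_inference_create()):

```c
#include <stdbool.h>
#include <stdio.h>
#include <mv_inference.h>

/* Logs every engine; returning true continues to the next engine. */
static bool
on_supported_engine(const char *engine, bool supported, void *user_data)
{
    printf("engine %s: %s\n", engine, supported ? "supported" : "unsupported");
    return true;
}

static void
list_engines(mv_inference_h infer)
{
    mv_inference_foreach_supported_engine(infer, on_supported_engine, NULL);
}
```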

Enumeration Type Documentation

Enumeration for inference backend.

Since :
5.5
See also:
mv_inference_prepare()
Enumerator:
MV_INFERENCE_BACKEND_NONE 

None

MV_INFERENCE_BACKEND_OPENCV 

OpenCV

MV_INFERENCE_BACKEND_TFLITE 

TensorFlow-Lite

MV_INFERENCE_BACKEND_MAX 

Backend MAX

Enumeration for inference target.

Since :
5.5
Enumerator:
MV_INFERENCE_TARGET_NONE 

None

MV_INFERENCE_TARGET_CPU 

CPU

MV_INFERENCE_TARGET_GPU 

GPU

MV_INFERENCE_TARGET_CUSTOM 

CUSTOM

MV_INFERENCE_TARGET_MAX 

Target MAX


Function Documentation

int mv_inference_configure ( mv_inference_h  infer,
mv_engine_config_h  engine_config 
)

Configures the network of the inference.

Use this function to configure the network of the inference which is set to engine_config.

Since :
5.5
Parameters:
[in]inferThe handle to the inference
[in]engine_configThe handle to the configuration of engine.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter in engine_config
MEDIA_VISION_ERROR_INVALID_PATHInvalid path of model data in engine_config

int mv_inference_create ( mv_inference_h *  infer )

Creates inference handle.

Use this function to create an inference handle. After creation, the handle has to be prepared with mv_inference_prepare() to load a network for the inference.

Since :
5.5
Remarks:
If the app sets MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH, MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, and MV_INFERENCE_MODEL_USER_FILE_PATH to media storage, then the media storage privilege http://tizen.org/privilege/mediastorage is needed.
If the app sets any of the paths mentioned in the previous sentence to external storage, then the external storage privilege http://tizen.org/privilege/externalstorage is needed.
If the required privileges aren't set properly, mv_inference_prepare() will return MEDIA_VISION_ERROR_PERMISSION_DENIED.
The infer should be released using mv_inference_destroy().
Parameters:
[out]inferThe handle to the inference to be created
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_OUT_OF_MEMORYOut of memory
See also:
mv_inference_destroy()
mv_inference_prepare()

int mv_inference_destroy ( mv_inference_h  infer )

Destroys inference handle and releases all its resources.

Since :
5.5
Parameters:
[in]inferThe handle to the inference to be destroyed
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
Precondition:
Create inference handle by using mv_inference_create()
See also:
mv_inference_create()
int mv_inference_face_detect ( mv_source_h  source,
mv_inference_h  infer,
mv_inference_face_detected_cb  detected_cb,
void *  user_data 
)

Performs face detection on the source.

Use this function to launch face detection. Each time mv_inference_face_detect() is called, detected_cb will receive a list of faces and their locations in the media source.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in]sourceThe handle to the source of the media
[in]inferThe handle to the inference
[in]detected_cbThe callback which will be called for detecting faces on media source. This callback will receive the detection results.
[in]user_dataThe user data passed from the code where mv_inference_face_detect() is invoked. This data will be accessible in detected_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INTERNALInternal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATSource colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
detected_cb will be called to provide detection results
See also:
mv_inference_face_detected_cb()
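A face detection call with a logging callback might look like this (a sketch; source and infer are assumed to be prepared as described in the preconditions above):

```c
#include <stdio.h>
#include <mv_inference.h>

/* Logs each detected face's confidence and bounding box. */
static void
on_face_detected(mv_source_h source, int number_of_faces,
                 const float *confidences, const mv_rectangle_s *locations,
                 void *user_data)
{
    for (int i = 0; i < number_of_faces; i++)
        printf("face %d: conf=%.2f box=(%d,%d %dx%d)\n", i, confidences[i],
               locations[i].point.x, locations[i].point.y,
               locations[i].width, locations[i].height);
}

static int
detect_faces(mv_source_h source, mv_inference_h infer)
{
    /* Synchronous; returns after on_face_detected() has been invoked. */
    return mv_inference_face_detect(source, infer, on_face_detected, NULL);
}
```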
int mv_inference_facial_landmark_detect ( mv_source_h  source,
mv_inference_h  infer,
mv_rectangle_s *  roi,
mv_inference_facial_landmark_detected_cb  detected_cb,
void *  user_data 
)

Performs facial landmark detection on the source.

Use this function to launch facial landmark detection. Each time mv_inference_facial_landmark_detect() is called, detected_cb will receive a list of facial landmark locations in the media source.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in]sourceThe handle to the source of the media
[in]inferThe handle to the inference
[in]roiRectangular area including a face in source which will be analyzed. If NULL, then the whole source will be analyzed.
[in]detected_cbThe callback which will receive the detection results.
[in]user_dataThe user data passed from the code where mv_inference_facial_landmark_detect() is invoked. This data will be accessible in detected_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INTERNALInternal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATSource colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
detected_cb will be called to provide detection results
See also:
mv_inference_facial_landmark_detected_cb()
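Landmark detection is typically run on a face box produced by a previous mv_inference_face_detect() call. A sketch, assuming source and infer are prepared; the roi values here are hypothetical:

```c
#include <stdio.h>
#include <mv_inference.h>

/* Logs each landmark point. */
static void
on_landmarks(mv_source_h source, int number_of_landmarks,
             const mv_point_s *locations, void *user_data)
{
    for (int i = 0; i < number_of_landmarks; i++)
        printf("landmark %d: (%d, %d)\n", i, locations[i].x, locations[i].y);
}

static int
detect_landmarks(mv_source_h source, mv_inference_h infer)
{
    /* A hypothetical face box; in practice use a detected face location. */
    mv_rectangle_s roi = { .point = { .x = 100, .y = 80 },
                           .width = 120, .height = 120 };
    return mv_inference_facial_landmark_detect(source, infer, &roi,
                                               on_landmarks, NULL);
}
```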

int mv_inference_foreach_supported_engine ( mv_inference_h  infer,
mv_inference_supported_engine_cb  callback,
void *  user_data 
)

Traverses the list of supported engines for inference.

Use this function to obtain the list of supported engines. The engine names can be used with mv_engine_config_h related getters and setters to get or set the MV_INFERENCE_BACKEND_TYPE attribute value.

Since :
5.5
Parameters:
[in]inferThe handle to the inference
[in]callbackThe iteration callback function
[in]user_dataThe user data to be passed to the callback function
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
See also:
mv_inference_supported_engine_cb()
int mv_inference_image_classify ( mv_source_h  source,
mv_inference_h  infer,
mv_rectangle_s *  roi,
mv_inference_image_classified_cb  classified_cb,
void *  user_data 
)

Performs image classification on the source.

Use this function to launch image classification. Each time mv_inference_image_classify() is called, classified_cb will receive the classes to which the media source may belong.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in]sourceThe handle to the source of the media
[in]inferThe handle to the inference
[in]roiRectangular area in the source which will be analyzed. If NULL, then the whole source will be analyzed.
[in]classified_cbThe callback which will be called for classification on source. This callback will receive classification results.
[in]user_dataThe user data passed from the code where mv_inference_image_classify() is invoked. This data will be accessible in classified_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INVALID_OPERATIONInvalid operation
MEDIA_VISION_ERROR_INTERNALInternal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATSource colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
classified_cb will be called to provide classification results
See also:
mv_inference_image_classified_cb()
int mv_inference_object_detect ( mv_source_h  source,
mv_inference_h  infer,
mv_inference_object_detected_cb  detected_cb,
void *  user_data 
)

Performs object detection on the source.

Use this function to launch object detection. Each time mv_inference_object_detect() is called, detected_cb will receive a list of objects and their locations in the media source.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in]sourceThe handle to the source of the media
[in]inferThe handle to the inference
[in]detected_cbThe callback which will be called for detecting objects in the media source. This callback will receive the detection results.
[in]user_dataThe user data passed from the code where mv_inference_object_detect() is invoked. This data will be accessible in detected_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INTERNALInternal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATSource colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
detected_cb will be called to provide detection results
See also:
mv_inference_object_detected_cb()

int mv_inference_prepare ( mv_inference_h  infer )

Prepares inference.

Use this function to prepare inference based on the configured network.

Since :
5.5
Parameters:
[in]inferThe handle to the inference
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_PERMISSION_DENIEDPermission denied
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INVALID_DATAInvalid model data
MEDIA_VISION_ERROR_OUT_OF_MEMORYOut of memory
MEDIA_VISION_ERROR_INVALID_OPERATIONInvalid operation
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATNot supported format