Tizen Native API  6.0
Media Vision Inference

Image Classification, Object Detection, Face Detection, and Facial Landmark Detection.

Required Header

#include <mv_inference.h>

Related Features

This API is related to the following features:

  • http://tizen.org/feature/vision.inference
  • http://tizen.org/feature/vision.inference.image
  • http://tizen.org/feature/vision.inference.face

It is recommended to use features in your application for reliability.
You can check if the device supports the related features for this API by using System Information, and control your application's actions accordingly.
To ensure your application is only running on devices with specific features, please define the features in your manifest file using the manifest editor in the SDK.
More details on using features in your application can be found in Feature Element.

Overview

Media Vision Inference provides the mv_inference_h handle to perform image classification, object detection, face detection, and facial landmark detection. An inference handle is created with mv_inference_create() and destroyed with mv_inference_destroy(). The handle is configured by calling mv_inference_configure(), and then prepared by calling mv_inference_prepare(), which loads the model and sets the required parameters. After preparation, call mv_inference_image_classify() to classify images on an mv_source_h; the callback mv_inference_image_classified_cb() is invoked to process the results. The module also provides mv_inference_object_detect() to detect objects on an mv_source_h, with mv_inference_object_detected_cb() to process the detection results, as well as mv_inference_face_detect() and mv_inference_facial_landmark_detect() to detect faces and their landmarks on an mv_source_h, with the callbacks mv_inference_face_detected_cb() and mv_inference_facial_landmark_detected_cb() to process those results.
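The sequence above can be sketched as follows. This is a minimal sketch, not a complete application: the model path and attribute values are placeholders, and error handling is abbreviated.

```c
#include <stdio.h>
#include <mv_common.h>
#include <mv_inference.h>

/* Invoked with the classification results for each mv_inference_image_classify() call. */
static void on_classified(mv_source_h source, int number_of_classes,
                          const int *indices, const char **names,
                          const float *confidences, void *user_data)
{
    for (int i = 0; i < number_of_classes; i++)
        printf("class %d: %s (%.2f)\n", indices[i], names[i], confidences[i]);
}

int run_classification(mv_source_h source)
{
    mv_inference_h infer = NULL;
    mv_engine_config_h cfg = NULL;

    int ret = mv_inference_create(&infer);
    if (ret != MEDIA_VISION_ERROR_NONE)
        return ret;

    /* Placeholder model path and backend choice. */
    mv_create_engine_config(&cfg);
    mv_engine_config_set_string_attribute(cfg,
        MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, "/path/to/model.tflite");
    mv_engine_config_set_int_attribute(cfg,
        MV_INFERENCE_BACKEND_TYPE, MV_INFERENCE_BACKEND_TFLITE);

    ret = mv_inference_configure(infer, cfg);
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_prepare(infer);          /* loads the model */
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_image_classify(source, infer, NULL,
                                          on_classified, NULL);

    mv_destroy_engine_config(cfg);
    mv_inference_destroy(infer);
    return ret;
}
```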

Functions

int mv_inference_create (mv_inference_h *infer)
 Creates inference handle.
int mv_inference_destroy (mv_inference_h infer)
 Destroys inference handle and releases all its resources.
int mv_inference_configure (mv_inference_h infer, mv_engine_config_h engine_config)
 Configures the network of the inference.
int mv_inference_prepare (mv_inference_h infer)
 Prepares inference.
int mv_inference_foreach_supported_engine (mv_inference_h infer, mv_inference_supported_engine_cb callback, void *user_data)
 Traverses the list of supported engines for inference.
int mv_inference_image_classify (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_image_classified_cb classified_cb, void *user_data)
 Performs image classification on the source.
int mv_inference_object_detect (mv_source_h source, mv_inference_h infer, mv_inference_object_detected_cb detected_cb, void *user_data)
 Performs object detection on the source.
int mv_inference_face_detect (mv_source_h source, mv_inference_h infer, mv_inference_face_detected_cb detected_cb, void *user_data)
 Performs face detection on the source.
int mv_inference_facial_landmark_detect (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_facial_landmark_detected_cb detected_cb, void *user_data)
 Performs facial landmarks detection on the source.
int mv_inference_pose_landmark_detect (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_pose_landmark_detected_cb detected_cb, void *user_data)
 Performs pose landmarks detection on the source.
int mv_inference_pose_get_number_of_poses (mv_inference_pose_result_h result, int *number_of_poses)
 Gets the number of poses.
int mv_inference_pose_get_number_of_landmarks (mv_inference_pose_result_h result, int *number_of_landmarks)
 Gets the number of landmarks per pose.
int mv_inference_pose_get_landmark (mv_inference_pose_result_h result, int pose_index, int pose_part, mv_point_s *location, float *score)
 Gets landmark location of a part of a pose.
int mv_inference_pose_get_label (mv_inference_pose_result_h result, int pose_index, int *label)
 Gets a label of a pose.
int mv_pose_create (mv_pose_h *pose)
 Creates pose handle.
int mv_pose_destroy (mv_pose_h pose)
 Destroys pose handle and releases all its resources.
int mv_pose_set_from_file (mv_pose_h pose, const char *motion_capture_file_path, const char *motion_mapping_file_path)
 Sets a motion capture file and its pose mapping file to the pose.
int mv_pose_compare (mv_pose_h pose, mv_inference_pose_result_h action, int parts, float *score)
 Compares an action pose with the pose which is set by mv_pose_set_from_file().

Typedefs

typedef bool(* mv_inference_supported_engine_cb )(const char *engine, bool supported, void *user_data)
 Called to provide information for supported engines for inference.
typedef void(* mv_inference_image_classified_cb )(mv_source_h source, int number_of_classes, const int *indices, const char **names, const float *confidences, void *user_data)
 Called when source is classified.
typedef void(* mv_inference_object_detected_cb )(mv_source_h source, int number_of_objects, const int *indices, const char **names, const float *confidences, const mv_rectangle_s *locations, void *user_data)
 Called when objects in source are detected.
typedef void(* mv_inference_face_detected_cb )(mv_source_h source, int number_of_faces, const float *confidences, const mv_rectangle_s *locations, void *user_data)
 Called when faces in source are detected.
typedef void(* mv_inference_facial_landmark_detected_cb )(mv_source_h source, int number_of_landmarks, const mv_point_s *locations, void *user_data)
 Called when facial landmarks in source are detected.
typedef void(* mv_inference_pose_landmark_detected_cb )(mv_source_h source, mv_inference_pose_result_h locations, void *user_data)
 Called when poses in source are detected.
typedef void * mv_inference_h
 The inference handle.
typedef void * mv_inference_pose_result_h
 The inference pose result handle.
typedef void * mv_pose_h
 The pose handle.

Defines

#define MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH   "MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH"
 Defines MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH to set inference model's configuration file attribute of the engine configuration.
#define MV_INFERENCE_MODEL_WEIGHT_FILE_PATH   "MV_INFERENCE_MODEL_WEIGHT_FILE_PATH"
 Defines MV_INFERENCE_MODEL_WEIGHT_FILE_PATH to set inference model's weight file attribute of the engine configuration.
#define MV_INFERENCE_MODEL_USER_FILE_PATH   "MV_INFERENCE_MODEL_USER_FILE_PATH"
 Defines MV_INFERENCE_MODEL_USER_FILE_PATH to set inference model's category file attribute of the engine configuration.
#define MV_INFERENCE_MODEL_MEAN_VALUE   "MV_INFERENCE_MODEL_MEAN_VALUE"
 Defines MV_INFERENCE_MODEL_MEAN_VALUE to set inference model's mean attribute of the engine configuration.
#define MV_INFERENCE_MODEL_STD_VALUE   "MV_INFERENCE_MODEL_STD_VALUE"
 Defines MV_INFERENCE_MODEL_STD_VALUE to set an input image's standard deviation attribute of the engine configuration.
#define MV_INFERENCE_BACKEND_TYPE   "MV_INFERENCE_BACKEND_TYPE"
 Defines MV_INFERENCE_BACKEND_TYPE to set the inference backend type attribute of the engine configuration.
#define MV_INFERENCE_TARGET_TYPE   "MV_INFERENCE_TARGET_TYPE"
 Defines MV_INFERENCE_TARGET_TYPE to set the inference target device attribute of the engine configuration.
#define MV_INFERENCE_TARGET_DEVICE_TYPE   "MV_INFERENCE_TARGET_DEVICE_TYPE"
 Defines MV_INFERENCE_TARGET_DEVICE_TYPE to set the inference target device attribute of the engine configuration.
#define MV_INFERENCE_INPUT_TENSOR_WIDTH   "MV_INFERENCE_INPUT_TENSOR_WIDTH"
 Defines MV_INFERENCE_INPUT_TENSOR_WIDTH to set the width of input tensor.
#define MV_INFERENCE_INPUT_TENSOR_HEIGHT   "MV_INFERENCE_INPUT_TENSOR_HEIGHT"
 Defines MV_INFERENCE_INPUT_TENSOR_HEIGHT to set the height of input tensor.
#define MV_INFERENCE_INPUT_TENSOR_CHANNELS   "MV_INFERENCE_INPUT_TENSOR_CHANNELS"
 Defines MV_INFERENCE_INPUT_TENSOR_CHANNELS to set the channels, for example 3 in case of RGB colorspace, of input tensor.
#define MV_INFERENCE_INPUT_DATA_TYPE   "MV_INFERENCE_INPUT_DATA_TYPE"
 Defines MV_INFERENCE_INPUT_DATA_TYPE to set data type of input tensor.
#define MV_INFERENCE_INPUT_NODE_NAME   "MV_INFERENCE_INPUT_NODE_NAME"
 Defines MV_INFERENCE_INPUT_NODE_NAME to set the input node name.
#define MV_INFERENCE_OUTPUT_NODE_NAMES   "MV_INFERENCE_OUTPUT_NODE_NAMES"
 Defines MV_INFERENCE_OUTPUT_NODE_NAMES to set the output node names.
#define MV_INFERENCE_OUTPUT_MAX_NUMBER   "MV_INFERENCE_OUTPUT_MAX_NUMBER"
 Defines MV_INFERENCE_OUTPUT_MAX_NUMBER to set the maximum number of output attributes of the engine configuration.
#define MV_INFERENCE_CONFIDENCE_THRESHOLD   "MV_INFERENCE_CONFIDENCE_THRESHOLD"
 Defines MV_INFERENCE_CONFIDENCE_THRESHOLD to set the threshold value for the confidence of inference results.

Define Documentation

#define MV_INFERENCE_BACKEND_TYPE   "MV_INFERENCE_BACKEND_TYPE"

Defines MV_INFERENCE_BACKEND_TYPE to set the inference backend type attribute of the engine configuration.

Selects the backend used for neural network model inference. Possible values of the attribute are:
MV_INFERENCE_BACKEND_OPENCV,
MV_INFERENCE_BACKEND_TFLITE.
The default is MV_INFERENCE_BACKEND_OPENCV.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
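For example, the backend can be selected on an engine configuration handle as follows (a sketch; error checks are omitted):

```c
#include <mv_common.h>
#include <mv_inference.h>

void select_backend(void)
{
    mv_engine_config_h cfg = NULL;
    mv_create_engine_config(&cfg);

    /* Switch the inference backend from the default (OpenCV) to TFLite. */
    mv_engine_config_set_int_attribute(cfg, MV_INFERENCE_BACKEND_TYPE,
                                       MV_INFERENCE_BACKEND_TFLITE);

    mv_destroy_engine_config(cfg);
}
```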
#define MV_INFERENCE_CONFIDENCE_THRESHOLD   "MV_INFERENCE_CONFIDENCE_THRESHOLD"

Defines MV_INFERENCE_CONFIDENCE_THRESHOLD to set the threshold value for the confidence of inference results.

Default value is 0.6 and its range is between 0.0 and 1.0.

Since :
5.5
See also:
mv_engine_config_set_double_attribute()
mv_engine_config_get_double_attribute()
#define MV_INFERENCE_INPUT_DATA_TYPE   "MV_INFERENCE_INPUT_DATA_TYPE"

Defines MV_INFERENCE_INPUT_DATA_TYPE to set data type of input tensor.

The data type of the input tensor can be changed according to the given weight file. Possible values are:
MV_INFERENCE_DATA_FLOAT32,
MV_INFERENCE_DATA_UINT8.

The default is MV_INFERENCE_DATA_FLOAT32.

Since :
6.0
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
#define MV_INFERENCE_INPUT_NODE_NAME   "MV_INFERENCE_INPUT_NODE_NAME"
#define MV_INFERENCE_INPUT_TENSOR_CHANNELS   "MV_INFERENCE_INPUT_TENSOR_CHANNELS"

Defines MV_INFERENCE_INPUT_TENSOR_CHANNELS to set the channels, for example 3 in case of RGB colorspace, of input tensor.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
#define MV_INFERENCE_INPUT_TENSOR_HEIGHT   "MV_INFERENCE_INPUT_TENSOR_HEIGHT"
#define MV_INFERENCE_INPUT_TENSOR_WIDTH   "MV_INFERENCE_INPUT_TENSOR_WIDTH"
#define MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH   "MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH"

Defines MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH to set inference model's configuration file attribute of the engine configuration.

Set this attribute to the path of the inference model's configuration file.

Since :
5.5
See also:
mv_engine_config_set_string_attribute()
mv_engine_config_get_string_attribute()
#define MV_INFERENCE_MODEL_MEAN_VALUE   "MV_INFERENCE_MODEL_MEAN_VALUE"

Defines MV_INFERENCE_MODEL_MEAN_VALUE to set inference model's mean attribute of the engine configuration.

Since :
5.5
See also:
mv_engine_config_set_double_attribute()
mv_engine_config_get_double_attribute()
#define MV_INFERENCE_MODEL_STD_VALUE   "MV_INFERENCE_MODEL_STD_VALUE"

Defines MV_INFERENCE_MODEL_STD_VALUE to set an input image's standard deviation attribute of the engine configuration.

Since :
5.5
See also:
mv_engine_config_set_double_attribute()
mv_engine_config_get_double_attribute()
#define MV_INFERENCE_MODEL_USER_FILE_PATH   "MV_INFERENCE_MODEL_USER_FILE_PATH"

Defines MV_INFERENCE_MODEL_USER_FILE_PATH to set inference model's category file attribute of the engine configuration.

Set this attribute to the path of the inference model's category file.

Since :
5.5
See also:
mv_engine_config_set_string_attribute()
mv_engine_config_get_string_attribute()
#define MV_INFERENCE_MODEL_WEIGHT_FILE_PATH   "MV_INFERENCE_MODEL_WEIGHT_FILE_PATH"

Defines MV_INFERENCE_MODEL_WEIGHT_FILE_PATH to set inference model's weight file attribute of the engine configuration.

Set this attribute to the path of the inference model's weight file.

Since :
5.5
See also:
mv_engine_config_set_string_attribute()
mv_engine_config_get_string_attribute()
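A typical configuration sets the three file attributes together. The paths below are placeholders, not real files (a sketch):

```c
#include <mv_common.h>
#include <mv_inference.h>

/* Sets the model's configuration, weight, and category files on an
 * existing engine configuration handle. */
void set_model_files(mv_engine_config_h cfg)
{
    mv_engine_config_set_string_attribute(cfg,
        MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH, "/path/to/model.prototxt");
    mv_engine_config_set_string_attribute(cfg,
        MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, "/path/to/model.caffemodel");
    mv_engine_config_set_string_attribute(cfg,
        MV_INFERENCE_MODEL_USER_FILE_PATH, "/path/to/labels.txt");
}
```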
#define MV_INFERENCE_OUTPUT_MAX_NUMBER   "MV_INFERENCE_OUTPUT_MAX_NUMBER"

Defines MV_INFERENCE_OUTPUT_MAX_NUMBER to set the maximum number of output attributes of the engine configuration.

Default value is 5 and a value over 10 will be set to 10. A value under 1 will be set to 1.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
#define MV_INFERENCE_OUTPUT_NODE_NAMES   "MV_INFERENCE_OUTPUT_NODE_NAMES"
#define MV_INFERENCE_TARGET_DEVICE_TYPE   "MV_INFERENCE_TARGET_DEVICE_TYPE"

Defines MV_INFERENCE_TARGET_DEVICE_TYPE to set the inference target device attribute of the engine configuration.

Switches between CPU, GPU, or a custom device:
MV_INFERENCE_TARGET_DEVICE_CPU,
MV_INFERENCE_TARGET_DEVICE_GPU,
MV_INFERENCE_TARGET_DEVICE_CUSTOM.

The default is MV_INFERENCE_TARGET_DEVICE_CPU.

Since :
6.0
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()
#define MV_INFERENCE_TARGET_TYPE   "MV_INFERENCE_TARGET_TYPE"

Defines MV_INFERENCE_TARGET_TYPE to set the inference target device attribute of the engine configuration.

Deprecated:
Deprecated since 6.0. Use MV_INFERENCE_TARGET_DEVICE_TYPE instead.

Switches between CPU, GPU, or a custom device:
MV_INFERENCE_TARGET_CPU (Deprecated),
MV_INFERENCE_TARGET_GPU (Deprecated),
MV_INFERENCE_TARGET_CUSTOM (Deprecated).

The default is MV_INFERENCE_TARGET_CPU.

Since :
5.5
See also:
mv_engine_config_set_int_attribute()
mv_engine_config_get_int_attribute()

Typedef Documentation

typedef void(* mv_inference_face_detected_cb)(mv_source_h source, int number_of_faces, const float *confidences, const mv_rectangle_s *locations, void *user_data)

Called when faces in source are detected.

This callback is invoked each time mv_inference_face_detect() is called, to provide the results of face detection.

Since :
5.5
Remarks:
The confidences and locations should not be released by the app. They can be used only in the callback. The number of elements in confidences and locations is equal to number_of_faces.
Parameters:
[in] source : The handle to the source of the media where faces were detected. source is the same object for which mv_inference_face_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in] number_of_faces : The number of faces
[in] confidences : Confidences of the detected faces
[in] locations : Locations of the detected faces
[in] user_data : The user data passed from the callback-invoking code
Precondition:
Call mv_inference_face_detect() function to perform detection of the faces in source and to invoke this callback as a result
See also:
mv_inference_face_detect()
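A minimal callback matching this signature might look like the following sketch; the mv_rectangle_s field names (point, width, height) follow mv_common.h:

```c
#include <stdio.h>
#include <mv_common.h>
#include <mv_inference.h>

/* Prints each detected face's bounding box and confidence. */
static void on_faces_detected(mv_source_h source, int number_of_faces,
                              const float *confidences,
                              const mv_rectangle_s *locations,
                              void *user_data)
{
    for (int i = 0; i < number_of_faces; i++)
        printf("face %d at (%d, %d) %dx%d, confidence %.2f\n", i,
               locations[i].point.x, locations[i].point.y,
               locations[i].width, locations[i].height, confidences[i]);
}
```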
typedef void(* mv_inference_facial_landmark_detected_cb)(mv_source_h source, int number_of_landmarks, const mv_point_s *locations, void *user_data)

Called when facial landmarks in source are detected.

This callback is invoked each time mv_inference_facial_landmark_detect() is called, to provide the results of facial landmark detection.

Since :
5.5
Remarks:
The locations should not be released by the app. They can be used only in the callback. The number of elements in locations is equal to number_of_landmarks.
Parameters:
[in] source : The handle to the source of the media where landmarks were detected. source is the same object for which mv_inference_facial_landmark_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in] number_of_landmarks : The number of landmarks
[in] locations : Locations of the detected facial landmarks
[in] user_data : The user data passed from the callback-invoking code
Precondition:
Call mv_inference_facial_landmark_detect() function to perform detection of facial landmarks in source and to invoke this callback as a result
See also:
mv_inference_facial_landmark_detect()
typedef void* mv_inference_h

The inference handle.

Since :
5.5
typedef void(* mv_inference_image_classified_cb)(mv_source_h source, int number_of_classes, const int *indices, const char **names, const float *confidences, void *user_data)

Called when source is classified.

This callback is invoked each time mv_inference_image_classify() is called, to provide the results of image classification.

Since :
5.5
Remarks:
The indices, names, and confidences should not be released by the app. They can be used only in the callback. The number of elements in indices, names, and confidences is equal to number_of_classes.
Parameters:
[in] source : The handle to the source of the media where an image was classified. source is the same object for which mv_inference_image_classify() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in] number_of_classes : The number of classes
[in] indices : The class indices for the classified image
[in] names : Names corresponding to the indices
[in] confidences : Each element is the confidence that the image belongs to the corresponding class
[in] user_data : The user data passed from the callback-invoking code
Precondition:
Call mv_inference_image_classify() function to perform classification of the image and to invoke this callback as a result
See also:
mv_inference_image_classify()
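For instance, a callback of this type could report only the most confident class (a sketch):

```c
#include <stdio.h>
#include <mv_common.h>
#include <mv_inference.h>

/* Reports the single class with the highest confidence. */
static void on_image_classified(mv_source_h source, int number_of_classes,
                                const int *indices, const char **names,
                                const float *confidences, void *user_data)
{
    int best = -1;
    float best_conf = 0.0f;

    for (int i = 0; i < number_of_classes; i++) {
        if (confidences[i] > best_conf) {
            best_conf = confidences[i];
            best = i;
        }
    }
    if (best >= 0)
        printf("top class: %s (index %d, confidence %.2f)\n",
               names[best], indices[best], best_conf);
}
```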
typedef void(* mv_inference_object_detected_cb)(mv_source_h source, int number_of_objects, const int *indices, const char **names, const float *confidences, const mv_rectangle_s *locations, void *user_data)

Called when objects in source are detected.

This callback is invoked each time mv_inference_object_detect() is called, to provide the results of object detection.

Since :
5.5
Remarks:
The indices, names, confidences, and locations should not be released by the app. They can be used only in the callback. The number of elements in indices, names, confidences, and locations is equal to number_of_objects.
Parameters:
[in] source : The handle to the source of the media where objects were detected. source is the same object for which mv_inference_object_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in] number_of_objects : The number of objects
[in] indices : The indices of the detected objects
[in] names : Names corresponding to the indices
[in] confidences : Confidences of the detected objects
[in] locations : Locations of the detected objects
[in] user_data : The user data passed from the callback-invoking code
Precondition:
Call mv_inference_object_detect() function to perform detection of the objects in source and to invoke this callback as a result
See also:
mv_inference_object_detect()
typedef void(* mv_inference_pose_landmark_detected_cb)(mv_source_h source, mv_inference_pose_result_h locations, void *user_data)

Called when poses in source are detected.

This callback is invoked each time mv_inference_pose_landmark_detect() is called, to provide the results of pose landmark detection.

Since :
6.0
Remarks:
The locations should not be released by the app. They can be used only in the callback.
Parameters:
[in] source : The handle to the source of the media where landmarks were detected. source is the same object for which mv_inference_pose_landmark_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore.
[in] locations : Locations of the detected pose landmarks
[in] user_data : The user data passed from the callback-invoking code
See also:
mv_inference_pose_landmark_detect()

typedef void * mv_inference_pose_result_h

The inference pose result handle.

Contains information about locations of detected landmarks for one or more poses.

Since :
6.0
typedef bool(* mv_inference_supported_engine_cb)(const char *engine, bool supported, void *user_data)

Called to provide information for supported engines for inference.

Since :
5.5
Parameters:
[in] engine : The engine name. The engine can be used only in the callback. To use it outside, make a copy.
[in] supported : The flag whether the engine is supported or not
[in] user_data : The user data passed from mv_inference_foreach_supported_engine()
Returns:
true to continue with the next iteration of the loop, otherwise false to break out of the loop
Precondition:
mv_inference_foreach_supported_engine()
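A callback of this type can simply log each engine and keep iterating (a sketch):

```c
#include <stdio.h>
#include <stdbool.h>
#include <mv_inference.h>

/* Logs every engine reported by mv_inference_foreach_supported_engine(). */
static bool on_engine(const char *engine, bool supported, void *user_data)
{
    printf("engine %s: %s\n", engine, supported ? "supported" : "not supported");
    return true; /* return false to stop the iteration early */
}

/* Usage: mv_inference_foreach_supported_engine(infer, on_engine, NULL); */
```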
typedef void* mv_pose_h

The pose handle.

Since :
6.0

Enumeration Type Documentation

Enumeration for inference backend. Possible values:

  • MV_INFERENCE_BACKEND_OPENCV : An open source computer vision and machine learning software library. (https://opencv.org/about/)
  • MV_INFERENCE_BACKEND_TFLITE : Google's open source inference engine for embedded systems, which runs TensorFlow Lite models. (https://www.tensorflow.org/lite/guide/get_started)
  • MV_INFERENCE_BACKEND_ARMNN : Arm's open source inference engine for CPUs, GPUs, and NPUs, which enables efficient translation of existing neural network frameworks such as TensorFlow, TensorFlow Lite, and Caffe, allowing them to run efficiently, without modification, on embedded hardware. (https://developer.arm.com/ip-products/processors/machine-learning/arm-nn)
  • MV_INFERENCE_BACKEND_MLAPI : Samsung's open source ML Single API framework of NNStreamer, which runs various NN models via the tensor filters of NNStreamer. (https://github.com/nnstreamer/nnstreamer)
  • MV_INFERENCE_BACKEND_ONE : Samsung's open source inference engine, On-device Neural Engine, which performs inference of a given NN model on various devices such as CPU, GPU, DSP, and NPU. (https://github.com/Samsung/ONE)

Since :
5.5
See also:
mv_inference_prepare()
Enumerator:
MV_INFERENCE_BACKEND_NONE 

None

MV_INFERENCE_BACKEND_OPENCV 

OpenCV

MV_INFERENCE_BACKEND_TFLITE 

TensorFlow-Lite

MV_INFERENCE_BACKEND_ARMNN 

ARMNN (Since 6.0)

MV_INFERENCE_BACKEND_MLAPI 

ML Single API of NNStreamer (Since 6.0)

MV_INFERENCE_BACKEND_ONE 

On-device Neural Engine (Since 6.0)

MV_INFERENCE_BACKEND_MAX 

Backend MAX

Enumeration for input data type.

Since :
6.0
Enumerator:
MV_INFERENCE_DATA_FLOAT32 

Data type of a given pre-trained model is float.

MV_INFERENCE_DATA_UINT8 

Data type of a given pre-trained model is unsigned char.

Enumeration for human body parts.

Since :
6.0
Enumerator:
MV_INFERENCE_HUMAN_BODY_PART_HEAD 

HEAD, NECK, and THORAX

MV_INFERENCE_HUMAN_BODY_PART_ARM_RIGHT 

RIGHT SHOULDER, ELBOW, and WRIST

MV_INFERENCE_HUMAN_BODY_PART_ARM_LEFT 

LEFT SHOULDER, ELBOW, and WRIST

MV_INFERENCE_HUMAN_BODY_PART_BODY 

THORAX, PELVIS, RIGHT HIP, and LEFT HIP

MV_INFERENCE_HUMAN_BODY_PART_LEG_RIGHT 

RIGHT HIP, KNEE, and ANKLE

MV_INFERENCE_HUMAN_BODY_PART_LEG_LEFT 

LEFT HIP, KNEE, and ANKLE

Enumeration for human pose landmark.

Since :
6.0
Enumerator:
MV_INFERENCE_HUMAN_POSE_HEAD 

Head of human pose

MV_INFERENCE_HUMAN_POSE_NECK 

Neck of human pose

MV_INFERENCE_HUMAN_POSE_THORAX 

Thorax of human pose

MV_INFERENCE_HUMAN_POSE_RIGHT_SHOULDER 

Right shoulder of human pose

MV_INFERENCE_HUMAN_POSE_RIGHT_ELBOW 

Right elbow of human pose

MV_INFERENCE_HUMAN_POSE_RIGHT_WRIST 

Right wrist of human pose

MV_INFERENCE_HUMAN_POSE_LEFT_SHOULDER 

Left shoulder of human pose

MV_INFERENCE_HUMAN_POSE_LEFT_ELBOW 

Left elbow of human pose

MV_INFERENCE_HUMAN_POSE_LEFT_WRIST 

Left wrist of human pose

MV_INFERENCE_HUMAN_POSE_PELVIS 

Pelvis of human pose

MV_INFERENCE_HUMAN_POSE_RIGHT_HIP 

Right hip of human pose

MV_INFERENCE_HUMAN_POSE_RIGHT_KNEE 

Right knee of human pose

MV_INFERENCE_HUMAN_POSE_RIGHT_ANKLE 

Right ankle of human pose

MV_INFERENCE_HUMAN_POSE_LEFT_HIP 

Left hip of human pose

MV_INFERENCE_HUMAN_POSE_LEFT_KNEE 

Left knee of human pose

MV_INFERENCE_HUMAN_POSE_LEFT_ANKLE 

Left ankle of human pose

Enumeration for inference target.

Since :
6.0
Enumerator:
MV_INFERENCE_TARGET_DEVICE_NONE 

None

MV_INFERENCE_TARGET_DEVICE_CPU 

CPU

MV_INFERENCE_TARGET_DEVICE_GPU 

GPU

MV_INFERENCE_TARGET_DEVICE_CUSTOM 

CUSTOM

MV_INFERENCE_TARGET_DEVICE_MAX 

Target MAX

Enumeration for inference target.

Deprecated:
Deprecated since 6.0. Use mv_inference_target_device_e instead.
Since :
5.5
Enumerator:
MV_INFERENCE_TARGET_NONE 

None

MV_INFERENCE_TARGET_CPU 

CPU

MV_INFERENCE_TARGET_GPU 

GPU

MV_INFERENCE_TARGET_CUSTOM 

CUSTOM

MV_INFERENCE_TARGET_MAX 

Target MAX


Function Documentation

int mv_inference_configure ( mv_inference_h  infer,
mv_engine_config_h  engine_config 
)

Configures the network of the inference.

Use this function to configure the inference network with the attributes set in engine_config.

Since :
5.5
Parameters:
[in] infer : The handle to the inference
[in] engine_config : The handle to the configuration of the engine
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONE : Successful
MEDIA_VISION_ERROR_NOT_SUPPORTED : Not supported
MEDIA_VISION_ERROR_INVALID_PARAMETER : Invalid parameter in engine_config
MEDIA_VISION_ERROR_INVALID_PATH : Invalid path of model data in engine_config

int mv_inference_create ( mv_inference_h *  infer )

Creates inference handle.

Use this function to create an inference handle. After creation, the inference has to be prepared with the mv_inference_prepare() function, which prepares the network for inference.

Since :
5.5
Remarks:
If the app sets MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH, MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, and MV_INFERENCE_MODEL_USER_FILE_PATH to media storage, then the media storage privilege http://tizen.org/privilege/mediastorage is needed.
If the app sets any of the paths mentioned in the previous sentence to external storage, then the external storage privilege http://tizen.org/privilege/externalstorage is needed.
If the required privileges aren't set properly, mv_inference_prepare() will return MEDIA_VISION_ERROR_PERMISSION_DENIED.
The infer should be released using mv_inference_destroy().
Parameters:
[out] infer : The handle to the inference to be created
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONE : Successful
MEDIA_VISION_ERROR_NOT_SUPPORTED : Not supported
MEDIA_VISION_ERROR_INVALID_PARAMETER : Invalid parameter
MEDIA_VISION_ERROR_OUT_OF_MEMORY : Out of memory
See also:
mv_inference_destroy()
mv_inference_prepare()

int mv_inference_destroy ( mv_inference_h  infer )

Destroys inference handle and releases all its resources.

Since :
5.5
Parameters:
[in] infer : The handle to the inference to be destroyed
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONE : Successful
MEDIA_VISION_ERROR_NOT_SUPPORTED : Not supported
MEDIA_VISION_ERROR_INVALID_PARAMETER : Invalid parameter
Precondition:
Create inference handle by using mv_inference_create()
See also:
mv_inference_create()
int mv_inference_face_detect ( mv_source_h  source,
mv_inference_h  infer,
mv_inference_face_detected_cb  detected_cb,
void *  user_data 
)

Performs face detection on the source.

Use this function to launch face detection. Each time mv_inference_face_detect() is called, detected_cb will receive a list of faces and their locations in the media source.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in] source : The handle to the source of the media
[in] infer : The handle to the inference
[in] detected_cb : The callback which will be called for detecting faces in the media source. This callback will receive the detection results.
[in] user_data : The user data passed from the code where mv_inference_face_detect() is invoked. This data will be accessible in the detected_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONE : Successful
MEDIA_VISION_ERROR_NOT_SUPPORTED : Not supported
MEDIA_VISION_ERROR_INVALID_PARAMETER : Invalid parameter
MEDIA_VISION_ERROR_INTERNAL : Internal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT : Source colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
detected_cb will be called to provide detection results
See also:
mv_inference_face_detected_cb()
int mv_inference_facial_landmark_detect ( mv_source_h  source,
mv_inference_h  infer,
mv_rectangle_s *  roi,
mv_inference_facial_landmark_detected_cb  detected_cb,
void *  user_data 
)

Performs facial landmarks detection on the source.

Use this function to launch facial landmark detection. Each time mv_inference_facial_landmark_detect() is called, detected_cb will receive a list of locations of facial landmarks in the media source.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in] source : The handle to the source of the media
[in] infer : The handle to the inference
[in] roi : Rectangular area including a face in source which will be analyzed. If NULL, then the whole source will be analyzed.
[in] detected_cb : The callback which will receive the detection results.
[in] user_data : The user data passed from the code where mv_inference_facial_landmark_detect() is invoked. This data will be accessible in the detected_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONE : Successful
MEDIA_VISION_ERROR_NOT_SUPPORTED : Not supported
MEDIA_VISION_ERROR_INVALID_PARAMETER : Invalid parameter
MEDIA_VISION_ERROR_INTERNAL : Internal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT : Source colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
detected_cb will be called to provide detection results
See also:
mv_inference_facial_landmark_detected_cb()
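Facial landmark detection is typically run on a face region obtained from a prior face detection. In the sketch below, the ROI values are placeholders, and on_landmarks is an assumed callback matching mv_inference_facial_landmark_detected_cb:

```c
#include <mv_common.h>
#include <mv_inference.h>

/* Assumes infer is already configured and prepared. */
int detect_landmarks_in_face(mv_source_h source, mv_inference_h infer,
                             mv_inference_facial_landmark_detected_cb on_landmarks)
{
    /* Placeholder face rectangle, e.g. taken from mv_inference_face_detect() results. */
    mv_rectangle_s roi = { .point = { .x = 50, .y = 50 },
                           .width = 200, .height = 200 };

    return mv_inference_facial_landmark_detect(source, infer, &roi,
                                               on_landmarks, NULL);
}
```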

int mv_inference_foreach_supported_engine ( mv_inference_h  infer,
mv_inference_supported_engine_cb  callback,
void *  user_data 
)

Traverses the list of supported engines for inference.

Use this function to obtain the list of supported engines. The returned names can be used with the mv_engine_config_h getters and setters to get or set the MV_INFERENCE_BACKEND_TYPE attribute value.

Since :
5.5
Parameters:
[in] infer : The handle to the inference
[in] callback : The iteration callback function
[in] user_data : The user data to be passed to the callback function
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONE : Successful
MEDIA_VISION_ERROR_NOT_SUPPORTED : Not supported
MEDIA_VISION_ERROR_INVALID_PARAMETER : Invalid parameter
See also:
mv_inference_supported_engine_cb()
int mv_inference_image_classify ( mv_source_h  source,
mv_inference_h  infer,
mv_rectangle_s *  roi,
mv_inference_image_classified_cb  classified_cb,
void *  user_data 
)

Performs image classification on the source.

Use this function to launch image classification. Each time mv_inference_image_classify() is called, classified_cb will receive the classes which the media source may belong to.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in] source : The handle to the source of the media
[in] infer : The handle to the inference
[in] roi : Rectangular area in the source which will be analyzed. If NULL, then the whole source will be analyzed.
[in] classified_cb : The callback which will be called for classification on the source. This callback will receive the classification results.
[in] user_data : The user data passed from the code where mv_inference_image_classify() is invoked. This data will be accessible in the classified_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONE : Successful
MEDIA_VISION_ERROR_NOT_SUPPORTED : Not supported
MEDIA_VISION_ERROR_INVALID_PARAMETER : Invalid parameter
MEDIA_VISION_ERROR_INVALID_OPERATION : Invalid operation
MEDIA_VISION_ERROR_INTERNAL : Internal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT : Source colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
classified_cb will be called to provide classification results
See also:
mv_inference_image_classified_cb()
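Assuming the source and inference handles have already been created, configured, and prepared as in the preconditions, a classification call can be sketched as below. The callback parameter list shown follows mv_inference_image_classified_cb(); the confidence formatting is illustrative.

```c
#include <stdio.h>
#include <mv_inference.h>

/* Receives the classes detected for the analyzed source. */
static void on_classified(mv_source_h source, int number_of_classes,
                          const int *indices, const char **names,
                          const float *confidences, void *user_data)
{
    for (int i = 0; i < number_of_classes; i++)
        printf("class %d: %s (confidence %.2f)\n",
               indices[i], names[i], confidences[i]);
}

/* Classify the whole source: passing NULL as roi analyzes everything. */
int classify_whole_source(mv_source_h source, mv_inference_h infer)
{
    return mv_inference_image_classify(source, infer, NULL,
                                       on_classified, NULL);
}
```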
int mv_inference_object_detect ( mv_source_h  source,
mv_inference_h  infer,
mv_inference_object_detected_cb  detected_cb,
void *  user_data 
)

Performs object detection on the source.

Use this function to launch object detection. Each time mv_inference_object_detect() is called, detected_cb receives a list of objects and their locations in the media source.

Since :
5.5
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in]sourceThe handle to the source of the media
[in]inferThe handle to the inference
[in]detected_cbThe callback which will be called for detecting objects in the media source. This callback will receive the detection results.
[in]user_dataThe user data passed from the code where mv_inference_object_detect() is invoked. This data will be accessible in detected_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INTERNALInternal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATSource colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
detected_cb will be called to provide detection results
See also:
mv_inference_object_detected_cb()
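A detection call follows the same pattern as classification. The callback parameter list shown follows mv_inference_object_detected_cb(); the bounding-box printout assumes the mv_rectangle_s layout (point, width, height) from the Media Vision common types.

```c
#include <stdio.h>
#include <mv_inference.h>

/* Receives each detected object with its bounding box. */
static void on_objects(mv_source_h source, int number_of_objects,
                       const int *indices, const char **names,
                       const float *confidences,
                       const mv_rectangle_s *locations, void *user_data)
{
    for (int i = 0; i < number_of_objects; i++)
        printf("%s (%.2f) at (%d, %d), %dx%d\n",
               names[i], confidences[i],
               locations[i].point.x, locations[i].point.y,
               locations[i].width, locations[i].height);
}

int detect_objects(mv_source_h source, mv_inference_h infer)
{
    /* Synchronous: on_objects runs before this call returns. */
    return mv_inference_object_detect(source, infer, on_objects, NULL);
}
```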
int mv_inference_pose_get_label ( mv_inference_pose_result_h  result,
int  pose_index,
int *  label 
)

Gets a label of a pose.

Since :
6.0
Parameters:
[in]resultThe handle to inference result
[in]pose_indexThe pose index between 0 and the number of poses (exclusive), which can be obtained with mv_inference_pose_get_number_of_poses()
[out]labelThe label of a pose
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
See also:
mv_inference_pose_get_number_of_poses()
mv_inference_pose_get_number_of_landmarks()
mv_inference_pose_landmark_detected_cb()
mv_inference_pose_result_h
int mv_inference_pose_get_landmark ( mv_inference_pose_result_h  result,
int  pose_index,
int  pose_part,
mv_point_s *  location,
float *  score 
)

Gets landmark location of a part of a pose.

Since :
6.0
Parameters:
[in]resultThe handle to inference result
[in]pose_indexThe pose index between 0 and the number of poses (exclusive), which can be obtained with mv_inference_pose_get_number_of_poses()
[in]pose_partThe landmark index between 0 and the number of landmarks (exclusive), which can be obtained with mv_inference_pose_get_number_of_landmarks()
[out]locationThe location of a landmark
[out]scoreThe score of a landmark
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
See also:
mv_inference_pose_get_number_of_poses()
mv_inference_pose_get_number_of_landmarks()
mv_inference_pose_landmark_detected_cb()
mv_inference_pose_result_h
int mv_inference_pose_get_number_of_landmarks ( mv_inference_pose_result_h  result,
int *  number_of_landmarks 
)

Gets the number of landmarks per pose.

Since :
6.0
Parameters:
[in]resultThe handle to inference result
[out]number_of_landmarksThe pointer to the number of landmarks
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
See also:
mv_inference_pose_landmark_detected_cb()
mv_inference_pose_result_h
int mv_inference_pose_get_number_of_poses ( mv_inference_pose_result_h  result,
int *  number_of_poses 
)

Gets the number of poses.

Since :
6.0
Parameters:
[in]resultThe handle to inference result
[out]number_of_posesThe pointer to the number of poses
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
See also:
mv_inference_pose_landmark_detected_cb()
mv_inference_pose_result_h
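The getters above combine naturally on a result handle, for example inside mv_inference_pose_landmark_detected_cb(). A sketch that walks every landmark of every pose (error handling kept minimal):

```c
#include <stdio.h>
#include <mv_inference.h>

/* Print every landmark of every pose held by the result handle. */
static void dump_pose_result(mv_inference_pose_result_h result)
{
    int poses = 0;
    int landmarks = 0;

    if (mv_inference_pose_get_number_of_poses(result, &poses) != MEDIA_VISION_ERROR_NONE ||
        mv_inference_pose_get_number_of_landmarks(result, &landmarks) != MEDIA_VISION_ERROR_NONE)
        return;

    for (int p = 0; p < poses; p++) {
        int label = -1;
        mv_inference_pose_get_label(result, p, &label);

        for (int l = 0; l < landmarks; l++) {
            mv_point_s location = { 0, 0 };
            float score = 0.0f;

            if (mv_inference_pose_get_landmark(result, p, l, &location,
                                               &score) == MEDIA_VISION_ERROR_NONE)
                printf("pose %d (label %d), part %d: (%d, %d), score %.2f\n",
                       p, label, l, location.x, location.y, score);
        }
    }
}
```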
int mv_inference_pose_landmark_detect ( mv_source_h  source,
mv_inference_h  infer,
mv_rectangle_s *  roi,
mv_inference_pose_landmark_detected_cb  detected_cb,
void *  user_data 
)

Performs pose landmark detection on the source.

Use this function to launch pose landmark detection. Each time mv_inference_pose_landmark_detect() is called, detected_cb receives a list of pose landmark locations in the media source.

Since :
6.0
Remarks:
This function is synchronous and may take considerable time to run.
Parameters:
[in]sourceThe handle to the source of the media
[in]inferThe handle to the inference
[in]roiRectangular area including a face in source which will be analyzed. If NULL, then the whole source will be analyzed.
[in]detected_cbThe callback which will receive the detection results.
[in]user_dataThe user data passed from the code where mv_inference_pose_landmark_detect() is invoked. This data will be accessible in detected_cb callback.
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INTERNALInternal error
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATSource colorspace isn't supported
Precondition:
Create a source handle by calling mv_create_source()
Create an inference handle by calling mv_inference_create()
Configure an inference handle by calling mv_inference_configure()
Prepare an inference by calling mv_inference_prepare()
Postcondition:
detected_cb will be called to provide detection results
See also:
mv_inference_pose_landmark_detected_cb()

int mv_inference_prepare ( mv_inference_h  infer)

Prepares inference.

Use this function to prepare inference based on the configured network.

Since :
5.5
Parameters:
[in]inferThe handle to the inference
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_PERMISSION_DENIEDPermission denied
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INVALID_DATAInvalid model data
MEDIA_VISION_ERROR_OUT_OF_MEMORYOut of memory
MEDIA_VISION_ERROR_INVALID_OPERATIONInvalid operation
MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMATNot supported format
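The create → configure → prepare sequence from the Overview can be sketched as below. The attribute keys and the TFLite backend value are illustrative examples; the exact attributes a model needs depend on the chosen backend and model format, and the file path is hypothetical.

```c
#include <mv_inference.h>

int setup_inference(mv_inference_h *out_infer)
{
    mv_inference_h infer = NULL;
    mv_engine_config_h cfg = NULL;
    int ret = mv_inference_create(&infer);
    if (ret != MEDIA_VISION_ERROR_NONE)
        return ret;

    ret = mv_create_engine_config(&cfg);
    if (ret != MEDIA_VISION_ERROR_NONE) {
        mv_inference_destroy(infer);
        return ret;
    }

    /* Hypothetical model path; pick attributes to match your model. */
    mv_engine_config_set_string_attribute(cfg,
            MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, "/path/to/model.tflite");
    mv_engine_config_set_int_attribute(cfg,
            MV_INFERENCE_BACKEND_TYPE, MV_INFERENCE_BACKEND_TFLITE);

    ret = mv_inference_configure(infer, cfg);
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_prepare(infer); /* loads the model */

    mv_destroy_engine_config(cfg);

    if (ret != MEDIA_VISION_ERROR_NONE) {
        mv_inference_destroy(infer);
        return ret;
    }

    *out_infer = infer;
    return MEDIA_VISION_ERROR_NONE;
}
```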
int mv_pose_compare ( mv_pose_h  pose,
mv_inference_pose_result_h  action,
int  parts,
float *  score 
)

Compares an action pose with the pose which is set by mv_pose_set_from_file().

Use this function to compare an action pose with the pose set by mv_pose_set_from_file(). The parts to be compared can be selected with mv_inference_human_body_part_e. Their similarity is given as a score between 0 and 1.

Since :
6.0
Remarks:
If action contains multiple poses, the first pose is used for comparison.
Parameters:
[in]poseThe handle to the pose
[in]actionThe action pose
[in]partsThe parts to be compared
[out]scoreThe similarity score
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INVALID_OPERATIONInvalid operation
Precondition:
Sets the pose by using mv_pose_set_from_file()
Detects the pose by using mv_inference_pose_landmark_detect()
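Putting the preconditions together: create a pose handle, load a reference capture, run landmark detection, and compare in the callback. The file paths are hypothetical, and the leg-only part mask is an example choice from mv_inference_human_body_part_e.

```c
#include <stdio.h>
#include <mv_inference.h>

/* Compare the detected pose against the reference passed via user_data. */
static void on_pose_landmarks(mv_source_h source,
                              mv_inference_pose_result_h result,
                              int label, void *user_data)
{
    mv_pose_h pose = (mv_pose_h)user_data;
    float score = 0.0f;

    if (mv_pose_compare(pose, result,
                        MV_INFERENCE_HUMAN_BODY_PART_LEG_RIGHT |
                        MV_INFERENCE_HUMAN_BODY_PART_LEG_LEFT,
                        &score) == MEDIA_VISION_ERROR_NONE)
        printf("similarity: %.2f\n", score);
}

int compare_with_reference(mv_source_h source, mv_inference_h infer)
{
    mv_pose_h pose = NULL;
    int ret = mv_pose_create(&pose);
    if (ret != MEDIA_VISION_ERROR_NONE)
        return ret;

    /* Hypothetical capture/mapping file paths. */
    ret = mv_pose_set_from_file(pose, "/path/to/capture.bvh",
                                "/path/to/mapping.txt");
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_pose_landmark_detect(source, infer, NULL,
                                                on_pose_landmarks, pose);

    /* Detection is synchronous, so the callback has already run. */
    mv_pose_destroy(pose);
    return ret;
}
```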
int mv_pose_create ( mv_pose_h *  pose)

Creates pose handle.

Use this function to create a pose.

Since :
6.0
Remarks:
The pose should be released using mv_pose_destroy().
Parameters:
[out]poseThe handle to the pose to be created
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_OUT_OF_MEMORYOut of memory
See also:
mv_pose_destroy()
int mv_pose_destroy ( mv_pose_h  pose)

Destroys pose handle and releases all its resources.

Since :
6.0
Parameters:
[in]poseThe handle to the pose to be destroyed
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
Precondition:
Create pose handle by using mv_pose_create()
See also:
mv_pose_create()
int mv_pose_set_from_file ( mv_pose_h  pose,
const char *  motion_capture_file_path,
const char *  motion_mapping_file_path 
)

Sets a motion capture file and its pose mapping file to the pose.

Use this function to set a motion capture file and its pose mapping file. These are used by mv_pose_compare() to compare against a pose detected by mv_inference_pose_landmark_detect().

Since :
6.0
Remarks:
If the app sets paths to media storage, then the media storage privilege http://tizen.org/privilege/mediastorage is needed.
If the app sets the paths to external storage, then the external storage privilege http://tizen.org/privilege/externalstorage is needed.
If the required privileges aren't set properly, mv_pose_set_from_file() will return MEDIA_VISION_ERROR_PERMISSION_DENIED.
Parameters:
[in]poseThe handle to the pose
[in]motion_capture_file_pathThe file path to the motion capture file
[in]motion_mapping_file_pathThe file path to the motion mapping file
Returns:
0 on success, otherwise a negative error value
Return values:
MEDIA_VISION_ERROR_NONESuccessful
MEDIA_VISION_ERROR_NOT_SUPPORTEDNot supported
MEDIA_VISION_ERROR_PERMISSION_DENIEDPermission denied
MEDIA_VISION_ERROR_INVALID_PARAMETERInvalid parameter
MEDIA_VISION_ERROR_INVALID_PATHInvalid path of capture or mapping file
MEDIA_VISION_ERROR_INTERNALInternal error