Tizen Native API  5.5

The NNStreamer Single API provides interfaces to invoke a neural network model with a single instance of input data.

Required Header

#include <nnstreamer/nnstreamer-single.h>

Overview

The NNStreamer Single API provides interfaces to invoke a neural network model with a single instance of input data. It is syntactic sugar for the NNStreamer Pipeline API with simplified features; users who want more advanced features should use the NNStreamer Pipeline API directly. The user is expected to preprocess the input data for the given neural network model.

This API allows the following operations with NNStreamer:

  • Open a machine learning model with various mechanisms.
  • Close the model.
  • Invoke the opened model with a single instance of input data.
  • Get the information of the opened model with utility functions.

Note that this function set is supposed to be thread-safe.
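As an illustrative sketch of the typical single-shot lifecycle (the model path is a placeholder, the preprocessing step is omitted, and error handling is abbreviated):

```c
#include <nnstreamer/nnstreamer-single.h>

/* Minimal single-shot invocation sketch. The model path below is
 * hypothetical; replace it with a model shipped with your application. */
int
run_single_shot (void)
{
  ml_single_h single;
  ml_tensors_info_h in_info;
  ml_tensors_data_h input, output;
  int status;

  /* Open the model; let NNStreamer auto-detect the framework and hardware. */
  status = ml_single_open (&single, "/path/to/model.tflite", NULL, NULL,
      ML_NNFW_TYPE_ANY, ML_NNFW_HW_ANY);
  if (status != ML_ERROR_NONE)
    return status;

  /* Allocate an input buffer matching the model's input information. */
  status = ml_single_get_input_info (single, &in_info);
  if (status == ML_ERROR_NONE)
    status = ml_tensors_data_create (in_info, &input);

  if (status == ML_ERROR_NONE) {
    /* ... fill `input` with preprocessed data here ... */
    status = ml_single_invoke (single, input, &output);
    if (status == ML_ERROR_NONE)
      ml_tensors_data_destroy (output);
    ml_tensors_data_destroy (input);
  }

  ml_tensors_info_destroy (in_info);
  ml_single_close (single);
  return status;
}
```

Note that every handle created along the way (tensors information, input and output data, and the model handle itself) is released by the caller.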

Related Features

This API is related to the following features:

It is recommended to probe features in your application for reliability.
You can check whether a device supports the features related to this API by using System Information, and control the behavior of your application accordingly.
To ensure your application runs only on devices with the required features, define the features in your manifest file using the manifest editor in the SDK.
For example, if your application accesses the camera device, you have to add 'http://tizen.org/feature/camera' to the manifest of your application.
More details on featuring your application can be found in Feature Element.

Functions

int ml_single_open (ml_single_h *single, const char *model, const ml_tensors_info_h input_info, const ml_tensors_info_h output_info, ml_nnfw_type_e nnfw, ml_nnfw_hw_e hw)
 Opens an ML model and returns the instance as a handle.
int ml_single_close (ml_single_h single)
 Closes the opened model handle.
int ml_single_invoke (ml_single_h single, const ml_tensors_data_h input, ml_tensors_data_h *output)
 Invokes the model with the given input data.
int ml_single_get_input_info (ml_single_h single, ml_tensors_info_h *info)
 Gets the information (tensor dimension, type, name and so on) of required input data for the given model.
int ml_single_get_output_info (ml_single_h single, ml_tensors_info_h *info)
 Gets the information (tensor dimension, type, name and so on) of output data for the given model.
int ml_single_set_timeout (ml_single_h single, unsigned int timeout)
 Sets the maximum amount of time to wait for an output, in milliseconds.

Typedefs

typedef void * ml_single_h
 A handle of a single-shot instance.

Typedef Documentation

typedef void* ml_single_h

A handle of a single-shot instance.

Since :
5.5

Function Documentation

int ml_single_close ( ml_single_h  single)

Closes the opened model handle.

Since :
5.5
Parameters:
[in]  single  The model handle to be closed.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE  Successful
ML_ERROR_NOT_SUPPORTED  Not supported.
ML_ERROR_INVALID_PARAMETER  Fail. The parameter is invalid (pipeline is not negotiated yet).

int ml_single_get_input_info ( ml_single_h  single,
ml_tensors_info_h *  info 
)

Gets the information (tensor dimension, type, name and so on) of required input data for the given model.

Note that a model may not have such information if its input type is flexible. The names of tensors are sometimes unavailable (optional), while their dimensions and types are always available.

Since :
5.5
Parameters:
[in]  single  The model handle.
[out]  info  The handle of input tensors information. The caller is responsible for freeing the information with ml_tensors_info_destroy().
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE  Successful
ML_ERROR_NOT_SUPPORTED  Not supported.
ML_ERROR_INVALID_PARAMETER  Fail. The parameter is invalid.
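A sketch of inspecting the returned tensors information with the common ML API utility functions (`ml_tensors_info_get_count()` and related getters from the same API set; `single` is assumed to be an already-opened handle):

```c
#include <nnstreamer/nnstreamer-single.h>

/* Walks the input tensors of an opened model and reads each tensor's
 * type and dimension, e.g., to allocate matching input buffers. */
static void
inspect_input_info (ml_single_h single)
{
  ml_tensors_info_h info;
  unsigned int count, i;

  if (ml_single_get_input_info (single, &info) != ML_ERROR_NONE)
    return;

  ml_tensors_info_get_count (info, &count);
  for (i = 0; i < count; i++) {
    ml_tensor_type_e type;
    ml_tensor_dimension dim;

    ml_tensors_info_get_tensor_type (info, i, &type);
    ml_tensors_info_get_tensor_dimension (info, i, dim);
    /* ... use type and dim to size and fill the input buffer ... */
  }

  /* The caller owns the info handle and must destroy it. */
  ml_tensors_info_destroy (info);
}
```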

int ml_single_get_output_info ( ml_single_h  single,
ml_tensors_info_h *  info 
)

Gets the information (tensor dimension, type, name and so on) of output data for the given model.

Note that a model may not have such information if its output type is flexible and not determined statically. The names of tensors are sometimes unavailable (optional), while their dimensions and types are always available.

Since :
5.5
Parameters:
[in]  single  The model handle.
[out]  info  The handle of output tensors information. The caller is responsible for freeing the information with ml_tensors_info_destroy().
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE  Successful
ML_ERROR_NOT_SUPPORTED  Not supported.
ML_ERROR_INVALID_PARAMETER  Fail. The parameter is invalid.
int ml_single_invoke ( ml_single_h  single,
const ml_tensors_data_h  input,
ml_tensors_data_h *  output 
)

Invokes the model with the given input data.

Even if the model has flexible input data dimensions, input data frames of an instance of a model should share the same dimension. Note that this has a default timeout of 3 seconds. If an application wants to change the time to wait for an output, set the timeout using ml_single_set_timeout().

Since :
5.5
Parameters:
[in]  single  The model handle to be inferred.
[in]  input  The input data to be inferred.
[out]  output  The allocated output buffer. The caller is responsible for freeing the output buffer with ml_tensors_data_destroy().
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE  Successful
ML_ERROR_NOT_SUPPORTED  Not supported.
ML_ERROR_INVALID_PARAMETER  Fail. The parameter is invalid.
ML_ERROR_STREAMS_PIPE  Cannot push a buffer into source element.
ML_ERROR_TIMED_OUT  Failed to get the result from sink element.
int ml_single_open ( ml_single_h *  single,
const char *  model,
const ml_tensors_info_h  input_info,
const ml_tensors_info_h  output_info,
ml_nnfw_type_e  nnfw,
ml_nnfw_hw_e  hw 
)

Opens an ML model and returns the instance as a handle.

Even if the model has flexible input data dimensions, input data frames of an instance of a model should share the same dimension.

Since :
5.5
Remarks:
http://tizen.org/privilege/mediastorage is needed if model is relevant to media storage.
http://tizen.org/privilege/externalstorage is needed if model is relevant to external storage.
Parameters:
[out]  single  The opened model handle. Users are required to close the given instance with ml_single_close().
[in]  model  The path to the neural network model file.
[in]  input_info  Required if the given model has a flexible input dimension, in which case the input dimension MUST be given before executing the model. Once given, the input dimension cannot be changed for the given model handle. It is required by some custom filters of NNStreamer. You may set NULL if it's not required.
[in]  output_info  Required if the given model has a flexible output dimension.
[in]  nnfw  The neural network framework used to open the given model. Set ML_NNFW_TYPE_ANY to let it auto-detect.
[in]  hw  Tells the corresponding nnfw to use a specific hardware. Set ML_NNFW_HW_ANY if it does not matter.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE  Successful
ML_ERROR_NOT_SUPPORTED  Not supported.
ML_ERROR_INVALID_PARAMETER  Fail. The parameter is invalid.
ML_ERROR_STREAMS_PIPE  Failed to start the pipeline.
ML_ERROR_PERMISSION_DENIED  The application does not have the privilege to access the media storage or external storage.
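When a model has flexible input dimensions, the input information must be built and passed at open time. A sketch using the common ML API tensors-information setters (the tensor type and dimension values below are illustrative, not tied to any real model):

```c
#include <nnstreamer/nnstreamer-single.h>

/* Opens a model whose input dimension is flexible by supplying the
 * input information explicitly. The model path is a placeholder. */
static int
open_with_input_info (ml_single_h * single)
{
  ml_tensors_info_h in_info;
  ml_tensor_dimension dim = { 3, 224, 224, 1 }; /* illustrative values */
  int status;

  ml_tensors_info_create (&in_info);
  ml_tensors_info_set_count (in_info, 1);
  ml_tensors_info_set_tensor_type (in_info, 0, ML_TENSOR_TYPE_UINT8);
  ml_tensors_info_set_tensor_dimension (in_info, 0, dim);

  status = ml_single_open (single, "/path/to/model.tflite", in_info, NULL,
      ML_NNFW_TYPE_ANY, ML_NNFW_HW_ANY);

  /* Once the model is opened, the info handle is no longer needed. */
  ml_tensors_info_destroy (in_info);
  return status;
}
```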
int ml_single_set_timeout ( ml_single_h  single,
unsigned int  timeout 
)

Sets the maximum amount of time to wait for an output, in milliseconds.

Since :
5.5
Parameters:
[in]  single  The model handle.
[in]  timeout  The time to wait for an output, in milliseconds.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE  Successful
ML_ERROR_NOT_SUPPORTED  Not supported.
ML_ERROR_INVALID_PARAMETER  Fail. The parameter is invalid.
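A sketch of shortening the default 3-second invocation timeout and handling the timed-out case (`single` and `input` are assumed to be prepared as in ml_single_open() and ml_tensors_data_create()):

```c
#include <nnstreamer/nnstreamer-single.h>

/* Invokes the model with a 1-second timeout instead of the default
 * 3 seconds, distinguishing a timeout from other failures. */
static int
invoke_with_timeout (ml_single_h single, ml_tensors_data_h input)
{
  ml_tensors_data_h output;
  int status;

  status = ml_single_set_timeout (single, 1000); /* 1000 ms */
  if (status != ML_ERROR_NONE)
    return status;

  status = ml_single_invoke (single, input, &output);
  if (status == ML_ERROR_TIMED_OUT) {
    /* The result was not produced in time; the caller may retry
     * or raise the timeout. */
    return status;
  }
  if (status == ML_ERROR_NONE)
    ml_tensors_data_destroy (output); /* caller frees the output */
  return status;
}
```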