Tizen Native API  7.0

The NNStreamer Service API provides interfaces to store and fetch the pipeline description for AI application developers.

Required Header

#include <nnstreamer/ml-api-service.h>

Overview

The NNStreamer Service API provides utility interfaces for AI application developers.

This API set allows the following operations with NNStreamer:

  • Set and get the pipeline description with a given name.
  • Delete the pipeline description with a given name.

Note that this function set is supposed to be thread-safe.

Related Features

This API is related with the following features:

  • http://tizen.org/feature/machine_learning
  • http://tizen.org/feature/machine_learning.service

Functions

int ml_service_set_pipeline (const char *name, const char *pipeline_desc)
 Sets the pipeline description with a given name.
int ml_service_get_pipeline (const char *name, char **pipeline_desc)
 Gets the pipeline description with a given name.
int ml_service_delete_pipeline (const char *name)
 Deletes the pipeline description with a given name.
int ml_service_launch_pipeline (const char *name, ml_service_h *handle)
 Launches the pipeline of given service and gets the service handle.
int ml_service_start_pipeline (ml_service_h handle)
 Starts the pipeline of given service handle.
int ml_service_stop_pipeline (ml_service_h handle)
 Stops the pipeline of given service handle.
int ml_service_destroy (ml_service_h handle)
 Destroys the given service handle.
int ml_service_get_pipeline_state (ml_service_h handle, ml_pipeline_state_e *state)
 Gets the state of given handle's pipeline.
int ml_service_query_create (ml_option_h option, ml_service_h *handle)
 Creates query service handle with given ml-option handle.
int ml_service_query_request (ml_service_h handle, const ml_tensors_data_h input, ml_tensors_data_h *output)
 Requests the query service to process the input and produce an output.
int ml_service_model_register (const char *name, const char *path, const bool activate, const char *description, unsigned int *version)
 Registers new information of a neural network model.
int ml_service_model_update_description (const char *name, const unsigned int version, const char *description)
 Updates the description of neural network model with given name and version.
int ml_service_model_activate (const char *name, const unsigned int version)
 Activates a neural network model with given name and version.
int ml_service_model_get (const char *name, const unsigned int version, ml_option_h *info)
 Gets the information of neural network model with given name and version.
int ml_service_model_get_activated (const char *name, ml_option_h *info)
 Gets the information of activated neural network model with given name.
int ml_service_model_get_all (const char *name, ml_option_h *info_list[], unsigned int *num)
 Gets the list of neural network model with given name.
int ml_service_model_delete (const char *name, const unsigned int version)
 Deletes a model information with given name and version from machine learning service.

Typedefs

typedef void * ml_service_h
 A handle for ml-service instance.

Typedef Documentation

typedef void* ml_service_h

A handle for ml-service instance.

Since :
7.0

Function Documentation

int ml_service_delete_pipeline ( const char *  name)

Deletes the pipeline description with a given name.

Since :
7.0
Parameters:
[in] name The unique name to delete.
Returns:
0 on success. Otherwise a negative error value.
Note:
If the name does not exist in the database, this function returns ML_ERROR_NONE without any errors.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
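A minimal usage sketch (the name "my_pipeline" is a hypothetical example):

```c
#include <nnstreamer/ml-api-service.h>

int status;

/* Remove the stored pipeline description.
 * Returns ML_ERROR_NONE even when the name is not in the database. */
status = ml_service_delete_pipeline ("my_pipeline");
if (status != ML_ERROR_NONE) {
  /* handle DB or filesystem error */
}
```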

int ml_service_destroy ( ml_service_h  handle)

Destroys the given service handle.

If given service handle is created by ml_service_launch_pipeline(), this requests machine learning agent daemon to destroy the pipeline.

Since :
7.0
Parameters:
[in] handle The service handle.
Returns:
0 on Success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to access the pipeline state.
int ml_service_get_pipeline ( const char *  name,
char **  pipeline_desc 
)

Gets the pipeline description with a given name.

Since :
7.0
Remarks:
If the function succeeds, pipeline_desc must be released using g_free().
Parameters:
[in] name The unique name to retrieve.
[out] pipeline_desc The pipeline corresponding with the given name.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.

int ml_service_get_pipeline_state ( ml_service_h  handle,
ml_pipeline_state_e *  state 
)

Gets the state of given handle's pipeline.

Since :
7.0
Parameters:
[in] handle The service handle.
[out] state The pipeline state.
Returns:
0 on Success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to access the pipeline state.
int ml_service_launch_pipeline ( const char *  name,
ml_service_h *  handle 
)

Launches the pipeline of given service and gets the service handle.

This requests machine learning agent daemon to launch a new pipeline of the given service. The pipeline description corresponding to the given name should be set in advance with ml_service_set_pipeline().

Since :
7.0
Remarks:
The handle should be destroyed using ml_service_destroy().
Parameters:
[in] name The service name.
[out] handle Newly created service handle is returned.
Returns:
0 on Success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
ML_ERROR_STREAMS_PIPE Failed to launch the pipeline.
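A typical lifecycle combining ml_service_launch_pipeline() with the start, stop, and destroy functions might look like this sketch (the service name "my_pipeline" is a hypothetical example, assumed to have been stored with ml_service_set_pipeline()):

```c
ml_service_h service = NULL;
int status;

/* Ask the machine learning agent daemon to launch the stored pipeline. */
status = ml_service_launch_pipeline ("my_pipeline", &service);
if (status != ML_ERROR_NONE) {
  /* handle error case */
}

status = ml_service_start_pipeline (service);
if (status != ML_ERROR_NONE) {
  /* handle error case */
}

/* ... the pipeline is running ... */

status = ml_service_stop_pipeline (service);
if (status != ML_ERROR_NONE) {
  /* handle error case */
}

/* Destroying the handle requests the daemon to destroy the pipeline. */
status = ml_service_destroy (service);
```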
int ml_service_model_activate ( const char *  name,
const unsigned int  version 
)

Activates a neural network model with given name and version.

Since :
8.0
Parameters:
[in] name The unique name to indicate the model.
[in] version The version of registered model.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
int ml_service_model_delete ( const char *  name,
const unsigned int  version 
)

Deletes a model information with given name and version from machine learning service.

Since :
8.0
Remarks:
This does not remove the model file from the file system. If version is 0, machine learning service will delete all model information with the given name.
Parameters:
[in] name The unique name to indicate the model.
[in] version The version of registered model.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
int ml_service_model_get ( const char *  name,
const unsigned int  version,
ml_option_h *  info 
)

Gets the information of neural network model with given name and version.

Since :
8.0
Remarks:
If the function succeeds, the info should be released using ml_option_destroy().
Parameters:
[in] name The unique name to indicate the model.
[in] version The version of registered model.
[out] info The handle of model.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
int ml_service_model_get_activated ( const char *  name,
ml_option_h *  info 
)

Gets the information of activated neural network model with given name.

Since :
8.0
Remarks:
If the function succeeds, the info should be released using ml_option_destroy().
Parameters:
[in] name The unique name to indicate the model.
[out] info The handle of activated model.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
int ml_service_model_get_all ( const char *  name,
ml_option_h *  info_list[],
unsigned int *  num 
)

Gets the list of neural network model with given name.

Since :
8.0
Remarks:
If the function succeeds, each handle in info_list should be released using ml_option_destroy().
Parameters:
[in] name The unique name to indicate the model.
[out] info_list The handles of registered models.
[out] num Total number of registered models.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
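For example, enumerating every registered version of a model and releasing the handles afterwards (the model name "my_model" is a hypothetical example; releasing the array itself with g_free() is an assumption based on the GLib conventions used elsewhere in this API):

```c
ml_option_h *info_list = NULL;
unsigned int num = 0, i;
int status;

status = ml_service_model_get_all ("my_model", &info_list, &num);
if (status == ML_ERROR_NONE) {
  for (i = 0; i < num; i++) {
    /* inspect each model's information here */
    ml_option_destroy (info_list[i]);
  }
  g_free (info_list); /* assumption: the array is allocated with g_malloc() */
}
```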
int ml_service_model_register ( const char *  name,
const char *  path,
const bool  activate,
const char *  description,
unsigned int *  version 
)

Registers new information of a neural network model.

Since :
8.0
Remarks:
Only one model can be activated with a given name. If the same name is already registered in machine learning service, this returns no error, and the old model is deactivated when the flag activate is true.
http://tizen.org/privilege/mediastorage is needed if model file is relevant to media storage.
http://tizen.org/privilege/externalstorage is needed if model file is relevant to external storage.
Parameters:
[in] name The unique name to indicate the model.
[in] path The path to neural network model.
[in] activate The flag to set the model to be activated.
[in] description Nullable, description for neural network model.
[out] version The version of registered model.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_PERMISSION_DENIED The application does not have the privilege to access the storage.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
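A registration sketch (the name, path, and description are hypothetical examples; the media storage privilege may be required for a path like the one shown):

```c
unsigned int version = 0;
int status;

/* Register the model file and activate it immediately. */
status = ml_service_model_register ("my_model",
    "/opt/usr/home/owner/media/models/model.tflite",
    true, "sample description", &version);
if (status != ML_ERROR_NONE) {
  /* handle error case (e.g., permission denied, invalid path) */
}
```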
int ml_service_model_update_description ( const char *  name,
const unsigned int  version,
const char *  description 
)

Updates the description of neural network model with given name and version.

Since :
8.0
Parameters:
[in] name The unique name to indicate the model.
[in] version The version of registered model.
[in] description The description for neural network model.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
int ml_service_query_create ( ml_option_h  option,
ml_service_h *  handle 
)

Creates query service handle with given ml-option handle.

Since :
7.0
Remarks:
The handle should be destroyed using ml_service_destroy().
Parameters:
[in] option The option used for creating query service.
[out] handle Newly created query service handle is returned.
Returns:
0 on Success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
ML_ERROR_STREAMS_PIPE Failed to launch the pipeline.
ML_ERROR_TRY_AGAIN The pipeline is not ready yet.
int ml_service_query_request ( ml_service_h  handle,
const ml_tensors_data_h  input,
ml_tensors_data_h *  output 
)

Requests the query service to process the input and produce an output.

Since :
7.0
Remarks:
If the function succeeds, the output should be released using ml_tensors_data_destroy().
Parameters:
[in] handle The query service handle created by ml_service_query_create().
[in] input The handle of input tensors.
[out] output The handle of output tensors.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Given parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
ML_ERROR_STREAMS_PIPE The input is incompatible with the pipeline.
ML_ERROR_TRY_AGAIN The pipeline is not ready yet.
ML_ERROR_TIMED_OUT Failed to get output from the query service.
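A sketch of the query flow (the connection options passed through the ml-option handle are left as a placeholder comment because the supported keys depend on the query server configuration; input is assumed to be a valid tensors-data handle prepared in advance):

```c
ml_option_h option = NULL;
ml_service_h query = NULL;
ml_tensors_data_h input = NULL;  /* must be allocated and filled beforehand */
ml_tensors_data_h output = NULL;
int status;

ml_option_create (&option);
/* Set connection options here with ml_option_set(); the required
 * keys depend on the query server configuration. */

status = ml_service_query_create (option, &query);
ml_option_destroy (option);
if (status != ML_ERROR_NONE) {
  /* handle error case (e.g., ML_ERROR_TRY_AGAIN if the pipeline is not ready) */
}

status = ml_service_query_request (query, input, &output);
if (status == ML_ERROR_NONE) {
  /* consume output */
  ml_tensors_data_destroy (output);
}

ml_service_destroy (query);
```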
int ml_service_set_pipeline ( const char *  name,
const char *  pipeline_desc 
)

Sets the pipeline description with a given name.

Since :
7.0
Remarks:
If the name already exists, the pipeline description is overwritten. Overwriting an existing description is restricted to the application or service that set it. However, users should keep their names unexposed to prevent unexpected overwriting.
Parameters:
[in] name Unique name to retrieve the associated pipeline description.
[in] pipeline_desc The pipeline description to be stored.
Returns:
0 on success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.

Here is an example of the usage:

 const gchar my_pipeline[] = "videotestsrc is-live=true ! videoconvert ! tensor_converter ! tensor_sink async=false";
 gchar *pipeline = NULL;
 int status;
 ml_pipeline_h handle = NULL;

 // Set pipeline description.
 status = ml_service_set_pipeline ("my_pipeline", my_pipeline);
 if (status != ML_ERROR_NONE) {
   // handle error case
   goto error;
 }

 // Example to construct a pipeline with stored pipeline description.
 // Users may register intelligence pipelines for other processes and fetch such registered pipelines.
 // For example, a developer adds a pipeline which includes preprocessing and invoking a neural network model,
 // then an application can fetch and construct this for intelligence service.
 status = ml_service_get_pipeline ("my_pipeline", &pipeline);
 if (status != ML_ERROR_NONE) {
   // handle error case
   goto error;
 }

 status = ml_pipeline_construct (pipeline, NULL, NULL, &handle);
 if (status != ML_ERROR_NONE) {
   // handle error case
   goto error;
 }

 error:
 ml_pipeline_destroy (handle);
 g_free (pipeline);

int ml_service_start_pipeline ( ml_service_h  handle)

Starts the pipeline of given service handle.

This requests machine learning agent daemon to start the pipeline.

Since :
7.0
Parameters:
[in] handle The service handle.
Returns:
0 on Success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to start the pipeline.

int ml_service_stop_pipeline ( ml_service_h  handle)

Stops the pipeline of given service handle.

This requests machine learning agent daemon to stop the pipeline.

Since :
7.0
Parameters:
[in] handle The service handle.
Returns:
0 on Success. Otherwise a negative error value.
Return values:
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER Fail. The parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to stop the pipeline.