Tizen Native API
7.0
The NNStreamer Service API provides interfaces to store and fetch the pipeline description for AI application developers.
Required Header
#include <nnstreamer/ml-api-service.h>
Overview
The NNStreamer Service API provides utility interfaces for AI application developers.
This API set allows the following operations with NNStreamer:
- Set and get the pipeline description with a given name.
- Delete the pipeline description with a given name.
Note that this API set is designed to be thread-safe.
Related Features
This API is related to the following features:
- http://tizen.org/feature/machine_learning
- http://tizen.org/feature/machine_learning.service
Functions
int ml_service_set_pipeline (const char *name, const char *pipeline_desc)
Sets the pipeline description with a given name.
int ml_service_get_pipeline (const char *name, char **pipeline_desc)
Gets the pipeline description with a given name.
int ml_service_delete_pipeline (const char *name)
Deletes the pipeline description with a given name.
int ml_service_launch_pipeline (const char *name, ml_service_h *handle)
Launches the pipeline of given service and gets the service handle.
int ml_service_start_pipeline (ml_service_h handle)
Starts the pipeline of given service handle.
int ml_service_stop_pipeline (ml_service_h handle)
Stops the pipeline of given service handle.
int ml_service_destroy (ml_service_h handle)
Destroys the given service handle.
int ml_service_get_pipeline_state (ml_service_h handle, ml_pipeline_state_e *state)
Gets the state of given handle's pipeline.
int ml_service_query_create (ml_option_h option, ml_service_h *handle)
Creates query service handle with given ml-option handle.
int ml_service_query_request (ml_service_h handle, const ml_tensors_data_h input, ml_tensors_data_h *output)
Requests the query service to process the input and produce an output.
Typedefs
typedef void * ml_service_h
A handle for an ml-service instance.
Typedef Documentation
typedef void* ml_service_h
A handle for an ml-service instance.
- Since :
- 7.0
Function Documentation
int ml_service_delete_pipeline (const char *name)
Deletes the pipeline description with a given name.
- Since :
- 7.0
- Parameters:
-
[in] name The unique name to delete.
- Returns:
- 0 on success. Otherwise a negative error value.
- Note:
- If the name does not exist in the database, this function returns ML_ERROR_NONE without any errors.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
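A minimal usage sketch (assuming a description named "my_pipeline" was registered earlier with ml_service_set_pipeline()):

```c
#include <nnstreamer/ml-api-service.h>

// Remove a stored pipeline description. Per the note above, deleting
// a name that does not exist still returns ML_ERROR_NONE.
int status = ml_service_delete_pipeline ("my_pipeline");
if (status != ML_ERROR_NONE) {
  // Handle DB or filesystem errors here.
}
```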
int ml_service_destroy (ml_service_h handle)
Destroys the given service handle.
If the given service handle was created by ml_service_launch_pipeline(), this requests the machine learning agent daemon to destroy the pipeline.
- Since :
- 7.0
- Parameters:
-
[in] handle The service handle.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to access the pipeline state.
int ml_service_get_pipeline (const char *name, char **pipeline_desc)
Gets the pipeline description with a given name.
- Since :
- 7.0
- Remarks:
- If the function succeeds, pipeline_desc must be released using g_free().
- Parameters:
-
[in] name The unique name to retrieve.
[out] pipeline_desc The pipeline description corresponding to the given name.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
int ml_service_get_pipeline_state (ml_service_h handle, ml_pipeline_state_e *state)
Gets the state of given handle's pipeline.
- Since :
- 7.0
- Parameters:
-
[in] handle The service handle.
[out] state The pipeline state.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to access the pipeline state.
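A short sketch of checking the pipeline state, assuming handle was obtained from ml_service_launch_pipeline():

```c
#include <nnstreamer/ml-api-service.h>

// Query the current state of a launched service pipeline.
ml_pipeline_state_e state;
int status = ml_service_get_pipeline_state (handle, &state);
if (status == ML_ERROR_NONE && state == ML_PIPELINE_STATE_PLAYING) {
  // The pipeline is running.
}
```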
int ml_service_launch_pipeline (const char *name, ml_service_h *handle)
Launches the pipeline of given service and gets the service handle.
This requests the machine learning agent daemon to launch a new pipeline for the given service. The pipeline description of the given service name should be set in advance with ml_service_set_pipeline().
- Since :
- 7.0
- Remarks:
- The handle should be destroyed using ml_service_destroy().
- Parameters:
-
[in] name The service name.
[out] handle The newly created service handle.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
ML_ERROR_STREAMS_PIPE Failed to launch the pipeline.
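A typical service lifecycle, sketched under the assumption that a description named "my_pipeline" was stored beforehand with ml_service_set_pipeline():

```c
#include <nnstreamer/ml-api-service.h>

ml_service_h service;
int status;

// Ask the agent daemon to launch the stored pipeline.
status = ml_service_launch_pipeline ("my_pipeline", &service);
if (status != ML_ERROR_NONE)
  return;

// Start the pipeline, run the application logic, then stop it.
ml_service_start_pipeline (service);
// ... application logic ...
ml_service_stop_pipeline (service);

// Destroying the handle also requests the daemon to destroy the pipeline.
ml_service_destroy (service);
```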
int ml_service_query_create (ml_option_h option, ml_service_h *handle)
Creates a query service handle with the given ml-option handle.
- Since :
- 7.0
- Remarks:
- The handle should be destroyed using ml_service_destroy().
- Parameters:
-
[in] option The option used for creating the query service.
[out] handle The newly created query service handle.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
ML_ERROR_STREAMS_PIPE Failed to launch the pipeline.
ML_ERROR_TRY_AGAIN The pipeline is not ready yet.
int ml_service_query_request (ml_service_h handle, const ml_tensors_data_h input, ml_tensors_data_h *output)
Requests the query service to process the input and produce an output.
- Since :
- 7.0
- Parameters:
-
[in] handle The query service handle created by ml_service_query_create().
[in] input The handle of input tensors.
[out] output The handle of output tensors. The caller is responsible for freeing the allocated data with ml_tensors_data_destroy().
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_OUT_OF_MEMORY Failed to allocate required memory.
ML_ERROR_STREAMS_PIPE The input is incompatible with the pipeline.
ML_ERROR_TRY_AGAIN The pipeline is not ready yet.
ML_ERROR_TIMED_OUT Failed to get output from the query service.
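A sketch of a query client round trip. The option key "host" shown below is illustrative: the actual keys and values must match what the remote query service expects, and 'input' is assumed to be prepared with ml_tensors_data_create() beforehand.

```c
#include <nnstreamer/ml-api-service.h>

ml_option_h option;
ml_service_h client;
ml_tensors_data_h output;
int status;

// Build the option handle describing how to reach the query service.
ml_option_create (&option);
ml_option_set (option, "host", g_strdup ("localhost"), g_free);

status = ml_service_query_create (option, &client);
if (status == ML_ERROR_NONE) {
  // Send the input tensors and wait for the processed output.
  status = ml_service_query_request (client, input, &output);
  if (status == ML_ERROR_NONE)
    ml_tensors_data_destroy (output); // The caller frees the output.
  ml_service_destroy (client);
}
ml_option_destroy (option);
```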
int ml_service_set_pipeline (const char *name, const char *pipeline_desc)
Sets the pipeline description with a given name.
- Since :
- 7.0
- Remarks:
- If the name already exists, the pipeline description is overwritten. Overwriting an existing description is restricted to the application or service that set it. Nevertheless, users should keep the name unexposed to prevent unexpected overwriting.
- Parameters:
-
[in] name Unique name to retrieve the associated pipeline description.
[in] pipeline_desc The pipeline description to be stored.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_IO_ERROR The operation of DB or filesystem has failed.
Here is an example of the usage:
const gchar my_pipeline[] = "videotestsrc is-live=true ! videoconvert ! tensor_converter ! tensor_sink async=false";
gchar *pipeline;
int status;
ml_pipeline_h handle;

// Set the pipeline description.
status = ml_service_set_pipeline ("my_pipeline", my_pipeline);
if (status != ML_ERROR_NONE) {
  // Handle the error case.
  goto error;
}

// Example: construct a pipeline from the stored pipeline description.
// Users may register intelligence pipelines for other processes and fetch such registered pipelines.
// For example, a developer adds a pipeline that includes preprocessing and invokes a neural network model;
// an application can then fetch and construct it for an intelligence service.
status = ml_service_get_pipeline ("my_pipeline", &pipeline);
if (status != ML_ERROR_NONE) {
  // Handle the error case.
  goto error;
}

status = ml_pipeline_construct (pipeline, NULL, NULL, &handle);
if (status != ML_ERROR_NONE) {
  // Handle the error case.
  goto error;
}

error:
ml_pipeline_destroy (handle);
g_free (pipeline);
int ml_service_start_pipeline (ml_service_h handle)
Starts the pipeline of given service handle.
This requests the machine learning agent daemon to start the pipeline.
- Since :
- 7.0
- Parameters:
-
[in] handle The service handle.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to start the pipeline.
int ml_service_stop_pipeline (ml_service_h handle)
Stops the pipeline of given service handle.
This requests the machine learning agent daemon to stop the pipeline.
- Since :
- 7.0
- Parameters:
-
[in] handle The service handle.
- Returns:
- 0 on success. Otherwise a negative error value.
- Return values:
-
ML_ERROR_NONE Successful.
ML_ERROR_NOT_SUPPORTED Not supported.
ML_ERROR_INVALID_PARAMETER The given parameter is invalid.
ML_ERROR_STREAMS_PIPE Failed to stop the pipeline.