Version: Beta

Create Profile

Create the summarization Profile that specifies both prompting instructions and session context configuration.

Request Body required
    slice object optional
    mode string optional

    Controls whether the earliest or latest events are removed when slicing the session.

    Possible values: [UNSPECIFIED, FIRST, LAST, TIMESTAMP]

    Default value: FIRST

    event_limit int32 optional
    duration_limit_ms int64 optional
    start_timestamp date-time optional

    If specified, the event limit and duration limit will be applied to the time beginning at this timestamp.
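
    For example, a slice configuration that keeps only the first 100 events occurring within the first five minutes after a given timestamp might look like the sketch below (all values are illustrative):

    ```json
    {
      "slice": {
        "mode": "FIRST",
        "event_limit": 100,
        "duration_limit_ms": 300000,
        "start_timestamp": "2024-05-01T12:00:00Z"
      }
    }
    ```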

    context object optional
    include string[] optional

    Controls which session context elements to include; takes precedence over the individual exclusion fields. Can be 'user', 'org', 'location', 'device'. If neither this field nor exclude is specified, all context elements are included.

    exclude string[] optional

    Controls which session context elements to exclude; takes precedence over the individual exclusion fields. Can be 'user', 'org', 'location', 'device'. If neither this field nor include is specified, all context elements are included.

    exclude_org_context boolean optional
    exclude_user_context boolean optional
    exclude_location boolean optional
    exclude_device boolean optional
    exclude_descriptions boolean optional
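
    For example, a context configuration that keeps only user and device context (the include list takes precedence over the individual exclusion fields) might look like this sketch:

    ```json
    {
      "context": {
        "include": ["user", "device"],
        "exclude_descriptions": true
      }
    }
    ```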
    events object optional
    exclude_types string[] optional
    include_types string[] optional
    exclude_defined_events boolean optional
    exclude_api_events boolean optional
    exclude_event_timestamps boolean optional
    exclude_selectors boolean optional
    include_selector_tags boolean optional
    trim_to_last_n_selectors int32 optional
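
    A sketch of an events configuration is shown below; the type names are hypothetical placeholders, since valid event types depend on how your sessions are instrumented:

    ```json
    {
      "events": {
        "include_types": ["click", "page_view"],
        "exclude_event_timestamps": true,
        "trim_to_last_n_selectors": 3
      }
    }
    ```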
    cache object optional
    enable_event_cache boolean optional
    llm object optional

    Configuration used to select the LLM model and related prompt and inference options.

    pre_prompt string optional

    Text to be included in the Generative AI prompt before including the contextual representation of the requested session.

    post_prompt string optional

    Text to be included in the Generative AI prompt after including the contextual representation of the requested session.

    output_schema JSON Schema optional

    JSON schema to provide to Generative AI for formatting of result and to be used to validate the proper output format.

    model string optional

    Possible values: [GEMINI_2_FLASH, GEMINI_2_FLASH_LITE]

    Default value: GEMINI_2_FLASH_LITE

    Generative AI large language model to use.

    temperature float optional

    Controls the randomness of text generated by a large language model.
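
Putting it together, a complete request body might look like the sketch below. The prompt text, field values, and embedded schema-free llm options are illustrative, not prescriptive:

```json
{
  "slice": { "mode": "LAST", "event_limit": 200 },
  "context": { "exclude": ["location"] },
  "events": { "exclude_api_events": true },
  "cache": { "enable_event_cache": true },
  "llm": {
    "model": "GEMINI_2_FLASH",
    "pre_prompt": "Summarize the following user session.",
    "post_prompt": "Respond with a single short paragraph.",
    "temperature": 0.2
  }
}
```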

Responses
200

A successful response.


{}