GenerativeModel

class GenerativeModel


A controller for communicating with the API of a given multimodal model (for example, Gemini).
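A GenerativeModel is typically constructed once and reused across calls. A minimal construction sketch, assuming a constructor that takes the model name and an API key (the model name and key handling here are illustrative, and the exact constructor parameters may vary by SDK version):

```kotlin
// Hypothetical setup: "gemini-1.5-flash" and the key source are placeholders.
val model = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = "YOUR_API_KEY",
)
```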

Summary

Public functions

suspend CountTokensResponse
countTokens(prompt: Bitmap)

Counts the number of tokens in the image prompt.

suspend CountTokensResponse
countTokens(vararg prompt: Content)

Counts the number of tokens in a prompt.

suspend CountTokensResponse
countTokens(prompt: String)

Counts the number of tokens in the text prompt.

suspend GenerateContentResponse
generateContent(prompt: Bitmap)

Generates a GenerateContentResponse from the backend with the provided image prompt.

suspend GenerateContentResponse
generateContent(vararg prompt: Content)

Generates a GenerateContentResponse from the backend with the provided Content.

suspend GenerateContentResponse
generateContent(prompt: String)

Generates a GenerateContentResponse from the backend with the provided text prompt.

Flow<GenerateContentResponse>
generateContentStream(prompt: Bitmap)

Generates a streaming response from the backend with the provided image prompt.

Flow<GenerateContentResponse>
generateContentStream(vararg prompt: Content)

Generates a streaming response from the backend with the provided Content.

Flow<GenerateContentResponse>
generateContentStream(prompt: String)

Generates a streaming response from the backend with the provided text prompt.

Chat
startChat(history: List<Content>)

Creates a Chat instance which internally tracks the ongoing conversation with the model.

Public functions

countTokens

suspend fun countTokens(prompt: Bitmap): CountTokensResponse

Counts the number of tokens in the image prompt.

Parameters
prompt: Bitmap

The image to be converted to a single piece of Content to count the tokens of.

Returns
CountTokensResponse

A CountTokensResponse containing the number of tokens in the prompt.

countTokens

suspend fun countTokens(vararg prompt: Content): CountTokensResponse

Counts the number of tokens in a prompt.

Parameters
vararg prompt: Content

A group of Content to count tokens of.

Returns
CountTokensResponse

A CountTokensResponse containing the number of tokens in the prompt.

countTokens

suspend fun countTokens(prompt: String): CountTokensResponse

Counts the number of tokens in the text prompt.

Parameters
prompt: String

The text to be converted to a single piece of Content to count the tokens of.

Returns
CountTokensResponse

A CountTokensResponse containing the number of tokens in the prompt.
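Counting tokens before sending a prompt can be used to stay within a model's context limit. A sketch of the text overload, assuming the response exposes the total via a `totalTokens` property (not shown in this page):

```kotlin
suspend fun reportTokenCount(model: GenerativeModel) {
    // countTokens suspends until the backend returns the count.
    val response: CountTokensResponse = model.countTokens("Summarize this article.")
    println("Prompt uses ${response.totalTokens} tokens")
}
```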

generateContent

suspend fun generateContent(prompt: Bitmap): GenerateContentResponse

Generates a GenerateContentResponse from the backend with the provided image prompt.

Parameters
prompt: Bitmap

The image to be converted into a single piece of Content to send to the model.

Returns
GenerateContentResponse

A GenerateContentResponse after some delay. The function should be called from a suspending context to properly manage concurrency.

generateContent

suspend fun generateContent(vararg prompt: Content): GenerateContentResponse

Generates a GenerateContentResponse from the backend with the provided Content.

Parameters
vararg prompt: Content

Content to send to the model.

Returns
GenerateContentResponse

A GenerateContentResponse. The function should be called from a suspending context to properly manage concurrency.

generateContent

suspend fun generateContent(prompt: String): GenerateContentResponse

Generates a GenerateContentResponse from the backend with the provided text prompt.

Parameters
prompt: String

The text to be converted into a single piece of Content to send to the model.

Returns
GenerateContentResponse

A GenerateContentResponse after some delay. The function should be called from a suspending context to properly manage concurrency.
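A sketch of the text overload called from a coroutine; the `text` accessor on the response is assumed here as a convenience for reading the generated text:

```kotlin
suspend fun askModel(model: GenerativeModel) {
    // The call suspends until the backend returns the full response.
    val response: GenerateContentResponse =
        model.generateContent("Explain coroutines in one sentence.")
    println(response.text) // `text` is assumed to expose the response's text part
}
```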

generateContentStream

fun generateContentStream(prompt: Bitmap): Flow<GenerateContentResponse>

Generates a streaming response from the backend with the provided image prompt.

Parameters
prompt: Bitmap

The image to be converted into a single piece of Content to send to the model.

Returns
Flow<GenerateContentResponse>

A Flow which will emit responses as they are returned from the model.

generateContentStream

fun generateContentStream(vararg prompt: Content): Flow<GenerateContentResponse>

Generates a streaming response from the backend with the provided Content.

Parameters
vararg prompt: Content

Content to send to the model.

Returns
Flow<GenerateContentResponse>

A Flow which will emit responses as they are returned from the model.

generateContentStream

fun generateContentStream(prompt: String): Flow<GenerateContentResponse>

Generates a streaming response from the backend with the provided text prompt.

Parameters
prompt: String

The text to be converted into a single piece of Content to send to the model.

Returns
Flow<GenerateContentResponse>

A Flow which will emit responses as they are returned from the model.
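The streaming overloads return a cold Flow, so nothing is sent until the Flow is collected. A sketch that prints partial output as it arrives (the `text` accessor on each emitted response is assumed):

```kotlin
suspend fun streamAnswer(model: GenerativeModel) {
    // Each emitted GenerateContentResponse carries the next chunk of the answer.
    model.generateContentStream("Write a short poem about Kotlin.")
        .collect { chunk -> print(chunk.text) }
}
```

Streaming lets a UI render text incrementally instead of waiting for the complete response.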

startChat

fun startChat(history: List<Content> = emptyList()): Chat

Creates a Chat instance which internally tracks the ongoing conversation with the model.
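Because Chat tracks history internally, follow-up messages can refer back to earlier turns without resending them. A sketch assuming Chat exposes a suspending `sendMessage` function (not documented on this page):

```kotlin
suspend fun chatExample(model: GenerativeModel) {
    val chat = model.startChat() // history defaults to emptyList()
    // Hypothetical sendMessage usage; the Chat API surface is assumed here.
    val first = chat.sendMessage("Hello!")
    println(first.text)
    val followUp = chat.sendMessage("What did I just say?")
    println(followUp.text) // Chat supplies the prior turns automatically
}
```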

Public properties

generationConfig

val generationConfig: GenerationConfig?

modelName

val modelName: String

safetySettings

val safetySettings: List<SafetySetting>?

systemInstruction

val systemInstruction: Content?

toolConfig

val toolConfig: ToolConfig?

tools

val tools: List<Tool>?