Optional fields (ChatCohereInput):

model: The name of the model to use. Default: "command"

streaming: Whether or not to stream the response. Default: false

temperature: What sampling temperature to use, between 0.0 and 2.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. Default: 0.3
Integration with ChatCohere
Example
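A minimal usage sketch, assuming the `@langchain/cohere` and `@langchain/core` packages are installed and a `COHERE_API_KEY` environment variable is set; the field values shown are the defaults listed above.

```typescript
import { ChatCohere } from "@langchain/cohere";
import { HumanMessage } from "@langchain/core/messages";

// Construct the model with the documented defaults made explicit.
const model = new ChatCohere({
  model: "command",   // default model name
  temperature: 0.3,   // default sampling temperature
  streaming: false,   // default: return the full response at once
});

// Single-shot invocation: send a message and await the full reply.
const response = await model.invoke([
  new HumanMessage("What is the capital of France?"),
]);
console.log(response.content);

// For token-by-token output, use .stream() instead:
const stream = await model.stream("Tell me a short joke.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```

Note that `.invoke()` and `.stream()` both require a valid Cohere API key at runtime; the exact package entry points may differ across LangChain versions.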