Ollama
The Ollama plugin enables local LLM inference without requiring API keys.
Installation
Add the plugin to your Maven pom.xml:

```xml
<dependency>
  <groupId>com.google.genkit</groupId>
  <artifactId>genkit-plugin-ollama</artifactId>
  <version>1.0.0-SNAPSHOT</version>
</dependency>
```
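For Gradle builds, the equivalent dependency (translated directly from the Maven coordinates above) would be:

```groovy
dependencies {
    implementation 'com.google.genkit:genkit-plugin-ollama:1.0.0-SNAPSHOT'
}
```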
Prerequisites

Install and run Ollama:
```sh
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
```
```sh
# Pull a model
ollama pull gemma3n:e4b
```
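Before wiring up the plugin, you can confirm the Ollama server is running by querying its REST API; the /api/tags endpoint lists the models pulled locally:

```sh
# Should return a JSON object listing pulled models
curl http://localhost:11434/api/tags
```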
Configuration

```sh
# Optional: configure the Ollama host (default: http://localhost:11434)
export OLLAMA_HOST=http://localhost:11434
```

```java
import com.google.genkit.plugins.ollama.OllamaPlugin;

Genkit genkit = Genkit.builder()
    .plugin(OllamaPlugin.create())
    .build();

ModelResponse response = genkit.generate(
    GenerateOptions.builder()
        .model("ollama/gemma3n")
        .prompt("Tell me about AI")
        .build());
```
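The host fallback described above can be sketched in plain Java. Note that `OllamaHostResolver` and `resolveHost` are illustrative names for this sketch, not part of the plugin's API:

```java
public class OllamaHostResolver {
    // Default endpoint used when OLLAMA_HOST is not set (per the docs above).
    static final String DEFAULT_HOST = "http://localhost:11434";

    // Fall back to the default when the variable is unset or blank.
    static String resolveHost(String envValue) {
        return (envValue == null || envValue.isBlank()) ? DEFAULT_HOST : envValue;
    }

    public static void main(String[] args) {
        System.out.println(resolveHost(System.getenv("OLLAMA_HOST")));
    }
}
```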
Models

Use any model available in Ollama. Popular choices:
- ollama/gemma3n: Google Gemma 3n
- ollama/llama3.1: Meta Llama 3.1
- ollama/mistral: Mistral 7B
- ollama/codellama: Code Llama (code-focused)
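Switching between these models only changes the model string passed to GenerateOptions. A sketch reusing the generate call from the configuration section (the model must already be pulled, e.g. with ollama pull llama3.1):

```java
ModelResponse response = genkit.generate(
    GenerateOptions.builder()
        .model("ollama/llama3.1")  // any model shown by `ollama list`
        .prompt("Summarize the benefits of local inference")
        .build());
```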
Features
Section titled “Features”- Text generation, streaming, local-first (no API key required)
Sample
See the ollama sample.