AI
Mastra
Overview
Voice
Initialize the OpenAI voice provider with the tts-1-hd speech model and the whisper-1 listening model:

import { OpenAIVoice } from "@mastra/voice-openai";

const voice = new OpenAIVoice({
  speechModel: {
    name: "tts-1-hd",
  },
  listeningModel: {
    name: "whisper-1",
  },
  speaker: "alloy", // Default voice
});

Initialize the voice provider with Azure:
import { AzureVoice } from "@mastra/voice-azure";

const voice = new AzureVoice({
  speechModel: {
    name: "neural",
    apiKey: "your-azure-speech-api-key",
    region: "eastus",
  },
  listeningModel: {
    name: "whisper",
    apiKey: "your-azure-speech-api-key",
    region: "eastus",
  },
  speaker: "en-US-JennyNeural", // Default voice
});

Initialize the realtime voice provider:
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const voice = new OpenAIRealtimeVoice({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini-realtime",
  speaker: "alloy",
});

Ollama
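Several of the pulls below (bge-m3, embeddinggemma) are embedding models; the usual way to compare the vectors they return is cosine similarity. A minimal, self-contained sketch — the vectors here are made up for illustration, not actual Ollama embeddings:

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), ranging over [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Made-up 4-dimensional "embeddings" (real models return hundreds of dims).
const query = [0.1, 0.3, -0.2, 0.05];
const docA = [0.12, 0.28, -0.18, 0.04]; // similar direction to query
const docB = [-0.3, 0.1, 0.4, -0.2];    // different direction

console.log(cosineSimilarity(query, docA)); // positive, close to 1
console.log(cosineSimilarity(query, docB)); // negative
```

In a retrieval setup you would rank documents by this score against the query embedding; the actual vectors would come from one of the embedding models pulled below.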
# First, quit the running Ollama desktop app.
# Then start the Ollama service (set OLLAMA_DEBUG=1 or 2 for debug logs):
OLLAMA_DEBUG=2 ollama serve
# to follow the server logs
tail -f ~/.ollama/logs/server.log

# list installed models
ollama ls
ollama pull llama3.2:latest
ollama pull bge-m3:latest
ollama pull embeddinggemma:latest
# ollama pull qwen3-embedding:latest

TODO
- AG-UI gomoku demo
- Using Mastra Agent with the OpenAI Realtime API from your Browser, Source
- AG-UI Mastra Workshop
- How to Build AI Workflows with Mastra and AI SDK (Full Tutorial) video, code
References