Why Choose Moonshine?
Moonshine combines speed, accuracy, and flexibility in one powerful package that runs entirely on your local device without requiring cloud connectivity.
Process audio files at 5x the speed of OpenAI's Whisper without sacrificing accuracy.
Enjoy the same high-quality transcription accuracy as industry-leading models.
Available in both open-source and commercial versions to suit your project needs.
Powerful Speech Recognition
Moonshine uses advanced optimization techniques to deliver faster results without compromising quality.
Optimized Architecture
Moonshine's architecture has been carefully optimized for speed while maintaining the accuracy of the original Whisper model.
Efficient Processing
Our framework uses parallel processing and advanced caching techniques to dramatically reduce transcription time.
Easy Integration
Simple API makes it easy to integrate Moonshine into your existing applications and workflows.
Simple Python API
import moonshine
# Load the model
model = moonshine.load_model("base")
# Transcribe audio
result = model.transcribe("audio.mp3")
# Print the transcription
print(result["text"])
Just a few lines of code to get high-quality transcriptions.
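If your audio comes from a microphone buffer rather than a file, note that raw capture is usually 16-bit integer PCM, while speech models generally expect mono float32 samples in [-1, 1]. A small conversion sketch (the helper name is ours, not part of Moonshine's API):

```python
import numpy as np

def pcm16_to_float32(raw: bytes) -> np.ndarray:
    """Convert raw 16-bit PCM bytes to float32 samples in [-1.0, 1.0]."""
    samples = np.frombuffer(raw, dtype=np.int16)
    return samples.astype(np.float32) / 32768.0

# Example: one second of silence at 16 kHz
silence = b"\x00\x00" * 16000
audio = pcm16_to_float32(silence)
print(audio.dtype, audio.shape)  # float32 (16000,)
```

The resulting array can be handed to the transcription call in place of a file path.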
Moonshine in the Browser
A powerful JavaScript library that adds reliable speech recognition to any web application with zero backend requirements.
JavaScript Library for the Web
Moonshine's JavaScript library provides a drop-in replacement for the Web Speech API, ensuring consistent speech recognition across all major browsers and platforms.
Cross-Browser Compatibility
Works reliably in Chrome, Firefox, Safari, Edge, and mobile browsers, eliminating inconsistencies in native implementations.
Privacy-Focused
All processing happens in the user's browser, ensuring speech data never leaves their device without explicit permission.
Zero Backend Costs
Eliminate server costs and complexity by running speech recognition entirely on the client side.
Use Cases
Moonshine's browser integration enables powerful voice capabilities for web applications.
Voice-Enabled AI Chatbots
Allow users to speak naturally to any AI chatbot and hear responses spoken back, creating more intuitive interactions.
Live Captions
Add accurate, real-time captions to any media content, improving accessibility and user experience.
Voice Commands
Implement voice controls for web applications, enabling hands-free navigation and operation.
// Install via npm
npm install moonshine-web
// Import and initialize
import { MoonshineWebSpeech } from 'moonshine-web';
// Initialize and polyfill Web Speech API
MoonshineWebSpeech.polyfill();
// Now use standard Web Speech API
const recognition = new window.SpeechRecognition();
recognition.continuous = true;
recognition.onresult = (event) => {
  // With continuous recognition, read the most recent result, not the first one
  const transcript = event.results[event.results.length - 1][0].transcript;
  console.log(transcript);
};
recognition.start();
Moonshine seamlessly replaces the native Web Speech API with a more reliable implementation.
Voice for Every Device
Add voice interfaces to any hardware with Moonshine's lightweight embedded solution.
Ultra-Lightweight Design
Moonshine's optimized architecture requires minimal resources, making it perfect for embedded systems and IoT devices.
Sub-$5 Hardware Compatible
Runs efficiently on inexpensive SoCs and microcontrollers, making voice interfaces accessible for any budget.
100% Offline Operation
No network connection required — all processing happens locally on the device for complete privacy and reliability.
Technical Specifications
- Minimum RAM: 256 MB
- Storage Footprint: ~50 MB
- CPU Requirements: 1 GHz ARM Cortex-A53 or equivalent
- Power Consumption: 0.5 W - 2 W typical
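To sanity-check those figures against your own design, the audio buffer itself is tiny next to the 256 MB floor: 16 kHz mono 16-bit audio costs 32 KB per second. A quick back-of-the-envelope calculation (the 30-second rolling-buffer length is an illustrative choice, not a Moonshine requirement):

```python
SAMPLE_RATE = 16000      # Hz, typical speech-model input rate
BYTES_PER_SAMPLE = 2     # 16-bit mono PCM
BUFFER_SECONDS = 30      # illustrative rolling-buffer length

buffer_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * BUFFER_SECONDS
print(f"{buffer_bytes / 1024:.0f} KB for {BUFFER_SECONDS} s of audio")  # 938 KB
```

In other words, nearly all of the RAM budget goes to the model weights and runtime, not the audio itself.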
Applications
Add voice interfaces to virtually any device or system without cloud dependencies.
Create interactive toys with natural voice recognition that work without internet connectivity.
Enhance appliances, speakers, and home devices with reliable voice control that works offline.
Enable hands-free operation in manufacturing, logistics, and field service applications.
Create accessible healthcare devices with voice interfaces that maintain patient privacy.
# Install Moonshine on Raspberry Pi
pip install moonshine-stt
# Create a simple voice assistant
import moonshine
import pyaudio
import numpy as np
# Initialize the model
model = moonshine.load_model("tiny") # Smallest model for embedded devices
# Configure audio input
p = pyaudio.PyAudio()
stream = p.open(
    format=pyaudio.paInt16,
    channels=1,
    rate=16000,
    input=True,
    frames_per_buffer=8000,
)
print("Listening... (Press Ctrl+C to exit)")
try:
    while True:
        # Capture audio
        audio_data = stream.read(16000)
        audio_np = np.frombuffer(audio_data, dtype=np.int16)
        # Process with Moonshine
        result = model.transcribe(audio_np)
        if result["text"]:
            print(f"Recognized: {result['text']}")
            # Simple command handling
            if "turn on" in result["text"].lower():
                print("Action: Turning on device")
            elif "turn off" in result["text"].lower():
                print("Action: Turning off device")
except KeyboardInterrupt:
    print("Stopping...")
finally:
    stream.stop_stream()
    stream.close()
    p.terminate()
This example shows a basic voice command system running entirely on a Raspberry Pi.
See Moonshine in Action
Try our interactive demo to experience the speed and accuracy of Moonshine.
Record audio to see streaming transcription results
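Under the hood, streaming transcription is commonly approximated by sliding a window over the incoming samples and re-transcribing the most recent chunk. A windowing sketch (the window and stride lengths are illustrative assumptions; Moonshine's actual streaming internals may differ):

```python
import numpy as np

def sliding_windows(audio: np.ndarray, window: int, stride: int):
    """Yield overlapping chunks of `window` samples, advancing by `stride` samples."""
    for start in range(0, max(len(audio) - window, 0) + 1, stride):
        yield audio[start:start + window]

audio = np.zeros(48000, dtype=np.float32)   # 3 s of dummy audio at 16 kHz
chunks = list(sliding_windows(audio, window=32000, stride=8000))
print(len(chunks), chunks[0].shape)  # 3 (32000,) -- each chunk would be transcribed in turn
```

Shorter strides give lower caption latency at the cost of more frequent model invocations.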