I built a teddy bear that lives inside a hologram. It listens to your voice, understands what you say, responds with emotion, and moves accordingly. All in real time.
HoloBear combines a physical hologram display with a full AI conversation pipeline: voice recognition through Whisper, emotional intelligence through the Claude API, natural speech through ElevenLabs, and expressive 3D animation through Unity. The bear doesn't just talk. It reacts, dances, thinks, laughs, and sleeps.
This entire system was designed, built, and integrated by one person.
Live demo
How it works
When you speak, Whisper transcribes your voice into text. That text is sent to the Claude API, which generates a response tagged with an emotion. The emotion tag triggers a matching animation on the 3D bear model in Unity, while ElevenLabs converts the response into natural speech. The bear moves its mouth in sync with the audio through a real-time lip-sync system.
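One way the emotion tagging step above could be wired up, as a minimal sketch: assume Claude is prompted to prefix each reply with a bracketed tag like "[happy] ...", which the Unity side maps to an animation. The tag format, emotion set, and function names here are illustrative assumptions, not the project's actual implementation.

```python
import re

# Assumed emotion vocabulary, matching the animations mentioned above.
EMOTIONS = {"happy", "sad", "thinking", "dancing", "laughing", "sleepy"}

def parse_emotion(reply: str, default: str = "neutral") -> tuple[str, str]:
    """Split a tagged Claude reply into (emotion, spoken text).

    "[happy] Hi there!" -> ("happy", "Hi there!")
    Untagged or unrecognized replies fall back to a neutral animation.
    """
    match = re.match(r"\[(\w+)\]\s*(.*)", reply, re.DOTALL)
    if match and match.group(1).lower() in EMOTIONS:
        return match.group(1).lower(), match.group(2)
    return default, reply

emotion, text = parse_emotion("[dancing] Let's go!")
# emotion -> "dancing": trigger the dance animation in Unity
# text    -> "Let's go!": send to ElevenLabs for speech synthesis
```

Keeping the tag machine-readable but stripping it before text-to-speech means the same Claude response drives both the animation trigger and the spoken audio without the bear ever saying its own stage directions aloud.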
The result is a small bear floating inside a hologram that genuinely feels like it's listening and responding to you.
The bear doesn't just respond. It reacts. That's the difference between a chatbot and a companion.
Features
Tech stack
The pipeline
Why this matters
AI companions today live inside screens. They are text on a phone, a voice from a speaker, an avatar on a monitor. HoloBear is an attempt to bring them one step closer to physical presence.
This is not a product yet. It is a working prototype that proves a full voice-to-hologram AI companion pipeline can be built by a single person with consumer hardware. The next steps are a standalone phone app, a custom hologram display, and eventually, a product that anyone can place on their desk.
AI companions shouldn't just live inside screens. They should feel like they're sitting right next to you.