Mega-Influencer Kwebbelkop Interviewed by his AI Clone, Built by YEPIC AI

March 15, 2024
Announcements
Stylized image of a half-robot, half-human Kwebbelkop

New ground was broken in AI last week when world-renowned influencer Jordi van den Bussche (known as Kwebbelkop, with over 15M subscribers on YouTube) was interviewed in real time by his own AI Video Agent - an agent that looked exactly like him, spoke with his voice and even responded with his personality! Today we bring you the scoop on exactly how this was achieved, the trials and tribulations the team at YEPIC AI went through to pull it out of the bag, and what it means for the way humans interact with AI and the future of real-time AI video generation.

Chapter 1: The Call 

This story begins with a call - a summons out of the blue from the LEAP conference organisers in Saudi Arabia. They reached out with a mission: build a real-time AI avatar of mega-influencer Kwebbelkop in just one week. We sat in silence for a few minutes, wondering how on earth we could create magic in a handful of business days - we distracted ourselves with some Slack scrolling and even consulted our own AI avatars to ask whether it might be possible. But then we remembered we're YEPIC AI, a team with a reputation for embracing the apparently impossible, and we knew this was ours for the taking.

Chapter 2: Mobilisation

With the mission clear, we responded to the call with a resounding 'yes.' We sprang into action, coordinating a rapid mobilisation of our brightest minds and best coders. Within hours we had booked flights to Saudi Arabia and made the travel arrangements. Our commitment to the work meant personal sacrifices were inevitable: birthdays would go uncelebrated, date nights postponed, and milestones missed. Our dedicated salesperson even had to skip his son's premier football trials! We had a mission, and we were going to see it through.

Chapter 3: AI Kwebbelkop Brought to Life

With the demo day rapidly approaching, our team was faced with the intricate task of crafting a digital doppelgänger of Kwebbelkop. This AI Video Agent required a blend of visual and auditory elements, topped with the unique essence of Kwebbelkop's personality.

The first hurdle was to create a virtual clone of Kwebbelkop's visuals and vocals. LEAP, the event's diligent organisers, provided us with a high-resolution image and a crystal-clear voice recording of the well-known influencer. We employed the cutting-edge capabilities of YEPIC AI's facial animation engine to breathe life into the portrait, animating it in real-time to achieve a lifelike presence. Simultaneously, we completed the voice cloning in-house. Our aim was to replicate Kwebbelkop's unique vocal timbre and cadence while ensuring the utmost quality and the lowest possible latency during streaming, for an experience as smooth as it was authentic.

Mimicry, however, extends beyond face and voice; it encompasses movement. To capture Kwebbelkop's physicality, we delved into a treasure trove of the influencer's content—his podcasts and interviews. We tracked the nuances of his body language, the characteristic tilts of his head, the rhythm of his speech, and the subtleties of his facial expressions. Through this process, we were able to digitally encode his movement style into the AI Agent, ensuring that our creation would not only look and sound like Kwebbelkop but also move with the same charisma and energy.

Yet, an AI Agent is more than the sum of its physical attributes—it must also embody the spirit of its human counterpart. To achieve this, we invested time in understanding the man behind the influencer persona, Kwebbelkop himself. With a deeper insight into his personality traits, his quirks, and his manner of interaction, we were able to inform and contextualise the Large Language Model (LLM). This AI wasn't just simulating Kwebbelkop—it was, in essence, a digital echo of his very being.

Lastly, we faced the intriguing challenge of ensuring that the AI Agent understood its unique role—as the interviewer of Kwebbelkop, not as a mere participant. This required a subtle yet crucial adjustment in the LLM's contextual framework.

With all the components finely tuned and the data meticulously compiled, our final act was to bring Kwebbelkop's AI Video Agent to life using the powerful and versatile Azure OpenAI Studio. This platform was the launchpad for deploying the LLM into production, setting the stage for an unprecedented interaction between Kwebbelkop and his digital twin on demo day.
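To give a flavour of how a persona and an interviewer role can be framed for a chat-style LLM, here is a minimal sketch. It is not YEPIC AI's actual code: the persona wording and the deployment name are illustrative assumptions, and only the general shape of the (real) Azure OpenAI Python SDK call is shown.

```python
# Illustrative sketch: framing an LLM as Kwebbelkop's AI interviewer.
# The persona text below is a hypothetical example, not YEPIC AI's real prompt.

INTERVIEWER_SYSTEM_PROMPT = (
    "You are an AI clone of the YouTuber Kwebbelkop (Jordi van den Bussche). "
    "Speak in his upbeat, energetic style. Crucially, you are the INTERVIEWER: "
    "ask Jordi questions about his career and AI, one at a time, and follow up "
    "on his answers. Never answer questions on his behalf."
)

def build_messages(history):
    """Prepend the persona/system prompt to the running conversation."""
    return [{"role": "system", "content": INTERVIEWER_SYSTEM_PROMPT}] + list(history)

# With a model deployed via Azure OpenAI Studio, the call might look like
# (endpoint and deployment name are placeholders):
#
#   from openai import AzureOpenAI
#   client = AzureOpenAI(api_key=..., api_version="2024-02-01",
#                        azure_endpoint="https://<resource>.openai.azure.com")
#   reply = client.chat.completions.create(
#       model="<deployment-name>",
#       messages=build_messages(history),
#   )

messages = build_messages([{"role": "user", "content": "Hey, great to be here!"}])
print(messages[0]["role"], len(messages))
```

The key design point is that the role adjustment lives entirely in the system message, so the same deployed model can act as interviewer rather than interviewee without any retraining.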

Chapter 4: The Big Day

As the sun rose above the Saudi horizon, our team embarked on their journey, their vehicles kicking up clouds of sand as they traversed the desert towards the LEAP conference venue. The air was charged with anticipation; today, we would unveil an achievement that could redefine the boundaries of human-AI interaction.

Inside the venue, the atmosphere was electric. Marketing maestro Daniel and sales specialist Dan secured their spots in the front row, their eyes sharp and their iPhone cameras ready to capture what they knew would be content of unprecedented impact. Their mission was clear: to document this historic moment when AI would take a leap into the future, a leap personified by Kwebbelkop's digital twin.

Backstage, Aaron Jones, Aaron Caffrey and Yannis Kazantzidis were the alchemists behind the curtain, meticulously preparing for the demonstration that would soon captivate the audience. There was a palpable sense of focus as they ran through the final checks, ensuring that every line of code and every pixel of the AI Video Agent was primed for the spotlight.

As the demo began, the audience was riveted. Each word from AI Kwebbelkop, each responsive gesture, was met with a mixture of awe and incredulity. It was as if the future had arrived, not with a whisper, but with a conversation—a conversation that flowed as naturally as any human interaction. The story made waves, with prestigious publications like WIRED covering the event, and recognizing it as a roaring success and a milestone for real-time AI video generation.

Chapter 5: The Future

This watershed moment symbolises a new chapter in the narrative of human-AI interaction. It heralds a future where our exchanges with AI will be as natural as chatting with a friend over coffee, rather than the impersonal typing to a chatbot interface we've grown accustomed to. YEPIC AI stands proudly at the vanguard of this future, ready to roll out Video Agents on a global scale and revolutionise the user interface for AI.

The promise of a face-to-face interface with AI is no longer a distant dream - it is a tangible reality. It is a vision where AI virtual assistants don't just understand our words, but also our expressions, our tones, and our emotions. YEPIC AI is not just participating in this future; we are actively shaping it, leading the charge towards a world where technology enhances the human experience in ways once confined to the realms of science fiction.

Keen to partner with us or pilot our real-time Video Agents? Please see our key contacts below.

Marketing & Press Enquiries: daniel@yepic.ai
Sales & Partnership Enquiries: dan@yepic.ai
Customer Support: team@yepic.ai