This article is a Spotlight ✨ – part of a series of interviews with innovative Hubs creators highlighting their work.
We sat down with Ben Weekes, Sr Architect & Principal Software Engineer at Agora, to learn more.
Could you tell us a little about Agora?
Agora (NASDAQ: API) is a pioneer and global leader in Real-Time Engagement, providing developers with simple, flexible, and powerful APIs to embed real-time voice, video, interactive streaming, chat, and artificial intelligence capabilities into their applications. We offer SDKs and low-code solutions for both Web and Native applications across all hardware.
How did you leverage Hubs to achieve your goals?
When looking at Hubs for a WebVR internal hackathon, I realized Hubs was using Mediasoup for its WebRTC voice and video conferencing. I knew that could cause problems due to Mediasoup's single-server architecture and least-cost routing over the public internet.
I created an open-source adapter to allow Hubs developers to use Agora instead of Mediasoup.
In addition to providing higher-quality voice and video with less lag, the Agora adapter allows Hubs to scale to thousands of people in the same room. Using Voice Activity Detection (VAD), the adapter subscribes only to the eight nearest people who are talking, plus any admins in the world scene. Subscribing to more than eight audio streams at the same time typically causes crackling and performance issues on many devices, so this approach is a game changer. The Alakazam team is using the adapter in its customized Hubs platform and reports great results.
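The subscription logic described above can be sketched as a plain function. This is a minimal illustration, not the adapter's actual code: the peer shape, the `isAdmin`/`isSpeaking` flags, and the function names are assumptions.

```javascript
// Pick which remote audio streams to subscribe to: every admin,
// plus the 8 nearest peers that VAD currently marks as speaking.
const MAX_SPEAKER_SUBSCRIPTIONS = 8;

function distanceSq(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return dx * dx + dy * dy + dz * dz;
}

function selectAudioSubscriptions(listenerPos, peers) {
  const admins = peers.filter((p) => p.isAdmin);
  const speakers = peers
    .filter((p) => !p.isAdmin && p.isSpeaking) // isSpeaking set by VAD
    .sort(
      (a, b) =>
        distanceSq(listenerPos, a.position) - distanceSq(listenerPos, b.position)
    )
    .slice(0, MAX_SPEAKER_SUBSCRIPTIONS);
  return [...admins, ...speakers].map((p) => p.id);
}
```

Re-running this whenever avatars move or VAD state changes keeps the client's audio subscriptions capped while still covering the conversations happening nearby.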
What are some unique features you have built on top of Hubs? How did you go about building them?
Agora Voice and Video Adapter for Hubs: GitHub
RTMP publishing into Hubs from OBS and other RTMP sources. The screen below on stage is published live from OBS.
Adding a virtual background to the local webcam video, which can then be used to provide transparency.
We can also place photorealistic 2D people into the scene, and digital humans can be streamed in with real-time conversational ability.
We have high-end and low-end volumetric solutions to meet different use cases and price points.
Load Testing
Test your Hubs scenes by simulating large loads from headless Chrome instances.
What are some things you wish you knew before working with Hubs?
A deeper insight into, and a write-up of, how A-Frame, Three.js, WebGL, and bitECS all come together, and how best to add new components and features.
What would you love to see in Hubs that would enable you to better support your clients?
Support for different avatar skeletons and physics engines, plus gyroscope support for looking around the scene on mobile.
Anything new you're working on now?
I am planning to add my browser-based facial motion capture capability to Hubs, which will transmit the mocap data along with the voice packets for perfect synchronization.
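One way to keep face and voice in sync is to pack each mocap frame, timestamped, into a small binary payload that travels alongside the audio. The wire format below is purely hypothetical, for illustration; the real integration may differ.

```javascript
// Encode a facial mocap frame as [float64 timestampMs | float32 blendshapes...].
function encodeMocapFrame(timestampMs, blendshapes) {
  const buf = new ArrayBuffer(8 + 4 * blendshapes.length);
  const view = new DataView(buf);
  view.setFloat64(0, timestampMs);
  blendshapes.forEach((v, i) => view.setFloat32(8 + 4 * i, v));
  return buf;
}

// Decode the payload back into a timestamp and blendshape weights.
function decodeMocapFrame(buf) {
  const view = new DataView(buf);
  const blendshapes = [];
  for (let off = 8; off < buf.byteLength; off += 4) {
    blendshapes.push(view.getFloat32(off));
  }
  return { timestampMs: view.getFloat64(0), blendshapes };
}
```

The receiver can then buffer incoming frames and apply the one whose timestamp matches the current audio playout time.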
To see videos of all these demos or learn more, please get in touch: ben@agora.io
✨ Thank you so much to Ben Weekes and the team at Agora for providing this great insight into how they are building with Hubs.