In our ongoing efforts to enhance efficiency and promote greater democratic transparency, we have begun using artificial intelligence (AI) in two key areas:
- improved quality of transcripts and on-demand captions
- automatic speaker tagging on the Streambox
Subtitles and transcripts
We are pleased to announce that we are now using a new AI-driven transcription and subtitling engine, which has a higher accuracy rate than before.
Please note that any recording made on our platform since 26 September 2024 uses our new subtitle and transcription engine.
To demonstrate the quality and accuracy of our new system, we have brought together snippets from meetings at six different councils across the UK and Ireland. You can see how well the engine copes with a broad range of accents:
This development also means that video data is now processed entirely on our own servers rather than being sent to a third party. This is better for data handling and means that transcripts and subtitles are available more quickly after the meeting ends.
With these improvements in mind, we invite you to publish your subtitles and transcripts. If you have transcripts and subtitles turned off, please ask your account manager to turn them back on.
At our last user group, we also discussed improvements to live subtitles. This is our next development step in this area, and we will share news about it in the coming weeks.
Automatic speaker tagging on Streambox
The Streambox is our plug-and-play encoder for small meetings. It uses the Meeting Owl cameras (Owl3 or HD Owl4) to capture the speakers automatically. This system is perfect for smaller meetings of 10 to 20 people: it is simple to set up and use, as speakers do not need to operate individual microphones.
Push-to-talk microphone systems provide us (and the webcast) with important information about who is speaking. This information matters for several reasons:
- a democratic service officer can search the transcript by speaker name to help write minutes
- members of the public watching live can understand who is speaking
- anyone watching the archive can easily jump to parts of the meeting by clicking on speaker names
Speakers are tagged as they participate in the webcast agenda
Until now, this has not been possible in informal meetings. However, by using AI models on the video, we are now able to tag speaker names automatically in informal meetings using the Streambox, too.
In a nutshell, the system makes use of speaker profiles that already exist (from Mod.Gov or CMIS, or created manually) and lets the operator select the people in the meeting. Then, during the meeting, the system's face recognition capabilities highlight the relevant speaker.
New Speaker tab: activate it to manage speaker profiles and automatic tagging.
If new participants are detected speaking, the system adds them as guest speakers. You can then use the editing tools to update their details.
Finally, to benefit from this new feature, you will need the new version of our Webcaster software (8.1.0).
Integration with Facebook is back
We have reintroduced Facebook alerts. This feature allows you to automatically post live webcasts as they are about to start directly to your organisation’s Facebook page. With this seamless integration, you can increase your reach and improve transparency. If you need assistance setting up or re-enabling this feature, please reach out to your account manager.
Next steps in development
The next steps in our development roadmap are:
- Live subtitles quality improvements
- Social media alerts on Twitter (X)
- Improvements in HybridLink for Teams, our votes and microphone management solution
- Summary generation from meeting transcripts (to help write minutes)