We have wrapped NarraScope 2025! It went great. Everybody loved it, on-site attendees and remote folks.
Hybrid conferences are hard, my friend.
This is our third hybrid conference -- but really this is the first time we got a grip on the problem. I think we got it right this time... at a cost. We spent way more effort on tech/AV than in any previous year.
(I say "I think we got it right" because we have not yet posted the videos to Youtube. That's the final step, and it's where we stumbled hard in 2024. More on this below. But we should be on track this year.)
I get no credit for any of this, to be clear. I was not on tech. Our success was due to (a) the planning and foresight of Logan Clare, who made the long voyage down from NYU to be our on-site tech lead; and (b) a cadre of volunteers who ran themselves ragged setting up, testing, and debugging every single talk session.
So I don't have every detail. But I want to write up The Way Things Went, for the benefit of future hybrid conferences everywhere.
The first year (2019)
Let me start with the backstory. In 2019 we decided to run a conference! Well, no, we decided that in 2017. The conference happened in 2019, in Boston. We had no concept of "hybrid" at all; it was an in-person event.
The AV setup was, by and large, trivial. Every MIT classroom had a projector, a screen, and a podium with a microphone and an HDMI cable. You plug the cable into your laptop and talk into the microphone. Your slides appear on the screen and your voice goes over the PA. This is a solved problem.
I know, there's no perfectly solved problem. Some people had laptops with weird video connectors. (I brought a bag of HDMI dongles.) I think some people had trouble getting audio from their laptop into the room -- HDMI is supposed to support that but the dongles don't always.
We have video of some of the 2019 talks. This is thanks to our participants from Articy, who volunteered to bring a hand-held camera and film in one room. That wasn't live-streamed; they posted the videos after the event.
On-line time (2020-2022)
2020 was all-remote, obviously.
We came up with a pretty simple setup. A volunteer ran a machine with OBS, Discord, and Zoom installed. (I did this job for some of the conference.) The volunteer started a Zoom call with the speaker (or speakers). The Zoom window got fed into OBS (with a nice frame!). The OBS virtual camera streamed out to a Discord "stage" channel. Attendees watched that.
(I think I used Rogue Amoeba's Loopback to get the Zoom audio into OBS. Macs require a bit of third-party support for that sort of thing.)
This required a fair amount of setup for the volunteers, but the conference speakers just needed to join a Zoom call. And then share their screen for slides. By mid-2020 that was familiar territory.
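(If you wanted to script those volunteer-side steps instead of clicking through OBS by hand -- we didn't, but it's the sort of thing a tired volunteer appreciates -- recent OBS versions expose a websocket interface. A minimal sketch, assuming OBS 28+ with its websocket server enabled and the obsws-python package; the scene name is made up:)

```python
# Sketch only -- we ran OBS by hand. Assumes OBS 28+ with its websocket
# server enabled, and the obsws-python package installed.
import obsws_python as obs

cl = obs.ReqClient(host="localhost", port=4455, password="changeme")

# "Zoom frame" is a made-up scene name: a window capture of the Zoom call
# sitting inside the decorative frame.
cl.set_current_program_scene("Zoom frame")

# Start the virtual camera; Discord sees it as an ordinary webcam.
cl.start_virtual_cam()

# Quick sanity check that the virtual camera really is running.
print("virtual cam active:", cl.get_virtual_cam_status().output_active)
```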
In 2022 we repeated that plan, except that we had two tracks instead of one, so it required two volunteers on duty. (Two computers, two Zoom calls, etc.) Also we streamed to a platform called Gather.town instead of Discord. Technically it was the same process, though.
Tackling hybridity (2023-2024)
2023 brought us to Pittsburgh. We wanted to keep the remote audience we'd built up, though. "We'll just go hybrid!" we said. "It'll be easy!"
It was kind of easy! But only because we did it badly.
We repeated what we did in 2019. (Pitt classrooms are set up about the same as MIT.) Then we got Pitt's AV crew to bring in three cameras and three laptops. Set up a camera in each room, pointing at the podium and the screen. The camera is plugged into the laptop; the laptop streams to Discord.
Note that the speaker's laptop (almost every speaker brought their own laptop) was completely separate from the room laptop. The speakers never touched Discord or the room camera. (Heck no.) They just plugged into HDMI and talked.
This meant that the streaming of the slides was pretty lossy. It was a videocamera pointed at a projector screen, after all. Washed-out, terrible quality. It worked, but the remote attendees got a second-class experience.
A few speakers presented remotely -- that is, the speaker did not travel to Pittsburgh. We asked them to submit their talks as pre-recorded video. For these, a volunteer (me) ran up to the front of the room and plunked down their own (my own) laptop, plugged in, and played the video file. Just like the other talks, the streaming consisted of pointing the room camera at the projector screen.
We had one surprise remote speaker. Surprise! Again, I used my own laptop, and set up a Zoom call. (Much like 2020.) Same streaming deal as before.
I wanted to improve that plan in 2024. Unfortunately, it didn't improve. In fact it got worse. The hope was to have both a camera feed and a direct feed from the speaker's laptop. A volunteer was supposed to switch back and forth in OBS. That didn't happen -- only the camera feed was ever used. I don't know whether the direct feed was infeasible or if the volunteers just weren't briefed.
Also the room mics were set up badly; some of the talks had bad audio and some were almost unusable. We still, as I write this, don't have most of the videos processed. I am deeply ashamed of this failure. We are working on it.
Lesson: if you decide to follow this plan, give the speakers hand mics or lapel mics. Room mics are just too risky. And test that what's being streamed is also being recorded, exactly as-is.
The new plan (2025)
Welcome to Philadelphia. We once again have two laptops (per room): one for the speaker, one to record. We also provide a high-quality USB webcam for each. (Better than the laptop built-in webcams, which were pretty cruddy.)
Some of the rooms also have a built-in room camera, hand mics, and a computer to manage them. We make use of these where possible.
The speaker machine is logged into Discord -- using a conference account with stage streaming privileges. So the livestream is run directly from that machine. The speaker can share slides or smile into the webcam, whichever they like. That machine is also plugged into the room projector, so local attendees can see the slides up on the classroom screen.
The recording machine has OBS. Unlike last year, OBS isn't streaming out; it's purely used for recording. It's logged into Discord (as a regular viewer), and OBS is simply recording the Discord window. Note that this machine isn't directly plugged into any hardware, so it can live in a back room rather than the presentation room.
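One small but important check, given the 2024 mess: make sure OBS on that back-room machine really is recording, not just sitting open. We checked by eye; if you wanted to make the check routine, the same websocket interface works for that too. A sketch (obsws-python again; purely illustrative):

```python
# Sketch only: confirm the back-room OBS is actually recording.
# Assumes OBS 28+ with its websocket server enabled and obsws-python.
import time
import obsws_python as obs

cl = obs.ReqClient(host="localhost", port=4455, password="changeme")

cl.start_record()          # kick off the recording for this session
time.sleep(5)              # give it a moment to spin up

status = cl.get_record_status()
if status.output_active:
    print("recording, timecode:", status.output_timecode)
else:
    print("NOT recording -- someone go poke the machine")
```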
Now, this requires a fair bit of setup on the speaker machine. So we'd really like to provide the speaker machine. (As opposed to having each speaker bring a laptop, which we'd then have to configure for their talk.) Set up all the hardware in the morning, make sure everything works, and then don't touch it. The speaker sends us their slides in advance; we'll make sure they're available on the speaker machine.
You can already see some pitfalls.
- We need to provide two laptops per room! We had five rooms this year -- which is a lot of rooms, yes -- so that's a total of ten machines. (We found a place that would rent us ten laptops.) (Actually we bought them with the explicitly-spoken intent to return them afterwards. So, rented with a 100% deposit. Scary but it worked out.)
- We didn't provide our own room cameras; we relied on what Drexel had available. Hope it fits our setup! Also, not every room had a computer to run the camera. We had to source one extra machine to cover that.
- Speakers hate sending stuff in advance. Some of them will still be editing their slides on Saturday morning.
- What format do we accept for slides? Speakers want to use Google Docs, Apple Keynote, good old PowerPoint, PDF, Canva -- what the heck is Canva? I'm sure there's more. We shoved everything into Google Docs for consistency, but of course there was friction. I saw messed-up fonts, messed-up layouts, lost animated transitions... you can imagine. (One way you might automate some of that wrangling is sketched after this list.)
- What if the speaker wants to do a live software demo? We've had those in past years. Sometimes you really do need to bring your own laptop. Then the tech crew has to swap that in for the conference-provided speaker machine. Log it into Discord, make sure it streams, oh god the webcam/wifi/battery isn't working, now what? Argh! Many risk factors here.
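(For what it's worth, here's one slide-wrangling approach we did not use: batch-convert whatever decks arrive into PDF with LibreOffice's headless mode. It flattens transitions just as hard as our Google Docs shuffle did, and Keynote, Canva, and Google Slides decks still need a manual export first, but it's cheap to script. A rough sketch -- the directory names are made up:)

```python
# Sketch of a normalization step we did NOT actually use: convert incoming
# PowerPoint/ODP decks to PDF via LibreOffice's headless mode.
# Assumes LibreOffice ("soffice") is on the PATH; directory names are made up.
import subprocess
from pathlib import Path

INCOMING = Path("slides/incoming")   # decks as the speakers sent them
READY = Path("slides/ready")         # PDFs to load onto the speaker machine
READY.mkdir(parents=True, exist_ok=True)

for deck in sorted(INCOMING.iterdir()):
    if deck.suffix.lower() not in {".ppt", ".pptx", ".odp"}:
        print(f"skipping {deck.name}: export it to PDF or PPTX by hand first")
        continue
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf",
         "--outdir", str(READY), str(deck)],
        check=True,
    )
    print(f"converted {deck.name}")
```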
It worked! We made it work. (And, again, by "we" I mean Logan and JD and the tech volunteers.) However, making it work was, well, a ton of work. The volunteers were swapping machines and testing and verifying the setup in every room, every session. They were overloaded.
At one point on Saturday, we put the whole conference on pause for 15 minutes so that the tech people could catch up. Just pushed the whole rest of the day's schedule by a quarter-hour. It's good that we were able to do that, but we shouldn't have had to.
And for 2026?
At the end of the conference, JD said "Next year: all slides in advance." That is, no more laptop swapping. Speakers may not present with their own machines. Use the provided speaker machine and like it! (Mind you, we'll try to have PowerPoint available.)
This makes me sad! I love the live software demos. I love weird hardware. NarraScope hasn't had much weird hardware, but I sometimes go to @party in Boston and they have, like, Commodore 64 demos. Amiga demos. Oscilloscope demos! I mean, I'm sittin' there on the Group W bench and the biggest, glowiest oscilloscope demo of them all sits down next to me, and he says, "Kid..."
(I couldn't poke my head into @party this year because it was the same weekend, dammit. Maybe next year.)
Now, this isn't a final decision. We're all sad about the idea of strictly requiring Google Docs. We've talked about it in the past week. The current idea is to strongly encourage turning in slides in advance, in a known format. Then the exceptions (live demos, etc.) will be a short, known list, and the tech team can focus their efforts appropriately.
More updates as they happen.
(The question of how many tracks we will have is beyond the scope of this post. Like I said, five rooms was a lot. There's a strong sentiment to drop back to three next year. But that decision is over the horizon still.)
More edge cases for the 2025 plan
I list these for completeness. I don't think we planned these in advance, but none was a major headache as far as I know.
A speaker who just wants to talk, no laptop or slides or anything: Point the room camera at them and stream that to the Discord stage. (In this case we only need one machine: the recording OBS machine also handles the streaming to Discord.)
A live panel discussion: Same as above; you point the camera at three people. They may have to pass a mic back and forth.
A remote speaker: They will log into Discord from home and present from there. You will have to give Discord stage streaming privileges to the speaker's personal account. The presentation machine will have to view Discord and push its display out to the room projector.
An all-remote panel discussion: Everybody logs into Discord and you have an N-way Discord chat. Again, the presentation machine pushes that out to the room projector.
A panel discussion with both local and remote participants: Doesn't work! (Okay, this was a headache.) If the local participants sit in front of the screen they're being projected on, it's audio feedback hell. We had to move the local participants into the room next door, thus making them "remote" -- so it turned into an all-remote panel. Don't roll your eyes; it worked.