Tuesday, June 21, 2022

I went to a conference and nobody spread COVID

People are not all on the same page about COVID planning. I don't just mean vax-deniers. My social circles -- the good and sensible people reading this post -- range from "I am planning in-person social events" to "in-person social events are morally indefensible".
Nor does this boil down to everyone deciding their own risk tolerance. Every person does decide their own risk tolerance, but it's a collective risk and it has to be managed collectively. By people with different goals and different levels of vulnerability. This is not easy! "Minimize all risk" and "minimize risk involved in living my life" aren't even two ends of a spectrum. They're two vectors in a branchy mess of decisions.
How does this apply to conferences? We haven't decided. It's not a minor question. We're now seeing events relax their COVID policies at the last minute, and it's hard not to read that as a calculated attempt to sucker people in. On the flip side, PAX East killed someone. Of my coworkers who went to GDC this year, nearly all of them caught something (not all tested positive for COVID but there were gobs of fevers and sore throats). It scared me good.
Then again, I've been going into grocery stores regularly through the whole pandemic, wearing a cloth -- not N95 -- mask. So who am I to sneer?
No sneering here. A couple of weeks ago I hit my introvert wall: I attended a conference in Montreal. This was Scintillation, a tiny sci-fi convention. I went to the first Scintillation in 2018 and really enjoyed it. I missed 2019; 2020 and 2021 were cancelled; this year the organizers and regulars collectively said "Dammit we're doing this." Reader, I did it. Air travel and all. I had a great time. (I took part in a couple of panel discussions about different authors.)
And: nobody got sick. That we can tell! It's impossible to be certain about these things. One person reported a marginally positive antigen test two days after the conference, but they followed up with a PCR and that was negative. Another person felt like crap a week later, but their first test was negative, and the timing doesn't really fit. Our conclusion is that, by diligence or luck, no COVID was spread at the con.
This post is neither a brag nor a confession. Rather, I want to explain the event policies that kept the risks low and, ultimately, were successful at keeping people safe.
  • This was a small event. I think attendance was about 75 people. Everybody fit in one function room.
  • Proof of vaccination was required to pick up your badge.
  • Indoor masking was required, and we were serious about it. If you were in the hotel, aside from your own hotel room, you wore a mask. (Obviously we couldn't enforce this for other hotel guests but we were the only occupants of the function-room floor.) If you wanted to drink water, you went to the con suite and lowered your mask long enough to take a swallow.
  • The con had some rapid tests available at the check-in desk.
  • Someone made a couple of box-fan air filters for the event. One ran in the function room, one in the con suite. I hadn't heard of this project but it gets good reviews from professionals.
  • Indoor dining was not banned, but for people who wanted to avoid it, the conference posted a list of restaurants which would deliver to your hotel room.
  • A couple of outdoor gatherings (picnics) were scheduled; these were unmasked.
For my own part:
  • I stayed away from social gatherings, even small ones, for several days before the event.
  • I got a PCR test three days before the event. (This turned out to be a waste, because the test web site was down and I didn't see the result until the day I got home! But it was negative after all.)
  • I wore an N95 mask while in the conference space, and also for all air travel (airports and airplanes). I switched to a cloth mask when wandering around Montreal museums and shops.
  • I got my second vaccine booster two weeks before the event, so as to (hopefully) be at max immunity.
  • I did a couple of rapid tests in my hotel room during the event.
  • I got most meals take-out. (Mmm, bao.) I ate in restaurants a few times, but I tried to pick uncrowded restaurants, and I ate either alone or with one other person at the table.
  • I yukked it up without a mask at the outdoor picnics.
  • I kept doing rapid tests for the week after the event. And stayed away from social gatherings, well, at least until Thursday.
So, as you see, we were pretty careful. We could have been more careful in some ways, but this is what we did.
The intangible factor was that the conference organizers cared about safety and were willing to make firm rules. We had discussions in advance about how masking would work, how hydration would work, how everything would work. What were the accessibility needs? (With 75 people registered and no at-the-door entry, this was a well-defined list.) Would we bring back the singing social events from the first two Scintillations? (No way.) And so on. Everybody was on board with the situation before they arrived. We all knew the people in charge were prepared to say "Mask up or get out," and because of that, they never had to.
I can't prove these precautions will protect everybody. I don't know how to estimate the odds. (If we were lucky enough to have zero contagious people show up, then we wouldn't know how well the masks and filters worked!) But this is, I would say, a minimum level of diligence for events in the 100-person range.
Masks suck, and everybody hates 'em, and this is where we are.
I can't even think about events in the 10000-person range. GDC and PAX still scare me, and will continue to scare me until the vaccine situation changes a lot.
I hope this information is useful.

Wednesday, June 15, 2022

AI ethics questions

Last week a "Google AI ethics" article went round the merry-go-discourse. I won't bother linking except for this apropos comeback from Janelle Shane:
Stunning transcript proving that GPT-3 may be secretly a squirrel. GPT-3 wrote the text in green, completely unedited! (...transcript follows)
-- @janellecshane, June 12
We're facing piles of critical questions about AI ethics. They do not include "Is Google oppressing sentient AIs?" Here's a starter list of real issues:
What's the difference between using an AI algorithm as part of your artistic process and using it as an artistic process in itself?
Using an AI image algorithm as a source of idea prompts? Tracing or redrawing pieces of the output in your own work? Using pieces of the output directly? Generating ranges of output and iterating the prompt in the direction you want? Generating ranges of output and using them as PCG backgrounds in a game? What will we count as legitimate and/or desirable artistic work here?
How much human supervision do we require on procgen output?
If the background imagery of a game (movie, whatever) shows AI-generated cityscapes, sooner or later something horrible will appear. If an AI is generating personalized emails, sooner or later it will send vile crap. Do we hold the artist/author responsible or just say "eh, it's AI, Jake"? Do we insist on a maximum "error rate"? What's the percentage?
(Do we hand the problem of preventing this off to another AI? "Generative adversarial network" in the literal sense!)
How do we think about ownership and attribution of the data that goes into AI training sets?
Is the output of an AI algorithm a derivative work of every work in the training set? Do the creators of those original works have a share in the rights to the output?
If an image processor sucks up a million Creative Commons "noncommercial use only" images for its training set, is the output of the net necessarily Creative Commons? What if it accidentally grabs a couple of proprietary images in the process? Is the whole training set then tainted?
(We're already deep into this problem. The past few years have seen a spurt of AI image tools with trained data sets. They're built into Photoshop, iOS and Android camera apps, AMD/NVidia upscaling features, etc, etc. What's the training data? Can we demand provenance? Is this going to turn into a copyright lawsuit morass?)
What does it mean if the most desirable artistic tools require gobs of cloud CPU? Will a few tech giants monopolize these resources?
Will we wind up with a "Google tax" on art because artists are forced to use Colab or what have you?
(This isn't new to AI, of course. Plenty of artists "have to" use a computer and specific hardware or software tools. The tech companies aren't shy about extracting rents. But AI could push that way farther.)
What about the environmental costs? Will artists get into an arms race of bigger and more resource-intensive AI tools? All computers use energy, but you really don't want a situation where whoever uses the most energy wins. (Bl*ckchain, cough cough.)
What does it mean when AIs are trained on data pulled from an Internet full of AI-generated data? Ad infinitum. Does this feedback loop lead us into cul-de-sacs?
What assumptions get locked in? It's easy to imagine a world where BIPOC people just disappear from cover art and other mass-market image pools. That's the simplest failure mode. AI algorithms are prone to incomprehensible associations. Who knows what bizarre biases could wind up locked into our creative universe?
How do we account for the particular vulnerabilities of AI algorithms? Can we protect against them once this stuff is in common use?
What if saboteurs seed the Internet with pools of images that are innocent to human eyes, but read as mis-tagged garbage to AI algorithms? Or vice versa: hate speech or repugnant images which AI algorithms pick up as "cute kittens". Could that get incorporated into training sets? Turn every AI tool into a Tay-in-waiting?

The meme-y AI art is all visual and text. But I'm particularly interested in how this plays out for audio -- specifically, for voice generation.
I love building messy, generative text structures. I also love good voice acting in a game. These ideas do not play together nicely. (I guess procgen text is a love that dare not speak its name?)
Text variation like this is trivial in Inform 7:
say "[One of]With a start, you[or]Suddenly, you[or]You blink in surprise and[at random] [one of]realize[or]notice[at random] that your [light-source] is dimming. In just [lifespan of light-source], your light will be gone.";
But if you're writing a fully voice-acted game, you don't even consider this sort of thing. Not even so simple an idea as contextual barks in a shooter game: "Get [him/her], [he/she]'s behind the [cover-object]!" It's not in scope. Which is a shame!
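To make the idea concrete, here is a minimal Python sketch of that kind of contextual bark: templates with pronoun slots, picked at random, in the spirit of Inform 7's [one of]...[at random] substitutions. The template syntax, pronoun table, and function names are all invented for illustration; they aren't from any particular engine.

```python
import random

# Hypothetical bark templates. {subj}/{obj}/{be} are pronoun slots,
# {cover} is the cover object the target is hiding behind.
BARK_TEMPLATES = [
    "Get {obj}, {subj} {be} behind the {cover}!",
    "Watch it, {subj} {be} behind the {cover}!",
]

# Pronoun sets, including a "be" verb so "they" conjugates correctly.
PRONOUNS = {
    "he":   {"subj": "he",   "obj": "him",  "be": "is"},
    "she":  {"subj": "she",  "obj": "her",  "be": "is"},
    "they": {"subj": "they", "obj": "them", "be": "are"},
}

def bark(target_pronoun: str, cover_object: str) -> str:
    """Pick a random template and fill in pronouns and the cover object."""
    template = random.choice(BARK_TEMPLATES)
    return template.format(cover=cover_object, **PRONOUNS[target_pronoun])

print(bark("she", "crates"))
```

The point of the sketch is how cheap this is in text: a handful of templates and a lookup table. The expensive part is that every template-pronoun-object combination would need its own voice recording, which is exactly why fully voice-acted games don't do it.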
AI voice generation is an obvious path towards making this possible. It's also an obvious path to putting all the voice actors out of work.
How do we negotiate this? What does it mean to put an actor's unique performance into an infinitely extensible corpus of text? How do we pay people when "per line" is a meaningless measurement? How much sampling do we need for a good result? Do we need direct-recorded "cut scenes" for the really emotional bits? What about applying "moods" (angry, tired, defeated, scared) to specific lines to match the current state of the character? There's lots of possibilities here, and we have no idea how to work them out in a way that's fair to both designers and performers.

Anyhow, I am nothing like an expert in this stuff. This post is very much off the top of my head. Some folks who know way more than me and have more experience with AI tools: Janelle Shane, Max Kreminski, Mike Cook, Lynn Cherny.

Tuesday, June 7, 2022

Aaron Reed's "50 Years of Text Games" is now crowdfunding

You probably followed Aaron's blog series last year. Now it's becoming a book with revised articles, bonus material, a lovely layout, and fancy binding. A half-century of the history of text-based games. (You may recall that one of my games is on the list.)
I admit to mixed emotions about Kickstarter these days. They haven't backed off on their crypto horseshit. They haven't pushed it forwards much either, that I can tell. There was a followup post in February which doesn't say much beyond "We're listening to feedback." (I think you get the gist of mine.)
But of course I want Aaron's book to succeed. Which it has! -- it crossed the goal line as I was writing this post. Now I want it to do multiples. I also want Kickstarter to see pushback. Aaron has thoughts about this too; see the FAQ on his KS page. He also notes that there will be other ways to pre-order the book after the KS campaign is over. Read, decide what you want to do.
However you get it, the book will be a must-have for the shelf of the IF scholar. Or enthusiast. Or you.