We've gotten a trickle of various Cyan-related news this week, so I figured I'd make a post out of it.
The Myst 25th anniversary collectibles (the book-boxes and inkwells) have been delayed due to a late-caught manufacturing problem. (And the fact that shipping from China is just slow.) We haven't gotten an updated delivery date, but it will be no sooner than February. The DVDs will arrive much sooner, perhaps by the end of this year.
Cyan has also announced a new publishing arm, Cyan Ventures. The idea is to partner with other studios creating VR narrative games. Their first release will be Zed, an adventure game developed by Eagre Games. Zed ran a Kickstarter a couple of years ago (which I backed). They haven't had a lot of visible motion since then, but this announcement puts them back on the front burner. Zed is targeting a Spring 2019 release for PC, Rift, and Vive.
It's not completely clear what "publishing arm" means here. Normally we think of a game publisher as providing development funding and marketing contacts. But Cyan, by their own admission, "doesn't have huge amounts of cash lying around". (Or they'd be spending it on Firmament!) So I'm putting this deal down to sharing of marketing and distribution expertise, and leveraging Cyan as a prestige game brand. In unofficial chat, Eagre folks talk about Cyan's contacts with major game distribution networks. Plus it gets Cyan's name back in the headlines -- that never hurts.
Zed is a natural first project for Cyan Ventures. Chuck Carter, the designer, is a Cyan alumnus from the original Myst era. We'll see if this turns into a steady stream of Cyan-published adventure games from other studios.
And speaking of Firmament, what's up with that? Cyan hasn't made any public announcement since March. There's been no indication that they've staffed up to a full production team. However, they continue to talk about Firmament as a live project. They exhibited at GeekGirlCon just a couple of weeks ago, and I hear they said that Firmament was still in development. But there wasn't a lot more detail than they had back in March.
My sense is that they're still in the "concept sketches and design documents" stage, and will be until real funding arrives. But I'd be happy to be proved wrong.
Finally, a neat bit of fan art. On one of the Myst fan Discord servers, user Rasi Talon has been experimenting with a (proprietary) deep-learning image tool to scale up images from Riven. You can find them in this imgur album.
|Original Riven image|
|5x original size, naive scaling algorithm|
|5x original size, deep-learning algorithm|
Riven's original artwork was 608x392 pixels. (They appeared with slim black borders on a 640x480 display.) These images are scaled up by a factor of 5, to 3040x1960, and they look like they could have been rendered that way! Click through to the large versions of these images to get the full effect.
This is pretty astonishing. I am not an expert on deep-learning systems, but I gather that they are trained on an enormous collection of real-life images, and can then apply that learned knowledge of texture detail to new images.
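For contrast, here's what the "naive scaling algorithm" in the comparison above amounts to. This is just an illustrative sketch in plain Python (not the actual tool, which is proprietary): nearest-neighbour upscaling copies each pixel into a block, adding no detail at all, which is why the naive version looks blocky next to the deep-learning one.

```python
# Toy nearest-neighbour upscale: each pixel becomes a factor-x-factor block.
# Pixels are represented as a list of rows; no image library needed.
def upscale_nearest(pixels, factor):
    out = []
    for row in pixels:
        big_row = [p for p in row for _ in range(factor)]
        out.extend([big_row[:] for _ in range(factor)])
    return out

# A 2x2 checkerboard, scaled by 5 (the same factor as the Riven images).
tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny, 5)
print(len(big[0]), len(big))  # 10 10
```

At Riven's 608x392, a factor of 5 gives exactly 3040x1960; the difference is that the neural net fills those new pixels with synthesized texture instead of duplicates.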
Naturally, everybody immediately started thinking about a "Riven HD" project using these upscaled images. The stumbling block is video. Riven had a lot of video sequences (motion transitions, full-screen scenes, and small patch details). These are even lower-resolution than the static images, and you can't really apply smart scaling frame-by-frame anyway -- the synthesized detail differs from one frame to the next, so the result shimmers. Maybe there's a deep-learning algorithm out there which is intended for video. Or 3D voxel data, which is kind of the same thing. In the meantime, enjoy the nice big images.
No way... I don't believe it. Those images have to have been recreated from scratch. How can this be an upscale? It looks like the new images are creating detail that wasn't there in the original low-res image.
Then again, I'm mostly familiar with video upscaling and as you mentioned that's a totally different animal (hint: with this kind of 5x scaling factor it pretty much never looks good).
That's what the neural net algorithm does! It synthesizes new detail based on a very large training corpus -- a library of real-world photographs. Those images contain many examples of wood, brick, stone, and so on. The algorithm knows what those examples look like when downscaled. Roughly speaking, it looks for matches in the small Riven images, and replaces them bit-by-bit with the high-detail versions from the library.
Hi! RasiTalon here. What was said above is correct. To expand on that a bit: the system is trained to improve itself by feeding it low-res images for which you have the high-res originals, scoring each upscale on how closely it matches the original, doing that a whole lot, and letting the software keep the code sets that score higher over the ones that score lower.
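That train-and-score loop can be sketched in a few lines. This is a toy illustration of the scoring idea only, with 1-D "images" and plain functions standing in for candidate models -- the real tool's internals aren't public, so everything here (`mse`, `train_step`, the two candidate upscalers) is hypothetical.

```python
# Score an upscale against the known high-res original, and keep the
# candidate model whose output matches best (lowest mean squared error).
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def train_step(high_res, low_res, candidates):
    scored = [(mse(model(low_res), high_res), model) for model in candidates]
    scored.sort(key=lambda pair: pair[0])
    return scored[0][1]  # keep the best-scoring candidate

# Toy 1-D example: a 4-pixel "image" and its 2-pixel downscale.
high = [10, 20, 30, 40]
low = [15, 35]

def nearest(lr):   # duplicate each pixel
    return [p for p in lr for _ in range(2)]

def linear(lr):    # crude interpolation between the two pixels
    return [lr[0], (2 * lr[0] + lr[1]) // 3, (lr[0] + 2 * lr[1]) // 3, lr[1]]

best = train_step(high, low, [nearest, linear])
```

Here `linear` scores better than `nearest`, so it survives the round. Repeat that over a huge pile of image pairs, with a neural net's parameters in place of these fixed functions, and you get the self-improvement loop described above.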