2020 Sure Sucked, but At Least I Taught Myself How to Create Photogrammetry Models and Experience Them in Virtual Reality


Well, it’s been a weird year. Most of the things that I usually do can’t really be done for lack of airplanes and open international borders, and for the time being I’ve had to put most of my life on pause and just hunker down until this all blows over. All in all it’s been extremely weird, occasionally scary, frequently dispiriting, and normalcy feels like a distant memory. But despite the general vibes, it hasn’t actually been the end of the world. 2020 has certainly generated its fair share of apocalyptic imagery and headlines, but let’s be realistic: for a lot of us the main challenge has been just trying to stay sane despite having waaaaaaay too much free time to browse through heaps and heaps of Twitter’s anger and Instagram’s total lack of depth. I think I speak for many when I say that self-control was not easy.

On the one hand I’m afraid to tally up those hours — all of those languages left unlearned, books unread, the programming skills left unacquired. But on the other hand nobody’s perfect, and I’m just not sure that it’s even possible for anyone to be maximally productive under such a sudden deluge of totally unstructured time. I’d wager that I wasted about half of it.

But fortunately, even that remaining 50% of my coronavirus lockdown’s free time still adds up to a vast sea of hours. One of the few cool things about sheltering in place all day is that you can completely waste eight hours and still have something like ten left over to do something useful.

So what did I do that was useful? Well, I learned a totally new way to use and consume photography that I think has some very thought-provoking and possibly profound implications for the future: I learned how to use normal 2D photography to create ultra-realistic 3D scans of real-world locations, and I also learned how to share the experience of being in those locations with anyone. These technologies are called photogrammetry and virtual reality, respectively, and I think that the combination of the two is a very big deal.

Sounds crazy, right? Well let me explain…

Photogrammetry Isn’t New, but It’s Becoming A Lot More Accessible and Robust

Photogrammetry (the process of getting accurate measurements of real-world objects from photographs) isn’t a new thing. It’s been around for about as long as photography itself. Cartographers have used it to make topographical maps since the 1870s, and spy agencies have been doing it with slide rules in order to figure out the size of objects in reconnaissance imagery since at least the 1950s. I once met a guy who set up an array of weatherproof cameras on the mountain slopes opposite a volcano and used it to automatically measure subtle expansions and contractions of the volcano’s surface in order to better predict upcoming eruptions. It’s useful stuff: cameras are cheap as far as remote sensing equipment goes, and measuring things at a distance is obviously useful to a lot of industries.

But as this technology matured and processing power increased, the simple idea of using a series of photographs taken from different perspectives to create isolated measurements of objects grew into the next logical step: using a larger series of photographs to reconstruct the entire structure and even surface appearance of objects, and eventually entire environments. You could turn your 2D photographs of a place into a realistic 3D model of it.
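The geometric heart of that reconstruction step is triangulation: the same point seen from two known camera positions pins down its location in 3D. Here is a minimal sketch of linear (DLT) triangulation with two idealized pinhole cameras; the camera setup, identity intrinsics, and test point are all illustrative assumptions, not any particular software’s pipeline.

```python
import numpy as np

# Two pinhole cameras, P = K [R | t], with K = identity (normalized coordinates).
# Camera 1 sits at the origin; camera 2 is shifted 1 unit along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point X into normalized image coordinates."""
    x = P @ np.append(X, 1.0)      # homogeneous projection
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: stack the reprojection constraints
    from both views and take the null space via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]

X_true = np.array([0.3, -0.2, 4.0])        # a point a few meters in front of the rig
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))          # the two views recover the 3D position
```

A full photogrammetry pipeline repeats this idea across thousands of matched features and many cameras (whose poses it must also estimate), but the principle is the same: multiple 2D views over-determine the 3D geometry.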

This is all very cool stuff, but up until relatively recently it was cool stuff that lurked in obscure corners of academia, or government, or occasionally business. It was the domain of obscure PhD theses and oil exploration, not that of your average photographer.

But now that even the laptops of today are competitive with the supercomputers of yesteryear, this technology is becoming increasingly accessible and comprehensible to the average photographer. I probably dedicated just four or five months to learning it, and despite only being a beginner in this journey, the results are fairly stunning. Here are examples of just my second and third large-scale attempts:

(Eichinger Sculpture Studio, Portland Oregon)

(Beth El Synagogue, Portland Oregon)

Are either of those 3D models perfect? No, they’re not. But just imagine how long it would take me to carefully reconstruct those environments from scratch. How long would it take me to model that organ in the prayer hall? How long would it take for me to sculpt each of those statues? It took the actual sculptor the better part of four decades to do that. And I scanned it all in just a couple of days.

Of course the 3D scan isn’t quite capable of capturing the sculptures exactly as they are in real life. But it does do a fairly decent job at capturing their essence, and giving you an idea of what the art studio is like. At the very least this all offers key advantages over traditional photography or videography, in that it gives you the ability to view the space and the sculptures on your own terms from whichever perspective you wish, kind of like a videogame.

But unlike a videogame, these 3D models do not offer much in the way of interactivity and excitement. They are static. They look cool at a glance, the fact that creating a realistic scan of a location is now so cheap and accessible is interesting in itself, and plenty of people have taken it up as a hobby. But it has been remarkably difficult to come up with compelling uses for these models.

For one thing, photogrammetry models are both enormously detailed and completely static. Over the years the field of 3D computer graphics has developed thousands of little visual tricks meant to simplify simulations of reality and reduce the processing burden so that contemporary computer hardware can render them realistically and in real time. Photogrammetry does none of this. Instead, it spits out huge, single-piece models that contain nothing in the way of sensible simplifications or divisions, include no data about important surface attributes such as reflectivity, luminance, or roughness, make no distinction between an important statue and an unassuming wall, and bake in real-world shadows exactly as they were when the model was captured. Fixing all of this manually is a lot of work that grows rapidly with the scan’s size and complexity (though I should mention that modern AI techniques are well suited to solving many of these issues). Outside of a few niche applications, such as archaeology, where merely documenting a place accurately is sufficiently important, or creating certain types of videogame assets, there hasn’t been much use for photogrammetry’s ability to quickly create realistic 3D models, because there hasn’t been a great way to easily incorporate them into modern 3D simulation pipelines. For the most part, it’s just easier to create a model of something from scratch.

But it’s also just unlikely that anyone will ever spend more than a few minutes exploring a 3D model on a 2D computer or smartphone screen. As far as I can see, the point of taking the time to capture a model of a real place is to share something essential about the experience of being there. But even the largest computer screen is a small window, and in my experience a small window is simply inadequate. It doesn’t hold people’s attention in anything that even resembles the way that actually being there does, and it doesn’t give people an intuitive sense of the place, because it conveys none of the real-world sensory cues of scale, perspective, and distance.

Fortunately, there is an emerging technology that addresses many of these issues. It’s relatively new in the grand scheme of things, and to be fair, it faces a lot of interesting technical and social obstacles before it can achieve mainstream adoption. But if it ever does overcome those hurdles, and if it ever does become a legitimate thing that a large number of ordinary people use in their everyday lives, well then the implications are huge.

Virtual Reality is Having A Moment

If I mention virtual reality, the odds are that your first thought will go to videogames. You wouldn’t be wrong: the first commercial applications of virtual reality were indeed born out of the videogame industry. But the real potential of VR is about so much more than videogames. Fundamentally, VR is the ability to convincingly substitute your visual surroundings with something else, to interact with digital realities on a human scale with all of the visual cues of a real experience. We’ve barely scratched the surface of what we can do with something that powerful.

Are you finding it hard to meditate or do yoga in your cramped NYC apartment? Well, VR lets you swap your cramped apartment for a mountain or a beach, and a life-sized virtual instructor can even join you to lead the session. Are you a doctor, and do you want to practice that surgical procedure one more time before the real deal tomorrow? VR can place you in a fully prepped operating room within seconds, and you’ll be able to operate on a 3D model of your actual patient’s anatomy that was generated from an MRI scan. Do you have an important meeting to get to, and you don’t want to spend five hours on that Sunday red-eye to NYC, or maybe your boss just doesn’t want to pay for it? VR will let you hold meetings in a virtual boardroom with photorealistic avatars that actually look like your co-workers and convey their actual gestures and facial expressions in real time. You get the idea. This is just a grab bag of ideas at various stages of progress and completion, but they are all firmly within the realm of near-term technical possibility, and they all have the potential to change the way we live and work in ways comparable to those of the smartphone revolution. This is about so much more than videogames. It is about bringing computation past the limitations of 2D screens into a 3D world that more closely mirrors our own.

Obviously none of these things will be superior to actually being in a place or doing a thing. But in many cases they can probably get us 70-80% of the way there, and more often than not their sheer convenience will make up for the rest. The technology is progressing quickly, and we might get there sooner than you think. At the very least, I hope that you can see why companies like Facebook and Apple are investing billions into this: it doesn’t have anything to do with videogames; they want to be at the forefront of a whole new computing platform.

VR is Obviously a Great Match for Photogrammetry, but…

Using virtual reality to display photogrammetry models in the context of some sort of virtual tourism experience is probably one of the more patently obvious use-cases for this new spatial computing platform. Right now, a single person can go to a famous place with a simple camera and create a convincing virtual model of it with just a few days of work. And then any random person around the world can, from the convenience of their own living room, experience what it’s like to walk around that place using intuitive technology that anyone can buy for less than what they probably spent on the phone in their pocket. Historical attractions tend to command steady interest in the international tourism market over the long haul, so eventual demand for such a virtual experience is a relative certainty. It is only a question of when the technology becomes sufficiently powerful to make such an experience compelling, how long it takes for the platform to achieve sufficiently widespread adoption to sustain a business, and whether or not we can figure out sufficiently robust storytelling techniques for this new form of media. After all, it’s one thing to let someone walk around an old castle by themselves, but it is a whole other thing to make that experience rich, educational, and compelling.

These are significant obstacles. First off, the processing power available in the affordable, self-contained headsets that run on mobile hardware and are simple for the average consumer to use (in other words, the only sort of headset that will achieve mass adoption) is limited by the physical and technological constraints of portability. Any virtual tourism experience intended to reach a wide audience will have to exist within a very tight computational performance budget. Secondly, the technology is still in its infancy, and has primarily been adopted by tech-savvy gamers who are less interested in the sort of slow-paced experience that virtual tourism is. It has a long way to go before it achieves the sort of mass adoption that will create a large, capable, and interested customer base; virtual reality isn’t (yet) at the point where the average person spends enough time with it to have an intuitive understanding of how to interact with it. And thirdly, inventing a whole new way of telling a story with a platform that is simultaneously so limitless and so constrained is no easy task.
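To give a feel for how tight that performance budget is, here is some back-of-the-envelope arithmetic. Both figures below are illustrative assumptions on my part (a plausible raw scan density and a rough per-frame triangle budget for standalone headsets), not published specifications.

```python
# Back-of-the-envelope: how aggressively must a raw photogrammetry scan be
# simplified before a standalone headset can render it at frame rate?
# Both numbers are illustrative assumptions, not measured or published specs.
raw_scan_triangles = 20_000_000   # dense scans easily reach tens of millions of triangles
mobile_frame_budget = 300_000     # assumed per-frame triangle budget on mobile VR hardware

decimation_factor = raw_scan_triangles / mobile_frame_budget
kept_percent = 100 / decimation_factor
print(f"The scan must be decimated roughly {decimation_factor:.0f}x, "
      f"keeping only about {kept_percent:.1f}% of its triangles.")
```

The exact numbers matter less than the order of magnitude: the raw output of a scan is one to two orders of magnitude too heavy for the hardware that matters, which is why so much of the work lies in simplification rather than capture.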

But we are making progress on all of these counts. Facebook’s considerable investments in virtual reality have for the first time turned it into something that your grandparents can intuitively understand (Facebook owns Oculus and is increasingly incorporating it into the main company, for better and for worse). There is a vibrant community of artists and technologists drawn to the blank canvas of such a new and exciting technology platform, and they are certainly innovating. And the latest generation of standalone headsets (namely the Oculus Quest 2) is opening new doors due to its incredible combination of affordability and considerable advancements in performance. For example, below is a video of that same sculpture studio that I scanned, but seen from the perspective of a VR user who is experiencing it with the Oculus Quest 2. The experience is completely intuitive (I’ve had 80-year-olds figure it out just as quickly as 10-year-olds), and the ability for the user to explore a life-sized model freely based on their own curiosities is infinitely more engaging than the static 30-second video of a simulated fly-through that you saw earlier.

I think the implications of this are huge.

Napster, but for Tourist Attractions

Remember back in the early 2000s, when Metallica sued Napster for making their music easily and freely available for download? Up until then, few people gave much thought to the idea that digital goods could threaten a physical industry. Storing and transmitting music had always required a physical medium, or at the very least the source was easily traceable to a particular radio station or movie production that could then be sent a bill for royalties. But then, all of a sudden, anyone with a bit of computer know-how could generate infinite copies of pretty much any song for free, it was mostly untraceable, and, as history has proven, completely unstoppable. It took the better part of two decades to figure out a new system, but we seem to have landed on a subscription model that peddles the convenience of accessing all of the music for a nominal monthly fee. The music industry has adapted and learned to live with this reality, but it will never be the same.

I envision something analogous happening in the tourism industry. It won’t be on the same scope or scale as the changes that swept the music industry when the MP3 came into widespread use, and I don’t think that the damage will be anywhere near as significant. Actual travel, and actual life experiences, will always be far superior to any limited digital facsimile, so foreseeable versions of virtual experiences will obviously not replace them. And it is also just a lot easier for the general public to rip a CD than it is to create a realistic photogrammetry model and build an experience around it. But we are entering an era where it will be possible to transmit a convincing digital representation of the experience of being in a real-world location. Ownership of these experiences will no longer be secured by mere physical ownership of the real-world location, and that will pose many challenges and questions. But it will also create new opportunities and new ways of conveying knowledge, understanding, and meaning. I’m excited to be a part of that process.
