Spatial computing
{{Short description|Computing paradigm emphasizing 3D spatial interaction with technology}}
{{Distinguish|Spatial navigation}}
Spatial computing is any of various 3D human–computer interaction techniques that are perceived by users as taking place in the real world, in and around their natural bodies and physical environments, instead of being constrained to and perceptually behind computer screens. This concept inverts the long-standing practice of teaching people to interact with computers in digital environments, instead teaching computers to understand and interact with people more naturally in the human world. The concept overlaps with and encompasses others, including extended reality, augmented reality, mixed reality, natural user interface, contextual computing, affective computing, and ubiquitous computing, and usage of these adjacent labels is imprecise.{{cite news |date=2024-02-02 |last=Ovide |first=Shira |title=Apple's Vision Pro is 'spatial computing.' Nobody knows what it means |newspaper=Washington Post |url=https://www.washingtonpost.com/technology/2024/02/02/apple-vision-pro-what-is-spatial-computing-ar-vr/ |accessdate=2024-02-02}}
Spatial computing devices use sensors such as RGB cameras, depth cameras, 3D trackers, and inertial measurement units to sense and track nearby human bodies (including hands, arms, eyes, legs, and mouths) during ordinary interactions between people and computers in a 3D space.{{Cite web |title=Tracking in Virtual Reality and Beyond – VR 101: Part III |url=https://blog.vive.com/us/tracking-in-virtual-reality-and-beyond-vr-101-part-iii/ |access-date=2024-04-19 |website=blog.vive.com |language=en}} They also use computer vision to attempt to understand real-world scenes such as rooms, streets, or stores: to read labels, recognize objects, build 3D maps, and more. They often use extended reality and mixed reality to superimpose virtual 3D graphics and virtual 3D audio onto the human visual and auditory systems, providing information more naturally and contextually than traditional 2D screens can.
Spatial computing does not strictly require any visual output. For example, an advanced pair of headphones using an inertial measurement unit and other contextual cues could qualify as spatial computing if the device placed contextual audio information spatially, as if the sounds consistently existed in the space around the headphones' wearer. Smaller internet of things devices, such as a robot floor cleaner, are unlikely to be called spatial computing devices because they lack the more advanced human–computer interactions described above.
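The head-tracked audio idea above can be sketched in a few lines. This is a minimal illustration only, not any product's implementation: it assumes a flat 2D world, a yaw angle supplied by an IMU, and a simple constant-power stereo pan standing in for a real head-related transfer function (HRTF).

```python
import math

def head_locked_pan(source_xy, listener_xy, listener_yaw):
    """Stereo gains for a world-fixed sound source, given the listener's
    heading (yaw, radians, e.g. from an IMU). As the head turns, the
    gains change so the source appears to stay put in the room.
    Convention: counterclockwise-positive angles; yaw 0 faces +x."""
    world_angle = math.atan2(source_xy[1] - listener_xy[1],
                             source_xy[0] - listener_xy[0])
    rel = world_angle - listener_yaw         # azimuth relative to the head
    pan = -math.sin(rel)                     # -1 = hard left, +1 = hard right
    theta = (pan + 1.0) * math.pi / 4.0      # constant-power pan law
    return math.cos(theta), math.sin(theta)  # (left_gain, right_gain)
```

A source straight ahead yields equal gains in both ears; turning the head so the source lies to the listener's left moves it fully into the left channel. A real spatial audio renderer would use HRTFs and distance attenuation rather than a flat pan, but the head-tracking principle is the same.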
Spatial computing often refers to personal computing devices such as headsets and headphones, but other systems that use real-time spatial positioning for displays, such as projection mapping or cave automatic virtual environment (CAVE) displays, can also be considered spatial computing if they accept spatial human input from their participants.
History
Spatial computing as a field could be considered to have begun as early as 1969 with the work of Myron Krueger, who developed a number of AR-related computing systems, most notably the graphical environment Videoplace.{{Cite web |title=Myron Krueger's Videoplace Pioneers "Artificial Reality" : History of Information |url=https://www.historyofinformation.com/detail.php?entryid=4699 |access-date=2025-05-28 |website=www.historyofinformation.com}}{{Cite book |last1=Krueger |first1=Myron W. |last2=Gionfriddo |first2=Thomas |last3=Hinrichsen |first3=Katrin |chapter=VIDEOPLACE---an artificial reality |date=1985-04-01 |title=Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '85 |chapter-url=https://dl.acm.org/doi/10.1145/317456.317463 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=35–40 |doi=10.1145/317456.317463 |isbn=978-0-89791-149-8}} A precursor of this system overlaid digital images with people's live bodies. Videoplace captured a user's image and projected it onto a screen, where the user could use the projection to "touch" and interact with digital objects and "critters", demonstrating in primitive form many of the capabilities modern AR systems exhibit. Videoplace could scale, move, and rotate the user's projection depending on the user's position and actions.
The term apparently originated in the field of geographic information systems (GIS) around 1985{{Cite journal |last=Reeve |first=D. E. |date=April 1985 |title=Computing in the geography degree: limitations and objectives |url=http://www.tandfonline.com/doi/full/10.1080/03098268508708923 |journal=Journal of Geography in Higher Education |language=en |volume=9 |issue=1 |pages=37–44 |doi=10.1080/03098268508708923 |issn=0309-8265|url-access=subscription }} or earlier, describing computation over large-scale geospatial information. Early examples of spatial computing in GIS include ArcInfo, initially released in 1981{{Cite web |last=GISuser |date=2007-06-28 |title=ESRI, Arc/Info, ArcGIS, ArcView... 25 Years in the making - A Time Line |url=https://gisuser.com/2007/06/esri-arcinfo-arcgis-arcview-25-years-in-the-making-a-time-line/ |access-date=2025-05-28 |website=GIS user technology news |language=en-US}}, which became part of ArcGIS alongside ArcEditor, together providing mapping, analysis, editing, and geoprocessing for geodatabases.{{Cite web |title=ArcObjects 10 .NET SDK Help |url=https://help.arcgis.com/en/sdk/10.0/arcobjects_net/conceptualhelp/index.html#//00010000026z000000 |access-date=2025-05-28 |website=help.arcgis.com}} This sense is related to the modern use, but at the scale of continents, cities, and neighborhoods.{{Cite web |date=1993 |title=Towards intelligent spatial computing for the Earth sciences in South Africa |url=https://journals.co.za/doi/pdf/10.10520/AJA00382353_9209}} Modern spatial computing centers on the human scale of interaction, roughly the size of a living room or smaller, though it is not limited to that scale in the aggregate.
In the early 1990s, as the field of virtual reality was beginning to be commercialized beyond academic and military labs, a Seattle startup called Worldesign used the term "spatial computing"{{Cite web |date=1993 |title=The Virtual Environment Theater using Spatial Computing |url=http://www.realityprime.com/images/spatial-computing.png |website=RealityPrime |vauthors=Jacobson, Bar-Zeev, Wong, Dagit}} to describe the interaction between individual people and 3D spaces, operating at the human end of the scale rather than the geographic scale of the earlier GIS usage. The company built a CAVE-like environment it called the Virtual Environment Theater, whose 3D experience was a virtual flyover of the Giza Plateau, circa 3000 BC. Robert Jacobson, CEO of Worldesign, attributes the origins of the term to experiments at the Human Interface Technology Lab at the University of Washington, under the direction of Thomas A. Furness III; Jacobson co-founded that lab before spinning off this early VR startup.
In 1997, an academic publication by T. Caelli, Peng Lam, and H. Bunke called "Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies" introduced the term more broadly for academic audiences,{{Cite book |last1=Caelli |first1=Terry |url=https://books.google.com/books?id=opTkF1ayK54C&q=%22spatial+computing%22 |title=Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies |last2=Bunke |first2=Horst |date=1997 |publisher=World Scientific |isbn=978-981-02-2924-5 |language=en}} focusing on a variety of topics such as image processing, dead reckoning navigation, object recognition, and visualizing spatial data.
The specific term "spatial computing" was referenced again in 2003 by Simon Greenwold,{{cite web |last1=Greenwold |first1=Simon |title=Spatial Computing |url=https://acg.media.mit.edu/people/simong/thesis/SpatialComputing.pdf |publisher=MIT Graduate Thesis |accessdate=22 December 2019 |date=June 2003}} who defined it as "human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces". MIT Media Lab alumnus John Underkoffler gave a TED talk in 2010{{cite web |last1=Underkoffler |first1=John |title=Pointing to the Future of UI |url=https://www.ted.com/talks/john_underkoffler_pointing_to_the_future_of_ui |publisher=TED Conference |date=2010}} in which he demonstrated the multi-screen, multi-user spatial computing systems being developed by Oblong Industries, which sought to bring to life the futuristic interfaces Underkoffler had conceptualized for the films Minority Report and Iron Man.
Google Earth, initially released by Keyhole Inc. in 2001 and re-released by Google in 2005, can be considered a capable GIS and includes advanced geospatial tools and capabilities.{{Cite web |title=Google Earth capabilities for no-code geospatial evaluation and analytics |url=https://mapsplatform.google.com/maps-products/earth/capabilities/ |access-date=2025-05-28 |website=Google Maps Platform |language=en}}
Notable uses of spatial computing
In 2019, Microsoft's HoloLens team released a video outlining Airbus' partnership with Microsoft Azure, using Azure's mixed reality services to streamline and improve the aircraft design process and reduce development error.{{Cite AV media |url=https://www.youtube.com/watch?v=lxjC4Z05qh8 |title=Airbus drives innovation and accelerates production with Azure mixed reality and HoloLens 2 |date=2019-06-17 |last=Microsoft HoloLens |access-date=2025-05-28 |via=YouTube}} Airbus used the HoloLens 2 for this purpose; its executive vice president of engineering claimed that validation phases of the design process were "hugely accelerated by 80 percent" and said he "strongly believe[d]" that improvements of up to 30% in industrial tasks could be attained with the HoloLens 2. In the video, Airbus cited the maturity of Microsoft Azure services as "key" to its use of the HoloLens 2.
Also in 2019, the U.S. Army partnered with Microsoft to produce a HoloLens-based Integrated Visual Augmentation System (IVAS) giving infantry troops various abilities, including training with holograms, projecting 3D maps into their vision, and seeing through smoke and around corners.{{Cite web |title=U.S. Army to use HoloLens technology in high-tech headsets for soldiers |url=https://news.microsoft.com/source/features/digital-transformation/u-s-army-to-use-hololens-technology-in-high-tech-headsets-for-soldiers/ |access-date=2025-05-28 |website=Source |language=en-US}} By 2021, Microsoft had received tens of thousands of hours of feedback on the system. Sergeant Marc Krugh said at the time that the partnership had already caused the Army to rethink some of its troops' operational strategy.
Products
[[File:Apple Vision Pro on display.jpg|thumb|The Apple Vision Pro is a spatial computing product developed by Apple]]
In March 2022, MATSUKO, a company founded by Mária Virčíková and Matúš Kirchmayer, released a real-time holographic communication app that requires only a smartphone camera. The company patented its single-camera technology, which lets users transmit and interact as realistic 3D holograms in XR environments, supported by 5G networks and mixed reality glasses.
Apple announced the Apple Vision Pro, a device it markets as a "spatial computer", on June 5, 2023. It includes features such as Spatial Audio, two 4K micro-OLED displays, the Apple R1 chip, and eye tracking, and was released in the United States on February 2, 2024.{{cite web |title=Apple Vision Pro |url=https://www.apple.com/apple-vision-pro/ |website=Apple |publisher=Apple Inc. |access-date=5 June 2023}} In announcing the platform, Apple invoked its history of popularizing the 2D graphical user interface, which supplanted prior human–computer interface mechanisms such as the command line, and positioned spatial computing as a new category of interactive device of comparable importance to the introduction of the 2D GUI.
Magic Leap had previously used the term "spatial computing" to describe its own devices, starting with the Magic Leap 1. Its usage appears consistent with Apple's, although the company did not continue using the term in the long run.{{cite web |title=Magic Leap |url=https://www.magicleap.com |website=Magic Leap |publisher=Magic Leap Inc. |access-date=9 Feb 2024}}
In 2017, IKEA released IKEA Place, an AR app that lets users freely place and manipulate digital furniture within their rooms.{{Cite web |title=Launch of new IKEA Place app – IKEA Global |url=https://www.ikea.com/global/en/newsroom/innovation/ikea-launches-ikea-place-a-new-app-that-allows-people-to-virtually-place-furniture-in-their-home-170912/ |access-date=2025-05-28 |website=IKEA |language=en}} IKEA Place automatically measures the room and scales the overlaid digital furniture to match. It is based on Apple's ARKit software, which now includes LiDAR support and topological map generation to model scenes.{{Cite web |title=ARKit 6 - Augmented Reality |url=https://developer.apple.com/augmented-reality/arkit/ |access-date=2025-05-29 |website=Apple Developer |language=en}}
On February 24, 2019, Microsoft unveiled the HoloLens 2, which includes mixed reality tools and can generate interactive, manipulable holograms in 3D space.{{Cite web |title=HoloLens 2 gives Microsoft the edge in next generation of computing |url=https://news.microsoft.com/source/features/innovation/hololens-2/ |access-date=2025-05-28 |website=Source |language=en-US}} These holograms can be tied to a physical object or completely independent and free-floating. The Azure Spatial Anchors cloud service was released alongside it, allowing holograms to persist over time and across many individuals' devices.
The Meta Quest 3, a mixed reality gaming headset that includes spatial audio, two RGB cameras, and the ability to interact with virtual characters, was released on October 9, 2023, at a notably lower price than the Apple Vision Pro, though with reduced capabilities.{{Cite web |date=May 28, 2025 |title=Meta Quest 3 |url=https://www.meta.com/quest/quest-3/ |access-date=May 28, 2025 |website=meta.com}}{{Cite web |date=2023-09-28 |title=The Meta Quest 3: Spatial Computing and Mixed Reality Are Things Now |url=https://displaydaily.com/the-meta-quest-3-spatial-computing-and-mixed-reality-are-things-now/ |access-date=2025-05-28 |website=Display Daily |language=en-US}}
Further advancements, including a holographic meeting experience developed by MATSUKO with Telefónica and NVIDIA, were demonstrated at Mobile World Congress (MWC) 2024 in February 2024. This iteration used 5G, edge computing, and AI to enhance realism with eye contact and facial expression tracking, and supported devices such as the Apple Vision Pro and Meta Quest.
Other uses of the term
In computing, the word "spatial" has also been used to refer to the unrelated concept of moving data between processing elements that are arranged in a physical space. In 1992, "spatial machines" were suggested as an approach to parallel computation.Yosee Feldman and Ehud Shapiro, Communications of the ACM, 35(10), pp. 61–73, 1992. In 2013, a programming standard was proposed for "spatial computing".HPCWire, [https://www.hpcwire.com/off-the-wire/openspl-consortium-unveils-new-programming-standard-spatial-computing/ OpenSPL Consortium Unveils New Programming Standard for Spatial Computing]. Computer scientists at ETH Zurich have proposed a "spatial computer" model for energy-efficient parallel computation.[https://htor.inf.ethz.ch/publications/img/gianinazzi-spatial.pdf Lukas Gianinazzi, et al., 2023]. AMD describes AMD XDNA as a "spatial dataflow NPU architecture", and the University of Illinois is developing a compiler framework for "spatial dataflow accelerators".AMD, [https://www.amd.com/en/technologies/xdna.html AMD XDNA Architecture].NSF, [https://www.nsf.gov/awardsearch/showAward?AWD_ID=2338739 CAREER: An Agile Compiler Framework for Spatial Dataflow Accelerators].
See also
- {{annotated link|A-Frame (virtual reality framework)}}
- {{annotated link|Brain–computer interface}}
- {{annotated link|Cyberspace}}
- {{annotated link|Extended reality}}
- {{annotated link|OpenXR}}
- {{annotated link|Smart city}}
- {{annotated link|Spatial audio}}
- {{annotated link|Supranet}}
- {{annotated link|Technological singularity}}
- {{annotated link|Transhumanism}}
- {{annotated link|Virtual community}}
- {{annotated link|Virtual world}}
- {{annotated link|WebXR}}
- {{annotated link|Wirehead (science fiction)|Wirehead}}
References
{{reflist}}
{{Extended reality|state=collapsed}}
{{Ambient intelligence|state=collapsed}}
Category:Science fiction themes