eXtended Reality

eXtended Reality (XR) is a comprehensive term for a multitude of technologies used to create immersive experiences, including Augmented Reality, Virtual Reality, and Mixed Reality. In the ‘new normal,’ with business travel and office commutes restricted, the need for physical spaces and physical interactions is rapidly diminishing. ‘Working from home’ and collaboration through digital means have become the ‘new reality.’ It is now imperative for enterprises to explore newer possibilities using XR to drive next-gen digital collaboration and sales and ensure business continuity.

Manufacturers are gradually opening up to XR to showcase new products to customers in highly immersive setups. E-commerce conglomerates are using AR to transform the online shopping arena by blending the benefits of physical stores and online experiences into immersive virtual amalgamations. E-commerce is morphing into ‘AR Commerce,’ allowing customers to immerse themselves in virtual stores and interact with and experience products in real-time 3D, while enjoying the convenience of remote shopping.

Immersive technologies, with their strong appeal and prominence, will certainly influence and transform industries across the value chain, from collaborative prototyping to simulated training, immersive sales to customization, assembly instructions to quality assurance, and service and operations to remote assistance.

The hardware market for immersive technologies is still evolving. The focus is on mobile-based solutions that cater to the needs of a larger audience through democratized solutions. Enterprises can unleash the possibilities of immersive and interactive experiences by combining XR with AI, computer vision, voice, and gestures.

ARCore

ARCore is a platform by Google for building augmented reality applications that integrate virtual content using motion tracking, environmental understanding, and light estimation techniques. ARCore enables a smartphone to understand the world around it and interact with information. Some ARCore features are available on both Android and iOS, making it possible to create shared AR experiences.

ARKit

Augmented reality (AR) describes user experiences that add 2D or 3D elements to the live view from a device's camera in a way that makes those elements appear to inhabit the real world. ARKit combines device motion tracking, camera scene capture, advanced scene processing, and display conveniences to simplify the task of building an AR experience. It integrates the iOS device's camera and motion features to produce augmented reality experiences in an application.

Vuforia

Vuforia is an Augmented Reality SDK for mobile devices that enables the creation of augmented reality applications. It uses computer vision technology to recognize and track image targets and simple 3D objects, such as boxes, in real time. This image registration capability enables developers to position and orient virtual objects, such as 3D models and other media, in relation to real-world images when these are viewed through the camera of a mobile device or HoloLens. The virtual object then tracks the position and orientation of the image in real time so that the viewer’s perspective of the object corresponds with their perspective of the image target, and the virtual objects appear as part of the real-world scene.

Microsoft HoloLens

Microsoft HoloLens SDK helps create multiuser, interactive and spatially aware applications. It provides the flexibility to use popular development platforms such as Unity, Unreal, and Vuforia to create mixed reality experiences.

Unity

Unity is a cross-platform game engine developed by Unity Technologies. It is used to create 2D/3D applications, video games, animations for websites, gaming consoles, mobile devices, and more. Unity can target games to multiple platforms: within a project, developers have control over delivery to mobile devices, web browsers, desktops, and consoles. The tool is also used to create content for eXtended Reality devices such as Google VR, Microsoft HoloLens, HTC Vive, Oculus Rift, and Android and iOS smartphones.

Unreal Engine

Unreal Engine, offered by Epic Games, is a complete suite of creation tools for game development, architectural and automotive visualization, linear film and television content creation, broadcast and live event production, training and simulation, and other real-time applications. It can be used to develop eXtended Reality applications for the HTC Vive, mobile devices, and PCs.

Light variants of ML

Having 3D models with basic functions alone is not sufficient for providing great experiences. Therefore, AI is used to power up the apps and enable them to behave naturally. Frameworks like TensorFlow, TensorFlow Lite, Core ML, and ML Kit are used for this purpose. They provide the necessary gesture/object/face/body detection and perform multiple analyses right in the user’s palm.
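
As an illustration, the hedged sketch below runs a TensorFlow Lite model on-device through the Python Interpreter API; the model file name and the input frame are placeholders, not real assets.

```python
# A minimal sketch of on-device inference with TensorFlow Lite.
# "gesture_classifier.tflite" is a placeholder model path, not a real asset.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="gesture_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fabricate an input frame matching the model's expected shape and dtype.
frame = np.random.random_sample(input_details[0]["shape"]).astype(
    input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class:", int(np.argmax(scores)))
```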

OpenCV

OpenCV is a library of functions used to solve real-time computer vision problems. It is supported on many platforms, such as Windows, Linux, macOS, and smartphones (iOS and Android). Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. It has over 2,500 optimized algorithms, which can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images in an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, and more.
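
The snippet below is a small, illustrative use of one capability from that list: detecting faces with one of OpenCV's bundled Haar cascades. The image path is a placeholder.

```python
# Detect faces in an image with a Haar cascade shipped with OpenCV.
# "scene.jpg" is a placeholder path.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns (x, y, width, height) rectangles, one per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("scene_with_faces.jpg", image)
```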

Cloud Services

Several services provided by Microsoft Azure, Amazon Web Services, and Google Cloud are very useful for eXtended Reality applications. These are mainly vision- and speech-based cognitive services, along with remote rendering and spatial anchors, which enable applications to offload substantial work to the cloud whenever devices do not have sufficient computation power to support complex features; these services perform the processing on the cloud to ensure the best experience.
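
The sketch below illustrates the offloading pattern with a plain REST call in Python. It follows the general shape of Azure's Computer Vision API, but the resource name, key, and API version are placeholders and may differ from any specific deployment.

```python
# A hedged sketch of offloading vision analysis to a cloud cognitive service.
# Endpoint, key, and API version are placeholders following Azure's pattern.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

with open("captured_frame.jpg", "rb") as f:   # frame captured by the XR device
    image_bytes = f.read()

response = requests.post(
    f"{ENDPOINT}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Objects"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/octet-stream",
    },
    data=image_bytes,
)
response.raise_for_status()
print(response.json())   # captions and detected objects to overlay in AR
```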

Autodesk Products

Autodesk has been a leader in the area of 3D content creation for years. Some of its popular products, such as Maya, AutoCAD, 3ds Max, and Revit, are used by many industrial designers, architects, real estate designers, game artists, and animation studios. Due to the vast availability of pre-curated content and artists, Autodesk products are a popular choice for enabling 3D content in eXtended Reality applications.

Adobe CC

Creative Cloud is a set of applications and services developed by Adobe for graphic design, video editing, web development, photography, UX, 3D design, and cloud services. Adobe CC helps with texturing, physically based rendering of 3D content, compositing, and video editing.

Blender

Blender is a free, open-source 3D creation suite for powering eXtended Reality applications. It can handle the full 3D pipeline: modelling, rigging, animation, simulation, rendering, compositing, and motion tracking. It even supports Python scripting for basic animation and rendering.
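
As a minimal example of that Python scripting, the script below (run inside Blender's scripting workspace) creates a cube and keyframes a simple two-second slide.

```python
# Minimal Blender scripting sketch: create a cube and keyframe its motion.
import bpy

# Add a cube at the origin; it becomes the active object.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object

# Keyframe the starting position at frame 1.
cube.keyframe_insert(data_path="location", frame=1)

# Move the cube and keyframe the end position at frame 48 (2 s at 24 fps).
cube.location = (4.0, 0.0, 0.0)
cube.keyframe_insert(data_path="location", frame=48)
```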

Kinetic AR

Kinetic AR combines robotics with augmented reality (AR), wherein complex robotic instructions are abstracted into simple AR gesture-based instructions. For example, robotic cleaning drones for large windmills can be controlled with a set of simple AR-based instructions.
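
The toy sketch below illustrates the idea only; the gesture names and the DroneClient class are hypothetical stand-ins for a real gesture recognizer and robot control API.

```python
# Purely illustrative: translate recognized AR gestures into robot commands.
class DroneClient:
    """Hypothetical stand-in for a cleaning-drone control interface."""
    def send(self, command: str, **params) -> None:
        print(f"-> {command} {params}")

# Each simple gesture maps to a more complex robotic instruction.
GESTURE_TO_COMMAND = {
    "swipe_up":    ("ascend", {"metres": 2}),
    "swipe_down":  ("descend", {"metres": 2}),
    "circle":      ("clean_blade", {"passes": 1}),
    "pinch_close": ("return_to_dock", {}),
}

def handle_gesture(gesture: str, drone: DroneClient) -> None:
    command, params = GESTURE_TO_COMMAND.get(gesture, ("hover", {}))
    drone.send(command, **params)

handle_gesture("circle", DroneClient())
```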

Shared AR

One of the upcoming trends across industries is shared AR, where the augmented information is shared with distributed teams across locations. For example, a car sales executive could connect with a remotely located customer and share a 3D model of the car while explaining the specifications and features. Shared AR is highly effective for collaboration and remote assistance, allowing multiple participants to view, modify, interact with, and share AR experiences.

Generative Modelling

Generative modelling uses simple gestures/contours/commands to automatically generate complex 3D shapes, in contrast to manual modelling, which is time consuming. For example, a sports shoe manufacturer can use generative modelling to design 3D models for its large inventory of shoes, instead of manually modelling each shoe.
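
A minimal sketch of the idea: deriving a 3D mesh automatically from a simple 2D contour by revolving it around an axis, rather than modelling the shape by hand. The profile values are arbitrary illustration data.

```python
# Generate 3D vertices from a 2D contour by revolving it around the Z axis.
import numpy as np

def revolve(contour, segments=32):
    """Revolve a list of (radius, height) pairs around the Z axis."""
    angles = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    vertices = []
    for radius, height in contour:
        for theta in angles:
            vertices.append((radius * np.cos(theta),
                             radius * np.sin(theta),
                             height))
    return np.array(vertices)

# A crude vase-like profile a designer might sketch with a gesture.
profile = [(1.0, 0.0), (1.4, 0.5), (1.1, 1.0), (0.6, 1.5), (0.8, 2.0)]
mesh_vertices = revolve(profile)
print(mesh_vertices.shape)   # (len(profile) * segments, 3)
```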

Motion Graphics

Motion Graphics and Motion Tracking enable developers and designers to track and simulate realistic human body movements through a combination of computer vision, deep learning, and motion technologies (Inverse Kinematics, Skinning, and Rigging). This technology can be used to simulate lifelike avatars for live training and interactive sessions.
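
As a small worked example of the inverse kinematics mentioned above, the function below analytically solves the two joint angles of a planar two-link limb (say, shoulder and elbow) so that its end reaches a target point; the link lengths and target are illustrative.

```python
# Analytic inverse kinematics for a planar two-link limb.
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians for target (x, y)."""
    d2 = x * x + y * y
    # Law of cosines for the elbow angle; clamp for numerical safety.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

shoulder, elbow = two_link_ik(x=0.8, y=0.6, l1=0.6, l2=0.5)
print(math.degrees(shoulder), math.degrees(elbow))
```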

Hologram

A hologram is a three-dimensional virtual projection of an object in free space that can be seen by anyone walking around it, without wearing 3D spectacles or any other kind of visual aid. Interacting with these holograms through voice and gestures provides an immersive experience to users. While all advanced holographic display technologies rely on creating an illusion for the viewer, the technique for generating the illusion varies, from eye tracking and mist projection to glass projection, and so on.

Until recently, one could see 3D holograms only in movies and science fiction literature, but now they are set to become a reality. With the advent of 5G, holographic telepresence (that is, live streaming of a life-sized hologram of a person situated miles away into a meeting room) and hologram calls (to communicate with colleagues, friends, and family) are set to become pervasive.

Photogrammetry

Photogrammetry is the science and technology of auto-generating photo-realistic 3D models with limited manual effort, helping 3D artists save significant amounts of time that they would otherwise spend on manual modelling. Photogrammetry extracts 3D information about physical objects and environments from photographs to generate 3D models.

This process involves photographing an object, structure, or space from multiple angles and using this data to obtain information about the texture, shape, volume, and depth of the object.
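
The hedged sketch below shows the core of that process for just two overlapping photographs, using OpenCV: match features, estimate the relative camera pose, and triangulate a sparse set of 3D points. The image paths and camera intrinsics are placeholders; production pipelines use many images and dense reconstruction.

```python
# Two-view structure-from-motion sketch: sparse 3D points from two photos.
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],      # assumed camera intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Estimate the relative camera pose and triangulate 3D points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (points4d[:3] / points4d[3]).T        # sparse point cloud
print(points3d.shape)
```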

In recent years, an increasing number of scenarios involve 3D modelling of products in large inventories and catalogues. For example, with WebXR gaining traction, there is a compelling need for retailers to convert their existing 2D product catalogues into 3D models for their websites and mobile applications.

WebXR

WebXR makes it possible for JavaScript to create an eXtended Reality experience on a website, without installing any special applications. Computer and smartphone screens become the portal through which end users enter the world of eXtended Reality. The set of libraries is currently supported on only a few browsers and devices; however, it is bound to mature into the next platform for experiencing XR.

LiDAR

Light Detection and Ranging (LiDAR) is a surveying method that measures the distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor. Differences in laser return time and wavelength are used to make digital 3D representations of the target. It has evolved into a useful tool for AR/VR/MR. The latest versions of the iPhone Pro and iPad Pro have LiDAR sensors, which can provide highly accurate AR tracking. The accuracy of 3D scanning and tracking that LiDAR offers can create new possibilities for better AR/VR/MR experiences and can reduce the time required to build such solutions (compared to manually modelling the entire environment). LiDAR is already being deployed for large-scale archaeological excavations and has created a revolution in archaeology.
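
As a simple illustration of what such depth sensing enables, the sketch below back-projects a depth map into a 3D point cloud using the pinhole camera model; the depth values and intrinsics are placeholders.

```python
# Convert a depth map into a 3D point cloud with the pinhole camera model.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert an HxW depth map (metres) into an Nx3 array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Fake a 480x640 depth frame between 0.5 m and 5 m for illustration.
depth = np.random.uniform(0.5, 5.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)   # (307200, 3)
```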

Programmable Matter

Programmable matter can change its physical properties (shape, density, moduli, conductivity, optical properties, etc.) programmatically, based on user input or autonomous sensing. Programmable matter is linked to the concept of a material that inherently has the ability to perform information processing. Currently, a significant number of research projects are trying to leverage the immersiveness and interactivity of AR, enabling users to interact with chunks of programmable matter and create physical prototypes.

AR Cloud

AR Cloud is a technology that organizes and stores AR twin information of physical world objects. It relays contextual information of real-world objects in real time and delivers digital content across devices like mobile phones, tablets, and hands-free devices such as AR Glasses or other head-mounted devices (HMD). For example, an architect can interact with, and modify the shape of a lump of raw material to create a physical prototype of a building through simple, AR-based gestures.

Intelligent Motion Interpolation

Motion Interpolation allows intelligent interpolation between sparsely spaced keyframes to create smooth animations with minimal effort. Animations (2D as well as 3D) are created through ‘keyframes,’ which capture the position or configuration of a particular object at a particular point in time. All the intermediate frames are then interpolated from these keyframes. For example, if a cube needs to move from point A to point B in 2 seconds, one keyframe can be created with the cube at A and another with the cube at B (at the end of the 2 seconds); all intermediate cube positions are automatically interpolated. 3D characters and objects are typically animated through very closely spaced, complex keyframes.
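
The short example below works through the cube scenario: two keyframes (A at 0 seconds, B at 2 seconds) and automatic linear interpolation of every in-between position.

```python
# Linear interpolation of a cube's position between two keyframes.
import numpy as np

keyframes = {0.0: np.array([0.0, 0.0, 0.0]),    # point A at 0 s
             2.0: np.array([4.0, 1.0, 0.0])}    # point B at 2 s

def interpolate(keyframes, t):
    """Linearly interpolate the position at time t between the two keyframes."""
    (t0, p0), (t1, p1) = sorted(keyframes.items())
    alpha = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return (1.0 - alpha) * p0 + alpha * p1

# At 30 fps, the 2-second move spans 60 frames; every in-between is generated.
for frame in range(0, 61, 15):
    print(frame / 30.0, interpolate(keyframes, frame / 30.0))
```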