Artificial intelligence (AI) looms large at CES 2018. Here are three applications of the technology that are revolutionizing the video experience.
AI enables “volumetric video”
Brian Krzanich, Intel CEO, gave the keynote presentation at this year’s CES. Among the topics he focused on was how artificial intelligence is essential to creating new video experiences. A key application of AI is processing the truly huge amounts of video data these experiences require. To illustrate, Mr. Krzanich talked about the creation of what he called “volumetric video.”
To explain volumetric video, Mr. Krzanich talked about a VR project Intel has been working on with the NFL. The company ringed a football stadium with dozens of 5K video cameras to record the game. These multi-lens cameras record everything taking place around them. AI is used to stitch all the images together. However, the objective is not just to create a single VR experience. It is to record everything, from every angle, taking place within the stadium.
Mr. Krzanich says this allows the video system to divide the entire stadium space into what Intel calls “voxels.” These are the equivalent of pixels, with the addition of depth and volume characteristics. Once the stadium space is divided into voxels, a VR experience can be created from any position, not just from where a camera is. For example, a viewer could watch from beside the quarterback and then switch views to the end zone to see the play from the receiver’s perspective.
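Intel has not published the format it uses, but the core idea, a pixel extended with position and volume so that a virtual camera can be placed anywhere, can be sketched roughly like this (the field names and the distance-based view query are illustrative assumptions, not Intel’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    # Center of the cell within the stadium volume (meters) -- hypothetical layout.
    x: float
    y: float
    z: float
    # Edge length of the cubic cell (meters): the "volume" that a flat pixel lacks.
    size: float
    # Captured color, fused by AI from the surrounding 5K cameras.
    r: int
    g: int
    b: int

def voxels_in_view(voxels, camera_pos, max_distance):
    """Select voxels near an arbitrary virtual camera position --
    the viewpoint need not match any physical camera in the stadium."""
    def dist(v):
        return ((v.x - camera_pos[0]) ** 2 +
                (v.y - camera_pos[1]) ** 2 +
                (v.z - camera_pos[2]) ** 2) ** 0.5
    return [v for v in voxels if dist(v) <= max_distance]
```

Because every voxel carries its own 3D position, rendering “the view from beside the quarterback” is just a query over the voxel set rather than a switch to a different physical camera.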
This process generates three terabytes of data per minute during a game. To illustrate how much data that is, Mr. Krzanich said:
“We are creating the data equivalent of all of the text in the Library of Congress in the first quarter of any football game.”
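The arithmetic behind that comparison is straightforward. At the quoted rate of three terabytes per minute, a single 15-minute quarter of game clock works out as follows (actual elapsed capture time would be longer, so this is a lower bound):

```python
TB_PER_MINUTE = 3        # capture rate quoted by Intel
QUARTER_MINUTES = 15     # NFL game-clock length of one quarter

data_per_quarter_tb = TB_PER_MINUTE * QUARTER_MINUTES
print(data_per_quarter_tb)  # 45 terabytes per quarter of game clock
```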
Intel has built a line of neural network processors and AI software that powers this volumetric video approach.
Adapting to our needs
One of the limitations of “intelligent” assistants is that users must explicitly ask for what they need. That is not how LG envisions the technology working. The company’s new ThinQ AI platform will learn a device user’s habits and adapt to their needs. It will go further, combining user data with external data to alter device settings to better complete tasks. For example, an LG television powered by ThinQ might automatically switch inputs to a game console when a household member who games sits down in front of the TV.
LG will include ThinQ technology in its webOS TVs and combine it with Google Assistant. The combination will let owners use their voice, through the TV, to control and search both media and Internet of Things devices.
Making the TV picture better
Samsung is making a big commitment to 8K at CES. However, the company has a problem: there is no 8K content. So, it is employing AI technology to convert lower-resolution video sources to 8K. Of the new 85” Q9S 8K television, Samsung claims:
“The Q9S incorporates AI technology to deliver clear and pristine 8K resolution for any type of content. Using a proprietary algorithm, the Q9S continuously learns from itself to intelligently upscale the resolution of the content it shows — no matter the source of that content — to gorgeous 8K.”
Samsung has analyzed millions of videos and built a database of video profiles. AI is used to identify the characteristics of the target video and select the best video profile as a template for the up-conversion to 8K. Samsung says the conversion corrects brightness, contrast, and blurring without causing any gradation loss. The process also enhances the video soundtrack. For example, the crowd noise in live sports can be enhanced and made more prominent in the audio image.
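Samsung has not detailed how the profile selection works; one plausible reading of the description above is a nearest-match lookup, where the measured characteristics of the incoming video are compared against the stored profiles. A minimal sketch under that assumption (the feature names, values, and preset labels are all hypothetical):

```python
import math

# Hypothetical profile database: each entry maps measured video
# characteristics (brightness, contrast, sharpness, each normalized
# to 0..1) to an upscaling preset. Illustrative only -- not Samsung's
# actual schema.
PROFILE_DB = [
    {"features": (0.2, 0.4, 0.3), "preset": "dark-low-detail"},
    {"features": (0.7, 0.8, 0.9), "preset": "bright-high-detail"},
    {"features": (0.5, 0.5, 0.5), "preset": "balanced"},
]

def best_profile(measured):
    """Return the preset whose stored features are nearest (Euclidean
    distance) to the characteristics measured from the source video."""
    def dist(entry):
        return math.dist(entry["features"], measured)
    return min(PROFILE_DB, key=dist)["preset"]

print(best_profile((0.65, 0.75, 0.85)))  # bright-high-detail
```

The chosen preset would then act as the template that steers the up-conversion, which is consistent with the correction of brightness, contrast, and blurring the company describes.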
Why it matters
Intel is using AI to help process multiple camera angles to record the entire volume in which a game or event takes place.
LG is using AI to help televisions adapt to our needs and make control and discovery tasks easier.
Samsung is using AI to convert lower-resolution video to 8K, correcting brightness, contrast, and blurring.