
AI Computer Graphic Technologies in Movie Production

By: Minseo Kim

 

“The job requirements for a visual effects artist are no longer owning a beret and being good at painting backdrops; the industry now calls for engineers who are good at training deep learning algorithms to do the mundane work, like manually smoothing out an effect or making a digital character look realistic. In doing so, the creative artists who still work in the industry can spend less time hunched over a computer meticulously editing frame-by-frame and go do more interesting things.”

 

     Artificial Intelligence has become increasingly prominent over the last decade, and it is being applied across a wide range of fields. Around the end of 2018, Google’s AI firm DeepMind took new strides in the medical sector by introducing AlphaFold, a program that predicts the shape of a protein, such as Cas9, from the amino-acid sequence encoded in its DNA. Machine Learning (ML) is also progressing in areas like facial recognition using convolutional neural networks (CNNs), voice synthesis, music composition, natural language processing, and climate prediction. As Stanford University Computer Science professor Andrew Ng famously stated, “It is difficult to think of a major industry that AI will not transform. This includes healthcare, education, transportation, retail, communications, and agriculture. There are surprisingly clear paths for AI to make a big difference in all of these industries.” ML technology is having a profound impact on our daily lives.

 

      A notable group of stakeholders that AI technology is bolstering is the entertainment industry’s movie production firms, including companies like Disney and Pixar, along with NVIDIA, a major provider of GPU resources. AI is being used not only in the rendering and creation of animated films, but also in marketing and advertising.

 

Animation Technologies & AI

     Currently, AI is lending a major hand in computer graphics (CG); companies in the movie-making industry like Pixar are using it to enhance image quality and to render complicated, vivid visual effects. Pixar developed RenderMan, a rendering system tasked with producing many of these visual effects, accurately shaping difficult forms like liquids, and denoising animation frames for better on-screen quality. This parallels Disney’s earlier denoising work, which applied a well-developed CNN architecture to movie scenes: the network predicts a filtering kernel for each pixel, normalizes the kernel’s weights with a softmax, and is trained with an asymmetric loss function.
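The kernel-prediction idea can be sketched in a few lines. This is not Disney’s or Pixar’s actual code; it is a minimal illustration assuming the CNN has already produced raw per-pixel scores (here just an array handed in as `raw_scores`), showing only the softmax normalization and the weighted-average filtering step:

```python
import numpy as np

def softmax(z):
    # Normalize raw scores so each pixel's kernel weights are positive
    # and sum to 1 (subtracting the max keeps exp() numerically stable).
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def apply_predicted_kernels(noisy, raw_scores, k=5):
    """Denoise a grayscale image (H, W) with a per-pixel k*k kernel.

    raw_scores: (H, W, k*k) unnormalized weights, standing in for the
    output of a kernel-predicting CNN."""
    h, w = noisy.shape
    r = k // 2
    weights = softmax(raw_scores)                    # (H, W, k*k)
    padded = np.pad(noisy, r, mode="edge")
    out = np.zeros_like(noisy)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k].ravel() # k*k neighborhood
            out[i, j] = patch @ weights[i, j]        # weighted average
    return out
```

Because the softmax forces the weights of each kernel to sum to one, every output pixel is a convex combination of its neighbors, which averages out noise without shifting overall brightness; the learning happens upstream, in the network that decides which neighbors to trust.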

 

     With the development of RenderMan came a major technological breakthrough using AI, one that made well-crafted lighting in animations efficient to compute. But to fully understand the role of AI-assisted lighting in scenes of Big Hero 6’s San Fransokyo or of Inside Out, let’s start with a key component of these animations: light rays. The light that bounces off object surfaces, characters, and background areas all contributes to the composition, shape, and movement we see in a film. Disney describes this concept with the term “path tracing”. When a scene has one or more light sources, you can follow the trajectory of the light as it hits an object and bounces off its surface to reach another object nearby, which in turn is lit up or tinted by the interaction. The brightness varies with the surface texture of the objects: some materials, such as glass or clear water, are more reflective, while solids like granite absorb more of the light. A light ray may bounce between objects up to four consecutive times, and the greatest challenge is capturing all the millions of such light paths.

 

      This is where the technological solution steps into the picture. Many rays of light never reach the “camera” in the scene at all; they bounce off into the sky or to other unseen locations. Disney’s software team therefore decided to follow light rays backwards from the point of visual perception, the “camera”, and use path tracing to determine the luminance and texture of the objects the light has touched. Disney’s Hyperion team has worked on the technical methodology behind this concept for roughly two decades, starting in the early 2000s with diffusion models and point-cloud preprocessing of images. Over the years, as computing power grew and the process was refined with more effective, robust models, these modifications led to optimized ray-tracing engines (Christensen et al. 2018) that can even capture subsurface lighting effects, accounting for the diffusion of light and its direction after it interacts with geometrically concave forms.
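The backward-tracing idea described above can be sketched abstractly. This is only a toy illustration, not any studio’s renderer: the `Surface`, `ChainScene`, and `intersect` names are invented for the example, geometry is reduced to an index into a list of surfaces, and the recursion simply dims the light by each surface’s reflectivity while honoring the four-bounce cap mentioned in the text:

```python
MAX_BOUNCES = 4  # the cap on consecutive light bounces from the article

class Surface:
    """A surface that emits some light and reflects a fraction of the rest."""
    def __init__(self, emission, albedo):
        self.emission = emission  # light the surface gives off itself
        self.albedo = albedo      # fraction of incoming light it reflects

class ChainScene:
    """Toy 'scene': each intersection just steps to the next surface."""
    def __init__(self, surfaces):
        self.surfaces = surfaces
    def intersect(self, ray):
        if ray >= len(self.surfaces):
            return None           # ray escaped the scene entirely
        return self.surfaces[ray], ray + 1

def trace_backwards(scene, ray, depth=0):
    """Follow a ray from the camera back toward the light sources."""
    if depth >= MAX_BOUNCES:
        return 0.0                # stop after four consecutive bounces
    hit = scene.intersect(ray)
    if hit is None:
        return 0.0                # ray bounced off into the sky: no light
    surface, bounced_ray = hit
    # Light seen along this ray = what the surface emits itself, plus a
    # dimmed copy of whatever arrives along the bounced direction.
    return surface.emission + surface.albedo * trace_backwards(
        scene, bounced_ray, depth + 1)
```

The payoff of starting at the camera is visible even in this sketch: rays that escape cost one call and contribute nothing, so no effort is wasted on light paths the audience would never see.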

 

     This technology’s importance is highlighted in one of Pixar’s recent films, Coco (2017). Pixar’s special Lightspeed team, managed by Renee Tam, was responsible for finding problems, such as lengthy render times, during the animation’s creation process, and for devising solutions. This particular film incorporated thousands of lights into the complicated but beautifully lit scenes of the Land of the Dead: roughly 18,000 light sources in a cemetery scene, 2,000 significant sources in the city scene, and up to 29,000 at the train station. In all, RenderMan handled around 8.2 million lights for the film, and the Lightspeed team’s solutions for these complex scenes brought the per-frame render time down from 1,000 hours to 75. The fruits of this work appear as wonderfully lit architectural marvels and colorful lights celebrating Mexican culture throughout the movie.

 

Marketing & Advertisement

     Even for the animations and character features themselves, movie companies use state-of-the-art tools to run simulations (in programs like Autodesk Maya and Katana), and are even building deep learning models that animate natural-looking facial features which synchronize fluidly with input speech and audio. Machine learning can be applied to all sorts of tasks, and one widely used capability is classification, learned by training on a pre-created, often pre-labeled, dataset. What kinds of roles, then, can this serve in the movie industry?
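“Training a classifier on a pre-labeled dataset” can be made concrete with the simplest possible example. This is a generic sketch, not any studio’s model: a tiny logistic-regression classifier fit by gradient descent on hand-labeled feature vectors, which is the same learn-a-yes/no-label pattern that larger systems scale up:

```python
import numpy as np

def train_classifier(X, y, lr=0.1, epochs=500):
    """Fit logistic regression by gradient descent.

    X: (n, d) feature vectors; y: (n,) pre-assigned 0/1 labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    # Label 1 when the predicted probability crosses one half.
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

Everything interesting lives in the labels: the model only ever learns the distinctions a human has already marked in the training set, which is why curated datasets are the real asset in these pipelines.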

 

     AI can be used to make various predictions by collecting data and then observing patterns, trends, or even user preferences. For example, in 2015 a program called ScriptBook was launched; it analyzes a movie’s script and, after about five minutes, provides a report with possible ratings, character analysis, a target audience profile, and box office sales predictions. In another case, IBM collaborated with 20th Century Fox on the trailer for Morgan, a 2016 sci-fi horror film directed by Luke Scott, using an AI system called Watson. The model was trained on one hundred sample horror movies to classify which visual, audio, and other moments logically belong in a trailer; it was then able to assemble a six-minute reel of candidate trailer moments in a single day.

 

     AI technologies are even shaping the world of live-action film, and have revolutionized marketing for Hollywood studios and firms like Netflix. Applications range from projecting correlations between the features of certain consumer packaged goods and rises in sales, to pooling data from social media posts, ticketing information, and even weather or demographics. All of this can inform decisions about which films to greenlight and how to distribute production resources efficiently. “You can make very hyper-personalized recommendations to consumers,” Stephen F. DeAngelis, CEO and founder of AI provider Enterra Solutions, stated during a discussion with some of Hollywood’s technology executives.

 

 

The Next Steps

     Thanks to the committed work of engineers, cutting-edge technologies can now render realistic animated scenes that seem to have been plucked straight out of our fantasies. Meanwhile, other technologies under the same overarching umbrella of AI have stepped up to analyze data and statistics, pinpointing trends and marketing films. AI has brought many benefits to the movie production industry, and it will likely continue to bring changes in the near future as technologies develop across many fields of research.
