FMP – Troubleshooting
I’m still facing the render problem I encountered last week and haven’t been able to fully resolve it. When I created a fresh test scene and rendered it, everything worked fine, so I suspect some settings in the project file are incorrect. While investigating, I also realized that when the model disappears, it reveals the backside of the building, which looks unnatural since the backside wouldn’t normally be visible. Modeling it also increases the polygon count unnecessarily, leading to longer render times.
To address this, I applied an anamorphic screen technique similar to those used for real 3D billboards. The basic steps are:

- Render the scene based on the current camera.
- Apply the rendered scene as a texture to a flat, front-facing screen model.
- Set the texture projection to camera mapping.
- Bake the texture using the material bake function.
- Apply the baked texture to the screen model.
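The idea behind these steps can be sketched in plain Python: each vertex of the flat screen samples the rendered image at the pixel the intended camera sees it through, so the texture only lines up from that one viewpoint. This is an illustrative sketch of the projection math, not a C4D or Maya API:

```python
# Minimal sketch of camera-based texture mapping, the core of the
# anamorphic-screen trick. Names and conventions here are illustrative:
# points are in camera space with +z pointing away from the camera.

def project_to_uv(point, focal_length=1.0):
    """Perspective-project a camera-space 3D point onto the image plane,
    returning normalized texture coordinates (UVs) centered at (0.5, 0.5)."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = 0.5 + 0.5 * focal_length * x / z
    v = 0.5 + 0.5 * focal_length * y / z
    return (u, v)

# A vertex on the optical axis samples the middle of the rendered texture...
print(project_to_uv((0.0, 0.0, 5.0)))   # (0.5, 0.5)
# ...while an off-axis vertex samples off-center, so the mapping only looks
# correct when viewed from the camera it was computed for.
print(project_to_uv((2.5, 0.0, 5.0)))   # (0.75, 0.5)
```

Viewed from any other angle, those baked UVs no longer correspond to what that viewpoint would see, which is exactly the distortion described below.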

With this technique, the screen appears distorted when viewed from other angles but looks accurate when seen through the intended camera. I tested this method with a human model and a simple animation, and it worked well. Afterward, I applied the disappearing effect, which looked more natural than previous renders, and the color change rendered successfully.
I also downloaded emoji files from Sketchfab but encountered issues importing the .fbx files into Maya and C4D—the meshes broke, or the animations didn’t work. Another option was the .gltf format, but neither Maya nor C4D could open it. After some research, I found that Blender supports this format, so I imported the files into Blender and exported them. Unfortunately, the same issues persisted. Eventually, I found a solution: exporting the files from Blender in .abc format to bake the animation into the mesh. This approach worked well, and I was able to import the files into C4D with only minor face subdivision issues. Since the emojis will appear very small, I decided to use them as is.
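Conceptually, the .abc export fixes the representation: Alembic stores the final deformed vertex positions for every frame, rather than a rig plus skin weights that importers can misinterpret. A toy Python sketch of that "bake the animation into the mesh" idea (all names are illustrative, not Blender's API):

```python
# Conceptual sketch of what exporting to .abc does: instead of shipping a
# skeleton that Maya/C4D may misread, cache the evaluated mesh per frame.

def bake_animation(rest_vertices, deform, frames):
    """Evaluate the deformer at each frame and cache the resulting mesh."""
    return [[deform(v, frame) for v in rest_vertices] for frame in frames]

# Toy deformer standing in for the rig: a vertical offset driven by frame.
def bounce(vertex, frame):
    x, y, z = vertex
    return (x, y + 0.1 * frame, z)

cache = bake_animation([(0, 0, 0), (1, 0, 0)], bounce, frames=range(3))
print(cache[2])  # frame 2: every vertex lifted by 0.2
```

Once the cache exists, the importer only needs to read vertex positions, which is why the broken-mesh and broken-animation problems went away.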

Using these, I created a simple particle simulation. I used a rectangular emitter to shoot particles towards the road. My initial plan was for the particles to have no lifetime and remain on the ground, accumulating where I walked. To achieve this, I used a walking animation from Mixamo as an approximate collider so the particles could interact with the pedestrians. As the human models passed through the particle field, they kicked and dispersed the particles via rigid body simulation. While I originally planned to create a matte from this data, it was difficult to match the simulated models to the people in the footage. Instead, I adjusted the particles to disappear a few frames after being emitted.
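The lifetime workaround at the end of that paragraph can be sketched in plain Python (illustrative only, not the C4D emitter API): emit a few particles each frame and cull any that have outlived a fixed frame count, so the population stays bounded instead of accumulating.

```python
# Sketch of the lifetime tweak: each particle is removed a few frames
# after emission rather than persisting on the ground.

LIFETIME = 3  # frames a particle survives after being emitted

def step(particles, frame, emitted_per_frame=2):
    """Advance one frame: cull expired particles, then emit new ones."""
    survivors = [p for p in particles if frame - p["born"] < LIFETIME]
    survivors += [{"born": frame} for _ in range(emitted_per_frame)]
    return survivors

particles = []
for frame in range(10):
    particles = step(particles, frame)

# The population stabilizes at emitted_per_frame * LIFETIME particles.
print(len(particles))  # 6
```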
When testing the render, I encountered memory errors related to RAM and VRAM. Despite my laptop meeting the requirements, it couldn’t render properly. I realized my graphics card driver hadn’t been updated in a while, so I updated it. Afterward, the render worked smoothly. I suspect the memory issue I experienced during the scene 5 render was caused by the same outdated driver.
However, a new issue arose: the simulation didn’t appear in the rendered sequences. This problem was similar to the previous render issue but involved simulation rather than texture. After searching extensively, I couldn’t find a case exactly like mine. I attempted a common solution: baking the simulation using the Mograph tag and exporting it as an .abc file. Initially, I thought this failed, but I realized I had only baked the particle and cloner simulations, not the rigid body simulation. Once I baked all the simulations, the render worked as expected.

During the feedback session with Emily in the last class, she suggested emphasizing the message about excessive phone use in scene 3. To incorporate this, I decided to change the poster in the window to display a message. Using the design from the original footage and references from online posters, I created a simple design. Since I planned to composite only a scrolling animation onto the original footage, I had to add glass and an inner room for the poster. This process involved trial and error due to the refraction and reflection on the glass, which required additional adjustments. Rendering took a long time, but I resolved the frame-skipping issue I experienced previously with help from the tech team. They identified that the issue occurred with specific render farm computers, so excluding them resolved the problem. Thanks to the render farm, I was able to render the footage quickly.

For scene 5, I downloaded animation files from Mixamo and learned how to blend them in Cinema 4D. Once I understood the process, it was fairly straightforward. Using several animations, I created a mixed animation for the human model. Afterward, I prepared text for the particle system. I’ll test it first, and if it’s too resource-intensive, I’ll explore alternative tools in C4D.
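The blending itself boils down to crossfading channel values between clips: during the transition, each channel is a weighted average of the outgoing and incoming animations. A toy sketch in plain Python (illustrative only; this is not C4D's Motion System API, and real joint rotations would blend with quaternions rather than raw angles):

```python
# Minimal sketch of crossfading two animation clips, the idea behind
# mixing Mixamo animations: lerp each channel by a blend weight.

def blend(pose_a, pose_b, weight):
    """Linearly interpolate two poses (lists of joint angles); weight in [0, 1]."""
    return [a * (1.0 - weight) + b * weight for a, b in zip(pose_a, pose_b)]

walk = [10.0, 45.0, -5.0]   # toy joint angles for one frame of a walk clip
idle = [0.0, 5.0, 0.0]      # toy joint angles for one frame of an idle clip

print(blend(walk, idle, 0.0))   # start of crossfade: pure walk
print(blend(walk, idle, 0.5))   # halfway: [5.0, 25.0, -2.5]
print(blend(walk, idle, 1.0))   # end of crossfade: pure idle
```

Ramping the weight from 0 to 1 over the overlap frames gives the smooth transition between clips.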
There are fewer than 10 days left. I hope everything goes well.