Decentralised Power
Drag, drop, render: thousands of GPUs without packing, syncing, or file-management pain. How Render Network completely replaced our in-house render farm.

The Problem
You've finished the shot. The lookdev is done, the lighting is locked, the scatter is exactly what you wanted. Now you need to render it, not on your workstation, but at scale. Hundreds of frames. Distributed. Fast.
And here's where most pipelines quietly fall apart.
Asset collection. Path remapping. Dependency tracking. Plugin version matching. The ritual of checking whether the farm will actually find everything your scene needs, in the right place, with the right software, before you submit and walk away. If you've run a render farm, or worked with one, you know the specific dread of checking your inbox the next morning. Half the frames failed. Missing texture. Wrong path. An asset that was on your local drive and never made it to the farm. The render cost was real. The frames weren't.
This is not a solved problem in the traditional pipeline. Studios build tooling around it. They hire people to manage it. They accept it as a cost of doing business. With Arnold or Karma, packaging a complex scene for distributed rendering is a process, sometimes a long and painful one, that sits between finishing the work and actually rendering it.
The assembly pipeline removes that process entirely. Not streamlines it. Removes it.
Here's why.
The OCS scene is already a manifest
Think about what an OCS file actually is: a structured list of every asset the scene needs, with the exact path to where each one lives. That's it. That's the whole file.
Now think about what a render farm needs before it can render your scene: a list of every asset the scene needs, with the exact path to where each one lives.
You've already done the work. The OCS is the manifest. There's no separate collection step because the collection has been implicit in your working format from the beginning. Every time you saved your scene, you were maintaining a precise, human-readable record of its dependencies. The farm doesn't need you to package anything. It just needs to read the file you already have.
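As a sketch of why this works, treating the scene file as its own manifest makes the farm-readiness check almost trivial. The snippet below is an illustration only: it assumes a hypothetical line-based format where references appear as `fileName = "..."` entries. The real OCS format is richer than this, but the principle is the same: parse the references, check the paths.

```python
from pathlib import Path

def read_manifest(scene_path):
    """Collect the asset paths a scene file references.

    Illustrative sketch: assumes a simple text format with lines like
        fileName = "/assets/trees/oak_01.orbx"
    The actual OCS structure is more elaborate, but it remains a
    human-readable list of references like this.
    """
    assets = []
    for line in Path(scene_path).read_text().splitlines():
        if "fileName" in line and "=" in line:
            value = line.split("=", 1)[1].strip().strip('";')
            assets.append(value)
    return assets

def missing_assets(scene_path):
    # The "will the farm find everything?" check reduces to:
    # does every referenced path resolve?
    return [p for p in read_manifest(scene_path) if not Path(p).exists()]
```

There is no separate collection step here because the scene file and the dependency list are the same artifact.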
This is the structural advantage of the assembly pipeline, and it's not an accident. It's what falls out naturally when you work in a reference-based architecture. Your scene is already organised the way a render farm wants it.
Drag, drop, done.
In practice, the workflow looks like this.
You take your OCS file (a few kilobytes of plain text) and drag it into Render Network Manager.
Manager reads the OCS, identifies every asset it references, and checks each one against what's already on the network. This check happens via a hash system: every file on the network has a unique identifier derived from its contents. If even a single byte in the file has changed, the hash changes. If the hash matches, the asset is already there, and there's no need to upload it again.
On your first submission, only genuinely new assets upload. Everything your scene needs that doesn't already exist on the network gets sent; everything that already exists is skipped.
On every subsequent submission, only what changed uploads. You tweaked the lighting and adjusted the camera. Those two things upload. Your entire library of trees, rocks, and terrain assets, the same ones from last time, doesn't move. It's already there.
And the OCS file itself? It's a few kilobytes. It transfers instantly.
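The upload decision described above can be sketched in a few lines. SHA-256 is used here as a stand-in; the chapter doesn't specify which hash algorithm Render Network actually uses, and `plan_upload` is a hypothetical helper, not part of any real API.

```python
import hashlib

def content_hash(path, chunk=1 << 20):
    """Derive an identifier from a file's contents, read in chunks.
    Change a single byte and the hash changes, so identity is
    decided by content, not by filename or path."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def plan_upload(asset_paths, known_hashes):
    """Return only the assets whose content hashes aren't already
    on the network; everything else is a no-op transfer."""
    return [p for p in asset_paths if content_hash(p) not in known_hashes]
```

A resubmission after a lighting tweak would hash every referenced asset, find all but the changed files already present, and transfer only the difference.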
What this means in practice
The first time you experience a fast, clean farm submission (no collection, no failures, no path debugging), it feels slightly unreal.
You've been conditioned to expect friction. It isn't there.
But the deeper value shows up over time, and it compounds.
Within a shot: Change your lighting pass. Submit again. Only the lighting assets upload. The render is on the farm within minutes of you finishing the adjustment.
Across a project: Multiple shots share the same tree libraries, the same rock collections, the same shader sets. Each new shot submission checks what's already on the network and skips everything that's already there. A project that took hours to upload the first time takes minutes on the second shot, because most of it is already cached.
Across a team: This is where the architecture becomes genuinely powerful. The deduplication isn't per-user; it's network-wide. If your colleague submitted their shot yesterday and it included the same Megascan library you're both referencing, Render Network already has it. You don't upload it again. The whole team shares the same cache. A library that one person submitted on Monday is available to everyone on Tuesday without anyone having to do anything.
This is not how traditional render farms work. On a traditional farm, assets travel from your machine to the farm nodes, and the efficiency of that transfer depends entirely on how well you've packaged your scene. With Render Network and OCS, the efficiency is structural: it comes from the way the scene is built, not from any extra work you do at submission time.
Replacing Deadline
If you've worked in a larger studio, you know Deadline, or Qube, or Royal Render, or whichever job management system your facility runs. These tools exist to solve the problem of coordinating distributed rendering: tracking jobs, managing dependencies, routing assets to the right nodes, handling failures, retrying tasks.
They're sophisticated pieces of infrastructure. They're also significant operational overhead. Someone has to configure them, maintain them, debug them, and manage the asset collection pipelines that feed them.
Render Network Manager, in the context of this pipeline, replaces the asset management layer of that whole system. Not all of it: job tracking and priority queuing still exist in the Manager's own interface. But the painful part, the part that produces failed renders and burned budgets, is gone. The OCS file is the dependency manifest. The hash system handles what needs to transfer. The Manager coordinates the rest.
For independent artists and small studios, this is the difference between distributed rendering being viable or not. You don't need a pipeline TD to set up and babysit your farm submission. You drag the file in, you check that the assets are confirmed, you submit.
For larger teams, it's a meaningful reduction in infrastructure complexity. Fewer moving parts means fewer things to go wrong.
Hardware independence
There's an aspect of this that isn't purely technical, but it matters enormously in practice.
Ninety percent of the work on Meridian Forest, a scene with a groomed creature, a river simulation, hundreds of thousands of animated tree instances, and a forest dense enough that it had been impossible to render in the earlier pipeline, was completed on a MacBook.
Not a workstation. Not a machine with a rack of GPUs behind it. A MacBook, wherever I happened to be working.
The assembly pipeline makes this possible because the work happening on your local machine is assembly and lookdev, lightweight by design. The OCS file and the libraries are small. Standalone is responsive even on modest hardware because it's working with references and pre-built assets, not trying to resolve a fully loaded production scene in real time.
The heavy work, the actual rendering, happens on Render Network. And Render Network is accessible from anywhere, from any machine, as long as you can upload an OCS file and a set of assets.
This decouples your creative capability from your hardware. A small team with good laptops and a Render Network account can produce work that was previously gated behind a capital investment in local render farm infrastructure. That's not a minor convenience. It's a fundamental change in who can make what.
What Render Network is not
One thing worth being direct about: Render Network, due to its decentralised architecture, cannot obtain TPN (Trusted Partner Network) certification. TPN certification is required for pre-release work for major studios (Marvel, Netflix originals, and so on). If your pipeline relies on handling that category of content, this is a real constraint.
What the pipeline serves well is everything else: independent film, episodic TV, commercials, game cinematics, archviz, and any production where the render cost and turnaround time are meaningful business problems. That's a substantial portion of the industry.
For the work this pipeline is designed for, the decentralised model isn't a compromise. It's the point. The nodes are distributed, the pricing reflects that, and the result is GPU rendering at a cost and speed that centralised infrastructure can't match.
The pipeline itself is the competitive advantage
Everything in this chapter flows from one structural fact: the assembly pipeline produces inherently clean, well-referenced projects. Not because you have to do extra work to keep them clean. Because the architecture makes mess structurally difficult.
Your assets are in libraries. Your scene is a description that points to them. There are no dangling references, no local-only textures buried in a project folder, no "it works on my machine" ambiguity. The project is, by construction, ready to hand off.
That cleanliness is what makes the Render Network workflow feel effortless. It's not that the farm submission tooling is magical, it's that the scene you're submitting is already organised correctly. The farm just reads what you've already written.
Build the pipeline right, and distributed rendering stops being a problem you solve at the end of the project. It becomes something that was always going to work.
This research and the development of the Assembly Pipeline and its related tools were made possible through the support of Render Foundation.