
Behind ShapeDiver: How We Scale Grasshopper For Cloud Applications!

March 9, 2019 by Alex Schiftner

Hi, I am Alex, CTO and co-founder of ShapeDiver. In this blog post I will provide you with some insights into the core of ShapeDiver: how we got started and how we managed to scale Grasshopper for cloud applications.

But First… Story Time: How ShapeDiver Came To Be


It was a mild summer evening in 2015, and Mathias and I were pondering over some beers about 3D printing platforms and how they were (and still are!) missing great customization features. Suddenly, in one of those rare moments in life when the same idea pops up in two disconnected brains at the same time, we looked each other in the eye and said: “But what if? Yeah… what if we took Grasshopper to the cloud!?!”

From left to right: Mathias Höbinger, Mathieu Huard, Alex Schiftner.

Mathias, Mathieu and I were colleagues at the time, and had gained a lot of experience in parametric 3D modeling and optimization for geometrically complex architectural projects. We hadn’t been using Grasshopper professionally until then, but could see a huge potential for cloud applications based on it.

Taming The Beast: Scaling Grasshopper

“Most people overestimate what they can do in one year and underestimate what they can do in ten years.”

Whoever said that was completely right…

Half a year later (and with our company fully set up), ShapeDiver was taking, ehm… shape. In the previous months we had been working on a Proof of Concept (POC) based on a WebGL viewer connecting directly to a single instance of Grasshopper. The POC was perfect for convincing others of the idea and applying for some funding, but other than that it was useless. We had to take the next big step and implement the core of ShapeDiver: Grasshopper parallelized in the cloud.


ShapeDiver is powered by AWS.

We chose to use AWS as our infrastructure provider, though we were still greenhorns in using it back then. What a plethora of new concepts to understand and documentation to read! Luckily, I had brought experience in systems architecture and in what is nowadays called DevOps from my first job, when I was working for Europe’s first provider of taxi fleet management systems. Keep on reading to discover the principles of the ShapeDiver backend system we developed, and how that relates to dispatching rides to taxis.

ShapeDiver’s Three Main Principles

While we knew at the time that a solution like ours would appeal to potentially tens of thousands of Rhino and Grasshopper users, we didn’t want to come out with a half-baked solution. We wanted ShapeDiver to be synonymous with security and reliability. Therefore we chose these as two of the three main design principles for our backend:

  • Design Principle #1: Security
    We knew that the Grasshopper definitions of our users (and clients) would contain their core intellectual property. Nobody wants their IP to get copied, lost or stolen. We wanted them to trust us so much that they would be willing to upload their definitions to our cloud application. It was clear to us that the security of our users’ data was our top priority.

  • Design Principle #2: Reliability
    Offering a service that would reliably work 24/7 was a must. We wanted our users to rest assured that no matter the time of day, ShapeDiver would simply work. Always.


  • Design Principle #3: Scalability
    We knew one of the many potential use cases for Grasshopper in the cloud was to power online 3D configurators. From startups to big corporate businesses, we wanted to provide a service that could grow with them in a sustainable way.

Grasshopper In The Cloud, At Scale

If you are already a Grasshopper user, you might wonder how it is possible to serve potentially very high numbers of concurrent end users, all configuring products in their browser using our embedded viewer at the same time, without running a separate instance of Rhino + Grasshopper for each one of them. Answers to this question can be found in queueing theory.

Queueing Theory: An analogy often used is that of the cashier at a supermarket.
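To make the supermarket analogy a bit more quantitative (the numbers below are made up purely for illustration): if configurator requests arrive at an average rate λ and one Grasshopper worker needs 1/μ seconds per computation on average, then a pool of c workers can keep up as long as the utilization stays below one, i.e. ρ = λ / (c·μ) < 1. For example, 10 requests per second with an average computation time of 0.5 s means each worker handles μ = 2 requests per second, so roughly c = 5 busy workers suffice on average, with a few extra workers to absorb bursts. This is why a small pool of workers can serve a much larger number of concurrent end users.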

Essentially it’s all about decoupling the end users’ requests from the workers who serve them, and adding a dispatching algorithm in between them. It’s similar to how rides are dispatched to taxis, just much faster. In practice we do this using a high-speed shared memory caching system.
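To make the decoupling idea concrete, here is a minimal sketch in Python. It uses a simple in-process queue and a small pool of worker threads instead of the shared memory caching system we actually run; all names and numbers are made up for illustration only:

```python
import queue
import threading
import time
import uuid

# Hypothetical illustration: requests from many end users go into one queue,
# and a small pool of long-running "Grasshopper workers" picks them up.
# In production this queue would live in a shared, high-speed cache that
# all workers can reach, not inside a single Python process.

requests = queue.Queue()   # incoming compute requests
results = {}               # request id -> computed solution

def grasshopper_worker(worker_id: int) -> None:
    """A long-running worker: take one request at a time, compute, store the result."""
    while True:
        request = requests.get()
        if request is None:    # shutdown signal
            break
        request_id, parameters = request
        time.sleep(0.1)        # stand-in for an actual Grasshopper computation
        results[request_id] = {"worker": worker_id, "solution_for": parameters}
        requests.task_done()

# A handful of workers serves many more end users than there are workers.
workers = [threading.Thread(target=grasshopper_worker, args=(i,), daemon=True)
           for i in range(3)]
for w in workers:
    w.start()

# Simulate 20 concurrent end users submitting parameter configurations.
for user in range(20):
    requests.put((str(uuid.uuid4()), {"user": user, "length": 100 + user}))

requests.join()   # wait until every request has been dispatched and computed
print(f"{len(results)} solutions computed by {len(workers)} workers")
```

The point is that the number of workers stays small and fixed, while the number of end users submitting requests can be much larger; the dispatcher (here, simply the queue) sits between the two.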

Dispatching Taxis And Grasshoppers


Our design principles come with some implications, which you might have experienced already if you are a ShapeDiver user:

  • Computation Time Limit
    We impose a strict time limit on the computations taking place in Grasshopper: 5 seconds for Free accounts and 10 seconds for PRO accounts (a rough sketch of the idea follows after this list). This allows us to ensure the reliability and availability of our backend. It also pushes our users to optimize the performance of their Grasshopper definitions, which in turn provides a better UX to their end users. You don’t want to wait forever for your taxi, do you?

  • Priority Of Serving Requests
    Our dispatching algorithm gives higher priority and more computation time to computations of paying clients, to ensure the backend is performing at its best for them. Limousine rides come at an extra cost.

  • Script Checking
    We support all the scripting options available in Grasshopper definitions (C#, Python, VB). Therefore, we need to review these scripts before newly uploaded Grasshopper definitions get accepted by our backend. This allows us to avoid the security implications that scripts could otherwise cause. Taxi drivers have to protect themselves from violent passengers.

  • Limited Amount of Plugins
    We carefully review which plugins to support on our backend. Mainly, they have to make sense for our core user base, and they have to be stable when running on ShapeDiver. You can’t take a Kangaroo for a taxi ride, can you?
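As for the computation time limit mentioned above, the general idea can be sketched like this (a hypothetical illustration only, not our actual implementation): run each computation in a separate process and terminate it if it does not finish within the account’s limit.

```python
import multiprocessing
import time

# Hypothetical sketch: run a computation of unknown duration in a separate
# process and terminate it if it exceeds the account's time limit.
# Names and structure are made up for illustration; only the limits
# (5 s Free / 10 s PRO) come from the text above.

TIME_LIMITS = {"free": 5.0, "pro": 10.0}

def compute_solution(parameters, result_queue):
    """Stand-in for a Grasshopper solve; puts its result on the queue."""
    time.sleep(parameters.get("simulated_duration", 1.0))
    result_queue.put({"ok": True, "parameters": parameters})

def run_with_time_limit(parameters, account_type="free"):
    limit = TIME_LIMITS[account_type]
    result_queue = multiprocessing.Queue()
    process = multiprocessing.Process(target=compute_solution, args=(parameters, result_queue))
    process.start()
    process.join(timeout=limit)
    if process.is_alive():
        process.terminate()   # computation exceeded the limit: stop it
        process.join()
        return {"ok": False, "error": f"computation exceeded {limit} s limit"}
    return result_queue.get()

if __name__ == "__main__":
    print(run_with_time_limit({"simulated_duration": 2.0}, "free"))   # finishes in time
    print(run_with_time_limit({"simulated_duration": 7.0}, "free"))   # stopped after 5 s
```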

Side note: All the limitations listed above can be lifted for our Enterprise accounts, to whom we offer dedicated ShapeDiver backend systems that are operated independently.

One Last Thing: Smart Caching System

More can be done than just cleverly dispatching compute requests to Grasshoppers:

Imagine you are a tourist hopping on and off taxis to visit sights in a city. You take pictures of all the places you have been, so you can look them up later and show them to your friends and family (or clients), right?

That’s exactly what we are doing with solutions that have previously been computed by ShapeDiver. We store them in a smart caching system, so the next time these solutions (hint: product configurations) are requested by anyone else using your configurator, we can serve them immediately.

Even better, we use the graph of the Grasshopper definition to dissect the model into independent parts. So after a little warm-up of your model’s cache, the most popular versions can be served almost instantly, resulting in a more pleasant UX for your end users.
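As a rough sketch of the caching idea (the keying scheme and all names below are hypothetical, not our actual implementation), a computed solution can be stored under a deterministic hash of the exact parameter configuration that produced it, so the same configuration requested by any end user is served straight from the cache:

```python
import hashlib
import json

# Hypothetical sketch of caching previously computed solutions.
# The cache key is derived from the exact parameter configuration, so the
# same configuration requested by any end user is served from the cache.

cache = {}  # in production this would be a shared cache, not a local dict

def cache_key(parameters: dict) -> str:
    """Deterministic key for a parameter configuration."""
    canonical = json.dumps(parameters, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def get_solution(parameters: dict, compute) -> dict:
    """Return the cached solution for this configuration, computing it only once."""
    key = cache_key(parameters)
    if key not in cache:
        cache[key] = compute(parameters)   # expensive Grasshopper computation
    return cache[key]

# Usage: the second identical configuration is served instantly from the cache.
expensive = lambda p: {"mesh": f"computed for {p}"}
print(get_solution({"length": 120, "width": 45}, expensive))
print(get_solution({"width": 45, "length": 120}, expensive))  # same key, cached
```

The same keying idea can be applied per independent part of the Grasshopper graph, which is what makes the warm-up effect described above possible.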

What’s Next: To Infinity And Beyond.

Although we started ShapeDiver about one and a half years before Rhino 6 was released (meaning our first backend system was running on Rhino 5), nowadays our clients can choose to have their models operated on either Rhino 5 or Rhino 6. We operate shared ShapeDiver backend systems for our PRO clients in Europe and the US, with plans to expand to Asia very soon. For our growing number of Enterprise clients we run dedicated systems at the AWS locations of their choice.

ShapeDiver PRO Systems ready in the US and Europe. Asia coming soon.

A few months back, McNeel announced that they are working on Rhino Inside™, which makes it possible to run Rhino headless inside 64-bit applications on Windows. One of the first examples of using Rhino Inside was the open source project called Rhino Compute, a REST API exposing the core functionality of Rhino (RhinoCommon). RestHopper is another example along these lines, which reminded me of the POC we had been implementing back in 2015.

How Does This Relate To ShapeDiver?

Most importantly, it shows us that we are on the right track. There’s a need for this type of solution, and we are thrilled to be part of this community. We have spent the last 3 years perfecting how to use Grasshopper in parallel for cloud applications at scale, and are constantly growing the ecosystem of tools and interfaces around it.

Secondly, Rhino Inside™ is making our life easier. Being able to rely on McNeel’s excellent developer support for running Rhino 7 headless lets us sleep well. Rest assured that we will support Rhino 7 on our backend systems as soon as it is released!


Curious to know more? Subscribe to our Newsletter so you don’t miss our most important updates, like new releases, plugin updates and more.
