Erik Dunteman was selected as a Pioneer in December 2020 for his work on Booste, a Twilio for ML. In the last 7 months, he’s gone through Pioneer Camp, presented to Natalie Sandman in our February Demo Livestream, broken past $10k MRR, raised a $220k seed round and rebranded to Banana.

On the heels of a viral tweet mentioning his exceptional growth, Erik raised $1.6M, which we were thrilled to lead. A suite of other great investors joined the round, including founders of Zapier, Loom, Labelbox, Coder and more. Long journey ahead and an exciting one behind.

Erik's 1-minute application video in the Pioneer Tournament.

Where were you when you started working on Booste?
I started from a bunk bed in a hacker home in Mountain View in 2019. I was just coming off my first failed startup attempt, and doing a job hunt when I realized I had zero interest in being an employee.

How has it changed over time? What was the first MVP?
The first version was a Raspberry Pi set up to receive a remote desktop stream from a large cloud VM. The idea was to do something like Suhail Doshi’s Mighty, in that we’d host heavyweight apps in the cloud and stream them to cheaper devices, making compute more of a monthly “utility” than an upfront hardware cost. This kicked off my obsession with cloud compute as a construct.

From there, the pivots included:

  • Streaming remotely hosted Adobe Suite
  • Streaming remotely hosted VS Code
  • A command line interface to sync dev environments between team members
  • A command line interface that ran any command in the cloud rather than locally
  • A deployment tool for long-running batch jobs
  • Heroku for ML
  • Twilio for ML
  • GitHub for ML (at which point we rebranded to Banana)

What are the major milestones you’ve hit since becoming a Pioneer?
Well first of all, there were two moments in which we almost hit runway zero, so not dying is a neat milestone.

Beyond that:

  • We got our first MRR.
  • We broke $10k MRR.
  • We grew the team from 1 to 6.
  • We raised our first two successful funding rounds, $220k and $1.6M respectively.

Assess the company as it exists today. Where are you successful and where are you unsuccessful?
We’ve been most successful at making friends and cultivating goodwill. Our sales have all been inbound, from founders we’ve helped with ML over the months; when they eventually reached the point of wanting to host their own models, Banana was the obvious choice. Some of our customers even went on to invest in us.

But all-inbound is a double-edged sword. Inbound sales are incredible, but our next focus as we enter this seed-funded stage is to build out repeatable sales processes, content strategies, and community.

Having been the only engineer on the backend until this month, I’d barely managed to keep afloat with the inbound interest, so now our soon-to-onboard head of sales and I will be tackling that GTM as we work our way toward Series A.

Describe a specific upcoming feature you’re really excited to build.
We’re really excited about Forking.

We do a lot of manual work to white-glove customers into our backend, and we’re very excited to bring some GitHub-like features that allow for user-to-user collaboration. Forking will allow customers to fork the configuration and ML model from another user’s publicly published API, dramatically reducing the amount of per-customer work we need to do.
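At a high level, a fork just copies a published API’s model reference and serving config over to a new owner. A minimal sketch of that idea, with hypothetical names (`ModelAPI` and `fork` are our illustration, not Banana’s actual API):

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: a fork duplicates a publicly published API's
# model reference and serving config under a new owner, so no
# per-customer setup work has to be repeated.

@dataclass(frozen=True)
class ModelAPI:
    owner: str
    model_ref: str        # pointer to the published model weights
    config: dict          # serving configuration (GPU type, scaling, ...)
    public: bool = False

def fork(source: ModelAPI, new_owner: str) -> ModelAPI:
    if not source.public:
        raise PermissionError("can only fork publicly published APIs")
    # Copy everything; the fork starts private to its new owner.
    return replace(source, owner=new_owner, public=False)
```

The key property is that the fork inherits a working configuration wholesale, which is exactly the part that otherwise costs manual white-glove time.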

What are you doing today that doesn’t scale?
We manually set up machine learning models and their inference scripts for customers. Generally that means we receive a Google Colab notebook and rework it to fit the context of our serving engine. Each new model takes us about 16 hours of dev time, depending on how foreign the ML library is. Figuring out GPT-Neo was a two-week grind.
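The rework mostly amounts to splitting the notebook’s linear script into a one-time load step and a per-request handler the serving engine can call. A minimal sketch of that shape (the `init`/`inference` names and the stand-in model are our illustration, not the actual engine interface):

```python
# A Colab notebook typically mixes model loading and inference in one
# linear script; a serving engine needs them separated so the weights
# load once and are reused across many requests.

_model = None

def init():
    """One-time setup: in a real port, this is where torch.load or
    from_pretrained would run, pinning the model to a GPU."""
    global _model
    _model = lambda text: text.upper()  # stand-in for a real model

def inference(payload: dict) -> dict:
    """Per-request handler, called once for each API call."""
    if _model is None:
        init()
    return {"output": _model(payload["input"])}
```

The 16-hour estimate in practice goes into untangling which notebook cells belong in `init` versus `inference`, plus library-specific quirks.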

The upcoming forking feature saves our backs on this one.

When was your last time talking to a user? How’d it go? What’d you talk about?
We actually talked today. Our customers have us set up in shared Slack channels, so we’re treated almost as their own ML Ops team. Today we chatted about what it would look like to retrain on a new NLP model, as well as about how users would like to experience autoscaling.

How feasible is Twilio for ML? How close are you and what are the main technical hurdles?
This hasn’t been done before because it’s actively difficult to serve machine learning models at production scale. Especially if they’re billion+ parameter models like GPT-2, where a single model takes up an entire GPU’s worth of memory.

The current tooling in the machine learning space was built for research, and doesn’t account for the demands of production serving (high throughput, low latency, concurrent calls, self-healing, containerization). With those factors going unaddressed, production setups are quite expensive.
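To make the contrast concrete, one standard production technique that research tooling rarely provides is dynamic batching: briefly queueing concurrent calls and running them through the model as a single batch, amortizing the per-call GPU cost. A toy asyncio sketch of the idea (our illustration only, not Banana’s engine):

```python
import asyncio

BATCH_WINDOW = 0.01  # seconds to wait for more requests to arrive

async def batch_worker(queue: asyncio.Queue, model):
    """Drain the queue in small windows and run one batched forward pass."""
    while True:
        items = [await queue.get()]            # block until a request arrives
        await asyncio.sleep(BATCH_WINDOW)      # let concurrent calls pile up
        while not queue.empty():
            items.append(queue.get_nowait())
        outputs = model([x for x, _ in items]) # one call for the whole batch
        for (_, fut), out in zip(items, outputs):
            fut.set_result(out)                # wake each waiting caller

async def infer(queue: asyncio.Queue, x):
    """What each concurrent request does: enqueue its input, await its result."""
    fut = asyncio.get_running_loop().create_future()
    await queue.put((x, fut))
    return await fut

async def main():
    queue = asyncio.Queue()
    model = lambda batch: [x * 2 for x in batch]  # stand-in for a GPU model
    worker = asyncio.create_task(batch_worker(queue, model))
    results = await asyncio.gather(*(infer(queue, i) for i in range(4)))
    worker.cancel()
    return results
```

All four concurrent calls here hit the model in a single batched invocation; on a real GPU, that batching is a large part of what separates production economics from research-notebook economics.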

So at a high level, the biggest thing blocking hosted ML inference is unit economics. It’s incredibly expensive to host ML models on research-oriented tools frankensteined together. Our angle is to build a model-serving engine and a suite of compilers from the ground up.

One of our engineers is focusing specifically on building a layer that sits between ML frameworks like PyTorch and the hardware itself. We’ve already built an MVP demo that derisks the tech, and are continuing to build it out; when finished, it will let us run at anywhere from 50% down to 3.1% of current costs.

Our goal is to democratize the use of ML as building blocks in production applications, and it’s going to take deep tech work to get to a price point that’s accessible.

What companies do you look at as inspiration?
Replit is our biggest inspiration. They’ve created a community of young hackers who, despite how anarchic the Discord server appears, come together to learn how to code and launch incredible projects. Replit has removed the “it’s scary” barrier from writing code. All the while they ship like mad, and are building out the feature set underneath their customers so that a Replit-native developer never needs to leave Replit.

We want to be the same community and hosting platform for ML.

What will it take to be successful and how do you measure success?
I found my drive as a founder after going from couch to marathon over a summer. The genesis of the idea was “to do something that seemed legitimately impossible”. So for me success is defined by moments in which I can look back and think “wow, I didn’t think I could get here.” I already feel successful in having arrived where we are now. But looking forward, the next “impossible” goal is to hit $100m ARR and unicorn status. Beyond that, I imagine the next impossible goal is reaching the singularity by networking our ML models together to create a multi-lobe supermodel AGI. But we’ll focus on unicorn status for now.

What is something weird or unusual you’ve built or done?
I once had a college buddy inject my hand with an RFID chip on our dining room counter. I then rebuilt an ’80s motorcycle with an Arduino as the core of the electrical system, and coded it up so that my hand was the key that allowed the motorcycle to start.

The bike almost killed me multiple times due to bad soldering, so I generally stick to software now.

What influenced the decision to add a cofounder? How does that change the structure of your role and the organization as a whole?
I just this month added Kyle Morris as a cofounder. There’s a blog here about his credentials, but at the core of it, Kyle is a longtime friend whom I trust and deeply respect. We met in a hacker home two years ago and have since lived together across multiple countries working on our respective startups.

I’d always taken pride in being a solo founder. But in the process of raising the seed round, I realized how mature the vision had become, and how difficult it would be to tackle it alone. Kyle had always been at the top of my list of people I’d die to work with, and his skill sets and passions aligned perfectly with ours. It was a no-brainer to bring him on.

In the last month we’ve grown from two to six including myself, so I’m no longer the code monkey keeping things alive. With Kyle being the trusted lead on the backend, and moving faster than I ever could, I got to fire myself from that job. I’m turning into the janitor for the rest of the team, picking up odds and ends. Sales calls to feed the funnel. Dev Ops tooling to keep our engineers productive. Negotiations with new hires. Accidentally taking prod down with some DNS changes. Whatever’s needed.

How has Pioneer been most helpful to you?
We almost got acquired back in February, when I was a solo team with one month of runway left. I showed up in a panic to office hours with Daniel Gross, wanting to take the deal but being afraid to give up too soon on my dreams. Daniel looked at me and said “follow your heart and not your mind”.

Now we’re halfway to our Series A and just closed our 5th hire. Sometimes the most helpful thing is to give someone permission to be human and make illogical decisions.

Erik's first progress update in the Tournament, 11/2/2020.

What is your advice to others who were in your position, tinkering on a project in the tournament?
When I did the tournament back in 2020, during feedback sessions I found myself giving the same advice over and over again. I’d actually made a cheat sheet that I’d drop into my feedback messages with the bitly link https://bit.ly/some-elaboration, often pointing founders to specific points they should read.

Now that I’m out of the tournament and have raised this seed, I’d still say the same points. But I’d especially lean into Point 4: “Build cool shit and the right people will find you.” I attribute the vast majority of our traction and fundraising success to building in public on Twitter. Having a clear area of interest and a platform of discoverability (Twitter, Pioneer) is an incredible way to be found by other leverageable nerds in the space.

What is the next major milestone?
$1m ARR.


Working on a project? Join the Tournament.