Edge AI Revolution: Building Private Enterprise Automations with Knapsack’s Mark Heynen

Mark Heynen is the Co-founder and Chief Product Officer at Knapsack, where he’s building private AI automations for enterprise use. A seasoned entrepreneur and technology executive, Mark has founded five companies and held key positions at tech giants including Google and Meta (formerly Facebook). His career spans from pioneering online pricing analytics in London to expanding mobile technology access in emerging markets.

Episode Highlights:

[00:00-03:21] From Startup to Big Tech: Heynen’s Journey

[03:21-06:36] Knapsack’s Three Pillars for Enterprise AI

[06:36-10:00] Edge Computing Transforms Small Language Models

[10:00-15:40] AI Applications Across Industry Sectors

[15:40-20:05] AI Automation Reshapes Future of Work

[20:05-23:23] Transforming Professional Work Through AI

Episode Links:

Knapsack: https://www.knapsack.ai/

Mark Heynen’s LinkedIn: https://www.linkedin.com/in/markheynen/

Mark Heynen’s Twitter: http://x.com/markheynen

PODCAST INFO:

Podcast website: https://www.humainpodcast.com

Apple Podcasts: https://apple.co/4cCF6PZ

Spotify: https://spoti.fi/2SsKHzg

RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9

Full episodes playlist: https://www.humainpodcast.com/episodes/

SOCIAL:

– Twitter: https://x.com/dyakobovitch

– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/

– Events: https://lu.ma/tpn

– Newsletter: https://bit.ly/3XbGZyy

Transcript:

David: Welcome back to the HumAIn Podcast, your Tech Insider podcast on the data economy. From the physical chips in your smartphone to the software powering GPT models, we live in a data-first world. HumAIn interviews the founders, investors, executives, and tech leaders creating the world we live in for consumers and enterprises.

Today’s episode features Mark Heynen, co-founder and chief product officer of Knapsack, in partnership with our AI Realized Summit.

David: Mark, you’ve had a fantastic career spanning big tech corporations like Google, Meta, and other incredible companies. Can you walk us through your journey through startups and big tech that led you to co-founding Knapsack?

Mark: I started my first company in my 20s while living in London and working for Kingfisher. Between 1999 and 2002, I identified an opportunity to create an online version of Nielsen for pricing data. We launched the company, secured venture backing, and sold it in 2006. The experience was exciting but grueling as a first-time founder dealing with venture funds and B2B customer acquisition.

I joined Google in 2006 to learn how organizations scale sustainably. During that time, I became interested in the adoption of frontier technologies. At Google and later Facebook, I focused on expanding tools into emerging markets.

After Facebook, I started several companies, including PayJoy in 2015 with two friends. We focused on smartphone adoption in emerging markets, developing pay-as-you-go solutions. That’s where I met Cooper, my Knapsack co-founder. In 2022, we teamed up again to tackle AI as a new frontier technology, looking at workplace AI challenges. We realized we could create a new architecture to help people use AI with their enterprise data, and launched the company in 2023.

David: Knapsack is described as offering instant private workflow automations. Can you break down for our listeners what this means and how it differs from other AI solutions in the market?

Mark: There are three key elements. First, “instant” means you can use AI at work immediately without extensive coordination with your CTO or worrying about violating internal policies.

Second, “private” means we enable AI use with your data by bringing the AI to the data, rather than the other way around. Many current solutions require uploading data to a cloud. For healthcare or finance sectors dealing with PII or sensitive commercial information, uploading to a new AI cloud isn’t comfortable or feasible. We solved this by allowing users to download the AI to their laptop or company server, enabling private analysis of data without leakage concerns.

Third, we realized there’s no limit to how often you can run operations on your own computer. Unlike cloud solutions such as ChatGPT, which impose usage limits, you can run automations continuously throughout the day. This increases AI’s utility for knowledge workers, who could potentially run automations 100 times a day or more.
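Knapsack hasn’t published its internals, but the local-inference pattern Mark describes can be illustrated with a minimal Python sketch: prompt a model served on the user’s own machine, here via Ollama’s HTTP API as an assumed stand-in, with the model name as a placeholder. Neither the prompt nor the data ever leaves the laptop.

```python
# Minimal local-inference sketch: the prompt and the data never leave
# this machine. Assumes an Ollama server (https://ollama.com) is running
# on localhost with a small model already pulled; "llama3.2" is a placeholder.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to a locally served model and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # No per-call fees or usage caps: this can run as often as the laptop allows.
    print(ask_local_model("Summarize: Q3 revenue grew 12%; churn rose to 4%."))
```

Because inference happens locally, the only cost of calling this in a loop all day is the laptop’s own compute, which is the “no usage limits” point above.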

Mark: To give an example, our flagship automation is meeting preparation. We all have meetings and want to be well prepared. For high-stakes meetings, that preparation often takes significant time; at Morgan Stanley, dedicated staff members do this work manually for advisors with high-net-worth clients. Our automation saves that time by preparing you automatically, pulling in enterprise data from Google Drive, the local desktop, and other sources.

That’s just the beginning. Another automation we’re exploring analyzes SQL databases of customer or financial data for anomalies. A CEO friend recently mentioned that he doesn’t know what he doesn’t know: he wants unusual patterns in his customer data surfaced without having to anticipate them and build dashboards for each one. An LLM can analyze this data automatically.
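As one hedged illustration of this idea (not Knapsack’s implementation), the sketch below scans a hypothetical local SQLite table for statistical outliers and hands the flagged rows to the locally served model from the previous sketch for a plain-English explanation. The database, table, and column names are invented.

```python
# Sketch: scan a local SQL table for anomalous rows, then ask a local
# model to explain them. Table and column names are hypothetical;
# ask_local_model is the helper from the sketch above.
import sqlite3
import statistics

def find_outliers(db_path: str = "customers.db") -> list[tuple]:
    """Flag daily order totals more than 3 standard deviations from the mean."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT day, SUM(amount) FROM orders GROUP BY day ORDER BY day"
    ).fetchall()
    conn.close()
    totals = [total for _, total in rows]
    mean, stdev = statistics.mean(totals), statistics.stdev(totals)
    return [(day, total) for day, total in rows if abs(total - mean) > 3 * stdev]

outliers = find_outliers()
if outliers:
    # The raw rows stay on this machine; only the local model sees them.
    print(ask_local_model(
        "These daily order totals look unusual. Suggest plausible causes:\n"
        + "\n".join(f"{day}: {total}" for day, total in outliers)
    ))
```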

Another example involves healthcare. Clinical notes stored in SQL databases are unstructured, which makes AI analysis complicated but possible. However, sharing sensitive medical data with external clouds is expensive and complex; a Microsoft Azure OpenAI Service agreement typically costs a million dollars annually. With our tool, a doctor can instantly write an automation to check whether any patients have had symptoms similar to their current patient’s, all while keeping the data private and secure.
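One way such an automation could work, sketched under assumptions (a hypothetical clinical_notes table; the open all-MiniLM-L6-v2 embedding model, which downloads once and then runs entirely on-device), is a local similarity search over note embeddings:

```python
# Sketch: find past patients whose notes resemble the current patient's
# symptoms, using an embedding model that runs entirely on-device.
# The ehr.db database and clinical_notes table are hypothetical.
import sqlite3
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs locally after one download

conn = sqlite3.connect("ehr.db")
notes = conn.execute("SELECT patient_id, note_text FROM clinical_notes").fetchall()
conn.close()

query = "persistent dry cough, low-grade fever, fatigue for two weeks"
vectors = model.encode([query] + [text for _, text in notes], normalize_embeddings=True)
scores = vectors[1:] @ vectors[0]  # cosine similarity (vectors are unit-normalized)

for idx in np.argsort(scores)[::-1][:5]:  # five most similar past notes
    print(notes[idx][0], round(float(scores[idx]), 3))
```

No note text or patient identifier ever crosses the network, which is the privacy property the example turns on.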

David: Let’s talk about running AI models locally versus in the cloud. What’s your perspective on the direction of models, especially regarding small language models (SLMs) on the edge?

Mark: I’m really bullish about developments in this space. Just last year, we saw Meta release MobileLLM, a one-billion-parameter model with significant capabilities. Microsoft released open-source models under 10 billion parameters that are nonetheless very effective. Google’s recent Gemma 2 can run on a MacBook M3 with performance characteristics approaching GPT-4o’s.

That quality of output is becoming more accessible as compute costs decrease dramatically. McKinsey analysis shows edge AI will reduce cloud costs by about 40%, while Forrester predicts edge AI will cut cloud spending by up to 50% in healthcare.

I believe the future will feature small language models for different purposes, with orchestration models deciding which SLM is most suitable for specific prompts. With open source possibilities, we’ll see innovation in custom small language models being orchestrated together into a web of models running locally on different devices.
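A toy version of that orchestration idea, with placeholder model names and a deliberately simple keyword heuristic standing in for what, as Mark suggests, would more likely be a small orchestration model itself:

```python
# Toy orchestration sketch: route each prompt to the local SLM best
# suited for it. Model names are hypothetical placeholders; a real
# router might be a small classifier model rather than keyword rules.
ROUTES = {
    "sql": "sqlcoder",        # hypothetical SQL-specialized SLM
    "code": "codegemma",      # hypothetical code-specialized SLM
    "general": "llama3.2",    # hypothetical general-purpose fallback
}

def route(prompt: str) -> str:
    """Pick a model id for the prompt with a simple keyword heuristic."""
    text = prompt.lower()
    if any(kw in text for kw in ("select ", "join ", "group by")):
        return ROUTES["sql"]
    if any(kw in text for kw in ("function", "refactor", "bug", "python")):
        return ROUTES["code"]
    return ROUTES["general"]

prompt = "SELECT day, SUM(amount) FROM orders GROUP BY day"
print(route(prompt))  # -> sqlcoder
# Combined with ask_local_model from the first sketch:
# print(ask_local_model(prompt, model=route(prompt)))
```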

David: Congratulations on your recent pre-seed round. How do you plan to use this funding to revolutionize industries like finance and healthcare with your AI automations?

Mark: Our core mission is making people more productive through automation. We want to build the largest library of automations and enable as many people as possible to use them frequently. We expect 90% of people won’t create automations but will use others’ automations – that’s typical for user-generated content communities, as I saw at Facebook (now Meta).

We want users to get value within minutes and start collaborating with colleagues. Our vision is having these automations running continuously for end users, making them significantly more productive. Eventually, this could enable four-day work weeks. While AI automation will bring changes requiring adjustment, there’s still room for people’s comparative advantages – focusing more time on what they do well and less on what they don’t want to do.