If you could stand over the shoulder of everyone onboarding to your platform for the first time, what would you wish you could say to them?

Bryce Vernon (Zapier): I’d probably ask them if they’re clear about their own work, because anytime you set up an agent or any automated process, it’s like looking in a mirror: you realize how little you know about your own work, or how unclear you are about your own processes and what’s important. It’s really about delegation. Managers have a leg up in this world, because they can be clear about expectations, clear about what they want, and they’re comfortable giving autonomy. It’s more of a philosophy to adopt before jumping in, so you don’t get into the weeds and realize you’ve spent 5 hours with nothing practical to show for it.

What’s the difference between AI automation vs. AI workflow vs. AI agent?

Jacob Bank (Relay): There are two categories of how we’re using AI in our daily lives. First, tools where we go to a chat box and type something - like ChatGPT or Claude. Second, things happening automatically behind the scenes where the AI is automatically invoked, reading data and taking action. Within those automatic processes:
  • AI automation is typically a one-shot burst of AI intelligence to solve a specific problem, like “given this transcript, summarize it.”
  • AI workflow chains many AI automations together - like “take the transcript, generate a LinkedIn post, then generate a tweet.” But it’s still very deterministic and structured.
  • AI agent differs in that you give it a goal and tools it can call, but then it uses reasoning at runtime to decide which tools to call and how many times.
My hot take is that very few of the use cases I find valuable in product management work actually require runtime agentic reasoning. Sometimes it’s wasting tokens and creating more unreliability than you need.

Flo Crivello (Lindy): I always revert to Harrison Chase’s definition: an agent is a piece of software that has at least part of its control flow defined by an LLM. The more of that control flow is defined by an LLM, the more “agent-like” it is. Agency is a spectrum, not a binary thing. I tend to use AI agents whenever my automation is thrown into very uncertain conditions - like meeting scheduling, where a human can tell you anything.
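These three patterns map cleanly onto code. Below is a minimal Python sketch under stated assumptions: `llm()` is a hypothetical stand-in for any chat-completion call (stubbed here so the example runs without an API key), and the tool names are illustrative, not any vendor’s actual API.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; stubbed for illustration."""
    return f"<model output for: {prompt[:40]}...>"

# 1. AI automation: a one-shot burst of intelligence for one specific problem.
def summarize(transcript: str) -> str:
    return llm(f"Summarize this transcript:\n{transcript}")

# 2. AI workflow: automations chained in a fixed, deterministic order.
def transcript_to_posts(transcript: str) -> dict:
    summary = summarize(transcript)
    return {
        "linkedin": llm(f"Write a LinkedIn post from:\n{summary}"),
        "tweet": llm(f"Write a tweet from:\n{summary}"),
    }

# 3. AI agent: the model owns part of the control flow, deciding at runtime
#    which tool to call, how many times, and when to stop.
TOOLS = {
    "summarize": summarize,
    "draft_post": lambda text: llm(f"Draft a post from:\n{text}"),
}

def agent(goal: str, max_steps: int = 5) -> str:
    state = goal
    for _ in range(max_steps):
        choice = llm(f"Goal: {goal}\nState: {state}\n"
                     f"Pick one tool from {list(TOOLS)} or say DONE.")
        if "DONE" in choice:
            break
        tool = TOOLS.get(choice.strip(), summarize)  # fall back if unparseable
        state = tool(state)
    return state
```

Note how the workflow’s structure is fixed at design time, while the agent’s structure emerges at runtime - which is exactly where Flo’s “spectrum of agency” and Jacob’s token-cost concern both come from.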

What’s a good way for product managers to think about where to start with their first AI automation?

Flo (Lindy): If you find yourself doing something more than once, it probably should be done by an agent. Once you’re good at creating agents, it really takes like 2 or 3 minutes. So if you find yourself doing something more than once that takes more than 2 or 3 minutes, have an agent do it for you. Small use cases compound, and by the end of the day you’re so much more productive.

Jacob (Relay): First, what are AI models really good at today? They’re really good at extracting structured information, summarizing, synthesizing across multimodal information, and drafting content. Second, what do I need help with? If a really bright college intern showed up at my desk tomorrow and said “I’ll do whatever you want,” what would I ask them to do first? Those two things combined give me 25 or 30 ideas right there.

Bryce (Zapier): The question I would ask people is: if you had $50,000 to hire someone to do some work for you, what would you have them do? Focus on what AI is really good at, and start by thinking about how you can collect massive amounts of data into one spot. Get the data in there first; the power of agents shows up in the use cases that spark from there.
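Jacob’s first lens - what models are good at - is easiest to see in the “extract structured information” pattern. A minimal sketch, again assuming a hypothetical `llm()` helper stubbed to return fixed JSON; a real call would hit a model API, and production code should guard against malformed output:

```python
import json

def llm(prompt: str) -> str:
    # Hypothetical stub; a real implementation would call a model API.
    return '{"decisions": ["ship v2"], "owner": "Dana", "due": "Friday"}'

def extract_action_items(transcript: str) -> dict:
    prompt = (
        'Return JSON with keys "decisions", "owner", and "due" '
        "from this meeting transcript:\n" + transcript
    )
    return json.loads(llm(prompt))  # real code should handle bad JSON

print(extract_action_items("...transcript text..."))
```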

How do you handle hallucinations and fears around errors or mistakes?

Bryce (Zapier): When you’re working in something like Zapier Agents, if you require it to do something and it can’t, it will actually stop. There’s a conversation happening - you can see what it’s thinking. From a security perspective, that’s why it’s important to use software like ours, with authentication embedded and fine-grained app controls. The clearer your input, the better the output. If you’re pointing to a data source, it’s just looking at your data, not making things up.

Flo (Lindy): I actually don’t see hallucinations as nearly as big a problem as they used to be. The best mental model is: imagine you have an intern. They’re very good and hardworking, but junior - they’ll screw up here and there. I actually don’t think LLMs screw up as much as junior interns do. How do you manage an intern? For their first few days, you have them run their work by you. This is why we built that toggle for human-in-the-loop. They’ll screw up less and less until eventually you can say, “You’ve got it, you don’t have to ask for permission anymore.”
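Flo’s human-in-the-loop toggle is simple to picture in code. A sketch under assumptions: the email action and the console approval prompt are illustrative, not Lindy’s actual implementation.

```python
REQUIRE_APPROVAL = True  # the "toggle": start strict while the agent is junior

def approved(description: str) -> bool:
    """Ask a human to sign off on an action; auto-approve once trusted."""
    if not REQUIRE_APPROVAL:
        return True
    answer = input(f"Agent wants to: {description}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def send_email(to: str, body: str) -> None:
    # Hypothetical agent action, gated behind human approval.
    if approved(f"send an email to {to}"):
        print(f"Sent to {to}: {body}")  # a real send would go here
    else:
        print("Declined; nothing sent.")

send_email("customer@example.com", "Following up on our call...")
```

Flipping `REQUIRE_APPROVAL` to `False` is the “you’ve got it, you don’t have to ask anymore” moment.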

How should we think about pricing and cost optimization for AI agents?

Jacob (Relay): These things are so valuable if you get them right, compared to the cost of having a human do it, that I would not worry about nickel-and-diming over $5 or $10 when the alternative would have been hiring someone who costs $10,000 a month. When working with an AI step, there are two ways to reduce cost: pick a cheaper model, and ask it to do less work in the form of fewer tokens. I always start with a really good model like Claude 3.5 Sonnet. If it’s working and only costing me a few bucks a week, I don’t even bother with cost optimization. If it’s something that’s going to run a hundred times a day and cost $10 per run, then I optimize by giving it less data and switching to a cheaper model.

Bryce (Zapier): A lot of times when you’re building out prompts, you just want to get them right. So even just going to ChatGPT or wherever and talking through the prompt to dial it in before you use it is one way to make sure it’s crystal clear.
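Jacob’s two levers - a cheaper model, fewer tokens - reduce to simple arithmetic. A back-of-envelope sketch with placeholder per-token prices (illustrative, not current rate cards):

```python
# Assumed input prices in USD per 1K tokens; real prices vary by provider.
PRICE_PER_1K_INPUT = {"big_model": 0.003, "cheap_model": 0.00025}

def monthly_cost(model: str, tokens_per_run: int, runs_per_day: int) -> float:
    per_run = PRICE_PER_1K_INPUT[model] * tokens_per_run / 1000
    return per_run * runs_per_day * 30

# One run a day on a good model: not worth optimizing.
print(monthly_cost("big_model", tokens_per_run=8000, runs_per_day=1))    # ~$0.72
# A hundred runs a day: now the levers matter.
print(monthly_cost("big_model", tokens_per_run=8000, runs_per_day=100))  # ~$72
# Cheaper model plus trimmed context: both levers pulled at once.
print(monthly_cost("cheap_model", tokens_per_run=2000, runs_per_day=100))  # ~$1.50
```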

How should we think about security, privacy, and sensitive information with AI agents?

Flo (Lindy): I feel like I’m gonna get killed for saying this, but no one cares about your data. We’re doing all the right things - agreements with model providers so they never train on your data, SOC 2 compliance, internal audits. But when people ask “What about my data?” - no one’s going to look at it, no one’s going to train on it. I don’t think it is meaningfully different from regular SaaS. All your data is in the cloud already if you use Google Workspace.

Jacob (Relay): Think about it like you’ve just hired a human employee. You need to give them access to all your internal systems to do good work. You should trust Zapier, Lindy, or Relay the way you trust any SaaS provider. There’s one extra dimension of making sure model providers don’t use your data for training, but all the model providers have a bit on the API that you can set. People are way too casual about hiring employees and giving them lots of information, and way overly strict about SaaS products that have really good security practices.

Where do you see agents evolving in the next 6 to 12 months?

Jacob (Relay): First, all the core capabilities will continue to improve - summarization, extraction, translation, speech-to-text, text-to-speech. Reasoning is going to get better, so more agentic use cases will work. I’m really excited about tools that help observe you throughout your day or brainstorm with you about opportunities to use AI more effectively.

Flo (Lindy): We describe our vision as: we want to build an AI employee, and it’s coming. Companies have millions in payroll, and then they spend a thousand dollars on an agent. You actually want these lines to cross! In 6 months you are going to have something that really starts to look like an AI employee, and you’re going to be able to give huge jobs to these things.

Bryce (Zapier): I would hope to see a project management agent that just does it all. Right now, agents are pretty task-oriented and lower-level. Having that oversight might be really interesting to see.

Jacob: It’s an intern, then it gets promotions, then it’s got 5 years of experience, and then it’s someone better than all of us at each function. Very soon!