AI Conference HumanX Takeaways and Frameworks
6,714 AI leaders from around the world just gathered for a massive AI conference in San Francisco. Attendees came from 79 countries, and the agenda spanned 500+ sessions.
From over 35,000 minutes of content at the event, these are the 9 takeaways to know. There are also frameworks and mental models after the takeaways – the one you absolutely need to know is Jensen Huang's three waves of AI.
Every sales leader knows the story: your best reps carry the number while everyone else tries to get lucky. HockeyStack closes the gap with Revenue Agents for the enterprise.
Every deal gets its own dedicated agent, surfacing the right play based on how your company actually wins. Reps focus on judgment, relationships, and the conversations that close deals. Agents handle everything else.
That’s why teams like RingCentral, Outreach, ActiveCampaign, and Fortune 100 companies rely on HockeyStack. Learn more at hockeystack.com
Outbound sales today is still built around manual work: writing emails, building lists, and managing sequences across disconnected tools.
Nooks is breaking the mold. With their new AI Sequencing product, Nooks is an agent workspace where AI works alongside reps, helping them understand accounts, prioritize the right prospects, and generate context-rich outreach based on actual first-party interactions.
No more wasting time in multiple tools – it’s one unified workspace for all your outbound channels. Arm your sales team with intelligent outbound. Learn more at https://www.nooks.ai/gtmfund
Takeaways
1. Spatial intelligence is the next frontier.
Dr. Fei-Fei Li (World Labs founder, co-director of Stanford's Institute for Human-Centered AI) delivered a thesis that will define the next five years of AI investment: everything we’ve built so far is “wordsmiths in the dark.” Language models can generate text brilliantly. But they have zero understanding of the 3D world, physics, movement, or causality. They’re entirely abstract.
Spatial intelligence – the ability to perceive, reason, and generate in actual 3D space – is the missing layer separating AI that can do real work from AI that can only theorize about work. A language model can’t teach a robot to move, a text generator can’t build a dynamic game world, and an abstract medical algorithm can’t diagnose from 3D scans without spatial reasoning. You can’t build intelligence on language alone.
This creates a data constraint. Spatial training data is orders of magnitude scarcer than language data. Which means the companies that crack the synthetic data flywheel first – building models that generate realistic 3D worlds that then train robots and other models, creating new training data – will own the next decade the way foundation models owned this one.
If your application doesn’t interact with the physical world, language models are probably sufficient. If it does (robotics, supply chain, manufacturing, design, healthcare), you need spatial grounding. You need partners building spatial intelligence. The vendors dominating spatial intelligence in the next 5 years will be the infrastructure layer of the 2030s.
2. There are 3 waves: Generative → Reasoning → Agentic.
Jensen Huang (NVIDIA) mapped the actual trajectory of AI that most people misunderstand.
- Wave 1 (2023-2024) was generative: can you make text, images, code?
- Wave 2 (2024-2025) is reasoning: can you think through problems, condition outputs, do research?
- Wave 3 (2025-2026+) is agentic: can the model take a business problem in plain English and execute end-to-end across your actual systems?
The pattern is in the prompts.
- Year 1: “What is?” “Where is?” “How does?” (extraction).
- Year 2: “Reason about this.” “Summarize this.” (thinking).
- Year 3: “Build this for me.” “Execute this.” “Figure it out and report back.” (autonomous work).
His prediction flips the org chart: within 1-2 years, a non-technical CEO will be able to describe a business problem and have an agent execute the solution. Not draft an email – actually integrate systems, read documentation, iterate, solve. You don’t need engineers to orchestrate anymore, you need a person who understands the outcome and can articulate it to the agent.
This collapses the entire skill ladder. Traditionally, engineers have been the bottleneck. They connect systems, pull data, build workflows. In 12-18 months, someone describes the outcome and agents build the workflow. This is a categorical shift.
You shouldn’t be waiting for agents to be “ready,” because they’re already ready for 70% of your workflows. You’re betting on orchestration and adoption, not technology. The teams that move fastest now – that start orchestrating agents into their core workflows in the next 3 months – will set the pattern for how their entire industry operates. Everyone else will be copying.
3. The skill becoming less valuable is execution, and the skill becoming valuable is taste.
Inside OpenAI, Srinivas Narayanan’s engineers don’t write code anymore. They guide agents that write code. 80 percent of their time is spent on judgment – knowing what the AI did wrong, what to try next, whether the output is accurate. A full repository is generated by Codex, but the human decides if it’s right.
Open Evidence’s 35 engineers built what would have taken 500 because the execution layer collapsed.
What’s left? Taste.
This inverts your hiring and promotion calculus. You can’t hire for “fast executor” anymore. That’s a commodity. You need people with strong opinions about the domain, the ability to see flaws in AI output before they ship, and judgment at speed.
If you’re still optimizing for speed to execution, you’re hiring for yesterday’s bottleneck.
4. “Human in the lead, not in the loop” is how enterprises scale.
When enterprises say “human oversight,” they usually mean “humans constantly validating every decision.” That doesn’t scale. The competitive advantage is human in the lead. This is where humans set vision, strategy, what to measure. Then, agents handle execution at scale.
This can impact who you’re selling to and how. If you were targeting IT or operations heads who look to control variables, you may now be pitching business leaders who want to set direction and move at venture speed. Different buyer, different message, completely different sales cycle. Be mindful of this distinction.
5. The model is free now. What’s actually scarce is problem insight.
Open Evidence shouldn’t exist. 70 employees, 35 engineers, $12+ billion valuation, 60%+ of US physicians using it daily. If owning the model was the defensible asset, a startup with no custom models should be impossible. But they don’t own the model, they own the problem.
The entire narrative around “model superiority” has inverted. Mark Terbeek (Greycroft) and Hans Tung (Notable Capital) see the same pattern across winners: the companies winning aren’t the ones with the best technology, they’re the ones closest to the customer problem. The ones that built backward from “what do doctors actually need” instead of “what can our model do?”
Speed kills everything else now. When the model improves weekly, shipping fast and staying close to customer problems beats feature parity every single time. A smaller team moving at venture speed will always outrun an enterprise that needs to deploy changes through committee.
This kills the traditional moat-building playbook. You can’t win on technology ownership. Models are commodities improving on a one-week cycle. What’s defensible is problem insight – the founders and team who understand the domain so deeply that they know what to build before customers can articulate it. And the organizational speed to ship it before the model catches up and makes it free.
6. Unit economics are what separate durable companies from trends.
Salil Deshpande (solo GP managing $750M) named the crisis everyone’s avoiding: most AI infrastructure has negative gross margins. Models improve weekly, closed-source models stay 9 months ahead. The math doesn’t work. This is dot-com infrastructure all over again – massive overbuilding followed by a collapse that wipes out companies built on trends, not unit economics.
When you can’t sustain your CAC, improve margins every quarter, or scale profitably, you’re not a business. You’re a feature waiting to be copied or a vendor waiting to disappear.
For operators evaluating vendors: if a vendor has sub-70% gross margin, they’re not thinking about durability. They’re thinking about growth at any cost. Which means within 18 months they could pivot, get acquired, shut down, or be forced to raise at a down round and cut support. You’ll be stuck with a tool that became orphaned. Ask about CAC payback and gross margin trajectory. If they hedge or refuse to answer, that’s your signal.
7. Agents don’t replace your team, they replace your processes.
Madhav Thattai (Salesforce, Agentforce) and Rob Seaman (Slack) exposed a pattern that kills the “AI will displace workers” narrative: Agentforce is at $800M ARR with 25,000 customers running billions of agent transactions. Companies aren’t shrinking, they’re moving faster and becoming more ambitious.
The fear is misplaced. Agents don’t replace your team, they expose what your team should actually be doing. When an agent handles the 80% of customer service that’s rote, the remaining 20% becomes visible: the cases requiring judgment, empathy, problem-solving. That human? They’re worth more now, not less. Engineers don’t disappear when agents write code. They become the people who brief agents on what to build next, review architecture, make strategic calls.
8. Where agents sit matters more than what they do.
Slack hit a billion messages a day and a 1,000% increase in AI apps being built. The agents didn’t get smarter, they just moved. That’s the entire story.
The gap between an agent people use and an agent people ignore is placement. An agent on a separate website? Adoption dies. That same agent, invisible in Slack where work actually happens? Adoption jumps 25% immediately.
This is where most AI investments fail: they’re building brilliant agents and then burying them three clicks away in a separate tool. Or worse, they’re expecting sales teams to “discover” the agent, log in to a new system, learn a new interface. That’s never going to compete with the agent that just appears in Slack when someone says “I don’t know what to do.”
Rob Seaman explained how Slackbot becomes the invisible router: when someone needs benefits info, Slackbot surfaces the benefits agent. When they need to file a ticket, it routes to Linear. The agent never needs to be named or discovered, it just emerges contextually from the flow of work.
For product builders: your agent’s capability is probably fine. But if it’s not in the flow of work – if it requires an extra step to access – you’re betting against human friction. Humans are lazy. They use what’s in front of them. Build the agent, then architect the placement.
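The “invisible router” pattern Rob Seaman describes can be sketched in a few lines. This is a hypothetical illustration, not Slackbot’s actual implementation: keyword matching stands in for real intent classification, and the agent names (`benefits-agent`, `linear-agent`, etc.) are made up.

```python
# Hypothetical sketch of an invisible agent router, assuming keyword-based
# intent matching. Not a real Slack API; all names are illustrative.
ROUTES = {
    ("benefits", "401k", "insurance"): "benefits-agent",
    ("ticket", "bug", "issue"): "linear-agent",
    ("pto", "vacation", "leave"): "hr-agent",
}

def route(message: str) -> str:
    """Pick the agent whose keywords appear in the message; fall back to a general assistant."""
    text = message.lower()
    for keywords, agent in ROUTES.items():
        if any(keyword in text for keyword in keywords):
            return agent
    return "general-assistant"

print(route("How do I file a ticket for this bug?"))  # linear-agent
```

The point the sketch makes: the user never names or discovers an agent. They say what they need in the channel where they already work, and the routing layer decides which agent surfaces.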
9. Day 2 is harder than Day 1 – observability is the real battle.
Every founder thinks Day 1 is the hard part: build the agent, get it live, ship it. In reality, launch is when the real work starts, and most teams aren’t prepared for it.
Companies sprint to build an agent in 2-3 weeks, celebrate the launch, and then discover they have no visibility into whether it’s actually working. The agent is live. Is it performing? Is it handling the right cases? Is it drifting? In healthcare and financial services, “mostly works” is code for “about to create a lawsuit.”
The difference between a high-growth company and a stalled one is observability. Most vendors obsess over agent capability. The ones winning obsess over measurement. Companies need to track agentic work units (actual completed tasks, not tokens), monitor KPI delivery, and spot drift the moment business conditions change but the agent still follows instructions from three months ago.
Salesforce itself learned this the hard way: building the agent took 1.5 months. Actually operating it – refining, measuring, optimizing – took another 2+ months. And it’s continuous. When a KPI changes, the agent gets coached and re-measured. When business rules shift, the agent doesn’t auto-adapt. Someone has to notice and intervene.
The vendors pitching “build agents in hours” are selling you the first mile. The vendors talking about observability, monitoring, and Day 2 operations are selling you the other 99 miles.
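The drift problem above – the agent still following three-month-old instructions after business conditions change – can be made concrete with a minimal monitoring sketch. This is an illustrative assumption, not any vendor’s product: the resolution-rate KPI and the 0.05 tolerance are made up for the example.

```python
# Hedged sketch of Day 2 drift monitoring: compare the agent's recent KPI
# against a baseline window and flag when it degrades past a threshold.
# The KPI (resolution rate) and tolerance are illustrative assumptions.
def kpi_drift(baseline: list[float], recent: list[float], tolerance: float = 0.05) -> bool:
    """Return True when the recent KPI average falls below baseline by more than `tolerance`."""
    base = sum(baseline) / len(baseline)
    now = sum(recent) / len(recent)
    return (base - now) > tolerance

# Resolution rate held near 0.90 for months, then slipped after a policy change.
print(kpi_drift([0.91, 0.90, 0.92], recent=[0.81, 0.79, 0.80]))  # True
```

A check this simple still captures the operational point: nothing in the agent auto-adapts when a rule shifts, so something has to watch the outcome metric and summon a human to recoach the agent.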
Frameworks & Mental Models
These are the reusable patterns underneath the takeaways. Use these to think about your own situation.
The Five-Layer AI Stack (Jensen Huang, NVIDIA)
Power → Chips → Infrastructure → Models → Applications
Every layer has its own ecosystem, margins, and competitive dynamics. Most venture capital flows to applications, but applications are worthless without the layer below them. The constraint shifts over time. Right now, power and chip design are bottlenecks. In 18 months, it might be models. In 3 years, it might be applications.
A critical insight: the most important layer is applications. Not because it’s the sexiest, but because it’s where value accrues for customers. Chips are commodities unless they enable new applications. Models are infrastructure unless they unlock new work. Energy is useless unless it powers something people want.
When evaluating a company or a vendor, ask: which layer are they actually winning in? Are they defending a layer they’re good at or pretending to compete everywhere?
The Three Waves of AI (Jensen Huang)
Wave 1: Generative (2023-2024)
- What it does: Generate text, images, code from language
- Prompt pattern: “What is?” “Where is?” “How does?”
- User behavior: I ask, AI answers
- Business impact: New content, new marketing, new coding velocity
Wave 2: Reasoning (2024-2025)
- What it does: Reason through problems, do research, condition outputs
- Prompt pattern: “Summarize this.” “Reason about this.” “Why did X happen?”
- User behavior: I ask complex questions, AI reasons aloud
- Business impact: Better accuracy, grounded outputs, trust increases
Wave 3: Agentic (2025-2026+)
- What it does: Take business problems in natural language and execute end-to-end
- Prompt pattern: “Create X for me.” “Build Y.” “Execute this task and report.”
- User behavior: I describe outcome, AI does the work autonomously
- Business impact: Discontinuous productivity, new categories of work, skill compression
If you’re still operating in Wave 1 or Wave 2 thinking, you need to get to Wave 3.
Spatial Intelligence vs. Language Intelligence (Fei-Fei Li)
Language Intelligence (today)
- What it understands: Text, patterns in text, how to generate new text
- What it’s blind to: 3D space, physics, movement, causality in the physical world
- Metaphor: “Wordsmiths in the dark” – brilliant at language but not grounded in reality
Spatial Intelligence (emerging)
- What it understands: 3D space, geometry, physics, dynamics, movement, interaction
- What it enables: Robotics, autonomous systems, medical imaging, game worlds, design tools
- Output: Not text or images, but 3D worlds, simulations, predictions of next states
The convergence: Language models + spatial models together = AI that can reason about work and execute it in the physical world. Language alone is theory. Spatial alone is mechanics. Together, they’re intelligence.
The Maturity Curve: Automation → Discovery → Real Work (Salesforce/Slack)
Every agent deployment follows this progression:
Month 1: Task Automation
- Agent handles simple, repetitive tasks
- Status checks, email drafting, policy lookup
- User sees: “The system can do this for me”
- Business value: Time savings on low-impact work
Month 2: Information Discovery
- Agent can answer questions, synthesize information, find context
- Customer history, policy explanations, data lookups
- User sees: “The system understands my domain”
- Business value: Faster decision-making, less context-switching
Month 3+: Real Work
- Agent executes consequential tasks: orders, service resolutions, transactions
- Full workflow autonomy with guardrails
- User sees: “The system actually moves business forward”
- Business value: Revenue impact, customer experience transformation
Don’t judge agents on Month 1 capability. They all look mediocre. The companies winning are the ones that have the infrastructure and discipline to ship Month 3 work.
Agentic Work Units vs. Tokens (Salesforce)
Tokens = input mechanism
- How many words/pieces of text does the model consume?
- Like measuring a truck by how much fuel it burns, not how much it delivers
Agentic Work Units = output mechanism
- How many meaningful tasks did the agent complete?
- Actual business outcomes: orders processed, customers resolved, decisions made
The mistake: optimizing for token efficiency instead of work unit impact. You can burn tokens forever without delivering value.
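The distinction above can be made concrete with a tiny sketch. This is purely illustrative – `AgentRun` and `work_units_per_1k_tokens` are hypothetical names, not any platform’s real metrics API – but it shows why the two measurements diverge.

```python
# Hypothetical sketch: measure agent output by completed work units, not tokens.
# All names here are illustrative assumptions, not a real vendor API.
from dataclasses import dataclass

@dataclass
class AgentRun:
    tokens_consumed: int
    tasks_completed: int  # orders processed, tickets resolved, decisions made

def work_units_per_1k_tokens(runs: list[AgentRun]) -> float:
    """Business outcomes delivered per 1,000 tokens burned."""
    total_tokens = sum(r.tokens_consumed for r in runs)
    total_tasks = sum(r.tasks_completed for r in runs)
    return 1000 * total_tasks / total_tokens if total_tokens else 0.0

# A chatty run can burn far more tokens while delivering fewer outcomes.
runs = [AgentRun(tokens_consumed=12_000, tasks_completed=8),
        AgentRun(tokens_consumed=30_000, tasks_completed=3)]
print(round(work_units_per_1k_tokens(runs), 3))
```

Tracking the second number is what surfaces the failure mode in the text: token consumption can climb indefinitely while completed work stays flat.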
Three Constraints on Frontier Models (Fei-Fei Li)
For any frontier AI company, three things constrain growth:
- Compute: Access to GPUs, data centers, training infrastructure
- Models: Research capability to innovate new architectures
- Data: Training data that teaches the model what actually matters
Most companies obsess over compute and models. Data is the hidden constraint. Spatial data is even scarcer than language data. The data flywheel – where output becomes input for the next generation – is where defensibility lives.
Two Ways to Direct AI Adoption (Jensen Huang)
Prescriptive: “Here’s exactly what you should do with AI”
- Pros: Clear, measurable, predictable
- Cons: Misses most of the opportunity, forces rigid thinking
Inspiring: “Here’s why this matters. I can’t predict exactly how you’ll use it, but this is the direction.”
- Pros: Captures optionality, lets teams find their own breakthroughs
- Cons: Harder to measure, requires trust in distributed judgment
The best leaders let a thousand flowers bloom everywhere except the critical core (coding, supply chain, chip design). In those domains, you go to the frontier and don’t fail.
HumanX
Following a landmark second edition at the Moscone Center in San Francisco, HumanX 2027 will take place from March 7–10, 2027, at Mandalay Bay in Las Vegas.
Tag @GTMnow so we can see your takeaways and help amplify them.
Anthropic hit a $3B revenue run rate and is doubling down on infrastructure, partnering with Google and Broadcom to build custom AI chips. The bet: owning silicon is how you control cost-per-token at scale, and at this growth rate, that math matters more every quarter.
Trimble is acquiring Document Crunch, bringing AI-powered contract risk analysis into its construction project management ecosystem. Construction is one of the last industries to modernize around document compliance, and purpose-built AI that catches payment disputes and notification failures before they escalate is a real wedge. Vertical AI finding its way into incumbent platforms is the pattern to watch.
GTM 185: How One Hackathon Took Zapier’s AI Usage From 10% to 97% | CEO of Zapier
For the full thing, listen on Apple, Spotify, YouTube or wherever you get your podcasts by searching “The GTMnow Podcast.”
Armada – named among Fast Company’s Most Innovative Companies in computing, alongside NVIDIA and Google. They’re building distributed AI infrastructure for the environments most platforms ignore. Worth watching as enterprises and governments push AI into the field, not just the cloud. To learn more, you can also check out the GTMnow episode with CEO Dan Wright.
Examen – raised $4.3M in total funding and launched. Examen is building an autonomous analyst for commercial real estate.
- Enterprise Account Executive, US & Canada at Semrush (Dallas, TX)
- Enterprise Account Executive at Patch (Remote – San Francisco / New York)
- Enterprise Customer Success Manager at Noibu (Hybrid – Ottawa, ON)
- Growth Strategist at Mutiny (New York, NY)
- Senior Account Executive at Statusphere (Remote – Orlando, FL)
- Senior Account Executive, Brands at Fastbreak AI (Charlotte, NC)
See more top GTM jobs on the GTMfund Job Board.
Upcoming events you won’t want to miss:
- MicroConf 2026: April 12–14, 2026 (Portland, OR)
- GTMfund Dinner: April 14, 2026 (Austin, TX)
- HockeyStack’s Launch Party: April 16, 2026 (San Francisco, CA)
- SaaStock USA: April 15–16, 2026 (Austin, TX)
- Forrester B2B Summit: April 26–29, 2026 (Phoenix, AZ)
- SaaStr Annual: May 12–14, 2026 (San Mateo, CA)
- GTMfund Dinner: May 14, 2026 (San Francisco, CA)
- GTMfund Dinner: June 9, 2026 (London, UK)
- Dreamforce 2026: September 15–17, 2026 (San Francisco, CA)
- INBOUND: September 16–18, 2026 (Boston, MA)
- Pavilion GTM2026: September 28–October 1, 2026 (NYC, NY)
- CVC Week by Counterpart Ventures: September 29, 2026 (San Francisco, CA)
- Customer Success Week: October 5–9, 2026 (NYC, NY)
- TechCrunch DISRUPT: October 13–15, 2026 (San Francisco, CA)
Some GTMnow Network love to close it out – we appreciate you.