The Rise of AI Workers - Anthropomorphizing AI Agents

5 Jun 2024

As a relatively self-obsessed species, we have always had the tendency to apply human traits to other things. 

From the melodramatic lives of Greek deities to the swashbuckling animal adventures in Brian Jacques’ Redwall, we simply cannot resist imbuing the non-human with the human. 

David Hume sums this up well: “There is a universal tendency amongst mankind to conceive all beings like themselves. We find human faces in the moon, armies in the clouds.”



From Copilot to Coworker

In AI, this paradigm rings particularly true; from ELIZA to Alexa, conversational AI systems are the perfect canvas for our foibles. 


And nowhere is this more obvious than in the recent wave of agentic AI, where orchestrated models can generate multiple outputs in sequence - enabling them to complete full tasks autonomously. Our systems have evolved: from copilots to autopilots.

We’ve seen a plethora of ‘AI worker’ startups pop up, seeking to capitalise on this advance in ways that were previously impossible.

To give you an example, Artisan is building a consolidated GTM workforce, including Ava (Sales Rep), Liam (Marketer) and James (CS Rep).


Similarly, 11x is creating digital AI workers - starting with Alice, the AI SDR.

Basepilot is also building AI employees for a range of departments, from finance to recruitment.

The landing pages of all these companies highlight the profiles and capabilities of these employees, alongside realistic (albeit AI-generated) avatars.

I guess the question is - why? What are the benefits to selling the systems in this way? And what do the buyers and users think of this humanisation?







The good

  1. Increased trust with customers and users

  2. Easier for customers to understand - both in terms of roles and ROI

  3. Building new categories


Increased trust with customers and users 

Funny though it sounds, multiple studies have shown that the more anthropomorphic an AI is, the more likely we are to trust it and follow its instructions. 

Other studies have shown that customers had higher trust, satisfaction levels and were more likely to make a purchase when interacting with an anthropomorphic AI sales agent compared to a non-anthropomorphic one.




Easier for customers to understand - both in terms of roles and ROI

In many cases, these agentic AI systems - and their capabilities - are very new to buyers; after all, 47% of Americans have never even heard of ChatGPT. With significant levels of automation, it’s often easier to align the AI with employee role functions, where there may already be defined processes or budgets set. 




Building new categories


The last thing startups want to do today is to compete with a copilot or chat product (like GitHub Copilot / ChatGPT). This is bread and butter for incumbents - incumbents who can tack AI onto existing workflows and can afford to take losses to build market share.

Instead of the binary copilot model (one input equals one output), agentic AI promises to complete complex tasks end-to-end, just like an employee would. 
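To make the contrast concrete, here's a minimal sketch of the two interaction patterns - all function names are hypothetical stand-ins, not a real API. A copilot call is a single request and response; an agent loops, feeding intermediate outputs back in until the task is done.

```python
# Hypothetical sketch: copilot vs agentic interaction patterns.
# `call_model` is a placeholder for a single LLM call, not a real API.

def call_model(prompt: str) -> str:
    # Stand-in for one model invocation.
    return f"draft reply to: {prompt}"

def copilot(prompt: str) -> str:
    # Copilot pattern: one input, one output; the human stays in the loop.
    return call_model(prompt)

def agent(task: str, max_steps: int = 5) -> str:
    # Agentic pattern: chain model calls, feeding each output back in,
    # until the model signals completion or we hit a step limit.
    context = task
    output = ""
    for _ in range(max_steps):
        output = call_model(context)
        if "DONE" in output:          # completion signal (illustrative)
            break
        context += "\n" + output      # accumulate intermediate work
    return output
```

The difference is structural, not cosmetic: the loop is what lets an agent decompose and finish a multi-step task without a human prompting every step.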


If (when!) these promises become reality, there will be a massive delta between value provided and the typical $20 per month fees for copilots. Startups - having taken the product and reputational risks to bridge this gap - will be in pole position to capture some of this value. 

🚨 Unintended consequence warning 🚨: the mere whiff of category creation will have VCs knocking down your door!






The bad

  1. Internal pushback by employees

  2. It’s limiting


Internal pushback by employees

Agentic AI has reignited the conversation around the technology and its effect on staffing and layoffs.

It’s natural that many people - particularly those in creative industries - will push back against the automation of tasks they used to do. This can create friction for execs onboarding products, especially if the product is brazenly packaged as a human replacement.

Studies have shown that 72% of executives are actively showing restraint in AI adoption, with the primary reason being societal pressures.

🚨Personal opinion warning 🚨: whilst I acknowledge that there will be an uncomfortable transition into AI-led workforces, I’m a Jevons Paradox stan. In other words, I think that this transition will create far more jobs than it destroys (and hopefully reduce menial work, too).



It’s limiting

We chose not to (fully) anthropomorphise our AI agent - Sawyer - and this was the biggest reason why.

The best new products feel like magic; able to do things that were previously unimaginable. By mapping work only to our human outputs, we might miss the chance to create magic. 

I actually believe that it’s ok for AI to be different - to be better and worse than us at certain tasks. Certainly, no human can monitor an entire codebase, prune a CRM 24/7, or write full-length emails in seconds.

It’s a different form factor, and it might end up being more alien intelligence than human intelligence - so let's remember that we can build alien products, too.






Wait and see

  1. Regulation

  2. Company org structures 


Regulation

We have seen some regulation of platforms that imitate humans; for instance, robocalls using AI-generated voices have now been banned. Obviously those are used nefariously - but the line is increasingly blurred. If an AI SDR were to reach out with a personalised avatar video message, could that also be seen as misleading? If so, will we have to stamp outreach with an AI notice? And does that then reduce trust and conversion rates with customers who crave that special human touch?


Company org structures  

No one knows what the orgs of tomorrow will look like; it’s an open question.

Serial entrepreneur Ash Barbour noted: “It’s a total mess. I think people will start rebranding their AI assistants to 'nothing', as it won’t add any value and will just make the product more confusing.”

It’s certainly true that we don’t yet know how agents will interact with each other, or how they’ll fit into our org charts. What happens when I have 8 AIs called Tim? I’ll probably need an agent routing system - but then what’s the point in having multiple, named AIs?
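To illustrate the disambiguation problem, here's a minimal routing sketch - the agent names and keyword matching are entirely hypothetical, not drawn from any real product. Once several agents share a workspace, something has to map each request to the right one, at which point the human name stops carrying any information.

```python
# Hypothetical sketch: routing requests across identically-named agents.
# Names and keywords are illustrative only.

AGENTS = {
    "Tim (Sales)":   ["pricing", "demo", "lead"],
    "Tim (Support)": ["bug", "error", "refund"],
    "Tim (Finance)": ["invoice", "payroll", "budget"],
}

def route(request: str) -> str:
    # Naive keyword router: return the first agent whose keywords match.
    text = request.lower()
    for agent, keywords in AGENTS.items():
        if any(k in text for k in keywords):
            return agent
    return "unrouted"

print(route("Can you fix this invoice error?"))
# → "Tim (Support)" ("error" matches before Finance's "invoice")
```

The point: the router, not the name "Tim", is what actually directs the work - which is exactly why the named-worker framing may not survive multi-agent orgs.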




Conclusion

In all honesty, we're so early in this cycle that there isn’t really a conclusion to be drawn yet - other than to say how fascinating it is to be working on a startup today. I personally can't wait to see what sticks.