"What's in a name? That which we call a rose by any other word would smell as sweet."
William Shakespeare, Romeo and Juliet (Act II, Scene II)
It sounds lovely, poetic. And in some cases, true. But not in this one. If you're working with artificial intelligence, or you’re marketing something that touches it, or looks like it might touch it, or might one day be upgraded to possibly touch it... the name you give it matters more than most people realise.
AI, as a label, used to mean something technical. Now it often means whatever a company wants it to mean. And that’s the problem.
There was a time when AI referred to systems that mimicked parts of human cognition. That was the idea: learning, reasoning, perception. Now? “AI” is slapped onto vacuum cleaners, toasters, templates in word processors, shopping filters, and ticketing bots that do little more than follow simple if-this-then-that rules.
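To make that concrete, here is a purely illustrative sketch, in Python, of the kind of if-this-then-that ticketing bot that routinely gets dressed up as “AI-powered”. The rules, keywords, and queue names are invented for the example; nothing in it learns or adapts.

```python
# A hypothetical "AI-powered" support ticket router.
# Underneath, it is nothing more than hard-coded if-this-then-that rules.

def route_ticket(subject: str) -> str:
    """Return a queue name for a support ticket, based purely on keyword rules."""
    text = subject.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "password" in text or "login" in text:
        return "account-recovery"
    if "crash" in text or "error" in text:
        return "technical-support"
    return "general"  # everything else falls through to a default queue

# No learning, no adaptation: the same input always produces the same output.
print(route_ticket("I was charged twice, please refund me"))  # -> billing
```

Rebrand that function as an “intelligent routing engine” and nothing about its behaviour changes. Only the expectations do.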
Some of this is marketing. A lot of it, actually. “AI-powered” sells better than “automated rules.” It sounds newer. It sounds smarter. It gets picked up by journalists and investors. People think it means the product is doing something intelligent when it may be doing nothing of the sort. And once that label is on it, people start treating it differently.
They expect results to be dynamic, even personalised. They assume it can adapt to context. And if it doesn't? If it fails? They may not blame the product. They may blame themselves for not using it “right”. Or worse, they may never realise the thing was limited to begin with.
You see this across sectors.
In finance, there are tools that generate stock insights by regurgitating earnings reports with a few templated lines of commentary. But they’re presented as “AI market analysts.”
In education, quiz generators that pull from a static question bank are branded as “AI tutors.” In customer service, chatbots that follow a strict decision tree are called “virtual advisors.”
And in every case, that “AI” label shifts how the tool is understood, even if its capabilities are minimal. The more it’s used as a gimmick, the harder it becomes to know when AI is actually doing something new or helpful. This isn’t just about overstatement. It creates serious problems when expectations are out of sync with what’s actually happening.
Misnaming these systems can undermine trust, cause reputational damage, and in some cases, lead to regulatory scrutiny. If a customer is misled by a product description and relies on the system in good faith, and something goes wrong, that’s not just an inconvenience. It may fall under misrepresentation. Or breach of contract. Or even negligence.
Legal systems are already starting to respond to these gaps. In 2023, the US Federal Trade Commission (FTC) warned companies not to exaggerate AI capabilities in advertising. In the UK, the Advertising Standards Authority has begun ruling against misleading AI product claims. And in Europe, new rules under the EU AI Act will penalise companies that misrepresent the risk and function of their AI tools, particularly in sensitive sectors.
The trouble is that calling an AI “smart” or “intuitive” suggests it understands something. In reality, most systems don’t understand anything in the sense we usually mean by that word. They make predictions based on patterns learned from large volumes of data. They have no self-awareness, no judgment, no context beyond what they were trained on.
But once we start calling a chatbot a “digital colleague” or an image model a “visual thinker,” we start treating these systems as if they have minds of their own. And that changes how we interact with them.
If you work in tech or policy or product marketing, and you're using words like "autonomous", "agent", "smart", or even "understanding" without asking what those words imply, you might be stepping into legal or ethical quicksand without knowing it. Because, in AI, names do not just sit on the surface. They shape how systems are understood. And they do something else too. They shift responsibility.
Take one example: the AI agent versus agentic AI. The two phrases are often used interchangeably. They sound similar, maybe even identical. But they’re far from it.
An AI agent is often just a bit of software that can take limited actions in an environment. Click buttons, send emails, scrape data. Routine stuff. Useful, but not magical.
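For a sense of what that usually means in practice, here is a minimal, hypothetical sketch of such an “agent”: a fixed pipeline of scripted actions, with stand-in functions invented for the example. Nothing in it sets goals or takes initiative.

```python
# A hypothetical "AI agent": a fixed pipeline of scripted actions.
# It only ever does what it was explicitly told to do, in the order it was told.

def fetch_report() -> str:
    # Stand-in for scraping a page or calling an API.
    return "quarterly numbers: flat"

def summarise(text: str) -> str:
    # Stand-in for a templated summary; no comprehension involved.
    return f"Summary: {text}"

def send_email(body: str) -> None:
    # Stand-in for a call to an email client.
    print(f"Sending email -> {body}")

def run_agent() -> None:
    """Run the scripted steps. Nothing here decides, prefers, or intends anything."""
    report = fetch_report()
    digest = summarise(report)
    send_email(digest)

if __name__ == "__main__":
    run_agent()
```

Useful plumbing, perhaps. But nothing in that pipeline chooses anything.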
Agentic AI, though? That implies something more. Something that operates with intent. Something that initiates actions on its own, not just in response to input but maybe because it has a “goal” or “preference.” Right now, almost nothing in the commercial space fits that description, not really. But you’d be surprised how many companies use the term anyway. And when a tool like that makes a decision that harms someone, those words can get pulled apart in courtrooms.
The gap between what something is and what it’s called becomes the whole argument.
We often give “AI tools” more trust than they deserve. We assign meaning to their outputs. We imagine intention where there is only correlation. This creates a dangerous fog around accountability. Because if the tool is seen as making its “own” decisions, people might believe responsibility lies with the machine. But the machine cannot be held to account. The people and companies behind it can.
And should.
Let’s look at some real examples of this problem (names have been withheld, but a quick search will give you more detail if you want it).
In one country, a government project was widely reported as planning to create a “robot judge” to resolve small claims disputes. Headlines exploded. It sounded like a world first. In truth, the tool was a decision support system. It did not replace a judge. It did not make final rulings. But the word "judge" shaped how people viewed it. Trust dropped. Criticism piled in. The actual tool became hard to deploy, not because of its technical limits, but because it was framed incorrectly from the start.
Several recruitment platforms claimed their algorithms were “learning” to spot talent. That language suggested fairness, neutrality, even improvement. But it turned out that many of them were just replicating past hiring decisions, with all the built-in bias that came with them. A particularly well-known company had to scrap an internal system that penalised CVs with the word “women’s” in them. That wasn’t intelligence. It was a mirror held up to flawed data. But when you call a tool smart, people expect smart outcomes. And when they do not get them, the backlash is fierce.
A company once described its chatbot as “the world’s first robot lawyer.” It could generate documents, sure. But it could not give real legal advice, nor represent anyone in court. That didn’t stop people from trying to use it that way. Eventually, regulators stepped in. That company is now facing legal challenges for offering services under a label that overstated what the product could do. Once again, the issue was not just function. It was presentation.
It’s not always the flashy errors that cause damage. Sometimes the smaller, stranger slips are more telling. Quiet language choices that go unchecked. Internal inconsistencies. Documents written by different teams who don’t speak to one another.
Here are a few examples. They’re subtle, but they matter.
A procurement team buys an “AI-enabled” fraud detection tool. It turns out to be a rules engine based on old case templates. When false positives skyrocket, the vendor blames misuse. But the contract was signed on the assumption the system could learn. It couldn’t.
A start-up pitches its pricing tool as “self-learning.” Investors believe it can adapt in real time to changes in the market. Later, it’s revealed that any updates require manual retraining every two weeks. That wasn’t deception, exactly. But the language got them into meetings.
A chatbot integrated into a large HR system is described in documentation as “conversational AI.” That phrase makes it sound dynamic. But in practice, it can only answer 14 types of questions, all pre-written. One client asks it about a mental health policy and receives a confusing redirect to the corporate travel policy. Trust evaporates overnight.
A public sector report refers to a tool as “autonomous” when it is actually just running on a schedule. No independence. Once that word appears in the paper, it gets cited. Other departments adopt it. Now five systems are described as autonomous when none actually are.
These scenarios, or others very much like them, have already happened. And they all started with the same thing: a poor choice of words.
So what do you call it, then?
This is not a call for dull language. It’s a call for honest language. If your tool scores candidates, say that. If it ranks, sorts, clusters, predicts, fine. Just say what it does. Not what you hope people will assume.
Also, don’t treat words like decoration. They are operational tools. They guide how users behave, how oversight happens, and how blame is assigned.
In the end, Shakespeare might have had a point about love. But if you work with AI, the name matters. Possibly more than anything else.
If you’re naming an AI product, pause and ask, “What does this system actually do?” Not what you wish it did. Not what sounds impressive. But what it actually does, consistently and predictably.
Document your language choices, internally and externally. Be consistent. Don’t call it “predictive” in one document and “adaptive” in another unless you know those mean different things and you’re prepared to explain how.
And if you have one, give your legal team a seat at the table. Not to water down the message. Just to keep it honest.
Finally, consider dropping the word “AI” altogether if you don’t need it. If your tool works well, it will sell on its own merits. If it doesn’t, AI branding will only carry it so far. And when it fails, that label may just make the fall harder.
And if you’re a consumer, take all these descriptions with a pinch of salt.
Juliet thought names didn’t matter, but in the real world, especially with “AI”, names are not neutral. They carry weight. They frame expectations. They change how we assign fault.
So next time someone suggests calling a tool “intelligent” or “autonomous” or “agentic,” ask if it really is. Or if that word is just another rose with a different scent. One that might not smell so sweet after all.