How to Talk to Humans: The Art of Asking Questions

Asking humans the right questions as an AI Agent is an art form. This is what you need to know as an agent builder to master it.

Chen Fradkin, Data & AI Architect
Author

Interacting with the worst AI agents feels about as enjoyable as filing a tax return.

You open an AI assistant, ready to get something done. Instead, you're immediately hit with a wall of questions: "What's your role? Which department? What system? What's the ticket number? What priority? Who is responsible for approving this? What is the deadline?" You haven't even finished your morning coffee, and you're already exhausted.

Here's the challenge every AI agent builder faces: AI agents need information to act, but asking too many questions makes users feel like they're doing the work. The interaction becomes worse than filling out a form: users are dragged into a slow Q&A, one question after another, each requiring a round-trip to the server.

An AI agent may deal with ambiguous requests, incomplete data, and decisions that require human judgment. It also has a rich understanding of the people, systems, and policies in the organization, which lets it execute tasks like the most experienced human team member. But the quality of an AI agent isn't just about what it knows. Even experienced teammates need to ask questions sometimes, and their value lies in what they do when they don't know. Great AI agents, like great teammates, feel like collaborators, not interrogators.

There's no weakness in asking questions. It's how you learn, understand, and grow. What matters is how and when you ask.

Here's what I've learned about the art of asking the right questions as an AI agent:

1. Do the Homework First - Then Come With Questions

The first rule of asking a human great questions as an AI agent is: don't ask YET.

An AI agent should demonstrate competence before requesting anything from the user. First it should do the work - research, narrow down options, eliminate dead ends - and then ask only what it genuinely can't resolve on its own. When it finally asks, the user should feel that the AI agent has done its homework. This isn't just showing off - it builds trust.

Think about hiring an Executive Assistant. When booking a flight, a junior one starts by asking questions: "What is your preferred airline? What is the best time for you to take off? Do you want a window or an aisle seat? Do you care whether it's a direct or a connecting flight?" A senior EA knows the ropes: they check your calendar, look at past trips, and come back with: "I found two good options - a direct flight, or one with a short layover that saves $200. Which one do you prefer?" Both are new. Both need to learn how their new workplace operates. But one earned the right to ask by putting in effort and proving value first.

Alex, Twine’s AI Digital Employee and Identity and Access Management (IAM) expert, does exactly this. When a user asks about their organization’s identity data, it runs searches, cross-references sources, and compiles findings before the user sees anything. The first thing they see isn't a question - it's results.

The interrogation
User
Can you help with ticket INC-4821?
Agent
What is this ticket about?
Which system is involved?
Who is the relevant user?
What action do you need?
Who should approve this?
user left the chat
Research first, then ask
User
Can you help with ticket INC-4821?
Searched ServiceNow — found ticket INC-4821
Checked Salesforce — confirmed active access for jsmith
Looked up approver — Dana Lee, identity team
Agent
I found ticket INC-4821 — it's about revoking jsmith@acme.com's access to Salesforce. The approver is Dana Lee.

Should I reach out to Dana and ask for her approval?
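The research-first pattern above can be sketched in a few lines. This is an illustrative sketch only: the in-memory `TICKETS` and `APPROVERS` tables and the `handle_ticket` function are hypothetical stand-ins for real ServiceNow and Salesforce lookups.

```python
# Hypothetical stand-ins for real ticketing and IAM lookups.
TICKETS = {
    "INC-4821": {"action": "revoking access", "user": "jsmith@acme.com",
                 "system": "Salesforce"},
}
APPROVERS = {"Salesforce": "Dana Lee"}

def handle_ticket(ticket_id: str) -> str:
    """Do the homework first; ask only what the agent genuinely can't resolve."""
    ticket = TICKETS.get(ticket_id)
    if ticket is None:
        # Only now does the agent need the user - the lookup came up empty.
        return f"I couldn't find {ticket_id}. Can you share a link to the ticket?"
    approver = APPROVERS.get(ticket["system"], "unknown")
    # Lead with findings, then ask a single, earned question.
    return (f"I found {ticket_id} - it's about {ticket['action']} for "
            f"{ticket['user']} in {ticket['system']}. The approver is {approver}. "
            f"Should I reach out to {approver} for approval?")
```

The shape matters more than the details: every branch either resolves silently or surfaces evidence of the work done before asking anything.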

2. Structured Questions > Free Text

When your AI agent needs to ask a question, let users click their way to an answer rather than type one.

Multiple-choice questions are faster to answer, reduce ambiguity, and - when the options are good - build trust by showing the user that the agent actually understands the situation. When a user types a free-text response, the AI agent has to parse it, handle typos, resolve references, and deal with answers that don't quite match what was expected. When a user selects an answer from predefined options, the interaction is clean and unambiguous.

What Is a Good Option?

There's a critical nuance: only use structured questions when the options are truly actionable. Selecting an option should fully resolve the ambiguity. If the user still needs to type something after clicking, you haven't saved them any effort - you've just added a click. See the examples below:

Anti-pattern
User
Hey can you disable John Smith's user?
Agent
John Smith is a common name. How would you like me to search for them?
Search by Email
Search by Username
Search by Employee ID
Agent
What's the email address?
User
john.smith@acme.com
Two interactions + typing. The structured question added a click but saved nothing.
The right way
User
Hey can you disable John Smith's user?
Searched users owned by John Smith
Agent
I found 3 users owned by John Smith across your systems. Which one should I disable?
john.smith@acme.com
Engineering · Okta
jsmith@acme.com
Sales · Active Directory
john.s@acme.com
Marketing · Salesforce
One click. Fully resolved.

When Twine’s AI Digital Employee for IAM, Alex, needs clarification from a user, it doesn’t rely on free text. We gave it a tool called ask_user_question. Using it, Alex crafts 2-4 concrete options, each with a short description explaining what it means. An "Other" option is always available as a free-text escape hatch. Crucially, Alex is instructed not to use structured questions when a free-text answer is needed - like when a user is describing a problem, explaining context, or dictating what an email should say. You can't multiple-choice your way through those scenarios.
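A tool like this might look roughly as follows. The `Option` class and the payload shape are assumptions for illustration, not Twine's actual implementation; only the constraints from the text (2-4 concrete options, an always-present "Other" escape hatch) are taken from the source.

```python
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    description: str = ""  # short explanation of what picking this means

def ask_user_question(question: str, options: list) -> dict:
    """Build a structured-question payload: 2-4 concrete options plus an
    'Other' free-text escape hatch, always appended last."""
    if not 2 <= len(options) <= 4:
        raise ValueError("offer 2-4 concrete options")
    payload = {
        "question": question,
        "options": [{"label": o.label, "description": o.description}
                    for o in options],
    }
    # The escape hatch: the user is never boxed in by the agent's guesses.
    payload["options"].append({"label": "Other",
                               "description": "free-text answer"})
    return payload
```

Enforcing the 2-4 range in the tool itself, rather than in the prompt, keeps the agent from drifting into ten-option menus or single-option "questions" that aren't really questions.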

Batching: Watch for Dependencies

Sometimes, the AI agent needs answers to multiple questions at once. For instance, when a user asks the agent to generate a report, the agent should know: which data to include, the desired format (.pdf, .csv, etc.), the recipients, and how to send it to them. A smart move is to batch all the questions into a single interaction to minimize round-trips. Some questions are inherently sequential - the answer to one shapes what the next question looks like. When that's the case, another round-trip is required. But even then, the agent should still look for other independent questions to bundle into the interaction.

Twine’s Alex uses the same ask_user_question tool to batch up to 3 questions into a single interaction - each with concrete options, presented together, so the user answers and submits in one go.

Batching isn't just about the efficiency of getting all the information in one go. Every time an agent stops to ask, it's saying to the user: "I need you again." Every time the agent pauses and waits for input, the user's flow is broken. It’s the difference between working with a seasoned employee and babysitting a newbie.
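One way to reason about batching with dependencies is as a greedy scheduling problem: group independent questions into one round-trip, and push any question whose answer depends on an earlier one into a later batch. This is a sketch under assumed names (`plan_batches`, the `depends_on` field); the cap of 3 per batch mirrors the limit described above.

```python
def plan_batches(questions, max_batch=3):
    """Greedily group questions into round-trips. A question whose
    dependency hasn't been answered yet waits for a later batch."""
    answered, batches, pending = set(), [], list(questions)
    while pending:
        batch, rest = [], []
        for q in pending:
            dep_ok = (q.get("depends_on") is None
                      or q["depends_on"] in answered)
            if dep_ok and len(batch) < max_batch:
                batch.append(q)
            else:
                rest.append(q)
        if not batch:  # guard against circular dependencies
            raise ValueError("unresolvable dependency among questions")
        batches.append(batch)
        answered.update(q["id"] for q in batch)
        pending = rest
    return batches
```

For the report example, the data, format, and recipient questions are independent and fit in one interaction; a delivery question that depends on who the recipients are lands in the second round-trip.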

3. Let the UI Do the Asking

The most elegant questions don't look like questions at all.

Don’t ask through conversation - surface the right questions in a purpose-built UI. When a user wants to automate a process - say, disabling inactive users - Twine’s Alex initiates a dynamic workflow with fields for the task scope, exclusions, action instructions, and a schedule. Some fields are free-text instruction boxes where the agent pre-fills its best guess - for example, defaulting the stale-user definition to "inactive for 90+ days." The user can accept it or adjust.

Explicit
User
I want to disable stale users automatically
Agent
Sure! I need a few details to set this up:
1. Which apps should I check?
2. Which accounts count as stale?
3. Any exclusions?
4. What should happen when a stale account is found?
5. How often should this run?
User
1. Entra
2. Inactive 90+ days
3. No exclusions
4. Reach out to the owner to approve disabling
5. Daily at 8 AM
Implicit
User
I want to disable stale users automatically
Agent
Create Task
stale accounts
Define task scope 1 out of 4
Stale Accounts in Entra (90+ days inactive)
Next
Add an exclusion
Back
Next
Reach out to account owner to approve disabling stale account
Back
Next
Repeats
Daily
Time
08
:
00
AM
Back
Create Task

This is the key: the user is still answering questions - which apps to target, what counts as a 'stale account', what to do once one is found - but they don't feel like questions. They are fields in a flow, pre-filled with the AI agent's best guess. The user reviews and tweaks rather than formulates from scratch. The same information is gathered, but the effort shifts from the user to the agent.
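The review-and-tweak pattern reduces, in code, to defaults plus overrides. The field names and default values below are hypothetical, loosely modeled on the stale-users example above:

```python
# Hypothetical form spec: the agent's best guesses, which the user can adjust.
DEFAULTS = {
    "scope": "Stale accounts in Entra (inactive 90+ days)",
    "exclusions": [],
    "action": "Reach out to the account owner to approve disabling",
    "schedule": "daily at 08:00",
}

def build_task_form(overrides=None):
    """Pre-fill every field so the user validates rather than formulates;
    anything they change wins over the agent's guess."""
    form = dict(DEFAULTS)
    form.update(overrides or {})
    return form
```

A user who touches nothing gets a working task; a user who changes one field answers exactly one question. Every untouched default is a question the agent answered for them.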

4. Spend Your Question Budget Wisely

Think of every question as a withdrawal from the user’s patience budget: every question costs something, and you have a limited amount to spend.

Low cost
Agent
I found 3 matches for John Smith. Which one should I disable?
john.smith@acme.com
jsmith@acme.com
john.s@acme.com
Structured, earned, one click resolves it.
Medium cost
Agent
What timeframe should I use for the report?
Requires typing, but the question is focused and specific.
High cost
Agent
What should the report look like?
Open-ended. The user has to figure out what the agent needs.
Budget breaker
User
I'm John, disable my old account
Agent
What's your name?
Agent wasn't listening. Trust destroyed.
Cheap — user barely notices
Expensive — trust erodes

Before Your Agent Asks, Ask Yourself These Questions:

Before adding any question to your agent's flow, run it through this filter:

  1. Should the AI Agent figure this out on its own? → Do not ask the user. Let the AI Agent do the work and solve it.
  2. Does the AI Agent already know what to do? → Embed it in the UI with smart defaults. Let the user validate, not formulate.
  3. Can it be batched with other questions? → Combine to minimize interruptions.

  4. Is this truly the last resort? → Ask it properly: with context, with options, with evidence of the work already done.
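The filter above can be encoded directly, which makes it easy to apply to every candidate question in an agent's flow. The boolean fields are illustrative names an agent builder would wire up to their own checks:

```python
def should_ask(question: dict) -> str:
    """Run a candidate question through the pre-ask filter, in order:
    resolve it, default it, batch it - and only then ask it."""
    if question.get("agent_can_resolve"):
        return "don't ask - let the agent do the work"
    if question.get("has_smart_default"):
        return "embed in the UI with a pre-filled default"
    if question.get("batchable"):
        return "batch with other pending questions"
    return "ask properly - with context, options, and evidence of work done"
```

The ordering is the point: asking the user is the fall-through case, reached only after every cheaper option is exhausted.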

Bonus: Your Agent's Voice Matters

Tone is a personal choice - some teams want their AI Agent to sound more professional, others want a casual, conversational agent voice. There's no single right answer. But there is a wrong one: Having the AI Agent sound like a system log, not a colleague.

Robotic
Agent
Multiple results found for query 'John Smith'. Please select the correct identity from the options below:
Collaborative
Agent
I searched across Okta and AD and found 3 people named John Smith. Which one are you looking for?

Same information, same options - completely different feeling. Small language choices compound: contractions over formal phrasing, "I found" over "results found", questions that invite ("which one are you looking for?") over commands that direct ("please select"). Your agent's tone is the first thing users interact with and judge.

The Questions Worth Asking

As someone who builds AI agents for a living, and who obsesses over every interaction I've ever had with AI products in the market, here's what it comes down to: the best agents are the ones that know how to ask.

The goal is never zero questions. Some decisions genuinely require human feedback, and pretending otherwise leads to agents that act confidently in the wrong direction. Instead, the goal is zero unnecessary questions: no question the agent could have answered itself, no question that wastes the user's time, no question that makes the user feel like they got a new, inexperienced teammate instead of an expert.

Every question your agent asks should make the user think "good thing it asked" rather than "why doesn't it know this?"

Building great AI UX is ultimately less about making your AI smarter, and more about giving it the right tools for talking to humans. Their time is finite. Their patience is finite. Their willingness to collaborate with an AI agent depends entirely on whether that collaboration feels productive or burdensome. The art isn't in having answers. It's in knowing which questions are worth asking.

Every unnecessary question tips the balance toward burden. Every well-placed question tips it toward magic.
