
Just Because We Can Use AI, Should We?

By Rebecca Marquez

AI tools are everywhere. Faster summaries, automated follow-ups, smarter tracking, better insights. All of it is promising, but there’s a subtle trap teams can fall into: over-engineering processes simply because the tools allow it. When that happens, the work doesn’t necessarily get better; it just gets heavier.


When the Tool Leads Instead of the Need


This usually starts with good intentions. A team wants better visibility into client work, stronger follow-through, or more consistency across operations. An AI tool gets introduced to “solve” the problem, but instead of starting with the outcome, the conversation starts with features.


Before long, there’s a complex workflow for logging interactions and new terminology layered on top of existing systems. Staff are expected to adapt while the original question remains fuzzy: What problem is this actually solving?


In any client-focused or operational environment where time, trust, and clarity are everything, processes that aren’t immediately intuitive tend to get bypassed. Even well-designed systems fail if they don’t clearly make the work easier.


[Image: an office scene overlaid with AI-related graphics.]

When Meaning Gets Lost in Jargon


Meaning can quietly be sacrificed to jargon. Terms like “automation,” “pipelines,” “agents,” or “AI-driven insights” start circulating while what they actually mean in practice remains unclear.


A good signal that this is happening is when a process requires explanation before action. If someone has to translate the system before they can use it, that friction adds up quickly.


Jargon isn’t the problem on its own. It becomes a problem when it replaces clear thinking, or when it gives the impression of alignment without real understanding.


Talk It Through Before You Tool It Up


One of the simplest and most effective ways to avoid over-engineering is also one of the most overlooked: talking the process through with another person. Explaining out loud what a team is trying to improve forces clarity. Is the goal to ensure clients are progressing through a program? To reduce administrative follow-up? To give leadership better visibility into engagement? When those answers surface in conversation, the solution often becomes much simpler than originally imagined.


In operational contexts especially, this step matters because so much work relies on judgment, relationships, and nuance. AI can support that work—what it can’t do is replace clarity of intent.


A Client-Tracking Example


Consider the process of tracking client or project meetings. Without clarity, teams can over-engineer the workflow by updating multiple calendars, duplicating notes across tools, sending extra reminders, or building dashboards no one checks. The system may look sophisticated, but operationally it slows people down.


By contrast, a purpose-driven approach starts with the kernel: ensuring progress is visible and actionable. From there, the process can be simple:


  • A single source of truth for sessions or touchpoints

  • Clear status markers (scheduled, completed, cancelled, rescheduled)

  • Lightweight check-in protocols for missing updates


AI can then be applied thoughtfully without overcomplicating the workflow. The difference is that the process supports the work, rather than creating extra work to “feed” the system.
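To make the contrast concrete, the three bullets above can be sketched as a small data model. This is a hypothetical illustration, not a prescribed implementation: the `Session`, `Status`, and `needs_check_in` names are invented for this sketch, and the grace period is an assumed policy choice.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class Status(Enum):
    """The clear status markers from the list above."""
    SCHEDULED = "scheduled"
    COMPLETED = "completed"
    CANCELLED = "cancelled"
    RESCHEDULED = "rescheduled"

@dataclass
class Session:
    """One row in the single source of truth for touchpoints."""
    client: str
    when: date
    status: Status = Status.SCHEDULED

def needs_check_in(sessions, today, grace_days=2):
    """Lightweight check-in protocol: flag sessions still marked
    'scheduled' once their date is past a short grace period."""
    cutoff = today - timedelta(days=grace_days)
    return [s for s in sessions
            if s.status is Status.SCHEDULED and s.when < cutoff]
```

The point of a sketch this small is that it is the whole system: one list of sessions, one status field, one function that surfaces stale entries. Anything an AI tool adds on top (drafting the follow-up message, summarizing the week) feeds off this kernel rather than replacing it.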


Protect the Kernel


Every process has a kernel—the core reason it exists. That kernel might be:


  • keeping clients or projects on track

  • ensuring consistent follow-up across teams

  • reducing administrative overhead

  • creating transparency without micromanagement


Whatever it is, the kernel needs to be named before AI enters the picture. Without it, systems can look sophisticated on paper while creating extra steps, duplicated effort, or quiet resistance in practice.


Simple Scales Better Than Smart-Sounding


To be clear, this isn’t an argument against AI; it’s an argument for intention.

Simple, well-understood processes supported lightly and deliberately by AI almost always outperform complex ones that require ongoing explanation or policing. If a system is hard to explain to a new team member or a client-facing staff member, it’s worth revisiting whether it’s actually solving the right problem.


The best processes often fade into the background, supporting the work without demanding attention.


What This Means in Practice


Before adding AI to a workflow, pause the rush to “solution.” Talk it through with a real human. Ask what you’re genuinely trying to make easier, clearer, or more consistent. Use plain language until the purpose is obvious. If it still feels fuzzy, keep talking; you’re not there yet.


The problem should earn the tool, not the other way around. When purpose leads and AI follows, the result isn’t just a smarter process—it’s one that people actually use. And that’s what operational success looks like, no matter the field.





