You have learnt how to use AI, great, now do not let it drive the bus
Trent Steenholdt
December 9, 2025
4 minutes to read
AI is everywhere now
Whether we want it or not, it is here to stay. At least until this AI bubble bursts the same way the dotcom bubble did back in the early 2000s.
AI can write pretty much anything. It can draft documents, generate code, answer questions and fill gaps in our day-to-day work. For many people, myself included, it feels like a supercharged helper that removes the dull parts of the job. For others it creates pressure to keep up with a moving target. Both feelings can be true at the same time.
The real challenge is not learning how to use AI. Most people can do that with a few prompts, and they have already been adding fuel to the AI machine with every prompt they submit to places like ChatGPT. The harder part is remembering that you are still the one driving the bus. AI is helpful, but it does not understand the consequences of its suggestions. It does not carry accountability, reputation, trust or responsibility. You do.
AI agents follow the same rule. You are the bus driver with all the agents in the back. They are not steering while you relax. You are still responsible for the direction and the outcome.
A good friend put it even better based on some work we’ve been seeing lately:
“Yep, the people on the bus do not even have tickets, let alone know which way the bus is going.”
Course correction
This raises a good point! If you see someone who is not driving the bus, call it out. If a colleague is letting AI make decisions for them, say something. It is not about embarrassment, it is about protecting the quality of the work and the trust placed in you and your team. We hold each other to account in every other part of our jobs, and AI should be no different.
In the IT consultancy space, we are already seeing organisations like Deloitte do lasting damage to their own credibility, and to the trust and reputation of their consultants. As someone who works in IT consulting, I have no doubt this trend will continue into 2026 and beyond.
AI will make mistakes
AI can and will suggest code patterns that will not scale, or architectural shortcuts that look tidy but are difficult to operate. It can reinforce incorrect assumptions because you only asked it one way (garbage in, garbage out). It can give you confidence in something that still needs a proper second look. These moments are reminders that human judgement remains the essential part of the loop.
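To make that concrete, here is a hypothetical sketch (not from any real engagement, and not code any particular assistant produced) of the kind of suggestion that looks perfectly reasonable in a demo but falls over at scale. Both functions return the same answer; only one of them survives contact with real data volumes.

```python
# Hypothetical illustration: an AI-suggested de-duplication helper that works
# fine on a ten-row test file but degrades badly as the data grows.

def dedupe_slow(records):
    """O(n^2): membership checks against a list rescan it on every iteration."""
    seen = []
    unique = []
    for record in records:
        if record not in seen:  # linear scan of `seen` each time
            seen.append(record)
            unique.append(record)
    return unique


def dedupe_fast(records):
    """O(n): a set gives constant-time membership checks, same result."""
    seen = set()
    unique = []
    for record in records:
        if record not in seen:
            seen.add(record)
            unique.append(record)
    return unique
```

Both pass the quick test the assistant shows you. Only the second is still acceptable when the nightly job hits ten million rows, and spotting that difference before it ships is exactly the kind of review a human still has to do.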
What you can do to take control again
Good engineers and good consumers of AI treat it as a tool. They ask it to explore, summarise or accelerate, but they still check the map before taking a turn. You are still the quality assurance. You are still responsible and accountable for what it produces. Good consumers validate design decisions. They consider security, cost, ethics and long term impact. They treat AI as a companion, not a replacement for themselves or a shortcut around another person or team.
The risk is not that people will refuse to use AI. The risk is that they will use it without questioning the output. When that happens, decisions drift away from intent, platforms and solutions become fragile, teams lose context, and quality slips through the gaps because no one slowed down long enough to review the work. The same AI slop that now covers social media could quite easily end up in the enterprise.
In conclusion
AI delivers strong value when used well. It supports learning, curiosity and creativity. It lifts productivity in meaningful ways. Definitely use it. Just make sure it never takes the driver’s seat. Let it check the gauges or carry the map, but keep your hands on the wheel.