
Artificial intelligence has entered the workplace without knocking.
Not as a loud revolution, but as a constant presence, almost banal by dint of repetition. It has changed the way people work, yes, but also the way they think about work. Today we talk about human-machine collaboration as if it had always existed, when in fact it is a rather recent thing, still a bit lopsided, still settling in.
This transition wasn’t linear. It came after a confusing period of skills shortages, forced remote work, and companies struggling to keep the pieces together. In that context, many organizations began to focus less on processes and more on people, on the employee experience. Not out of kindness, but out of necessity.
Meanwhile, data has increased, significantly. More than a single person can truly read, understand, or connect. Spreadsheets, dashboards, systems that speak different languages. Artificial intelligence, strategically deployed, promises to bring order, or at least reduce the noise. It personalizes, suggests, summarizes. It doesn’t think for people, but it does take some of the burden off their shoulders.
The point is that adoption has been rapid. First automation built on machine learning, then generative AI, now autonomous agents. All in just a few years.
This has also shifted something culturally: AI is no longer just an IT concern. It’s becoming a cross-functional skill, something that affects very different roles.
Not all leaders feel comfortable. Some data clearly shows this: many executives expect AI to transform the core of their business, but at the same time struggle to achieve real value. Disconnected systems, technologies that don’t communicate, investments that outpace the organization. And then there’s culture, which weighs more heavily than technology. Many CEOs openly say: changing mindsets is harder than changing software.
Meanwhile, the labor market is shifting, whether we like it or not. Estimates point to the automation of a significant portion of working hours and millions of career transitions. Jobs disappear, others are born. It’s not a new story, but the pace is different.
Faster, less forgiving (we saw this with job offers for junior programmers: nonexistent). Today’s skills are no longer enough for tomorrow, and this applies to everyone to some extent.
Some companies are responding systematically. Not with one-off courses or isolated initiatives, but with AI integrated into operational processes. The results, according to some studies, are better: greater loyalty, greater growth. And along with this, there’s growing attention to new management and training models, designed for people who must live with AI, not endure it.
In this scenario, the present is seen as a window. A moment in which we can rethink work by valuing the human contribution, building more resilient systems. Planning is needed, of course, but so is a culture that doesn’t react fearfully to every change. Put that way, it seems simple.
In practice, it’s far messier, and you can see it in many companies: keeping pace with today’s digitalization is hard.
On the technological level, several families of tools are driving this change. Generative AI, based on large language models, produces text, code, and images. It has become popular, almost overused, but it has very concrete applications in businesses: personalized marketing, translations, training materials, and content synthesis.
Sometimes it works great, sometimes less so. It depends.
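To make “content synthesis” concrete, here is a minimal sketch of what such an integration often looks like, assuming an OpenAI-style chat client; the model name, the prompt, and the `summarize` helper are illustrative choices, not a prescription.

```python
# Minimal sketch: using a hosted LLM to condense internal content.
# Assumes the `openai` Python client and an API key in the environment;
# model and prompt are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(document: str, audience: str = "new hires") -> str:
    """Condense an internal document for a given audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system",
             "content": f"Summarize internal documents for {audience}. "
                        "Keep it short and preserve action items."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content
```

Nothing exotic: a prompt, a document, a shorter document back. Which is precisely why it has spread so fast.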
Then there are AI assistants, integrated into everyday software. Today they often create more problems than they solve, but tomorrow they may well work as smoothly as everything else that digital and human ingenuity have built: accelerators. They respond, suggest, and support decisions. In some cases, they even replace entire workflows.
There are examples of assistants that handle hundreds of requests a day with little human intervention. Within companies, they’re doing something similar, helping people access contextual information without having to search everywhere. Will they replace people? A million-dollar question, or maybe more.
Artificial intelligence agents are the next step beyond GenAI: more autonomous systems, capable of carrying out complex tasks, accessing external data, and “remembering.” They are already used in diverse fields, from healthcare to human resources, from customer service to information analysis. They are not simple chatbots. They are tireless digital workers.
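What separates an agent from a chatbot is, at its core, a loop: the model decides on an action, calls a tool, stores the result in memory, and tries again until it can answer or runs out of steps. A minimal sketch under stated assumptions: `call_llm` is a placeholder for any model call, and both tools are toy stand-ins for real external systems.

```python
# Minimal sketch of an "agent" loop: a model plus tools plus memory.
# Everything here is illustrative; nothing is a real framework's API.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. the sketch above)."""
    raise NotImplementedError("wire this to your model provider")

TOOLS = {
    "search_tickets": lambda q: f"3 open tickets matching {q!r}",  # toy stand-in
    "lookup_policy": lambda q: f"policy text for {q!r}",           # toy stand-in
}

def run_agent(task: str, max_steps: int = 5) -> str:
    memory = [f"task: {task}"]  # working memory: everything done so far
    for _ in range(max_steps):
        prompt = (
            "You can call these tools: " + ", ".join(TOOLS) + ".\n"
            'Reply with JSON: {"tool": ..., "arg": ...} or {"answer": ...}.\n'
            + "\n".join(memory)
        )
        decision = json.loads(call_llm(prompt))  # the model picks the next step
        if "answer" in decision:
            return decision["answer"]            # task complete
        result = TOOLS[decision["tool"]](decision["arg"])  # external data access
        memory.append(f'{decision["tool"]} -> {result}')   # "remembering"
    return "step budget exhausted"
```

Strip away the frameworks and this loop is most of what “agent” means today; the memory list and the step budget are what keep it from being a simple chatbot, and from running forever.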
All this is changing the nature of work. Routine tasks are delegated, speed increases. Workflows are broken down into parts: some to machines, others to people. Humans provide context and judgment, AI recognizes patterns and executes. It’s not just a technical issue, it’s a shift in mental posture.
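That division of labor can be sketched in a few lines. Everything here is hypothetical: the `classify`, `draft_reply`, and `ask_human` callables and the confidence threshold stand in for whatever a real deployment would plug in.

```python
# Minimal sketch of the human/machine split described above: the model
# classifies and drafts (pattern recognition, execution); a person reviews
# anything ambiguous or high-stakes (context and judgment).

def handle_request(request: str, classify, draft_reply, ask_human) -> str:
    """Route one request: automate the routine, escalate the ambiguous."""
    label, confidence = classify(request)         # machine: pattern recognition
    reply = draft_reply(request, label)           # machine: execution
    if confidence < 0.8 or label == "sensitive":  # illustrative threshold
        return ask_human(request, reply)          # human: context and judgment
    return reply
```

The interesting design choice is not the code but where the threshold sits: raise it and people drown in reviews; lower it and judgment quietly disappears.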
Law, as often happens, comes later. It’s almost a historical constant: innovation runs, experiments, makes mistakes, breaks things; regulations arrive later, when the chaos is already visible. It happened with the internet, with social media, with data. No surprise there. Except that with artificial intelligence, things get stranger than usual.
Because this time we’re not simply late. We’re standing still, watching. We don’t yet know what shape AI will truly take, where it will settle, if ever. It’s unclear whether it will become a silent tool, an invisible infrastructure, or something more invasive, more opaque. And while we wait to understand this, the laws are lagging behind. Not out of inertia, but for lack of a clear target. What, exactly, should we regulate if the perimeter keeps shifting?
Ethics, then, is another matter. Or rather, it would be, if it weren’t so often reduced to slides, generic principles, and advisory committees that arrive after the decisions have already been made by the algorithm, by the black box and its guardrails. Ethics should anticipate rather than follow; instead, it merely comments. Meanwhile, systems are deployed, used, and standardized. And what is standardized quickly ceases to be scary, even if it hasn’t been fully understood.
The paradox lies here. We’re building rules for something we don’t yet understand, while that very thing is already changing work, decision-making power, responsibility, and society. It’s not just a legal problem. It’s a practical one. Who’s accountable when a system fails? Who decides what’s acceptable to delegate? Where does automation end and renunciation begin?
Perhaps the issue isn’t chasing AI with ever-evolving new laws. Perhaps it’s accepting that this time the regulatory vacuum isn’t just a delay, but a signal. An invitation to slow down, to observe what is genuinely slipping beyond the human realm, and what we are merely pretending to control.
Artificial intelligence doesn’t ask permission.
It doesn’t wait for regulations, nor well-written ethical guidelines.
It works, it gets adopted, it becomes a habit.
And when something becomes habit, it ceases to be questioned. Not because it hasn’t done its job, but because the real work, in the meantime, has already changed. And perhaps we too, without realizing it.
